Web Development | Data Science and Machine Learning | Question | is_accepted | Q_Id | Score | Other | Database and SQL | Users Score | Answer | Python Basics and Environment | ViewCount | System Administration and DevOps | Q_Score | CreationDate | Tags | Title | Networking and APIs | Available Count | AnswerCount | A_Id | GUI and Desktop Applications |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 0 | I've imported contacts from Gmail by using the gdata API,
and are there any APIs like that for Hotmail/Live/AOL? | false | 1,938,945 | 0.132549 | 1 | 0 | 2 | There is the Windows Live Contact API for Hotmail/Live mail.
A Yahoo Contact API also exists, but to date there is no AOL contact API.
I would suggest you try openinviter (openinviter.com) to import contacts. Unfortunately, you will not have OAuth capabilities, but it is the best class out there and works with 90+ different email providers.
Note: it is written in PHP, but creating a wrapper won't be too hard. | 0 | 637 | 0 | 0 | 2009-12-21T08:50:00.000 | python,api | Is there any libraries could import contacts from hotmail/live/aol account? | 1 | 1 | 3 | 1,939,043 | 0 |
0 | 0 | Is it possible to do an in-place edit of an XML document using XPath?
I'd prefer a Python solution, but Java would be fine too. | false | 1,964,583 | 0.099668 | 0 | 0 | 1 | Using XML to store data is probably not optimal, as you are experiencing here. Editing XML is extremely costly.
One way of doing the editing is to parse the XML into a tree, insert things into that tree, and then rebuild the XML file.
Editing an XML file in place is also possible, but then you need some kind of search mechanism that finds the location you need to edit or insert into, and then write to the file from that point. Remember to also read the remaining data, because it will be overwritten. This is fine for inserting new tags or data, but editing existing data makes it even more complicated.
My own rule is to not use XML for storage, but to present data. So the storage facility, or some kind of middle man, needs to form xml files from the data it has. | 0 | 375 | 0 | 2 | 2009-12-26T23:08:00.000 | python,xpath | edit in place using xpath | 1 | 1 | 2 | 1,964,631 | 0 |
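A minimal sketch of the parse/edit/rewrite approach described above, assuming lxml is acceptable (its ElementTree-style API has full XPath support); the file name and XPath expression are hypothetical:

```python
from lxml import etree

tree = etree.parse("data.xml")                 # hypothetical input file
for node in tree.xpath('//item[@id="42"]'):    # hypothetical XPath query
    node.text = "new value"                    # edit the matched nodes
tree.write("data.xml", xml_declaration=True, encoding="utf-8")
```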
0 | 0 | I'm creating a Python script which accepts a path to a remote file and a number n of threads. The file's size will be divided by the number of threads; when each thread completes, I want it to append the fetched data to a local file.
How do I manage it so that the threads append to the local file in the order they were generated, so that the bytes don't get scrambled?
Also, what if I'm to download several files simultaneously? | false | 1,965,213 | 0.049958 | 0 | 0 | 1 | You need to fetch completely separate parts of the file on each thread. Calculate the chunk start and end positions based on the number of threads. Each chunk must have no overlap, obviously.
For example, if the target file is 3,000 bytes long and you want to fetch it using three threads:
Thread 1: fetches bytes 1 to 1000
Thread 2: fetches bytes 1001 to 2000
Thread 3: fetches bytes 2001 to 3000
You would pre-allocate an empty file of the original size, and write back to the respective positions within the file. | 1 | 3,862 | 0 | 0 | 2009-12-27T04:56:00.000 | python,multithreading | File downloading using python with threads | 1 | 1 | 4 | 1,965,219 | 0 |
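A minimal sketch of that scheme, using urllib2 (Python 2, matching the era of the question) and HTTP Range requests; it assumes the server honours the Range header and that the caller already knows the file size:

```python
import threading
import urllib2

def fetch_range(url, start, end, path):
    # Request one byte range of the file.
    req = urllib2.Request(url, headers={"Range": "bytes=%d-%d" % (start, end)})
    data = urllib2.urlopen(req).read()
    f = open(path, "r+b")
    f.seek(start)              # each thread writes at its own offset
    f.write(data)
    f.close()

def download(url, path, size, n_threads=3):
    f = open(path, "wb")
    f.truncate(size)           # pre-allocate the empty file
    f.close()
    chunk = size // n_threads
    threads = []
    for i in range(n_threads):
        start = i * chunk
        end = size - 1 if i == n_threads - 1 else start + chunk - 1
        t = threading.Thread(target=fetch_range, args=(url, start, end, path))
        t.start()
        threads.append(t)
    for t in threads:
        t.join()               # all chunks are written before we return
```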
0 | 0 | I'm looking at existing python code that heavily uses Paramiko to do SSH and FTP. I need to allow the same code to work with some hosts that do not support a secure connection and over which I have no control.
Is there a quick and easy way to do it via Paramiko, or do I need to step back, create some abstraction that supports both paramiko and Python's FTP libraries, and refactor the code to use this abstraction? | true | 1,977,571 | 1.2 | 1 | 0 | 7 | No, paramiko has no support for telnet or ftp -- you're indeed better off using a higher-level abstraction and implementing it twice, with paramiko and without it (with the ftplib and telnetlib modules of the Python standard library). | 0 | 7,051 | 0 | 6 | 2009-12-29T23:30:00.000 | python,paramiko | Does Paramiko support non-secure telnet and ftp instead of just SSH and SFTP? | 1 | 1 | 1 | 1,978,007 | 0 |
1 | 0 | I'm using web2py and I want to import my program only once per session, not every time the page is loaded. Is this possible? Such as "import Client" being used on the page, but only imported once per session. | false | 1,978,426 | 1 | 0 | 0 | 6 | In web2py your models and controllers are executed, not imported. They are executed every time a request arrives. If you press the button [compile] in admin, they will be bytecode compiled and some other optimizations are performed.
If your app (in models and controllers) does "import somemodule", then the import statement is executed at every request but "somemodule" is actually imported only the first time it is executed, as you asked. | 0 | 953 | 0 | 4 | 2009-12-30T04:27:00.000 | python,web2py | Web2py Import Once per Session | 1 | 1 | 1 | 1,980,510 | 0 |
0 | 0 | I want to develop a tool for my project using Python. The requirements are:
Embed a web server to let the user get some static files, but the traffic is not very high.
The user can configure the tool over HTTP. I don't want a GUI page; I just need an RPC interface, like XML-RPC or something else.
Besides the web server, the tool needs some background jobs to do, so these jobs need to run alongside the web server.
So, which Python web server is the best choice? I am looking at CherryPy; if you have another recommendation, please write it here. | false | 1,978,791 | 0 | 0 | 0 | 0 | Why don't you use open-source build tools (continuous integration tools) like Cruise? Most of them come with a web server/XML interface and sometimes with fancy reports as well. | 0 | 1,002 | 0 | 0 | 2009-12-30T06:56:00.000 | python,cherrypy | Python web server? | 1 | 4 | 5 | 1,978,818 | 0 |
0 | 0 | I want to develop a tool for my project using Python. The requirements are:
Embed a web server to let the user get some static files, but the traffic is not very high.
The user can configure the tool over HTTP. I don't want a GUI page; I just need an RPC interface, like XML-RPC or something else.
Besides the web server, the tool needs some background jobs to do, so these jobs need to run alongside the web server.
So, which Python web server is the best choice? I am looking at CherryPy; if you have another recommendation, please write it here. | false | 1,978,791 | -0.119427 | 0 | 0 | -3 | This sounds like a fun project. So, why not write your own HTTP server? It's not so complicated after all; HTTP is a well-known and easy-to-implement protocol, and you'll gain a lot of new knowledge!
Check the documentation or manual pages (whichever you prefer) for socket(), bind(), listen(), accept(), and so on. | 0 | 1,002 | 0 | 0 | 2009-12-30T06:56:00.000 | python,cherrypy | Python web server? | 1 | 4 | 5 | 1,979,792 | 0 |
0 | 0 | I want to develop a tool for my project using Python. The requirements are:
Embed a web server to let the user get some static files, but the traffic is not very high.
The user can configure the tool over HTTP. I don't want a GUI page; I just need an RPC interface, like XML-RPC or something else.
Besides the web server, the tool needs some background jobs to do, so these jobs need to run alongside the web server.
So, which Python web server is the best choice? I am looking at CherryPy; if you have another recommendation, please write it here. | false | 1,978,791 | 0.039979 | 0 | 0 | 1 | Use the WSGI reference implementation, wsgiref, already provided with Python.
Use REST protocols with JSON (not XML-RPC). It's simpler and faster than XML.
Background jobs are started with subprocess. | 0 | 1,002 | 0 | 0 | 2009-12-30T06:56:00.000 | python,cherrypy | Python web server? | 1 | 4 | 5 | 1,979,714 | 0 |
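A minimal sketch of that combination: a wsgiref server answering REST-style requests with JSON (the port and payload are arbitrary):

```python
import json
from wsgiref.simple_server import make_server

def app(environ, start_response):
    # Echo the request path back in a small JSON document.
    body = json.dumps({"status": "ok", "path": environ["PATH_INFO"]}).encode("utf-8")
    start_response("200 OK", [("Content-Type", "application/json"),
                              ("Content-Length", str(len(body)))])
    return [body]

if __name__ == "__main__":
    make_server("", 8000, app).serve_forever()   # port 8000 is arbitrary
```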
0 | 0 | I want to develop a tool for my project using Python. The requirements are:
Embed a web server to let the user get some static files, but the traffic is not very high.
The user can configure the tool over HTTP. I don't want a GUI page; I just need an RPC interface, like XML-RPC or something else.
Besides the web server, the tool needs some background jobs to do, so these jobs need to run alongside the web server.
So, which Python web server is the best choice? I am looking at CherryPy; if you have another recommendation, please write it here. | true | 1,978,791 | 1.2 | 0 | 0 | 3 | What about the internal Python web server?
Just type "python web server" into Google, and host the first result... | 0 | 1,002 | 0 | 0 | 2009-12-30T06:56:00.000 | python,cherrypy | Python web server? | 1 | 4 | 5 | 1,979,101 | 0 |
0 | 0 | I am porting some Java code to Python and we would like to use Python 3, but I can't find an LDAP module for Python 3 on Windows.
This is forcing us to use the 2.6 version, and it is bothersome as the rest of the code is already in 3.0 format. | true | 1,982,442 | 1.2 | 0 | 0 | -3 | This answer is no longer accurate; see below for other answers.
Sorry to break this to you, but I don't think there is a python-ldap for Python 3 (yet)...
That's the reason why we should keep active development at Python 2.6 for now (as long as most crucial dependencies (libs) are not ported to 3.0). | 0 | 16,151 | 0 | 12 | 2009-12-30T20:56:00.000 | python,ldap,python-3.x | Does Python 3 have LDAP module? | 1 | 1 | 4 | 1,982,479 | 0 |
0 | 0 | I want to upload a file from my computer to a file hoster like hotfile.com via a Python script, because Hotfile only offers a web-based upload service (no FTP).
I need Python first to log in with my username and password and after that to upload the file. When the file transfer is over, I need the download and delete links (which are generated right after the upload has finished).
Is this even possible? If so, can anybody tell me what the script looks like, or even give me hints on how to build it?
Thanks | false | 1,993,060 | 0 | 1 | 0 | 0 | You mention they do not offer FTP, but I went to their site and found the following:
How to upload with FTP?
Host: ftp.hotfile.com, user: your hotfile username, pass: your hotfile password.
You can upload and make folders, but can't rename or move files.
Try it. If it works, using FTP from within Python will be a very simple task. | 0 | 7,457 | 0 | 2 | 2010-01-02T22:14:00.000 | python,authentication,file-upload,automation | Upload file to a website via Python script | 1 | 1 | 3 | 1,993,139 | 0 |
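Given that FTP endpoint, a minimal ftplib sketch of the upload step (credentials and file name are placeholders; the download/delete links would still have to be scraped from the site afterwards):

```python
from ftplib import FTP

ftp = FTP("ftp.hotfile.com")              # endpoint quoted in the answer
ftp.login("your_username", "your_password")
f = open("video.avi", "rb")
ftp.storbinary("STOR video.avi", f)       # upload under the same name
f.close()
ftp.quit()
```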
0 | 0 | I have a need to display some basic info about a Facebook group on a website I am building. All I am really looking to show is the total number of members, and maybe a list of the few most recent people who joined.
I would like not to have to log in to FB to accomplish this. Is there an API for groups that allows anonymous access, or do I have to go the screen-scraping route? | true | 2,008,816 | 1.2 | 1 | 0 | 1 | Use the Python Facebook module on Google Code. | 0 | 342 | 0 | 1 | 2010-01-05T20:23:00.000 | python,django,facebook | Python + Facebook, getting info about a group easily | 1 | 1 | 1 | 2,012,448 | 0 |
0 | 0 | How do I get the MAC address of a remote host on my LAN? I'm using Python and Linux. | false | 2,010,816 | 0 | 1 | 0 | 0 | Many years ago, I was tasked with gathering various machine info from all machines on a corporate campus. One desired piece of info was the MAC address, which is difficult to get on a network that spanned multiple subnets. At the time, I used the Windows built-in "nbtstat" command.
Today there is a Unix utility called "nbtscan" that provides similar info. If you do not wish to use an external tool, maybe there are NetBIOS libraries for Python that could be used to gather the info for you? | 0 | 22,526 | 1 | 6 | 2010-01-06T03:40:00.000 | python,linux,networking,mac-address | Get remote MAC address using Python and Linux | 1 | 1 | 7 | 2,010,975 | 0 |
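The answer above takes the NetBIOS route, which also works across subnets. For a host on the same subnet, the kernel's ARP cache is a simpler alternative; a minimal sketch assuming Linux (the target IP is hypothetical):

```python
import os
import subprocess

def get_mac(ip):
    # Ping once so the kernel adds the host to its ARP cache...
    devnull = open(os.devnull, "w")
    subprocess.call(["ping", "-c", "1", "-W", "1", ip],
                    stdout=devnull, stderr=devnull)
    devnull.close()
    # ...then read the cache back from the Linux-specific /proc file.
    f = open("/proc/net/arp")
    lines = f.readlines()[1:]          # skip the header row
    f.close()
    for line in lines:
        fields = line.split()
        if fields[0] == ip:
            return fields[3]           # the "HW address" column
    return None

print(get_mac("192.168.1.10"))         # hypothetical LAN address
```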
1 | 0 | I'm looking for a payment gateway company so we can avoid tiresome PCI-DSS certification and its associated expenses. I'll get this out of the way now: I don't want PayPal. It does what I want, but it's really not a company I want to trust with any sort of money.
It needs to support the following flow:
User performs actions on our site, generating an amount that needs to be paid.
Our server contacts the gateway asynchronously (no hidden inputs) and tells it about the user, how much they need to pay. The gateway returns a URL and perhaps a tentative transaction ID.
Our server stores the transaction ID and redirects the user to the URL provided by the gateway.
The user fills out their payment details on the remote server.
When they have completed that, the gateway asynchronously contacts our server with the outcome, transaction id, etc and forwards them back to us (at a predestined URL).
We can show the user their order is complete/failed/etc. Fin.
If at all possible, UK or EU based and developer friendly.
We don't need any concept of a shopping basket as we have that all handled in our code already.
We have (or at least will have by launch) a proper merchant banking account, so cover services like PayPal aren't needed.
If their API covers Python (we're using Django) explicitly, all the better, but I think I'm capable enough to decipher any other examples and transcode them into Python myself. | false | 2,022,067 | 0.07983 | 0 | 0 | 2 | It sounds like you want something like Worldpay or even Google Checkout. But it all depends on what your turnover is, because these sorts of providers (who host the payment page themselves) tend to take a percentage of every transaction, rather than a fixed monthly fee that you can get from elsewhere.
The other thing to consider is, if you have any way of taking orders over the phone, and the phone operators need to take customers' credit card details, then your whole internal network will need to be PCI compliant, too.
If you JUST need it for a website, then that makes it easier. If you have a low turnover, then check out the sites mentioned above. If you have a high turnover, then it may work out more cost effective in the long run to get PCI-DSS certified and still keep control of credit card transactions in-house, giving you more flexibility, and cheaper transaction costs. | 0 | 3,656 | 0 | 9 | 2010-01-07T17:01:00.000 | python,payment-gateway,payment | Looking for a payment gateway | 1 | 3 | 5 | 2,023,105 | 0 |
1 | 0 | I'm looking for a payment gateway company so we can avoid tiresome PCI-DSS certification and its associated expenses. I'll get this out of the way now: I don't want PayPal. It does what I want, but it's really not a company I want to trust with any sort of money.
It needs to support the following flow:
User performs actions on our site, generating an amount that needs to be paid.
Our server contacts the gateway asynchronously (no hidden inputs) and tells it about the user, how much they need to pay. The gateway returns a URL and perhaps a tentative transaction ID.
Our server stores the transaction ID and redirects the user to the URL provided by the gateway.
The user fills out their payment details on the remote server.
When they have completed that, the gateway asynchronously contacts our server with the outcome, transaction id, etc and forwards them back to us (at a predestined URL).
We can show the user their order is complete/failed/etc. Fin.
If at all possible, UK or EU based and developer friendly.
We don't need any concept of a shopping basket as we have that all handled in our code already.
We have (or at least will have by launch) a proper merchant banking account, so cover services like PayPal aren't needed.
If their API covers Python (we're using Django) explicitly, all the better, but I think I'm capable enough to decipher any other examples and transcode them into Python myself. | true | 2,022,067 | 1.2 | 0 | 0 | 4 | You might want to take a look at Adyen (www.adyen.com). They are European and provide a whole lot of features and a very friendly interface. They don't charge a monthly or set-up fee and seem to be reasonably priced per transaction.
Their hosted payments page can be completely customised, which was an amazing improvement for us. | 0 | 3,656 | 0 | 9 | 2010-01-07T17:01:00.000 | python,payment-gateway,payment | Looking for a payment gateway | 1 | 3 | 5 | 2,258,716 | 0 |
1 | 0 | I'm looking for a payment gateway company so we can avoid tiresome PCI-DSS certification and its associated expenses. I'll get this out of the way now: I don't want PayPal. It does what I want, but it's really not a company I want to trust with any sort of money.
It needs to support the following flow:
User performs actions on our site, generating an amount that needs to be paid.
Our server contacts the gateway asynchronously (no hidden inputs) and tells it about the user, how much they need to pay. The gateway returns a URL and perhaps a tentative transaction ID.
Our server stores the transaction ID and redirects the user to the URL provided by the gateway.
The user fills out their payment details on the remote server.
When they have completed that, the gateway asynchronously contacts our server with the outcome, transaction id, etc and forwards them back to us (at a predestined URL).
We can show the user their order is complete/failed/etc. Fin.
If at all possible, UK or EU based and developer friendly.
We don't need any concept of a shopping basket as we have that all handled in our code already.
We have (or at least will have by launch) a proper merchant banking account, so cover services like PayPal aren't needed.
If their API covers Python (we're using Django) explicitly, all the better, but I think I'm capable enough to decipher any other examples and transcode them into Python myself. | false | 2,022,067 | 0.07983 | 0 | 0 | 2 | I just finished something exactly like this using First Data Global Gateway (I don't really want to provide a link; you can find it with Google). There's no Python API because their interface is nothing but HTTP POST.
You have the choice of gathering credit card info yourself before posting the form to their server, as long as the connection is SSL and the referring URL is known to them (meaning it's your form but you can't store or process the data first).
In the FDGG gateway "terminal interface" you configure your URL endpoints for authorization accepted/failed and it will POST transaction information.
I can't say it was fun and their "test" mode was buggy but it works. Sorry, don't know if it's available in UK/EU but it's misnamed if it isn't :) | 0 | 3,656 | 0 | 9 | 2010-01-07T17:01:00.000 | python,payment-gateway,payment | Looking for a payment gateway | 1 | 3 | 5 | 2,023,033 | 0 |
0 | 0 | I am trying to make an HTTP request in Python 2.6.4, using the urllib module. Is there any way to set the request headers?
I am sure that this is possible using urllib2, but I would prefer to use urllib since it seems simpler. | false | 2,031,745 | 0.132549 | 0 | 0 | 2 | There isn't any way to do that, which is precisely the reason urllib is deprecated in favour of urllib2. So just use urllib2 rather than writing new code to a deprecated interface. | 0 | 872 | 0 | 0 | 2010-01-09T00:27:00.000 | python,http,urllib,python-2.6,python-2.x | Any way to set request headers when doing a request using urllib in Python 2.x? | 1 | 1 | 3 | 2,031,786 | 0 |
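For reference, the urllib2 version is barely any longer; headers go straight onto the Request object (URL and header values are placeholders):

```python
import urllib2

req = urllib2.Request("http://example.com/",
                      headers={"User-Agent": "my-script/0.1",
                               "Accept": "text/html"})
response = urllib2.urlopen(req)
print(response.code)
```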
0 | 0 | Has anyone done a Python CLI to edit Firefox bookmarks?
My worldview is that of Unix file trees; I want
find /re/ in given or all fields in given or all subtrees
cd
ls with context
mv this ../there/
Whether it uses bookmarks.html or places.sqlite is secondary -- whatever's easier.
Clarification added: I'd be happy to quit Firefox, edit bookmarks in the CLI, and import the new database in Firefox.
In other words, database locking is a moot point; first let's see code for a rough-cut CLI.
(Why a text CLI and not a GUI?
CLIs are simpler (for me), and one could easily program e.g.
mv old-bookmarks to 2009/same-structure/.
Nonetheless links to a really good bookmarker GUI, for Firefox or anything else, would be useful too.) | false | 2,034,373 | 0 | 0 | 0 | 0 | I don't know about all the features you've mentioned, but the "Organize bookmarks" option in the Bookmarks menu is pretty decent with respect to features. | 0 | 2,530 | 0 | 1 | 2010-01-09T18:18:00.000 | python,firefox,command-line-interface,bookmarks | Python CLI to edit Firefox bookmarks? | 1 | 1 | 2 | 2,034,590 | 0 |
0 | 0 | I have built a Python server to which various clients can connect, and I need a predefined series of messages from clients to the server. For example, the client passes a name to the server when it first connects.
I was wondering what the best way to approach this is? How should I build a simple protocol for their communication?
Should the messages start with a specific set of bytes to mark them out as part of this protocol, then contain some sort of message id? Any suggestions or further reading appreciated. | false | 2,042,133 | 0.049958 | 0 | 0 | 1 | Read some protocols, and try to find one that looks like what you need. Does it need to be message-oriented or stream-oriented? Does it need request order to be preserved, does it need requests to be paired with responses? Do you need message identifiers? Retries, back-off? Is it an RPC protocol, a message queue protocol? | 0 | 2,451 | 0 | 3 | 2010-01-11T13:37:00.000 | python,sockets,protocols | Python Sockets - Creating a message format | 1 | 1 | 4 | 2,042,187 | 0 |
1 | 0 | Oftentimes I want to automate HTTP queries. I currently use Java (and Commons HttpClient), but would probably prefer a scripting-based approach. Something really quick and simple, where I can set a header, go to a page, and not worry about setting up the entire OO lifecycle, setting each header, or calling up an HTML parser... I am looking for a solution in ANY language, preferably a scripting one | false | 2,043,058 | 0 | 1 | 0 | 0 | What about using PHP+Curl, or just bash? | 0 | 4,483 | 0 | 8 | 2010-01-11T16:15:00.000 | python,ruby,perl,http,scripting | Scripting HTTP more effeciently | 1 | 1 | 12 | 2,043,069 | 0 |
0 | 0 | I more or less know how to use select() to take a list of sockets, and only return the ones that are ready to read/write something. The project I'm working on now has a class called 'user'. Each 'user' object contains its own socket. What I would like to do is pass a list of users to a select(), and get back a list of only the users where user.socket is ready to read/write. Any thoughts on where to start on this?
Edit: Changed switch() to select(). I need to proofread better. | true | 2,046,727 | 1.2 | 0 | 0 | 2 | You should have your User class implement a fileno(self) method which returns self.thesocket.fileno() -- that's the way to make select work on your own classes (sockets only on Windows, arbitrary files on Unix-like systems). Not sure what switch is supposed to mean -- I don't recognize it as a standard library (or built-in) Python concept...? | 0 | 547 | 0 | 1 | 2010-01-12T04:28:00.000 | python,sockets | Creating waitable objects in Python | 1 | 1 | 1 | 2,046,760 | 0 |
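A minimal sketch of the fileno() approach; here the listening socket stands in for real client connections, and the port is arbitrary:

```python
import select
import socket

class User(object):
    def __init__(self, sock):
        self.sock = sock

    def fileno(self):
        # select() only needs this method to treat the object as selectable.
        return self.sock.fileno()

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 9000))
server.listen(5)

users = [User(server)]
readable, _, _ = select.select(users, [], [], 1.0)
for user in readable:        # only the User objects whose socket is ready
    print(user)
```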
0 | 0 | I am pretty sure the answer is no but of course there are cleverer guys than me!
Is there a way to construct a lazy SAX based XML parser that can be stopped (e.g. raising an exception is a possible way of doing this) but also resumable ?
I am looking for a possible solution for Python >= 2.6 with standard XML libraries. The "lazy" part is also trivial: I am really after the "resumable" property here. | true | 2,059,455 | 1.2 | 0 | 0 | 0 | Expat can be stopped and is resumable. AFAIK Python SAX parser uses Expat. Does the API really not expose the stopping stuff to the Python side??
EDIT: nope, looks like the parser stopping isn't available from Python... | 0 | 1,020 | 0 | 1 | 2010-01-13T19:07:00.000 | python,xml,sax | Lazy SAX XML parser with stop/resume | 1 | 1 | 1 | 2,059,524 | 0 |
0 | 0 | Is there any function in Graphviz that can do that?
If not, any other free software that can do that? | false | 2,066,259 | 0 | 0 | 0 | 0 | Compute the complement yourself, then plot it. | 0 | 2,271 | 0 | 2 | 2010-01-14T17:49:00.000 | python,graph,plot,graphviz,complement | How to draw complement of a network graph? | 1 | 1 | 2 | 2,066,328 | 0 |
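Given the Python tag, one concrete route is NetworkX, which has the complement built in, plus matplotlib for the drawing; a minimal sketch with an arbitrary random graph:

```python
import networkx as nx
import matplotlib.pyplot as plt

G = nx.gnp_random_graph(8, 0.3)    # any example graph
H = nx.complement(G)               # edges exactly where G has none
nx.draw(H, with_labels=True)
plt.show()
```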
0 | 0 | I'm writing a client/server application in Python and I'm finding it necessary to get a new connection to the server for each request from the client. My server is just inheriting from TCPServer and I'm inheriting from BaseRequestHandler to do my processing. I'm not calling self.request.close() anywhere in the handler, but somehow the server seems to be hanging up on my client. What's up? | false | 2,066,810 | 0 | 0 | 0 | 0 | You sure the client is not hanging up on the server? This is a bit too vague to really tell what is up, but generally a server that is accepting data from a client will quit the connection of the read returns no data. | 0 | 9,010 | 0 | 10 | 2010-01-14T19:09:00.000 | python,sockets | Does the TCPServer + BaseRequestHandler in Python's SocketServer close the socket after each call to handle()? | 1 | 2 | 4 | 2,066,907 | 0 |
0 | 0 | I'm writing a client/server application in Python and I'm finding it necessary to get a new connection to the server for each request from the client. My server is just inheriting from TCPServer and I'm inheriting from BaseRequestHandler to do my processing. I'm not calling self.request.close() anywhere in the handler, but somehow the server seems to be hanging up on my client. What's up? | true | 2,066,810 | 1.2 | 0 | 0 | 9 | Okay, I read the code (on my Mac, SocketServer.py is at /System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/).
Indeed, TCPServer is closing the connection. In BaseServer.handle_request, process_request is called, which calls close_request. In the TCPServer class, close_request calls self.request.close(), and self.request is just the socket used to handle the request.
So the answer to my question is "Yes". | 0 | 9,010 | 0 | 10 | 2010-01-14T19:09:00.000 | python,sockets | Does the TCPServer + BaseRequestHandler in Python's SocketServer close the socket after each call to handle()? | 1 | 2 | 4 | 2,072,002 | 0 |
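Given that, the usual way to keep a connection alive across multiple messages is to loop inside handle(); the socket is only closed once handle() returns and close_request() runs. A minimal Python 2 echo sketch (address and port are arbitrary):

```python
import SocketServer

class EchoHandler(SocketServer.BaseRequestHandler):
    def handle(self):
        # Stay in here for the life of the connection.
        while True:
            data = self.request.recv(1024)
            if not data:                  # client hung up
                break
            self.request.sendall(data)
        # Returning lets TCPServer.close_request() close the socket.

server = SocketServer.TCPServer(("127.0.0.1", 9999), EchoHandler)
server.serve_forever()
```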
1 | 0 | I'd like to grab daily sunrise/sunset times from a web site. Is it possible to scrape web content with Python? What are the modules used? Is there any tutorial available? | false | 2,081,586 | 1 | 0 | 0 | 63 | I'd really recommend Scrapy.
Quote from a deleted answer:
Scrapy crawling is faster than mechanize because it uses asynchronous operations (on top of Twisted).
Scrapy has better and faster support for parsing (X)HTML on top of libxml2.
Scrapy is a mature framework with full Unicode support; it handles redirections, gzipped responses, odd encodings, an integrated HTTP cache, etc.
Once you are into Scrapy, you can write a spider in less than 5 minutes that downloads images, creates thumbnails, and exports the extracted data directly to CSV or JSON. | 0 | 208,635 | 0 | 188 | 2010-01-17T16:06:00.000 | python,web-scraping,screen-scraping | Web scraping with Python | 1 | 1 | 10 | 8,603,040 | 0 |
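A skeletal spider in that spirit, sketched with today's Scrapy API (which postdates this answer); the URL, selectors, and field names are all hypothetical:

```python
import scrapy

class SunriseSpider(scrapy.Spider):
    name = "sunrise"
    start_urls = ["http://example.com/sunrise-times"]   # hypothetical page

    def parse(self, response):
        # Each yielded dict becomes one exported record (JSON, CSV, ...).
        yield {
            "sunrise": response.css("#sunrise::text").get(),
            "sunset": response.css("#sunset::text").get(),
        }
```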
0 | 0 | I am new to network programming in Python. I wanted to know: what is the maximum size packet we can transmit or receive on a Python socket, and how do I find it out? | false | 2,091,097 | 0.066568 | 0 | 0 | 1 | I don't think there are any Python-specific limits. UDP packets have a theoretical limit of circa 65 kB and TCP no upper limit, but you'll have flow-control problems if you use packets much larger than a few kilobytes. | 0 | 7,528 | 0 | 6 | 2010-01-19T04:35:00.000 | python,sockets,network-programming | What is the maximum packet size a python socket can handle? | 1 | 1 | 3 | 2,091,157 | 0 |
1 | 0 | I just downloaded Beautiful Soup and I've decided I'll make a small library (is that what they call them in Python?) that will return results of a movie given an IMDb movie search.
My question is, how exactly does this import thing work?
For example, I downloaded BeautifulSoup and all it is, is a .py file. Does that file have to be in the same folder as my Python application (my project that will use the library)? | false | 2,095,505 | -0.033321 | 0 | 0 | -1 | Might not be relevant, but have you considered using IMDbPY? Last time I used it, it worked pretty well... | 1 | 217 | 0 | 2 | 2010-01-19T17:28:00.000 | python,import | A few questions regarding Pythons 'import' feature | 1 | 1 | 6 | 2,097,024 | 0 |
1 | 0 | In my Python application, using mod_wsgi and CherryPy on top of Apache, my response code gets changed from a 403 to a 500. I am explicitly setting it to 403.
i.e.
cherrypy.response.status = 403
I do not understand where and why the response code that the client receives is 500. Does anyone have any experience with this problem? | false | 2,106,377 | 0.197375 | 0 | 0 | 1 | The HTTP 500 error is used for internal server errors. Something in the server or your application is likely throwing an exception, so no matter what you set the response code to be before this, CherryPy will send a 500 back.
You can look into whatever tools CherryPy includes for debugging or logging (I'm not familiar with them). You can also set breakpoints into your code and continue stepping into the CherryPy internals until it hits the error case. | 0 | 1,737 | 0 | 3 | 2010-01-21T01:46:00.000 | python,apache,mod-wsgi,cherrypy | CherryPy changes my response code | 1 | 1 | 1 | 2,106,456 | 0 |
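If the goal is simply a clean 403, raising cherrypy.HTTPError sidesteps the problem: CherryPy then builds the 403 response itself instead of a later exception turning it into a 500. A minimal sketch:

```python
import cherrypy

class App(object):
    @cherrypy.expose
    def index(self):
        # Let CherryPy construct the whole 403 response.
        raise cherrypy.HTTPError(403, "Forbidden")

cherrypy.quickstart(App())
```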
0 | 0 | I have been working with Python for a while now. Recently I got into sockets with Twisted, which was good for learning Telnet, SSH, and message passing. I wanted to take an idea and implement it in a web fashion. After a week of searching, all I can really do is create a resource that handles GET and POST all by itself. And this, I am told, is bad practice.
So the questions I have after one week:
* Are other options like Tornado and standard Python sockets a better (or more popular) approach?
* Should one really use separate resources for Twisted GET and POST operations?
* What is a good resource to start in this area of Python development?
My background with languages is C, Java, HTML/DHTML/XHTML/XML, and my main systems (even at home) are Linux. | true | 2,114,847 | 1.2 | 0 | 0 | 1 | I'd recommend against building your own web server and handling raw socket calls to build web applications; it makes much more sense to just write your web services as WSGI applications and use an existing web server, whether it's something like Tornado or Apache with mod_wsgi. | 0 | 474 | 1 | 2 | 2010-01-22T04:01:00.000 | python,webserver,twisted,tornado | Python approach to Web Services and/or handeling GET and POST | 1 | 1 | 3 | 2,114,986 | 0 |
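On the "separate resources" point: in twisted.web a single Resource can serve both verbs cleanly through render_GET and render_POST, so handling GET and POST in one resource need not be bad practice. A minimal sketch (port and response bodies are arbitrary):

```python
from twisted.internet import reactor
from twisted.web.resource import Resource
from twisted.web.server import Site

class Thing(Resource):
    isLeaf = True

    def render_GET(self, request):
        return "read the thing\n"        # one resource...

    def render_POST(self, request):
        return "created the thing\n"     # ...two verbs

reactor.listenTCP(8880, Site(Thing()))
reactor.run()
```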
0 | 0 | I have extensive experience with PHP cURL but for the last few months I've been coding primarily in Java, utilizing the HttpClient library.
My new project requires me to use Python, once again putting me at the crossroads of seemingly comparable libraries: pycurl and urllib2.
Putting aside my previous experience with PHP cURL, what is the recommended library in Python? Is there a reason to use one but not the other? Which is the more popular option? | false | 2,121,945 | 0.049958 | 1 | 0 | 1 | Use urllib2. It's got very good documentation in Python, while pycurl's is mostly C documentation. If you hit a wall, switch to mechanize or pycurl. | 0 | 4,948 | 0 | 2 | 2010-01-23T03:20:00.000 | python,urllib2,pycurl | Python: urllib2 or Pycurl? | 1 | 2 | 4 | 2,122,198 | 0 |
0 | 0 | I have extensive experience with PHP cURL but for the last few months I've been coding primarily in Java, utilizing the HttpClient library.
My new project requires me to use Python, once again putting me at the crossroads of seemingly comparable libraries: pycurl and urllib2.
Putting aside my previous experience with PHP cURL, what is the recommended library in Python? Is there a reason to use one but not the other? Which is the more popular option? | false | 2,121,945 | 0.148885 | 1 | 0 | 3 | urllib2 is part of the standard library, pycurl isn't (so it requires a separate step of download/install/package, etc.). That alone, quite apart from any difference in intrinsic quality, is guaranteed to make urllib2 more popular (and can be a pretty good pragmatic reason to pick it -- convenience!-). | 0 | 4,948 | 0 | 2 | 2010-01-23T03:20:00.000 | python,urllib2,pycurl | Python: urllib2 or Pycurl? | 1 | 2 | 4 | 2,121,967 | 0 |
0 | 0 | I have a big list of Twitter users stored in a database, almost 1000.
I would like to use the Streaming API in order to stream tweets from these users, but I cannot find an appropriate way to do this.
Help would be very much appreciated. | false | 2,123,651 | 0.197375 | 1 | 0 | 2 | You can track 400 filter words and 5000 userids via streaming api.
Filter words can be something apple, orange, ipad etc...
And in order to track any user's timeline you need to get the user's twitter user id. | 0 | 2,823 | 0 | 6 | 2010-01-23T15:34:00.000 | php,python,twitter | Streaming multiple tweets - from multiple users? - Twitter API | 1 | 1 | 2 | 8,286,513 | 0 |
1 | 0 | I am looking for a way to connect a frontend server (running Django) with a backend server.
I want to avoid inventing my own protocol on top of a socket, so my plan was to use SimpleHTTPServer + JSON or XML.
However, we also require some security (authentication + encryption) for the connection, which isn't quite as simple to implement.
Any ideas for alternatives? What mechanisms would you use? I definitely want to avoid CORBA (we have used it before, and it's way too complex for what we need). | false | 2,125,149 | 0.197375 | 0 | 0 | 1 | Use a client-side certificate for the connection. This is a good monetization technique to get more income for your client-side app. | 0 | 324 | 0 | 1 | 2010-01-23T23:18:00.000 | python,json,networking,ipc | Network IPC With Authentication (in Python) | 1 | 1 | 1 | 2,125,162 | 0 |
0 | 0 | What library should I use for network programming? Are sockets the best, or is there a higher-level interface that is standard?
I need something that will be pretty cross-platform (i.e. Linux, Windows, Mac OS X), and it only needs to be able to connect to other Python programs using the same library. | false | 2,128,266 | 0 | 0 | 0 | 0 | Socket is a low-level API; it maps directly to the operating system interface.
Twisted, Tornado, etc. are high-level frameworks (of course they are built on sockets, because sockets are low level).
When it comes to TCP/IP programming, you should have some basic knowledge to make a decision about what you should use:
Will you use a well-known protocol like HTTP or FTP, or create your own protocol?
Blocking or non-blocking? Twisted and Tornado are non-blocking frameworks (basically like Node.js).
Of course, sockets can do everything, because every other framework is based on them ;) | 0 | 1,164 | 0 | 2 | 2010-01-24T18:51:00.000 | python,sockets,network-programming | Network programming in Python | 1 | 2 | 8 | 39,160,652 | 0 |
0 | 0 | What library should I use for network programming? Are sockets the best, or is there a higher-level interface that is standard?
I need something that will be pretty cross-platform (i.e. Linux, Windows, Mac OS X), and it only needs to be able to connect to other Python programs using the same library. | false | 2,128,266 | 0.024995 | 0 | 0 | 1 | The socket module in the standard lib is, in my opinion, a good choice if you don't need high performance.
It is a very famous API that is known by almost every developer in almost every language. It's quite simple and there is a lot of information available on the internet. Moreover, it will be easier for other people to understand your code.
I guess that an event-driven framework like Twisted has better performance, but in basic cases standard sockets are enough.
Of course, if you use a higher-level protocol (HTTP, FTP, ...), you should use the corresponding implementation in the Python standard library. | 0 | 1,164 | 0 | 2 | 2010-01-24T18:51:00.000 | python,sockets,network-programming | Network programming in Python | 1 | 2 | 8 | 2,128,966 | 0 |
0 | 0 | I have a Python program with many threads. I was thinking of creating a socket, binding it to localhost, and having the threads read/write to this central location. However, I do not want this socket open to the rest of the network; only connections from 127.0.0.1 should be accepted. How would I do this (in Python)? And is this a suitable design? Or is there something a little more elegant? | false | 2,135,595 | 0 | 0 | 0 | 0 | On TCP/IP networks, 127.0.0.0/8 is a non-routable network, so you should not be able to send an IP datagram destined to 127.0.0.1 across a routed infrastructure. The router will just discard the datagram. However, it is possible to construct and send datagrams with a destination address of 127.0.0.1, so a host on the same network (IP sense of network) as your host could possibly get the datagram to your host's TCP/IP stack. This is where your local firewall comes into play. Your local (host) firewall should have a rule that discards IP datagrams destined for 127.0.0.0/8 coming into any interface other than lo0 (or the equivalent loopback interface). If your host either 1) has such firewall rules in place or 2) exists on its own network (or shared with only completely trusted hosts) and behind a well-configured router, you can safely just bind to 127.0.0.1 and be fairly certain any datagrams you receive on the socket came from the local machine. The prior answers address how to open and bind to 127.0.0.1. | 0 | 15,248 | 0 | 10 | 2010-01-25T21:00:00.000 | python,client-server | Creating a socket restricted to localhost connections only | 1 | 2 | 6 | 2,135,937 | 0 |
0 | 0 | I have a Python program with many threads. I was thinking of creating a socket, binding it to localhost, and having the threads read/write to this central location. However, I do not want this socket open to the rest of the network; only connections from 127.0.0.1 should be accepted. How would I do this (in Python)? And is this a suitable design? Or is there something a little more elegant? | false | 2,135,595 | 0 | 0 | 0 | 0 | If you do sock.bind(('127.0.0.1', port)) it will only listen on localhost, and not on other interfaces, so that's all you need. | 0 | 15,248 | 0 | 10 | 2010-01-25T21:00:00.000 | python,client-server | Creating a socket restricted to localhost connections only | 1 | 2 | 6 | 2,135,628 | 0 |
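Putting the two answers together, the whole trick is the bind address; a minimal sketch (the port is arbitrary):

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Binding to 127.0.0.1 rather than "" or 0.0.0.0 means the OS will
# only accept connections originating from the local machine.
sock.bind(("127.0.0.1", 9001))
sock.listen(5)
conn, addr = sock.accept()     # addr[0] will always be 127.0.0.1
```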
1 | 0 | I have a well-developed Python server with workflows, views, objects (ORM/OSV), etc.
Server/client communication is based on a socket protocol and can be done by either service:
1. XMLRPC Service
2. Socket Service
Now I want to develop a fully Ajax-based GUI web client.
I have web/socket services to communicate with the server.
What I need is to select the technology. I have several options, like:
ExtJS - CherryPy
GWT
Ext-GWT
CherryPy
Django + JQuery
Django + Extjs
???
???... | false | 2,138,868 | 0 | 0 | 0 | 0 | How about Pylons + SQLAlchemy + ExtJS? We use it and it works great! | 0 | 2,943 | 0 | 5 | 2010-01-26T10:51:00.000 | python,django,gwt,extjs,webclient | Which technology is preferable to build a web based GUI Client? | 1 | 2 | 5 | 2,140,012 | 0 |
1 | 0 | I have a well-developed Python server with workflows, views, objects (ORM/OSV), etc.
Server/client communication is based on a socket protocol and can be done by either service:
1. XMLRPC Service
2. Socket Service
Now I want to develop a fully Ajax-based GUI web client.
I have web/socket services to communicate with the server.
What I need is to select the technology. I have several options, like:
ExtJS - CherryPy
GWT
Ext-GWT
CherryPy
Django + JQuery
Django + Extjs
???
???... | false | 2,138,868 | 0.039979 | 0 | 0 | 1 | I'm not sure I understood exactly on the server side, but I'm a big fan of Flex as a way to develop proper software for the browser, rather than the mess of trying to make HTML do things it was never made for. Partly idealistic reasoning, but I am also still not impressed by the 'feel' of JS-based GUIs.
Flex has good server-communication options... web-services, sockets, remote objects, etc. | 0 | 2,943 | 0 | 5 | 2010-01-26T10:51:00.000 | python,django,gwt,extjs,webclient | Which technology is preferable to build a web based GUI Client? | 1 | 2 | 5 | 2,138,949 | 0 |
1 | 0 | I need to scrape a site with Python. I obtain the source HTML code with the urllib module, but I also need to scrape some HTML code that is generated by a JavaScript function (which is included in the HTML source). What this function does on the site is that when you press a button it outputs some HTML code. How can I "press" this button with Python code? Can Scrapy help me? I captured the POST request with Firebug, but when I try to pass it on the URL I get a 403 error. Any suggestions? | false | 2,148,493 | 0.158649 | 0 | 0 | 4 | I have had to do this before (in .NET) and you are basically going to have to host a browser, get it to click the button, and then interrogate the DOM (document object model) of the browser to get at the generated HTML.
This is definitely one of the downsides to web apps moving towards an Ajax/Javascript approach to generating HTML client-side. | 0 | 17,188 | 0 | 18 | 2010-01-27T16:20:00.000 | javascript,python,browser,screen-scraping | scrape html generated by javascript with python | 1 | 1 | 5 | 2,148,595 | 0 |
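That "host a browser" approach is exactly what Selenium automates from Python (a later tool than this answer); a minimal sketch where the URL and button id are hypothetical:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Firefox()
driver.get("http://example.com/page")               # hypothetical page
driver.find_element(By.ID, "the-button").click()    # press the button
html = driver.page_source    # now includes the JS-generated markup
driver.quit()
```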
1 | 0 | In Python's mechanize.Browser module, when you submit a form the browser instance goes to that page. For this one request, I don't want that; I want it just to stay on the page it's currently on and give me the response in another object (for looping purposes). Anyone know a quick to do this?
EDIT:
Hmm, so I have this kind of working with ClientForm.HTMLForm.click(), which returns a urllib2 request, but I need the cookies from mechanize's cookiejar to be used on my urllib2.urlopen request. Is there a method in mechanize that will let me send a request just like urllib2 with the exception that cookies will be imported? | true | 2,152,098 | 1.2 | 0 | 0 | 7 | The answer to my immediate question in the headline is yes, with mechanize.Browser.open_novisit(). It works just like open(), but it doesn't change the state of the Browser instance -- that is, it will retrieve the page, and your Browser object will stay where it was. | 0 | 1,070 | 0 | 3 | 2010-01-28T03:24:00.000 | python,screen-scraping,mechanize | Can I get my instance of mechanize.Browser to stay on the same page after calling b.form.submit()? | 1 | 1 | 1 | 2,167,177 | 0 |
0 | 0 | How can I retrieve contacts from Hotmail with Python?
Is there any example? | false | 2,165,517 | 0 | 1 | 0 | 0 | Use Octazen, but you have to pay for it | 0 | 1,864 | 0 | 2 | 2010-01-29T22:08:00.000 | python,hotmail | How do I retrieve Hotmail contacts with python | 1 | 1 | 3 | 2,458,380 | 0 |
1 | 0 | Does Python have screen scraping libraries that offer JavaScript support?
I've been using pycurl for simple HTML requests, and Java's HtmlUnit for more complicated requests requiring JavaScript support.
Ideally I would like to be able to do everything from Python, but I haven't come across any libraries that would allow me to do it. Do they exist? | false | 2,190,502 | -0.057081 | 0 | 0 | -2 | I have not found anything for this. I use a combination of beautifulsoup and custom routines... | 0 | 10,580 | 0 | 14 | 2010-02-03T08:11:00.000 | python,screen-scraping,htmlunit,pycurl | Screen scraping with Python | 1 | 1 | 7 | 2,190,517 | 0 |
0 | 0 | I'd like to make sure that my message was delivered to a queue.
To do so I'm adding the mandatory param to the basic_publish.
What else should I do to receive the basic.return message if my message wasn't successfully delivered?
I can't use channel.wait() to listen for the basic.return because when my message is successfully delivered the wait() function hangs forever. (There is no timeout)
On the other hand. When I don't call channel.wait() the channel.returned_messages will remain empty, even if the message isn't delivered.
I use py-amqplib version 0.6.
Any solution is welcome. | true | 2,191,695 | 1.2 | 0 | 0 | 1 | It is currently impossible as the basic.return is sent asynchronously when a message is dropped in broker. When message was sent successfully no data is reported from server.
So pyAMQP can't listen for such messages.
I've read few threads about this problem. Possible solution were:
use txAMQP, twisted version of amqp that handles basic.return
use pyAMQP with wait with timeout. (I'm not sure if that is currently possible)
ping server frequently with synchronous commands so that pyAMQP will able to pick basic.return messages when they arrive.
Because the level of support for pyAMQP and rabbitMQ in general is quite low, we decided not to use amqp broker at all. | 0 | 1,192 | 0 | 2 | 2010-02-03T12:08:00.000 | python,rabbitmq,amqp,py-amqplib | How to use listen on basic.return in python client of AMQP | 1 | 1 | 3 | 2,240,828 | 0 |
0 | 0 | What is the point of '/segment/segment/'.split('/') returning ['', 'segment', 'segment', '']?
Notice the empty elements. If you're splitting on a delimiter that happens to be at position one and at the very end of a string, what extra value does it give you to have the empty string returned from each end? | false | 2,197,451 | 1 | 0 | 0 | 9 | Having x.split(y) always return a list of 1 + x.count(y) items is a precious regularity -- as @gnibbler's already pointed out it makes split and join exact inverses of each other (as they obviously should be), it also precisely maps the semantics of all kinds of delimiter-joined records (such as csv file lines [[net of quoting issues]], lines from /etc/group in Unix, and so on), it allows (as @Roman's answer mentioned) easy checks for (e.g.) absolute vs relative paths (in file paths and URLs), and so forth.
Another way to look at it is that you shouldn't wantonly toss information out of the window for no gain. What would be gained in making x.split(y) equivalent to x.strip(y).split(y)? Nothing, of course -- it's easy to use the second form when that's what you mean, but if the first form was arbitrarily deemed to mean the second one, you'd have a lot of work to do when you do want the first one (which is far from rare, as the previous paragraph points out).
But really, thinking in terms of mathematical regularity is the simplest and most general way you can teach yourself to design passable APIs. To take a different example, it's very important that for any valid x and y x == x[:y] + x[y:] -- which immediately indicates why one extreme of a slicing should be excluded. The simpler the invariant assertion you can formulate, the likelier it is that the resulting semantics are what you need in real life uses -- part of the mystical fact that maths is very useful in dealing with the universe.
Try formulating the invariant for a split dialect in which leading and trailing delimiters are special-cased... counter-example: string methods such as isspace are not maximally simple -- x.isspace() is equivalent to x and all(c in string.whitespace for c in x) -- that silly leading x and is why you so often find yourself coding not x or x.isspace(), to get back to the simplicity which should have been designed into the is... string methods (whereby an empty string "is" anything you want -- contrary to man-in-the-street horse-sense, maybe [[empty sets, like zero &c, have always confused most people;-)]], but fully conforming to obvious well-refined mathematical common-sense!-). | 1 | 93,416 | 0 | 146 | 2010-02-04T05:14:00.000 | python,string,split | Why are empty strings returned in split() results? | 1 | 2 | 8 | 2,197,605 | 0 |
0 | 0 | What is the point of '/segment/segment/'.split('/') returning ['', 'segment', 'segment', '']?
Notice the empty elements. If you're splitting on a delimiter that happens to be at position one and at the very end of a string, what extra value does it give you to have the empty string returned from each end? | false | 2,197,451 | 1 | 0 | 0 | 6 | Well, it lets you know there was a delimiter there. So, seeing 4 results lets you know you had 3 delimiters. This gives you the power to do whatever you want with this information, rather than having Python drop the empty elements, and then making you manually check for starting or ending delimiters if you need to know it.
Simple example: Say you want to check for absolute vs. relative filenames. This way you can do it all with the split, without also having to check what the first character of your filename is. | 1 | 93,416 | 0 | 146 | 2010-02-04T05:14:00.000 | python,string,split | Why are empty strings returned in split() results? | 1 | 2 | 8 | 2,197,494 | 0 |
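Both the invariant and the absolute-path check are easy to see in the interpreter; a short demonstration using only the behavior described above:

```python
s = "/segment/segment/"
parts = s.split("/")
print(parts)                        # ['', 'segment', 'segment', '']
print("/".join(parts) == s)         # True: join exactly inverts split
print("/a/b".split("/")[0] == "")   # True: a leading '' marks an absolute path
```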
0 | 0 | I am trying to verify, in Python, that a video service is provided from a URL. Does anyone know of any good libraries to use, or a way to do this? I have not found much info on this on the web.
Thanks | false | 2,207,110 | 0.148885 | 0 | 0 | 3 | If you do not want to use a library, as suggested by synack, you can open a socket connection to the given URL and send an RTSP DESCRIBE request. That is actually quite simple, since RTSP is text-based and HTTP-like. You would need to parse the response for a meaningful result, e.g. look for the presence of media streams.
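A minimal sketch of that DESCRIBE request over a raw socket (host, the standard RTSP port 554, and the URL are placeholders; a real check would parse the SDP body properly rather than substring-matching):

```python
import socket

def rtsp_describe(host, port, url):
    # RTSP is text-based and HTTP-like; DESCRIBE asks for the SDP
    # description of the media behind the URL.
    req = ("DESCRIBE %s RTSP/1.0\r\n"
           "CSeq: 1\r\n"
           "Accept: application/sdp\r\n\r\n" % url)
    s = socket.create_connection((host, port), timeout=5)
    s.sendall(req)
    reply = s.recv(4096)
    s.close()
    return reply.startswith("RTSP/1.0 200") and "m=video" in reply

print(rtsp_describe("example.com", 554, "rtsp://example.com/stream"))
```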
0 | 0 | I am trying to verify, in Python, that a video service is provided from a URL. Does anyone know of any good libraries to use, or a way to do this? I have not found much info on this on the web.
Thanks | false | 2,207,110 | 0 | 0 | 0 | 0 | I don't believe Live555 provides a Python library. However, they do provide source code that can be compiled to build openRTSP. This is a simple command-line utility that will perform the entire RTSP handshake to connect to the server and begin streaming to the client. It can also provide statistic measurements (such as jitter, number of packets lost, etc.) that can be used to measure the quality of the streaming connection. | 0 | 4,091 | 0 | 4 | 2010-02-05T12:23:00.000 | python,url,video-streaming,rtsp | Verify RTSP service via URL | 1 | 2 | 4 | 2,421,464 | 0 |
1 | 0 | I am building a Python web application and we will need a user identity verification solution... something to verify the user's identity during account registration.
I was wondering if anyone had any experience in integrating such a solution. What vendors/products out there have worked well with you? Any tips?
I don't have any experience in this matter so feel free to let me know if any additional information is required.
Thanks in advance! | false | 2,223,790 | 0 | 0 | 0 | 0 | You should have a look at WS-Trust.
An implementation of that is Windows Identity Foundation. But I'm sure you'll find more. | 0 | 747 | 0 | 5 | 2010-02-08T18:12:00.000 | python,identity,verification | Online identity verification solution | 1 | 2 | 4 | 2,223,896 | 0 |
1 | 0 | I am building a Python web application and we will need a user identity verification solution... something to verify the user's identity during account registration.
I was wondering if anyone had any experience in integrating such a solution. What vendors/products out there have worked well with you? Any tips?
I don't have any experience in this matter so feel free to let me know if any additional information is required.
Thanks in advance! | false | 2,223,790 | 0.049958 | 0 | 0 | 1 | There are many different ways to implement a verification system; the concept is quite simple, but actually building one can be a hassle, especially if you are doing it from scratch.
The best way to approach this is to find a framework that handles the verification aspect. TurboGears and Pylons are both capable of this, rather than doing it yourself or using third-party apps.
Personally I have worked on commercial projects using both frameworks and was able to sort out verification quite easily.
User verification utilizes specific concepts and low-level technology, such as the internet's stateless nature, session handling, database design, etc.
So the point I am making is that it would be better if you got a good, stable framework that could do the dirty work for you.
By the way, what framework are you thinking of using? That would help me give a more detailed answer.
Hope this helps? | 0 | 747 | 0 | 5 | 2010-02-08T18:12:00.000 | python,identity,verification | Online identity verification solution | 1 | 2 | 4 | 2,224,273 | 0 |
0 | 0 | I'm considering using XML-RPC.NET to communicate with a Linux XML-RPC server written in Python. I have tried a sample application (MathApp) from Cook Computing's XML-RPC.NET, but it took 30 seconds for the app to add two numbers within the same LAN as the server.
I have also tried to run a simple client written in Python on Windows 7 to call the same server, and it responded in 5 seconds. The machine has 4 GB of RAM with comparable processing power, so this is not an issue.
Then I tried to call the server from a Windows XP system with Java and PHP. Both responses were pretty fast, almost instant. The server was responding quickly on localhost too, so I don't think the latency arises from the server.
My googling turned up some problems regarding Windows' use of IPv6, but our call to the server does use an IPv4 address (not a hostname) in the same subnet. Anyway, I turned off IPv6, but nothing changed.
Are there any more ways to check for possible causes of latency? | false | 2,235,643 | 0 | 1 | 0 | 0 | Run a packet capture on the client machine, check the network traffic timings versus the time the function is called.
This may help you determine where the latency is in your slow process, e.g. application start-up time, name resolution, etc.
How are you addressing the server from the client? By IP? By FQDN? Is the addressing method the same in each of the applications you're using?
If you call the same remote procedure multiple times from the same slow application, does the time taken increase linearly? | 0 | 1,326 | 0 | 1 | 2010-02-10T09:23:00.000 | c#,.net,python,windows-7,xml-rpc | Slow XML-RPC in Windows 7 with XML-RPC.NET | 1 | 1 | 2 | 2,235,703 | 0 |
0 | 0 | I'd like to tell urllib2.urlopen (or a custom opener) to use 127.0.0.1 (or ::1) to resolve addresses. I wouldn't change my /etc/resolv.conf, however.
One possible solution is to use a tool like dnspython to query addresses and httplib to build a custom URL opener. I'd prefer telling urlopen to use a custom nameserver, though. Any suggestions? | false | 2,236,498 | 0 | 0 | 0 | 0 | You will need to implement your own DNS lookup client (or use dnspython, as you said). The name lookup procedure in glibc is pretty complex, to ensure compatibility with other non-DNS name systems. For example, there's no way to specify a particular DNS server in the glibc library at all. | 0 | 13,644 | 0 | 17 | 2010-02-10T11:46:00.000 | python,dns,urllib2,dnspython,urlopen | Tell urllib2 to use custom DNS | 1 | 1 | 3 | 2,237,322 | 0 |
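A sketch of the dnspython route the question already mentions: resolve the name against the custom server yourself, then connect by IP while sending the real name in the Host header (plain HTTP only; names are placeholders, and note that newer dnspython versions call query() resolve()):

```python
import dns.resolver
import urllib2

resolver = dns.resolver.Resolver(configure=False)
resolver.nameservers = ["127.0.0.1"]          # the custom nameserver

host = "example.com"
ip = resolver.query(host, "A")[0].address     # manual lookup...

# ...then fetch by IP, keeping the original name in the Host header.
req = urllib2.Request("http://%s/" % ip, headers={"Host": host})
print(urllib2.urlopen(req).read()[:80])
```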
0 | 0 | We have a script which pulls some XML from a remote server. If this script is running on any server other than production, it works.
Upload it to production however, and it fails. It is using cURL for the request but it doesn't matter how we do it - fopen, file_get_contents, sockets - it just times out. This also happens if I use a Python script to request the URL.
The same script, supplied with another URL to query, works - every time. Obviously it doesn't return the XML we're looking for but it DOES return SOMETHINg - it CAN connect to the remote server.
If this URL is requested via the command line using, for example, curl or wget, again, data is returned. It's not the data we're looking for (in fact, it returns an empty root element) but something DOES come back.
Interestingly, if we strip out query string elements from the URL (the full URL has 7 query string elements and runs to about 450 characters in total) the script will return the same empty XML response. Certain combinations of the query string will once again cause the script to time out.
This, as you can imagine, has me utterly baffled - it seems to work in every circumstance EXCEPT the one it needs to work in. We can get a response on our dev servers, we can get a response on the command line, we can get a response if we drop certain QS elements - we just can't get the response we want with the correct URL on the LIVE server.
Does anyone have any suggestions at all? I'm at my wits end! | false | 2,236,864 | 0.197375 | 1 | 0 | 1 | Run Wireshark and see how far the request goes. Could be a firewall issue, a DNS resolution problem, among other things.
Also, try bumping your curl timeout to something much higher, like 300s, and see how it goes. | 0 | 606 | 0 | 0 | 2010-02-10T12:54:00.000 | php,python,xml,apache,curl | PHP / cURL problem opening remote file | 1 | 1 | 1 | 2,236,930 | 0 |
0 | 0 | I am looking for a good library in python that will help me parse RSS feeds. Has anyone used feedparser? Any feedback? | false | 2,244,836 | -0.024995 | 0 | 0 | -1 | I strongly recommend feedparser. | 0 | 23,570 | 0 | 41 | 2010-02-11T13:57:00.000 | python,rss,feedparser | RSS feed parser library in Python | 1 | 2 | 8 | 2,245,462 | 0
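For what that recommendation looks like in practice, a minimal feedparser sketch (placeholder feed URL; assumes the feed has the usual title/link fields):

import feedparser

feed = feedparser.parse('http://example.com/rss.xml')
print feed.feed.title
for entry in feed.entries:
    print entry.title, '->', entry.link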
0 | 0 | I am looking for a good library in python that will help me parse RSS feeds. Has anyone used feedparser? Any feedback? | false | 2,244,836 | 0.049958 | 0 | 0 | 2 | If you want an alternative, try xml.dom.minidom.
Like "Django is Python", "RSS is XML". | 0 | 23,570 | 0 | 41 | 2010-02-11T13:57:00.000 | python,rss,feedparser | RSS feed parser library in Python | 1 | 2 | 8 | 2,245,280 | 0 |
0 | 0 | I browsed the python socket docs and google for two days but I did not find any answer. Yeah I am a network programming newbie :)
I would like to implement some LAN chatting system with specific functions for our needs. I am at the very beginning. I was able to implement a client-server model where the client connects to the server (socket.SOCK_STREAM) and they are able to exchange messages. I want to step forward. I want the client to discover, with a broadcast on the LAN, how many other clients are available.
I failed. Is it possible that a socket.SOCK_STREAM type socket cannot be used for this task?
If so, what are my options? Using UDP packets? How do I listen for broadcast messages/packets? | true | 2,247,228 | 1.2 | 0 | 0 | 4 | The broadcast is defined by the destination address.
For example if your own ip is 192.168.1.2, the broadcast address would be 192.168.1.255 (in most cases)
This is not directly related to Python and will probably not be in its documentation. You are looking for general networking knowledge, at a level well above socket programming.
*EDIT
Yes you are right, you cannot use SOCK_STREAM. SOCK_STREAM defines TCP communication. You should use UDP for broadcasting with socket.SOCK_DGRAM | 0 | 3,935 | 0 | 6 | 2010-02-11T19:45:00.000 | python | stream socket send/receive broadcast messages? | 1 | 1 | 1 | 2,247,237 | 0 |
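A minimal sketch of both halves (the port, payload, and the 192.168.1.255 broadcast address are placeholders for your own subnet):

import socket

PORT = 50000  # arbitrary placeholder

# --- peer side: runs on every client, answering probes ---
r = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
r.bind(('', PORT))
data, addr = r.recvfrom(1024)
r.sendto('here!', addr)

# --- discovering side: shout into the subnet, collect replies ---
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
s.sendto('anyone there?', ('192.168.1.255', PORT))
s.settimeout(2.0)
try:
    while True:
        data, addr = s.recvfrom(1024)
        print 'found peer at', addr
except socket.timeout:
    pass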
1 | 0 | I have edited about 100 html files locally, and now I want to push them to my live server, which I can only access via ftp.
The HTML files are in many different directories, but the directory structure on the remote machine is the same as on the local machine.
How can I recursively descend from my top-level directory ftp-ing all of the .html files to the corresponding directory/filename on the remote machine?
Thanks! | false | 2,263,782 | 0 | 1 | 0 | 0 | umm, maybe by pressing F5 in mc for linux or total commander for windows? | 0 | 409 | 0 | 0 | 2010-02-15T02:37:00.000 | python,networking,scripting,ftp | How to upload all .html files to a remote server using FTP and preserving file structure? | 1 | 2 | 4 | 2,263,804 | 0 |
1 | 0 | I have edited about 100 html files locally, and now I want to push them to my live server, which I can only access via ftp.
The HTML files are in many different directories, but the directory structure on the remote machine is the same as on the local machine.
How can I recursively descend from my top-level directory ftp-ing all of the .html files to the corresponding directory/filename on the remote machine?
Thanks! | false | 2,263,782 | 0 | 1 | 0 | 0 | if you have a mac, you can try cyberduck. It's good for syncing remote directory structures via ftp. | 0 | 409 | 0 | 0 | 2010-02-15T02:37:00.000 | python,networking,scripting,ftp | How to upload all .html files to a remote server using FTP and preserving file structure? | 1 | 2 | 4 | 2,299,546 | 0 |
0 | 0 | Given this:
It%27s%20me%21
Unencode it and turn it into regular text? | true | 2,277,302 | 1.2 | 0 | 0 | 11 | Take a look at urllib.unquote and urllib.unquote_plus. That will address your problem. Technically, though, what you have here is percent-encoding; "URL encoding" in the query-string sense is the process of passing arguments into a URL with the & and ? characters (e.g. www.foo.com?x=11&y=12). | 0 | 5,032 | 0 | 11 | 2010-02-16T23:50:00.000 | python,url,encoding | How do I url unencode in Python? | 1 | 1 | 3 | 2,277,313 | 0
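For example (Python 2 names; in Python 3 these live in urllib.parse):

>>> import urllib
>>> urllib.unquote('It%27s%20me%21')
"It's me!"
>>> urllib.unquote_plus('It%27s+me%21')  # also turns '+' into a space
"It's me!"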
0 | 0 | I want to use ms office communicator client apis, and i wan to use those in python is it possible to do ? | true | 2,286,790 | 1.2 | 0 | 0 | 2 | >>> import win32com.client
>>> msg = win32com.client.Dispatch('Communicator.UIAutomation')
>>> msg.InstantMessage('[email protected]') | 0 | 1,868 | 0 | 2 | 2010-02-18T06:48:00.000 | python,api,office-communicator | How can we use ms office communicator client exposed APIs in python, is that possible? | 1 | 1 | 3 | 3,754,769 | 0 |
0 | 0 | I'm writing a service in Python that pings domains asynchronously, so it must be able to ping many IPs at the same time. I wrote it on an epoll ioloop, but I have a problem with packet loss.
When there are many simultaneous ICMP requests, a large share of the replies never reaches my service. What may cause this, and how can I make my service ping many hosts at the same time without packet loss?
Thanks) | true | 2,299,751 | 1.2 | 0 | 0 | 0 | A problem you might be having is due to the fact that ICMP is layer 3 of the OSI model and does not use a port for communication. In short, ICMP isn't really designed for this. The desired behavior is still possible but perhaps the IP Stack you are using is getting in the way and if this is on a Windows system then 100% sure this is your problem. I would fire up Wireshark to make sure you are actually getting incoming packets, if this is the case then I would use libpcap to track in ICMP replies. If the problem is with sending then you'll have to use raw sockets and build your own ICMP packets. | 0 | 788 | 0 | 1 | 2010-02-19T21:32:00.000 | python,ping,icmp | Problem with asyn icmp ping | 1 | 1 | 1 | 2,299,927 | 0 |
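If it does come to building your own packets, a minimal sketch of one ICMP echo request (Python 2; raw sockets require root; the target address and id/seq values are placeholders):

import struct
import socket

def checksum(data):
    # RFC 1071: ones'-complement sum of 16-bit words
    if len(data) % 2:
        data += '\x00'
    total = 0
    for i in range(0, len(data), 2):
        total += (ord(data[i]) << 8) + ord(data[i + 1])
    total = (total >> 16) + (total & 0xffff)
    total += total >> 16
    return ~total & 0xffff

def echo_request(ident, seq, payload='ping'):
    header = struct.pack('!BBHHH', 8, 0, 0, ident, seq)  # type 8 = echo request
    return struct.pack('!BBHHH', 8, 0, checksum(header + payload), ident, seq) + payload

sock = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.getprotobyname('icmp'))
sock.sendto(echo_request(0x1234, 1), ('192.168.0.1', 0))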
0 | 0 | I am writing a general Client-Server socket program where the client sends commands to the Server, which executes it and sends the result to the Client.
However, if there is an error while executing a command, I want to be able to inform the Client of it. I know I could send the String "ERROR" or maybe something like -1 etc., but these could also be part of the command output. Is there any better way of sending an error or an exception over a socket?
My Server is in Java and Client is in Python | false | 2,302,761 | 0 | 0 | 0 | 0 | Typically when doing client-server communication you need to establish some kind of protocol. One very simple protocol is to send the String "COMMAND" before you send any commands and the String "ERROR" before you send any errors. This doubles the number of Strings you have to send but gives more flexibility.
There are also a number of more sophisticated protocols already developed. Rather than sending Strings you could construct a Request object which you then serialize and send to the client. The client can then reconstruct the Request object and perform the request whether it's performing an error or running a command. | 0 | 134 | 0 | 1 | 2010-02-20T16:05:00.000 | java,python,sockets | Pass error on socket | 1 | 2 | 2 | 2,302,787 | 0 |
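A sketch of that tagging idea on the Python client side, assuming a line-oriented exchange (one tag line, then one payload line; sock_file would come from socket.makefile() on the client socket):

def read_reply(sock_file):
    kind = sock_file.readline().strip()        # 'RESULT' or 'ERROR'
    payload = sock_file.readline().rstrip('\n')
    if kind == 'ERROR':
        raise RuntimeError(payload)            # surface server-side failures
    return payload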
0 | 0 | I am writing a general Client-Server socket program where the client sends commands to the Server, which executes it and sends the result to the Client.
However, if there is an error while executing a command, I want to be able to inform the Client of it. I know I could send the String "ERROR" or maybe something like -1 etc., but these could also be part of the command output. Is there any better way of sending an error or an exception over a socket?
My Server is in Java and Client is in Python | false | 2,302,761 | 0 | 0 | 0 | 0 | You're already (necessarily) establishing some format or protocol whereby strings are being sent back and forth -- either you're somehow terminating each string, or sending its length first, or the like. (TCP is intrinsically just a stream so without such a protocol there would be no way the recipient could possibly know when the command or output is finished!-)
So, whatever approach you're using to delimiting strings, just make it so the results sent back from server to client are two strings each and every time: one being the error description (empty if no error), the other being the commands's results (empty if no results). That's going to be trivial both to send and receive/parse, and have minimal overhead (sending an empty string should be as simple as sending just a terminator or a length of 0). | 0 | 134 | 0 | 1 | 2010-02-20T16:05:00.000 | java,python,sockets | Pass error on socket | 1 | 2 | 2 | 2,302,805 | 0 |
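A sketch of that two-string convention on the Python side, assuming each string is framed by a 4-byte big-endian length prefix (one of the delimiting schemes mentioned above):

import struct

def recv_exact(sock, n):
    data = ''
    while len(data) < n:
        chunk = sock.recv(n - len(data))
        if not chunk:
            raise EOFError('connection closed')
        data += chunk
    return data

def recv_string(sock):
    (length,) = struct.unpack('!I', recv_exact(sock, 4))
    return recv_exact(sock, length)

def recv_reply(sock):
    error = recv_string(sock)    # empty string means "no error"
    result = recv_string(sock)   # empty string means "no output"
    return error, result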
1 | 0 | I am having trouble exchanging my Oauth request token for an Access Token. My Python application successfully asks for a Request Token and then redirects to the Google login page asking to grant access to my website. When I grant access I retrieve a 200 status code but exchanging this authorized request token for an access token gives me a 'The token is invalid' message.
The Google Oauth documentation says: "Google redirects with token and verifier regardless of whether the token has been authorized." so it seems that authorizing the request token fails but then I am not sure how I should get an authorized request token. Any suggestions? | true | 2,306,984 | 1.2 | 0 | 0 | 1 | When you're exchanging for the access token, the oauth_verifier parameter is required. If you don't provide that parameter, then google will tell you that the token is invalid. | 0 | 1,332 | 0 | 2 | 2010-02-21T18:50:00.000 | python,oauth,google-api | Exchange Oauth Request Token for Access Token fails Google API | 1 | 1 | 1 | 3,110,939 | 0 |
0 | 0 | Using Python, how does one parse/access files with Linux-specific features, like "~/.mozilla/firefox/*.default"? I've tried this, but it doesn't work.
Thanks | false | 2,313,053 | 0.099668 | 0 | 0 | 2 | It's important to remember:
use of the tilde ~ expands the home directory as per Poke's answer
use of the forward slash / is the separator for linux / *nix directories
by default, *nix systems such as Linux have wildcard globbing in the shell; for instance, echo *.* will return all files matching the asterisk-dot-asterisk pattern (as per Will McCutcheon's answer!) | 0 | 7,357 | 1 | 3 | 2010-02-22T18:17:00.000 | python,linux,path | Python: How to Access Linux Paths | 1 | 1 | 4 | 2,313,168 | 0
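Putting the three points above together in a sketch:

import os
import glob

pattern = os.path.expanduser('~/.mozilla/firefox/*.default')
for path in glob.glob(pattern):   # the shell's globbing, done from Python
    print path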
0 | 0 | I am coding a python (2.6) interface to a web service. I need to communicate via http so that :
Cookies are handled automatically,
The requests are asynchronous,
The order in which the requests are sent is respected (the order in which the responses to these requests are received does not matter).
I have tried what could easily be derived from the built-in libraries, facing different problems:
Using httplib and urllib2, the requests are synchronous unless I use threads, in which case the order is not guaranteed to be respected,
Using asyncore, there was no library to automatically deal with cookies sent by the web service.
After some googling, it seems that there are many examples of python scripts or libraries that match 2 out of the 3 criteria, but not all 3. I am thinking of reading through the cookielib sources and adapting what I need of it to asyncore (or only to my application, in an ad hoc manner), but it seems strange that nothing like this exists yet, as I guess I am not the only one interested. If anyone knows of pointers about this problem, it would be greatly appreciated.
Thank you.
Edit to clarify :
What I am doing is a local proxy that interfaces my IRC client with a webchat. It creates a socket that listens for IRC connections; upon receiving one, it logs in to the webchat via http. I don't have access to the behaviour of the webchat, and it uses cookies for session IDs. When the client sends several IRC requests to my python proxy, I have to forward them to the webchat's server via http and with cookies. I also want to do this asynchronously (I don't want to wait for the http response before I send the next request), and currently what happens is that the order in which the http requests are sent is not the order in which the IRC commands were received.
I hope this clarifies the question, and I will of course detail more if it doesn't. | true | 2,315,151 | 1.2 | 0 | 0 | 2 | "Using httplib and urllib2, the requests are synchronous unless I use threads, in which case the order is not guaranteed to be respected"
How would you know that the order has been respected unless you get your response back from the first connection before you send the request on the second connection? After all, you don't care what order the responses come in, so it's very possible that the responses come back in the order you expect but that your requests were processed in the wrong order!
The only way you can guarantee the ordering is by waiting for confirmation that the first request has successfully arrived (eg. you start receiving the response for it) before beginning the second request. You can do this by not launching the second thread until you reach the response handling part of the first thread. | 0 | 1,531 | 0 | 1 | 2010-02-22T23:43:00.000 | python,http,asynchronous,cookies | Python: Asynchronous http requests sent in order with automatic handling of cookies? | 1 | 1 | 1 | 2,317,612 | 0 |
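A minimal sketch of that arrangement: a single worker thread drains a Queue, so requests leave in exactly the order the IRC commands were enqueued, and a shared CookieJar keeps the webchat session (the URL and payload are placeholders):

import threading
import urllib2
import cookielib
from Queue import Queue

jar = cookielib.CookieJar()
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(jar))
outbox = Queue()

def sender():
    while True:
        url, data = outbox.get()       # FIFO: submission order is preserved
        opener.open(url, data).read()  # finish before starting the next one

threading.Thread(target=sender).start()

# producer side: enqueue without waiting for the HTTP round-trip
outbox.put(('http://chat.example.com/post', 'msg=hello'))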
0 | 0 | I have a string:
'''
{"session_key":"3.KbRiifBOxY_0ouPag6__.3600.1267063200-16423986","uid":164
23386,"expires":12673200,"secret":"sm7WM_rRtjzXeOT_jDoQ__","sig":"6a6aeb66
64a1679bbeed4282154b35"}
'''
How do I get the values?
thanks | false | 2,330,857 | 0 | 1 | 0 | 0 | For a simple-to-code method, I suggest using ast.literal_eval() or eval() to create a dictionary from your string, and then accessing the fields as usual. The difference between the two functions is that ast.literal_eval can only evaluate literals of basic types, and is therefore more secure if someone can hand you a string that could contain "bad" code. | 0 | 115 | 0 | 1 | 2010-02-25T01:00:00.000 | python | which is the best way to get the value of 'session_key','uid','expires' | 1 | 1 | 3 | 2,330,884 | 0
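For example, with the string shortened (and since the data is actually JSON, json.loads on 2.6, or simplejson earlier, is the other natural choice):

import ast

raw = '{"uid": 16423386, "secret": "sm7WM_rRtjzXeOT_jDoQ__"}'
data = ast.literal_eval(raw)   # safe: evaluates literals only, never code
print data['uid'], data['secret']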
0 | 0 | I found the quoted text in Programming Python 3rd edition by Mark Lutz from Chapter 16: Server-Side Scripting (page 987):
Forms also include a method option to specify the encoding style to be used to send data over a socket to the target server machine. Here, we use the post style, which contacts the server and then ships it a stream of user input data in a separate transmission. An alternative get style ships input information to the server in a single transmission step, by adding user inputs to the end of the URL used to invoke the script, usually after a ? character (more on this soon).
I read this with some puzzlement. As far as I know post data is sent in the same transmission, as part of the same http request. I have never heard of this extra step for post data transmission.
I quickly looked over the relevant HTTP rfc's and didn't notice any distinction in version 1.0 or 1.1. I also used wireshark for some analysis and didn't notice multiple transmissions for post.
Am I missing something fundamental or is this an error in the text? | true | 2,339,742 | 1.2 | 0 | 0 | 0 | Yes, there is only one transmission of data between server and client.
The context of the passage was referring to communication between the web server and the CGI application. With POST, that communication happens in two separate transfers: the server invokes the Python script in one transfer, then ships it the POST data separately over stdin.
Whereas with GET, the input data is passed as env vars or command-line args in one transfer. | 0 | 372 | 0 | 2 | 2010-02-26T05:51:00.000 | python,http,post,cgi,get | HTTP POST Requests require multiple transmissions? | 1 | 2 | 3 | 2,790,879 | 0
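A minimal CGI-side sketch of that difference (Python 2):

import os
import sys

if os.environ.get('REQUEST_METHOD') == 'POST':
    # POST: the body arrives in a separate transfer, on stdin
    length = int(os.environ.get('CONTENT_LENGTH', 0))
    body = sys.stdin.read(length)
else:
    # GET: everything arrived with the request itself, via the environment
    body = os.environ.get('QUERY_STRING', '')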
0 | 0 | I found the quoted text in Programming Python 3rd edition by Mark Lutz from Chapter 16: Server-Side Scripting (page 987):
Forms also include a method option to specify the encoding style to be used to send data over a socket to the target server machine. Here, we use the post style, which contacts the server and then ships it a stream of user input data in a separate transmission. An alternative get style ships input information to the server in a single transmission step, by adding user inputs to the end of the URL used to invoke the script, usually after a ? character (more on this soon).
I read this with some puzzlement. As far as I know post data is sent in the same transmission, as part of the same http request. I have never heard of this extra step for post data transmission.
I quickly looked over the relevant HTTP rfc's and didn't notice any distinction in version 1.0 or 1.1. I also used wireshark for some analysis and didn't notice multiple transmissions for post.
Am I missing something fundamental or is this an error in the text? | false | 2,339,742 | 0.066568 | 0 | 0 | 1 | Simple POST request is in single step. but when you are uploading a file, than the form is posted in multiple parts. In that case, the content type application/x-www-form-urlencoded is changed to multipart/form-data. | 0 | 372 | 0 | 2 | 2010-02-26T05:51:00.000 | python,http,post,cgi,get | HTTP POST Requests require multiple transmissions? | 1 | 2 | 3 | 2,339,754 | 0 |
0 | 0 | I am developing a project that requires a single configuration file whose data is used by multiple modules.
My question is: what is the common approach to that? Should I read the configuration file from each
of my modules (files), or is there another way to do it?
I was thinking of having a module named config.py that reads the configuration files; whenever I need a config I do import config and then something like config.data['teamsdir'] to get the 'teamsdir' property (for example).
Response: I opted for the conf.py approach since it is modular, flexible, and simple.
I can just put the configuration data directly in the file; later, if I want to read from a JSON file, an XML file, or multiple sources, I just change conf.py and make sure the data is accessed the same way.
Accepted answer: I chose Alex Martelli's response because it was the most complete, and voted up the other answers because they were good and useful too. | false | 2,348,927 | 0.197375 | 0 | 0 | 4 | The approach you describe is ok. If you want to add support for user config files, you can use execfile(os.path.expanduser("~/.yourprogram/config.py")). | 1 | 6,382 | 0 | 23 | 2010-02-27T20:53:00.000 | python,configuration-files | python single configuration file | 1 | 3 | 4 | 2,348,941 | 0
0 | 0 | I am developing a project that requires a single configuration file whose data is used by multiple modules.
My question is: what is the common approach to that? Should I read the configuration file from each
of my modules (files), or is there another way to do it?
I was thinking of having a module named config.py that reads the configuration files; whenever I need a config I do import config and then something like config.data['teamsdir'] to get the 'teamsdir' property (for example).
Response: I opted for the conf.py approach since it is modular, flexible, and simple.
I can just put the configuration data directly in the file; later, if I want to read from a JSON file, an XML file, or multiple sources, I just change conf.py and make sure the data is accessed the same way.
Accepted answer: I chose Alex Martelli's response because it was the most complete, and voted up the other answers because they were good and useful too. | true | 2,348,927 | 1.2 | 0 | 0 | 10 | I like the approach of a single config.py module whose body (when first imported) parses one or more configuration-data files and sets its own "global variables" appropriately -- though I'd favor config.teamdata over the round-about config.data['teamdata'] approach.
This assumes configuration settings are read-only once loaded (except maybe in unit-testing scenarios, where the test-code will be doing its own artificial setting of config variables to properly exercise the code-under-test) -- it basically exploits the nature of a module as the simplest Pythonic form of "singleton" (when you don't need subclassing or other features supported only by classes and not by modules, of course).
"One or more" configuration files (e.g. first one somewhere in /etc for general default settings, then one under /usr/local for site-specific overrides thereof, then again possibly one in the user's home directory for user specific settings) is a common and useful pattern. | 1 | 6,382 | 0 | 23 | 2010-02-27T20:53:00.000 | python,configuration-files | python single configuration file | 1 | 3 | 4 | 2,349,182 | 0 |
0 | 0 | I am developing a project that requires a single configuration file whose data is used by multiple modules.
My question is: what is the common approach to that? Should I read the configuration file from each
of my modules (files), or is there another way to do it?
I was thinking of having a module named config.py that reads the configuration files; whenever I need a config I do import config and then something like config.data['teamsdir'] to get the 'teamsdir' property (for example).
Response: I opted for the conf.py approach since it is modular, flexible, and simple.
I can just put the configuration data directly in the file; later, if I want to read from a JSON file, an XML file, or multiple sources, I just change conf.py and make sure the data is accessed the same way.
Accepted answer: I chose Alex Martelli's response because it was the most complete, and voted up the other answers because they were good and useful too. | false | 2,348,927 | 0.148885 | 0 | 0 | 3 | One nice approach is to parse the config file(s) into a Python object when the application starts and pass this object around to all classes and modules requiring access to the configuration.
This may save a lot of time parsing the config. | 1 | 6,382 | 0 | 23 | 2010-02-27T20:53:00.000 | python,configuration-files | python single configuration file | 1 | 3 | 4 | 2,349,159 | 0 |
0 | 0 | I know a little of dom, and would like to learn about ElementTree. Python 2.6 has a somewhat older implementation of ElementTree, but still usable. However, it looks like it comes with two different classes: xml.etree.ElementTree and xml.etree.cElementTree. Would someone please be so kind to enlighten me with their differences? Thank you. | false | 2,351,694 | 0.158649 | 0 | 0 | 4 | ElementTree is implemented in python while cElementTree is implemented in C. Thus cElementTree will be faster, but also not available where you don't have access to C, such as in Jython or IronPython or on Google App Engine.
Functionally, they should be equivalent. | 0 | 14,861 | 0 | 24 | 2010-02-28T16:32:00.000 | python,xml | What are the Difference between cElementtree and ElementTree? | 1 | 2 | 5 | 2,351,710 | 0 |
0 | 0 | I know a little of dom, and would like to learn about ElementTree. Python 2.6 has a somewhat older implementation of ElementTree, but still usable. However, it looks like it comes with two different classes: xml.etree.ElementTree and xml.etree.cElementTree. Would someone please be so kind to enlighten me with their differences? Thank you. | true | 2,351,694 | 1.2 | 0 | 0 | 31 | It is the same library (same API, same features) but ElementTree is implemented in Python and cElementTree is implemented in C.
If you can, use the C implementation because it is optimized for fast parsing and low memory use, and is 15-20 times faster than the Python implementation.
Use the Python version if you are in a limited environment (C library loading not allowed). | 0 | 14,861 | 0 | 24 | 2010-02-28T16:32:00.000 | python,xml | What are the Difference between cElementtree and ElementTree? | 1 | 2 | 5 | 2,351,707 | 0 |
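The usual idiom for getting the fast version where it's available:

try:
    import xml.etree.cElementTree as ET   # C implementation
except ImportError:
    import xml.etree.ElementTree as ET    # pure-Python fallback

tree = ET.parse('example.xml')   # identical API either way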
1 | 0 | I'm currently trying to scrape a website that has fairly poorly-formatted HTML (often missing closing tags, no use of classes or ids so it's incredibly difficult to go straight to the element you want, etc.). I've been using BeautifulSoup with some success so far but every once in a while (though quite rarely), I run into a page where BeautifulSoup creates the HTML tree a bit differently from (for example) Firefox or Webkit. While this is understandable as the formatting of the HTML leaves this ambiguous, if I were able to get the same parse tree as Firefox or Webkit produces I would be able to parse things much more easily.
The problems are usually something like the site opens a <b> tag twice and when BeautifulSoup sees the second <b> tag, it immediately closes the first while Firefox and Webkit nest the <b> tags.
Is there a web scraping library for Python (or even any other language (I'm getting desperate)) that can reproduce the parse tree generated by Firefox or WebKit (or at least get closer than BeautifulSoup in cases of ambiguity). | false | 2,397,295 | 0.019997 | 0 | 0 | 1 | Well, WebKit is open source so you could use its own parser (in the WebCore component), if any language is acceptable | 0 | 4,506 | 0 | 9 | 2010-03-07T18:07:00.000 | python,firefox,webkit,web-scraping | Web scraping with Python | 1 | 1 | 10 | 2,397,311 | 0 |
0 | 0 | I am trying to make a simple IRC client in Python (as kind of a project while I learn the language).
I have a loop that I use to receive and parse what the IRC server sends me, but if I use raw_input to input stuff, it stops the loop dead in its tracks until I input something (obviously).
How can I input something without the loop stopping?
(I don't think I need to post the code, I just want to input something without the while 1: loop stopping.)
I'm on Windows. | false | 2,408,560 | 0.028564 | 0 | 0 | 2 | I'd do what Mickey Chan said, but I'd use unicurses instead of normal curses.
Unicurses is universal (works on all or at least almost all operating systems) | 0 | 93,857 | 0 | 73 | 2010-03-09T11:17:00.000 | python,windows,input | Non-blocking console input? | 1 | 1 | 14 | 53,794,715 | 0 |
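Another common route on Windows, distinct from the curses one above, is the stdlib msvcrt module; a sketch that polls the keyboard from inside the existing loop (key handling simplified):

import time
import msvcrt

line = ''
while True:
    # ... service the IRC socket here, non-blocking ...
    while msvcrt.kbhit():          # a keypress is waiting
        ch = msvcrt.getche()       # read one character, echoing it
        if ch == '\r':
            print '\ninput was:', line
            line = ''
        else:
            line += ch
    time.sleep(0.05)               # don't spin the CPU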
0 | 0 | Say there are two empty Queues. Is there a way to get an item from the queue that gets it first?
So I have a queue of high anonymous proxies, queues of anonymous and transparent ones. Some threads may need only high anon. proxies, while others may accept both high anon. and just anon. proxies. That's why I can't put them all to a single queue. | true | 2,411,306 | 1.2 | 0 | 0 | 0 | If I had this problem (and "polling", i.e. trying each queue alternately with short timeouts, was unacceptable -- it usually is, being very wasteful of CPU time etc), I would tackle it by designing a "multiqueue" object -- one with multiple condition variables, one per "subqueue" and an overall one. A put to any subqueue would signal that subqueue's specific condition variable as well as the overall one; a get from a specific subqueue would only wait on its specific condition variable, but there would also be a "get from any subqueue" which waits on the overall condition variable instead. (If more combinations than "get from this specific subqueue" or "get from any subqueue" need to be supported, just as many condition variables as combinations to support would be needed).
It would be much simpler to code if get and put were reduced to their bare bones (no timeouts, no no-waits, etc) and all subqueues used a single overall mutex (very small overhead wrt many mutexes, and much easier to code in a deadlock-free way;-). The subqueues could be exposed as "simplified queue-like duckies" to existing code which assumes it's dealing with a plain old queue (e.g. the multiqueue could support indexing to return proxy objects for the purpose).
With these assumptions, it wouldn't be much code, though it would be exceedingly tricky to write and inspect for correctness (alas, testing is of limited use when very subtle threading code is in play) -- I can't take the time for that right now, though I'd be glad to give it a try tonight (8 hours from now or so) if the assumptions are roughly correct and no other preferable answer has surfaced. | 1 | 119 | 0 | 0 | 2010-03-09T18:01:00.000 | python,multithreading | How to get an item from a set of Queues? | 1 | 2 | 2 | 2,411,865 | 0 |
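In that spirit, a bare-bones, untested sketch of the single-mutex design (no timeouts or no-wait variants; subqueues are addressed by index):

import threading
from collections import deque

class MultiQueue(object):
    def __init__(self, n):
        self._lock = threading.Lock()
        self._queues = [deque() for _ in range(n)]
        # one condition per subqueue plus an overall one, all on the same mutex
        self._conds = [threading.Condition(self._lock) for _ in range(n)]
        self._any = threading.Condition(self._lock)

    def put(self, i, item):
        with self._lock:
            self._queues[i].append(item)
            self._conds[i].notify()
            self._any.notify()

    def get(self, i):               # wait on one specific subqueue
        with self._conds[i]:
            while not self._queues[i]:
                self._conds[i].wait()
            return self._queues[i].popleft()

    def get_any(self):              # wait until any subqueue has an item
        with self._any:
            while True:
                for q in self._queues:
                    if q:
                        return q.popleft()
                self._any.wait()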
0 | 0 | Say there are two empty Queues. Is there a way to get an item from the queue that gets it first?
So I have a queue of high anonymous proxies, queues of anonymous and transparent ones. Some threads may need only high anon. proxies, while others may accept both high anon. and just anon. proxies. That's why I can't put them all to a single queue. | false | 2,411,306 | 0 | 0 | 0 | 0 | You could check both queues in turn, each time using a short timeout. That way you would most likely read from the first queue that receives data. However, this solution is prone to race conditions if you will be getting many items on a regular basis.
If that is the case, do you have a good reason for not just writing data to one queue? | 1 | 119 | 0 | 0 | 2010-03-09T18:01:00.000 | python,multithreading | How to get an item from a set of Queues? | 1 | 2 | 2 | 2,411,355 | 0 |
0 | 0 | A search for "python" and "xml" returns a variety of libraries for combining the two.
This list is probably faulty:
xml.dom
xml.etree
xml.sax
xml.parsers.expat
PyXML
beautifulsoup?
HTMLParser
htmllib
sgmllib
Be nice if someone can offer a quick summary of when to use which, and why. | false | 2,430,423 | 0.197375 | 0 | 0 | 4 | I find xml.etree essentially sufficient for everything, except for BeautifulSoup if I ever need to parse broken XML (not a common problem, differently from broken HTML, which BeautifulSoup also helps with and is everywhere): it has reasonable support for reading entire XML docs in memory, navigating them, creating them, incrementally-parsing large docs. lxml supports the same interface, and is generally faster -- useful to push performance when you can afford to install third party Python extensions (e.g. on App Engine you can't -- but xml.etree is still there, so you can run exactly the same code). lxml also has more features, and offers BeautifulSoup too.
The other libs you mention mimic APIs designed for very different languages, and in general I see no reason to contort Python into those gyrations. If you have very specific needs such as support for xslt, various kinds of validations, etc, it may be worth looking around for other libraries yet, but I haven't had such needs in a long time so I'm not current wrt the offerings for them. | 0 | 803 | 0 | 8 | 2010-03-12T04:02:00.000 | python,xml | Which XML library for what purposes? | 1 | 3 | 4 | 2,430,575 | 0 |
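For the incremental-parsing case mentioned, the etree idiom (identical under cElementTree or lxml.etree; the file and tag names are placeholders) is roughly:

import xml.etree.cElementTree as ET

# stream a large document without holding the whole tree in memory
for event, elem in ET.iterparse('huge.xml'):
    if elem.tag == 'record':
        print elem.findtext('name')
        elem.clear()               # discard what we've already handled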
0 | 0 | A search for "python" and "xml" returns a variety of libraries for combining the two.
This list is probably faulty:
xml.dom
xml.etree
xml.sax
xml.parsers.expat
PyXML
beautifulsoup?
HTMLParser
htmllib
sgmllib
Be nice if someone can offer a quick summary of when to use which, and why. | true | 2,430,423 | 1.2 | 0 | 0 | 6 | The DOM/SAX divide is a basic one. It applies not just to python since DOM and SAX are cross-language.
DOM: read the whole document into memory and manipulate it.
Good for:
complex relationships across tags in the markup
small intricate XML documents
Cautions:
Easy to use excessive memory
SAX: parse the document while you read it. Good for:
Long documents or open ended streams
places where memory is a constraint
Cautions:
You'll need to code a stateful parser, which can be tricky
beautifulsoup:
Great for HTML or not-quite-well-formed markup. Easy to use and fast. Good for screen scraping, etc. It can work with markup where the XML-based ones would just throw an error saying the markup is incorrect.
Most of the rest I haven't used, but I don't think there are hard and fast rules about when to use which. Just your standard considerations: who is going to maintain the code, which APIs do you find easiest to use, how well do they work, etc.
In general, for basic needs, it's nice to use the standard library modules since they are "standard" and thus available and well known. However, if you need to dig deep into something, almost always there are newer nonstandard modules with superior functionality outside of the standard library. | 0 | 803 | 0 | 8 | 2010-03-12T04:02:00.000 | python,xml | Which XML library for what purposes? | 1 | 3 | 4 | 2,430,541 | 0 |
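To make the "stateful parser" point concrete, a tiny SAX sketch that counts one kind of tag (the file and tag names are placeholders):

import xml.sax

class ItemCounter(xml.sax.ContentHandler):
    def __init__(self):
        xml.sax.ContentHandler.__init__(self)
        self.count = 0
    def startElement(self, name, attrs):
        if name == 'item':   # the "state" lives on the handler, not in a tree
            self.count += 1

handler = ItemCounter()
xml.sax.parse('feed.xml', handler)
print handler.count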
0 | 0 | A search for "python" and "xml" returns a variety of libraries for combining the two.
This list is probably faulty:
xml.dom
xml.etree
xml.sax
xml.parsers.expat
PyXML
beautifulsoup?
HTMLParser
htmllib
sgmllib
Be nice if someone can offer a quick summary of when to use which, and why. | false | 2,430,423 | 0.049958 | 0 | 0 | 1 | For many problems you can get by with the xml. It has the major advantage of being part of the standard library. This means that it is pre-installed on almost every system and that the interface will be static. It is not the best, or the fastest, but it is there.
For everything else there is lxml. Specifically, lxml is best for parsing broken HTML, xHTML, or suspect feeds. It uses libxml2 and libxslt to handle XPath, XSLT, and EXSLT. The tutorial is clear and the interface is simple and straightforward. The rest of the libraries mentioned exist because lxml was not available in its current form.
This is my opinion. | 0 | 803 | 0 | 8 | 2010-03-12T04:02:00.000 | python,xml | Which XML library for what purposes? | 1 | 3 | 4 | 2,430,695 | 0 |
0 | 0 | We have a server in Python and a client + web service in Ruby. It works only if the file from the URL is less than 800 KB. It seems like "socket.puts data" in the client works, but "output = socket.gets" does not. I think the problem is on the Python side. For big files the tests get "Connection reset by peer". Is there a default buffer-size variable somewhere in Python? | false | 2,435,294 | 0 | 0 | 0 | 0 | Could you add a little more information and code to your example?
Are you thinking about sock.recv_into() which takes a buffer and buffer size as arguments? Alternately, are you hitting a timeout issue by failing to have a keepalive on the Ruby side?
Guessing in advance of knowledge. | 0 | 183 | 0 | 0 | 2010-03-12T19:34:00.000 | python,ruby,client,size | File size in Python server | 1 | 1 | 1 | 2,435,731 | 0 |
0 | 0 | I have an action /json that returns JSON from the server.
Unfortunately in IE, the browser likes to cache this JSON.
How can I make it so that this action doesn't cache? | false | 2,439,987 | 0.049958 | 0 | 0 | 1 | The jQuery library has pretty nice ajax functions, and settings to control them. One of them is called "cache" and it will automatically append a random number to the query that essentially forces the browser to not cache the page. This can be set along with the parameter "dataType", which can be set to "json" to make the ajax request get json data. I've been using this in my code and haven't had a problem with IE.
Hope this helps | 1 | 1,603 | 0 | 2 | 2010-03-13T20:54:00.000 | python,internet-explorer,caching,pylons | Disable browser caching in pylons | 1 | 1 | 4 | 3,907,139 | 0 |
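The jQuery cache option fixes it from the client side; if you also want the Pylons action itself to forbid caching, setting response headers in the controller is the server-side counterpart (a sketch, assuming the usual Pylons thread-local response object inside a controller action):

from pylons import response

def json(self):  # a controller action, sketched out of context
    response.headers['Cache-Control'] = 'no-cache, no-store, must-revalidate'
    response.headers['Pragma'] = 'no-cache'
    return '{"ok": true}'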
0 | 0 | I am using urllib2 to interact with a website that sends back multiple Set-Cookie headers. However, the response header dictionary only contains one - it seems the duplicate keys are overriding each other.
Is there a way to access duplicate headers with urllib2? | false | 2,454,494 | 0 | 0 | 0 | 0 | set-cookie is different though. From RFC 6265:
Origin servers SHOULD NOT fold multiple Set-Cookie header fields into
a single header field. The usual mechanism for folding HTTP headers
fields (i.e., as defined in [RFC2616]) might change the semantics of
the Set-Cookie header field because the %x2C (",") character is used
by Set-Cookie in a way that conflicts with such folding.
In theory then, this looks like a bug. | 0 | 3,778 | 0 | 3 | 2010-03-16T13:06:00.000 | python,header,urllib2,setcookie | urllib2 multiple Set-Cookie headers in response | 1 | 1 | 3 | 39,896,162 | 0 |
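For what it's worth, on Python 2 the headers object urllib2 hands back (a mimetools-style message) does still hold the duplicates; getheaders returns them all:

import urllib2

response = urllib2.urlopen('http://example.com/login')   # placeholder URL
for cookie in response.info().getheaders('Set-Cookie'):  # a list, one per header
    print cookie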
0 | 0 | How would I download files (video) with Python using wget and save them locally? There will be a bunch of files, so how do I know that one file is downloaded so as to automatically start downloading another one?
Thanks. | false | 2,467,609 | -1 | 0 | 0 | -6 | No reason to use python. Avoid writing a shell script in Python and go with something like bash or an equivalent. | 0 | 92,663 | 1 | 32 | 2010-03-18T04:55:00.000 | python,linux | Using wget via Python | 1 | 2 | 6 | 2,467,717 | 0 |
0 | 0 | How would I download files (video) with Python using wget and save them locally? There will be a bunch of files, so how do I know that one file is downloaded so as to automatically start downloading another one?
Thanks. | false | 2,467,609 | 1 | 0 | 0 | 9 | No reason to use os.system. Avoid writing a shell script in Python and go with something like urllib.urlretrieve or an equivalent.
Edit... to answer the second part of your question, you can set up a thread pool using the standard library Queue class. Since you're doing a lot of downloading, the GIL shouldn't be a problem. Generate a list of the URLs you wish to download and feed them to your work queue. It will handle pushing requests to worker threads.
I'm waiting for a database update to complete, so I put this together real quick.
#!/usr/bin/python
import sys
import threading
import urllib
from Queue import Queue
import logging

class Downloader(threading.Thread):
    def __init__(self, queue):
        super(Downloader, self).__init__()
        self.queue = queue

    def run(self):
        while True:
            download_url, save_as = self.queue.get()
            # sentinel: a (None, None) pair tells the worker to exit
            if not download_url:
                return
            try:
                urllib.urlretrieve(download_url, filename=save_as)
            except Exception, e:
                logging.warn("error downloading %s: %s" % (download_url, e))

if __name__ == '__main__':
    queue = Queue()
    threads = []
    for i in xrange(5):
        threads.append(Downloader(queue))
        threads[-1].start()

    for line in sys.stdin:
        url = line.strip()
        filename = url.split('/')[-1]
        print "Download %s as %s" % (url, filename)
        queue.put((url, filename))

    # if we get here, stdin has gotten the ^D
    print "Finishing current downloads"
    for i in xrange(5):
        queue.put((None, None))
0 | 0 | I have a python server that listens on a couple sockets. At startup, I try to connect to these sockets before listening, so I can be sure that nothing else is already using that port. This adds about three seconds to my server's startup (which is about .54 seconds without the test) and I'd like to trim it down. Since I'm only testing localhost, I think a timeout of about 50 milliseconds is more than ample for that. Unfortunately, the socket.setdefaulttimeout(50) method doesn't seem to work for some reason.
How I can trim this down? | false | 2,470,971 | 0.057081 | 1 | 0 | 2 | Are you on Linux? If so, perhaps your application could run netstat -lant (or netstat -lanu if you're using UDP) and see what ports are in use. This should be faster... | 0 | 46,722 | 0 | 29 | 2010-03-18T15:21:00.000 | python,sockets | Fast way to test if a port is in use using Python | 1 | 3 | 7 | 2,471,078 | 0 |
0 | 0 | I have a python server that listens on a couple sockets. At startup, I try to connect to these sockets before listening, so I can be sure that nothing else is already using that port. This adds about three seconds to my server's startup (which is about .54 seconds without the test) and I'd like to trim it down. Since I'm only testing localhost, I think a timeout of about 50 milliseconds is more than ample for that. Unfortunately, the socket.setdefaulttimeout(50) method doesn't seem to work for some reason.
How I can trim this down? | false | 2,470,971 | 0.057081 | 1 | 0 | 2 | Simon B's answer is the way to go - don't check anything, just try to bind and handle the error case if it's already in use.
Otherwise you're in a race condition where some other app can grab the port in between your check that it's free and your subsequent attempt to bind to it. That means you still have to handle the possibility that your call to bind will fail, so checking in advance achieved nothing. | 0 | 46,722 | 0 | 29 | 2010-03-18T15:21:00.000 | python,sockets | Fast way to test if a port is in use using Python | 1 | 3 | 7 | 2,471,762 | 0 |
0 | 0 | I have a python server that listens on a couple sockets. At startup, I try to connect to these sockets before listening, so I can be sure that nothing else is already using that port. This adds about three seconds to my server's startup (which is about .54 seconds without the test) and I'd like to trim it down. Since I'm only testing localhost, I think a timeout of about 50 milliseconds is more than ample for that. Unfortunately, the socket.setdefaulttimeout(50) method doesn't seem to work for some reason.
How I can trim this down? | true | 2,470,971 | 1.2 | 1 | 0 | 14 | How about just trying to bind to the port you want, and handle the error case if the port is occupied?
(If the issue is that you might start the same service twice then don't look at open ports.)
This is the reasonable way also to avoid causing a race-condition, as @eemz said in another answer. | 0 | 46,722 | 0 | 29 | 2010-03-18T15:21:00.000 | python,sockets | Fast way to test if a port is in use using Python | 1 | 3 | 7 | 2,471,039 | 0 |
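A sketch of that bind-and-handle approach:

import errno
import socket

def bind_or_die(port):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.bind(('', port))
    except socket.error, e:
        if e.args[0] == errno.EADDRINUSE:
            raise SystemExit('port %d is already in use' % port)
        raise
    s.listen(5)
    return s   # keep using this very socket: releasing it would reopen the race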
0 | 0 | I'm trying to write a multi-protocol bot (jabber/irc) that reads messages from a fifo file (one-liners mostly) and then sends them to an IRC channel and to jabber contacts. So far, I managed to create two factories to connect to jabber and irc, and they seem to be working.
However, I have a problem with reading the fifo file - I have no idea how to read it in a loop (open file, read line, close file, jump to open file and so on) outside of the reactor loop to get the data I need to send, and then get that data into the reactor loop for sending over both protocols. I've been looking for information on the best way to do it, but I'm totally lost in the dark. Any suggestion/help would be highly appreciated.
Thanks in advance! | false | 2,476,234 | 0.099668 | 1 | 0 | 1 | The fifo is the problem. Read from a socket instead. This will fit into the Twisted event-driven model much better. Trying to do things outside the control of the reactor is usually the wrong approach.
---- update based on feedback that the fifo is an external constraint, not avoidable ----
OK, the central issue is that you can not write code in the main (and only) thread of your Twisted app that makes blocking read calls to a fifo. That will cause the whole app to stall if there is nothing to read. So you're either looking at reading the fifo asynchronously, creating a separate thread to read it, or splitting the app in two.
The last option is the simplest - modify the Twisted app so that it listens on a socket and write a separate little "forwarder" app that runs in a simple loop, reading the fifo and writing everything it hears to the socket. | 0 | 1,766 | 0 | 4 | 2010-03-19T09:45:00.000 | python,twisted,xmpp,irc,fifo | Python (Twisted) - reading from fifo and sending read data to multiple protocols | 1 | 2 | 2 | 2,476,445 | 0 |
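A sketch of that little forwarder (the fifo path, host and port are placeholders); opening the fifo blocks until a writer shows up, which is fine since this process has nothing else to do:

import socket

sock = socket.create_connection(('localhost', 4321))  # the Twisted app listens here

while True:
    fifo = open('/tmp/botfifo')
    while True:
        line = fifo.readline()
        if not line:           # every writer closed the fifo
            break
        sock.sendall(line)
    fifo.close()               # reopen and keep waiting for the next writer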
0 | 0 | I'm trying to write a multi-protocol bot (jabber/irc) that reads messages from a fifo file (one-liners mostly) and then sends them to an IRC channel and to jabber contacts. So far, I managed to create two factories to connect to jabber and irc, and they seem to be working.
However, I have a problem with reading the fifo file - I have no idea how to read it in a loop (open file, read line, close file, jump to open file and so on) outside of the reactor loop to get the data I need to send, and then get that data into the reactor loop for sending over both protocols. I've been looking for information on the best way to do it, but I'm totally lost in the dark. Any suggestion/help would be highly appreciated.
Thanks in advance! | false | 2,476,234 | 0.291313 | 1 | 0 | 3 | You can read/write on a file descriptor without blocking the reactor, just as you do with sockets; after all, don't sockets use file descriptors?
In your case, create a class that implements twisted.internet.interfaces.IReadDescriptor and add it to the reactor using twisted.internet.interfaces.IReactorFDSet.addReader. For an example of an IReadDescriptor implementation, look at twisted.internet.tcp.Connection.
I cannot be more specific because I have never done it myself, but I hope this could be a starting point. | 0 | 1,766 | 0 | 4 | 2010-03-19T09:45:00.000 | python,twisted,xmpp,irc,fifo | Python (Twisted) - reading from fifo and sending read data to multiple protocols | 1 | 2 | 2 | 2,478,970 | 0
0 | 0 | I have installed boto like so: python setup.py install; and then when I launch my python script (that imports modules from boto) in a shell, an error like this shows up: ImportError: No module named boto.s3.connection
How can I fix this? | false | 2,481,417 | 0.462117 | 0 | 0 | 5 | I fixed the same problem on Ubuntu using apt-get install python-boto | 0 | 3,095 | 0 | 1 | 2010-03-20T00:50:00.000 | python,boto | Problem importing modules from boto | 1 | 2 | 2 | 6,861,411 | 0
0 | 0 | I have installed boto like so: python setup.py install; and then when I launch my python script (that imports modules from boto) in a shell, an error like this shows up: ImportError: No module named boto.s3.connection
How can I fix this? | false | 2,481,417 | 0.197375 | 0 | 0 | 2 | This can happen if the Python script does not use your default python executable. Check the shebang on the first line of the script (on *nix) or the .py file association (on Windows) and run that against setup.py instead. | 0 | 3,095 | 0 | 1 | 2010-03-20T00:50:00.000 | python,boto | Problem importing modules from boto | 1 | 2 | 2 | 2,481,420 | 0
1 | 0 | I need a recommendation for a pythonic library that can marshal Python objects to XML (to a file, say).
I need to be able read that XML later on with Java (JAXB) and unmarshall it.
I know JAXB has some issues that make it not play nice with .NET XML libraries, so a recommendation for something that actually works would be great. | true | 2,492,490 | 1.2 | 0 | 0 | 1 | As Ignacio says, XML is XML. On the python side, I recommend using lxml, unless you have more specific needs that are better met by another library. If you are restricted to the standard library, look at ElementTree or cElementTree, which are also excellent, and which inspired (and are functionally mostly equivalent to) lxml.etree.
Edit: On closer look, it seems you are not just looking for XML, but for XML representations of objects. For this, check out lxml.objectify, or Amara. I haven't tried using them for interoperability with Java, but they're worth a try. If you're just looking for a way to do data exchange, you might also try custom JSON objects. | 0 | 475 | 0 | 2 | 2010-03-22T13:23:00.000 | java,python,xml,marshalling,interop | Python XML + Java XML interoperability | 1 | 1 | 2 | 2,492,599 | 0 |
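A minimal etree sketch of writing an object out as XML that a JAXB-side schema could be written against (the element names are made up):

import xml.etree.ElementTree as ET

def person_to_xml(name, age):
    root = ET.Element('person')
    ET.SubElement(root, 'name').text = name
    ET.SubElement(root, 'age').text = str(age)
    return ET.ElementTree(root)

person_to_xml('Ada', 36).write('person.xml', encoding='utf-8')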
0 | 0 | Hi,
I have a device that exposes a telnet interface which you can log into using a username and password and then manipulate the working of the device.
I have to write a C program that hides the telnet aspect from the client and instead provides an interface for the user to control the device.
What would be a good way to proceed? I tried writing a simple socket program but it stops at the login prompt. My guess is that I am not following the TCP protocol.
Has anyone attempted this, is there an opensource library out there to do this?
Thanks
Addition:
Eventually i wish to expose it through a web api/webservice. The platform is linux. | false | 2,519,598 | 0.066568 | 0 | 0 | 3 | While telnet is almost just a socket tied to a terminal it's not quite. I believe that there can be some control characters that get passed shortly after the connection is made. If your device is sending some unexpected control data then it may be confusing your program.
If you haven't already, go download wireshark (or tshark or tcpdump) and monitor your connection. Wireshark (formerly ethereal) is cross platform and pretty easy to use for simple stuff. Filter with tcp.port == 23 | 0 | 18,409 | 0 | 9 | 2010-03-25T21:37:00.000 | python,c,telnet | Writing a telnet client | 1 | 3 | 9 | 2,525,924 | 0 |
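One more Python-specific option: the stdlib telnetlib client already performs the telnet option negotiation (those unexpected control characters) that a bare socket stumbles over; the device address, prompts and command below are placeholders:

import telnetlib

tn = telnetlib.Telnet('192.168.1.50')
tn.read_until('login: ')
tn.write('admin\n')
tn.read_until('Password: ')
tn.write('secret\n')
tn.write('status\n')            # some device command
print tn.read_until('> ')       # read up to the next prompt
tn.close()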
0 | 0 | Hi,
I have a device that exposes a telnet interface which you can log into using a username and password and then manipulate the working of the device.
I have to write a C program that hides the telnet aspect from the client and instead provides an interface for the user to control the device.
What would be a good way to proceed? I tried writing a simple socket program but it stops at the login prompt. My guess is that I am not following the TCP protocol.
Has anyone attempted this, is there an opensource library out there to do this?
Thanks
Addition:
Eventually i wish to expose it through a web api/webservice. The platform is linux. | false | 2,519,598 | 0.088656 | 0 | 0 | 4 | telnet's protocol is pretty straightforward... you just create a TCP connection, and send and receive ASCII data. That's pretty much it.
So all you really need to do is create a program that connects via TCP, then reads characters from the TCP socket and parses it to update the GUI, and/or writes characters to the socket in response to the user manipulating controls in the GUI.
How you would implement that depends a lot on what software you are using to construct your interface. On the TCP side, a simple event loop around select() would be sufficient. | 0 | 18,409 | 0 | 9 | 2010-03-25T21:37:00.000 | python,c,telnet | Writing a telnet client | 1 | 3 | 9 | 2,519,786 | 0 |
0 | 0 | Hi,
I have a device that exposes a telnet interface which you can log into using a username and password and then manipulate the working of the device.
I have to write a C program that hides the telnet aspect from the client and instead provides an interface for the user to control the device.
What would be a good way to proceed? I tried writing a simple socket program but it stops at the login prompt. My guess is that I am not following the TCP protocol.
Has anyone attempted this, is there an opensource library out there to do this?
Thanks
Addition:
Eventually i wish to expose it through a web api/webservice. The platform is linux. | false | 2,519,598 | 0.022219 | 0 | 0 | 1 | Unless the application is trivial, a better starting point would be to figure out how you're going to create the GUI. This is a bigger question and will have more impact on your project than how exactly you telnet into the device. You mention C at first, but then start talking about Python, which makes me believe you are relatively flexible in the matter.
Once you are set on a language/platform, then look for a telnet library -- you should find something reasonable already implemented. | 0 | 18,409 | 0 | 9 | 2010-03-25T21:37:00.000 | python,c,telnet | Writing a telnet client | 1 | 3 | 9 | 2,521,858 | 0 |
0 | 0 | This whole topic is way out of my depth, so forgive my imprecise question, but I have two computers both connected to one LAN.
What I want is to be able to communicate one string between the two, by running a python script on the first (the host) where the string will originate, and a second on the client computer to retrieve the string.
What is the most efficient way for an inexperienced programmer like me to achieve this? | false | 2,534,527 | -0.132549 | 0 | 0 | -2 | File share and polling the filesystem every minute. No joke. Of course, it depends on what the requirements for your application are and what lag is acceptable, but in practice using file shares is quite common. | 0 | 996 | 0 | 3 | 2010-03-28T20:55:00.000 | python | Python inter-computer communication | 1 | 2 | 3 | 2,534,778 | 0
0 | 0 | This whole topic is way out of my depth, so forgive my imprecise question, but I have two computers both connected to one LAN.
What I want is to be able to communicate one string between the two, by running a python script on the first (the host) where the string will originate, and a second on the client computer to retrieve the string.
What is the most efficient way for an inexperienced programmer like me to achieve this? | false | 2,534,527 | 0.26052 | 0 | 0 | 4 | First, lets get the nomenclature straight. Usually the part that initiate the communication is the client, the parts that is waiting for a connection is a server, which then will receive the data from the client and generate a response. From your question, the "host" is the client and the "client" seems to be the server.
Then you have to decide how to transfer the data. You can use straight sockets, in which case you can use SocketServer, or you can rely on an existing protocol, like HTTP or XML-RPC, in which case you will find ready to use library packages with plenty of examples (e.g. xmlrpclib and SimpleXMLRPCServer) | 0 | 996 | 0 | 3 | 2010-03-28T20:55:00.000 | python | Python inter-computer communication | 1 | 2 | 3 | 2,534,582 | 0 |
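To make the xmlrpclib/SimpleXMLRPCServer route concrete, a sketch of each side (the port and the server's LAN address are placeholders):

# server side -- runs on the machine where the string originates
from SimpleXMLRPCServer import SimpleXMLRPCServer

message = 'hello from the host machine'
server = SimpleXMLRPCServer(('0.0.0.0', 8000))
server.register_function(lambda: message, 'get_message')
server.serve_forever()

# client side -- run this on the other machine:
# import xmlrpclib
# print xmlrpclib.ServerProxy('http://192.168.1.5:8000').get_message()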