Dataset columns (name, type, observed range):
Web Development: int64, 0 to 1
Data Science and Machine Learning: int64, 0 to 1
Question: string, lengths 28 to 6.1k
is_accepted: bool, 2 classes
Q_Id: int64, 337 to 51.9M
Score: float64, -1 to 1.2
Other: int64, 0 to 1
Database and SQL: int64, 0 to 1
Users Score: int64, -8 to 412
Answer: string, lengths 14 to 7k
Python Basics and Environment: int64, 0 to 1
ViewCount: int64, 13 to 1.34M
System Administration and DevOps: int64, 0 to 1
Q_Score: int64, 0 to 1.53k
CreationDate: string, lengths 23 to 23
Tags: string, lengths 6 to 90
Title: string, lengths 15 to 149
Networking and APIs: int64, 1 to 1
Available Count: int64, 1 to 12
AnswerCount: int64, 1 to 28
A_Id: int64, 635 to 72.5M
GUI and Desktop Applications: int64, 0 to 1
0
0
I'd like to write a python library to wrap a REST-style API offered by a particular Web service. Does anyone know of any good learning resources for such work, preferably aimed at intermediate Python programmers? I'd like a good article on the subject, but I'd settle for nice, clear code examples. CLARIFICATION: What I'm looking to do is write a Python client to interact with a Web service -- something to construct HTTP requests and parse XML/JSON responses, all wrapped up in Python objects.
false
517,237
0
0
0
0
You should take a look at PyFacebook. This is a Python wrapper for the Facebook API, and it's one of the most nicely done APIs I have ever used.
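As a rough illustration of the shape such a wrapper usually takes (the base URL, endpoint, and auth header here are invented for the example, not any real service's API):

```python
import json
import urllib.parse
import urllib.request


class ApiClient:
    """Tiny sketch of a REST-style wrapper: build the URL, issue the HTTP
    request, and parse the JSON body into plain Python objects."""

    def __init__(self, base_url, api_key=None):
        self.base_url = base_url.rstrip("/")
        self.api_key = api_key

    def _get(self, path, **params):
        query = urllib.parse.urlencode(params)
        url = "%s/%s?%s" % (self.base_url, path.lstrip("/"), query)
        request = urllib.request.Request(url)
        if self.api_key:
            request.add_header("Authorization", "Bearer %s" % self.api_key)
        with urllib.request.urlopen(request, timeout=10) as response:
            return json.loads(response.read().decode("utf-8"))

    def list_items(self, page=1):
        # Hypothetical endpoint; a real wrapper would mirror the service's docs.
        return self._get("items", page=page)
```

Usage would look like `client = ApiClient("https://api.example.com", api_key="...")` followed by `client.list_items()`, which returns dicts and lists decoded from the JSON response.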
1
15,114
0
21
2009-02-05T18:29:00.000
python,web-services,api,rest
HOWTO: Write Python API wrapper?
1
2
5
981,474
0
0
0
Hi, I'm coding a tool that searches for dirs and files. I have it searching for dirs already, but I need help making it search for files on websites. Any idea how this can be done in Python?
false
520,362
0.049958
0
0
1
You cannot get a directory listing on a website. Pedantically, HTTP has no notion of directory. Practically, WebDAV provides a directory listing verb, so you can use that if WebDAV is enabled. Otherwise, the closest thing you can do is similar to what recursive wget does: get a page, parse the HTML, look for hyperlinks (a/@href in XPath), filter out hyperlinks that do not point to URLs below the current page, and recurse into the remaining URLs. You can do further filtering, depending on your use case, such as removing the query part of the URL (anything after the first ?). When the server has a directory listing feature enabled, this gives you something usable. This also gives you something usable if the website has no directory listing but is organized in a sensible way.
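A rough sketch of that wget-style approach using only the standard library (the starting URL is a placeholder):

```python
import urllib.request
from html.parser import HTMLParser
from urllib.parse import urljoin


class LinkCollector(HTMLParser):
    """Collect href attributes from anchor tags."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


def links_below(page_url):
    html = urllib.request.urlopen(page_url, timeout=10).read().decode("utf-8", "replace")
    parser = LinkCollector()
    parser.feed(html)
    # Make links absolute and drop any query string.
    absolute = (urljoin(page_url, href).split("?")[0] for href in parser.links)
    # Keep only URLs that live "under" the page we started from.
    return sorted({url for url in absolute if url.startswith(page_url)})


# print(links_below("http://example.com/docs/"))   # placeholder URL; recurse over the result
```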
0
188
0
0
2009-02-06T13:55:00.000
python
Search Files & Dirs on Website
1
3
4
520,423
0
0
0
Hi, I'm coding a tool that searches for dirs and files. I have it searching for dirs already, but I need help making it search for files on websites. Any idea how this can be done in Python?
false
520,362
0.049958
0
0
1
You can only do this if you have permission to browse directories on the site and no default page exists.
0
188
0
0
2009-02-06T13:55:00.000
python
Search Files & Dirs on Website
1
3
4
520,397
0
0
0
Hi, I'm coding a tool that searches for dirs and files. I have it searching for dirs already, but I need help making it search for files on websites. Any idea how this can be done in Python?
false
520,362
0.049958
0
0
1
Is this tool scanning the directories of your own website (in which the tool is running), or external sites?
0
188
0
0
2009-02-06T13:55:00.000
python
Search Files & Dirs on Website
1
3
4
520,373
0
0
0
Does anyone know if there is some parameter available for programmatic search on Yahoo that allows restricting results so only links to files of a specific type (like PDF, for example) will be returned? It's possible to do that in the GUI, but how do you make it happen through the API? I'd very much appreciate sample code in Python, but any other solutions might be helpful as well.
true
522,781
1.2
1
0
0
Thank you. I found myself that something like this works OK (file type is the first argument, and query is the second): format = sys.argv[1] query = " ".join(sys.argv[2:]) srch = create_search("Web", app_id, query=query, format=format)
0
1,551
0
1
2009-02-07T00:27:00.000
python,yahoo-api,yahoo-search
how to search for specific file type with yahoo search API?
1
1
3
526,491
0
0
0
How can I accept cookies in a python script?
false
525,773
0.033321
0
0
1
There's the cookielib library. You can also implement your own cookie storage and policies; the cookies are found in the Set-Cookie header of the response (Set-Cookie: name=value), then you send them back to the server in one or more Cookie headers in the request (Cookie: name=value).
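A short sketch of the automatic route (cookielib is the Python 2 name; in Python 3 the same module is http.cookiejar, and the URL below is a placeholder):

```python
import http.cookiejar   # called cookielib in Python 2
import urllib.request

jar = http.cookiejar.CookieJar()
opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(jar))

# The opener stores cookies from Set-Cookie response headers and sends them
# back automatically on later requests to the same site.
response = opener.open("http://example.com/")   # placeholder URL
for cookie in jar:
    print(cookie.name, "=", cookie.value)
```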
0
12,543
0
11
2009-02-08T14:09:00.000
python,cookies
Accept Cookies in Python
1
2
6
525,982
0
0
0
How can I accept cookies in a python script?
false
525,773
0.033321
0
0
1
I believe you mean having a Python script that tries to speak HTTP. I suggest you use a high-level library that handles cookies automatically: pycurl, mechanize, twill, take your pick. For Nikhil Chelliah: I don't see what's not clear here. Accepting a cookie happens client-side. The server can set a cookie.
0
12,543
0
11
2009-02-08T14:09:00.000
python,cookies
Accept Cookies in Python
1
2
6
525,966
0
0
0
I need a script to add quotes around URL strings in url.txt, e.g. turning http://www.site.com/info.xx into "http://www.site.com/info.xx"
false
543,199
0
0
0
0
Write one... Perl is my favourite scripting language, but it appears you may prefer Python. Just read in the file and add \" before and after each line; this is pretty easy in Perl. This seems more like a request than a question... should this be on Stack Overflow?
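In Python the whole job is a few lines; a sketch that assumes the input file is url.txt and invents an output file name:

```python
# Read url.txt and write each non-empty line back out wrapped in double quotes.
with open("url.txt") as source, open("urls_quoted.txt", "w") as target:
    for line in source:
        url = line.strip()
        if url:
            target.write('"%s"\n' % url)
```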
0
8,426
0
2
2009-02-12T20:54:00.000
python,ruby,perl
Add Quotes in url string from file
1
1
5
543,218
0
1
0
I'm working on a simple web crawler in Python and I want to make a simple queue class, but I'm not quite sure the best way to start. I want something that holds only unique items to process, so that the crawler will only crawl each page once per script run (simply to avoid infinite looping). Can anyone give me or point me to a simple queue example that I could work from?
false
549,536
0.039979
0
0
1
Why not use a list if you need order (or even a heapq, as was formerly suggested by zacherates before a set was suggested instead) and also use a set to check for duplicates?
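A minimal sketch of that list-plus-set idea (here a deque for the FIFO part and a set for the "seen already" check):

```python
from collections import deque


class UniqueQueue:
    """FIFO queue that silently drops items it has already seen,
    so each page is crawled at most once per run."""

    def __init__(self):
        self._queue = deque()
        self._seen = set()

    def push(self, item):
        if item not in self._seen:
            self._seen.add(item)
            self._queue.append(item)

    def pop(self):
        return self._queue.popleft()   # raises IndexError when empty

    def __len__(self):
        return len(self._queue)
```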
0
2,069
0
1
2009-02-14T18:44:00.000
python,queue
Simple unique non-priority queue system
1
1
5
549,555
0
0
0
I am using libcurl to download a webpage, then I am scanning it for data and doing something with one of the links. However, once in a while the page is different than I expect, thus I extract bad data and pycurl throws an exception. I tried finding the exception name for pycurl but had no luck. Is there a way I can get the traceback to execute a function so I can dump the file, so I can look at the file input and see where my code went wrong?
false
550,804
0.132549
0
0
2
Can you catch all exceptions somewhere in the main block and use sys.exc_info() for callback information and log that to your file? exc_info() returns not just the exception type, but also the call traceback, so there should be information about what went wrong.
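A sketch of that pattern: catch everything around the parsing step, dump the offending input to a file, and log the full traceback from sys.exc_info() (file names and the failing function are stand-ins):

```python
import sys
import traceback


def process(page_bytes):
    # ... parsing and extraction would happen here ...
    raise ValueError("unexpected page layout")   # stand-in for the real failure


page = b"<html>...</html>"   # whatever was downloaded
try:
    process(page)
except Exception:
    exc_type, exc_value, exc_tb = sys.exc_info()
    with open("failed_page.html", "wb") as dump:
        dump.write(page)                      # keep the input that broke the parser
    with open("error.log", "a") as log:
        log.write("".join(traceback.format_exception(exc_type, exc_value, exc_tb)))
```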
0
992
0
2
2009-02-15T12:29:00.000
python,error-handling,pycurl
python runtime error, can dump a file?
1
1
3
550,815
0
0
0
I want the shortest possible way of representing an integer in a URL. For example, 11234 can be shortened to '2be2' using hexadecimal. Since base64 is a 64-character encoding, it should be possible to represent an integer in base64 using even fewer characters than hexadecimal. The problem is I can't figure out the cleanest way to convert an integer to base64 (and back again) using Python. The base64 module has methods for dealing with bytestrings - so maybe one solution would be to convert an integer to its binary representation as a Python string... but I'm not sure how to do that either.
false
561,486
0.053283
0
0
4
Base64 takes 4 bytes/characters to encode 3 bytes and can only encode multiples of 3 bytes (and adds padding otherwise). So representing 4 bytes (your average int) in Base64 would take 8 bytes. Encoding the same 4 bytes in hex would also take 8 bytes. So you wouldn't gain anything for a single int.
0
34,136
0
68
2009-02-18T15:25:00.000
python,url,base64
How to convert an integer to the shortest url-safe string in Python?
1
2
15
561,534
0
0
0
I want the shortest possible way of representing an integer in a URL. For example, 11234 can be shortened to '2be2' using hexadecimal. Since base64 is a 64-character encoding, it should be possible to represent an integer in base64 using even fewer characters than hexadecimal. The problem is I can't figure out the cleanest way to convert an integer to base64 (and back again) using Python. The base64 module has methods for dealing with bytestrings - so maybe one solution would be to convert an integer to its binary representation as a Python string... but I'm not sure how to do that either.
false
561,486
0.02666
0
0
2
If you are looking for a way to shorten the integer representation using base64, I think you need to look elsewhere. When you encode something with base64 it doesn't get shorter, in fact it gets longer. E.g. 11234 encoded with base64 would yield MTEyMzQ= When using base64 you have overlooked the fact that you are not converting just the digits (0-9) to a 64 character encoding. You are converting 3 bytes into 4 bytes so you are guaranteed your base64 encoded string would be 33.33% longer.
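For completeness, the mechanics the question was after (packing the integer's raw bytes rather than its decimal string, then URL-safe base64 encoding) look roughly like this in modern Python; a sketch only:

```python
import base64


def int_to_token(n):
    """Encode a non-negative int as a URL-safe base64 string without padding."""
    raw = n.to_bytes((n.bit_length() + 7) // 8 or 1, "big")
    return base64.urlsafe_b64encode(raw).rstrip(b"=").decode("ascii")


def token_to_int(token):
    raw = base64.urlsafe_b64decode(token + "=" * (-len(token) % 4))
    return int.from_bytes(raw, "big")


print(int_to_token(11234))      # 'K-I': three characters, versus '2be2' in hex
print(token_to_int("K-I"))      # 11234
```

The saving only becomes meaningful for larger integers, which is consistent with the point made in the answers above.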
0
34,136
0
68
2009-02-18T15:25:00.000
python,url,base64
How to convert an integer to the shortest url-safe string in Python?
1
2
15
561,547
0
0
0
Is it safe to use Python UUID module generated values in URLs of a webpage? I want to use those IDs as part of URLs. Are there any non-safe characters ever generated by Python UUID that shouldn't be in URLs?
false
567,324
1
0
0
9
It is good practice to always urlencode data that will be placed into URLs. Then you need not be concerned with the specifics of UUID or if it will change in the future.
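A small sketch; note that the default string form of a uuid4 is hex digits and hyphens, so quoting it is cheap insurance rather than a strict necessity:

```python
import urllib.parse
import uuid

uid = uuid.uuid4()
# quote() leaves already URL-safe characters alone, but protects you if the
# representation ever changes or you switch to another ID scheme.
path = "/items/" + urllib.parse.quote(str(uid))
print(path)
```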
0
3,987
0
4
2009-02-19T21:47:00.000
python,url,uuid
Is it safe to use Python UUID module generated values in URL's of a webpage?
1
1
3
567,347
0
1
0
I am using BeautifulStoneSoup to parse an XML document and change some attributes. I noticed that it automatically converts all XML tags to lowercase. For example, my source file has <DocData> elements, which BeautifulSoup converts to <docdata>. This appears to be causing problems since the program I am feeding my modified XML document to does not seem to accept the lowercase versions. Is there a way to prevent this behavior in BeautifulSoup?
true
567,999
1.2
0
0
4
No, that's not a built-in option. The source is pretty straightforward, though. It looks like you want to change the value of encodedName in Tag.__str__.
0
1,662
0
10
2009-02-20T01:52:00.000
python,xml,beautifulsoup
Preventing BeautifulSoup from converting my XML tags to lowercase
1
1
2
568,081
0
0
0
I am using Python to connect to an FTP server that contains a new list of data once every hour. I am only connecting once a day, and I only want to download the newest file in the directory. Is there a way to do this?
false
570,433
0.099668
0
0
1
Seems like any system that is automatically generating a file once an hour is likely to be using an automated naming scheme. Are you overthinking the problem by asking the server for the newest file instead of more easily parsing the file names? This wouldn't work in all cases, and if the directory got large it might become time-consuming to get the file listing. But it seems likely to work in most cases.
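A sketch of the name-based approach with the standard library's ftplib (host, credentials, directory, and the naming scheme are all assumptions for the example):

```python
from ftplib import FTP

# Hypothetical host, credentials and directory; the point is the name-based sort.
ftp = FTP("ftp.example.com")
ftp.login("username", "password")
ftp.cwd("/hourly-data")

names = ftp.nlst()                      # plain file listing
# Works when the naming scheme embeds a sortable timestamp,
# e.g. report-2009-02-20-17.csv sorts correctly as a string.
newest = max(names)

with open(newest, "wb") as local_file:
    ftp.retrbinary("RETR " + newest, local_file.write)
ftp.quit()
```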
0
1,672
0
3
2009-02-20T17:10:00.000
python,ftp
How can I get the newest file from an FTP server?
1
1
2
571,410
0
0
0
I have a Python web client that uses urllib2. It is easy enough to add HTTP headers to my outgoing requests. I just create a dictionary of the headers I want to add, and pass it to the Request initializer. However, other "standard" HTTP headers get added to the request as well as the custom ones I explicitly add. When I sniff the request using Wireshark, I see headers besides the ones I add myself. My question is: how do I get access to these headers? I want to log every request (including the full set of HTTP headers), and can't figure out how. Any pointers? In a nutshell: how do I get all the outgoing headers from an HTTP request created by urllib2?
false
603,856
0
0
0
0
See urllib2.py: do_request (line 1044 (1067)) and urllib2.py: do_open (line 1073). At line 293: self.addheaders = [('User-agent', client_version)] (only 'User-agent' is added by default).
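If the goal is to see the headers that actually go out on the wire rather than reading the library source, one option is the handler's debug output; a sketch using the Python 3 spelling of urllib2 (the URL is a placeholder):

```python
import urllib.request

# HTTPHandler(debuglevel=1) makes the underlying http.client connection print
# the outgoing request line and headers to stdout as the request is sent.
opener = urllib.request.build_opener(urllib.request.HTTPHandler(debuglevel=1))
opener.addheaders = [("User-agent", "my-client/0.1")]   # the opener's default list only carries User-agent
response = opener.open("http://example.com/")
print(response.status)
```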
0
14,482
0
15
2009-03-02T20:24:00.000
python,urllib2
How do you get default headers in a urllib2 Request?
1
1
8
603,916
0
0
0
I'm behind a router, and I need a simple command to discover my public IP (instead of googling "what's my ip" and clicking one of the results). Are there any standard protocols for this? I've heard about STUN but I don't know how I can use it. P.S. I'm planning on writing a short python script to do it
false
613,471
0.037482
0
0
3
Your simplest way may be to ask some server on the outside of your network. One thing to keep in mind is that different destinations may see a different address for you. The router may be multihomed. And really that's just where problems begin.
0
19,213
0
28
2009-03-05T03:18:00.000
python,ip-address,tcp
Discovering public IP programmatically
1
3
16
613,477
0
0
0
I'm behind a router, and I need a simple command to discover my public IP (instead of googling "what's my ip" and clicking one of the results). Are there any standard protocols for this? I've heard about STUN but I don't know how I can use it. P.S. I'm planning on writing a short python script to do it
false
613,471
0.012499
0
0
1
Here are a few public services that support IPv4 and IPv6:
curl http://icanhazip.com
curl http://www.trackip.net/ip
curl https://ipapi.co/ip
curl http://api6.ipify.org
curl http://www.cloudflare.com/cdn-cgi/trace
curl http://checkip.dns.he.net
The following seem to support only IPv4 at this time:
curl http://bot.whatismyipaddress.com
curl http://checkip.dyndns.org
curl http://ifconfig.me
curl http://ip-api.com
curl http://api.infoip.io/ip
It's easy to make an HTTP call programmatically, so all should be relatively easy to use, and you can try multiple different URLs in case one fails.
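From Python, hitting any one of those services is a one-liner with the standard library; for example, using one of the URLs listed above:

```python
import urllib.request

# icanhazip.com returns the caller's public address as plain text.
ip = urllib.request.urlopen("http://icanhazip.com", timeout=5).read().decode().strip()
print(ip)
```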
0
19,213
0
28
2009-03-05T03:18:00.000
python,ip-address,tcp
Discovering public IP programmatically
1
3
16
60,525,518
0
0
0
I'm behind a router, and I need a simple command to discover my public IP (instead of googling "what's my ip" and clicking one of the results). Are there any standard protocols for this? I've heard about STUN but I don't know how I can use it. P.S. I'm planning on writing a short python script to do it
false
613,471
0.037482
0
0
3
If the network has a UPnP server running on the gateway, you can talk to the gateway and ask it for your outside IP address.
0
19,213
0
28
2009-03-05T03:18:00.000
python,ip-address,tcp
Discovering public IP programmatically
1
3
16
613,518
0
0
0
I want to get all the messages from my Gmail inbox, but I am facing 2 problems: it does not get all the emails (as per the count in the stat function), and the order of emails it gets is random. I am unsure if it's a problem with poplib or the Gmail POP server. What am I missing here?
false
617,892
0.099668
1
0
2
You can also try imaplib module since GMail also provides access to email via IMAP protocol.
0
1,096
0
1
2009-03-06T06:55:00.000
python,python-2.5,poplib
Poplib not working correctly?
1
1
4
628,130
0
0
0
I need to download emails from the Gmail inbox only, using poplib. Unfortunately I do not see any option to select the Inbox alone, and poplib gives me emails from sent items too. How do I select emails only from the inbox? I don't want to use any Gmail-specific libraries.
true
625,148
1.2
1
0
4
POP3 has no concept of 'folders'. If gmail is showing you both 'sent' as well as 'received' mail, then you really don't have any option but to receive all that email. Perhaps you would be better off using IMAP4 instead of POP3. Python has libraries that will work with gmail's IMAP4 server.
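As an illustrative sketch of the IMAP route (imaplib is in the standard library; imap.gmail.com is Gmail's usual IMAP host, and the credentials are placeholders):

```python
import imaplib

mail = imaplib.IMAP4_SSL("imap.gmail.com")
mail.login("someone@gmail.com", "app-password")   # placeholder credentials
mail.select("INBOX")                               # folders exist in IMAP, unlike POP3

status, data = mail.search(None, "ALL")            # message numbers in the inbox only
for num in data[0].split():
    status, msg_data = mail.fetch(num, "(RFC822)")
    raw_message = msg_data[0][1]                   # full RFC 822 message bytes
    # parse with the email module, store it, etc.

mail.logout()
```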
0
2,392
0
2
2009-03-09T05:49:00.000
python,gmail,pop3,poplib
Select mails from inbox alone via poplib
1
1
3
625,175
0
0
0
So I'm in the middle of web-based filesystem abstraction layer development. Just like file browser, except it has some extra features like freaky permissions etc. I would like users to be notified somehow about directory changes. So, i.e. when someone uploads a new file via FTP, certain users should get a proper message. It is not required for the message to be extra detailed, I don't really need to show the exact resource changed. The parent directory name should be enough. What approach would you recommend?
false
649,623
0
0
0
0
A simple approach would be to monitor/check the last modification date of the working directory (using os.stat() for example). Whenever a file in a directory is modified, the working directory's (the directory the file is in) last modification date changes as well. At least this works on the filesystems I am working on (ufs, ext3). I'm not sure if all filesystems do it this way.
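A minimal sketch of that polling idea (the directory path is a placeholder):

```python
import os
import time


def watch_directory(path, interval=30):
    """Print a message whenever the directory's modification time changes."""
    last_mtime = os.stat(path).st_mtime
    while True:
        time.sleep(interval)
        current = os.stat(path).st_mtime
        if current != last_mtime:
            print("Something changed inside", path)
            last_mtime = current


# watch_directory("/srv/ftp/uploads")   # hypothetical directory
```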
0
2,053
0
3
2009-03-16T08:22:00.000
python,file,filesystems,checksum
Directory checksum with python?
1
1
3
649,665
0
0
0
I need advice on how to go about setting up a simple service for my users. I would like to add a new feature where users can send and receive emails from their Gmail account. I have seen this done several times and I know it's possible. There used to be a project for "Libgmailer" at SourceForge, but I think it was abandoned. Is anyone aware of anything similar? I have found that Gmail has a Python API, but my site is making use of PHP. I really need ideas on how best to go about this! Thanks all for any input
true
656,180
1.2
1
0
6
Any library/source that works with IMAP or POP will work.
0
498
0
1
2009-03-17T21:51:00.000
php,python,email,gmail
Implementation: How to retrieve and send emails for different Gmail accounts?
1
3
3
656,198
0
0
0
I need advice on how to go about setting up a simple service for my users. I would like to add a new feature where users can send and receive emails from their Gmail account. I have seen this done several times and I know it's possible. There used to be a project for "Libgmailer" at SourceForge, but I think it was abandoned. Is anyone aware of anything similar? I have found that Gmail has a Python API, but my site is making use of PHP. I really need ideas on how best to go about this! Thanks all for any input
false
656,180
0
1
0
0
Just a thought: Gmail supports POP/IMAP access. Could you do it using those protocols? It would mean asking your users to go into their Gmail settings and enable it, though.
0
498
0
1
2009-03-17T21:51:00.000
php,python,email,gmail
Implementation: How to retrieve and send emails for different Gmail accounts?
1
3
3
656,205
0
0
0
I need advice on how to go about setting up a simple service for my users. I would like to add a new feature where users can send and receive emails from their Gmail account. I have seen this done several times and I know it's possible. There used to be a project for "Libgmailer" at SourceForge, but I think it was abandoned. Is anyone aware of anything similar? I have found that Gmail has a Python API, but my site is making use of PHP. I really need ideas on how best to go about this! Thanks all for any input
false
656,180
0
1
0
0
Well if Google didn't come up with anything personally I'd see if I could reverse engineer the Python API by implementing it and watching it with a packet sniffer. My guess is it's just accessing some web service which should be pretty easy to mimic regardless of the language you're using.
0
498
0
1
2009-03-17T21:51:00.000
php,python,email,gmail
Implementation: How to retrieve and send emails for different Gmail accounts?
1
3
3
656,194
0
1
0
Is there a way I can preserve the original order of attributes when processing XML with minidom? Say I have: <color red="255" green="255" blue="233" /> when I modify this with minidom the attributes are rearranged alphabetically blue, green, and red. I'd like to preserve the original order. I am processing the file by looping through the elements returned by elements = doc.getElementsByTagName('color') and then I do assignments like this e.attributes["red"].value = "233".
false
662,624
0.022219
0
0
1
1. Customize your own Element.writexml method: from minidom.py, copy Element's writexml code into your own file and rename it writexml_nosort. Delete 'a_names.sort()' (Python 2.7), or change 'a_names = sorted(attrs.keys())' to 'a_names = attrs.keys()' (Python 3.4). Then point the Element method at your own: minidom.Element.writexml = writexml_nosort. 2. Define your preferred order: right_order = ['a', 'b', 'c', 'a1', 'b1']. 3. Adjust the element's _attrs: node._attrs = OrderedDict([(k, node._attrs[k]) for k in right_order]).
0
8,252
0
13
2009-03-19T15:23:00.000
python,xml,minidom
Preserve order of attributes when modifying with minidom
1
1
9
29,696,911
0
0
0
I could use some pseudo-code, or better, Python. I am trying to implement a rate-limiting queue for a Python IRC bot, and it partially works, but if someone triggers fewer messages than the limit (e.g., the rate limit is 5 messages per 8 seconds, and the person triggers only 4), and the next trigger is over the 8 seconds (e.g., 16 seconds later), the bot sends the message, but the queue becomes full and the bot waits 8 seconds, even though that's not needed since the 8 second period has lapsed.
false
667,508
0.059928
1
0
3
One solution is to attach a timestamp to each queue item and to discard the item after 8 seconds have passed. You can perform this check each time the queue is added to. This only works if you limit the queue size to 5 and discard any additions whilst the queue is full.
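A sketch of that timestamp idea using a deque as the sliding window (the class name and the 5-per-8-seconds figures simply mirror the question):

```python
import time
from collections import deque


class SlidingWindowLimiter:
    """Allow at most `limit` sends per `period` seconds, measured against a
    sliding window of actual send times rather than fixed 8-second buckets."""

    def __init__(self, limit=5, period=8.0):
        self.limit = limit
        self.period = period
        self.stamps = deque()            # timestamps of recent sends, oldest first

    def wait_for_slot(self):
        now = time.monotonic()
        # Forget sends that have aged out of the window.
        while self.stamps and now - self.stamps[0] >= self.period:
            self.stamps.popleft()
        if len(self.stamps) >= self.limit:
            # Sleep only until the oldest send leaves the window, then drop it.
            time.sleep(self.period - (now - self.stamps[0]))
            self.stamps.popleft()
        self.stamps.append(time.monotonic())


limiter = SlidingWindowLimiter()
for message in ("hello", "world"):
    limiter.wait_for_slot()
    # connection.send(message)          # placeholder for the bot's real send call
```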
0
108,156
0
173
2009-03-20T19:02:00.000
python,algorithm,message-queue
What's a good rate limiting algorithm?
1
1
10
667,528
0
0
0
I want my python application to be able to tell when the socket on the other side has been dropped. Is there a method for this?
false
667,640
1
0
0
47
Short answer: use a non-blocking recv(), or a blocking recv() / select() with a very short timeout.

Long answer: The way to handle socket connections is to read or write as you need to, and be prepared to handle connection errors. TCP distinguishes between 3 forms of "dropping" a connection: timeout, reset, close. Of these, the timeout cannot really be detected; TCP might only tell you the time has not expired yet. But even if it told you that, the time might still expire right after. Also remember that, using shutdown(), either you or your peer (the other end of the connection) may close only the incoming byte stream and keep the outgoing byte stream running, or close the outgoing stream and keep the incoming one running. So strictly speaking, you want to check if the read stream is closed, or if the write stream is closed, or if both are closed.

Even if the connection was "dropped", you should still be able to read any data that is still in the network buffer. Only after the buffer is empty will you receive a disconnect from recv(). Checking if the connection was dropped is like asking "what will I receive after reading all data that is currently buffered?" To find that out, you just have to read all data that is currently buffered.

I can see how "reading all buffered data", to get to the end of it, might be a problem for some people that still think of recv() as a blocking function. With a blocking recv(), "checking" for a read when the buffer is already empty will block, which defeats the purpose of "checking". In my opinion, any function that is documented to potentially block the entire process indefinitely is a design flaw, but I guess it is still there for historical reasons, from when using a socket just like a regular file descriptor was a cool idea.

What you can do is:
set the socket to non-blocking mode, but then you get a system-dependent error to indicate the receive buffer is empty, or the send buffer is full
stick to blocking mode but set a very short socket timeout; this will allow you to "ping" or "check" the socket with recv(), pretty much what you want to do
use a select() call or the asyncore module with a very short timeout; error reporting is still system-specific

For the write part of the problem, keeping the read buffers empty pretty much covers it. You will discover a connection "dropped" after a non-blocking read attempt, and you may choose to stop sending anything after a read returns a closed channel. I guess the only way to be sure your sent data has reached the other end (and is not still in the send buffer) is either:
receive a proper response on the same socket for the exact message that you sent; basically you are using the higher-level protocol to provide confirmation
perform a successful shutdown() and close() on the socket

The Python socket HOWTO says send() will return 0 bytes written if the channel is closed. You may use a non-blocking or a timeout socket.send(), and if it returns 0 you can no longer send data on that socket. But if it returns non-zero, you have already sent something, good luck with that :) Also, I have not considered OOB (out-of-band) socket data here as a means to approach your problem, but I think OOB was not what you meant.
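A small sketch of the "very short timeout plus recv()" check described above, using select() and MSG_PEEK so any buffered data stays in place; treat it as illustrative rather than a complete liveness test:

```python
import select
import socket


def peer_has_closed(sock):
    """Best-effort check of the read side of a connected TCP socket.

    Returns True when the peer has closed (a readable socket whose recv()
    yields b'') or the connection errored out; False when there is simply
    nothing to read right now.
    """
    readable, _, _ = select.select([sock], [], [], 0.05)   # very short timeout
    if not readable:
        return False
    try:
        data = sock.recv(1, socket.MSG_PEEK)   # peek: buffered data is not consumed
    except OSError:                            # covers ConnectionResetError etc.
        return True
    return data == b""
```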
0
175,519
0
69
2009-03-20T19:31:00.000
python,sockets
How to tell if a connection is dead in python
1
2
6
15,175,067
0
0
0
I want my python application to be able to tell when the socket on the other side has been dropped. Is there a method for this?
true
667,640
1.2
0
0
41
It depends on what you mean by "dropped". For TCP sockets, if the other end closes the connection either through close() or the process terminating, you'll find out by reading an end of file, or getting a read error, usually the errno being set to whatever 'connection reset by peer' is by your operating system. For python, you'll read a zero length string, or a socket.error will be thrown when you try to read or write from the socket.
0
175,519
0
69
2009-03-20T19:31:00.000
python,sockets
How to tell if a connection is dead in python
1
2
6
667,710
0
1
0
I have a web.py server that responds to various user requests. One of these requests involves downloading and analyzing a series of web pages. Is there a simple way to setup an async / callback based url download mechanism in web.py? Low resource usage is particularly important as each user initiated request could result in download of multiple pages. The flow would look like: User request -> web.py -> Download 10 pages in parallel or asynchronously -> Analyze contents, return results I recognize that Twisted would be a nice way to do this, but I'm already in web.py so I'm particularly interested in something that can fit within web.py .
false
668,257
0.039979
0
0
2
I'd just build a service in twisted that did that concurrent fetch and analysis and access that from web.py as a simple http request.
1
10,221
0
9
2009-03-20T22:40:00.000
python,asynchronous
Python: simple async download of url content?
1
3
10
668,772
0
1
0
I have a web.py server that responds to various user requests. One of these requests involves downloading and analyzing a series of web pages. Is there a simple way to setup an async / callback based url download mechanism in web.py? Low resource usage is particularly important as each user initiated request could result in download of multiple pages. The flow would look like: User request -> web.py -> Download 10 pages in parallel or asynchronously -> Analyze contents, return results I recognize that Twisted would be a nice way to do this, but I'm already in web.py so I'm particularly interested in something that can fit within web.py .
false
668,257
0
0
0
0
Actually you can integrate twisted with web.py. I'm not really sure how as I've only done it with django (used twisted with it).
1
10,221
0
9
2009-03-20T22:40:00.000
python,asynchronous
Python: simple async download of url content?
1
3
10
668,723
0
1
0
I have a web.py server that responds to various user requests. One of these requests involves downloading and analyzing a series of web pages. Is there a simple way to setup an async / callback based url download mechanism in web.py? Low resource usage is particularly important as each user initiated request could result in download of multiple pages. The flow would look like: User request -> web.py -> Download 10 pages in parallel or asynchronously -> Analyze contents, return results I recognize that Twisted would be a nice way to do this, but I'm already in web.py so I'm particularly interested in something that can fit within web.py .
false
668,257
0
0
0
0
I'm not sure I'm understanding your question, so I'll give multiple partial answers to start with. If your concern is that web.py is having to download data from somewhere and analyze the results before responding, and you fear the request may time out before the results are ready, you could use ajax to split the work up. Return immediately with a container page (to hold the results) and a bit of javascript to poll the sever for the results until the client has them all. Thus the client never waits for the server, though the user still has to wait for the results. If your concern is tying up the server waiting for the client to get the results, I doubt if that will actually be a problem. Your networking layers should not require you to wait-on-write If you are worrying about the server waiting while the client downloads static content from elsewhere, either ajax or clever use of redirects should solve your problem
1
10,221
0
9
2009-03-20T22:40:00.000
python,asynchronous
Python: simple async download of url content?
1
3
10
668,486
0
1
0
I'm building a Python application that needs to communicate with an OAuth service provider. The SP requires me to specify a callback URL. Specifying localhost obviously won't work. I'm unable to set up a public facing server. Any ideas besides paying for server/hosting? Is this even possible?
false
670,398
0
0
0
0
You could create 2 applications: one for deployment and the other for testing. Alternatively, you can also include an oauth_callback parameter when you request a request token. Some providers will redirect to the URL specified by oauth_callback (e.g. Twitter, Google), but some will ignore this callback URL and redirect to the one specified during configuration (e.g. Yahoo).
0
19,222
0
37
2009-03-22T01:37:00.000
python,oauth
How do I develop against OAuth locally?
1
3
7
3,117,885
0
1
0
I'm building a Python application that needs to communicate with an OAuth service provider. The SP requires me to specify a callback URL. Specifying localhost obviously won't work. I'm unable to set up a public facing server. Any ideas besides paying for server/hosting? Is this even possible?
false
670,398
0.141893
0
0
5
This was with the Facebook OAuth - I actually was able to specify 'http://127.0.0.1:8080' as the Site URL and the callback URL. It took several minutes for the changes to the Facebook app to propagate, but then it worked.
0
19,222
0
37
2009-03-22T01:37:00.000
python,oauth
How do I develop against OAuth locally?
1
3
7
7,971,246
0
1
0
I'm building a Python application that needs to communicate with an OAuth service provider. The SP requires me to specify a callback URL. Specifying localhost obviously won't work. I'm unable to set up a public facing server. Any ideas besides paying for server/hosting? Is this even possible?
false
670,398
1
0
0
10
If you are using a *nix-style system, create an alias like 127.0.0.1 mywebsite.dev in /etc/hosts (you need to have a line like that in the file), then use http://mywebsite.dev/callbackurl/for/app as the callback URL during local testing.
0
19,222
0
37
2009-03-22T01:37:00.000
python,oauth
How do I develop against OAuth locally?
1
3
7
12,107,449
0
0
0
Very occasionally when making a http request, I am waiting for an age for a response that never comes. What is the recommended way to cancel this request after a reasonable period of time?
true
683,493
1.2
0
0
2
Set the HTTP request timeout.
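For instance, with the standard library (the URL is a placeholder; the exact exception surfaced can vary between Python versions, so the sketch catches both of the usual ones):

```python
import socket
import urllib.error
import urllib.request

try:
    response = urllib.request.urlopen("http://example.com/slow", timeout=10)
    body = response.read()
except (socket.timeout, urllib.error.URLError):
    body = None   # give up on this request instead of waiting forever
```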
0
578
0
1
2009-03-25T21:17:00.000
python,httpwebrequest
Timeout on a HTTP request in python
1
1
3
683,519
0
0
0
I have been asked to quote a project where they want to see sent email using POP. I am pretty sure this is not possible, but I thought I'd ask in case it is. So is it possible, given a user's POP email server details, to access their sent mail? If so, are there any examples in Python or fetchmail?
false
690,527
0.148885
1
0
3
POP doesn't support sent email. POP is inbox-only; sent mail will be stored in IMAP, Exchange, or another proprietary system.
0
3,820
0
0
2009-03-27T16:45:00.000
python,email,pop3,fetchmail
Is it possible to Access a Users Sent Email over POP?
1
4
4
690,536
0
0
0
I have been asked to quote a project where they want to see sent email using POP. I am pretty sure this is not possible, but I thought I'd ask in case it is. So is it possible, given a user's POP email server details, to access their sent mail? If so, are there any examples in Python or fetchmail?
false
690,527
0.049958
1
0
1
The SMTP (mail sending) server could forward a copy of all sent mail back to the sender; they could then access this over POP.
0
3,820
0
0
2009-03-27T16:45:00.000
python,email,pop3,fetchmail
Is it possible to Access a Users Sent Email over POP?
1
4
4
690,541
0
0
0
I have been asked to quote a project where they want to see sent email using POP. I am pretty sure this is not possible, but I thought I'd ask in case it is. So is it possible, given a user's POP email server details, to access their sent mail? If so, are there any examples in Python or fetchmail?
true
690,527
1.2
1
0
5
POP3 only handles receiving email; sent mail is sent via SMTP in these situations, and may be sent via a different ISP to the receiver (say, when you host your own email server, but use your current ISP to send). As such, this isn't directly possible. IMAP could do it, as this offers server side email folders as well as having the server handle the interface to both send and receive SMTP traffic
0
3,820
0
0
2009-03-27T16:45:00.000
python,email,pop3,fetchmail
Is it possible to Access a Users Sent Email over POP?
1
4
4
690,542
0
0
0
I have been asked to quote a project where they want to see sent email using POP. I am pretty sure this is not possible, but I thought I'd ask in case it is. So is it possible, given a user's POP email server details, to access their sent mail? If so, are there any examples in Python or fetchmail?
false
690,527
0.049958
1
0
1
Emails are not sent using POP, but collected from a server using POP. They are sent using SMTP, and they don't hang around on the server once they're gone. You might want to look into IMAP?
0
3,820
0
0
2009-03-27T16:45:00.000
python,email,pop3,fetchmail
Is it possible to Access a Users Sent Email over POP?
1
4
4
690,540
0
1
0
I stripped some tags that I thought were unnecessary from an XML file. Now when I try to parse it, my SAX parser throws an error and says my file is not well-formed. However, I know every start tag has an end tag. The file's opening tag has a link to an XML schema. Could this be causing the trouble? If so, then how do I fix it? Edit: I think I've found the problem. My character data contains "&lt;" and "&gt;" entities, presumably from HTML tags. After being parsed, these are converted to "<" and ">" characters, which seems to bother the SAX parser. Is there any way to prevent this from happening?
false
708,531
0
0
0
0
You could load it into Firefox, if you don't have an XML editor. Firefox shows you the error.
0
2,338
0
0
2009-04-02T06:35:00.000
python,xml,sax
Python SAX parser says XML file is not well-formed
1
3
4
715,813
0
1
0
I stripped some tags that I thought were unnecessary from an XML file. Now when I try to parse it, my SAX parser throws an error and says my file is not well-formed. However, I know every start tag has an end tag. The file's opening tag has a link to an XML schema. Could this be causing the trouble? If so, then how do I fix it? Edit: I think I've found the problem. My character data contains "&lt;" and "&gt;" entities, presumably from HTML tags. After being parsed, these are converted to "<" and ">" characters, which seems to bother the SAX parser. Is there any way to prevent this from happening?
false
708,531
0
0
0
0
I would second the recommendation to try parsing it with another XML parser. That should give an indication as to whether it's the document that's wrong, or the parser. Also, the actual error message might be useful. One fairly common problem, for example, is that the XML declaration (if one is used; it's optional) must be the very first thing in the file: not even whitespace is allowed before it.
0
2,338
0
0
2009-04-02T06:35:00.000
python,xml,sax
Python SAX parser says XML file is not well-formed
1
3
4
711,033
0
1
0
I stripped some tags that I thought were unnecessary from an XML file. Now when I try to parse it, my SAX parser throws an error and says my file is not well-formed. However, I know every start tag has an end tag. The file's opening tag has a link to an XML schema. Could this be causing the trouble? If so, then how do I fix it? Edit: I think I've found the problem. My character data contains "&lt;" and "&gt;" entities, presumably from HTML tags. After being parsed, these are converted to "<" and ">" characters, which seems to bother the SAX parser. Is there any way to prevent this from happening?
false
708,531
0.099668
0
0
2
I would suggest putting those tags back in and making sure it still works. Then, if you want to take them out, do it one at a time until it breaks. However, I question the wisdom of taking them out. If it's your XML file, you should understand it better. If it's a third-party XML file, you really shouldn't be fiddling with it (until you understand it better :-).
0
2,338
0
0
2009-04-02T06:35:00.000
python,xml,sax
Python SAX parser says XML file is not well-formed
1
3
4
708,546
0
0
0
I have seen many projects using the simplejson module instead of the json module from the Standard Library. Also, there are many different simplejson modules. Why would you use these alternatives instead of the one in the Standard Library?
false
712,791
1
0
0
6
Another reason projects use simplejson is that the builtin json did not originally include its C speedups, so the performance difference was noticeable.
1
141,384
0
405
2009-04-03T06:56:00.000
python,json,simplejson
What are the differences between json and simplejson Python modules?
1
4
13
714,748
0
0
0
I have seen many projects using the simplejson module instead of the json module from the Standard Library. Also, there are many different simplejson modules. Why would you use these alternatives instead of the one in the Standard Library?
false
712,791
0.03076
0
0
2
In Python 3, if you have a string of b'bytes', with json you have to .decode() the content before you can load it. simplejson takes care of this, so you can just do simplejson.loads(byte_string).
1
141,384
0
405
2009-04-03T06:56:00.000
python,json,simplejson
What are the differences between json and simplejson Python modules?
1
4
13
38,016,773
0
0
0
I have seen many projects using the simplejson module instead of the json module from the Standard Library. Also, there are many different simplejson modules. Why would you use these alternatives instead of the one in the Standard Library?
false
712,791
0.076772
0
0
5
The builtin json module got included in Python 2.6. Any projects that support versions of Python < 2.6 need to have a fallback. In many cases, that fallback is simplejson.
1
141,384
0
405
2009-04-03T06:56:00.000
python,json,simplejson
What are the differences between json and simplejson Python modules?
1
4
13
712,795
0
0
0
I have seen many projects using the simplejson module instead of the json module from the Standard Library. Also, there are many different simplejson modules. Why would you use these alternatives instead of the one in the Standard Library?
false
712,791
0
0
0
0
I came across this question as I was looking to install simplejson for Python 2.6. I needed to use the 'object_pairs_hook' of json.load() in order to load a json file as an OrderedDict. Being familiar with more recent versions of Python I didn't realize that the json module for Python 2.6 doesn't include the 'object_pairs_hook' so I had to install simplejson for this purpose. From personal experience this is why i use simplejson as opposed to the standard json module.
1
141,384
0
405
2009-04-03T06:56:00.000
python,json,simplejson
What are the differences between json and simplejson Python modules?
1
4
13
31,269,030
0
0
0
Is there a list somewhere of recommendations of different Python-based REST frameworks for use on the serverside to write your own RESTful APIs? Preferably with pros and cons. Please feel free to add recommendations here. :)
false
713,847
0
0
0
0
I strongly recommend TurboGears or Bottle.
TurboGears: less verbose than Django, more flexible, less HTML-oriented; but less famous.
Bottle: very fast, very easy to learn; but minimalistic and not mature.
0
241,535
0
321
2009-04-03T13:13:00.000
python,web-services,rest,frameworks
Recommendations of Python REST (web services) framework?
1
2
16
1,722,910
0
0
0
Is there a list somewhere of recommendations of different Python-based REST frameworks for use on the serverside to write your own RESTful APIs? Preferably with pros and cons. Please feel free to add recommendations here. :)
false
713,847
1
0
0
8
I don't see any reason to use Django just to expose a REST api, there are lighter and more flexible solutions. Django carries a lot of other things to the table, that are not always needed. For sure not needed if you only want to expose some code as a REST service. My personal experience, fwiw, is that once you have a one-size-fits-all framework, you'll start to use its ORM, its plugins, etc. just because it's easy, and in no time you end up having a dependency that is very hard to get rid of. Choosing a web framework is a tough decision, and I would avoid picking a full stack solution just to expose a REST api. Now, if you really need/want to use Django, then Piston is a nice REST framework for django apps. That being said, CherryPy looks really nice too, but seems more RPC than REST. Looking at the samples (I never used it), probably web.py is the best and cleanest if you only need REST.
0
241,535
0
321
2009-04-03T13:13:00.000
python,web-services,rest,frameworks
Recommendations of Python REST (web services) framework?
1
2
16
6,897,383
0
1
0
I have a web application and I would like to enable real time SMS notifications to the users of the applications. Note: I currently cannot use the Twitter API because I live in West Africa, and Twitter doesn't send SMS to my country. Also email2sms is not an option because the mobile operators don't allow that in my country.
false
716,946
0
1
0
0
I don't have any knowledge in this area. But I think you'll have to talk to the mobile operators, and see if they have any API for sending SMS messages. You'll probably have to pay them, or have some scheme for customers to pay them. Alternatively there might be some 3rd party that implements this functionality.
0
2,976
0
1
2009-04-04T11:47:00.000
python,sms,notifications
How do I enable SMS notifications in my web apps?
1
2
7
716,953
0
1
0
I have a web application and I would like to enable real time SMS notifications to the users of the applications. Note: I currently cannot use the Twitter API because I live in West Africa, and Twitter doesn't send SMS to my country. Also email2sms is not an option because the mobile operators don't allow that in my country.
false
716,946
0.057081
1
0
2
The easiest way to accomplish this is by using a third-party API. Some I know that work well are restSms.me, Twilio.com and Clicatell.com. I have used all of them, and the easiest/cheapest one to implement was restSms.me. Hope that helps.
0
2,976
0
1
2009-04-04T11:47:00.000
python,sms,notifications
How do I enable SMS notifications in my web apps?
1
2
7
5,414,483
0
1
0
Problems: how to make Ajax buttons (upward and downward arrows) such that the number can increase or decrease, and how to save the action of a user to a variable NumberOfVotesOfQuestionID. I am not sure whether I should use a database for the variable. However, I know that there is an easier way to save the number of votes. How can you solve these problems? [edit] The server-side programming language is Python.
false
719,194
0.148885
0
0
3
You create the buttons, which can be links or images or whatever. Now hook a JavaScript function up to each button's click event. On clicking, the function fires and Sends a request to the server code that says, more or less, +1 or -1. Server code takes over. This will vary wildly depending on what framework you use (or don't) and a bunch of other things. Code connects to the database and runs a query to +1 or -1 the score. How this happens will vary wildly depending on your database design, but it'll be something like UPDATE posts SET score=score+1 WHERE score_id={{insert id here}};. Depending on what the database says, the server returns a success code or a failure code as the AJAX request response. Response gets sent to AJAX, asynchronously. The JS response function updates the score if it's a success code, displays an error if it's a failure. You can store the code in a variable, but this is complicated and depends on how well you know the semantics of your code's runtime environment. It eventually needs to be pushed to persistent storage anyway, so using the database 100% is a good initial solution. When the time for optimizing performance comes, there are enough software in the world to cache database queries to make you feel woozy so it's not that big a deal.
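A minimal sketch of the server-side step, assuming an SQLite database and a posts table with id and score columns (all of which are stand-ins for whatever schema the real app uses):

```python
import sqlite3


def record_vote(post_id, delta):
    """Apply a +1 or -1 sent by the client-side JavaScript to the stored score."""
    if delta not in (1, -1):
        raise ValueError("vote must be +1 or -1")
    connection = sqlite3.connect("votes.db")      # stand-in for the real database
    with connection:                               # commits on success, rolls back on error
        connection.execute(
            "UPDATE posts SET score = score + ? WHERE id = ?",
            (delta, post_id),
        )
    connection.close()
```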
0
7,516
0
32
2009-04-05T16:07:00.000
javascript,python,html,ajax
How can you make a vote-up-down button like in Stackoverflow?
1
1
4
719,293
0
1
0
I have a python script that runs continuously. It outputs 2 lines of info every 30 seconds. I'd like to be able to view this output on the web. In particular, I'd like the site to auto-update (add the new output at the top of the page/site every 30 seconds without having to refresh the page). I understand I can do this with javascript but is there a python only based solution? Even if there is, is javascript the way to go? I'm more than willing to learn javascript if needed but if not, I'd like to stay focused on python. Sorry for the basic question but I'm still clueless when it comes to web programming. Thx!
false
731,470
0
0
0
0
Is this for a real webapp? Or is this a convenience thing for you to view output in the browser? If it's more so for convenience, you could consider using mod_python. mod_python is an extension for the apache webserver that embeds a python interpreter in the web server (so the script runs server side). It would easily let you do this sort of thing locally or for your own convenience. Then you could just run the script with mod python and have the handler post your results. You could probably easily implement the refreshing too, but I would not know off the top of my head how to do this. Hope this helps... check out mod_python. It's not too bad once you get everything configured.
0
16,918
0
7
2009-04-08T19:31:00.000
javascript,python
What's the easiest way to get Python script output on the web?
1
3
9
731,629
0
1
0
I have a python script that runs continuously. It outputs 2 lines of info every 30 seconds. I'd like to be able to view this output on the web. In particular, I'd like the site to auto-update (add the new output at the top of the page/site every 30 seconds without having to refresh the page). I understand I can do this with javascript but is there a python only based solution? Even if there is, is javascript the way to go? I'm more than willing to learn javascript if needed but if not, I'd like to stay focused on python. Sorry for the basic question but I'm still clueless when it comes to web programming. Thx!
false
731,470
0
0
0
0
JavaScript is the primary way to add this sort of interactivity to a website. You can make the back-end Python, but the client will have to use JavaScript AJAX calls to update the page. Python doesn't run in the browser, so you're out of luck if you want to use just Python. (It's also possible to use Flash or Java applets, but that's a pretty heavyweight solution for what seems like a small problem.)
0
16,918
0
7
2009-04-08T19:31:00.000
javascript,python
What's the easiest way to get Python script output on the web?
1
3
9
731,476
0
1
0
I have a python script that runs continuously. It outputs 2 lines of info every 30 seconds. I'd like to be able to view this output on the web. In particular, I'd like the site to auto-update (add the new output at the top of the page/site every 30 seconds without having to refresh the page). I understand I can do this with javascript but is there a python only based solution? Even if there is, is javascript the way to go? I'm more than willing to learn javascript if needed but if not, I'd like to stay focused on python. Sorry for the basic question but I'm still clueless when it comes to web programming. Thx!
false
731,470
0.022219
0
0
1
You need Javascript in one way or another for your 30 second refresh. Alternatively, you could set a meta tag refresh for every 30 seconds to redirect to the current page, but the Javascript route will prevent page flicker.
0
16,918
0
7
2009-04-08T19:31:00.000
javascript,python
What's the easiest way to get Python script output on the web?
1
3
9
731,477
0
0
0
I'm looking for a well-supported multithreaded Python HTTP server that supports chunked encoding replies. (I.e. "Transfer-Encoding: chunked" on responses). What's the best HTTP server base to start with for this purpose?
false
732,222
0.066568
0
0
2
Twisted supports chunked transfer and it does so transparently. i.e., if your request handler does not specify a response length, twisted will automatically switch to chunked transfer and it will generate one chunk per call to Request.write.
1
6,143
0
7
2009-04-08T22:58:00.000
python,http,chunked-encoding
Python HTTP server that supports chunked encoding?
1
1
6
9,326,192
0
1
0
Trying to understand S3...How do you limit access to a file you upload to S3? For example, from a web application, each user has files they can upload, but how do you limit access so only that user has access to that file? It seems like the query string authentication requires an expiration date and that won't work for me, is there another way to do this?
false
765,964
0.049958
0
1
1
You will have to build the whole access logic to S3 in your applications
0
4,305
0
10
2009-04-19T19:51:00.000
python,django,amazon-web-services,amazon-s3
Amazon S3 permissions
1
3
4
766,030
0
1
0
Trying to understand S3...How do you limit access to a file you upload to S3? For example, from a web application, each user has files they can upload, but how do you limit access so only that user has access to that file? It seems like the query string authentication requires an expiration date and that won't work for me, is there another way to do this?
false
765,964
1
0
1
8
1. Have the user hit your server
2. Have the server set up a query-string authentication with a short expiration (minutes, hours?)
3. Have your server redirect to #2
0
4,305
0
10
2009-04-19T19:51:00.000
python,django,amazon-web-services,amazon-s3
Amazon S3 permissions
1
3
4
768,090
0
1
0
Trying to understand S3...How do you limit access to a file you upload to S3? For example, from a web application, each user has files they can upload, but how do you limit access so only that user has access to that file? It seems like the query string authentication requires an expiration date and that won't work for me, is there another way to do this?
false
765,964
1
0
1
14
There are various ways to control access to the S3 objects: Use the query string auth - but as you noted this does require an expiration date. You could make it far in the future, which has been good enough for most things I have done. Use the S3 ACLs - but this requires the user to have an AWS account and authenticate with AWS to access the S3 object. This is probably not what you are looking for. You proxy the access to the S3 object through your application, which implements your access control logic. This will bring all the bandwidth through your box. You can set up an EC2 instance with your proxy logic - this keeps the bandwidth closer to S3 and can reduce latency in certain situations. The difference between this and the previous option could be minimal, but depends on your particular situation.
0
4,305
0
10
2009-04-19T19:51:00.000
python,django,amazon-web-services,amazon-s3
Amazon S3 permissions
1
3
4
768,050
0
0
0
For fun, I've been toying around with writing a load balancer in python and have been trying to figure out the best (correct?) way to test if a port is available and the remote host is still there. I'm finding that, once connected, it becomes difficult to tell when the remote host goes down. I've turned keep alive on, but can't get it to recognize a downed connection sooner than a minute (I realize polling more often than a minute might be overkill, but let's say I wanted to), even after setting the various TCP_KEEPALIVE options to their lowest. When I use nonblocking sockets, I've noticed that a recv() will return an error ("resource temporarily unavailable") when it reads from a live socket, but returns "" when reading from a dead one (send and recv of 0 bytes, which might be the cause?). That seems like an odd way to test whether it's connected, though, and makes it impossible to tell whether the connection died after sending some data. Aside from connecting/disconnecting for every check, is there something I can do? Can I manually send a tcp keepalive, or can I establish a lower level connection that will let me test the connectivity without sending real data the remote server would potentially process?
true
771,399
1.2
0
0
2
I'd recommend not leaving your (single) test socket connected - make a new connection each time you need to poll. Every load balancer / server availability system I've ever seen uses this method instead of a persistent connection. If the remote server hasn't responded within a reasonable amount of time (e.g. 10s) mark it as "down". Use timers and signals rather than function response codes to handle that timeout.
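A sketch of that per-poll connection check, with a timeout treated as "down" (host, port, and the 10-second figure are just the values discussed above):

```python
import socket


def port_is_alive(host, port, timeout=10.0):
    """Open a fresh TCP connection for each poll; mark the backend down on failure."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:          # refused, unreachable, or timed out
        return False
```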
0
3,463
0
3
2009-04-21T07:11:00.000
python,tcp,monitoring,port
Monitoring a tcp port
1
4
5
771,422
0
0
0
For fun, I've been toying around with writing a load balancer in python and have been trying to figure out the best (correct?) way to test if a port is available and the remote host is still there. I'm finding that, once connected, it becomes difficult to tell when the remote host goes down. I've turned keep alive on, but can't get it to recognize a downed connection sooner than a minute (I realize polling more often than a minute might be overkill, but let's say I wanted to), even after setting the various TCP_KEEPALIVE options to their lowest. When I use nonblocking sockets, I've noticed that a recv() will return an error ("resource temporarily unavailable") when it reads from a live socket, but returns "" when reading from a dead one (send and recv of 0 bytes, which might be the cause?). That seems like an odd way to test whether it's connected, though, and makes it impossible to tell whether the connection died after sending some data. Aside from connecting/disconnecting for every check, is there something I can do? Can I manually send a tcp keepalive, or can I establish a lower level connection that will let me test the connectivity without sending real data the remote server would potentially process?
false
771,399
0
0
0
0
It is theoretically possible to spam a keepalive packet. But to set it to very low intervals, you may need to dig into raw sockets. Also, your host may ignore it if its coming in too fast. The best way to check if a host is alive in a TCP connection is to send data, and wait for an ACK packet. If the ACK packet arrives, the SEND function will return non-zero.
0
3,463
0
3
2009-04-21T07:11:00.000
python,tcp,monitoring,port
Monitoring a tcp port
1
4
5
771,438
0
0
0
For fun, I've been toying around with writing a load balancer in python and have been trying to figure out the best (correct?) way to test if a port is available and the remote host is still there. I'm finding that, once connected, it becomes difficult to tell when the remote host goes down. I've turned keep alive on, but can't get it to recognize a downed connection sooner than a minute (I realize polling more often than a minute might be overkill, but let's say I wanted to), even after setting the various TCP_KEEPALIVE options to their lowest. When I use nonblocking sockets, I've noticed that a recv() will return an error ("resource temporarily unavailable") when it reads from a live socket, but returns "" when reading from a dead one (send and recv of 0 bytes, which might be the cause?). That seems like an odd way to test whether it's connected, though, and makes it impossible to tell whether the connection died after sending some data. Aside from connecting/disconnecting for every check, is there something I can do? Can I manually send a tcp keepalive, or can I establish a lower level connection that will let me test the connectivity without sending real data the remote server would potentially process?
false
771,399
0.039979
0
0
1
"it becomes difficult to tell when the remote host goes down" Correct. This is a feature of TCP. The whole point of TCP is to have an enduring connection between ports. Theoretically an application can drop and reconnect to the port through TCP (the socket libraries don't provide a lot of support for this, but it's part of the TCP protocol).
0
3,463
0
3
2009-04-21T07:11:00.000
python,tcp,monitoring,port
Monitoring a tcp port
1
4
5
773,207
0
0
0
For fun, I've been toying around with writing a load balancer in python and have been trying to figure the best (correct?) way to test if a port is available and the remote host is still there. I'm finding that, once connected, it becomes difficult to tell when the remote host goes down. I've turned keep alive on, but can't get it to recognize a downed connection sooner than a minute (I realize polling more often than a minute might be overkill, but lets say I wanted to), even after setting the various TCP_KEEPALIVE options to their lowest. When I use nonblocking sockets, I've noticed that a recv() will return an error ("resource temporarily unavailable") when it reads from a live socket, but returns "" when reading from a dead one (send and recv of 0 bytes, which might be the cause?). That seems like an odd way to test for it connected, though, and makes it impossible to tell if the connected died but after sending some data. Aside from connecting/disconnecting for every check, is there something I can do? Can I manually send a tcp keepalive, or can I establish a lower level connection that will let me test the connectivity without sending real data the remote server would potentially process?
false
771,399
0
0
0
0
ping was invented for that purpose. You might also be able to send malformed TCP packets to your destination. For example, in the TCP headers there is a flag for acknowledging the end of transmission: the FIN message. If you send a message with ACK and FIN set, the remote host should complain with a return packet, and you'll be able to evaluate the round-trip time.
0
3,463
0
3
2009-04-21T07:11:00.000
python,tcp,monitoring,port
Monitoring a tcp port
1
4
5
771,415
0
0
0
Hi, I have a small comment shoutbox-type CGI process running on a server, and currently when someone leaves a comment I simply format that comment into HTML, i.e. <p class="title">$title</p> <p class="comment">$comment</p>, and store it in a flat file. Would it be faster and acceptably low in LOC to reimplement the storage in XML or JSON, in a simple spec of my own, or stick with the simple HTML route? I don't want to use a relational database for this.
true
777,090
1.2
1
0
3
If a flat file is fast enough, then go with that, since it's very simple and accessible. Storing as XML and JSON but still using a flat file probably is very comparable in performance. You might want to consider (ignore this if you just left it out of your question) sanitizing/filtering the text, so that users can't break your HTML by e.g. entering "</p>" in the comment text.
0
331
0
1
2009-04-22T12:56:00.000
python,xml,json
fastest way to store comment data python
1
1
3
777,119
0
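Building on the sanitizing point in the answer above, a minimal sketch of escaping the user-supplied fields before appending them to the flat file. The file name is made up, and cgi.escape is the Python 2-era helper (html.escape in newer versions).

import cgi

def save_comment(title, comment, path='comments.html'):
    # Escape &, <, > and quotes so a comment containing "</p>" cannot break the markup.
    safe_title = cgi.escape(title, quote=True)
    safe_comment = cgi.escape(comment, quote=True)
    f = open(path, 'a')
    try:
        f.write('<p class="title">%s</p>\n' % safe_title)
        f.write('<p class="comment">%s</p>\n' % safe_comment)
    finally:
        f.close()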
0
0
Other than basic python syntax, what other key areas should I learn to get a website live? Is there a web.config in the python world? Which libraries handle things like authentication? or is that all done manually via session cookies and database tables? Are there any web specific libraries? Edit: sorry! I am well versed in asp.net, I want to branch out and learn Python, hence this question (sorry, terrible start to this question I know).
false
777,924
0
1
0
0
Oh, golly. Look, this is gonna be real hard to answer because, read as you wrote it, you're missing a lot of steps. Like, you need a web server, a design, some HTML, and so on. Are you building from the ground up? Asking about Python makes me suspect you may be using something like Zope.
0
177
0
0
2009-04-22T15:46:00.000
python
Other than basic python syntax, what other key areas should I learn to get a website live?
1
1
5
777,952
0
0
0
I am writing an rather simple http web server in python3. The web server needs to be simple - only basic reading from config files, etc. I am using only standard libraries and for now it works rather ok. There is only one requirement for this project, which I can't implement on my own - virtual hosts. I need to have at least two virtual hosts, defined in config files. The problem is, that I can't find a way how can I implement them in python. Does anyone have any guides, articles, maybe some simple implementation how can this be done? I would be grateful for any help.
false
781,466
1
0
0
10
Virtual hosts work by obeying the Host: header in the HTTP request. Just read the headers of the request, and take action based on the value of the Host: header
1
3,802
0
3
2009-04-23T12:20:00.000
python,http,python-3.x,virtualhost
Python3 Http Web Server: virtual hosts
1
1
2
781,474
0
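To illustrate the Host:-header dispatch described in the answer above, here is a minimal Python 3 sketch using only the standard library. The host-to-docroot mapping, port and response body are all made up; a real server would read the mapping from its config files and serve actual files.

from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical mapping of virtual host names to document roots.
VHOSTS = {'site-a.example.com': '/srv/site-a', 'site-b.example.com': '/srv/site-b'}

class VHostHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The Host: header names the virtual host the client asked for.
        host = self.headers.get('Host', '').split(':')[0]
        docroot = VHOSTS.get(host)
        if docroot is None:
            self.send_error(404, 'Unknown virtual host')
            return
        body = ('Would serve %s%s\n' % (docroot, self.path)).encode('utf-8')
        self.send_response(200)
        self.send_header('Content-Type', 'text/plain; charset=utf-8')
        self.send_header('Content-Length', str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == '__main__':
    HTTPServer(('', 8080), VHostHandler).serve_forever()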
1
0
Of course an HTML page can be parsed using any number of python parsers, but I'm surprised that there don't seem to be any public parsing scripts to extract meaningful content (excluding sidebars, navigation, etc.) from a given HTML doc. I'm guessing it's something like collecting DIV and P elements and then checking them for a minimum amount of text content, but I'm sure a solid implementation would include plenty of things that I haven't thought of.
false
796,490
0.039979
0
0
1
What is meaningful and what is not depends on the semantics of the page. If the semantics are crappy, your code won't "guess" what is meaningful. I use readability, which you linked in the comment, and I see that on many pages I try to read it does not provide any result, let alone a decent one. If someone puts the content in a table, you're doomed. Try readability on a phpBB forum and you'll see what I mean. If you want to do it, go with a regexp on <p></p>, or parse the DOM.
0
3,581
0
8
2009-04-28T06:40:00.000
python,html,parsing,semantics,html-content-extraction
python method to extract content (excluding navigation) from an HTML page
1
1
5
796,530
0
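If you go the parse-the-markup route mentioned in the answer above, a rough sketch with the standard-library parser could look like this. The 80-character threshold for "meaningful" paragraphs is arbitrary, and real pages will need more heuristics (nested tags, divs, tables).

from HTMLParser import HTMLParser  # html.parser in Python 3

class ParagraphExtractor(HTMLParser):
    def __init__(self, min_length=80):
        HTMLParser.__init__(self)
        self.min_length = min_length  # arbitrary cut-off for "meaningful" text
        self.in_p = False
        self.current = []
        self.paragraphs = []

    def handle_starttag(self, tag, attrs):
        if tag == 'p':
            self.in_p = True
            self.current = []

    def handle_data(self, data):
        if self.in_p:
            self.current.append(data)

    def handle_endtag(self, tag):
        if tag == 'p' and self.in_p:
            text = ''.join(self.current).strip()
            if len(text) >= self.min_length:
                self.paragraphs.append(text)
            self.in_p = False

# extractor = ParagraphExtractor(); extractor.feed(html); extractor.paragraphs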
1
0
If I have some xml containing things like the following mediawiki markup: " ...collected in the 12th century, of which [[Alexander the Great]] was the hero, and in which he was represented, somewhat like the British [[King Arthur|Arthur]]" what would be the appropriate arguments to something like: re.findall([[__?__]], article_entry) I am stumbling a bit on escaping the double square brackets, and getting the proper link for text like: [[Alexander of Paris|poet named Alexander]]
false
809,837
0.049958
0
0
1
RegExp: \w+( \w+)+(?=]])
input: [[Alexander of Paris|poet named Alexander]]  output: poet named Alexander
input: [[Alexander of Paris]]  output: Alexander of Paris
0
1,132
0
3
2009-05-01T01:11:00.000
python,regex,mediawiki
Python regex for finding contents of MediaWiki markup links
1
1
4
809,900
0
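For the bracket-escaping part of the question, one possible pattern (not the one given in the answer above) captures both the link target and the optional piped label; the backslashes are what escape the literal square brackets.

import re

LINK_RE = re.compile(r'\[\[([^]|]+)(?:\|([^]]+))?\]\]')

def wiki_links(text):
    # Returns (target, label) pairs; the label falls back to the target for plain [[Foo]] links.
    return [(target, label or target) for target, label in LINK_RE.findall(text)]

sample = ('...of which [[Alexander the Great]] was the hero, somewhat like the '
          'British [[King Arthur|Arthur]]')
# wiki_links(sample) -> [('Alexander the Great', 'Alexander the Great'), ('King Arthur', 'Arthur')]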
1
0
I'm trying to save an image from a website using Selenium server and the Python client. I know the image's URL, but I can't find the code to save it, either when it's the document itself or when it's embedded in the current browser session. The workaround I've found so far is to save a screenshot of the page (there are two Selenium methods for doing just that), but I want the original image. I don't mind fiddling with clicking menu options etc., but I couldn't find out how. Thanks
false
816,704
0.119427
0
0
3
To do this the way you want (to actually capture the content sent down to the browser) you'd need to modify Selenium RC's proxy code (see ProxyHandler.java) and store the files locally on the disk in parallel to sending the response back to the browser.
0
9,916
0
9
2009-05-03T09:51:00.000
python,selenium
save an image with selenium & firefox
1
2
5
827,891
0
1
0
I'm trying to save an image from a website using Selenium server and the Python client. I know the image's URL, but I can't find the code to save it, either when it's the document itself or when it's embedded in the current browser session. The workaround I've found so far is to save a screenshot of the page (there are two Selenium methods for doing just that), but I want the original image. I don't mind fiddling with clicking menu options etc., but I couldn't find out how. Thanks
false
816,704
-0.039979
0
0
-1
How about going to the image URL and then taking a screenshot of the page? Firefox displays the image full screen. Hope this helps.
0
9,916
0
9
2009-05-03T09:51:00.000
python,selenium
save an image with selenium & firefox
1
2
5
816,777
0
0
0
Given an ip address (say 192.168.0.1), how do I check if it's in a network (say 192.168.0.0/24) in Python? Are there general tools in Python for ip address manipulation? Stuff like host lookups, ip adddress to int, network address with netmask to int and so on? Hopefully in the standard Python library for 2.5.
false
819,355
0.014285
0
0
2
# This works properly without the weird byte-by-byte handling
import socket
import struct

def addressInNetwork(ip, net):
    '''Is an address in a network?'''
    # Convert addresses to host order, so shifts actually make sense
    ip = struct.unpack('>L', socket.inet_aton(ip))[0]
    netaddr, bits = net.split('/')
    netaddr = struct.unpack('>L', socket.inet_aton(netaddr))[0]
    # Must shift left an all-ones value: /32 = zero shift, /0 = 32-bit shift left
    netmask = (0xffffffff << (32 - int(bits))) & 0xffffffff
    # There's no need to mask the network address, as long as it's a proper network address
    return (ip & netmask) == netaddr
0
174,240
0
123
2009-05-04T08:59:00.000
python,networking,ip-address,cidr
How can I check if an ip is in a network in Python?
1
1
28
10,053,031
0
0
0
I'm using wx.FileDialog in a wxPython 2.8.8.0 application, under Xubuntu 8.10.. My problem is that this dialog isn't network-aware, so I can't browse Samba shares. I see that this problem plagues other applications too (Firefox, Audacious...) so I'd like to ask where I could find informations on how to make it work. Is that dialog supposed to be already network-aware? Am I missing something? Some library maybe? Or should I write my own implementation? Many thanks!
true
825,724
1.2
0
0
0
Robin Dunn himself told me that it's using the "native" GTK file dialog, just like the other apps, so there isn't anything that wx can do about it. So as a workaround I ended up installing gvfs-fuse and browsing the network through $HOME/.gvfs. A bit clunky, but it works.
0
276
0
1
2009-05-05T16:22:00.000
linux,ubuntu,wxpython
Network-aware wx.FileDialog
1
1
1
876,524
1
0
0
Many python libraries, even recently written ones, use httplib2 or the socket interface to perform networking tasks. Those are obviously easier to code on than Twisted due to their blocking nature, but I think this is a drawback when integrating them with other code, especially GUI one. If you want scalability, concurrency or GUI integration while avoiding multithreading, Twisted is then a natural choice. So I would be interested in opinions in those matters: Should new networking code (with the exception of small command line tools) be written with Twisted? Would you mix Twisted, http2lib or socket code in the same project? Is Twisted pythonic for most libraries (it is more complex than alternatives, introduce a dependency to a non-standard package...)? Edit: please let me phrase this in another way. Do you feel writing new library code with Twisted may add a barrier to its adoption? Twisted has obvious benefits (especially portability and scalability as stated by gimel), but the fact that it is not a core python library may be considered by some as a drawback.
true
846,950
1.2
0
0
0
Should new networking code (with the exception of small command line tools) be written with Twisted? Maybe. It really depends. Sometimes its just easy enough to wrap the blocking calls in their own thread. Twisted is good for large scale network code. Would you mix Twisted, http2lib or socket code in the same project? Sure. But just remember that Twisted is single threaded, and that any blocking call in Twisted will block the entire engine. Is Twisted pythonic for most libraries (it is more complex than alternatives, introduce a dependency to a non-standard package...)? There are many Twisted zealots that will say it belongs in the Python standard library. But many people can implement decent networking code with asyncore/asynchat.
0
1,969
1
6
2009-05-11T06:30:00.000
python,networking,sockets,twisted,httplib2
Is Twisted an httplib2/socket replacement?
1
1
2
847,014
0
1
0
I want a Flex client to communicate with GAE. I am able to send XML from GAE to Flex, but how should I post from Flex 3 to the Python code on App Engine? Can anyone give me a hint about how to send login information from Flex to Python, or suggest some examples? Please provide me some help. Regards, Radhika
false
854,353
0
0
0
0
Do an HTTP POST from Flex to your App Engine app using the URLRequest class.
0
1,782
1
0
2009-05-12T19:11:00.000
python,apache-flex,google-app-engine
How to establish communication between flex and python code build on Google App Engine
1
1
2
854,403
0
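On the App Engine side, a sketch of a handler that could receive the POST sent by Flex's URLRequest. It uses the webapp framework App Engine shipped at the time; the /login route, field names, credential check and XML reply are all illustrative, not part of any fixed contract with Flex.

from google.appengine.ext import webapp
from google.appengine.ext.webapp.util import run_wsgi_app

def check_credentials(username, password):
    # Stand-in for a real credential check (datastore lookup, Users API, ...).
    return bool(username) and bool(password)

class LoginHandler(webapp.RequestHandler):
    def post(self):
        # Field names must match the variables Flex sets on the URLRequest.
        username = self.request.get('username')
        password = self.request.get('password')
        ok = check_credentials(username, password)
        self.response.headers['Content-Type'] = 'text/xml'
        self.response.out.write('<result ok="%s"/>' % ('true' if ok else 'false'))

application = webapp.WSGIApplication([('/login', LoginHandler)], debug=True)

if __name__ == '__main__':
    run_wsgi_app(application)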
0
0
Nice to meet you. I am writing a socket program in Python in which a Linux machine sends a message and a Windows machine receives it, but the following error occurs and it cannot connect. The Linux and Windows machines are connected on the same network. socket.error: (111, 'Connection refused') Could you help me?
false
881,332
0.379949
0
0
2
111 means the listener is down/not accepting connections - restart the Windows app that should be listening for connections, or disconnect any already-bound clients.
0
619
0
0
2009-05-19T07:00:00.000
python,sockets
Error occurs when I connect with socket in Python
1
1
1
881,349
0
0
0
Building an SMTP client in Python which can send mail, and also show that the mail has been received through any mail service, for example Gmail.
false
892,196
0
1
0
0
Depends what you mean by "received". It's possible to verify "delivery" of a message to a server but there is no 100% reliable guarantee it actually ended up in a mailbox. smtplib will throw an exception on certain conditions (like the remote end reporting user not found) but just as often the remote end will accept the mail and then either filter it or send a bounce notice at a later time.
0
769
0
0
2009-05-21T10:04:00.000
python,smtp
How would one build an smtp client in python?
1
1
3
892,264
0
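To illustrate how far "received" can be checked, a sketch with smtplib: an exception during sendmail() means the server rejected the message, while a clean return only means the server accepted it (a bounce may still arrive later), as the answer above points out. Server, port and addresses are placeholders.

import smtplib
from email.mime.text import MIMEText

msg = MIMEText('Test body')
msg['Subject'] = 'Test'
msg['From'] = 'me@example.com'
msg['To'] = 'you@example.com'

server = smtplib.SMTP('smtp.example.com', 587)
try:
    server.starttls()
    server.login('me@example.com', 'app-password')
    # Raises SMTPRecipientsRefused / SMTPException on outright rejection.
    server.sendmail('me@example.com', ['you@example.com'], msg.as_string())
    print('Message accepted by the server')
except smtplib.SMTPException as exc:
    print('Delivery to the server failed: %s' % exc)
finally:
    server.quit()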
1
0
I want to create a simple online poll application. I have created a backend in python that handles vote tracking, poll display, results display and admin setup. However, if I wanted a third party to be able to embed the poll in their website, what would be the recommended way of doing so? I would love to be able to provide a little javascript to drop into the third parties web page, but I can't use javascript because it would require a cross-domain access. What approach would provide an easy general solution for third parties?
false
903,104
0.099668
0
0
1
Make your app into a Google Gadget, Open Social gadget, or other kind of gadgets -- these are all designed to be embeddable into third-party pages with as little fuss as possible.
0
552
0
1
2009-05-24T04:53:00.000
javascript,python,cross-domain
How to embed a Poll in a Web Page
1
1
2
903,112
0
1
0
I'm trying to write a script which can automatically download gameplay videos. The webpages look like dota.sgamer.com/Video/Detail/402 and www.wfbrood.com/movie/spl2009/movie_38214.html, they have flv player embedded in the flash plugin. Is there any library to help me find out the exact flv urls? or any other ideas to get it? Many thanks for your replies
false
905,403
0
0
0
0
If the embedded player makes use of some variable where the FLV path is set, then you can download it. If not, I doubt you'll find something that does it "automatically", since every site makes its own player and identifies the file by id, not by path, which makes it hard to know where the FLV file is.
0
382
0
1
2009-05-25T05:07:00.000
python,download,flv
Is there any library to find out urls of embedded flvs in a webpage?
1
1
2
905,451
0
0
0
I have this: en.wikipedia.org/w/api.php?action=login&lgname=user&lgpassword=password But it doesn't work because it is a GET request. What would the POST request version of this be? Cheers!
false
909,929
0
0
0
0
Since your sample is in PHP, use $_REQUEST; this holds the contents of both $_GET and $_POST.
0
276
0
0
2009-05-26T10:11:00.000
python,forms,post,get
Changing a get request to a post in python?
1
1
3
909,975
0
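For the Python side of the question (the answer above covers PHP), a sketch of sending the same parameters as a POST body with the urllib2 of that era; in Python 3 the equivalents live in urllib.request and urllib.parse. Giving urlopen() a data argument is what turns the request into a POST.

import urllib
import urllib2

url = 'https://en.wikipedia.org/w/api.php'
data = urllib.urlencode({'action': 'login', 'lgname': 'user', 'lgpassword': 'password'})

# With a data argument the request is a POST; without one it would be a GET.
response = urllib2.urlopen(url, data)
print(response.read())

Note that the MediaWiki login flow also involves tokens and cookies, which this snippet ignores.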
0
0
Hey, I have a webpage for searching a database. I would like to be able to implement cookies using Python to store what a user searches for and provide them with a recently-searched field when they return. Is there a way to implement this using the Python Cookie library?
false
920,278
0.099668
0
0
1
Usually, we do the following. Use a framework. Establish a session. Ideally, ask for a username of some kind. If you don't want to ask for names or anything, you can try using the browser's IP address as the key for the session (this can turn into a nightmare, but you can try it). Using the session identification (username or IP address), save the searches in a database on your server. When the person logs in again, retrieve their query information from your local database. Moral of the story: don't trust the cookie to have anything in it but session identification. And even then, it will get hijacked either on purpose or accidentally. Intentional hijacking is the way one person poses as another. Accidental hijacking occurs when multiple people share the same IP address (because they share the same computer).
0
1,329
0
0
2009-05-28T10:57:00.000
python,cookies
Using cookies with python to store searches
1
1
2
920,727
0
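Following the advice above to keep nothing but a session identifier in the cookie, a sketch for a CGI script using the standard-library Cookie module (http.cookies in Python 3). The cookie name and the idea of keying recent searches by this id in a server-side store are assumptions.

import os
import uuid
import Cookie  # http.cookies in Python 3

def get_or_create_session_id():
    cookie = Cookie.SimpleCookie(os.environ.get('HTTP_COOKIE', ''))
    if 'session_id' in cookie:
        return cookie['session_id'].value, ''   # returning visitor, nothing to set
    session_id = uuid.uuid4().hex               # brand-new visitor
    out = Cookie.SimpleCookie()
    out['session_id'] = session_id
    out['session_id']['path'] = '/'
    return session_id, out.output() + '\r\n'    # "Set-Cookie: ..." header to emit

# session_id, set_cookie_header = get_or_create_session_id()
# Recent searches are then stored server-side keyed by session_id (database, shelve, ...).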
0
0
I know there is ftplib for ftp, shutil for local files, what about NFS? I know urllib2 can get files via HTTP/HTTPS/FTP/FTPS, but it can't put files. If there is a uniform library that automatically detects the protocol (FTP/NFS/LOCAL) with URI and deals with file transfer (get/put) transparently, it's even better, does it exist?
false
925,716
0.099668
0
0
1
Have a look at KDE IOSlaves. They can manage all the protocols you describe, plus a few others (samba, ssh, ...). You can instantiate IOSlaves through PyKDE, or if that dependency is too big, you can probably manage the ioslave from Python with the subprocess module.
0
1,426
0
3
2009-05-29T12:22:00.000
python,file,ftp,networking,nfs
Is there a uniform python library to transfer files using different protocols
1
1
2
926,044
0
0
0
I am trying to automate an app using Python. I need help sending keyboard commands through Python. I am using a PowerBook G4.
false
939,746
0
0
0
0
To the best of my knowledge, python does not contain the ability to simulate keystrokes. You can however use python to call a program which has the functionality that you need for OS X. You could also write said program using Objective C most likely. Or you could save yourself the pain and use Automator. Perhaps if you posted more details about what you were automating, I could add something further.
0
665
0
1
2009-06-02T14:02:00.000
python
How i can send the commands from keyboards using python. I am trying to automate mac app (GUI)
1
1
2
947,222
0
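In the spirit of the answer's "call a program which has the functionality": a sketch that shells out to osascript so System Events types the keys. This assumes assistive access is enabled in the OS X accessibility settings, and the text being typed is just an example.

import subprocess

def send_keystrokes(text):
    # Ask System Events (via AppleScript) to type the text into the frontmost app.
    script = 'tell application "System Events" to keystroke "%s"' % text
    subprocess.call(['osascript', '-e', script])

# send_keystrokes('hello')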
1
0
Goal: simple browser app, for navigating files on a web server, in a tree view. Background: Building a web site as a learning experience, w/ Apache, mod_python, Python code. (No mod_wsgi yet.) What tools should I learn to write the browser tree? I see JavaScript, Ajax, neither of which I know. Learn them? Grab a JS example from the web and rework? Can such a thing be built in raw HTML? Python I'm advanced beginner but I realize that's server side. If you were going to build such a toy from scratch, what would you use? What would be the totally easy, cheesy way, the intermediate way, the fully professional way? No Django yet please -- This is an exercise in learning web programming nuts and bolts.
false
941,638
0
0
0
0
set "Indexes" option to the directory in the apache config. To learn how to build webapps in python, learn django.
0
552
0
2
2009-06-02T20:11:00.000
javascript,python,html,web-applications
How to construct a web file browser?
1
2
4
943,612
0
1
0
Goal: simple browser app, for navigating files on a web server, in a tree view. Background: Building a web site as a learning experience, w/ Apache, mod_python, Python code. (No mod_wsgi yet.) What tools should I learn to write the browser tree? I see JavaScript, Ajax, neither of which I know. Learn them? Grab a JS example from the web and rework? Can such a thing be built in raw HTML? Python I'm advanced beginner but I realize that's server side. If you were going to build such a toy from scratch, what would you use? What would be the totally easy, cheesy way, the intermediate way, the fully professional way? No Django yet please -- This is an exercise in learning web programming nuts and bolts.
false
941,638
0.049958
0
0
1
If you want to make an interactive browser, you have to learn JS and Ajax. If you want to build a browser based only on links, Python would be enough.
0
552
0
2
2009-06-02T20:11:00.000
javascript,python,html,web-applications
How to construct a web file browser?
1
2
4
941,664
0
0
0
I use python 2.4.1 on Linux, and a python package written inside the company I work in, for establishing a connection between 2 hosts for test purposes. Upon establishing the connection the side defined as the client side failed when calling socket.connect with the correct parameters (I checked) with the error code 111. After searching the web for this error means, I learned that it means that the connection was actively refused. But the code in the package for establishing the connection is supposed to deal with it, only it knows 10061 as the error code for this same error: The connection is refused. Could it be that there are identical error codes for the same logical errors? Could it be that 111 is a system error of the Linux OS, as 10061 is python's or even another OS? Even so, isn't the entire concept of error codes to unify the logical errors with the same codes? Should I simply add the 111 error code to the handling condition?
true
961,465
1.2
0
0
6
It appears Python is exposing the error code from the OS - the interpretation of the code is OS-dependent. 111 is ECONNREFUSED on many Linux systems, and on Cygwin. 146 is ECONNREFUSED on Solaris. 10061 is WSAECONNREFUSED in winerror.h - it's the Windows Socket API's version of ECONNREFUSED. No doubt on other systems, it's different again. The correct way to handle this is use symbolic comparisons based on the OS's definition of ECONNREFUSED; that's the way you do it in C, for example. In other words, have a constant called ECONNREFUSED that has the value of ECONNREFUSED for that platform, in a platform-specific library (which will be necessary to link to the OS's socket primitives in any case), and compare error codes with the ECONNREFUSED constant, rather than magic numbers. I don't know what Python's standard approach to OS error codes is. I suspect it's not terribly well thought out.
0
867
0
2
2009-06-07T08:08:00.000
python,error-handling,sockets
Identical Error Codes
1
1
1
961,484
0
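Python does expose the OS's symbolic names through the errno module, so the symbolic comparison recommended in the answer above can be written roughly like this (Python 2 syntax, matching the 2.4 interpreter in the question; host and port are placeholders, and the WSA fallback is a guess at covering older Windows builds where the socket layer reports 10061 directly).

import errno
import socket

REFUSED = (errno.ECONNREFUSED, getattr(errno, 'WSAECONNREFUSED', errno.ECONNREFUSED))

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    s.connect(('192.0.2.1', 12345))   # placeholder host/port
except socket.error, e:
    if e.args[0] in REFUSED:          # matches 111, 146 or 10061 without hard-coding them
        print 'connection refused'
    else:
        raise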
1
0
I have a task to download Gbs of data from a website. The data is in form of .gz files, each file being 45mb in size. The easy way to get the files is use "wget -r -np -A files url". This will donwload data in a recursive format and mirrors the website. The donwload rate is very high 4mb/sec. But, just to play around I was also using python to build my urlparser. Downloading via Python's urlretrieve is damm slow, possible 4 times as slow as wget. The download rate is 500kb/sec. I use HTMLParser for parsing the href tags. I am not sure why is this happening. Are there any settings for this. Thanks
false
974,741
0
0
0
0
There shouldn't be a difference really. All urlretrieve does is make a simple HTTP GET request. Have you taken out your data processing code and done a straight throughput comparison of wget vs. pure python?
0
37,225
0
9
2009-06-10T10:18:00.000
python,urllib2,wget
wget Vs urlretrieve of python
1
5
10
975,759
0
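A minimal version of the throughput comparison suggested in the answer above: time a bare urlretrieve with no HTML parsing at all, then compare against wget fetching the same URL. The URL and output filename are placeholders.

import os
import time
import urllib

url = 'http://example.com/data/file-001.gz'   # placeholder

start = time.time()
filename, _ = urllib.urlretrieve(url, 'file-001.gz')
elapsed = time.time() - start

size = os.path.getsize(filename)
print 'fetched %d bytes in %.1f s (%.1f KB/s)' % (size, elapsed, size / 1024.0 / elapsed)

If this bare download is as fast as wget, the slowdown is in the surrounding parsing code rather than in urlretrieve itself.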
1
0
I have a task to download Gbs of data from a website. The data is in form of .gz files, each file being 45mb in size. The easy way to get the files is use "wget -r -np -A files url". This will donwload data in a recursive format and mirrors the website. The donwload rate is very high 4mb/sec. But, just to play around I was also using python to build my urlparser. Downloading via Python's urlretrieve is damm slow, possible 4 times as slow as wget. The download rate is 500kb/sec. I use HTMLParser for parsing the href tags. I am not sure why is this happening. Are there any settings for this. Thanks
false
974,741
0
0
0
0
Please show us some code. I'm pretty sure that it has to do with your code and not with urlretrieve. I've worked with it in the past and never had any speed-related issues.
0
37,225
0
9
2009-06-10T10:18:00.000
python,urllib2,wget
wget Vs urlretrieve of python
1
5
10
976,135
0
1
0
I have a task to download Gbs of data from a website. The data is in form of .gz files, each file being 45mb in size. The easy way to get the files is use "wget -r -np -A files url". This will donwload data in a recursive format and mirrors the website. The donwload rate is very high 4mb/sec. But, just to play around I was also using python to build my urlparser. Downloading via Python's urlretrieve is damm slow, possible 4 times as slow as wget. The download rate is 500kb/sec. I use HTMLParser for parsing the href tags. I am not sure why is this happening. Are there any settings for this. Thanks
false
974,741
0
0
0
0
You can use wget -k (--convert-links) to rewrite the links in the downloaded pages so they work relative to the local copies.
0
37,225
0
9
2009-06-10T10:18:00.000
python,urllib2,wget
wget Vs urlretrieve of python
1
5
10
2,350,655
0
1
0
I have a task to download Gbs of data from a website. The data is in form of .gz files, each file being 45mb in size. The easy way to get the files is use "wget -r -np -A files url". This will donwload data in a recursive format and mirrors the website. The donwload rate is very high 4mb/sec. But, just to play around I was also using python to build my urlparser. Downloading via Python's urlretrieve is damm slow, possible 4 times as slow as wget. The download rate is 500kb/sec. I use HTMLParser for parsing the href tags. I am not sure why is this happening. Are there any settings for this. Thanks
false
974,741
0.019997
0
0
1
Since Python suggests using urllib2 instead of urllib, I ran a test between urllib2.urlopen and wget. The result is that it takes nearly the same time for both of them to download the same file. Sometimes, urllib2 performs even better. The advantage of wget lies in a dynamic progress bar that shows the percentage finished and the current download speed while transferring. The file size in my test was 5 MB. I haven't used any cache module in Python, and I am not aware of how wget works when downloading a big file.
0
37,225
0
9
2009-06-10T10:18:00.000
python,urllib2,wget
wget Vs urlretrieve of python
1
5
10
7,782,898
0
1
0
I have a task to download Gbs of data from a website. The data is in form of .gz files, each file being 45mb in size. The easy way to get the files is use "wget -r -np -A files url". This will donwload data in a recursive format and mirrors the website. The donwload rate is very high 4mb/sec. But, just to play around I was also using python to build my urlparser. Downloading via Python's urlretrieve is damm slow, possible 4 times as slow as wget. The download rate is 500kb/sec. I use HTMLParser for parsing the href tags. I am not sure why is this happening. Are there any settings for this. Thanks
false
974,741
0.019997
0
0
1
Maybe you can wget and then inspect the data in Python?
0
37,225
0
9
2009-06-10T10:18:00.000
python,urllib2,wget
wget Vs urlretrieve of python
1
5
10
974,809
0
1
0
I was wondering when dealing with a web service API that returns XML, whether it's better (faster) to just call the external service each time and parse the XML (using ElementTree) for display on your site or to save the records into the database (after parsing it once or however many times you need to each day) and make database calls instead for that same information.
false
978,581
0.066568
0
0
3
Consuming the web services is more efficient because there are a lot more things you can do to scale your web services and web server (via caching, etc.). By consuming them through the middle layer, you also have the option to change the returned data format (e.g. you can decide to use JSON rather than XML). Scaling a database is much harder (involving replication, etc.), so in general, reduce hits on the DB if you can.
0
856
0
1
2009-06-10T23:09:00.000
python,mysql,xml,django,parsing
Is it more efficient to parse external XML or to hit the database?
1
9
9
978,590
0
1
0
I was wondering when dealing with a web service API that returns XML, whether it's better (faster) to just call the external service each time and parse the XML (using ElementTree) for display on your site or to save the records into the database (after parsing it once or however many times you need to each day) and make database calls instead for that same information.
false
978,581
0
0
0
0
It depends from case to case; you'll have to measure (or at least make an educated guess). You'll have to consider several things.
Web service:
- it might hit a database itself
- it can be cached
- it will introduce network latency and might be unreliable
- or it could be on the local network and faster than accessing even the local disk
DB:
- might be slow since it needs to access disk (although databases have internal caches, those are usually not targeted)
- should be reliable
Technology itself doesn't mean much in terms of speed - in one case the database parses SQL, in the other an XML parser parses XML, and the database is usually accessed via a socket as well, so you have both parsing and network in either case. Caching data in your application, if applicable, is probably a good idea.
0
856
0
1
2009-06-10T23:09:00.000
python,mysql,xml,django,parsing
Is it more efficient to parse external XML or to hit the database?
1
9
9
978,717
0
1
0
I was wondering when dealing with a web service API that returns XML, whether it's better (faster) to just call the external service each time and parse the XML (using ElementTree) for display on your site or to save the records into the database (after parsing it once or however many times you need to each day) and make database calls instead for that same information.
false
978,581
0.022219
0
0
1
There is not enough information to be able to say for sure in the general case. Why don't you do some tests and find out? Since it sounds like you are using Python, you will probably want to use the timeit module. Some things that could affect the result:
- performance of the web service you are using
- reliability of the web service you are using
- distance between the servers
- amount of data being returned
I would guess that if the data is cacheable, a cached version will be faster, but that does not necessarily mean using a local RDBMS; it might mean something like memcached or an in-memory cache in your application.
0
856
0
1
2009-06-10T23:09:00.000
python,mysql,xml,django,parsing
Is it more efficient to parse external XML or to hit the database?
1
9
9
978,603
0
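A sketch of the measurement suggested in the answer above, using timeit to put a number on each option. The service URL is a placeholder and the database callable is a stub where the real ORM or SQL query would go; run both against realistic data before drawing conclusions.

import timeit
import urllib2
from xml.etree import ElementTree

SERVICE_URL = 'http://api.example.com/records.xml'   # placeholder

def via_webservice():
    xml = urllib2.urlopen(SERVICE_URL).read()
    return ElementTree.fromstring(xml)

def via_database():
    pass  # stand-in for the equivalent database query, e.g. Record.objects.all()

for name, func in (('web service', via_webservice), ('database', via_database)):
    seconds = timeit.timeit(func, number=20)
    print '%-12s %.3f s for 20 calls' % (name, seconds)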
1
0
I was wondering when dealing with a web service API that returns XML, whether it's better (faster) to just call the external service each time and parse the XML (using ElementTree) for display on your site or to save the records into the database (after parsing it once or however many times you need to each day) and make database calls instead for that same information.
false
978,581
0
0
0
0
As a few people have said, it depends, and you should test it. Often external services are slow, and caching them locally (in a database in memory, e.g., with memcached) is faster. But perhaps not. Fortunately, it's cheap and easy to test.
0
856
0
1
2009-06-10T23:09:00.000
python,mysql,xml,django,parsing
Is it more efficient to parse external XML or to hit the database?
1
9
9
978,773
0
1
0
I was wondering when dealing with a web service API that returns XML, whether it's better (faster) to just call the external service each time and parse the XML (using ElementTree) for display on your site or to save the records into the database (after parsing it once or however many times you need to each day) and make database calls instead for that same information.
false
978,581
-0.022219
0
0
-1
It sounds like you essentially want to cache results and are wondering if it's worth it. If so, I would NOT use a database (I assume you are thinking of a relational DB): RDBMSs are not good for caching, even though many use them that way. You don't need persistence or ACID. If the choice were between Oracle/MySQL and the external web service, I would start with just using the service. Instead, consider real caching systems, local or not (memcached, simple in-memory caches, etc.). Or if you must use a DB, use a key/value store; BDB works well. Store the response message in its serialized form (XML), try to fetch it from the cache, and if it's not there, fetch from the service and parse. Or if there's a convenient and more compact serialization, store and fetch that.
0
856
0
1
2009-06-10T23:09:00.000
python,mysql,xml,django,parsing
Is it more efficient to parse external XML or to hit the database?
1
9
9
979,028
0
1
0
I was wondering when dealing with a web service API that returns XML, whether it's better (faster) to just call the external service each time and parse the XML (using ElementTree) for display on your site or to save the records into the database (after parsing it once or however many times you need to each day) and make database calls instead for that same information.
false
978,581
0
0
0
0
Test definitely. As a rule of thumb, XML is good for communicating between apps, but once you have the data inside of your app, everything should go into a database table. This may not apply in all cases, but 95% of the time it has for me. Anytime I ever tried to store data any other way (ex. XML in a content management system) I ended up wishing I would have just used good old sprocs and sql server.
0
856
0
1
2009-06-10T23:09:00.000
python,mysql,xml,django,parsing
Is it more efficient to parse external XML or to hit the database?
1
9
9
978,864
0
1
0
I was wondering when dealing with a web service API that returns XML, whether it's better (faster) to just call the external service each time and parse the XML (using ElementTree) for display on your site or to save the records into the database (after parsing it once or however many times you need to each day) and make database calls instead for that same information.
true
978,581
1.2
0
0
4
Everyone is being very polite in answering this question: "it depends"... "you should test"... and so forth. True, the question does not go into great detail about the application and network topographies involved, but if the question is even being asked, then it's likely a) the DB is "local" to the application (on the same subnet, or the same machine, or in memory), and b) the webservice is not. After all, the OP uses the phrases "external service" and "display on your own site." The phrase "parsing it once or however many times you need to each day" also suggests a set of data that doesn't exactly change every second. The classic SOA myth is that the network is always available; going a step further, I'd say it's a myth that the network is always available with low latency. Unless your own internal systems are crap, sending an HTTP query across the Internet will always be slower than a query to a local DB or DB cluster. There are any number of reasons for this: number of hops to the remote server, outage or degradation issues that you can't control on the remote end, and the internal processing time for the remote web service application to analyze your request, hit its own persistence backend (aka DB), and return a result. Fire up your app. Do some latency and response times to your DB. Now do the same to a remote web service. Unless your DB is also across the Internet, you'll notice a huge difference. It's not at all hard for a competent technologist to scale a DB, or for you to completely remove the DB from caching using memcached and other paradigms; the latency between servers sitting near each other in the datacentre is monumentally less than between machines over the Internet (and more secure, to boot). Even if achieving this scale requires some thought, it's under your control, unlike a remote web service whose scaling and latency are totally opaque to you. I, for one, would not be too happy with the idea that the availability and responsiveness of my site are based on someone else entirely. Finally, what happens if the remote web service is unavailable? Imagine a world where every request to your site involves a request over the Internet to some other site. What happens if that other site is unavailable? Do your users watch a spinning cursor of death for several hours? Do they enjoy an Error 500 while your site borks on this unexpected external dependency? If you find yourself adopting an architecture whose fundamental features depend on a remote Internet call for every request, think very carefully about your application before deciding if you can live with the consequences.
0
856
0
1
2009-06-10T23:09:00.000
python,mysql,xml,django,parsing
Is it more efficient to parse external XML or to hit the database?
1
9
9
978,857
0
1
0
I was wondering when dealing with a web service API that returns XML, whether it's better (faster) to just call the external service each time and parse the XML (using ElementTree) for display on your site or to save the records into the database (after parsing it once or however many times you need to each day) and make database calls instead for that same information.
false
978,581
0.022219
0
0
1
It depends - who is calling the web service? Is the web service called every time the user hits the page? If that's the case I'd recommend introducing a caching layer of some sort - many web service API's throttle the amount of hits you can make per hour. Whether you choose to parse the cached XML on the fly or call the data from a database probably won't matter (unless we are talking enterprise scaling here). Personally, I'd much rather make a simple SQL call than write a DOM Parser (which is much more prone to exceptional scenarios).
0
856
0
1
2009-06-10T23:09:00.000
python,mysql,xml,django,parsing
Is it more efficient to parse external XML or to hit the database?
1
9
9
978,607
0
1
0
I was wondering when dealing with a web service API that returns XML, whether it's better (faster) to just call the external service each time and parse the XML (using ElementTree) for display on your site or to save the records into the database (after parsing it once or however many times you need to each day) and make database calls instead for that same information.
false
978,581
1
0
0
6
First off -- measure. Don't just assume that one is better or worse than the other. Second, if you really don't want to measure, I'd guess the database is a bit faster (assuming the database is relatively local compared to the web service). Network latency usually is more than parse time unless we're talking a really complex database or really complex XML.
0
856
0
1
2009-06-10T23:09:00.000
python,mysql,xml,django,parsing
Is it more efficient to parse external XML or to hit the database?
1
9
9
978,593
0
0
0
I tried using the ssl module in Python 2.6 but I was told that it wasn't available. After installing OpenSSL, I recompiled 2.6 but the problem persists. Any suggestions?
true
979,551
1.2
0
0
4
Did you install the OpenSSL development libraries? I had to install openssl-devel on CentOS, for example. On Ubuntu, sudo apt-get build-dep python2.5 did the trick (even for Python 2.6).
0
7,646
0
2
2009-06-11T05:56:00.000
python,ssl,openssl
Adding SSL support to Python 2.6
1
2
3
979,598
0