Dataset preview schema (column: dtype, observed value range or string length):

Web Development: int64, 0 to 1
Data Science and Machine Learning: int64, 0 to 1
Question: string, length 28 to 6.1k
is_accepted: bool, 2 classes
Q_Id: int64, 337 to 51.9M
Score: float64, -1 to 1.2
Other: int64, 0 to 1
Database and SQL: int64, 0 to 1
Users Score: int64, -8 to 412
Answer: string, length 14 to 7k
Python Basics and Environment: int64, 0 to 1
ViewCount: int64, 13 to 1.34M
System Administration and DevOps: int64, 0 to 1
Q_Score: int64, 0 to 1.53k
CreationDate: string, length 23 to 23
Tags: string, length 6 to 90
Title: string, length 15 to 149
Networking and APIs: int64, 1 to 1
Available Count: int64, 1 to 12
AnswerCount: int64, 1 to 28
A_Id: int64, 635 to 72.5M
GUI and Desktop Applications: int64, 0 to 1
1
0
Having great luck working with single-source feed parsing in Universal Feed Parser, but now I need to run multiple feeds through it and generate chronologically interleaved output (not RSS). Seems like I'll need to iterate through URLs and stuff every entry into a list of dictionaries, then sort that by the entry timestamps and take a slice off the top. That seems do-able, but pretty expensive resource-wise (I'll cache it aggressively for that reason). Just wondering if there's an easier way - an existing library that works with feedparser to do simple aggregation, for example. Sample code? Gotchas or warnings? Thanks.
false
1,496,067
0.099668
0
0
1
There is already a suggestion to store the data in a database, e.g. bsddb.btopen() or any RDBMS. Take a look at heapq.merge() and bisect.insort(), or use one of the B-tree implementations, if you'd like to merge the data in memory.
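A minimal sketch of the heapq.merge() approach from this answer, assuming the feedparser package the question already uses; the feed URLs are hypothetical, and the key/reverse arguments of heapq.merge() require Python 3.5 or newer.

```python
import heapq
from itertools import islice

import feedparser  # third-party "Universal Feed Parser" package

# Hypothetical feed URLs; replace with the real ones.
FEED_URLS = [
    "http://example.com/a.rss",
    "http://example.com/b.rss",
]

def newest_first(url):
    """Return one feed's entries sorted newest-first by publish time."""
    entries = [e for e in feedparser.parse(url).entries
               if e.get("published_parsed")]
    return sorted(entries, key=lambda e: e.published_parsed, reverse=True)

def top_entries(urls, limit=20):
    """Lazily interleave several pre-sorted feeds and keep the newest `limit`."""
    streams = [newest_first(u) for u in urls]
    merged = heapq.merge(*streams,
                         key=lambda e: e.published_parsed,
                         reverse=True)
    return list(islice(merged, limit))

for entry in top_entries(FEED_URLS):
    print(entry.get("published", ""), "-", entry.get("title", ""))
```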
0
1,280
0
0
2009-09-30T04:04:00.000
python,django
Aggregating multiple feeds with Universal Feed Parser
1
1
2
1,496,616
0
0
0
I've got a form, returned by Python mechanize Browser and got via forms() method. How can I perform XPath search inside form node, that is, among descendant nodes of the HTML form node? TIA Upd: How to save html code of the form?
true
1,509,404
1.2
0
0
1
By parsing the browser contents with lxml, which has xpath support.
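A small sketch of that idea: feed the HTML mechanize already fetched into lxml and run XPath restricted to the form node. The URL is hypothetical, and both mechanize and lxml are third-party packages.

```python
import mechanize          # third-party package from the question
import lxml.html          # third-party; provides XPath support

br = mechanize.Browser()
br.open("http://example.com/login")          # hypothetical URL

# Parse the HTML that mechanize has already downloaded.
doc = lxml.html.fromstring(br.response().read())

# Restrict the XPath search to descendants of the first <form> node.
form_node = doc.xpath("//form")[0]
for field in form_node.xpath(".//input"):
    print(field.get("name"))

# Serializing the node also covers the "save the form's HTML" follow-up.
form_html = lxml.html.tostring(form_node)
```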
0
596
0
0
2009-10-02T13:10:00.000
python,mechanize
How to search XPath inside Python ClientForm object?
1
1
1
1,509,434
0
0
0
I am making a community for web-comic artist who will be able to sync their existing website to this site. However, I am in debate for what CMS I should use: Drupal or Wordpress. I have heard great things about Drupal, where it is really aimed for Social Networking. I actually got to play a little bit in the back end of Drupal and it seemed quite complicated to me, but I am not going to give up to fully understand how Drupal works. As for Wordpress, I am very familiar with the Framework. I have the ability to extend it to do what I want, but I am hesitating because I think the framework is not built for communities (I think it may slow down in the future). I also have a unrelated question as well: Should I go with a Python CMS? I heard very great things about Python and how much better it is compare to PHP. Your advice is appreciated.
true
1,513,062
1.2
1
0
9
Difficult decision. Normally I would say 'definitely Drupal' without hesitation, as Drupal was build as a System for community sites from the beginning, whereas Wordpress still shows its heritage as a blogging solution, at least that's what I hear quite often. But then I'm working with Drupal all the time recently and haven't had a closer look at Wordpress for quite a while. That said, Drupal has grown into a pretty complex system over the years, so there is quite a learning curve for newcomers. Given that you are already familiar with Wordpress, it might be more efficient for you to go with that, provided it can do all that you need. So I would recommend Drupal, but you should probably get some opinions from people experienced with Wordpress concerning the possibility to turn it into a community site first. As for the Python vs. PHP CMS question, I'd say that the quality of a CMS is a function of the ability of its developers, the maturity of the system, the surrounding 'ecosystem', etc. and not of the particular language used to build it. (And discussions about the quality of one established language vs. another? Well - let's just not go there ;)
0
3,472
0
2
2009-10-03T07:17:00.000
python,wordpress,drupal,content-management-system,social-networking
Drupal or Wordpress CMS as a Social Network?
1
1
6
1,513,657
0
1
0
There is a Python mechanize object with a form with almost all values set, but not yet submitted. Now I want to fetch another page using cookies from mechanize instance, but without resetting the page, forms and so on, e.g. so that the values remain set (I just need to get body string of another page, nothing else). So is there a way to: Tell mechanize not to reset the page (perhaps, through UserAgentBase)? Make urllib2 use mechanize's cookie jar? NB: urllib2.HTTPCookieProcessor(self.br._ua_handlers["_cookies"].cookiejar) doesn't work Any other way to pass cookie to urllib?
false
1,513,823
0.132549
0
0
2
Some wild ideas: Fetch the second page before filling in the form? Or fetch the new page and then goBack()? Although maybe that will reset the values.
0
1,699
0
3
2009-10-03T14:08:00.000
python,mechanize
How to get a http page using mechanize cookies?
1
1
3
1,513,899
0
0
0
Can someone please tell me how to write non-blocking server code using the socket library alone. Thanks
false
1,515,686
0.099668
0
0
2
Why socket alone? It's so much simpler to use another standard library module, asyncore -- and if you can't, at the very least select! If you're constrained by your homework's condition to only use socket, then I hope you can at least add threading (or multiprocessing), otherwise you're seriously out of luck -- you can make sockets with timeout, but juggling timing-out sockets without the needed help from any of the other obvious standard library modules (to support either async or threaded serving) is a serious mess indeed-y...;-).
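A minimal sketch of the select-based option this answer mentions: a non-blocking echo server built on nothing but socket and select. The host and port are hypothetical, and a real server would also need to handle partial writes.

```python
import select
import socket

# Minimal non-blocking echo server using only socket + select.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("127.0.0.1", 9000))   # hypothetical host/port
server.listen(5)
server.setblocking(False)

sockets = [server]
while True:
    readable, _, errored = select.select(sockets, [], sockets, 1.0)
    for s in readable:
        if s is server:                      # new incoming connection
            conn, _addr = server.accept()
            conn.setblocking(False)
            sockets.append(conn)
        else:                                # data from an existing client
            data = s.recv(4096)
            if data:
                s.sendall(data)              # echo it back
            else:                            # client closed the connection
                sockets.remove(s)
                s.close()
    for s in errored:
        if s in sockets:
            sockets.remove(s)
        s.close()
```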
0
2,625
0
0
2009-10-04T05:35:00.000
python,sockets,nonblocking
Non Blocking Server in Python
1
1
4
1,515,698
0
0
0
I was trying to find out how I can go about verifying a self-signed certificate by a server in python. I could not find much data in google. I also want to make sure that the server url Thanks in advance for any insights.
false
1,519,074
0
0
0
0
It is impossible to verify a self-signed certificate because of its very nature: it is self-signed. You have to sign a certificate by some other trusted third party's certificate to be able to verify anything, and after this you can add that third party's certificate to the list of your trusted CAs and then you will be able to verify certificates signed by that certificate/CA. If you want recommendations about how to do this in Python, you should provide the name of the SSL library you are using, since there is a choice of SSL libraries for Python.
0
6,958
0
9
2009-10-05T09:37:00.000
python,ssl
Verifying peer in SSL using python
1
1
4
1,520,341
0
0
0
I'm new to Python and reading someone else's code: should urllib.urlopen() be followed by urllib.close()? Otherwise, one would leak connections, correct?
false
1,522,636
1
0
0
6
Strictly speaking, this is true. But in practice, once (if) urllib goes out of scope, the connection will be closed by the automatic garbage collector.
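A small sketch of closing explicitly instead of relying on garbage collection, in the Python 2 urllib style the question asks about; the URL is hypothetical.

```python
# Python 2 style, matching the question's urllib.urlopen usage.
import urllib
from contextlib import closing

# Explicit close:
f = urllib.urlopen("http://example.com/")   # hypothetical URL
try:
    data = f.read()
finally:
    f.close()

# Or let contextlib.closing() handle it:
with closing(urllib.urlopen("http://example.com/")) as f:
    data = f.read()
```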
0
53,885
0
73
2009-10-05T21:59:00.000
python,urllib
should I call close() after urllib.urlopen()?
1
2
5
1,522,662
0
0
0
I'm new to Python and reading someone else's code: should urllib.urlopen() be followed by urllib.close()? Otherwise, one would leak connections, correct?
false
1,522,636
0.039979
0
0
1
You basically do need to explicitly close your connection when using IronPython. The automatic closing on going out of scope relies on the garbage collection. I ran into a situation where the garbage collection did not run for so long that Windows ran out of sockets. I was polling a webserver at high frequency (i.e. as high as IronPython and the connection would allow, ~7Hz). I could see the "established connections" (i.e. sockets in use) go up and up on PerfMon. The solution was to call gc.collect() after every call to urlopen.
0
53,885
0
73
2009-10-05T21:59:00.000
python,urllib
should I call close() after urllib.urlopen()?
1
2
5
55,125,414
0
1
0
I'd like to know if the following situation and scripts are at all possible: I'm looking to have a photo-gallery (Javascript) webpage that will display in order of the latest added to the Dropbox folder (PHP or Python?). That is, when someone adds a picture to the Dropbox folder, there is a script on the webpage that will check the Dropbox folder and then embed those images onto the webpage via the newest added and the webpage will automatically be updated. Is it at all possible to link to a Dropbox folder via a webpage? If so, how would I best go about using scripts to automate the process of updating the webpage with new content? Any and all help is very appreciated, thanks!
false
1,522,951
0.132549
0
0
2
If you can install the DropBox client on the webserver then it would be simple to let it sync your folder and then iterate over the contents of the folder with a programming language (PHP, Python, .NET etc) and produce the gallery page. This could be done every time the page is requested or as a scheduled job which recreates a static page. This is all dependent on you having access to install the client on your server.
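A sketch of the "iterate over the synced folder" idea in Python, assuming the Dropbox client keeps a local directory in sync; the directory path, file extensions and output file are hypothetical.

```python
import os

SYNCED_DIR = "/var/www/dropbox/Photos"   # hypothetical synced folder
EXTENSIONS = (".jpg", ".jpeg", ".png", ".gif")

def newest_images(directory):
    """Return image paths in the synced folder, newest first by mtime."""
    paths = [os.path.join(directory, name)
             for name in os.listdir(directory)
             if name.lower().endswith(EXTENSIONS)]
    return sorted(paths, key=os.path.getmtime, reverse=True)

def gallery_html(directory):
    """Build a very plain gallery page referencing the images by filename."""
    imgs = "\n".join('<img src="%s" alt="">' % os.path.basename(p)
                     for p in newest_images(directory))
    return "<html><body>\n%s\n</body></html>" % imgs

if __name__ == "__main__":
    # Regenerated on each request or by a scheduled job, as the answer suggests.
    with open("gallery.html", "w") as out:
        out.write(gallery_html(SYNCED_DIR))
```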
0
4,092
0
2
2009-10-05T23:43:00.000
php,python,html,dropbox
Update a gallery webpage via Dropbox?
1
1
3
2,074,899
0
1
0
I'm supposed to display images at certain times of the day on the webpage, Please can anyone tell me how to go about it
false
1,524,713
0
0
0
0
You could make a Date object in javascript. Check the current time and depending on the time, you set the img src to whatever image you want for that time of day :) or hide the image through myimg.style.visibility = "hidden" if you dont want to display an image at that moment.
0
328
0
1
2009-10-06T10:15:00.000
python,django
How do I display images at different times on webpage
1
2
4
1,524,724
0
1
0
I'm supposed to display images at certain times of the day on the webpage, Please can anyone tell me how to go about it
false
1,524,713
0
0
0
0
If you need to change the image before a page refresh, you could use jquery ajax call to get the correct image. jquery has some interval functionality which would allow this.
0
328
0
1
2009-10-06T10:15:00.000
python,django
How do I display images at different times on webpage
1
2
4
1,524,812
0
0
0
Anyone know of a good feed parser for python 3.1? I was using feedparser for 2.5 but it doesn't seem to be ported to 3.1 yet, and it's apparently more complicated than just running 2to3.py on it. Any help?
false
1,527,230
0
0
0
0
Start porting feedparser to Python 3.1.
0
3,122
0
8
2009-10-06T18:21:00.000
python,rss,python-3.x,feed
Python 3.1 RSS Parser?
1
1
4
1,568,128
0
0
0
I have a server that has to respond to HTTP and XML-RPC requests. Right now I have an instance of SimpleXMLRPCServer, and an instance of BaseHTTPServer.HTTPServer with a custom request handler, running on different ports. I'd like to run both services on a single port. I think it should be possible to modify the CGIXMLRPCRequestHandler class to also serve custom HTTP requests on some paths, or alternately, to use multiple request handlers based on what path is requested. I'm not really sure what the cleanest way to do this would be, though.
false
1,540,011
0
1
0
0
Is there a reason not to run a real webserver out front with url rewrites to the two ports you are usign now? It's going to make life much easier in the long run
0
849
0
0
2009-10-08T19:45:00.000
python,http,xml-rpc
Python HTTP server with XML-RPC
1
2
3
1,540,053
0
0
0
I have a server that has to respond to HTTP and XML-RPC requests. Right now I have an instance of SimpleXMLRPCServer, and an instance of BaseHTTPServer.HTTPServer with a custom request handler, running on different ports. I'd like to run both services on a single port. I think it should be possible to modify the CGIXMLRPCRequestHandler class to also serve custom HTTP requests on some paths, or alternately, to use multiple request handlers based on what path is requested. I'm not really sure what the cleanest way to do this would be, though.
true
1,540,011
1.2
1
0
0
Use SimpleXMLRPCDispatcher class directly from your own request handler.
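A sketch of wiring SimpleXMLRPCDispatcher into your own request handler. It uses the Python 3 module names (xmlrpc.server, http.server) rather than the SimpleXMLRPCServer/BaseHTTPServer names of the era; the port, paths and example method are hypothetical, and _marshaled_dispatch() is the same (underscored) hook the stock XML-RPC request handler calls internally.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
from xmlrpc.server import SimpleXMLRPCDispatcher

# One dispatcher instance shared by all requests.
dispatcher = SimpleXMLRPCDispatcher(allow_none=True, encoding="utf-8")
dispatcher.register_function(lambda x, y: x + y, "add")   # example RPC method

class CombinedHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path == "/RPC2":                      # XML-RPC endpoint
            length = int(self.headers.get("Content-Length", 0))
            response = dispatcher._marshaled_dispatch(self.rfile.read(length))
            self.send_response(200)
            self.send_header("Content-Type", "text/xml")
            self.send_header("Content-Length", str(len(response)))
            self.end_headers()
            self.wfile.write(response)
        else:
            self.send_error(404)

    def do_GET(self):                                 # ordinary HTTP paths
        body = b"<html><body>plain HTTP content</body></html>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

HTTPServer(("127.0.0.1", 8000), CombinedHandler).serve_forever()
```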
0
849
0
0
2009-10-08T19:45:00.000
python,http,xml-rpc
Python HTTP server with XML-RPC
1
2
3
1,543,370
0
0
0
If yes are there any frameworks/Tutorials/tips/etc recommended? N00b at Python but I have tons of PHP experience and wanted to expand my skill set. I know Python is great at server side execution, just wanted to know about client side as well.
false
1,540,214
1
1
0
7
Silverlight can run IronPython, so you can make Silverlight applications. Which is client-side.
0
54,388
0
68
2009-10-08T20:27:00.000
python,client-side
Can Python be used for client side web development?
1
3
8
1,540,379
0
0
0
If yes are there any frameworks/Tutorials/tips/etc recommended? N00b at Python but I have tons of PHP experience and wanted to expand my skill set. I know Python is great at server side execution, just wanted to know about client side as well.
false
1,540,214
-0.024995
1
0
-1
No. Browsers don't run Python.
0
54,388
0
68
2009-10-08T20:27:00.000
python,client-side
Can Python be used for client side web development?
1
3
8
1,540,233
0
0
0
If yes are there any frameworks/Tutorials/tips/etc recommended? N00b at Python but I have tons of PHP experience and wanted to expand my skill set. I know Python is great at server side execution, just wanted to know about client side as well.
false
1,540,214
0.07486
1
0
3
On Windows, any language that registers for the Windows Scripting Host can run in IE. At least the ActiveState version of Python could do that; I seem to recall that has been superseded by a more official version these days. But that solution requires the user to install a python interpreter and run some script or .reg file to put the correct "magic" into the registry for the hooks to work.
0
54,388
0
68
2009-10-08T20:27:00.000
python,client-side
Can Python be used for client side web development?
1
3
8
7,437,506
0
0
0
I need to implement a small test utility which consumes extremely simple SOAP XML (HTTP POST) messages. This is a protocol which I have to support, and it's not my design decision to use SOAP (just trying to prevent those "why do you use protocol X?" answers) I'd like to use stuff that's already in the basic python 2.6.x installation. What's the easiest way to do that? The sole SOAP message is really simple, I'd rather not use any enterprisey tools like WSDL class generation if possible. I already implemented the same functionality earlier in Ruby with just plain HTTPServlet::AbstractServlet and REXML parser. Worked fine. I thought I could a similar solution in Python with BaseHTTPServer, BaseHTTPRequestHandler and the elementree parser, but it's not obvious to me how I can read the contents of my incoming SOAP POST message. The documentation is not that great IMHO.
false
1,547,520
0.099668
1
0
1
I wrote something like this in Boo, using a .Net HTTPListener, because I too had to implement someone else's defined WSDL. The WSDL I was given used document/literal form (you'll need to make some adjustments to this information if your WSDL uses rpc/encoded). I wrapped the HTTPListener in a class that allowed client code to register callbacks by SOAP action, and then gave that class a Start method that would kick off the HTTPListener. You should be able to do something very similar in Python, with a getPOST() method on BaseHTTPServer to: extract the SOAP action from the HTTP headers use elementtree to extract the SOAP header and SOAP body from the POST'ed HTTP call the defined callback for the SOAP action, sending these extracted values return the response text given by the callback in a corresponding SOAP envelope; if the callback raises an exception, catch it and re-wrap it as a SOAP fault Then you just implement a callback per SOAP action, which gets the XML content passed to it, parses this with elementtree, performs the desired action (or mock action if this is tester), and constructs the necessary response XML (I was not too proud to just create this explicitly using string interpolation, but you could use elementtree to create this by serializing a Python response object). It will help if you can get some real SOAP sample messages in order to help you not tear out your hair, especially in the part where you create the necessary response XML.
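A bare-bones sketch of the do_POST() approach described above, using only the Python 2.6 standard library as the question requires; the SOAPAction value, port and echo callback are hypothetical.

```python
# Python 2.6 stdlib only, as the question requires.
from BaseHTTPServer import BaseHTTPRequestHandler, HTTPServer
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"

RESPONSE = """<?xml version="1.0"?>
<soap:Envelope xmlns:soap="%s">
  <soap:Body>%s</soap:Body>
</soap:Envelope>"""

def handle_echo(body_element):
    # Example callback: echo the first child tag back. Real logic goes here.
    child = list(body_element)[0]
    return "<echoed>%s</echoed>" % child.tag

CALLBACKS = {"urn:echo": handle_echo}     # keyed by SOAPAction header

class SoapHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.getheader("Content-Length", 0))
        envelope = ET.fromstring(self.rfile.read(length))
        body = envelope.find("{%s}Body" % SOAP_NS)
        action = self.headers.getheader("SOAPAction", "").strip('"')
        payload = CALLBACKS[action](body)
        response = RESPONSE % (SOAP_NS, payload)
        self.send_response(200)
        self.send_header("Content-Type", "text/xml")
        self.send_header("Content-Length", str(len(response)))
        self.end_headers()
        self.wfile.write(response)

HTTPServer(("127.0.0.1", 8080), SoapHandler).serve_forever()
```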
0
515
0
1
2009-10-10T09:49:00.000
python,http,soap
A minimalist, non-enterprisey approach for a SOAP server in Python
1
1
2
1,547,642
0
0
0
I have a two websites in php and python. When a user sends a request to the server I need php/python to send an HTTP POST request to a remote server. I want to reply to the user immediately without waiting for a response from the remote server. Is it possible to continue running a php/python script after sending a response to the user. In that case I'll first reply to the user and only then send the HTTP POST request to the remote server. Is it possible to create a non-blocking HTTP client in php/python without handling the response at all? A solution that will have the same logic in php and python is preferable for me. Thanks
false
1,555,517
0.033321
1
0
1
What you need to do is have the PHP script execute another script that does the server call and then sends the user the request.
0
12,400
0
14
2009-10-12T16:22:00.000
php,python,nonblocking
sending a non-blocking HTTP POST request
1
1
6
1,555,614
0
1
0
I currently have some Ruby code used to scrape some websites. I was using Ruby because at the time I was using Ruby on Rails for a site, and it just made sense. Now I'm trying to port this over to Google App Engine, and keep getting stuck. I've ported Python Mechanize to work with Google App Engine, but it doesn't support DOM inspection with XPATH. I've tried the built-in ElementTree, but it choked on the first HTML blob I gave it when it ran into '&mdash'. Do I keep trying to hack ElementTree in there, or do I try to use something else? thanks, Mark
false
1,563,165
1
0
0
11
Beautiful Soup.
0
1,959
0
2
2009-10-13T21:58:00.000
python,google-app-engine,xpath,beautifulsoup,mechanize
What pure Python library should I use to scrape a website?
1
2
5
1,563,177
0
1
0
I currently have some Ruby code used to scrape some websites. I was using Ruby because at the time I was using Ruby on Rails for a site, and it just made sense. Now I'm trying to port this over to Google App Engine, and keep getting stuck. I've ported Python Mechanize to work with Google App Engine, but it doesn't support DOM inspection with XPATH. I've tried the built-in ElementTree, but it choked on the first HTML blob I gave it when it ran into '&mdash'. Do I keep trying to hack ElementTree in there, or do I try to use something else? thanks, Mark
false
1,563,165
1
0
0
6
lxml -- 100x better than elementtree
0
1,959
0
2
2009-10-13T21:58:00.000
python,google-app-engine,xpath,beautifulsoup,mechanize
What pure Python library should I use to scrape a website?
1
2
5
1,563,301
0
0
0
Standard libraries (xmlrpclib+SimpleXMLRPCServer in Python 2 and xmlrpc.server in Python 3) report all errors (including usage errors) as python exceptions which is not suitable for public services: exception strings are often not easy understandable without python knowledge and might expose some sensitive information. It's not hard to fix this, but I prefer to avoid reinventing the wheel. Is there a third party library with better error reporting? I'm interested in good fault messages for all usage errors and hiding internals when reporting internal errors (this is better done with logging). xmlrpclib already have the constants for such errors: NOT_WELLFORMED_ERROR, UNSUPPORTED_ENCODING, INVALID_ENCODING_CHAR, INVALID_XMLRPC, METHOD_NOT_FOUND, INVALID_METHOD_PARAMS, INTERNAL_ERROR.
false
1,571,598
0.099668
1
0
1
I don't think you have a library specific problem. When using any library or framework you typically want to trap all errors, log them somewhere, and throw up "Oops, we're having problems. You may want to contact us at [email protected] with error number 100 and tell us what you did." So wrap your failable entry points in try/catches, create a generic logger and off you go...
0
1,435
0
1
2009-10-15T10:50:00.000
python,xml-rpc
XML-RPC server with better error reporting
1
1
2
1,608,160
0
0
0
Is there a python library which implements a standalone TCP stack? I can't use the usual python socket library because I'm receiving a stream of packets over a socket (they are being tunneled to me over this socket). When I receive a TCP SYN packet addressed to a particular port, I'd like to accept the connection (send a syn-ack) and then get the data sent by the other end (ack'ing appropriately). I was hoping there was some sort of TCP stack already written which I could utilize. Any ideas? I've used lwip in the past for a C project -- something along those lines in python would be perfect.
false
1,581,087
0
0
0
0
I know this isn't directly Python related but if you are looking to do heavy network processing, you should consider Erlang instead of Python. Just a suggestion really... you can always take a shot a doing this with Twisted... if you feel adventurous (and have lots of time on your side) ;-)
0
8,496
0
8
2009-10-17T00:50:00.000
python,tcp,network-programming,network-protocols,raw-sockets
Python TCP stack implementation
1
1
5
1,581,097
0
0
0
I'm writing a simple crawler in Python using the threading and Queue modules. I fetch a page, check links and put them into a queue, when a certain thread has finished processing page, it grabs the next one from the queue. I'm using an array for the pages I've already visited to filter the links I add to the queue, but if there are more than one threads and they get the same links on different pages, they put duplicate links to the queue. So how can I find out whether some url is already in the queue to avoid putting it there again?
false
1,581,895
0.015383
0
0
1
The way I solved this (actually I did this in Scala, not Python) was to use both a Set and a Queue, only adding links to the queue (and set) if they did not already exist in the set. Both the set and queue were encapsulated in a single thread, exposing only a queue-like interface to the consumer threads. Edit: someone else suggested SQLite and that is also something I am considering, if the set of visited URLs needs to grow large. (Currently each crawl is only a few hundred pages so it easily fits in memory.) But the database is something that can also be encapsulated within the set itself, so the consumer threads need not be aware of it.
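A sketch of the set-plus-queue encapsulation this answer describes, using the threading and Queue modules named in the question (the module is called queue in Python 3).

```python
import threading
from Queue import Queue   # "queue" in Python 3

class UniqueQueue(object):
    """Queue wrapper that silently drops URLs it has already seen."""
    def __init__(self):
        self._queue = Queue()
        self._seen = set()
        self._lock = threading.Lock()

    def put(self, url):
        with self._lock:
            if url in self._seen:
                return False          # duplicate, ignore it
            self._seen.add(url)
        self._queue.put(url)
        return True

    def get(self):
        return self._queue.get()      # blocks like a normal Queue

    def task_done(self):
        self._queue.task_done()
```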
1
12,544
0
17
2009-10-17T10:21:00.000
python,multithreading,queue
How check if a task is already in python Queue?
1
5
13
1,581,902
0
0
0
I'm writing a simple crawler in Python using the threading and Queue modules. I fetch a page, check links and put them into a queue, when a certain thread has finished processing page, it grabs the next one from the queue. I'm using an array for the pages I've already visited to filter the links I add to the queue, but if there are more than one threads and they get the same links on different pages, they put duplicate links to the queue. So how can I find out whether some url is already in the queue to avoid putting it there again?
false
1,581,895
0.015383
0
0
1
SQLite is so simple to use and would fit perfectly... just a suggestion.
1
12,544
0
17
2009-10-17T10:21:00.000
python,multithreading,queue
How check if a task is already in python Queue?
1
5
13
1,581,903
0
0
0
I'm writing a simple crawler in Python using the threading and Queue modules. I fetch a page, check links and put them into a queue, when a certain thread has finished processing page, it grabs the next one from the queue. I'm using an array for the pages I've already visited to filter the links I add to the queue, but if there are more than one threads and they get the same links on different pages, they put duplicate links to the queue. So how can I find out whether some url is already in the queue to avoid putting it there again?
false
1,581,895
-0.046121
0
0
-3
Also, instead of a set you might try using a dictionary. Operations on sets tend to get rather slow when they're big, whereas a dictionary lookup is nice and quick. My 2c.
1
12,544
0
17
2009-10-17T10:21:00.000
python,multithreading,queue
How check if a task is already in python Queue?
1
5
13
1,581,908
0
0
0
I'm writing a simple crawler in Python using the threading and Queue modules. I fetch a page, check links and put them into a queue, when a certain thread has finished processing page, it grabs the next one from the queue. I'm using an array for the pages I've already visited to filter the links I add to the queue, but if there are more than one threads and they get the same links on different pages, they put duplicate links to the queue. So how can I find out whether some url is already in the queue to avoid putting it there again?
false
1,581,895
0.015383
0
0
1
Why only use the array (ideally, a dictionary would be even better) to filter things you've already visited? Add things to your array/dictionary as soon as you queue them up, and only add them to the queue if they're not already in the array/dict. Then you have 3 simple separate things: Links not yet seen (neither in queue nor array/dict) Links scheduled to be visited (in both queue and array/dict) Links already visited (in array/dict, not in queue)
1
12,544
0
17
2009-10-17T10:21:00.000
python,multithreading,queue
How check if a task is already in python Queue?
1
5
13
1,581,920
0
0
0
I'm writing a simple crawler in Python using the threading and Queue modules. I fetch a page, check links and put them into a queue, when a certain thread has finished processing page, it grabs the next one from the queue. I'm using an array for the pages I've already visited to filter the links I add to the queue, but if there are more than one threads and they get the same links on different pages, they put duplicate links to the queue. So how can I find out whether some url is already in the queue to avoid putting it there again?
false
1,581,895
0
0
0
0
instead of "array of pages already visited" make an "array of pages already added to the queue"
1
12,544
0
17
2009-10-17T10:21:00.000
python,multithreading,queue
How check if a task is already in python Queue?
1
5
13
1,582,421
0
1
0
I have snapshots of multiple webpages taken at 2 times. What is a reliable method to determine which webpages have been modified? I can't rely on something like an RSS feed, and I need to ignore minor noise like date text. Ideally I am looking for a Python solution, but an intuitive algorithm would also be great. Thanks!
false
1,587,902
-0.049958
0
0
-1
just take snapshots of the files with MD5 or SHA1...if the values differ the next time you check, then they are modified.
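A small sketch of the hash-comparison idea using hashlib from the standard library; the snapshot filenames are hypothetical. Note that any byte difference, including date text, changes the digest.

```python
import hashlib

def digest(path):
    """Return the SHA-1 hex digest of one snapshot file."""
    h = hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical snapshot filenames for the same page at two times.
if digest("page_t1.html") != digest("page_t2.html"):
    print("page was modified")
```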
0
2,946
0
6
2009-10-19T10:13:00.000
python,diff,webpage,snapshot
how to determine if webpage has been modified
1
1
4
1,588,461
0
0
0
I'm looking for a way to prevent multiple hosts from issuing simultaneous commands to a Python XMLRPC listener. The listener is responsible for running scripts to perform tasks on that system that would fail if multiple users tried to issue these commands at the same time. Is there a way I can block all incoming requests until the single instance has completed?
false
1,589,150
0
0
0
0
There are several choices: Use single-process-single-thread server like SimpleXMLRPCServer to process requests subsequently. Use threading.Lock() in threaded server. You some external locking mechanism (like lockfile module or GET_LOCK() function in mysql) in multiprocess server.
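A sketch of the threading.Lock() option in a threaded XML-RPC server, using the Python 2 module names of the question's era (xmlrpc.server and socketserver in Python 3); the port and the run_task function are hypothetical.

```python
import threading
from SimpleXMLRPCServer import SimpleXMLRPCServer   # xmlrpc.server in Python 3
from SocketServer import ThreadingMixIn             # socketserver in Python 3

_task_lock = threading.Lock()

def run_task(name):
    """Only one caller at a time may run the underlying script."""
    with _task_lock:                 # every other request blocks here
        # ... launch the real script for `name` ...
        return "finished %s" % name

class ThreadedXMLRPCServer(ThreadingMixIn, SimpleXMLRPCServer):
    pass

server = ThreadedXMLRPCServer(("0.0.0.0", 8000), allow_none=True)
server.register_function(run_task)
server.serve_forever()
```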
0
5,976
0
7
2009-10-19T14:54:00.000
python,xml-rpc
Python XMLRPC with concurrent requests
1
2
3
1,590,010
0
0
0
I'm looking for a way to prevent multiple hosts from issuing simultaneous commands to a Python XMLRPC listener. The listener is responsible for running scripts to perform tasks on that system that would fail if multiple users tried to issue these commands at the same time. Is there a way I can block all incoming requests until the single instance has completed?
false
1,589,150
0
0
0
0
Can you have another communication channel? If yes, then have a "call me back when it is my turn" protocol running between the server and the clients. In other words, each client would register its intention to issue requests to the server and the said server would "callback" the next-up client when it is ready.
0
5,976
0
7
2009-10-19T14:54:00.000
python,xml-rpc
Python XMLRPC with concurrent requests
1
2
3
1,589,181
0
0
0
I have an XML document that I would like to update after it already contains data. I thought about opening the XML file in "a" (append) mode. The problem is that the new data will be written after the root closing tag. How can I delete the last line of a file, then start writing data from that point, and then close the root tag? Of course I could read the whole file and do some string manipulations, but I don't think that's the best idea..
false
1,591,579
0.044415
0
0
2
To make this process more robust, you could consider using the SAX parser (that way you don't have to hold the whole file in memory), read & write till the end of tree and then start appending.
0
153,237
0
49
2009-10-19T22:52:00.000
python,xml,io
How to update/modify an XML file in python?
1
1
9
1,591,732
0
1
0
So, within a webapp.RequestHandler subclass I would use self.request.uri to get the request URI. But, I can't access this outside of a RequestHandler and so no go. Any ideas? I'm running Python and I'm new at it as well as GAE.
true
1,593,483
1.2
0
0
2
You should generally be doing everything within some sort of RequestHandler or the equivalent in your non-WebApp framework. However, if you really insist on being stuck in the early 1990s and writing plain CGI scripts, the environment variables SERVER_NAME and PATH_INFO may be what you want; see a CGI reference for more info.
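A tiny sketch of reading those CGI environment variables in a plain CGI script; QUERY_STRING is added here as an assumption, beyond the two variables the answer names.

```python
import os

# In a plain CGI script (outside any webapp RequestHandler), the pieces of
# the request URI live in the CGI environment variables.
server = os.environ.get("SERVER_NAME", "")
path = os.environ.get("PATH_INFO", "")
query = os.environ.get("QUERY_STRING", "")   # assumption: also wanted

uri = "http://" + server + path + ("?" + query if query else "")
print("Content-Type: text/plain\n")
print(uri)
```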
0
786
1
1
2009-10-20T09:33:00.000
python,google-app-engine
Get the request uri outside of a RequestHandler in Google App Engine (Python)
1
1
2
1,593,985
0
0
0
I simply want to create an automatic script that can run (preferably) on a web-server, and simply 'clicks' on an object of a web page. I am new to Python or whatever language this would be used for so I thought I would go here to ask where to start! This may seem like I want the script to scam advertisements or do something illegal, but it's simply to interact with another website.
false
1,597,833
1
0
0
6
It doesn't have to be Python, I've seen it done in PHP and Perl, and you can probably do it in many other languages. The general approach is: 1) You give your app a URL and it makes an HTTP request to that URL. I think I have seen this done with php/wget. Probably many other ways to do it. 2) Scan the HTTP response for other URLs that you want to "click" (really, sending HTTP requests to them), and then send requests to those. Parsing the links usually requires some understanding of regular expressions (if you are not familiar with regular expressions, brush up on it - it's important stuff ;)).
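A sketch of steps 1 and 2 in Python (the question's language) rather than PHP or Perl; the start URL is hypothetical and the regular expression is deliberately naive.

```python
import re
from urllib.request import urlopen   # urllib2 in the Python 2 of the answer's era

START_URL = "http://example.com/"     # hypothetical starting page

# 1) Fetch the page.
html = urlopen(START_URL).read().decode("utf-8", "replace")

# 2) Scan the response for links to "click"; a real bot would be better off
#    with an HTML parser than a regular expression.
links = re.findall(r'href="(https?://[^"]+)"', html)

# 3) Send requests to the first few discovered links.
for url in links[:10]:
    print(url, len(urlopen(url).read()))
```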
0
23,157
0
17
2009-10-20T23:18:00.000
python,bots
Where do I start with a web bot?
1
1
4
1,597,878
0
0
0
I have an XML document "abc.xml": I need to write a function replace(name, newvalue) which can replace the value of the node whose tag is 'name' with the new value and write it back to disk. Is this possible in python? How should I do this?
false
1,602,919
0.197375
0
0
2
Sure it is possible. The xml.etree.ElementTree module will help you with parsing XML, finding tags and replacing values. If you know a little bit more about the XML file you want to change, you can probably make the task a bit easier than if you need to write a generic function that will handle any XML file. If you are already familiar with DOM parsing, there's a xml.dom package to use instead of the ElementTree one.
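A minimal sketch of that replace(name, newvalue) function with xml.etree.ElementTree, assuming the file is the abc.xml from the question and that every element with the given tag should be updated; the example tag and value are hypothetical.

```python
import xml.etree.ElementTree as ET

def replace(name, newvalue, path="abc.xml"):
    """Set the text of every <name> element in the file and write it back."""
    tree = ET.parse(path)
    for node in tree.iter(name):
        node.text = newvalue
    tree.write(path, encoding="utf-8", xml_declaration=True)

# Hypothetical tag and value; abc.xml must already exist on disk.
replace("price", "42")
```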
0
2,424
0
0
2009-10-21T19:02:00.000
python,xml,python-3.x
Setting value for a node in XML document in Python
1
1
2
1,603,011
0
0
0
I have been taking a few graduate classes with a professor I like alot and she raves about SAS all of the time. I "grew up" learning stats using SPSS, and with their recent decisions to integrate their stats engine with R and Python, I find it difficult to muster up the desire to learn anything else. I am not that strong in Python, but I can get by with most tasks that I want to accomplish. Admittedly, I do see the upside to SAS, but I have learned to do some pretty cool things combining SPSS and Python, like grabbing data from the web and analyzing it real-time. Plus, I really like that I can use the GUI to generate the base for my code before I add my final modifications. In SAS, it looks like I would have to program everything by hand (ignoring Enterprise Guide). My question is this. Can you grab data from the web and parse it into SAS datasets? This is a deal-breaker for me. What about interfacing with API's like Google Analytics, Twitter, etc? Are there external IDE's that you can use to write and execute SAS programs? Any help will be greatly appreciated. Brock
false
1,628,372
0.462117
0
0
5
yes. sas 9.2 can interact with soap and restful apis. i haven't had much success with twitter. i have had some success with google spreadsheets (in sas 9.1.3) and i've seen code to pull google analytics (in sas 9.2). as with python and r, you can write the code in any text editor, but you'll need to have sas to actually execute it. lately, i've been bouncing between eclipse, pspad, and sas's enhanced editor for writing code, but i always have to submit in sas.
0
2,215
0
1
2009-10-27T01:56:00.000
python,statistics,sas
SAS and Web Data
1
1
2
1,628,669
0
0
0
Ive been working on a feature of my application to implement a leaderboard - basically stack rank users according to their score. Im currently tracking the score on an individual basis. My thought is that this leaderboard should be relative instead of absolute i.e. instead of having the top 10 highest scoring users across the site, its a top 10 among a user's friend network. This seems better because everyone has a chance to be #1 in their network and there is a form of friendly competition for those that are interested in this sort of thing. Im already storing the score for each user so the challenge is how to compute the rank of that score in real time in an efficient way. Im using Google App Engine so there are some benefits and limitations (e.g., IN [array]) queries perform a sub-query for every element of the array and also are limited to 30 elements per statement For example 1st Jack 100 2nd John 50 Here are the approaches I came up with but they all seem to be inefficient and I thought that this community could come up with something more elegant. My sense is that any solution will likely be done with a cron and that I will store a daily rank and list order to optimize read operations but it would be cool if there is something more lightweight and real time Pull the list of all users of the site ordered by score. For each user pick their friends out of that list and create new rankings. Store the rank and list order. Update daily. Cons - If I get a lot of users this will take forever 2a. For each user pick their friends and for each friend pick score. Sort that list. Store the rank and list order. Update daily. Record the last position of each user so that the pre-existing list can be used for re-ordering for the next update in order to make it more efficient (may save sorting time) 2b. Same as above except only compute the rank and list order for people who's profiles have been viewed in the last day Cons - rank is only up to date for the 2nd person that views the profile
true
1,628,562
1.2
0
0
4
If writes are very rare compared to reads (a key assumption in most key-value stores, and not just in those;-), then you might prefer to take a time hit when you need to update scores (a write) rather than to get the relative leaderboards (a read). Specifically, when a user's score change, queue up tasks for each of their friends to update their "relative leaderboards" and keep those leaderboards as list attributes (which do keep order!-) suitably sorted (yep, the latter's a denormalization -- it's often necessary to denormalize, i.e., duplicate information appropriately, to exploit key-value stores at their best!-). Of course you'll also update the relative leaderboards when a friendship (user to user connection) disappears or appears, but those should (I imagine) be even rarer than score updates;-). If writes are pretty frequent, since you don't need perfectly precise up-to-the-second info (i.e., it's not financials/accounting stuff;-), you still have many viable approaches to try. E.g., big score changes (rarer) might trigger the relative-leaderboards recomputes, while smaller ones (more frequent) get stashed away and only applied once in a while "when you get around to it". It's hard to be more specific without ballpark numbers about frequency of updates of various magnitude, typical network-friendship cluster sizes, etc, etc. I know, like everybody else, you want a perfect approach that applies no matter how different the sizes and frequencies in question... but, you just won't find one!-)
0
1,929
0
0
2009-10-27T03:07:00.000
python,google-app-engine,leaderboard
Real time update of relative leaderboard for each user among friends
1
1
2
1,628,703
0
1
0
Basically, what I'm trying to do is simply make a small script that accesses finds the most recent post in a forum and pulls some text or an image out of it. I have this working in python, using the htmllib module and some regex. But, the script still isn't very convenient as is, it would be much nicer if I could somehow put it into an HTML document. It appears that simply embedding Python scripts is not possible, so I'm looking to see if theres a similar feature like python's htmllib that can be used to access some other webpage and extract some information from it. (Essentially, if I could get this script going in the form of an html document, I could just open one html document, rather than navigate to several different pages to get the information I want to check) I'm pretty sure that javascript doesn't have the functionality I need, but I was wondering about other languages such as jQuery, or even something like AJAX?
false
1,628,564
0.066568
0
0
1
There are two general approaches: Modify your Python code so that it runs as a CGI (or WSGI or whatever) module and generate the page of interest by running some server side code. Use Javascript with jQuery to load the content of interest by running some client side code. The difference between these two approaches is where the third party server sees the requests coming from. In the first case, it's from your web server. In the second case, it's from the browser of the user accessing your page. Some browsers may not handle loading content from third party servers very gracefully (that is, they might pop up warning boxes or something).
0
213
0
1
2009-10-27T03:07:00.000
javascript,jquery,python
Is it possible access other webpages from within another page
1
1
3
1,628,598
0
1
0
Instead of just using urllib does anyone know of the most efficient package for fast, multithreaded downloading of URLs that can operate through http proxies? I know of a few such as Twisted, Scrapy, libcurl etc. but I don't know enough about them to make a decision or even if they can use proxies.. Anyone know of the best one for my purposes? Thanks!
false
1,628,766
0.099668
0
0
1
Usually proxies filter websites categorically, based on how the website was created, and it is difficult to transmit data through proxies based on categories. E.g. YouTube is classified as audio/video streams, therefore YouTube is blocked in some places, especially schools. If you want to bypass proxies, you can get the data off a website and put it on your own genuine website, like a dot-com website registered to you. When you are making and registering the website, categorise your website as anything you want.
0
9,003
0
1
2009-10-27T04:23:00.000
python,proxy,multithreading,web-crawler,pool
Python Package For Multi-Threaded Spider w/ Proxy Support?
1
1
2
5,803,567
0
0
0
I'm learning socket programming (in python) and I was wondering what the best/typical way of encapsulating data is? My packets will be used to issue run, stop, configure, etc. commands on the receiving side. Is it helpful to use JSON or just straight text?
false
1,633,934
0
0
0
0
If you're developing something as a learning exercise you might find it best to go with a structured text (ie. human readable and human writable) format. An example would be to use a fixed number of fields per command, fixed width text fields and/or easily parsable field delimiters. Generally text is less efficient in terms of packet size, but it does have the benefits that you can read it easily if you do a packet capture (eg. using wireshark) or if you want to use telnet to mimic a client. And if this is only a learning exercise then ease of debugging is a significant issue.
1
916
0
2
2009-10-27T22:03:00.000
python,network-programming
Designing a simple network packet
1
2
4
1,634,178
0
0
0
I'm learning socket programming (in python) and I was wondering what the best/typical way of encapsulating data is? My packets will be used to issue run, stop, configure, etc. commands on the receiving side. Is it helpful to use JSON or just straight text?
false
1,633,934
0.049958
0
0
1
I suggest plain text to begin with - it is easier to debug. The format that your text takes depends on what you're doing, how many commands, arguments, etc. Have you fleshed out how your commands will look? Once you figure out what that looks like it'll likely suggest a format all on its own. Are you using TCP or UDP? TCP is easy since it is a stream, but if you're using UDP keep in mind the maximum size of UDP packets and thus how big your message can be.
1
916
0
2
2009-10-27T22:03:00.000
python,network-programming
Designing a simple network packet
1
2
4
1,635,005
0
1
0
I have a file whose content changes over a short time, but I'd like to read it before it is finished. The problem is that it is an XML file (a log), so when you read it, it may be that not all tags are closed yet. I would like to know if there is a possibility to close all opened tags correctly, so that there are no problems displaying it in the browser (with an XSLT stylesheet). This should be done using only features included with Python.
false
1,644,994
0
0
0
0
You could use BeautifulStoneSoup (XML part of BeautifulSoup). www.crummy.com/software/BeautifulSoup It's not ideal, but it would circumvent the problem if you cannot fix the file's output... It's basically a previously implemented version of what Denis said. You can just join whatever you need into the soup and it will do its best to fix it.
0
1,809
0
5
2009-10-29T16:36:00.000
python,xml
Close all opened xml tags
1
2
4
1,652,871
0
1
0
I have a file whose content changes over a short time, but I'd like to read it before it is finished. The problem is that it is an XML file (a log), so when you read it, it may be that not all tags are closed yet. I would like to know if there is a possibility to close all opened tags correctly, so that there are no problems displaying it in the browser (with an XSLT stylesheet). This should be done using only features included with Python.
false
1,644,994
0
0
0
0
You can use any SAX parser by feeding data available so far to it. Use SAX handler that just reconstructs source XML, keep the stack of tags opened and close them in reverse order at the end.
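A sketch of the SAX idea above: re-emit the document while keeping a stack of open tags, then close whatever is still open in reverse order. It uses xml.sax and XMLGenerator from the standard library; the sample input is hypothetical.

```python
import xml.sax
from io import StringIO
from xml.sax.saxutils import XMLGenerator

class Repair(XMLGenerator):
    """Re-emit the document and remember which tags are still open."""
    def __init__(self, out):
        XMLGenerator.__init__(self, out, encoding="utf-8")
        self.open_tags = []

    def startElement(self, name, attrs):
        XMLGenerator.startElement(self, name, attrs)
        self.open_tags.append(name)

    def endElement(self, name):
        XMLGenerator.endElement(self, name)
        self.open_tags.pop()

def close_tags(partial_xml):
    out = StringIO()
    handler = Repair(out)
    parser = xml.sax.make_parser()
    parser.setContentHandler(handler)
    try:
        parser.feed(partial_xml)          # parse whatever is available so far
    except xml.sax.SAXParseException:
        pass                              # truncated input ends mid-document
    for name in reversed(handler.open_tags):
        out.write("</%s>" % name)         # close everything still open
    return out.getvalue()

print(close_tags("<log><entry><msg>hi</msg>"))
```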
0
1,809
0
5
2009-10-29T16:36:00.000
python,xml
Close all opened xml tags
1
2
4
1,645,047
0
0
0
I'm trying to write a simple proxy server for some purpose. In it I use httplib to access a remote web server. But there's one problem: the web server returns TWO Set-Cookie headers in one response, and httplib mangles them together in httplib.HTTPResponse.getheaders(), effectively joining the cookies with a comma (which is strange, because getheaders returns a LIST, not a DICT, so I thought it was written with multiple headers of the same name in mind). So, when I send this joined header back to the client, it confuses the client. How can I obtain the full list of headers in httplib (without just splitting the Set-Cookie header on commas)?
true
1,649,401
1.2
0
0
4
HTTPResponse.getheaders() returns a list of combined headers (actually by calling dict.items()). The only place where the incoming headers are stored untouched is HTTPResponse.msg.headers.
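A small sketch of reading the individual Set-Cookie lines from the Python 2 httplib response described above; the host is hypothetical. getheaders() on the underlying mimetools.Message returns one entry per header occurrence, unlike HTTPResponse.getheaders().

```python
# Python 2 / httplib, matching the answer.
import httplib

conn = httplib.HTTPConnection("example.com")   # hypothetical host
conn.request("GET", "/")
resp = conn.getresponse()

# HTTPResponse.getheader()/getheaders() fold duplicates together, but the
# underlying mimetools.Message keeps each header line separately.
cookies = resp.msg.getheaders("Set-Cookie")    # list, one entry per header
for c in cookies:
    print c

# resp.msg.headers also holds the raw, untouched header lines.
```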
0
2,867
0
4
2009-10-30T12:03:00.000
python,httplib
How to handle multiple Set-Cookie header in HTTP response
1
1
1
1,649,579
0
0
0
I want a fast way to grab a URL and parse it while streaming. Ideally this should be super fast. My language of choice is Python. I have an intuition that twisted can do this but I'm at a loss to find an example.
false
1,659,380
0
0
0
0
You only need to parse a single URL? Then don't worry. Use urllib2 to open the connection and pass the file handle into ElementTree. Variations you can try would be to use ElementTree's incremental parser or to use iterparse, but that depends on what your real requirements are. There's "super fast" but there's also "fast enough." It's only when you start having multiple simultaneous connections where you should look at Twisted or multithreading.
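A sketch of the single-URL case this answer describes: stream the response from urlopen() straight into ElementTree's iterparse(). It uses the Python 3 urllib.request name (urllib2 in the Python 2 of the answer's era); the URL and the item/title tags are hypothetical.

```python
import xml.etree.ElementTree as ET
from urllib.request import urlopen   # urllib2.urlopen in Python 2

URL = "http://example.com/feed.xml"  # hypothetical document

# urlopen() returns a file-like object, so it can be streamed straight
# into iterparse() without reading the whole response first.
with urlopen(URL) as response:
    for event, elem in ET.iterparse(response, events=("end",)):
        if elem.tag.endswith("item"):
            print(elem.findtext("title"))
            elem.clear()              # free memory as the parse proceeds
```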
0
1,573
0
3
2009-11-02T04:06:00.000
python,xml,twisted
How do I fetch an XML document and parse it with Python twisted?
1
1
2
1,682,249
0
0
0
I am interested in your opinions on unittesting code that uses Corba to communicate with a server. Would you mock the Corba objects? In Python that's sort of a pain in the ass because all the methods of Corba objects are loaded dynamically. So you're basically stuck with "mock anything". Thanks! Note: I believe I have not made myself clear enough, so I'll try to give a somewhat more concrete example: A web application needs to display a page containing data received from the server. It obtains the data by calling server_pagetable.getData() and then formats the data, converts them to the correct python types (because Corba does not have e.g. a date type etc.) and finally creates the HTML code to be displayed. And this is what I would like to test - the methods that receive the data and do all the transformations and finally create the HTML code. I believe the most straightforward decision is to mock the Corba objects as they essentially comprise both the networking and db functionality (which ought not to be tested in unit tests). It's just that this is quite a lot of "extra work" to do - mocking all the Corba objects (there is a User object, a server session object, the pagetable object, an admin object etc.). Maybe it's just because I'm stuck with Corba and therefore I have to reflect the object hierarchy dictated by the server with mocks. On the other hand, it could be that there is some cool elegant solution to testing code using Corba that just did not cross my mind.
true
1,660,049
1.2
1
0
3
Don't try to unittest Corba. Assume that Corba works. Unittest your own code. This means: Create a unit test which checks that you correctly set up Corba and that you can invoke a single method and read a property. If that works, all other methods and properties will work, too. After that, test that all the exposed objects work correctly. You don't need Corba for this.
0
625
0
2
2009-11-02T08:38:00.000
python,unit-testing,mocking,corba
Unittesting Corba in Python
1
3
3
1,660,185
0
0
0
I am interested in your opinions on unittesting code that uses Corba to communicate with a server. Would you mock the Corba objects? In Python that's sort of a pain in the ass because all the methods of Corba objects are loaded dynamically. So you're basically stuck with "mock anything". Thanks! Note: I believe I have not made myself clear enough, so I'll try to give a somewhat more concrete example: A web application needs to display a page containing data received from the server. It obtains the data by calling server_pagetable.getData() and then formats the data, converts them to the correct python types (because Corba does not have e.g. a date type etc.) and finally creates the HTML code to be displayed. And this is what I would like to test - the methods that receive the data and do all the transformations and finally create the HTML code. I believe the most straightforward decision is to mock the Corba objects as they essentially comprise both the networking and db functionality (which ought not to be tested in unit tests). It's just that this is quite a lot of "extra work" to do - mocking all the Corba objects (there is a User object, a server session object, the pagetable object, an admin object etc.). Maybe it's just because I'm stuck with Corba and therefore I have to reflect the object hierarchy dictated by the server with mocks. On the other hand, it could be that there is some cool elegant solution to testing code using Corba that just did not cross my mind.
false
1,660,049
0.066568
1
0
1
I would set up a test server, and do live tests on that. Unittesting can be tricky with network stuff, so it's best to keep it as real as possible. Any mocking would be done on the test server, for instance if you need to communicate to three different servers, it could be set up with three different IP addresses to play the role of all three servers.
0
625
0
2
2009-11-02T08:38:00.000
python,unit-testing,mocking,corba
Unittesting Corba in Python
1
3
3
1,660,187
0
0
0
I am interested in your opinions on unittesting code that uses Corba to communicate with a server. Would you mock the Corba objects? In Python that's sort of a pain in the ass because all the methods of Corba objects are loaded dynamically. So you're basically stuck with "mock anything". Thanks! Note: I believe I have not made myself clear enough, so I'll try to give a somewhat more concrete example: A web application needs to display a page containing data received from the server. It obtains the data by calling server_pagetable.getData() and then formats the data, converts them to the correct python types (because Corba does not have e.g. a date type etc.) and finally creates the HTML code to be displayed. And this is what I would like to test - the methods that receive the data and do all the transformations and finally create the HTML code. I believe the most straightforward decision is to mock the Corba objects as they essentially comprise both the networking and db functionality (which ought not to be tested in unit tests). It's just that this is quite a lot of "extra work" to do - mocking all the Corba objects (there is a User object, a server session object, the pagetable object, an admin object etc.). Maybe it's just because I'm stuck with Corba and therefore I have to reflect the object hierarchy dictated by the server with mocks. On the other hand, it could be that there is some cool elegant solution to testing code using Corba that just did not cross my mind.
false
1,660,049
0
1
0
0
I have got similar work to tackle but I probably will not write a test for implementation of CORBA objects or more specifically COM objects (implementation of CORBA). I have to write tests for work that uses these structures as oppose to the structures themselves (although I could land myself in that role too if I ask too many questions). In the end of the day, unittest is integration on a smaller scale so whenever I write tests I am always thinking of input and outputs rather than actual structures. From the way you have written your problem my concentration would be on the data of server_pagetable.getData() and the output HTML without caring too much about what happens inbetween (because that is the code you are testing, you don't want to define the code in the test but ensure that output is correct). If you want to test individual functions inbetween then I would get mock data (essentially still data, so you can generate mock data rather than mock class if possible). Mocks are only used when you don't have parts of the full code and those functions needs some input from those parts of the code but as you are not interested in them or don't have them you simplify the interaction with them. This is just my opinion.
0
625
0
2
2009-11-02T08:38:00.000
python,unit-testing,mocking,corba
Unittesting Corba in Python
1
3
3
51,438,774
0
0
0
Using the SUDS SOAP client, how do I specify the web service URL? I can see clearly that the WSDL path is specified in the Client constructor, but what if I want to change the web service URL?
false
1,670,569
0.066568
0
0
1
I think you have to create a new Client object for each different URL.
0
5,956
0
2
2009-11-03T22:31:00.000
python,soap,suds
Changing web service url in SUDS library
1
1
3
1,670,775
0
0
0
I'm trying to write a Python lib that will implement the client side of a certain chat protocol. After I connect to the server, I start the main loop where I read from the server and handle received commands and here I need to call a callback function (like on_message or on file_received, etc). How should I go about implementing this? Should a start a new thread for each callback function? As maybe some callbacks will take some time to return and I will timeout. Also, If the main loop where I read from the server is in a thread can I write to the socket from another thread(send messages to the server)? Or is there a better approach? Thanks.
true
1,670,735
1.2
0
0
2
I would use the select module, or alternately twisted, however select is a bit more portable, and to my mind somewhat more pythonic.
1
481
0
0
2009-11-03T23:05:00.000
python,multithreading,chat
python chat client lib
1
1
3
1,671,922
0
0
0
I have a python (well, it's php now but we're rewriting) function that takes some parameters (A and B) and compute some results (finds best path from A to B in a graph, graph is read-only), in typical scenario one call takes 0.1s to 0.9s to complete. This function is accessed by users as a simple REST web-service (GET bestpath.php?from=A&to=B). Current implementation is quite stupid - it's a simple php script+apache+mod_php+APC, every requests needs to load all the data (over 12MB in php arrays), create all structures, compute a path and exit. I want to change it. I want a setup with N independent workers (X per server with Y servers), each worker is a python app running in a loop (getting request -> processing -> sending reply -> getting req...), each worker can process one request at a time. I need something that will act as a frontend: get requests from users, manage queue of requests (with configurable timeout) and feed my workers with one request at a time. how to approach this? can you propose some setup? nginx + fcgi or wsgi or something else? haproxy? as you can see i'am a newbie in python, reverse-proxy, etc. i just need a starting point about architecture (and data flow) btw. workers are using read-only data so there is no need to maintain locking and communication between them
false
1,674,696
0
1
0
0
Another option is a queue table in the database. The worker processes run in a loop or off cron and poll the queue table for new jobs.
0
2,509
1
4
2009-11-04T15:51:00.000
python,nginx,load-balancing,wsgi,reverse-proxy
how to process long-running requests in python workers?
1
4
7
1,718,183
0
0
0
I have a python (well, it's php now but we're rewriting) function that takes some parameters (A and B) and compute some results (finds best path from A to B in a graph, graph is read-only), in typical scenario one call takes 0.1s to 0.9s to complete. This function is accessed by users as a simple REST web-service (GET bestpath.php?from=A&to=B). Current implementation is quite stupid - it's a simple php script+apache+mod_php+APC, every requests needs to load all the data (over 12MB in php arrays), create all structures, compute a path and exit. I want to change it. I want a setup with N independent workers (X per server with Y servers), each worker is a python app running in a loop (getting request -> processing -> sending reply -> getting req...), each worker can process one request at a time. I need something that will act as a frontend: get requests from users, manage queue of requests (with configurable timeout) and feed my workers with one request at a time. how to approach this? can you propose some setup? nginx + fcgi or wsgi or something else? haproxy? as you can see i'am a newbie in python, reverse-proxy, etc. i just need a starting point about architecture (and data flow) btw. workers are using read-only data so there is no need to maintain locking and communication between them
false
1,674,696
0
1
0
0
I think you can configure modwsgi/Apache so it will have several "hot" Python interpreters in separate processes ready to go at all times and also reuse them for new accesses (and spawn a new one if they are all busy). In this case you could load all the preprocessed data as module globals and they would only get loaded once per process and get reused for each new access. In fact I'm not sure this isn't the default configuration for modwsgi/Apache. The main problem here is that you might end up consuming a lot of "core" memory (but that may not be a problem either). I think you can also configure modwsgi for single process/multiple thread -- but in that case you may only be using one CPU because of the Python Global Interpreter Lock (the infamous GIL), I think. Don't be afraid to ask at the modwsgi mailing list -- they are very responsive and friendly.
0
2,509
1
4
2009-11-04T15:51:00.000
python,nginx,load-balancing,wsgi,reverse-proxy
how to process long-running requests in python workers?
1
4
7
1,675,726
0
0
0
I have a Python (well, it's PHP now but we're rewriting it) function that takes some parameters (A and B) and computes some results (it finds the best path from A to B in a graph; the graph is read-only). In a typical scenario one call takes 0.1s to 0.9s to complete. This function is accessed by users as a simple REST web service (GET bestpath.php?from=A&to=B). The current implementation is quite naive - it's a simple PHP script + Apache + mod_php + APC, and every request needs to load all the data (over 12MB in PHP arrays), create all structures, compute a path and exit. I want to change it. I want a setup with N independent workers (X per server with Y servers), where each worker is a Python app running in a loop (get request -> process -> send reply -> get request...), and each worker can process one request at a time. I need something that will act as a frontend: get requests from users, manage a queue of requests (with a configurable timeout) and feed my workers one request at a time. How should I approach this? Can you propose a setup? nginx + FastCGI or WSGI or something else? HAProxy? As you can see I am a newbie in Python, reverse proxies, etc. I just need a starting point for the architecture (and data flow). By the way, the workers use read-only data, so there is no need to maintain locking or communication between them.
false
1,674,696
0.028564
1
0
1
The simplest solution in this case is to use the web server to do all the heavy lifting. Why should you handle threads and/or processes when the web server will do all that for you? The standard arrangement in Python deployments is: the web server starts a number of processes, each running a complete Python interpreter and loading all your data into memory; an HTTP request comes in and gets dispatched off to some process; the process does your calculation and returns the result directly to the web server and user; when you need to change your code or the graph data, you restart the web server and go back to step 1. This is the architecture used by Django and other popular web frameworks.
0
2,509
1
4
2009-11-04T15:51:00.000
python,nginx,load-balancing,wsgi,reverse-proxy
how to process long-running requests in python workers?
1
4
7
1,682,864
0
0
0
I have a Python (well, it's PHP now but we're rewriting it) function that takes some parameters (A and B) and computes some results (it finds the best path from A to B in a graph; the graph is read-only). In a typical scenario one call takes 0.1s to 0.9s to complete. This function is accessed by users as a simple REST web service (GET bestpath.php?from=A&to=B). The current implementation is quite naive - it's a simple PHP script + Apache + mod_php + APC, and every request needs to load all the data (over 12MB in PHP arrays), create all structures, compute a path and exit. I want to change it. I want a setup with N independent workers (X per server with Y servers), where each worker is a Python app running in a loop (get request -> process -> send reply -> get request...), and each worker can process one request at a time. I need something that will act as a frontend: get requests from users, manage a queue of requests (with a configurable timeout) and feed my workers one request at a time. How should I approach this? Can you propose a setup? nginx + FastCGI or WSGI or something else? HAProxy? As you can see I am a newbie in Python, reverse proxies, etc. I just need a starting point for the architecture (and data flow). By the way, the workers use read-only data, so there is no need to maintain locking or communication between them.
false
1,674,696
0
1
0
0
You could use the nginx load balancer to proxy to Python Paste's paster server (which serves WSGI, for example Pylons); it launches each request in a separate thread anyway.
0
2,509
1
4
2009-11-04T15:51:00.000
python,nginx,load-balancing,wsgi,reverse-proxy
how to process long-running requests in python workers?
1
4
7
1,676,102
0
0
0
How do I get started with XML-RPC and Joomla? I've been looking around for documentation and finding nothing... I'd like to connect to a Joomla server (after enabling the core Joomla XML-RPC plugin) and be able to do things like log in, add an article, and tweak all the parameters of the article if possible. My XML-RPC client implementation will be in Python.
true
1,694,205
1.2
1
0
3
The book "Mastering Joomla 1.5 Extension and Framework Development" has a nice explanation of that. Joomla has a few XML-RPC plugins that let you do a few things, like the Blogger API interface (plugins/xmlrpc/blogger.php). You should create your own XML-RPC plugin to do the custom things you want.
0
2,670
0
3
2009-11-07T19:53:00.000
python,joomla,xml-rpc
Joomla and XMLRPC
1
1
1
1,696,183
0
1
0
Currently I'm writing a crawler script. One problem is that sometimes, when I open a web page with PAMIE, the page can't open and hangs forever. Is there any method to close PAMIE's IE or win32com's IE - for example, if the page doesn't respond or doesn't finish loading within 10 seconds or so? Thanks in advance.
false
1,698,362
0
1
0
0
I think what you are looking for is somewhere to set the timeout on your request. I would suggest looking into the documentation on PAMIE.
0
294
0
0
2009-11-08T23:40:00.000
python,time,multithreading,pamie
win32com and PAMIE web page open timeout
1
2
2
1,698,371
0
1
0
Currently I'm writing a crawler script. One problem is that sometimes, when I open a web page with PAMIE, the page can't open and hangs forever. Is there any method to close PAMIE's IE or win32com's IE - for example, if the page doesn't respond or doesn't finish loading within 10 seconds or so? Thanks in advance.
true
1,698,362
1.2
1
0
2
To initialize your PAMIE instance, just use PAMIE(timeOut=100) or whatever value you need. The unit of measure for timeOut is tenths of a second (!); the default is 3000 (300 seconds, i.e., 5 minutes); with 100 as suggested, you'd time out after 10 seconds as you requested. (You can pass the timeOut= parameter even when you're initializing with a URL, but in that case the timeout will only be active after the initial navigation.)
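For illustration only - the import path here is an assumption (PAMIE 2.x ships a cPAMIE module, but your version may differ); passing a URL together with timeOut= to the constructor is exactly what the answer above describes:

    from cPAMIE import PAMIE

    ie = PAMIE('http://example.com/slow-page', timeOut=100)   # 100 tenths of a second ~= 10s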
0
294
0
0
2009-11-08T23:40:00.000
python,time,multithreading,pamie
win32com and PAMIE web page open timeout
1
2
2
1,698,422
0
0
0
I am trying to use Python to write a client that connects to a custom HTTP server that uses digest authentication. I can connect and pull the first request without problem. Using tcpdump (I am on Mac OS X - I am both a Mac and a Python noob) I can see the first request is actually two HTTP requests, as you would expect if you are familiar with RFC 2617. The first results in the 401 UNAUTHORIZED. The header information sent back from the server is correctly used to generate headers for a second request with some custom Authorization header values, which yields a 200 OK response and the payload. Everything is great. My HTTPDigestAuthHandler opener is working, thanks to urllib2. In the same program I attempt to request a second, different page from the same server. I expect, per the RFC, that the tcpdump will show only one request this time, using almost all the same Authorization header information (nc should increment). Instead it starts from scratch and first gets the 401 and regenerates the information needed for a 200. Is it possible with urllib2 to have subsequent requests with digest authentication recycle the known Authorization header values and only do one request? [Re-read that a couple of times until it makes sense; I am not sure how to make it any plainer.] Google has yielded surprisingly little, so I guess not. I looked at the code for urllib2.py and it's really messy (comments like: "This isn't a fabulous effort"), so I wouldn't be shocked if this was a bug. I noticed that my Connection header is Closed, and even if I set it to keep-alive, it gets overwritten. That led me to keepalive.py, but that didn't work for me either. Pycurl won't work either. I can hand-code the entire interaction, but I would like to piggyback on existing libraries where possible. In summary, is it possible with urllib2 and digest authentication to get 2 pages from the same server with only 3 HTTP requests executed (2 for the first page, 1 for the second)? If you happen to have tried this before and already know it's not possible, please let me know. If you have an alternative, I am all ears. Thanks in advance.
false
1,706,644
0.197375
0
0
1
Although it's not available out of the box, urllib2 is flexible enough to add it yourself. Subclass HTTPDigestAuthHandler, hack it (retry_http_digest_auth method I think) to remember authentication information and define an http_request(self, request) method to use it for all subsequent requests (add WWW-Authenticate header).
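A rough, untested sketch of that idea: it remembers the last Authorization value the handler produced and replays it on later requests. Note it does not increment the nc counter, so a strict server may still force a new challenge:

    import urllib2

    class KeepAuthDigestHandler(urllib2.HTTPDigestAuthHandler):
        def __init__(self, *args, **kwargs):
            urllib2.HTTPDigestAuthHandler.__init__(self, *args, **kwargs)
            self.saved_auth = None

        def http_error_401(self, req, fp, code, msg, headers):
            # let the normal digest handshake run, then keep the header it built
            result = urllib2.HTTPDigestAuthHandler.http_error_401(
                self, req, fp, code, msg, headers)
            self.saved_auth = req.get_header('Authorization')
            return result

        def http_request(self, req):
            # called for every outgoing request; pre-load the remembered credentials
            if self.saved_auth and not req.has_header('Authorization'):
                req.add_unredirected_header('Authorization', self.saved_auth)
            return req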
0
1,902
0
1
2009-11-10T09:29:00.000
python,authentication,urllib2,digest
Client Digest Authentication Python with URLLIB2 will not remember Authorization Header Information
1
1
1
1,717,241
0
0
0
Then how do I import that? I run everything in Python 2.4, but one of my scripts imports xml.etree.ElementTree... which is only in Python 2.5.
true
1,713,398
1.2
0
0
4
Then it fails. You can't import a python 2.5 library while you're running python 2.4. It won't work. Why can't you run python 2.5+?
1
1,923
0
0
2009-11-11T06:20:00.000
python,linux,unix
What if one of my programs runs in python 2.4, but IMPORTS something that requires python 2.5?
1
1
5
1,713,411
0
1
0
Is there any Python library that can deserialize data that was serialized with Java?
false
1,714,624
1
0
0
6
Java binary serialization is really designed to be used with Java. To do it in Python you'd have to have all the relevant Java classes available for inspection, and create Python objects appropriately - it would be pretty hideous and fragile. You're better off using a cross-platform serialization format such as Thrift, Protocol Buffers, JSON or XML. If you can't change which serialization format is used in the Java code, I'd suggest writing new Java code which deserializes from the binary format and then reserializes to a cross-platform format.
0
8,371
0
13
2009-11-11T11:33:00.000
java,python,serialization
Is there any library to deserialize with python which is serialized with java
1
2
7
1,714,644
0
1
0
Is there any Python library that can deserialize data that was serialized with Java?
false
1,714,624
0
0
0
0
If you are using Java classes, then I don't even know what it would mean to deserialize a Java class in a Python environment. If you are only using simple primitives (ints, floats, strings), then it probably wouldn't be too hard to build a Python library that could deserialize the Java format. But as others have said, there are better cross-platform solutions.
0
8,371
0
13
2009-11-11T11:33:00.000
java,python,serialization
Is there any library to deserialize with python which is serialized with java
1
2
7
1,714,862
0
0
0
I currently have a small Python script that I'm using to spawn multiple executables, (voice chat servers), and in the next version of the software, the servers have the ability to receive heartbeat signals on the UDP port. (There will be possibly thousands of servers on one machine, ranging from ports 7878 and up) My problem is that these servers might (read: will) be running on the same machine as my Python script and I had planned on opening a UDP port, and just sending the heartbeat, waiting for the reply, and voila...I could restart servers when/if they weren't responding by killing the task and re-loading the server. Problem is that I cannot open a UDP port that the server is already using. Is there a way around this? The project lead is implementing the heartbeat still, so I'm sure any suggestions in how the heartbeat system could be implemented would be welcome also. -- This is a pretty generic script though that might apply to other programs so my main focus is still communicating on that UDP port.
false
1,722,993
0.099668
0
0
1
I'm pretty sure this is possible on Linux; I don't know about other UNIXes. There are two ways to propagate a file descriptor from one process to another: When a process fork()s, the child inherits all the file descriptors of the parent. A process can send a file descriptor to another process over a "UNIX Domain Socket". See sendmsg() and recvmsg(). In Python, the _multiprocessing extension module will do this for you; see _multiprocessing.sendfd() and _multiprocessing.recvfd(). I haven't experimented with multiple processes listening on UDP sockets. But for TCP, on Linux, if multiple processes all listen on a single TCP socket, one of them will be randomly chosen when a connection comes in. So I suspect Linux does something sensible when multiple processes are all listening on the same UDP socket. Try it and let us know!
0
5,210
1
0
2009-11-12T15:22:00.000
python,udp,communication,daemon,ports
Multiple programs using the same UDP port? Possible?
1
2
2
1,723,643
0
0
0
I currently have a small Python script that I'm using to spawn multiple executables, (voice chat servers), and in the next version of the software, the servers have the ability to receive heartbeat signals on the UDP port. (There will be possibly thousands of servers on one machine, ranging from ports 7878 and up) My problem is that these servers might (read: will) be running on the same machine as my Python script and I had planned on opening a UDP port, and just sending the heartbeat, waiting for the reply, and voila...I could restart servers when/if they weren't responding by killing the task and re-loading the server. Problem is that I cannot open a UDP port that the server is already using. Is there a way around this? The project lead is implementing the heartbeat still, so I'm sure any suggestions in how the heartbeat system could be implemented would be welcome also. -- This is a pretty generic script though that might apply to other programs so my main focus is still communicating on that UDP port.
true
1,722,993
1.2
0
0
2
This isn't possible. What you'll have to do is have one UDP master program that handles all UDP communication over the one port, and communicates with your servers in another way (UDP on different ports, named pipes, ...)
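A bare-bones sketch of such a UDP master; the port number and the "each server replies with its own id" convention are invented for illustration:

    import socket

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(('', 7878))        # only this master process owns the heartbeat port
    sock.settimeout(5.0)

    while True:
        try:
            data, addr = sock.recvfrom(1024)
        except socket.timeout:
            continue             # no replies lately; could mark servers as dead here
        server_id = data.strip() # assumes each voice server answers with its own id
        print 'heartbeat from', server_id, 'at', addr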
0
5,210
1
0
2009-11-12T15:22:00.000
python,udp,communication,daemon,ports
Multiple programs using the same UDP port? Possible?
1
2
2
1,723,017
0
0
0
In short I'm creating a Flash based multiplayer game and I'm now starting to work on the server-side code. Well I'm the sole developer of the project so I'm seeking a high-level socket library that works well with games to speed up my development time. I was trying to use the Twisted Framework (for Python) but I'm having some personal issues with it so I'm looking for another solution. I'm open to either Java or a Python based library. The main thing is that the library is stable enough for multiplayer games and the library needs to be "high-level" (abstract) since I'm new to socket programming for games. I want to also note that I will be using the raw binary socket for my Flash game (Actionscript 3.0) since I assume it will be faster than the traditional Flash XML socket.
false
1,728,266
0
0
0
0
High-level on one side and raw binary sockets on the other won't work. Sorry, but you'll need to go low-level on the server side too. EDIT: in response to the OP's comment. I am not aware of any "high level" interface of the nature that you are talking about for Java. And frankly I don't think it makes a lot of sense. If you are going to talk bytes over Socket streams you really do need to understand the standard JDK Socket / ServerSocket APIs; e.g. timeouts, keep-alive, etc.
0
8,269
0
10
2009-11-13T09:55:00.000
java,python,sockets
Seeking a High-Level Library for Socket Programming (Java or Python)
1
1
4
1,728,302
0
0
0
I'm not familiar with PowerBuilder, but I have a task to create an automatic UI test application for PB. We've decided to do it in Python with the pywinauto and IAccessible libraries. The problem is that some UI elements, like newly added list records, cannot be accessed from it (even Inspect32 can't get them). Any ideas how to reach these elements and make them testable?
false
1,741,023
0.099668
0
0
2
I'm experimenting with code for a tool for automating PowerBuilder-based GUIs as well. From what I can see, your best bet would be to use the PowerBuilder Native Interface (PBNI), and call PowerScript code from within your NVO. If you like, feel free to send me an email (see my profile for my email address), I'd be interested in exchanging ideas about how to do this.
0
3,825
0
6
2009-11-16T09:23:00.000
python,testing,powerbuilder
How to make PowerBuilder UI testing application?
1
2
4
1,741,142
1
0
0
I'm not familiar with PowerBuilder, but I have a task to create an automatic UI test application for PB. We've decided to do it in Python with the pywinauto and IAccessible libraries. The problem is that some UI elements, like newly added list records, cannot be accessed from it (even Inspect32 can't get them). Any ideas how to reach these elements and make them testable?
false
1,741,023
0.049958
0
0
1
I've seen in the AutomatedQA support material that they have a recipe recommending using MSAA and setting some properties on the controls. I do not know if it works.
0
3,825
0
6
2009-11-16T09:23:00.000
python,testing,powerbuilder
How to make PowerBuilder UI testing application?
1
2
4
2,328,021
1
0
0
How can I run a Python program so it outputs its STDOUT and inputs its STDIN to/from a remote telnet client? All the program does is print out text then wait for raw_input(), repeatedly. I want a remote user to use it without needing shell access. It can be single threaded/single user.
false
1,758,276
0.244919
0
0
5
Make the Python script the shell for that user. (Or, if that doesn't work, wrap it up in a bash script or even an executable.) (You might have to put it in /etc/shells (or the equivalent).)
0
953
0
3
2009-11-18T19:02:00.000
python,telnet
How can I run a Python program over telnet?
1
2
4
1,758,310
0
0
0
How can I run a Python program so it outputs its STDOUT and inputs its STDIN to/from a remote telnet client? All the program does is print out text then wait for raw_input(), repeatedly. I want a remote user to use it without needing shell access. It can be single threaded/single user.
false
1,758,276
0
0
0
0
You can just create a new linux user and set their shell to your script. Then when they telnet in and enter the username/password, the program runs instead of bash or whatever the default shell is.
0
953
0
3
2009-11-18T19:02:00.000
python,telnet
How can I run a Python program over telnet?
1
2
4
1,760,716
0
0
0
When I set the Firefox proxy with the Python webdriver bindings, it doesn't wait until the page is fully downloaded; this doesn't happen when I don't set one. How can I change this behavior? Or how can I check that the page download is over?
true
1,785,607
1.2
0
0
1
The simplest thing to do is to poll the page looking for an element you know will be present once the download is complete. The Java webdriver bindings offer a "Wait" class for just this purpose, though there isn't (yet) an analogue for this in the python bindings.
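A hand-rolled poll along those lines; the method name follows the Selenium/WebDriver Python bindings (find_element_by_id), which may differ in older snapshots, and the element id is just an example:

    import time

    def wait_for_element(driver, element_id, timeout=30, poll=0.5):
        end = time.time() + timeout
        while time.time() < end:
            try:
                return driver.find_element_by_id(element_id)   # found -> page is ready
            except Exception:
                time.sleep(poll)                               # not there yet, keep polling
        raise RuntimeError('element %r never appeared' % element_id)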
0
1,062
0
1
2009-11-23T20:06:00.000
python,firefox,proxy,webdriver
Python Webdriver doesn't wait until the page is downloaded in Firefox when used with proxy
1
1
1
1,790,122
0
0
0
For example, I want to join a prefix path to resource paths like /js/foo.js. I want the resulting path to be relative to the root of the server. In the above example if the prefix was "media" I would want the result to be /media/js/foo.js. os.path.join does this really well, but how it joins paths is OS dependent. In this case I know I am targeting the web, not the local file system. Is there a best alternative when you are working with paths you know will be used in URLs? Will os.path.join work well enough? Should I just roll my own?
true
1,793,261
1.2
0
0
74
Since, from the comments the OP posted, it seems he doesn't want to preserve "absolute URLs" in the join (which is one of the key jobs of urlparse.urljoin;-), I'd recommend avoiding that. os.path.join would also be bad, for exactly the same reason. So, I'd use something like '/'.join(s.strip('/') for s in pieces) (if the leading / must also be ignored -- if the leading piece must be special-cased, that's also feasible of course;-).
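A small helper along those lines, purely illustrative (it also adds back a single leading slash, which is what the asker's /media/js/foo.js example needs):

    def url_path_join(*pieces):
        """Join URL path pieces with single slashes, ignoring stray leading/trailing slashes."""
        return '/' + '/'.join(s.strip('/') for s in pieces if s.strip('/'))

    print url_path_join('media', '/js/', 'foo.js')   # -> /media/js/foo.js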
1
128,395
0
147
2009-11-24T22:06:00.000
python,url
How to join components of a path when you are constructing a URL in Python
1
1
14
1,794,540
0
0
0
The normal behavior of urllib/urllib2 is that if an error code is sent in the header of the response (e.g. 404), an exception is raised. How do you look for specific errors (40x or 50x) and, based on the different errors, do different things? Also, how do you read the actual data being returned (HTML/JSON etc.)? The data usually has error details, which is different from the HTTP error code.
false
1,803,741
0.099668
0
0
1
In urllib2 the HTTPError exception is also a valid HTTP response object, so you can treat an HTTP error as an exceptional event or as a valid response. But in urllib you have to subclass URLopener and define http_error_<code> method[s], or redefine http_error_default to handle them all.
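The usual urllib2 pattern looks roughly like this: HTTPError doubles as a response object, so you can branch on e.code and still read the error body the server sent (the URL is a placeholder):

    import urllib2

    try:
        body = urllib2.urlopen('http://example.com/api/thing').read()
    except urllib2.HTTPError, e:
        error_body = e.read()            # the HTML/JSON error details, if any
        if 400 <= e.code < 500:
            print 'client error', e.code, error_body[:200]
        else:
            print 'server error', e.code, error_body[:200]
    except urllib2.URLError, e:
        print 'could not reach server:', e.reason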
0
5,703
0
4
2009-11-26T13:43:00.000
python,error-handling
Error codes returned by urllib/urllib2 and the actual page
1
1
2
1,803,796
0
1
0
Basically, I have a list of 30,000 URLs. The script goes through the URLs and downloads them (with a 3-second delay in between), and then it stores the HTML in a database. And it loops and loops... Why does it randomly get "Killed."? I didn't touch anything. Edit: this happens on 3 of my Linux machines. The machines are on a Rackspace cloud with 256 MB of memory. Nothing else is running.
false
1,811,173
0.039979
0
0
1
Is it possible that it's hitting an uncaught exception? Are you running this from a shell, or is it being run from cron or in some other automated way? If it's automated, the output may not be displayed anywhere.
0
20,694
0
17
2009-11-28T00:47:00.000
python,mysql,url
Why does my python script randomly get killed?
1
2
5
1,811,196
0
1
0
Basically, I have a list of 30,000 URLs. The script goes through the URLs and downloads them (with a 3-second delay in between), and then it stores the HTML in a database. And it loops and loops... Why does it randomly get "Killed."? I didn't touch anything. Edit: this happens on 3 of my Linux machines. The machines are on a Rackspace cloud with 256 MB of memory. Nothing else is running.
false
1,811,173
0.039979
0
0
1
Are you using a queue manager or process manager of some sort? I got apparently random "Killed" messages when the batch queue manager I was using sent SIGUSR2 when the time was up. Otherwise I strongly favor the out-of-memory explanation.
0
20,694
0
17
2009-11-28T00:47:00.000
python,mysql,url
Why does my python script randomly get killed?
1
2
5
1,811,350
0
0
0
I can't delete Firefox cookies from webdriver. When I use the .delete_all_cookies method, it returns None. And when I try to get_cookies, I get the following error: webdriver_common.exceptions.ErrorInResponseException: Error occurred when processing packet:Content-Length: 120 {"elementId": "null", "context": "{9b44672f-d547-43a8-a01e-a504e617cfc1}", "parameters": [], "commandName": "getCookie"} response:Length: 266 {"commandName":"getCookie","isError":true,"response":{"lineNumber":576,"message":"Component returned failure code: 0x80004005 (NS_ERROR_FAILURE) [nsIDOMLocation.host]","name":"NS_ERROR_FAILURE"},"elementId":"null","context":"{9b44672f-d547-43a8-a01e-a504e617cfc1} "} How can I fix it? Update: this happens with a clean installation of webdriver with no modifications. The changes I've mentioned in another post were made after this post was posted (I was trying to fix the issue myself).
true
1,813,044
1.2
0
0
0
Hmm, I actually haven't worked with Webdriver, so this may be of no help at all... but in your other post you mention that you're experimenting with modifying the delete-cookie webdriver JS function. Did get_cookies fail before you were modifying the delete function? What happens when you get cookies before deleting them? I would guess that the modification you're making to the delete function in webdriver-read-only\firefox\src\extension\components\firefoxDriver.js could break the delete function. Are you doing it just for debugging, or do you actually want the browser itself to show a pop-up when the driver tells it to delete cookies? It wouldn't surprise me if this modification broke something. My real advice, though, would actually be to start using Selenium instead of Webdriver, since Webdriver is being discontinued in its current incarnation, or morphed into Selenium. Selenium is more actively developed and has pretty active and responsive forums. It will continue to be developed and stable while the merge is happening, whereas I take it Webdriver might not have as many bugfixes going forward. I've had success using the Selenium commands that control cookies. They seem to be revamping their documentation and for some reason there isn't any link to the Python API, but if you download Selenium RC you can find the Python API doc in selenium-client-driver-python. You'll see there are a good 5 or so useful methods for controlling cookies, which you can use in your own custom Python methods if you want to, say, delete all the cookies with a name matching a certain regexp. If for some reason you do want the browser to alert() some info about the deleted cookies too, you could do that by getting the cookie names/values from the Python method, and then passing them to Selenium's getEval() statement, which will execute arbitrary JS you feed it (like "alert()"). ... If you do go the Selenium route, feel free to contact me if you hit a blocker; I might be able to assist.
0
2,862
0
0
2009-11-28T16:59:00.000
python,firefox,webdriver
How to delete Firefox cookies from webdriver in python?
1
1
1
1,814,160
0
0
0
I'm writing a script which needs the browser that Selenium is operating to close and re-open, without losing its cookies. Any idea on how to go about it? Basically, it's a check to see that if the user opens and closes his browser, his cookies stay intact.
false
1,818,969
0.291313
0
0
3
You should be able to use the stop and start commands. You will need to ensure that you are not clearing cookies between sessions, and depending on the browser you're launching you may also need to use the -browserSessionReuse command line option.
0
1,370
0
1
2009-11-30T10:20:00.000
python,selenium
Close and open a new browser in Selenium
1
2
2
1,819,059
0
0
0
I'm writing a script which needs the browser that Selenium is operating to close and re-open, without losing its cookies. Any idea on how to go about it? Basically, it's a check to see that if the user opens and closes his browser, his cookies stay intact.
false
1,818,969
0
0
0
0
This is a feature of the browser and not your concern: If there is a bug in the browser, then there is little you can do. If you need to know whether a certain version of the browser works correctly, then define a manual test (write a document that explains the steps), do it once and record the result somewhere (like "Browser XXX version YYY works"). When you know that a certain browser (version) works, then that's not going to change, so there is no need to repeat the test.
0
1,370
0
1
2009-11-30T10:20:00.000
python,selenium
Close and open a new browser in Selenium
1
2
2
1,819,042
0
1
0
My application has a lot of calculation being done in JavaScript according to how and when the user acts on the application. The project prints out valuable information (through console calls) about how this calculation is going, so we can easily spot any NaNs creeping in. We are planning to integrate Selenium (RC with Python) to test our project, but if we could get the console output messages in the Python test case, we could identify any NaNs or even any miscalculations. So, is there a way that Selenium can absorb these outputs (preferably in a console-less environment)? If not, I would like to know if I can divert the console calls, maybe by rebinding the console variable to something else, so that Selenium can get that output and notify the Python side. Or, if not the console, is there any other way that I can achieve this? I know Selenium has commands like waitForElementPresent etc., but I don't want to show these intermediate calculations in the application - or is that the only way? Any help appreciated. Thank you.
false
1,819,903
0.099668
0
0
1
If you are purely testing that the JavaScript functions are performing the correct calculations with the given inputs, I would suggest separating your JavaScript from your page and use a JavaScript testing framework to test the functionality. Testing low level code using Selenium is a lot of unnecessary overhead. If you're going against the fully rendered page, this would require your application to be running to a server, which should not be a dependency of testing raw JavaScript. We recently converted our application from using jsUnit to use YUI Test and it has been promising so far. We run about 150 tests in both FireFox and IE in less than three minutes. Our testing still isn't ideal - we still test a lot of JavaScript the hard way using Selenium. However, moving some of the UI tests to YUI Test has saved us a lot of time in our Continuous Integration environment.
0
2,109
0
3
2009-11-30T13:42:00.000
javascript,python,testing,selenium,selenium-rc
Javascript communication with Selenium (RC)
1
1
2
1,902,514
0
1
0
There is CherryPy. Are there any others?
false
1,835,668
0.066568
0
0
1
Also: web.py (webpy.org) and Paste (pythonpaste.org).
0
507
0
2
2009-12-02T20:43:00.000
python,http
What Python-only HTTP/1.1 web servers are available?
1
1
3
1,837,070
0
1
0
I want to automate interaction with a webpage. I've been using pycurl up till now, but eventually the webpage will use JavaScript, so I'm looking for alternatives. A typical interaction is "open the page, search for some text, click on a link (which opens a form), fill out the form and submit". We're deploying on Google App Engine, if that makes a difference. Clarification: we're deploying the webpage on App Engine, but the interaction is run on a separate machine. So Selenium seems like the best choice.
false
1,836,987
1
0
0
6
Twill and mechanize don't do Javascript, and Qt and Selenium can't run on App Engine ((1)), which only supports pure Python code. I do not know of any pure-Python Javascript interpreter, which is what you'd need to deploy a JS-supporting scraper on App Engine:-(. Maybe there's something in Java, which would at least allow you to deploy on (the Java version of) App Engine? App Engine app versions in Java and Python can use the same datastore, so you could keep some part of your app in Python... just not the part that needs to understand Javascript. Unfortunately I don't know enough about the Java / AE environment to suggest any specific package to try. ((1)): to clarify, since there seems to be a misunderstanding that has gotten so far as to get me downvoted: if you run Selenium or other scrapers on a different computer, you can of course target a site deployed in App Engine (it doesn't matter how the website you're targeting is deployed, what programming language[s] it uses, etc, etc, as long as it's a website you can access [[real website: flash, &c, may likely be different]]). How I read the question is, the OP is looking for ways to have the scraping run as part of an App Engine app -- that is the problematic part, not where you (or somebody else;-) runs the site being scraped!
0
9,387
0
4
2009-12-03T00:59:00.000
python,google-app-engine,pycurl
Automate interaction with a webpage in python
1
1
5
1,837,397
0
1
0
When I view the source of the page, I do not find the image src, but the image is displayed on the page. This image is generated by some server-side code. I am using Selenium for testing. I want to download this image for verification/comparison. How can I get that image using Python?
false
1,838,047
0
0
0
0
If you just want to download the image, there are two strategies you can try: (1) use something like Firebug or the Chrome developer tools - right-click the element in question, click "Inspect Element", and look at the CSS properties of the element; if you look around, you should find something like a background-image style or maybe just a normal img tag, and then you'll have the URL of the image. (2) Use something like Firebug or the Chrome developer tools and look in the "Resources" tab for image files that show up.
0
163
0
0
2009-12-03T06:26:00.000
python
How to get the URL Image which is displyed by script
1
2
3
1,870,291
0
1
0
When I view the source of the page, I do not find the image src, but the image is displayed on the page. This image is generated by some server-side code. I am using Selenium for testing. I want to download this image for verification/comparison. How can I get that image using Python?
false
1,838,047
0
0
0
0
If you aren't seeing an actual image tag in the HTML, your next step would seem to be figuring out how its being displayed. The first place I'd suggest looking is in the .css files for this page. Images can actually be embedded using CSS, and this seems like the next likely option after being in the HTML code itself. If it isn't in there, you may be dealing with some form of technique deliberately intended to prevent you from being able to download the image with a script. This may use obfuscated JavaScript or something similar and I wouldn't expect people to be able to give you an easy solution to bypass it (since it was carefully designed to resist exactly that).
0
163
0
0
2009-12-03T06:26:00.000
python
How to get the URL Image which is displyed by script
1
2
3
1,838,243
0
1
0
I want to learn how to use XMPP and to create a simple web application with real collaboration features. I am writing the application with Python (WSGI), and the application will require JavaScript enabled because I am going to use jQuery or Dojo. I have downloaded Openfire for the server - but which library should I choose? SleekXMPP is giving me trouble with the tlslite module (it targets Python 2.5, and I need Python 2.6 only). What is your suggestion?
false
1,847,120
0
0
0
0
I have found that a lot of the issues are with Openfire and TLS, not with the XMPP lib :( -- SleekXMPP in the trunk has been converted to Python 3.0, and a branch is maintained for Python 2.5. Unlike Julien, I would only go with Twisted Words if you really need the power of Twisted or if you are already using Twisted. IMO SleekXMPP offers the closest match to the current XEPs in use today.
0
1,373
0
3
2009-12-04T14:02:00.000
javascript,python,xmpp,wsgi
Best XMPP Library for Python Web Application
1
1
3
1,881,020
0
0
0
Are they just the same protocol or something different? I am just confused about it. Actually, I want to call a web service written in C# with ASP.NET from Python. I have tried XML-RPC but it just did not seem to work. So what is the actual difference among them? Thanks.
false
1,847,534
0.148885
0
0
3
They are completely different protocols; you need to find out the protocol used by the web service you wish to consume and program to that. Web services is really just a concept; XML-RPC, SOAP and REST are actual technologies that implement this concept. These implementations are not interoperable (without some translation layer). All these protocols enable basically the same sort of thing - calling into some remote application over the web. However, the details of how they do this differ; they are not just different names for the same protocol.
0
2,232
0
3
2009-12-04T15:06:00.000
c#,python,web-services,xml-rpc
Can anyone explain the difference between XMLRPC, SOAP and also the C# Web Service?
1
3
4
1,847,560
0
0
0
Are they just the same protocol or something different? I am just confused about it. Actually, I want to call a web service written in C# with ASP.NET from Python. I have tried XML-RPC but it just did not seem to work. So what is the actual difference among them? Thanks.
true
1,847,534
1.2
0
0
5
All of them use the same transport protocol (HTTP). XMLRPC formats a traditional RPC call with XML for remote execution. SOAP wraps the call in a SOAP envelope (still XML, different formatting, oriented towards message based services rather than RPC style calls). If you're using C#, your best bet is probably SOAP based Web Services (at least out of the options you listed).
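As one possible starting point (not the only option), the third-party suds library can consume the WSDL that an ASP.NET service publishes; the URL, method name and arguments below are placeholders:

    from suds.client import Client

    client = Client('http://example.com/Service.asmx?WSDL')   # ASP.NET exposes WSDL at ?WSDL
    print client                                              # prints the available operations
    result = client.service.SomeMethod('arg1', 42)            # hypothetical method and args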
0
2,232
0
3
2009-12-04T15:06:00.000
c#,python,web-services,xml-rpc
Can anyone explain the difference between XMLRPC, SOAP and also the C# Web Service?
1
3
4
1,847,573
0
0
0
Are they just the same protocol or something different? I am just confused about it. Actually, I want to call a web service written in C# with ASP.NET from Python. I have tried XML-RPC but it just did not seem to work. So what is the actual difference among them? Thanks.
false
1,847,534
0.049958
0
0
1
XML-RPC: it's a mechanism to call remote procedures and functions across the network for distributed system integration. It uses XML-based message documents and HTTP as the transport protocol. Further, it only supports 6 basic data types, as well as arrays, for communication. SOAP: SOAP is also an XML-based protocol for information exchange using HTTP as the transport protocol. However, it is more advanced than the XML-RPC protocol. It uses XML-formatted messages that help communicate complex data types across distributed applications, and hence it is widely used nowadays.
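For comparison, calling an XML-RPC service from Python needs only the standard library; the endpoint and method name are placeholders:

    import xmlrpclib

    proxy = xmlrpclib.ServerProxy('http://example.com/xmlrpc')
    print proxy.some_method(10, 'hello')      # hypothetical remote method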
0
2,232
0
3
2009-12-04T15:06:00.000
c#,python,web-services,xml-rpc
Can anyone explain the difference between XMLRPC, SOAP and also the C# Web Service?
1
3
4
3,913,857
0
1
0
I'm writing a spider in Python to crawl a site. The trouble is, I need to examine about 2.5 million pages, so I could really use some help optimizing it for speed. What I need to do is examine each page for a certain number and, if it is found, record the link to the page. The spider is very simple; it just needs to sort through a lot of pages. I'm completely new to Python, but have used Java and C++ before. I have yet to start coding it, so any recommendations on libraries or frameworks to use would be great. Any optimization tips are also greatly appreciated.
false
1,853,673
0.099668
0
0
3
You waste a lot of time waiting for network requests when spidering, so you'll definitely want to make your requests in parallel. I would probably save the result data to disk and then have a second process looping over the files searching for the term. That phase could easily be distributed across multiple machines if you needed extra performance.
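A rough sketch of that parallel-download idea with a thread pool and a queue; the URL list, thread count and the "certain number" check are all placeholders:

    import threading, urllib2, Queue

    urls = Queue.Queue()
    for i in range(100):
        urls.put('http://example.com/page%d' % i)

    matches = []
    lock = threading.Lock()

    def worker():
        while True:
            try:
                url = urls.get_nowait()
            except Queue.Empty:
                return
            html = urllib2.urlopen(url).read()
            if '12345' in html:              # stand-in for the real check
                lock.acquire()
                matches.append(url)
                lock.release()

    threads = [threading.Thread(target=worker) for _ in range(10)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print matches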
0
7,091
0
6
2009-12-05T22:28:00.000
python,web-crawler
Writing a Faster Python Spider
1
3
6
1,853,865
0
1
0
I'm writing a spider in Python to crawl a site. The trouble is, I need to examine about 2.5 million pages, so I could really use some help optimizing it for speed. What I need to do is examine each page for a certain number and, if it is found, record the link to the page. The spider is very simple; it just needs to sort through a lot of pages. I'm completely new to Python, but have used Java and C++ before. I have yet to start coding it, so any recommendations on libraries or frameworks to use would be great. Any optimization tips are also greatly appreciated.
false
1,853,673
0
0
0
0
What Adam said. I did this once to map out Xanga's network. The way I made it faster was by having a thread-safe set containing all the usernames I had to look up. Then I had 5 or so threads making requests at the same time and processing them. You're going to spend way more time waiting for the page to download than you will processing any of the text (most likely), so just find ways to increase the number of requests you can make at the same time.
0
7,091
0
6
2009-12-05T22:28:00.000
python,web-crawler
Writing a Faster Python Spider
1
3
6
1,854,592
0
1
0
I'm writing a spider in Python to crawl a site. The trouble is, I need to examine about 2.5 million pages, so I could really use some help optimizing it for speed. What I need to do is examine each page for a certain number and, if it is found, record the link to the page. The spider is very simple; it just needs to sort through a lot of pages. I'm completely new to Python, but have used Java and C++ before. I have yet to start coding it, so any recommendations on libraries or frameworks to use would be great. Any optimization tips are also greatly appreciated.
false
1,853,673
0.099668
0
0
3
Spidering somebody's site with millions of requests isn't very polite. Can you instead ask the webmaster for an archive of the site? Once you have that, it's a simple matter of text searching.
0
7,091
0
6
2009-12-05T22:28:00.000
python,web-crawler
Writing a Faster Python Spider
1
3
6
1,853,689
0
1
0
I am a complete, total beginner in programming, although I do have knowledge of CSS and HTML. I would like to learn Python. I downloaded lots of source code but the amount of files and the complexity really confuses me. I don't know where to begin. Is there a particular order I should look for? Thanks. EDIT: Sorry guys, I forgot to mention that I already have both the online tutorial and a couple of books handy. I basically I don't quite understand how to "dismantle" and understand complex source code, in order to grasp programming techniques and concepts. EDIT2: Thanks for the extremely quick comments, guys. I really appreciate it. This website is awesome.
false
1,854,827
0
0
0
0
Maybe you have a project in mind that you want to code up? It's very hard to read what other people write, the best way to learn is to try something. Other people will have gone through the problems you will come across, and so why code is written the way it is may start to make sense. This is an excellent site to post questions, no matter how stupid you consider them.
0
1,248
0
2
2009-12-06T09:04:00.000
python,coding-style,code-readability
How can a total, complete beginner read source code?
1
5
9
2,628,550
0
1
0
I am a complete, total beginner in programming, although I do have knowledge of CSS and HTML. I would like to learn Python. I downloaded lots of source code but the amount of files and the complexity really confuses me. I don't know where to begin. Is there a particular order I should look for? Thanks. EDIT: Sorry guys, I forgot to mention that I already have both the online tutorial and a couple of books handy. I basically I don't quite understand how to "dismantle" and understand complex source code, in order to grasp programming techniques and concepts. EDIT2: Thanks for the extremely quick comments, guys. I really appreciate it. This website is awesome.
false
1,854,827
1
0
0
6
I would recommend you understand the basics. What are methods, classes, variables and so on. It would be important to understand the constructs you are seeing. If you don't understand those then it's just going to be a bunch of characters.
0
1,248
0
2
2009-12-06T09:04:00.000
python,coding-style,code-readability
How can a total, complete beginner read source code?
1
5
9
1,854,832
0
1
0
I am a complete, total beginner in programming, although I do have knowledge of CSS and HTML. I would like to learn Python. I downloaded lots of source code but the amount of files and the complexity really confuses me. I don't know where to begin. Is there a particular order I should look for? Thanks. EDIT: Sorry guys, I forgot to mention that I already have both the online tutorial and a couple of books handy. I basically I don't quite understand how to "dismantle" and understand complex source code, in order to grasp programming techniques and concepts. EDIT2: Thanks for the extremely quick comments, guys. I really appreciate it. This website is awesome.
false
1,854,827
0.066568
0
0
3
Donald Knuth suggests: "It [is] basically the way you solve some kind of unknown puzzle -- make tables and charts and get a little more information here and make a hypothesis." (From "Coders at Work", Chapter 15) In my opinion, the easiest way to understand a program is to study the data structures first. Write them down, memorize them. Only then, think about how they move through program-time. As an aside, it is sort of a shame how few books there are on code reading. "Coders at Work" is probably the best so far. Ironically, "Reading Code" is one of the worst so far.
0
1,248
0
2
2009-12-06T09:04:00.000
python,coding-style,code-readability
How can a total, complete beginner read source code?
1
5
9
1,973,070
0
1
0
I am a complete, total beginner in programming, although I do have knowledge of CSS and HTML. I would like to learn Python. I downloaded lots of source code but the amount of files and the complexity really confuses me. I don't know where to begin. Is there a particular order I should look for? Thanks. EDIT: Sorry guys, I forgot to mention that I already have both the online tutorial and a couple of books handy. I basically I don't quite understand how to "dismantle" and understand complex source code, in order to grasp programming techniques and concepts. EDIT2: Thanks for the extremely quick comments, guys. I really appreciate it. This website is awesome.
false
1,854,827
0.066568
0
0
3
There is no magic way to learn anything without reading and writing code yourself. If you get stuck there are always folks in SO who will help you.
0
1,248
0
2
2009-12-06T09:04:00.000
python,coding-style,code-readability
How can a total, complete beginner read source code?
1
5
9
1,854,841
0
1
0
I am a complete, total beginner in programming, although I do have knowledge of CSS and HTML. I would like to learn Python. I downloaded lots of source code but the amount of files and the complexity really confuses me. I don't know where to begin. Is there a particular order I should look for? Thanks. EDIT: Sorry guys, I forgot to mention that I already have both the online tutorial and a couple of books handy. I basically I don't quite understand how to "dismantle" and understand complex source code, in order to grasp programming techniques and concepts. EDIT2: Thanks for the extremely quick comments, guys. I really appreciate it. This website is awesome.
false
1,854,827
0.066568
0
0
3
To understand source code in any language, you first need to learn the language. It's as simple as that! Usually, reading source code (as a sole activity) will hurt your head without giving much benefit in terms of learning the underlying language. You need a structured tour through carefully chosen small source code examples, such as a book or tutorial will give you. Check Amazon out for books and Google for tutorials, try a few. The links offered by some of the other answers would also be a great starting point.
0
1,248
0
2
2009-12-06T09:04:00.000
python,coding-style,code-readability
How can a total, complete beginner read source code?
1
5
9
1,854,836
0
0
0
I'm working on writing a Python client for Direct Connect P2P networks. Essentially, it works by connecting to a central server, and responding to other users who are searching for files. Occasionally, another client will ask us to connect to them, and they might begin downloading a file from us. This is a direct connection to the other client, and doesn't go through the central server. What is the best way to handle these connections to other clients? I'm currently using one Twisted reactor to connect to the server, but is it better have multiple reactors, one per client, with each one running in a different thread? Or would it be better to have a completely separate Python script that performs the connection to the client? If there's some other solution that I don't know about, I'd love to hear it. I'm new to programming with Twisted, so I'm open to suggestions and other resources. Thanks!
true
1,856,786
1.2
0
0
3
Without knowing all the details of the protocol, I would still recommend using a single reactor -- a reactor scales quite well (especially advanced ones such as PollReactor) and this way you will avoid the overhead connected with threads (that's how Twisted and other async systems get their fundamental performance boost, after all -- by avoiding such overhead). In practice, threads in Twisted are useful mainly when you need to interface to a library whose functions could block on you.
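A skeletal illustration of that single-reactor layout: the same reactor drives the hub connection and any later peer connections; all protocol logic is stubbed out and the host/port values are invented:

    from twisted.internet import reactor, protocol

    class HubClient(protocol.Protocol):
        def dataReceived(self, data):
            pass    # parse hub traffic; when asked, open a peer connection (see below)

    class PeerConnection(protocol.Protocol):
        def dataReceived(self, data):
            pass    # handle a direct client-to-client transfer

    hub_factory = protocol.ClientFactory()
    hub_factory.protocol = HubClient
    peer_factory = protocol.ClientFactory()
    peer_factory.protocol = PeerConnection

    reactor.connectTCP('hub.example.com', 411, hub_factory)   # central server connection
    # later, when another client asks us to connect:
    # reactor.connectTCP(peer_host, peer_port, peer_factory)
    reactor.run()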
0
1,176
1
3
2009-12-06T22:10:00.000
python,twisted,p2p
Proper way to implement a Direct Connect client in Twisted?
1
1
1
1,857,145
0
0
0
I have a Python script gathering info from some remote network devices. The output can be maybe 20 to 1000 lines of text. This then goes into Excel on my local PC for now. Access to this Linux device is convoluted: a Citrix session to a remote Windows server, then SSH to the Linux device halfway around the world. There is no FTP, SCP, or anything like that, so I can't generate the Excel file on the Linux device and transfer it locally. The ONLY way to get the info is to copy/paste from the SSH window onto the local machine and post-process it. My question is: what would be the best format to generate (from a user point of view, as others will be using it)? 1. leave it as it is now (spaces & tabs), 2. reformat it as CSV, or 3. convert it to XML?
false
1,870,383
0
0
0
0
Reformat it as CSV. It's dead easy to do, is fairly human readable, and can be read by loads of pieces of spreadsheet software.
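Converting the whitespace-separated output to CSV takes only the standard library; the filenames and the assumption that fields never contain embedded spaces are placeholders for your real data:

    import csv

    src = open('device_output.txt')
    dst = open('report.csv', 'wb')
    writer = csv.writer(dst)
    for line in src:
        writer.writerow(line.split())    # split on runs of spaces/tabs
    src.close()
    dst.close()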
0
720
0
0
2009-12-08T22:36:00.000
python,xml,excel,csv
Python to generate output ready for Excel
1
1
3
1,870,411
0
1
0
I'm working on a small Wave thingy where I need to load a wave based on an outside event, so I don't have a context to work with. I've been looking at the Python API for a while but I can't figure out the correct way to get a wave object (that I can then call CreateBlip() on) when I just have the waveid. Is there something I've overlooked, or do I have to make a 'raw' JSON request instead of using the API?
false
1,876,908
0
0
0
0
At the moment the answer is that it can't be done. Hopefully in a future version of the API.
0
95
0
1
2009-12-09T21:11:00.000
python,google-wave
Loading a wave from waveid
1
1
2
1,932,852
0
0
0
What would be the best way of unpacking a Python string into fields? I have data received from a TCP socket; it is packed as follows (I believe it will be in a string returned by the socket recv function). It has the following format: uint8 - header; uint8 - length; uint32 - typeID; uint16 - param1; uint16 - param2; uint16 - param3; uint16 - param4; char[24] - name string; uint32 - checksum; uint8 - footer. (I also need to unpack other packets with formats different to the above.) How do I unpack these? I am new to Python and have done a bit of C. If I were using C, I would probably use a structure; would this be the way to go with Python? Regards, X
false
1,879,914
0.039979
0
0
1
Is this the best way of doing this, or is there a better way? It is likely that there will be strings with other formats which will require a different unpacking scheme. field1 = struct.unpack('B', data[0]); field2 = struct.unpack('B', data[1]); field3 = struct.unpack('!I', data[2:6]); field4 = struct.unpack('!H', data[6:8]); field5 = struct.unpack('!H', data[8:10]); field6 = struct.unpack('!H', data[10:12]); field7 = struct.unpack('!H', data[12:14]); field8 = struct.unpack('24s', data[14:38]); field9 = struct.unpack('!I', data[38:42]); field10 = struct.unpack('B', data[42]). Regards
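The same unpacking can be done in one call with a full format string, which documents the packet layout in one place and avoids slicing mistakes (network byte order assumed, as in the answer above):

    import struct

    PACKET = struct.Struct('!BBIHHHH24sIB')   # header, length, typeID, 4 params, 24-byte name, checksum, footer

    def parse(data):
        (header, length, type_id, p1, p2, p3, p4,
         name, checksum, footer) = PACKET.unpack(data[:PACKET.size])
        return header, length, type_id, (p1, p2, p3, p4), name.rstrip('\x00'), checksum, footer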
0
11,515
0
5
2009-12-10T09:50:00.000
python,string,unpack
Decoding packed data into a structure
1
1
5
1,880,427
0
0
0
I have built an XML-RPC interface in Python and I need to enforce some stricter typing. For example, passing string '10' instead of int 10. I can clean this up with some type casting and a little exception handling, but I am wondering if there is any other way of forcing type integrity such as something XML-RPC specific, a decorator, or something else.
false
1,925,487
0.066568
1
0
1
It's always going to be converted to a string anyway, so why do you care what's being passed in? If you use "%s" % number or even just str(number), then it doesn't matter whether number is a string or an int.
0
220
0
2
2009-12-18T00:12:00.000
python,django
XML-RPC method parameter data typing in Python
1
1
3
1,925,617
0
0
0
I need some code to get the address of the socket I just created (to filter out packets originating from localhost on a multicast network). This: socket.gethostbyname(socket.gethostname()) works on Mac, but it returns only the localhost IP on Linux... Is there any way to get the LAN address? Thanks. --edit-- Is it possible to get it from the socket settings itself? The OS has to select a LAN IP to send on... Can I play with getsockopt(... IP_MULTICAST_IF...)? I don't know exactly how to use this, though. --- edit --- SOLVED! send_sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_LOOP, 0) - putting this on the send socket eliminated packet echoes to the host sending them, which eliminates the need for the program to know which IP the OS has selected to send on. Yay!
false
1,925,974
0
0
0
0
Quick answer - socket.getpeername() (provided that socket is a socket object, not the module). (Playing around in a python/ipython/idle/... interactive shell is very helpful.) ... Or, if I read your question carefully, maybe socket.getsockname() :)
0
626
1
0
2009-12-18T02:49:00.000
python,sockets,ip-address
How to get the LAN IP that a socket is sending (linux)
1
1
2
1,926,048
0
0
0
I've got a python web crawler and I want to distribute the download requests among many different proxy servers, probably running squid (though I'm open to alternatives). For example, it could work in a round-robin fashion, where request1 goes to proxy1, request2 to proxy2, and eventually looping back around. Any idea how to set this up? To make it harder, I'd also like to be able to dynamically change the list of available proxies, bring some down, and add others. If it matters, IP addresses are assigned dynamically. Thanks :)
true
1,934,088
1.2
0
0
6
Make your crawler keep a list of proxies and, with each HTTP request, let it use the next proxy from the list in a round-robin fashion. However, this will prevent you from using HTTP/1.1 persistent connections. Modifying the proxy list will eventually result in using a new proxy or no longer using a removed one. Alternatively, have several connections open in parallel, one to each proxy, and distribute your crawling requests across the open connections. The dynamic behaviour may be implemented by having each connector register itself with the request dispatcher.
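A round-robin sketch with urllib2; the proxy addresses are invented, and because itertools.cycle caches the list it is given, a dynamically changing proxy set would need the pool rebuilt whenever the list changes:

    import itertools, urllib2

    proxies = ['10.0.0.1:3128', '10.0.0.2:3128', '10.0.0.3:3128']
    proxy_pool = itertools.cycle(proxies)

    def fetch(url):
        proxy = proxy_pool.next()                        # next proxy, round-robin
        opener = urllib2.build_opener(urllib2.ProxyHandler({'http': proxy}))
        return opener.open(url).read()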
0
16,640
0
10
2009-12-19T20:46:00.000
python,proxy,screen-scraping,web-crawler,squid
Rotating Proxies for web scraping
1
1
3
1,934,198
0