Dataset columns (name, dtype, observed min / max):

    Web Development                     int64          0 / 1
    Data Science and Machine Learning   int64          0 / 1
    Question                            stringlengths  28 / 6.1k
    is_accepted                         bool           2 classes
    Q_Id                                int64          337 / 51.9M
    Score                               float64        -1 / 1.2
    Other                               int64          0 / 1
    Database and SQL                    int64          0 / 1
    Users Score                         int64          -8 / 412
    Answer                              stringlengths  14 / 7k
    Python Basics and Environment       int64          0 / 1
    ViewCount                           int64          13 / 1.34M
    System Administration and DevOps    int64          0 / 1
    Q_Score                             int64          0 / 1.53k
    CreationDate                        stringlengths  23 / 23
    Tags                                stringlengths  6 / 90
    Title                               stringlengths  15 / 149
    Networking and APIs                 int64          1 / 1
    Available Count                     int64          1 / 12
    AnswerCount                         int64          1 / 28
    A_Id                                int64          635 / 72.5M
    GUI and Desktop Applications        int64          0 / 1
0
0
I tried using the ssl module in Python 2.6 but I was told that it wasn't available. After installing OpenSSL, I recompiled 2.6 but the problem persists. Any suggestions?
false
979,551
-0.066568
0
0
-1
Use the binaries provided by python.org or by your OS distributor. It's a lot easier than building it yourself, and all the features are usually compiled in. If you really need to build it yourself, you'll need to provide more information here about what build options you provided, what your environment is like, and perhaps provide some logs.
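A quick way to confirm whether the interpreter you built actually has SSL support is to attempt the import; a minimal sketch (the question targets Python 2.6, where print is a statement, but the approach is the same):

```python
# Check whether this interpreter's ssl support was compiled in.
try:
    import ssl
except ImportError:
    print("ssl module missing: rebuild after installing the OpenSSL headers")
else:
    print("ssl module available, linked against", ssl.OPENSSL_VERSION)
```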
0
7,646
0
2
2009-06-11T05:56:00.000
python,ssl,openssl
Adding SSL support to Python 2.6
1
2
3
996,622
0
0
0
I have a Pythonic HTTP server that is supposed to determine client's IP. How do I do that in Python? Is there any way to get the request headers and extract it from there? PS: I'm using WebPy.
false
979,599
0.291313
0
0
3
web.env.get('REMOTE_ADDR')
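web.py's environment is just the WSGI environ dict, so the same lookup can be written framework-independently. A sketch (the X-Forwarded-For handling is an extra assumption, only trustworthy behind a proxy you control):

```python
def client_ip(environ):
    # Behind a reverse proxy the original client is usually the first
    # entry in X-Forwarded-For; otherwise fall back to REMOTE_ADDR.
    forwarded = environ.get("HTTP_X_FORWARDED_FOR")
    if forwarded:
        return forwarded.split(",")[0].strip()
    return environ.get("REMOTE_ADDR", "unknown")

print(client_ip({"REMOTE_ADDR": "203.0.113.7"}))  # → 203.0.113.7
print(client_ip({"REMOTE_ADDR": "10.0.0.1",
                 "HTTP_X_FORWARDED_FOR": "198.51.100.4, 10.0.0.1"}))  # → 198.51.100.4
```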
0
1,464
0
3
2009-06-11T06:16:00.000
python,http,header,request,ip
Extracting IP from request in Python
1
1
2
979,637
0
0
0
I have installed lxml, which was built using a standalone version of libxml2. The reason for this was that lxml needed a later version of libxml2 than the one currently installed. When I use the lxml module, how do I tell Python where to find the correct version of the libxml2 shared library?
true
985,155
1.2
0
0
5
Assuming you're talking about a .so file, it's not up to Python to find it -- it's up to the operating system's dynamic library loader. For Linux, for example, LD_LIBRARY_PATH is the environment variable you need to set.
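Because the loader reads LD_LIBRARY_PATH at process start, one workaround is to inject it into the environment and launch a fresh interpreter as a child process. A sketch (the /opt/libxml2/lib path is a made-up example):

```python
import os
import subprocess
import sys

# LD_LIBRARY_PATH must be set BEFORE the process that loads the .so
# starts, so inject it and re-exec a child interpreter.
env = dict(os.environ)
env["LD_LIBRARY_PATH"] = "/opt/libxml2/lib:" + env.get("LD_LIBRARY_PATH", "")
result = subprocess.run(
    [sys.executable, "-c", "import os; print(os.environ['LD_LIBRARY_PATH'])"],
    env=env, capture_output=True, text=True,
)
print(result.stdout.strip())  # the child sees the injected path first
```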
1
2,097
0
5
2009-06-12T05:28:00.000
python
How to specify native library search path for python
1
1
1
985,176
0
1
0
I have a django application hosted on webfaction which now has a static/private IP. Our network in the office is obviously behind a firewall, and the AD server is running behind this firewall. From inside the network I can authenticate using python-ldap with the AD's internal IP address and port 389, and all works well. When I move this to the hosted webserver, I change the IP address and port to the ones opened up on our firewall. For simplicity, the port we opened up is 389; however, the requests to authenticate always time out. When logged into webfaction and running python from the shell, querying the IP address gives me webfaction's general IP address rather than my static IP. Is this what's happening when I try to auth in django - the request comes from the underlying IP address that python is running on rather than the static IP that my firewall is expecting? I'm fairly clueless about all this networking and port mapping, so any help would be much appreciated! Hope that makes sense.
false
990,459
0
0
0
0
There are quite a few components between your hosted django application and your internal AD. You will need to test each to see if everything in the pathways between them is correct.

So your AD server is sitting behind your firewall. Your firewall has ip "a.b.c.d" and all traffic to the firewall ip on port 389 is forwarded to the AD server. I would recommend that you change this to a higher, more random port on your firewall, btw - fewer scans there.

With the shell access you can test to see if you can reach your network. Have your firewall admin check the firewall logs while you try one of the following (or something similar with python):

Check the route to your firewall (this might not work if webfaction blocks it; otherwise you will see a list of hosts along which your traffic will pass - if there is a firewall on the route somewhere, you will see that your connection is lost there, as this is dropped by default on most firewalls):

    tracert a.b.c.d

Do a telnet to your firewall ip on port 389 (the telnet test will allow your firewall admin to see the connection attempts coming in on port 389 in his log; if those do arrive, external comm should work fine):

    telnet a.b.c.d 389

Similarly, you need to check that your AD server receives these requests (check your logs) and can respond to them. Perhaps your AD server is not set up to talk to the firewall?
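The telnet check described above can also be done from Python itself, which is handy when all you have on the web host is a shell. A sketch (host and port are placeholders):

```python
import socket

def can_reach(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds --
    the pure-Python equivalent of `telnet host 389`."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Run this from the hosted shell against the firewall's public address: a silent timeout usually means a firewall dropped the packets, while an immediate refusal means the host is reachable but the port is closed.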
0
2,571
0
0
2009-06-13T10:09:00.000
python,active-directory,ldap,webserver
Python LDAP Authentication from remote web server
1
1
2
991,550
0
1
0
I want to use mechanize with python to get all the links of a page, and then open the links. How can I do it?
false
1,011,975
0.197375
0
0
2
The Browser object in mechanize has a links method that will retrieve all the links on the page.
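For illustration, the same link extraction can be done with only the standard library (a stand-in sketch for what mechanize's Browser.links() returns; the original question targets Python 2 + mechanize):

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect the href targets of <a> tags."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

collector = LinkCollector()
collector.feed('<a href="/one">1</a><p>x</p><a href="/two">2</a>')
print(collector.links)  # → ['/one', '/two']
```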
0
8,097
0
3
2009-06-18T10:32:00.000
python,mechanize
How to get links on a webpage using mechanize and open those links
1
1
2
1,012,022
0
0
0
I'm trying to use python to sftp a file, and the code works great in the interactive shell -- even pasting it in all at once. When I try to import the file (just to compile it), the code hangs with no exceptions or obvious errors. How do I get the code to compile, or does someone have working code that accomplishes sftp by some other method? This code hangs right at the ssh.connect() statement:

    """
    ProblemDemo.py
    Chopped down from the paramiko demo file.
    This code works in the shell but hangs when I try to import it!
    """
    from time import sleep
    import os
    import paramiko

    sOutputFilename = "redacted.htm"  #-- The payload file
    hostname = "redacted.com"
    ####-- WARNING! Embedded passwords! Remove ASAP.
    sUsername = "redacted"
    sPassword = "redacted"
    sTargetDir = "redacted"

    #-- Get host key, if we know one.
    hostkeytype = None
    hostkey = None
    host_keys = {}
    try:
        host_keys = paramiko.util.load_host_keys(os.path.expanduser('~/.ssh/known_hosts'))
    except IOError:
        try:
            # try ~/ssh/ too, because windows can't have a folder named ~/.ssh/
            host_keys = paramiko.util.load_host_keys(os.path.expanduser('~/ssh/known_hosts'))
        except IOError:
            print '*** Unable to open host keys file'
            host_keys = {}

    if host_keys.has_key(hostname):
        hostkeytype = host_keys[hostname].keys()[0]
        hostkey = host_keys[hostname][hostkeytype]
        print 'Using host key of type %s' % hostkeytype

    ssh = paramiko.Transport((hostname, 22))
    ssh.connect(username=sUsername, password=sPassword, hostkey=hostkey)
    sftp = paramiko.SFTPClient.from_transport(ssh)
    sftp.chdir(sTargetDir)
    sftp.put(sOutputFilename, sOutputFilename)
    ssh.close()
true
1,013,064
1.2
1
0
0
Weirdness aside, I was just using import to compile the code. Turning the script into a function seems like an unnecessary complication for this kind of application. Searched for alternate means to compile and found:

    import py_compile
    py_compile.compile("ProblemDemo.py")

This generated a pyc file that works as intended. So the lesson learned is that import is not a robust way to compile python scripts.
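The difference can be demonstrated end to end with a throwaway file; a small sketch:

```python
import os
import py_compile
import tempfile

# Byte-compile a script WITHOUT importing it -- importing executes all
# module-level code, which is what made the original script hang.
with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write("print('module-level code that import would run')\n")
    script = f.name

pyc = py_compile.compile(script)  # returns the .pyc path on Python 3
print(os.path.exists(pyc))  # → True
```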
0
6,518
0
3
2009-06-18T14:45:00.000
python,shell,compilation,sftp
Why does this python code hang on import/compile but work in the shell?
1
1
3
1,013,366
0
0
0
I have formatted text (with newlines, tabs, etc.) coming in from a Telnet connection. I have a python script that manages the Telnet connection and embeds the Telnet response in XML that then gets passed through an XSLT transform. How do I pass that XML through the transform without losing the original formatting? I have access to the transformation script and the python script but not the transform invocation itself.
false
1,015,816
0
0
0
0
Data stored in XML comes out the same way it goes in. So if you store the text in an element, no whitespace and newlines are lost unless you tamper with the data in the XSLT. Enclosing the text in CDATA is unnecessary unless there is some formatting that is invalid in XML (pointy brackets, ampersands, quotes) and you don't want to XML-escape the text under any circumstances. This is up to you, but in any case XML-escaping is completely transparent when the XML is handled with an XML-aware tool chain. To answer your question more specifically, you need to show some input, the essential part of the transformation, and some output.
0
223
0
0
2009-06-19T00:02:00.000
python,xslt
Passing Formatted Text Through XSLT
1
1
2
1,016,919
0
0
0
I'm trying to build some statistics for an email group I participate in. Is there any Python API to access the email data on a Google Group? Also, I know some statistics are available on the group's main page, but I'm looking for something more complex than what is shown there.
true
1,017,794
1.2
1
0
3
There isn't an API that I know of, however you can access the XML feed and manipulate it as required.
0
1,163
0
6
2009-06-19T12:48:00.000
python,google-groups
Is there an API to access a Google Group data?
1
1
1
1,017,810
0
1
0
I am tired of clicking "File" and then "Save Page As" in Firefox when I want to save some websites. Is there any script to do this in Python? I would like to save the pictures and css files so that when I read it offline, it looks normal.
false
1,035,825
0.039979
0
0
1
Like Cobbal stated, this is largely what wget is designed to do. I believe there are some flags/arguments you can set to make it download the entire page, CSS and all. I suggest aliasing it into something more convenient to type, or tossing it into a quick script.
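As a concrete sketch, the wget invocation usually cited for offline copies can be assembled from Python (the flag set is the commonly used one; the URL is illustrative and nothing is executed here):

```python
import shlex

# --page-requisites   fetch the images/CSS the page needs
# --convert-links     rewrite links so the saved copy works offline
# --adjust-extension  save pages with .html extensions where appropriate
url = "https://example.com/article.html"
cmd = ["wget", "--page-requisites", "--convert-links", "--adjust-extension", url]
print(shlex.join(cmd))
```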
0
2,633
0
3
2009-06-23T23:40:00.000
python
Any Python Script to Save Websites Like Firefox?
1
1
5
1,035,855
0
1
0
I am trying to write a Python-based Web Bot that can read and interpret an HTML page, then execute an onClick function and receive the resulting new HTML page. I can already read the HTML page and I can determine the functions to be called by the onClick command, but I have no idea how to execute those functions or how to receive the resulting HTML code. Any ideas?
false
1,036,660
0
0
0
0
Well obviously python won't interpret the JS for you (though there may be modules out there that can). I suppose you need to convert the JS instructions to equivalent transformations in Python. I suppose ElementTree or BeautifulSoup would be good starting points to interpret the HTML structure.
0
6,042
0
3
2009-06-24T06:15:00.000
python,html,bots
Python Web-based Bot
1
2
7
1,036,758
0
1
0
I am trying to write a Python-based Web Bot that can read and interpret an HTML page, then execute an onClick function and receive the resulting new HTML page. I can already read the HTML page and I can determine the functions to be called by the onClick command, but I have no idea how to execute those functions or how to receive the resulting HTML code. Any ideas?
false
1,036,660
0
0
0
0
Why don't you just sniff what gets sent after the onclick event and replicate that with your bot?
0
6,042
0
3
2009-06-24T06:15:00.000
python,html,bots
Python Web-based Bot
1
2
7
5,873,989
0
0
0
What's the Fastest way to get a large number of files (relatively small 10-50kB) from Amazon S3 from Python? (In the order of 200,000 - million files). At the moment I am using boto to generate Signed URLs, and using PyCURL to get the files one by one. Would some type of concurrency help? PyCurl.CurlMulti object? I am open to all suggestions. Thanks!
false
1,051,275
0.066568
0
0
2
I don't know anything about python, but in general you would want to break the task down into smaller chunks so that they can be run concurrently. You could break it down by file type, or alphabetical or something, and then run a separate script for each portion of the break down.
0
3,744
0
3
2009-06-26T21:02:00.000
python,curl,amazon-s3,amazon-web-services,boto
Downloading a Large Number of Files from S3
1
2
6
1,051,338
0
0
0
What's the Fastest way to get a large number of files (relatively small 10-50kB) from Amazon S3 from Python? (In the order of 200,000 - million files). At the moment I am using boto to generate Signed URLs, and using PyCURL to get the files one by one. Would some type of concurrency help? PyCurl.CurlMulti object? I am open to all suggestions. Thanks!
false
1,051,275
0
0
0
0
I've been using txaws with twisted for S3 work, though what you'd probably want is just to get the authenticated URL and use twisted.web.client.DownloadPage (by default it will happily go from stream to file without much interaction).

Twisted makes it easy to run at whatever concurrency you want. For something on the order of 200,000, I'd probably make a generator and use a cooperator to set my concurrency, and just let the generator generate every required download request.

If you're not familiar with twisted, you'll find the model takes a bit of time to get used to, but it's oh so worth it. In this case, I'd expect it to take minimal CPU and memory overhead, but you'd have to worry about file descriptors. It's quite easy to mix in perspective broker and farm the work out to multiple machines should you find yourself needing more file descriptors, or if you have multiple connections over which you'd like it to pull down.
0
3,744
0
3
2009-06-26T21:02:00.000
python,curl,amazon-s3,amazon-web-services,boto
Downloading a Large Number of Files from S3
1
2
6
1,051,408
0
0
0
I need to write a script that connects to a bunch of sites on our corporate intranet over HTTPS and verifies that their SSL certificates are valid; that they are not expired, that they are issued for the correct address, etc. We use our own internal corporate Certificate Authority for these sites, so we have the public key of the CA to verify the certificates against. Python by default just accepts and uses SSL certificates when using HTTPS, so even if a certificate is invalid, Python libraries such as urllib2 and Twisted will just happily use the certificate. How do I verify a certificate in Python?
false
1,087,227
-0.01818
0
0
-1
I was having the same problem but wanted to minimize 3rd party dependencies (because this one-off script was to be executed by many users). My solution was to wrap a curl call and make sure that the exit code was 0. Worked like a charm.
0
206,011
0
87
2009-07-06T14:17:00.000
python
Validate SSL certificates with Python
1
1
11
20,517,707
0
0
0
My web page is written in python, and I am able to get the IP address of the user who accesses it. We also want to get the MAC address of the user's PC; is that possible in python? We are using Linux PCs and want to get it on Linux.
false
1,092,379
0.049958
0
0
1
All you can access is what the user sends to you. MAC address is not part of that data.
0
7,015
1
2
2009-07-07T13:35:00.000
python,python-3.x
want to get mac address of remote PC
1
1
4
1,092,392
0
0
0
I am writing an application to test a network driver's handling of corrupted data. I thought of sending this data using a raw socket, so it will not be corrected by the sending machine's TCP/IP stack. I am writing this application solely for Linux. I have code examples of using raw sockets via system calls, but I would really like to keep my test as dynamic as possible and write most, if not all, of it in Python. I have googled the web a bit for explanations and examples of the usage of raw sockets in python, but haven't found anything really enlightening - just a very old code example that demonstrates the idea but by no means works. From what I gathered, raw socket usage in Python is nearly identical in semantics to UNIX's raw sockets, but without the structs that define the packet structure. I was wondering whether it would be better not to write the raw socket part of the test in Python, but in C with system calls, and call it from the main Python code?
false
1,117,958
0.049958
0
0
2
Eventually the best solution for this case was to write the entire thing in C, because it's not a big application, so writing such a small thing in more than one language would have incurred a greater penalty. After much toying with both the C and python raw sockets, I eventually preferred the C raw sockets. Raw sockets require bit-level modifications of groups smaller than 8 bits for writing the packet headers - sometimes writing only 4 bits or fewer. Python offers no assistance for this, whereas Linux C has a full API for it. But I definitely believe that if only this little bit of header initialization were handled conveniently in python, I would never have used C here.
0
109,072
1
46
2009-07-13T06:36:00.000
python,sockets,raw-sockets
How Do I Use Raw Socket in Python?
1
1
8
1,186,810
0
0
0
To be more specific, I'm using python and making a pool of HTTPConnection (httplib) and was wondering if there is an limit on the number of concurrent HTTP connections on a windows server.
false
1,121,951
0.291313
0
0
3
AFAIK, the number of internet sockets (necessary to make TCP/IP connections) is naturally limited on every machine, but it's pretty high. 1000 simultaneous connections shouldn't be a problem for the client machine, as each socket uses only a little memory. If you start receiving data through all these channels, this might change, though. I've heard of test setups that created a couple of thousand connections simultaneously from a single client. The story is usually different for the server, when it does heavy lifting for each incoming connection (like forking off a worker process etc.). 1000 incoming connections will impact its performance, and coming from the same client they can easily be taken for a DoS attack. I hope you're in charge of both the client and the server... or is it the same machine?
0
4,587
0
4
2009-07-13T20:48:00.000
python
What is the maximum simultaneous HTTP connections allowed on one machine (windows server 2008) using python
1
1
2
1,122,107
0
0
0
I can't run firefox from a sudoed python script that drops privileges to a normal user. If I write:

    $ sudo python
    >>> import os
    >>> import pwd, grp
    >>> uid = pwd.getpwnam('norby')[2]
    >>> gid = grp.getgrnam('norby')[2]
    >>> os.setegid(gid)
    >>> os.seteuid(uid)
    >>> import webbrowser
    >>> webbrowser.get('firefox').open('www.google.it')
    True
    >>> # It returns true but doesn't work
    >>> from subprocess import Popen, PIPE
    >>> p = Popen('firefox www.google.it', shell=True, stdout=PIPE, stderr=PIPE)
    >>> # Doesn't execute the command

    You shouldn't really run Iceweasel through sudo WITHOUT the -H option.
    Continuing as if you used the -H option.
    No protocol specified
    Error: cannot open display: :0

I think that is not a python problem, but a firefox/iceweasel/debian configuration problem. Maybe firefox reads only the UID and not the EUID, and doesn't execute the process because the UID is equal to 0. What do you think?
true
1,139,835
1.2
0
0
1
This could be your environment. Changing the permissions will still leave environment variables like $HOME pointing at the root user's directory, which will be inaccessible. It may be worth trying altering these variables by changing os.environ before launching the browser. There may also be other variables worth checking.
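Concretely, the environment fix-up might look like this sketch ('norby' is the user from the question; the helper name is made up):

```python
import os
import pwd

def demoted_env(username, base=None):
    """Copy of the environment with HOME/USER/LOGNAME pointed at the
    target user, for use alongside seteuid/setegid before launching
    the browser."""
    env = dict(os.environ if base is None else base)
    record = pwd.getpwnam(username)  # raises KeyError if the user is unknown
    env["HOME"] = record.pw_dir
    env["USER"] = env["LOGNAME"] = record.pw_name
    return env

# e.g. subprocess.Popen(['firefox', 'www.google.it'], env=demoted_env('norby'))
```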
0
903
1
0
2009-07-16T19:38:00.000
python,browser,debian,uid
Python fails to execute firefox webbrowser from a root executed script with privileges drop
1
1
1
1,140,199
0
0
0
I’m looking for a quick way to get an HTTP response code from a URL (i.e. 200, 404, etc). I’m not sure which library to use.
false
1,140,661
0.07486
0
0
3
The urllib2.HTTPError exception does not contain a getcode() method. Use the code attribute instead.
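Illustrated with Python 3's spelling of the same class (urllib.error.HTTPError; on Python 2 it is urllib2.HTTPError, where only the attribute exists):

```python
import urllib.error

# HTTPError doubles as a response object; the numeric HTTP status
# lives in its .code attribute. The URL below is illustrative.
err = urllib.error.HTTPError(
    url="http://example.com/missing", code=404, msg="Not Found",
    hdrs=None, fp=None,
)
print(err.code)  # → 404
```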
0
153,420
0
90
2009-07-16T22:27:00.000
python
What’s the best way to get an HTTP response code from a URL?
1
1
8
1,491,225
0
0
0
I have a large xml file (40 Gb) that I need to split into smaller chunks. I am working with limited space, so is there a way to delete lines from the original file as I write them to new files? Thanks!
false
1,145,286
-0.028564
0
0
-1
It's time to buy a new hard drive! You can make a backup before trying all the other answers, so you don't lose data :)
0
1,833
0
8
2009-07-17T19:41:00.000
python,file
Change python file in place
1
3
7
1,148,604
0
0
0
I have a large xml file (40 Gb) that I need to split into smaller chunks. I am working with limited space, so is there a way to delete lines from the original file as I write them to new files? Thanks!
false
1,145,286
0.028564
0
0
1
I'm pretty sure there is, as I've even been able to edit/read from the source files of scripts I've run, but the biggest problem would probably be all the shifting that would be done if you started at the beginning of the file.

On the other hand, if you go through the file and record all the starting positions of the lines, you could then go in reverse order of position to copy the lines out; once that's done, you could go back, take the new files, one at a time, and (if they're small enough) use readlines() to generate a list, reverse the order of the list, then seek to the beginning of the file and overwrite the lines in their old order with the lines in their new one.

(You would truncate the file after reading the first block of lines from the end by using the truncate() method, which truncates all data past the current file position if used without any arguments besides that of the file object, assuming you're using one of the classes, or a subclass of one, from the io package to read your file. You'd just have to make sure that the current file position ends up at the beginning of the last line to be written to a new file.)

EDIT: Based on your comment about having to make the separations at the proper closing tags, you'll probably also have to develop an algorithm to detect such tags (perhaps using the peek method), possibly using a regular expression.
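The "peel chunks off the end and truncate" idea can be sketched line-by-line. This is a toy: readlines() on the whole file is only for the demo (a real 40 GB pass would track byte offsets instead), and real XML splitting still needs the tag-aware boundary handling the answers mention. The function name is made up:

```python
import tempfile

def split_from_end(path, lines_per_chunk=2):
    """Repeatedly move the LAST lines_per_chunk lines of `path` into a new
    chunk file, truncating the original, so peak disk usage stays near one
    chunk rather than a full copy."""
    chunks = []
    while True:
        with open(path, "r+") as f:
            lines = f.readlines()
            if not lines:
                break
            head = lines[:-lines_per_chunk]
            tail = lines[-lines_per_chunk:]
            out = tempfile.NamedTemporaryFile("w", delete=False, suffix=".part")
            out.writelines(tail)
            out.close()
            chunks.append(out.name)
            f.seek(0)
            f.writelines(head)
            f.truncate()  # drop everything past the rewritten head
    return chunks
```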
0
1,833
0
8
2009-07-17T19:41:00.000
python,file
Change python file in place
1
3
7
1,145,329
0
0
0
I have a large xml file (40 Gb) that I need to split into smaller chunks. I am working with limited space, so is there a way to delete lines from the original file as I write them to new files? Thanks!
false
1,145,286
0
0
0
0
If time is not a major factor (or wear and tear on your disk drive):

1. Open a handle to the file
2. Read up to the size of your partition / logical break point (due to the xml)
3. Save the rest of your file to disk (not sure how python handles this as far as directly overwriting the file or memory usage)
4. Write the partition to disk
5. Go to 1

If Python does not give you this level of control, you may need to dive into C.
0
1,833
0
8
2009-07-17T19:41:00.000
python,file
Change python file in place
1
3
7
1,145,341
0
0
0
I'm trying to extract some data from various HTML pages using a python program. Unfortunately, some of these pages contain user-entered data which occasionally has "slight" errors - namely tag mismatching. Is there a good way to have python's xml.dom try to correct errors or something of the sort? Alternatively, is there a better way to extract data from HTML pages which may contain errors?
false
1,147,090
0
0
0
0
If jython is acceptable to you, tagsoup is very good at parsing junk - if it is, I found the jdom libraries far easier to use than other xml alternatives. This is a snippet from a demo mockup to do with screen scraping from tfl's journey planner:

    private Document getRoutePage(HashMap params) throws Exception {
        String uri = "http://journeyplanner.tfl.gov.uk/bcl/XSLT_TRIP_REQUEST2";
        HttpWrapper hw = new HttpWrapper();
        String page = hw.urlEncPost(uri, params);
        SAXBuilder builder = new SAXBuilder("org.ccil.cowan.tagsoup.Parser");
        Reader pageReader = new StringReader(page);
        return builder.build(pageReader);
    }
0
778
0
0
2009-07-18T09:24:00.000
python,xml,dom,expat-parser
Python xml.dom and bad XML
1
1
4
1,149,208
0
1
0
I have a dilemma: I want to create an application that manipulates Google Contacts information. The problem comes down to the fact that Python only supports version 1.0 of the api whilst Java supports 3.0. I also want it to be web-based, so I'm having a look at google app engine, but it seems that only the python version of app engine supports importing the gdata apis whilst java does not. So it's either web-based with version 1.0 of the api, or non-web-based with version 3.0. I actually need version 3.0 to get access to the extra fields provided by google contacts. So my question is: is there a way to get access to the gdata api under Google App Engine using Java? If not, is there an ETA on when version 3.0 of the gdata api will be released for python? Cheers.
true
1,148,165
1.2
0
0
0
I'm having a look into the google data api protocol which seems to solve the problem.
0
802
0
0
2009-07-18T18:08:00.000
java,python,google-app-engine,gdata-api
Possible to access gdata api when using Java App Engine?
1
1
3
1,149,886
0
0
0
I'm writing a web-app that uses several 3rd party web APIs, and I want to keep track of the low level requests and responses for ad-hoc analysis. So I'm looking for a recipe that will get Python's urllib2 to log all bytes transferred via HTTP. Maybe a sub-classed Handler?
false
1,170,744
0.197375
0
0
2
This looks pretty tricky to do. There are no hooks in urllib2, urllib, or httplib (which this builds on) for intercepting either input or output data. The only thing that occurs to me, other than switching tactics to use an external tool (of which there are many, and most people use such things), would be to write a subclass of socket.socket in your own new module (say, "capture_socket") and then insert that into httplib using "import capture_socket; import httplib; httplib.socket = capture_socket". You'd have to copy all the necessary references (anything of the form "socket.foo" that is used in httplib) into your own module, but then you could override things like recv() and sendall() in your subclass to do what you like with the data. Complications would likely arise if you were using SSL, and I'm not sure whether this would be sufficient or if you'd also have to make your own socket._fileobject as well. It appears doable though, and perusing the source in httplib.py and socket.py in the standard library would tell you more.
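The interception idea from the answer, reduced to its core: a wrapper that records everything sent and received through a socket-like object. This is a sketch with made-up class names, exercised against a fake socket rather than a live connection:

```python
class RecordingSocket:
    """Wrap a socket-like object and log all traffic through it."""
    def __init__(self, wrapped, log):
        self._wrapped = wrapped
        self._log = log

    def sendall(self, data):
        self._log.append(("out", bytes(data)))
        return self._wrapped.sendall(data)

    def recv(self, bufsize):
        data = self._wrapped.recv(bufsize)
        self._log.append(("in", data))
        return data

    def __getattr__(self, name):
        # Delegate everything else (close, settimeout, ...) untouched.
        return getattr(self._wrapped, name)

class FakeSocket:
    """Stand-in for a real connection, for demonstration only."""
    def sendall(self, data):
        pass
    def recv(self, bufsize):
        return b"HTTP/1.1 200 OK\r\n"

log = []
s = RecordingSocket(FakeSocket(), log)
s.sendall(b"GET / HTTP/1.1\r\n")
s.recv(4096)
print(log)
```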
0
3,899
0
19
2009-07-23T09:56:00.000
python,http,logging,urllib2
How do I get urllib2 to log ALL transferred bytes
1
1
2
1,844,608
0
0
0
This only needs to work on a single subnet and is not for malicious use. I have a load testing tool written in Python that basically blasts HTTP requests at a URL. I need to run performance tests against an IP-based load balancer, so the requests must come from a range of IP's. Most commercial performance tools provide this functionality, but I want to build it into my own. The tool uses Python's urllib2 for transport. Is it possible to send HTTP requests with spoofed IP addresses for the packets making up the request?
false
1,180,878
1
0
0
7
Quick note, as I just learned this yesterday: I think you've implied you know this already, but any responses to an HTTP request go to the IP address that shows up in the header. So if you are wanting to see those responses, you need to have control of the router and have it set up so that the spoofed IPs are all routed back to the IP you are using to view the responses.
0
37,293
0
27
2009-07-25T01:11:00.000
python,http,networking,sockets,urllib2
Spoofing the origination IP address of an HTTP request
1
2
5
1,180,897
0
0
0
This only needs to work on a single subnet and is not for malicious use. I have a load testing tool written in Python that basically blasts HTTP requests at a URL. I need to run performance tests against an IP-based load balancer, so the requests must come from a range of IP's. Most commercial performance tools provide this functionality, but I want to build it into my own. The tool uses Python's urllib2 for transport. Is it possible to send HTTP requests with spoofed IP addresses for the packets making up the request?
false
1,180,878
0.039979
0
0
1
I suggest seeing if you can configure your load balancer to make its decision based on the X-Forwarded-For header, rather than the source IP of the packet containing the HTTP request. I know that most of the significant commercial load balancers have this capability. If you can't do that, then I suggest that you probably need to configure a linux box with a whole heap of secondary IPs - don't bother configuring static routes on the LB, just make your linux box the default gateway of the LB device.
0
37,293
0
27
2009-07-25T01:11:00.000
python,http,networking,sockets,urllib2
Spoofing the origination IP address of an HTTP request
1
2
5
1,186,102
0
0
0
I wonder what is the best way to handle parallel SSH connections in python. I need to open several SSH connections to keep in background and to feed commands in interactive or timed batch way. Is this possible to do it with the paramiko libraries? It would be nice not to spawn a different SSH process for each connection. Thanks.
false
1,185,855
0.033321
1
0
1
You can simply use subprocess.Popen for that purpose, without any problems. However, you might want to simply install cronjobs on the remote machines. :-)
0
7,048
1
3
2009-07-26T23:19:00.000
python,ssh,parallel-processing
Parallel SSH in Python
1
4
6
1,185,871
0
0
0
I wonder what is the best way to handle parallel SSH connections in python. I need to open several SSH connections to keep in background and to feed commands in interactive or timed batch way. Is this possible to do it with the paramiko libraries? It would be nice not to spawn a different SSH process for each connection. Thanks.
false
1,185,855
0.033321
1
0
1
Reading the paramiko API docs, it looks like it is possible to open one ssh connection, and multiplex as many ssh tunnels on top of that as are wished. Common ssh clients (openssh) often do things like this automatically behind the scene if there is already a connection open.
0
7,048
1
3
2009-07-26T23:19:00.000
python,ssh,parallel-processing
Parallel SSH in Python
1
4
6
1,185,880
0
0
0
I wonder what is the best way to handle parallel SSH connections in python. I need to open several SSH connections to keep in background and to feed commands in interactive or timed batch way. Is this possible to do it with the paramiko libraries? It would be nice not to spawn a different SSH process for each connection. Thanks.
false
1,185,855
0.099668
1
0
3
Yes, you can do this with paramiko. If you're connecting to one server, you can run multiple channels through a single connection. If you're connecting to multiple servers, you can start multiple connections in separate threads. No need to manage multiple processes, although you could substitute the multiprocessing module for the threading module and have the same effect. I haven't looked into twisted conch in a while, but it looks like it's getting updates again, which is nice. I couldn't give you a good feature comparison between the two, but I find paramiko is easier to get going. It takes a little more effort to get into twisted, but it could be well worth it if you're doing other network programming.
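The threads-per-connection pattern looks like this sketch, with a stand-in for the per-host paramiko work so the shape is visible (the hostnames and the fake command output are made up; a real version would open a paramiko SSHClient inside run_on_host):

```python
from concurrent.futures import ThreadPoolExecutor

def run_on_host(host):
    # Placeholder for: connect with paramiko, exec_command, collect output.
    return (host, "uptime output from %s" % host)

hosts = ["web1", "web2", "db1"]
with ThreadPoolExecutor(max_workers=3) as pool:
    results = dict(pool.map(run_on_host, hosts))

print(sorted(results))  # → ['db1', 'web1', 'web2']
```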
0
7,048
1
3
2009-07-26T23:19:00.000
python,ssh,parallel-processing
Parallel SSH in Python
1
4
6
1,188,586
0
0
0
I wonder what is the best way to handle parallel SSH connections in python. I need to open several SSH connections to keep in background and to feed commands in interactive or timed batch way. Is this possible to do it with the paramiko libraries? It would be nice not to spawn a different SSH process for each connection. Thanks.
false
1,185,855
-0.033321
1
0
-1
This might not be relevant to your question. But there are tools like pssh, clusterssh etc. that can parallely spawn connections. You can couple Expect with pssh to control them too.
0
7,048
1
3
2009-07-26T23:19:00.000
python,ssh,parallel-processing
Parallel SSH in Python
1
4
6
1,516,547
0
0
0
I've only used XML RPC and I haven't really delved into SOAP but I'm trying to find a good comprehensive guide, with real world examples or even a walkthrough of some minimal REST application. I'm most comfortable with Python/PHP.
false
1,186,839
0.066568
1
0
1
I like the examples in the Richardson & Ruby book, "RESTful Web Services" from O'Reilly.
0
499
0
0
2009-07-27T07:11:00.000
php,python,xml,rest,soap
Real world guide on using and/or setting up REST web services?
1
1
3
1,186,876
0
0
0
I have a web service that accepts passed in params using http POST but in a specific order, eg (name,password,data). I have tried to use httplib but all the Python http POST libraries seem to take a dictionary, which is an unordered data structure. Any thoughts on how to http POST params in order for Python? Thanks!
true
1,188,737
1.2
0
0
2
Why would you need a specific order in the POST parameters in the first place? As far as I know there are no requirements that POST parameter order is preserved by web servers. Every language I have used, has used a dictionary type object to hold these parameters as they are inherently key/value pairs.
1
345
0
2
2009-07-27T15:11:00.000
python,http
Python POST ordered params
1
1
1
1,188,759
0
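That said, if the server really does care about parameter order, the standard urlencode function accepts a sequence of (key, value) pairs and preserves their order in the encoded body. A minimal sketch with placeholder values (Python 3 spelling; in Python 2 the function lives in the urllib module):

```python
from urllib.parse import urlencode  # urllib.urlencode in Python 2

# A list of (key, value) tuples keeps exactly the order you wrote,
# unlike a plain dict on older Python versions.
params = [('name', 'alice'), ('password', 'secret'), ('data', 'payload')]
body = urlencode(params)
print(body)  # name=alice&password=secret&data=payload
```

The encoded string can then be sent as the POST body, e.g. via urllib.request.Request(url, data=body.encode()).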
0
0
I need to port some code that relies heavily on lxml from a CPython application to IronPython. lxml is very Pythonic and I would like to keep using it under IronPython, but it depends on libxslt and libxml2, which are C extensions. Does anyone know of a workaround to allow lxml under IronPython or a version of lxml that doesn't have those C-extension dependencies?
false
1,200,726
0.099668
1
0
1
Something which you might have already considered: An alternative is to first port the lxml library to IPy and then your code (depending on the code size). You might have to write some C# wrappers for the native C calls to the C extensions -- I'm not sure what issues, if any, are involved in this with regards to IPy. Or if the code which you are porting is small, as compared to lxml, then maybe you can just remove the lxml dependency and use the .NET XML libraries.
0
2,349
0
6
2009-07-29T14:36:00.000
.net,xml,ironpython,python,lxml
How to get lxml working under IronPython?
1
1
2
1,211,395
0
0
0
I'm trying to raise an exception on the server side of a SimpleXMLRPCServer; however, all attempts get a "Fault 1" exception on the client side. RPC_Server.AbortTest() File "C:\Python25\lib\xmlrpclib.py", line 1147, in __call__ return self.__send(self.__name, args) File "C:\Python25\lib\xmlrpclib.py", line 1437, in __request verbose=self.__verbose File "C:\Python25\lib\xmlrpclib.py", line 1201, in request return self._parse_response(h.getfile(), sock) File "C:\Python25\lib\xmlrpclib.py", line 1340, in _parse_response return u.close() File "C:\Python25\lib\xmlrpclib.py", line 787, in close raise Fault(**self._stack[0]) xmlrpclib.Fault: <Fault 1: '...:Test Aborted by a RPC request'>
false
1,201,507
0.099668
1
0
1
Yes, this is what happens when you raise an exception on the server side. Are you expecting the SimpleXMLRPCServer to return the exception to the client? You can only use objects that can be marshalled through XML. This includes: booleans (the True and False constants); integers (pass in directly); floating-point numbers (pass in directly); strings (pass in directly); arrays (any Python sequence type containing conformable elements; arrays are returned as lists); structures (a Python dictionary; keys must be strings, values may be any conformable type; objects of user-defined classes can be passed in, but only their __dict__ attribute is transmitted); dates (in seconds since the epoch - pass in an instance of the DateTime class - or a datetime.datetime instance); binary data (pass in an instance of the Binary wrapper class).
0
746
0
0
2009-07-29T16:34:00.000
python,exception,simplexmlrpcserver
Sending an exception on the SimpleXMLRPCServer
1
1
2
1,202,742
0
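On the client side, a server-raised exception arrives as an xmlrpclib.Fault whose faultCode and faultString carry the marshalled error text, so the usual pattern is to catch Fault around the proxy call. A sketch (Python 3 module name shown; fake_remote_call is a stand-in for the real RPC_Server.AbortTest() proxy method):

```python
from xmlrpc.client import Fault  # xmlrpclib.Fault in Python 2

def call_with_fault_handling(call):
    """Invoke an XML-RPC proxy method, turning server faults into a string."""
    try:
        return call()
    except Fault as fault:
        # faultCode / faultString carry the server-side exception details
        return 'server fault %s: %s' % (fault.faultCode, fault.faultString)

def fake_remote_call():
    # Stand-in for a ServerProxy method whose server handler raised.
    raise Fault(1, 'Test Aborted by a RPC request')

result = call_with_fault_handling(fake_remote_call)
print(result)  # server fault 1: Test Aborted by a RPC request
```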
0
0
How can I get the current Windows browser proxy settings, as well as set them to a value? I know I can do this by looking in the registry at Software\Microsoft\Windows\CurrentVersion\Internet Settings\ProxyServer, but I'm looking, if possible, to do this without messing with the registry directly.
true
1,201,771
1.2
0
0
3
The urllib module automatically retrieves proxy settings from the registry when no proxies are specified as a parameter or in the environment variables. In a Windows environment, if no proxy environment variables are set, proxy settings are obtained from the registry's Internet Settings section. See the documentation of the urllib module referenced in the earlier post. To set the proxy I assume you'll need to use the pywin32 module and modify the registry directly.
0
12,290
0
2
2009-07-29T17:14:00.000
python,windows,proxy,registry
How to set proxy in Windows with Python?
1
1
3
1,205,881
0
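The read side of this can be seen without touching the registry yourself: urllib's getproxies() returns whatever proxy configuration it finds. Python 3 spelling shown as a sketch; in Python 2 the same function is urllib.getproxies().

```python
import urllib.request  # in Python 2: import urllib

# On Windows, when no proxy environment variables are set, this consults
# the registry's Internet Settings section mentioned above; elsewhere it
# falls back to environment variables such as http_proxy.
proxies = urllib.request.getproxies()
print(proxies)  # e.g. {'http': 'http://proxy:8080'}, or {} when none is set
```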
1
0
I am trying to download an mp3 file to the user's machine without his/her consent while they are listening to the song. So, next time they visit that web page they would not have to download the same mp3, but play back from the local file. This will save some bandwidth for me and for them. It's something Pandora used to do but I really don't know how. Any ideas?
false
1,211,363
0.132549
0
0
2
Don't do this. Most files are cached anyway. But if you really want to add this (because users asked for it), make it optional (default off).
0
170
0
0
2009-07-31T08:47:00.000
python,django,web-applications
downloading files to users machine?
1
2
3
1,211,434
0
1
0
I am trying to download an mp3 file to the user's machine without his/her consent while they are listening to the song. So, next time they visit that web page they would not have to download the same mp3, but play back from the local file. This will save some bandwidth for me and for them. It's something Pandora used to do but I really don't know how. Any ideas?
true
1,211,363
1.2
0
0
4
You can't forcefully download files to a user without his consent. If that was possible you can only imagine what severe security flaw that would be. You can do one of two things: count on the browser to cache the media file serve the media via some 3rd party plugin (Flash, for example)
0
170
0
0
2009-07-31T08:47:00.000
python,django,web-applications
downloading files to users machine?
1
2
3
1,211,370
0
0
0
Is there a way to limit the amount of data downloaded by Python's urllib2 module? Sometimes I encounter broken sites with a sort of /dev/random as a page, and it turns out that they use up all the memory on a server.
false
1,224,910
0.53705
0
0
3
urllib2.urlopen returns a file-like object, and you can (at least in theory) .read(N) from such an object to limit the amount of data returned to N bytes at most. This approach is not entirely fool-proof, because an actively-hostile site may go to quite some lengths to fool a reasonably trusting receiver, like urllib2's default opener; in this case, you'll need to implement and install your own opener that knows how to guard itself against such attacks (for example, getting no more than a MB at a time from the open socket, etc.).
0
869
0
3
2009-08-03T22:20:00.000
python,urllib2
limit downloaded page size
1
1
1
1,224,950
0
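A sketch of the .read(N) idea: cap the number of bytes taken from the response object, reading in chunks so no single call can balloon. io.BytesIO stands in here for the object returned by urllib2.urlopen.

```python
import io

def read_capped(fileobj, limit, chunk_size=8192):
    """Read at most `limit` bytes from a file-like object (such as the
    response returned by urllib2.urlopen), in bounded chunks."""
    pieces = []
    remaining = limit
    while remaining > 0:
        chunk = fileobj.read(min(chunk_size, remaining))
        if not chunk:  # the server finished before the cap was hit
            break
        pieces.append(chunk)
        remaining -= len(chunk)
    return b''.join(pieces)

# A 100 KB "response" truncated to 1 KB:
data = read_capped(io.BytesIO(b'x' * 100000), limit=1024)
print(len(data))  # 1024
```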
0
0
When I try to automatically download a file from some webpage using Python, I get a Webpage Dialog window (I use IE). The window has two buttons, 'Continue' and 'Cancel'. I cannot figure out how to click on the Continue button. The problem is that I don't know how to control the Webpage Dialog with Python. I tried to use winGuiAuto to find the controls of the window, but it fails to recognize any Button type controls... Any ideas? Sasha A clarification of my question: My purpose is to download stock data from a certain web site. I need to perform this for many stocks so I need Python to do it for me in a repetitive way. This specific site exports the data by letting me download it in an Excel file by clicking a link. However, after clicking the link I get a Webpage Dialog box asking me if I am sure that I want to download this file. This Webpage Dialog is my problem - it is not an HTML page and it is not a regular Windows dialog box. It is something else and I cannot figure out how to control it with Python. It has two buttons and I need to click on one of them (i.e. Continue). It seems like it is a special kind of window implemented in IE. It is distinguished by its title, which looks like this: Webpage Dialog -- Download blalblabla. If I click Continue manually it opens a regular Windows dialog box (Open, Save, Cancel) which I know how to handle with the winGuiAuto library. I tried to use this library for the Webpage Dialog window with no luck. I also tried to recognize the buttons with the AutoIt Info tool - no luck either. In fact, maybe these are not buttons, but actually links; however, I cannot see the links and there is no source code visible... What I need is someone to tell me what this Webpage Dialog box is and how to control it with Python. That was my question.
false
1,225,686
0
0
0
0
You can't, and you don't want to. When you ask a question, try explaining what you are trying to achieve, and not just the task immediately before you. You are likely barking down the wrong path. There is some other way of doing what you are trying to do.
0
2,761
0
0
2009-08-04T04:03:00.000
python,dialog,webpage
How to control Webpage dialog with python
1
1
3
1,226,061
0
0
0
I have a pretty intensive chat socket server written in Twisted Python, I start it using internet.TCPServer with a factory and that factory references to a protocol object that handles all communications with the client. How should I make sure a protocol instance completely destroys itself once a client has disconnected? I've got a function named connectionLost that is fired up once a client disconnects and I try stopping all activity right there but I suspect some reactor stuff (like twisted.words instances) keep running for obsolete protocol instances. What would be the best approach to handle this? Thanks!
true
1,234,292
1.2
0
0
0
OK - to sort out this issue I have set a __del__ method in the protocol class, and I am now logging protocol instances that have not been garbage collected within 1 minute from the time the client disconnected. If anybody has a better solution I'll still be glad to hear about it, but so far I have already fixed a few potential memory leaks using this log. Thanks!
0
721
1
4
2009-08-05T16:23:00.000
python,sockets,twisted,twisted.words
In Twisted Python - Make sure a protocol instance would be completely deallocated
1
1
1
1,236,382
0
1
0
I'm trying to find a Python library that would take an audio file (e.g. .ogg, .wav) and convert it into mp3 for playback on a webpage. Also, any thoughts on setting its quality for playback would be great. Thank you.
false
1,246,131
0.066568
0
0
2
You may use ctypes module to call functions directly from dynamic libraries. It doesn't require you to install external Python libs and it has better performance than command line tools, but it's usually harder to implement (plus of course you need to provide external library).
0
51,200
0
21
2009-08-07T17:51:00.000
python,audio,compression
Python library for converting files to MP3 and setting their quality
1
2
6
1,334,217
0
1
0
I'm trying to find a Python library that would take an audio file (e.g. .ogg, .wav) and convert it into mp3 for playback on a webpage. Also, any thoughts on setting its quality for playback would be great. Thank you.
false
1,246,131
0.033321
0
0
1
Another option to avoid installing Python modules for this simple task would be to just exec "lame" or other command line encoder from the Python script (with the popen module.)
0
51,200
0
21
2009-08-07T17:51:00.000
python,audio,compression
Python library for converting files to MP3 and setting their quality
1
2
6
1,246,816
0
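The shell-out approach above can be sketched with the subprocess module (the modern replacement for popen). With lame, -V sets VBR quality (0 = best/largest through 9 = smallest); the file names here are placeholders, and this assumes the lame binary is installed and on PATH.

```python
import shutil
import subprocess

def build_lame_command(infile, outfile, vbr_quality=2):
    """Argument list for the lame encoder; -V controls VBR quality."""
    return ['lame', '-V', str(vbr_quality), infile, outfile]

def encode_to_mp3(infile, outfile, vbr_quality=2):
    """Encode infile to mp3 at outfile; raises if lame is missing."""
    if shutil.which('lame') is None:  # hedge: encoder may not be installed
        raise RuntimeError('lame encoder not found on PATH')
    subprocess.check_call(build_lame_command(infile, outfile, vbr_quality))

print(build_lame_command('song.wav', 'song.mp3', 4))
```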
0
0
I am considering programming the network related features of my application in Python instead of the C/C++ API. The intended use of networking is to pass text messages between two instances of my application, similar to a game passing player positions as often as possible over the network. Although the python socket modules seems sufficient and mature, I want to check if there are limitations of the python module which can be a problem at a later stage of the development. What do you think of the python socket module : Is it reliable and fast enough for production quality software ? Are there any known limitations which can be a problem if my app. needs more complex networking other than regular client-server messaging ? Thanks in advance, Paul
false
1,253,905
0.148885
0
0
3
Python is a mature language that can do almost anything that you can do in C/C++ (even direct memory access if you really want to hurt yourself). You'll find that you can write beautiful code in it in a very short time, that this code is readable from the start and that it will stay readable (you will still know what it does even after returning one year later). The drawback of Python is that your code will be somewhat slow. "Somewhat" as in "might be too slow for certain cases". So the usual approach is to write as much as possible in Python because it will make your app maintainable. Eventually, you might run into speed issues. That would be the time to consider to rewrite a part of your app in C. The main advantages of this approach are: You already have a running application. Translating the code from Python to C is much more simple than write it from scratch. You already have a running application. After the translation of a small part of Python to C, you just have to test that small part and you can use the rest of the app (that didn't change) to do it. You don't pay a price upfront. If Python is fast enough for you, you'll never have to do the optional optimization. Python is much, much more powerful than C. Every line of Python can do the same as 100 or even 1000 lines of C.
0
1,015
0
4
2009-08-10T09:26:00.000
python,network-programming
Suggestion Needed - Networking in Python - A good idea?
1
2
4
1,254,288
0
0
0
I am considering programming the network related features of my application in Python instead of the C/C++ API. The intended use of networking is to pass text messages between two instances of my application, similar to a game passing player positions as often as possible over the network. Although the python socket modules seems sufficient and mature, I want to check if there are limitations of the python module which can be a problem at a later stage of the development. What do you think of the python socket module : Is it reliable and fast enough for production quality software ? Are there any known limitations which can be a problem if my app. needs more complex networking other than regular client-server messaging ? Thanks in advance, Paul
false
1,253,905
0.049958
0
0
1
To answer #1, I know that among other things, EVE Online (the MMO) uses a variant of Python for their server code.
0
1,015
0
4
2009-08-10T09:26:00.000
python,network-programming
Suggestion Needed - Networking in Python - A good idea?
1
2
4
1,253,945
0
0
0
What is the best way to map a network share to a windows drive using Python? This share also requires a username and password.
false
1,271,317
0
0
0
0
I had trouble getting this line to work: win32wnet.WNetAddConnection2(win32netcon.RESOURCETYPE_DISK, drive, networkPath, None, user, password) But was successful with this: win32wnet.WNetAddConnection2(1, 'Z:', r'\\UNCpath\share', None, 'login', 'password')
0
68,179
0
33
2009-08-13T11:09:00.000
python,windows,mapping,drive
What is the best way to map windows drives using Python?
1
1
7
20,201,066
0
0
0
What would be the best method to restrict access to my XMLRPC server by IP address? I see the class CGIScript in web/twcgi.py has a render method that is accessing the request... but I am not sure how to gain access to this request in my server. I saw an example where someone patched twcgi.py to set environment variables and then in the server access the environment variables... but I figure there has to be a better solution. Thanks.
false
1,273,297
0
1
0
0
I'd use a firewall on windows, or iptables on linux.
0
3,265
0
2
2009-08-13T17:03:00.000
python,twisted
Python Twisted: restricting access by IP address
1
1
3
1,273,455
0
1
0
I'm trying to scrape a page on YouTube with Python which has a lot of Ajax in it; I have to call the JavaScript each time to get the info. But I'm not really sure how to go about it. I'm using the urllib2 module to open URLs. Any help would be appreciated.
false
1,281,075
0.07983
0
0
2
Here is how I would do it: Install Firebug on Firefox, then turn the NET on in firebug and click on the desired link on YouTube. Now see what happens and what pages are requested. Find the one that are responsible for the AJAX part of page. Now you can use urllib or Mechanize to fetch the link. If you CAN pull the same content this way, then you have what you are looking for, then just parse the content. If you CAN'T pull the content this way, then that would suggest that the requested page might be looking at user login credentials, sessions info or other header fields such as HTTP_REFERER ... etc. Then you might want to look at something more extensive like the scrapy ... etc. I would suggest that you always follow the simple path first. Good luck and happy "responsibly" scraping! :)
0
4,364
0
2
2009-08-15T03:34:00.000
python,ajax,screen-scraping
Scraping Ajax - Using python
1
1
5
3,134,226
0
0
0
Suppose I need to perform a set of procedures on a particular website: say, fill some forms, click the submit button, send the data back to the server, receive the response, again do something based on the response and send the data back to the server of the website. I know there is a webbrowser module in Python, but I want to do this without invoking any web browser. It has to be a pure script. Is there a module available in Python which can help me do that? Thanks
false
1,292,817
1
0
0
19
selenium will do exactly what you want and it handles javascript
0
108,234
0
29
2009-08-18T09:23:00.000
python,browser-automation
How to automate browsing using python?
1
3
15
3,486,971
0
0
0
Suppose I need to perform a set of procedures on a particular website: say, fill some forms, click the submit button, send the data back to the server, receive the response, again do something based on the response and send the data back to the server of the website. I know there is a webbrowser module in Python, but I want to do this without invoking any web browser. It has to be a pure script. Is there a module available in Python which can help me do that? Thanks
false
1,292,817
0.013333
0
0
1
The best solution that I have found (and am currently implementing) is: scripts in Python using the Selenium WebDriver, plus the PhantomJS headless browser (if Firefox is used you will have a GUI and it will be slower).
0
108,234
0
29
2009-08-18T09:23:00.000
python,browser-automation
How to automate browsing using python?
1
3
15
20,679,640
0
0
0
Suppose I need to perform a set of procedures on a particular website: say, fill some forms, click the submit button, send the data back to the server, receive the response, again do something based on the response and send the data back to the server of the website. I know there is a webbrowser module in Python, but I want to do this without invoking any web browser. It has to be a pure script. Is there a module available in Python which can help me do that? Thanks
false
1,292,817
0
0
0
0
httplib2 + BeautifulSoup. Use Firefox + Firebug + HttpReplay to see what the JavaScript passes to and from the browser for the website. Using httplib2 you can essentially do the same via POST and GET.
0
108,234
0
29
2009-08-18T09:23:00.000
python,browser-automation
How to automate browsing using python?
1
3
15
3,988,708
0
0
0
I'm developing an FTP client in Python ftplib. How do I add proxies support to it (most FTP apps I have seen seem to have it)? I'm especially thinking about SOCKS proxies, but also other types... FTP, HTTP (is it even possible to use HTTP proxies with FTP program?) Any ideas how to do it?
false
1,293,518
0.066568
0
0
2
The standard module ftplib doesn't support proxies. It seems the only solution is to write your own customized version of ftplib.
0
18,202
0
9
2009-08-18T12:28:00.000
python,proxy,ftp,ftplib
Proxies in Python FTP application
1
1
6
1,293,579
0
0
0
Long story short, I created a new gmail account, and linked several other accounts to it (each with 1000s of messages), which I am importing. All imported messages arrive as unread, but I need them to appear as read. I have a little experience with python, but I've only used mail and imaplib modules for sending mail, not processing accounts. Is there a way to bulk process all items in an inbox, and simply mark messages older than a specified date as read?
false
1,296,446
0.049958
1
0
1
Just go to the Gmail web interface, do an advanced search by date, then select all and mark as read.
0
5,833
0
5
2009-08-18T20:52:00.000
python,email,gmail,imap,pop3
Parse Gmail with Python and mark all older than date as "read"
1
2
4
1,296,476
0
0
0
Long story short, I created a new gmail account, and linked several other accounts to it (each with 1000s of messages), which I am importing. All imported messages arrive as unread, but I need them to appear as read. I have a little experience with python, but I've only used mail and imaplib modules for sending mail, not processing accounts. Is there a way to bulk process all items in an inbox, and simply mark messages older than a specified date as read?
false
1,296,446
0.049958
1
0
1
Rather than try to parse our HTML why not just use the IMAP interface? Hook it up to a standard mail client and then just sort by date and mark whichever ones you want as read.
0
5,833
0
5
2009-08-18T20:52:00.000
python,email,gmail,imap,pop3
Parse Gmail with Python and mark all older than date as "read"
1
2
4
1,296,465
0
0
0
I have setup the logging module for my new python script. I have two handlers, one sending stuff to a file, and one for email alerts. The SMTPHandler is setup to mail anything at the ERROR level or above. Everything works great, unless the SMTP connection fails. If the SMTP server does not respond or authentication fails (it requires SMTP auth), then the whole script dies. I am fairly new to python, so I am trying to figure out how to capture the exception that the SMTPHandler is raising so that any problems sending the log message via email won't bring down my entire script. Since I am also writing errors to a log file, if the SMTP alert fails, I just want to keep going, not halt anything. If I need a "try:" statement, would it go around the logging.handlers.SMTPHandler setup, or around the individual calls to my_logger.error()?
false
1,304,593
0
1
0
0
You probably need to do both. To figure this out, I suggest installing a local mail server and using that. This way, you can shut it down while your script runs and note down the error message. To keep the code maintainable, you should extend SMTPHandler in such a way that you can handle the exceptions in a single place (instead of wrapping every logger call with try-except).
0
1,574
0
1
2009-08-20T07:45:00.000
python,logging,handler
Python logging SMTPHandler - handling offline SMTP server
1
1
2
1,304,622
0
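One way to centralize this, along the lines the answer suggests, is a small SMTPHandler subclass whose emit() swallows delivery failures, so neither the handler setup nor each my_logger.error() call needs its own try block. A sketch; the mail host ('localhost', 1) is a deliberately dead placeholder standing in for an unreachable SMTP server.

```python
import logging
import logging.handlers

class SafeSMTPHandler(logging.handlers.SMTPHandler):
    """SMTPHandler variant that never lets a failed send escape."""
    def emit(self, record):
        try:
            logging.handlers.SMTPHandler.emit(self, record)
        except Exception:
            pass  # the file handler still records the message

logging.raiseExceptions = False  # also silence handleError() tracebacks

logger = logging.getLogger('demo')
logger.addHandler(SafeSMTPHandler(('localhost', 1), 'from@example.com',
                                  ['to@example.com'], 'script alert'))
logger.error('this alert fails to send, but the script keeps going')
print('still running')
```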
0
0
I need to make a cURL request to a https URL, but I have to go through a proxy as well. Is there some problem with doing this? I have been having so much trouble doing this with curl and php, that I tried doing it with urllib2 in Python, only to find that urllib2 cannot POST to https when going through a proxy. I haven't been able to find any documentation to this effect with cURL, but I was wondering if anyone knew if this was an issue?
false
1,308,760
0
0
0
0
No problem, as long as the proxy server supports the CONNECT method.
0
9,009
0
2
2009-08-20T20:56:00.000
php,python,curl,https,urllib2
cURL: https through a proxy
1
1
2
1,308,768
0
0
0
I need to simulate multiple embedded server devices that are typically used for motor control. In real life, there can be multiple servers on the network and our desktop software acts as a client to all the motor servers simultaneously. We have a half-dozen of these motor control servers on hand for basic testing, but it's getting expensive to test bigger systems with the real hardware. I'd like to build a simulator that can look like many servers on the network to test our client software. How can I build a simulator that will look like it has many IP addresses on the same port without physically having many NIC's. For example, the client software will try to connect to servers 192.168.10.1 thru 192.168.10.50 on port 1111. The simulator will accept all of those connections and run simulations as if it were moving physical motors and send back simulated data on those socket connections. Can I use a router to map all of those addresses to a single testing server, or ideally, is there a way to use localhost to 'spoof' those IP addresses? The client software is written in .Net, but Python ideas would be welcomed as well.
false
1,308,879
0.132549
0
0
2
Normally you just listen on 0.0.0.0. This is an alias for all IP addresses.
0
9,577
0
6
2009-08-20T21:17:00.000
.net,python,networking,sockets
Simulate multiple IP addresses for testing
1
2
3
1,308,897
0
0
0
I need to simulate multiple embedded server devices that are typically used for motor control. In real life, there can be multiple servers on the network and our desktop software acts as a client to all the motor servers simultaneously. We have a half-dozen of these motor control servers on hand for basic testing, but it's getting expensive to test bigger systems with the real hardware. I'd like to build a simulator that can look like many servers on the network to test our client software. How can I build a simulator that will look like it has many IP addresses on the same port without physically having many NIC's. For example, the client software will try to connect to servers 192.168.10.1 thru 192.168.10.50 on port 1111. The simulator will accept all of those connections and run simulations as if it were moving physical motors and send back simulated data on those socket connections. Can I use a router to map all of those addresses to a single testing server, or ideally, is there a way to use localhost to 'spoof' those IP addresses? The client software is written in .Net, but Python ideas would be welcomed as well.
true
1,308,879
1.2
0
0
5
A. Consider using Bonjour (zeroconf) for service discovery. B. You can assign one or more IP addresses to the same NIC: On XP, Start -> Control Panel -> Network Connections and select Properties on your NIC (usually 'Local Area Connection'). Scroll down to Internet Protocol (TCP/IP), select it and click on [Properties]. If you are using DHCP, you will need to get a static, base IP from your IT. Otherwise, click on [Advanced] and under 'IP Addresses' click [Add..]. Enter the IP information for the additional IP you want to add. Repeat for each additional IP address. C. Consider using VMWare, as you can configure multiple systems and virtual IPs within a single, logical network of "computers". -- sky
0
9,577
0
6
2009-08-20T21:17:00.000
.net,python,networking,sockets
Simulate multiple IP addresses for testing
1
2
3
1,309,096
0
0
0
I'm in the process of writing a Python script to act as "glue" between an application and some external devices. The script itself is quite straightforward and has three distinct processes: request data (from a socket connection, via UDP); receive the response (from a socket connection, via UDP); process the response and make the data available to a 3rd party application. However, this will be done repetitively, and for several (+/- 200 different) devices. So once it's reached device #200, it would start requesting data from device #001 again. My main concern here is not to bog down the processor whilst executing the script. UPDATE: I am using three threads to do the above, one thread for each of the above processes. The request/response is asynchronous as each response contains everything I need to be able to process it (including the sender's details). Is there any way to allow the script to run in the background and consume as little system resources as possible while doing its thing? This will be running on a Windows 2003 machine. Any advice would be appreciated.
false
1,352,760
0.462117
0
0
5
If you are using blocking I/O to your devices, then the script won't consume any processor while waiting for the data. How much processor you use depends on what sorts of computation you are doing with the data.
1
738
1
0
2009-08-30T00:58:00.000
python,performance,process,background
Python script performance as a background process
1
1
2
1,352,777
0
0
0
How can I download files from a website using wildcards in Python? I have a site that I need to download files from periodically. The problem is the filenames change each time. A portion of the filename stays the same though. How can I use a wildcard to specify the unknown portion of the file in a URL?
false
1,359,090
1
0
0
7
If the filename changes, there must still be a link to the file somewhere (otherwise nobody would ever guess the filename). A typical approach is to get the HTML page that contains a link to the file, search through that looking for the link target, and then send a second request to get the actual file you're after. Web servers do not generally implement such a "wildcard" facility as you describe, so you must use other techniques.
1
1,403
0
1
2009-08-31T19:46:00.000
python
Wildcard Downloads with Python
1
1
2
1,359,101
0
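The two-step approach above - fetch the listing page, find the link whose stable portion matches, then request that file - can be sketched like this. fnmatch provides shell-style wildcards; the page snippet and pattern are made-up examples, not from the original question.

```python
import fnmatch
from html.parser import HTMLParser  # the HTMLParser module in Python 2

class LinkCollector(HTMLParser):
    """Gather every href target on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == 'a':
            for name, value in attrs:
                if name == 'href':
                    self.links.append(value)

def find_matching_links(html, pattern):
    """Return hrefs in html matching a shell-style wildcard pattern."""
    parser = LinkCollector()
    parser.feed(html)
    return [link for link in parser.links if fnmatch.fnmatch(link, pattern)]

page = '<a href="report-20090831.csv">today</a> <a href="about.html">about</a>'
print(find_matching_links(page, 'report-*.csv'))  # ['report-20090831.csv']
```

Each match would then be fetched with a second urllib request.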
0
0
I have a python script that uses threads and makes lots of HTTP requests. I think what's happening is that while an HTTP request (using urllib2) is reading, it's blocking and not responding to Ctrl+C to stop the program. Is there any way around this?
false
1,364,173
1
0
0
10
On Mac press Ctrl+\ to quit a python process attached to a terminal.
0
234,195
0
142
2009-09-01T19:17:00.000
python
Stopping python using ctrl+c
1
7
12
48,303,184
0
0
0
I have a python script that uses threads and makes lots of HTTP requests. I think what's happening is that while an HTTP request (using urllib2) is reading, it's blocking and not responding to Ctrl+C to stop the program. Is there any way around this?
false
1,364,173
1
0
0
24
This post is old but I recently ran into the same problem of Ctrl+C not terminating Python scripts on Linux. I used Ctrl+\ (SIGQUIT).
0
234,195
0
142
2009-09-01T19:17:00.000
python
Stopping python using ctrl+c
1
7
12
40,704,008
0
0
0
I have a python script that uses threads and makes lots of HTTP requests. I think what's happening is that while an HTTP request (using urllib2) is reading, it's blocking and not responding to Ctrl+C to stop the program. Is there any way around this?
false
1,364,173
0.049958
0
0
3
On a Mac, in Terminal: Show Inspector (right-click within the terminal window, or Shell > Show Inspector), click the Settings icon above "running processes", then choose from the list of options under "Signal Process Group" (Kill, Terminate, Interrupt, etc.).
0
234,195
0
142
2009-09-01T19:17:00.000
python
Stopping python using ctrl+c
1
7
12
42,792,308
0
0
0
I have a python script that uses threads and makes lots of HTTP requests. I think what's happening is that while an HTTP request (using urllib2) is reading, it's blocking and not responding to Ctrl+C to stop the program. Is there any way around this?
false
1,364,173
0.016665
0
0
1
Forcing the program to close using Alt+F4 (shuts down the current program), spamming the X button on CMD, or opening Task Manager (first Windows+R and then "taskmgr") and ending the task - those may help.
0
234,195
0
142
2009-09-01T19:17:00.000
python
Stopping python using ctrl+c
1
7
12
52,672,359
0
0
0
I have a python script that uses threads and makes lots of HTTP requests. I think what's happening is that while an HTTP request (using urllib2) is reading, it's blocking and not responding to Ctrl+C to stop the program. Is there any way around this?
false
1,364,173
1
0
0
57
If it is running in the Python shell use Ctrl + Z, otherwise locate the python process and kill it.
0
234,195
0
142
2009-09-01T19:17:00.000
python
Stopping python using ctrl+c
1
7
12
1,364,179
0
0
0
I have a python script that uses threads and makes lots of HTTP requests. I think what's happening is that while an HTTP request (using urllib2) is reading, it's blocking and not responding to Ctrl+C to stop the program. Is there any way around this?
false
1,364,173
0.016665
0
0
1
For the record, what killed the process on my Raspberry 3B+ (running Raspbian) was Ctrl+'. On my French AZERTY keyboard, the ' key is also the number 4 key.
0
234,195
0
142
2009-09-01T19:17:00.000
python
Stopping python using ctrl+c
1
7
12
54,316,333
0
0
0
I have a Python script that uses threads and makes lots of HTTP requests. I think what's happening is that while an HTTP request (using urllib2) is reading, it's blocking and not responding to Ctrl+C to stop the program. Is there any way around this?
true
1,364,173
1.2
0
0
206
On Windows, the only sure way is to use Ctrl+Break. Stops every Python script instantly! (Note that on some keyboards, "Break" is labeled as "Pause".)
0
234,195
0
142
2009-09-01T19:17:00.000
python
Stopping python using ctrl+c
1
7
12
1,364,199
0
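A minimal sketch of one common way to keep a threaded downloader responsive to Ctrl+C (assumes modern Python; the worker here is a stand-in for a blocking HTTP fetch):

```python
import threading
import time

def worker():
    # Stand-in for a blocking HTTP request.
    time.sleep(0.1)

def main():
    # Daemon threads die with the main thread, so a single
    # KeyboardInterrupt in the main loop is enough to exit.
    threads = [threading.Thread(target=worker, daemon=True) for _ in range(4)]
    for t in threads:
        t.start()
    try:
        while any(t.is_alive() for t in threads):
            # Short sleeps keep the main thread interruptible
            # between liveness checks.
            time.sleep(0.05)
    except KeyboardInterrupt:
        pass
    return "done"

print(main())
```

The key point is that the main thread, not a worker, does the waiting, so the interrupt signal always has somewhere to land.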
0
0
I'm trying to play with inter-process communication, and since I could not figure out how to use named pipes under Windows, I thought I'd use network sockets. Everything happens locally. The server is able to launch slaves in a separate process and listens on some port. The slaves do their work and submit the result to the master. How do I figure out which port is available? I assume I cannot listen on port 80 or 21? I'm using Python, if that cuts the choices down.
false
1,365,265
1
0
0
47
Bind the socket to port 0. A random free port from 1024 to 65535 will be selected. You may retrieve the selected port with getsockname() right after bind().
0
154,753
1
189
2009-09-02T00:07:00.000
python,sockets,ipc,port
On localhost, how do I pick a free port number?
1
2
5
1,365,281
0
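A sketch of the port-0 trick from that answer: bind to port 0, then ask the socket which port the OS actually chose.

```python
import socket

def find_free_port():
    # Binding to port 0 asks the OS to pick any free ephemeral port;
    # getsockname() then reveals which one was selected.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind(("127.0.0.1", 0))
        return s.getsockname()[1]

print(find_free_port())
```

Note there is an inherent race: another process could grab the port between closing this socket and reusing the number, so where possible keep the bound socket and pass it along instead of just the number.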
0
0
I'm trying to play with inter-process communication, and since I could not figure out how to use named pipes under Windows, I thought I'd use network sockets. Everything happens locally. The server is able to launch slaves in a separate process and listens on some port. The slaves do their work and submit the result to the master. How do I figure out which port is available? I assume I cannot listen on port 80 or 21? I'm using Python, if that cuts the choices down.
false
1,365,265
0.158649
0
0
4
You can listen on whatever port you want; generally, user applications should listen to ports 1024 and above (through 65535). The main thing if you have a variable number of listeners is to allocate a range to your app - say 20000-21000, and CATCH EXCEPTIONS. That is how you will know if a port is unusable (used by another process, in other words) on your computer. However, in your case, you shouldn't have a problem using a single hard-coded port for your listener, as long as you print an error message if the bind fails. Note also that most of your sockets (for the slaves) do not need to be explicitly bound to specific port numbers - only sockets that wait for incoming connections (like your master here) will need to be made a listener and bound to a port. If a port is not specified for a socket before it is used, the OS will assign a useable port to the socket. When the master wants to respond to a slave that sends it data, the address of the sender is accessible when the listener receives data. I presume you will be using UDP for this?
0
154,753
1
189
2009-09-02T00:07:00.000
python,sockets,ipc,port
On localhost, how do I pick a free port number?
1
2
5
1,365,283
0
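The "allocate a range and catch exceptions" advice can be sketched like this (the 20000-20010 range is just an example, as in the answer):

```python
import errno
import socket

def bind_in_range(start=20000, stop=20010):
    # Walk an app-allocated port range, catching "address already in
    # use" to skip busy ports -- any other bind error is re-raised.
    for port in range(start, stop):
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        try:
            s.bind(("127.0.0.1", port))
            return s
        except OSError as exc:
            s.close()
            if exc.errno != errno.EADDRINUSE:
                raise
    raise RuntimeError("no free port in %d-%d" % (start, stop))
```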
0
0
I'm trying to use Twisted in a sort of spidering program that manages multiple client connections. I'd like to maintain a pool of about 5 clients working at one time. The functionality of each client is to connect to a specified IRC server that it gets from a list, enter a specific channel, and then save the list of the users in that channel to a database. The problem I'm having is more architectural than anything. I'm fairly new to Twisted and I don't know what options are available for managing multiple clients. I'm assuming the easiest way is to simply have each ClientCreator instance die off once it's completed its work and have a central loop that can check to see if there's room to add a new client. I would think this isn't a particularly unusual problem so I'm hoping to glean some information from other peoples' experiences.
true
1,365,737
1.2
0
0
4
The best option is really just to do the obvious thing here. Don't have a loop, or a repeating timed call; just have handlers that do the right thing. Keep a central connection-management object around, and make event-handling methods feed it the information it needs to keep going. When it starts, make 5 outgoing connections. Keep track of how many are in progress, maintain a list with them in it. When a connection succeeds (in connectionMade) update the list to remember the connection's new state. When a connection completes (in connectionLost) tell the connection manager; its response should be to remove that connection and make a new connection somewhere else. In the middle, it should be fairly obvious how to fire off a request for the names you need and stuff them into a database (waiting for the database insert to complete before dropping your IRC connection, most likely, by waiting for the Deferred to come back from adbapi).
0
4,725
1
6
2009-09-02T03:45:00.000
python,twisted
Managing multiple Twisted client connections
1
1
3
1,408,498
0
1
0
I want to download a list of web pages. I know wget can do this. However, downloading every URL every five minutes and saving them to a folder seems beyond the capability of wget. Does anyone know of tools in Java, Python, or Perl that accomplish the task? Thanks in advance.
true
1,367,189
1.2
0
0
5
Write a bash script that uses wget and put it in your crontab to run every 5 minutes. (*/5 * * * *) If you need to keep a history of all these web pages, set a variable at the beginning of your script with the current unixtime and append it to the output filenames.
0
2,545
0
1
2009-09-02T11:39:00.000
python,download,webpage,wget,web-crawler
How to download a webpage in every five minutes?
1
1
2
1,367,209
0
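A Python variant of the same idea, meant to be launched from cron every five minutes (`*/5 * * * * python /path/to/fetch.py` -- the path is hypothetical). Each run gets its own directory named after the current unix time, preserving a history of snapshots; the fetch function is injectable so the logic can be exercised offline:

```python
import time
from pathlib import Path
from urllib.request import urlopen

def save_pages(urls, outdir, fetch=None):
    # One subdirectory per run, named after the current unix
    # timestamp, keeps every 5-minute snapshot distinct.
    fetch = fetch or (lambda url: urlopen(url).read())
    run_dir = Path(outdir) / str(int(time.time()))
    run_dir.mkdir(parents=True, exist_ok=True)
    for i, url in enumerate(urls):
        (run_dir / ("page_%d.html" % i)).write_bytes(fetch(url))
    return run_dir
```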
0
0
I would like to read a website asynchronously, which isn't possible with urllib as far as I know. Now I tried reading with plain sockets, but HTTP is giving me hell. I run into all kinds of funky encodings, for example transfer-encoding: chunked, have to parse all that stuff manually, and I feel like I'm coding C, not Python, at the moment. Isn't there a nicer way, like urllib but asynchronous? I don't really feel like re-implementing the whole HTTP specification when it's all been done before. Twisted isn't an option currently. Greetings, Tom
false
1,367,453
0.049958
0
0
1
The furthest I came was using a modified asynchttp, as codeape suggested. I have tried to use both asyncore/asynchat and asynchttp, with lots of pain. It took me far too long to try to fix all the bugs in it (there's a method handle_read, nearly copied from asyncore, only badly indented, which was giving me headaches with chunked encoding). Also, asyncore and asynchat are best not used, according to some hints I got on Google. I have settled on Twisted, but that's obviously out of the question for you. It might also depend on what you are trying to do with your application and why you want async requests, whether threads are an option or not, and whether you're doing GUI programming or something else -- so if you could shed some more information, that's always good. If not, I'd vote for the threaded version suggested above; it offers much more readability and maintainability.
0
2,352
0
7
2009-09-02T12:39:00.000
python,web-services,sockets
Reading a website with asyncore
1
1
4
1,372,289
0
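Since this answer leans toward the threaded version, here is a sketch with a thread pool (`concurrent.futures` is modern Python and did not exist when the question was asked; the fetch function is injectable so the example runs offline):

```python
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

def fetch_all(urls, fetch=None, workers=5):
    # Each URL is fetched in its own worker thread; pool.map()
    # returns results in the same order as the input URLs.
    fetch = fetch or (lambda url: urlopen(url).read())
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(fetch, urls))
```

This gives concurrent downloads without hand-parsing HTTP: urllib still handles chunked encoding, redirects, and the rest.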
0
0
I am trying to understand CRC32 in order to generate a unique URL for a web page. If we use CRC32, what is the maximum number of URLs that can be used so that we avoid duplicates? What would be the approximate string length needed to keep the checksum within 2^32? When I tried a UUID for a URL and converted the UUID bytes to base 64, I could reduce it to 22 chars. I wonder if I can reduce it further. Mostly I want to convert the URL (maximum 1024 chars) to a short ID.
true
1,401,218
1.2
1
0
7
There is no such number as the "maximum number of urls can be used so that we can avoid duplicates" for CRC32. The problem is that CRC32 can produce duplicates, and it's not a function of how many values you throw at it, it's a function of what those values look like. So you might have a collision on the second url, if you're unlucky. You should not base your algorithm on producing a unique hash, instead produce a unique value for each url manually.
0
3,162
0
5
2009-09-09T18:16:00.000
c#,python,url,crc32,short-url
CRC32 to make short URL for web
1
5
6
1,401,231
0
0
0
I am trying to understand CRC32 in order to generate a unique URL for a web page. If we use CRC32, what is the maximum number of URLs that can be used so that we avoid duplicates? What would be the approximate string length needed to keep the checksum within 2^32? When I tried a UUID for a URL and converted the UUID bytes to base 64, I could reduce it to 22 chars. I wonder if I can reduce it further. Mostly I want to convert the URL (maximum 1024 chars) to a short ID.
false
1,401,218
0.132549
1
0
4
If you're already storing the full URL in a database table, an integer ID is pretty short, and can be made shorter by converting it to base 16, 64, or 85. If you can use a UUID, you can use an integer, and you may as well, since it's shorter and I don't see what benefit a UUID would provide in your lookup table.
0
3,162
0
5
2009-09-09T18:16:00.000
c#,python,url,crc32,short-url
CRC32 to make short URL for web
1
5
6
1,401,237
0
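The "convert the row ID to a shorter base" idea from the answer can be sketched as follows; base 62 is used here (the answer mentions 16, 64, or 85) because it keeps the result URL-safe without punctuation:

```python
ALPHABET = "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ"

def encode62(n):
    # Repeated divmod converts the integer row ID to base 62,
    # most significant digit first.
    if n == 0:
        return ALPHABET[0]
    digits = []
    while n:
        n, rem = divmod(n, 62)
        digits.append(ALPHABET[rem])
    return "".join(reversed(digits))

def decode62(s):
    # Inverse mapping: accumulate digits back into an integer.
    n = 0
    for ch in s:
        n = n * 62 + ALPHABET.index(ch)
    return n
```

Because the short code is derived from a unique database ID rather than a hash of the URL, collisions are impossible by construction.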
0
0
I am trying to understand CRC32 in order to generate a unique URL for a web page. If we use CRC32, what is the maximum number of URLs that can be used so that we avoid duplicates? What would be the approximate string length needed to keep the checksum within 2^32? When I tried a UUID for a URL and converted the UUID bytes to base 64, I could reduce it to 22 chars. I wonder if I can reduce it further. Mostly I want to convert the URL (maximum 1024 chars) to a short ID.
false
1,401,218
0.033321
1
0
1
CRC32 means cyclic redundancy check with 32 bits, where an arbitrary number of input bits is reduced to a 32-bit checksum. Checksum functions are not injective: multiple input values map to the same output value, so you cannot invert the function.
0
3,162
0
5
2009-09-09T18:16:00.000
c#,python,url,crc32,short-url
CRC32 to make short URL for web
1
5
6
1,401,243
0
0
0
I am trying to understand CRC32 in order to generate a unique URL for a web page. If we use CRC32, what is the maximum number of URLs that can be used so that we avoid duplicates? What would be the approximate string length needed to keep the checksum within 2^32? When I tried a UUID for a URL and converted the UUID bytes to base 64, I could reduce it to 22 chars. I wonder if I can reduce it further. Mostly I want to convert the URL (maximum 1024 chars) to a short ID.
false
1,401,218
0
1
0
0
No — even if you use MD5, or any other checksum, the URL can be a duplicate; it all depends on your luck. So don't make a unique URL based on those checksums.
0
3,162
0
5
2009-09-09T18:16:00.000
c#,python,url,crc32,short-url
CRC32 to make short URL for web
1
5
6
1,401,286
0
0
0
I am trying to understand CRC32 in order to generate a unique URL for a web page. If we use CRC32, what is the maximum number of URLs that can be used so that we avoid duplicates? What would be the approximate string length needed to keep the checksum within 2^32? When I tried a UUID for a URL and converted the UUID bytes to base 64, I could reduce it to 22 chars. I wonder if I can reduce it further. Mostly I want to convert the URL (maximum 1024 chars) to a short ID.
false
1,401,218
0.066568
1
0
2
The right way to make a short URL is to store the full one in the database and publish something that maps to the row index. A compact way is to use the Base64 of the row ID, for example. Or you could use a UID for the primary key and show that. Do not use a checksum, because it's too small and very likely to conflict. A cryptographic hash is larger and less likely, but it's still not the right way to go.
0
3,162
0
5
2009-09-09T18:16:00.000
c#,python,url,crc32,short-url
CRC32 to make short URL for web
1
5
6
1,401,331
0
0
0
Okay. So I have about 250,000 high resolution images. What I want to do is go through all of them and find ones that are corrupted. If you know what 4scrape is, then you know the nature of the images I have. Corrupted, to me, means the image is loaded into Firefox and it says The image “such and such image” cannot be displayed, because it contains errors. Now, I could select all of my 250,000 images (~150gb) and drag-n-drop them into Firefox. That would be bad though, because I don't think Mozilla designed Firefox to open 250,000 tabs. No, I need a way to programmatically check whether an image is corrupted. Does anyone know a PHP or Python library which can do something along these lines? Or an existing piece of software for Windows? I have already removed obviously corrupted images (such as ones that are 0 bytes) but I'm about 99.9% sure that there are more diseased images floating around in my throng of a collection.
false
1,401,527
0.119427
1
0
3
If your exact requirement is that it show correctly in Firefox, you may have a difficult time - the only way to be sure would be to link to the exact same image loading source code as Firefox. Basic image corruption (file is incomplete) can be detected simply by trying to open the file using any number of image libraries. However many images can fail to display simply because they stretch a part of the file format that the particular viewer you are using can't handle (GIF in particular has a lot of these edge cases, but you can find JPEG and the rare PNG file that can only be displayed in specific viewers). There are also some ugly JPEG edge cases where the file appears to be uncorrupted in viewer X, but in reality the file has been cut short and is only displaying correctly because very little information has been lost (Firefox can show some cut-off JPEGs correctly [you get a grey bottom], but others result in Firefox seeming to load them halfway and then displaying the error message instead of the partial image).
0
23,397
0
20
2009-09-09T19:15:00.000
php,python,image
How do I programmatically check whether an image (PNG, JPEG, or GIF) is corrupted?
1
1
5
1,401,566
0
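A cheap structural heuristic for the truncation case the answer describes — this only checks the well-known end markers of PNG and JPEG and will not catch everything; real verification needs a full decode with an image library:

```python
PNG_SIG = b"\x89PNG\r\n\x1a\n"

def looks_truncated(data):
    # Returns True/False for PNG and JPEG, None for formats it
    # doesn't know. Purely structural -- a full decoder catches more.
    if data.startswith(PNG_SIG):
        # A complete PNG ends with an IEND chunk.
        return b"IEND" not in data[-16:]
    if data.startswith(b"\xff\xd8"):
        # A complete JPEG ends with the EOI marker FF D9
        # (some writers pad with trailing zero bytes).
        return not data.rstrip(b"\x00").endswith(b"\xff\xd9")
    return None
```

Run over the whole collection, this flags the cut-short files quickly; the survivors can then be passed through an actual decoder for the subtler cases.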
0
0
How can I find out the http request my python cgi received? I need different behaviors for HEAD and GET. Thanks!
false
1,417,715
0
1
0
0
Why do you need to distinguish between GET and HEAD? Normally you shouldn't distinguish and should treat a HEAD request just like a GET. This is because a HEAD request is meant to return the exact same headers as a GET. The only difference is there will be no response content. Just because there is no response content though doesn't mean you no longer have to return a valid Content-Length header, or other headers, which are dependent on the response content. In mod_wsgi, which various people are pointing you at, it will actually deliberately change the request method from HEAD to GET in certain cases to guard against people who wrongly treat HEAD differently. The specific case where this is done is where an Apache output filter is registered. The reason that it is done in this case is because the output filter may expect to see the response content and from that generate additional response headers. If you were to decide not to bother to generate any response content for a HEAD request, you will deprive the output filter of the content and the headers they add may then not agree with what would be returned from a GET request. The end result of this is that you can stuff up caches and the operation of the browser. The same can apply equally for CGI scripts behind Apache as output filters can still be added in that case as well. For CGI scripts there is nothing in place though to protect against users being stupid and doing things differently for a HEAD request.
0
6,773
0
12
2009-09-13T13:12:00.000
python,http,httpwebrequest,cgi
Detecting the http request type (GET, HEAD, etc) from a python cgi
1
1
3
1,420,886
0
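For completeness, this is how a CGI script sees the method; per the answer, the only difference you should normally make for HEAD is omitting the body while keeping the headers identical:

```python
import os

def build_response(body):
    # CGI exposes the HTTP method via the REQUEST_METHOD env variable.
    method = os.environ.get("REQUEST_METHOD", "GET")
    headers = [("Content-Type", "text/html"),
               ("Content-Length", str(len(body)))]
    # Same headers either way (including Content-Length);
    # HEAD just omits the body.
    return headers, (b"" if method == "HEAD" else body)
```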
0
0
I'm trying to access a SOAP API using Suds. The SOAP API documentation states that I have to provide three cookies with some login data. How can I accomplish this?
true
1,417,902
1.2
0
0
4
Set a "Cookie" HTTP request header containing the required name/value pairs. This is how cookie values are usually transmitted in HTTP-based systems. You can put multiple key/value pairs in the same HTTP header. Single cookie: Cookie: name1=value1 Multiple cookies (separated by semicolons): Cookie: name1=value1; name2=value2
0
1,581
0
2
2009-09-13T14:44:00.000
python,soap,cookies,suds
Sending cookies in a SOAP request using Suds
1
1
1
1,417,916
0
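Building the header value is plain string work; passing it to suds would typically go through its extra-headers option (e.g. client.set_options(headers=header) -- verify the exact API against your suds version):

```python
def cookie_header(pairs):
    # Multiple cookies share one Cookie header, joined by "; ".
    return "; ".join("%s=%s" % (name, value) for name, value in pairs)

# The names/values here are the placeholders from the answer.
header = {"Cookie": cookie_header([("name1", "value1"),
                                   ("name2", "value2")])}
```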
0
0
I am using Selenium RC to automate some browser operations but I want the browser to be invisible. Is this possible? How? What about Selenium Grid? Can I hide the Selenium RC window also?
false
1,418,082
0.066568
0
0
4
If you're on Windows, one option is to run the tests under a different user account. This means the browser and java server will not be visible to your own account.
0
90,474
0
93
2009-09-13T16:07:00.000
python,selenium,selenium-rc
Is it possible to hide the browser in Selenium RC?
1
4
12
1,750,751
0
0
0
I am using Selenium RC to automate some browser operations but I want the browser to be invisible. Is this possible? How? What about Selenium Grid? Can I hide the Selenium RC window also?
false
1,418,082
0.049958
0
0
3
This is how I run my tests with Maven on a Linux desktop (Ubuntu). I got fed up with the Firefox webdriver always taking focus and making it impossible to work. I installed xvfb, then run: xvfb-run -a mvn clean install That's it.
0
90,474
0
93
2009-09-13T16:07:00.000
python,selenium,selenium-rc
Is it possible to hide the browser in Selenium RC?
1
4
12
11,261,393
0
0
0
I am using Selenium RC to automate some browser operations but I want the browser to be invisible. Is this possible? How? What about Selenium Grid? Can I hide the Selenium RC window also?
false
1,418,082
0
0
0
0
On MacOSX, I haven't been able to hide the browser window, but at least I figured out how to move it to a different display so it doesn't disrupt my workflow so much. While Firefox is running tests, just control-click its icon in the dock, select Options, and Assign to Display 2.
0
90,474
0
93
2009-09-13T16:07:00.000
python,selenium,selenium-rc
Is it possible to hide the browser in Selenium RC?
1
4
12
24,662,478
0
0
0
I am using Selenium RC to automate some browser operations but I want the browser to be invisible. Is this possible? How? What about Selenium Grid? Can I hide the Selenium RC window also?
false
1,418,082
0
0
0
0
Using headless Chrome would be your best bet, or you could post directly to the site to interact with it, which would save a lot of compute power for other things/processes. I use this when testing web automation bots that search for shoes on multiple sites using CPU-heavy elements; the more power you save and the simpler your program is, the easier it is to run multiple processes at a time with much greater speed and reliability.
0
90,474
0
93
2009-09-13T16:07:00.000
python,selenium,selenium-rc
Is it possible to hide the browser in Selenium RC?
1
4
12
55,484,939
0
0
0
I have a python program which starts up a PHP script using the subprocess.Popen() function. The PHP script needs to communicate back-and-forth with Python, and I am trying to find an easy but robust way to manage the message sending/receiving. I have already written a working protocol using basic sockets, but it doesn't feel very robust - I don't have any logic to handle dropped messages, and I don't even fully understand how sockets work which leaves me uncertain about what else could go wrong. Are there any generic libraries or IPC frameworks which are easier than raw sockets? ATM I need something which supports Python and PHP, but in the future I may want to be able to use C, Perl and Ruby also. I am looking for something robust, i.e. when the server or client crashes, the other party needs to be able to recover gracefully.
false
1,424,593
0
1
0
0
You could look at shared memory or named pipes, but I think there are two more likely options, assuming at least one of these languages is being used for a webapp: A. Use your database's atomicity. In Python, begin a transaction, put a message into a table, and end the transaction. From PHP, begin a transaction, take a message out of the table or mark it "read", and end the transaction. Make your PHP and/or Python self-aware enough not to post the same messages twice. Voila; reliable (and scalable) IPC, using existing web architecture. B. Make your webserver (assuming a webapp) capable of running both PHP and Python, lock down any internal processes to localhost-only access, and then call them from your other language using XML-RPC or SOAP with standard libraries. This is also scalable, as you can change your URLs and security lock-downs later.
0
2,322
0
1
2009-09-15T00:47:00.000
php,python,ipc
Easy, Robust IPC between Python and PHP
1
1
2
1,424,687
0
0
0
I want to simulate MyApp, which imports a module (ResourceX) that requires a resource that is not available at the time and will not work. A solution for this is to make and import a mock module of ResourceX (named ResourceXSimulated) and divert it to MyApp as ResourceX. I want to do this in order to avoid breaking a lot of code and getting all kinds of exceptions from MyApp. I am using Python and it should be something like: "import ResourceXSimulated as ResourceX", so that "ResourceX.getData()" actually calls ResourceXSimulated.getData(). Looking forward to finding out if Python supports this kind of redirection. Cheers. ADDITIONAL INFO: I have access to the source files. UPDATE: I am thinking of adding as little code as possible to MyApp regarding using the fake module and adding this code near the import statements.
false
1,443,173
0
1
0
0
Yes, Python can do that, and so long as the methods exposed in the ResourceXSimulated module "look and smell" like those of the original module, the application should not see much difference (other than, I'm assuming, bogus data fillers, different response times and such).
0
256
0
1
2009-09-18T08:12:00.000
python,testing,mocking,module,monkeypatching
Is it possible to divert a module in python? (ResourceX diverted to ResourceXSimulated)
1
2
5
1,443,195
0
0
0
I want to simulate MyApp, which imports a module (ResourceX) that requires a resource that is not available at the time and will not work. A solution for this is to make and import a mock module of ResourceX (named ResourceXSimulated) and divert it to MyApp as ResourceX. I want to do this in order to avoid breaking a lot of code and getting all kinds of exceptions from MyApp. I am using Python and it should be something like: "import ResourceXSimulated as ResourceX", so that "ResourceX.getData()" actually calls ResourceXSimulated.getData(). Looking forward to finding out if Python supports this kind of redirection. Cheers. ADDITIONAL INFO: I have access to the source files. UPDATE: I am thinking of adding as little code as possible to MyApp regarding using the fake module and adding this code near the import statements.
false
1,443,173
0.039979
1
0
1
Yes, it's possible. Some starters: You can "divert" modules by manipulating sys.modules. It keeps a list of imported modules, and there you can make your module appear under the same name as the original one. You must do this manipulation before any module imports the module you want to fake, though. You can also make a package with a different name, but inside that package actually use the original module name for your completely different module. This works well as long as the original module isn't installed. In neither of these cases can you use both modules at the same time. For that you need to monkey-patch the original module. And of course: It's perfectly possible to just call the new module by the old name. But it might be confusing.
0
256
0
1
2009-09-18T08:12:00.000
python,testing,mocking,module,monkeypatching
Is it possible to divert a module in python? (ResourceX diverted to ResourceXSimulated)
1
2
5
1,443,281
0
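The sys.modules approach from the answer, sketched with the names from the question (ResourceX and getData are the question's own placeholders):

```python
import sys
import types

# Build a stand-in module object and register it under the real
# module's name *before* MyApp gets a chance to import ResourceX.
fake = types.ModuleType("ResourceX")
fake.getData = lambda: "simulated data"
sys.modules["ResourceX"] = fake

import ResourceX              # resolved from sys.modules: the fake
print(ResourceX.getData())    # prints: simulated data
```

Because Python consults sys.modules before searching the filesystem, every later `import ResourceX` anywhere in the program receives the simulated module, with no changes to MyApp's own import statements.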
0
0
I want to stream binary data using Python. I do not have any idea how to achieve it. I created a Python socket program using SOCK_DGRAM. The problem with SOCK_STREAM is that it does not work over the internet, as our ISP doesn't allow a TCP server socket. I want to transmit screenshots periodically to a remote computer. I have an idea of maintaining a queue of binary data and having two threads write and read synchronously. I do not want to use VNC. How do I do it? I wrote a server socket and client socket using SOCK_STREAM; it was working on localhost but did not work over the internet, even with the respective IPs in place. We also tried running a Tomcat web server on one PC and accessing it from another PC over the internet, and it was not working.
true
1,451,349
1.2
0
0
2
There are two problems here. First problem, you will need to be able to address the remote party. This is related to what you referred to as "does not work over Internet as most ISP don't allow TCP server socket". It is usually difficult because the other party could be placed behind a NAT or a firewall. As for the second problem, the problem of actual transmitting of data after you can make a TCP connection, python socket would work if you can address the remote party.
0
1,737
0
1
2009-09-20T16:01:00.000
python,sockets
How to stream binary data in python
1
2
2
1,451,356
0
0
0
I want to stream binary data using Python. I do not have any idea how to achieve it. I created a Python socket program using SOCK_DGRAM. The problem with SOCK_STREAM is that it does not work over the internet, as our ISP doesn't allow a TCP server socket. I want to transmit screenshots periodically to a remote computer. I have an idea of maintaining a queue of binary data and having two threads write and read synchronously. I do not want to use VNC. How do I do it? I wrote a server socket and client socket using SOCK_STREAM; it was working on localhost but did not work over the internet, even with the respective IPs in place. We also tried running a Tomcat web server on one PC and accessing it from another PC over the internet, and it was not working.
false
1,451,349
0.291313
0
0
3
SOCK_STREAM is the correct way to stream data. What you're saying about ISPs makes very little sense; they don't control whether or not your machine listens on a certain port on an interface. Perhaps you're talking about firewall/addressing issues? If you insist on using UDP (and you shouldn't because you'll have to worry about packets arriving out of place or not arriving at all) then you'll need to first use socket.bind and then socket.recvfrom in a loop to read data and keep track of open connections. It'll be hard work to do correctly.
0
1,737
0
1
2009-09-20T16:01:00.000
python,sockets
How to stream binary data in python
1
2
2
1,451,365
0
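Once a TCP connection exists, message boundaries still have to be imposed by the application, because TCP is a byte stream. A common length-prefix framing scheme, demonstrated here over socketpair() (which behaves like a local connection):

```python
import socket
import struct

def send_blob(sock, data):
    # 4-byte big-endian length header, then the payload: the receiver
    # knows exactly how many bytes belong to this message.
    sock.sendall(struct.pack("!I", len(data)) + data)

def recv_exact(sock, n):
    # recv() may return fewer bytes than asked; loop until we have n.
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed mid-message")
        buf += chunk
    return buf

def recv_blob(sock):
    (length,) = struct.unpack("!I", recv_exact(sock, 4))
    return recv_exact(sock, length)

a, b = socket.socketpair()
send_blob(a, b"screenshot bytes here")
print(recv_blob(b))
```

For the screenshot use case, each frame of image bytes would be one blob; the receiving side can reassemble them in order without guessing where one screenshot ends and the next begins.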
0
0
I am sending packets from one PC to another. I am using the Python socket socket.socket(socket.AF_INET, socket.SOCK_DGRAM). Do we need to take care of the order in which packets are received? In the ISO-OSI model, layers below the transport layer handle all packet communication. Are all ISO-OSI layers present in the program, or are some of them present in the operating system? On localhost I get all packets in order. Will it make any difference over the internet?
false
1,458,087
0.462117
0
0
5
SOCK_DGRAM means you want to send packets by UDP -- no order guarantee, no guarantee of reception, no guarantee of lack of repetition. SOCK_STREAM would imply TCP -- no packet boundary guarantee, but (unless the connection's dropped;-) guarantee of order, reception, and no duplication. TCP/IP, the networking model that won the heart and soul of every live practitioner and made the Internet happen, is not compliant to ISO/OSI -- a standard designed at the drafting table and never really winning in the real world. The Internet as she lives and breathes is TCP/IP all the way. Don't rely on tests done on a low-latency local network as in ANY way representative of what will happen out there in the real world. Welcome to the real world, BTW, and, good luck (you'll need some!-).
0
2,274
0
2
2009-09-22T04:23:00.000
python,sockets
Python socket programming and ISO-OSI model
1
2
2
1,458,109
0
0
0
I am sending packets from one PC to another. I am using the Python socket socket.socket(socket.AF_INET, socket.SOCK_DGRAM). Do we need to take care of the order in which packets are received? In the ISO-OSI model, layers below the transport layer handle all packet communication. Are all ISO-OSI layers present in the program, or are some of them present in the operating system? On localhost I get all packets in order. Will it make any difference over the internet?
true
1,458,087
1.2
0
0
4
To answer your immediate question, if you're using SOCK_STREAM, then you're actually using TCP, which is an implementation of the transport layer which does take care of packet ordering and integrity for you. So it sounds like that's what you want to use. SOCK_DGRAM is actually UDP, which doesn't take care of any integrity for you. Do we need to take care of order in which packets are received ? In ISO-OSI model layers below transport layer handle all packets communication. Do all ISO-OSI layers present in the program ? Just to clear this up, in the ISO-OSI model, all the layers below the transport layer handle sending of a single packet from one computer to the other, and don't "understand" the concept of packet ordering (it doesn't apply to them). In this model, there is another layer (the session layer, above the transport layer) which is responsible for defining the session behavior. It is this layer which decides whether to have things put in place to prevent reordering, to ensure integrity, and so on. In the modern world, the ISO-OSI model is more of an idealistic template, rather than an actual model. TCP/IP is the actual implementation which is used almost everywhere. In TCP/IP, the transport layer is the one that has the role of defining whether there is any session behavior or not.
0
2,274
0
2
2009-09-22T04:23:00.000
python,sockets
Python socket programming and ISO-OSI model
1
2
2
1,458,734
0
0
0
I have a file contained in a key in my S3 bucket. I want to create a new key, which will contain the same file. Is it possible to do without downloading that file? I'm looking for a solution in Python (and preferably boto library).
false
1,464,961
0
0
0
0
Note that the 'copy' method on the Key object has a "preserve_acl" parameter (False by default) that will copy the source's ACL to the destination object.
0
9,405
0
17
2009-09-23T09:34:00.000
python,amazon-s3,boto
How to clone a key in Amazon S3 using Python (and boto)?
1
3
6
7,366,501
0
0
0
I have a file contained in a key in my S3 bucket. I want to create a new key, which will contain the same file. Is it possible to do without downloading that file? I'm looking for a solution in Python (and preferably boto library).
false
1,464,961
0.132549
0
0
4
Browsing through boto's source code, I found that the Key object has a "copy" method. Thanks for your suggestion about the CopyObject operation.
0
9,405
0
17
2009-09-23T09:34:00.000
python,amazon-s3,boto
How to clone a key in Amazon S3 using Python (and boto)?
1
3
6
1,466,148
0
0
0
I have a file contained in a key in my S3 bucket. I want to create a new key, which will contain the same file. Is it possible to do without downloading that file? I'm looking for a solution in Python (and preferably boto library).
false
1,464,961
0.066568
0
0
2
S3 allows object by object copy. The CopyObject operation creates a copy of an object when you specify the key and bucket of a source object and the key and bucket of a target destination. Not sure if boto has a compact implementation.
0
9,405
0
17
2009-09-23T09:34:00.000
python,amazon-s3,boto
How to clone a key in Amazon S3 using Python (and boto)?
1
3
6
1,465,978
0
0
0
What steps would be necessary, and what kind of maintenance would be expected if I wanted to contribute a module to the Python standard API? For example I have a module that encapsulates automated update functionality similar to Java's JNLP.
false
1,465,302
0.197375
0
0
2
First, look at modules on pypi. Download several that are related to what you're doing so you can see exactly what the state of the art is. For example, look at easy_install for an example of something like what you're proposing. After looking at other modules, write yours to look like theirs. Then publish information on your blog. When people show an interest, post it to SourceForge or something similar. This will allow you to get started slowly. When people start using it, you'll know exactly what kind of maintenance you need to do. Then, when demand ramps up, you can create the pypi information required to publish it on pypi. Finally, when it becomes so popular that people demand it be added to Python as a standard part of the library, many other folks will be involved in helping you mature your offering.
1
129
0
4
2009-09-23T10:56:00.000
python,api
What is involved in adding to the standard Python API?
1
1
2
1,465,505
0
0
0
Is it possible to extract the type of object or class name from a message received on a UDP socket in Python using metaclasses/reflection? The scenario is like this: receive a UDP buffer on a socket. The UDP buffer is a serialized binary string (a message), but the type of message is not known at this time, so it can't be deserialized into the appropriate message. Now, my question is: can I know the class name of the serialized binary string (received as a UDP buffer) so that I can deserialize it into the appropriate message and process it further? Thanks in advance.
false
1,487,582
0.132549
0
0
2
What you receive from the udp socket is a byte string -- that's all the "type of object or class name" that's actually there. If the byte string was built as a serialized object (e.g. via pickle, or maybe marshal, etc.) then you can deserialize it back to an object (using e.g. pickle.loads) and then introspect to your heart's content. But most byte strings were built otherwise and will raise exceptions when you try to call loads on them ;-). Edit: the OP's edit mentions the string is "a serialized object" but still doesn't say what serialization approach produced it, and that makes all the difference. pickle (and, for a much narrower range of types, marshal) place enough information in the strings they produce (via the respective modules' dumps functions) that their respective loads functions can deserialize back to the appropriate type; but other approaches (e.g., struct.pack) do not place such metadata in the strings they produce, so it's not feasible to deserialize without other, "out of band" so to speak, indications about the format in use. So, O.P., how was that serialized string of bytes produced in the first place...?
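If the bytes really were produced by pickle.dumps, the class name travels inside the byte string, so loads can rebuild the object and introspection does the rest. A minimal stdlib-only sketch (the ChatMessage class is a made-up example):

```python
import pickle

class ChatMessage(object):
    """Hypothetical message type for illustration."""
    def __init__(self, text):
        self.text = text

# Sender side: pickle embeds the class name in the serialized bytes.
payload = pickle.dumps(ChatMessage("hello"))

# Receiver side: loads rebuilds the object, and introspection reveals
# its type -- no out-of-band type information is needed.
obj = pickle.loads(payload)
print(type(obj).__name__)  # -> ChatMessage
```

Note this only works when both ends share the class definition; bytes produced by any other scheme will typically make pickle.loads raise, exactly as the answer says.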
0
642
0
1
2009-09-28T15:08:00.000
python,sockets,udp
Type of object from udp buffer in python using metaclasses/reflection
1
2
3
1,487,619
0
0
0
Is it possible to extract the type of object or class name from a message received on a UDP socket in Python using metaclasses/reflection? The scenario is like this: receive a UDP buffer on a socket. The UDP buffer is a serialized binary string (a message), but the type of message is not known at this time, so it can't be deserialized into the appropriate message. Now, my question is: can I know the class name of the serialized binary string (received as a UDP buffer) so that I can deserialize it into the appropriate message and process it further? Thanks in advance.
false
1,487,582
0
0
0
0
Updated answer after updated question: "But the type of message is not known at this time. So can't de-serialize into appropriate message." What you get is a sequence of bytes. How that sequence of bytes should be interpreted is a question of how the protocol looks. Only you know what protocol you use. So if you don't know the type of message, then there is nothing you can do about it. If you are to receive a stream of data and interpret it, you must know what that data means; otherwise you can't interpret it. It's as simple as that. "Now, my ques is Can I know the classname of the seraialized binary string" Yes. The class name is "str", as for all strings. (Unless you use Python 3, in which case you would not get a str but a bytes object.) The data inside that str has no class name. It's just binary data. It means whatever the sender wants it to mean. Again, I need to stress that you should not try to make this into a generic question. Explain exactly what you are trying to do, not generically, but specifically.
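When the serialization format itself carries no metadata (struct.pack and the like), the usual fix is to define the protocol so that one leading tag byte tells the receiver which layout follows. A hedged sketch with made-up message types:

```python
import struct

# Hypothetical wire formats: tag 1 carries (int, int), tag 2 carries (float,).
FORMATS = {1: "!ii", 2: "!d"}

def pack_message(tag, *fields):
    # A single leading tag byte tells the receiver how to unpack the rest.
    return struct.pack("!B", tag) + struct.pack(FORMATS[tag], *fields)

def unpack_message(data):
    (tag,) = struct.unpack("!B", data[:1])
    return tag, struct.unpack(FORMATS[tag], data[1:])

buf = pack_message(1, 7, 42)
tag, fields = unpack_message(buf)
print(tag, fields)  # -> 1 (7, 42)
```

The tag table is exactly the kind of "know what the data means" agreement the answer insists on; without some such convention the bytes are uninterpretable.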
0
642
0
1
2009-09-28T15:08:00.000
python,sockets,udp
Type of object from udp buffer in python using metaclasses/reflection
1
2
3
1,487,602
0
0
0
I've developed a chat server using the Twisted framework in Python. It works fine with a Telnet client, but a problem appears when I use my Flash client... (The Flash client works fine with my old PHP chat server; I rewrote the server in Python to gain performance.) The connection is established between the Flash client and the Twisted server: XMLSocket's onConnect returns TRUE, so it's not a problem with permissions on the policy file. I'm not able to send any message from the Flash client with the XMLSocket send() function; nothing is received on the server side. I tried ending those messages with '\n' or '\n\0' or '\0' without success. Do you have any clue?
false
1,489,931
0
0
0
0
I found out that the default line delimiter used by Twisted is '\r\n'. It can be overridden in your subclass with: LineOnlyReceiver.delimiter = '\n'
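To see why the delimiter matters without pulling in Twisted, here is a pure-Python sketch of the line-buffering a LineOnlyReceiver performs: with the default '\r\n' delimiter, a client that terminates its sends with plain '\n' never produces a complete line, so the protocol never sees any data.

```python
def split_lines(buffer, delimiter):
    # Minimal sketch of line-based buffering: complete lines are
    # delivered; the trailing partial chunk stays in the buffer.
    *lines, rest = buffer.split(delimiter)
    return lines, rest

data = "hello\nworld\n"  # what a '\n'-terminating client sends

# Default Twisted-style delimiter: nothing is ever delivered.
crlf_lines, crlf_rest = split_lines(data, "\r\n")

# Delimiter matched to the client: both lines come through.
lf_lines, lf_rest = split_lines(data, "\n")
print(crlf_lines, lf_lines)
```

This is only an illustration of the buffering logic, not Twisted's actual implementation, but it shows why matching the delimiter to what the Flash client sends fixes the "nothing received" symptom.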
0
927
0
1
2009-09-29T00:02:00.000
python,flash,twisted
Chat server with Twisted framework in python can't receive data from flash client
1
2
2
1,490,530
0
0
0
I've developed a chat server using the Twisted framework in Python. It works fine with a Telnet client, but a problem appears when I use my Flash client... (The Flash client works fine with my old PHP chat server; I rewrote the server in Python to gain performance.) The connection is established between the Flash client and the Twisted server: XMLSocket's onConnect returns TRUE, so it's not a problem with permissions on the policy file. I'm not able to send any message from the Flash client with the XMLSocket send() function; nothing is received on the server side. I tried ending those messages with '\n' or '\n\0' or '\0' without success. Do you have any clue?
true
1,489,931
1.2
0
0
1
Changing LineOnlyReceiver.delimiter is a pretty bad idea, since that changes the delimiter for all instances of LineOnlyReceiver (unless they've changed it themselves on a subclass or on the instance). If you ever happen to use any such code, it will probably break. You should change the delimiter by setting it on your LineOnlyReceiver subclass, since it's your subclass that has this requirement.
0
927
0
1
2009-09-29T00:02:00.000
python,flash,twisted
Chat server with Twisted framework in python can't receive data from flash client
1
2
2
1,729,776
0
0
0
I'm trying to create XML using the ElementTree object structure in Python. It all works very well except when it comes to processing instructions. I can create a PI easily using the factory function ProcessingInstruction(), but it doesn't get added to the ElementTree. I can add it manually, but I can't figure out how to add it above the root element, where PIs are normally placed. Does anyone know how to do this? I know of plenty of alternative methods of doing it, but it seems that this must be built in somewhere I just can't find.
false
1,489,949
0.07983
1
0
2
Yeah, I don't believe it's possible, sorry. ElementTree provides a simpler interface to (non-namespaced) element-centric XML processing than DOM, but the price for that is that it doesn't support the whole XML infoset. There is no apparent way to represent the content that lives outside the root element (comments, PIs, the doctype and the XML declaration), and these are also discarded at parse time. (Aside: this appears to include any default attributes specified in the DTD internal subset, which makes ElementTree strictly-speaking a non-compliant XML processor.) You can probably work around it by subclassing or monkey-patching the Python native ElementTree implementation's write() method to call _write on your extra PIs before _write-ing the _root, but it could be a bit fragile. If you need support for the full XML infoset, probably best stick with DOM.
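The monkey-patching route is fragile; a simpler workaround is to emit the prolog and PI text yourself and then append the serialized tree. A modern-Python, stdlib-only sketch (the stylesheet PI is a made-up example):

```python
import io
import xml.etree.ElementTree as ET

root = ET.Element("doc")
ET.SubElement(root, "item").text = "hello"

# ElementTree won't serialize a PI above the root element, so serialize
# the tree without a declaration and prepend the prolog and PI by hand.
buf = io.BytesIO()
ET.ElementTree(root).write(buf, encoding="utf-8", xml_declaration=False)

xml_bytes = (
    b"<?xml version='1.0' encoding='utf-8'?>\n"
    b"<?xml-stylesheet type='text/xsl' href='style.xsl'?>\n"  # hypothetical PI
    + buf.getvalue()
)
print(xml_bytes.decode("utf-8"))
```

This keeps ElementTree itself untouched, at the cost of treating the document prolog as plain text rather than part of the tree.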
0
4,332
0
6
2009-09-29T00:09:00.000
python,xml,elementtree
ElementTree in Python 2.6.2 Processing Instructions support?
1
1
5
1,490,057
0