Title
stringlengths
15
150
A_Id
int64
2.98k
72.4M
Users Score
int64
-17
470
Q_Score
int64
0
5.69k
ViewCount
int64
18
4.06M
Database and SQL
int64
0
1
Tags
stringlengths
6
105
Answer
stringlengths
11
6.38k
GUI and Desktop Applications
int64
0
1
System Administration and DevOps
int64
1
1
Networking and APIs
int64
0
1
Other
int64
0
1
CreationDate
stringlengths
23
23
AnswerCount
int64
1
64
Score
float64
-1
1.2
is_accepted
bool
2 classes
Q_Id
int64
1.85k
44.1M
Python Basics and Environment
int64
0
1
Data Science and Machine Learning
int64
0
1
Web Development
int64
0
1
Available Count
int64
1
17
Question
stringlengths
41
29k
Strengths of Shell Scripting compared to Python
796,344
17
109
63,953
0
python,shell
There's nothing you can do with shell scripts that you can't do with python. The big advantage of shell scripts is that you use the same commands as you do when you use the shell, so if you're a heavy shell user, shell scripting will at some point become a very quick and easy way to automate your shell work. I also find it easier to deal with pipes of data in shell scripts than in python, though it's absolutely doable from python. And, finally, you don't have to fire up an additional interpeter to run the shell scripts, giving you a very small, but sometimes maybe noticeable speed and memory usage advantage. But then again, Python scripts are a lot more maintainable, I'm trying to migrate from big ugly shell scripts to Python scripts for that very reason. It's also easier to do exception handling and QA with Python.
0
1
0
1
2009-04-28T05:16:00.000
8
1
false
796,319
0
0
0
8
I tried to learn shell(bash) scripting few times but was driven away by the syntax. Then I found Python and was able to do most of the things a shell script can do in Python. I am now not sure whether I should invest my time in learning shell scripting anymore. So I want to ask: What are strengths of shell scripting that make it an indispensable tool as compared to Python? I am not a system administration by profession, but I am interested in setting up Linux systems for home users, hence I think learning shell scripting can become necessary.
Strengths of Shell Scripting compared to Python
4,980,553
9
109
63,953
0
python,shell
I agree with most of the previous answers. I consider shell commands most suited to do filesystem-oriented tasks (copy and move files, grep, etc). Shell is better, in my opinion, if you have to read and write to file, since a single >>file.txt redirection appends to file instantly, instead of needing, say, file=open('file.txt','a'); file.write(), etc. Currently, for my personal use, I mix both, creating a python script and calling os.system('command') or os.popen('command') every time some action is easier in shell than in python.
0
1
0
1
2009-04-28T05:16:00.000
8
1
false
796,319
0
0
0
8
I tried to learn shell(bash) scripting few times but was driven away by the syntax. Then I found Python and was able to do most of the things a shell script can do in Python. I am now not sure whether I should invest my time in learning shell scripting anymore. So I want to ask: What are strengths of shell scripting that make it an indispensable tool as compared to Python? I am not a system administration by profession, but I am interested in setting up Linux systems for home users, hence I think learning shell scripting can become necessary.
Strengths of Shell Scripting compared to Python
9,188,834
5
109
63,953
0
python,shell
Another thing to consider when choosing shell scripts of Python is the Python version that will be running on the target machines. RHEL5 (to name one) is going to be around for a long time. RHEL5 is stuck with Python 2.4. There are a lot of nice libraries that depend on functionality added to Python post-2.4.
0
1
0
1
2009-04-28T05:16:00.000
8
0.124353
false
796,319
0
0
0
8
I tried to learn shell(bash) scripting few times but was driven away by the syntax. Then I found Python and was able to do most of the things a shell script can do in Python. I am now not sure whether I should invest my time in learning shell scripting anymore. So I want to ask: What are strengths of shell scripting that make it an indispensable tool as compared to Python? I am not a system administration by profession, but I am interested in setting up Linux systems for home users, hence I think learning shell scripting can become necessary.
Strengths of Shell Scripting compared to Python
814,425
36
109
63,953
0
python,shell
The shell makes common and simple actions really simple, at the expense of making more complex things much much more complex. Typically, a small shell script will be shorter and simpler than the corresponding python program, but the python program will tend to gracefully accept modifications, whereas the shell script will tend to get less and less maintainable as code is added. This has the consequence that for optimal day-to-day productivity you need shell-scripting, but you should use it mostly for throwaway scripts, and use python everywhere else.
0
1
0
1
2009-04-28T05:16:00.000
8
1
false
796,319
0
0
0
8
I tried to learn shell(bash) scripting few times but was driven away by the syntax. Then I found Python and was able to do most of the things a shell script can do in Python. I am now not sure whether I should invest my time in learning shell scripting anymore. So I want to ask: What are strengths of shell scripting that make it an indispensable tool as compared to Python? I am not a system administration by profession, but I am interested in setting up Linux systems for home users, hence I think learning shell scripting can become necessary.
Strengths of Shell Scripting compared to Python
797,341
58
109
63,953
0
python,shell
"What are strengths of shell scripting that make it an indispensable tool as compared to Python?" The shell is not indispensable. Why do you think there are so many? bash, tcsh, csh, sh, etc., etc., Python is a shell. Not the one you'd use for running all commands, but for scripting, it's ideal. Python is a more-or-less standard part of all Linux distro's. The more traditional shells do too many things. They have a handy user interface for running commands. This includes one-line commands where the shell searches your PATH, forks and execs the requested program. It also includes pipelines, sequences and concurrent programs (using ;, | and &) as well as some redirection (using > and <). They have a crummy little programming-language-like capability for running scripts. This language is rather hard to use and extremely inefficient. Most statements in this language require forking one or more additional processes, wasting time and memory. Running programs from the shell, redirecting stderr to a log file and that kind of thing is good. Do that in the shell. Almost everything else can be done more efficiently and more clearly as a Python script. You need both. However, you should never write a script with if-statements or loops in a traditional shell language.
0
1
0
1
2009-04-28T05:16:00.000
8
1
false
796,319
0
0
0
8
I tried to learn shell(bash) scripting few times but was driven away by the syntax. Then I found Python and was able to do most of the things a shell script can do in Python. I am now not sure whether I should invest my time in learning shell scripting anymore. So I want to ask: What are strengths of shell scripting that make it an indispensable tool as compared to Python? I am not a system administration by profession, but I am interested in setting up Linux systems for home users, hence I think learning shell scripting can become necessary.
Strengths of Shell Scripting compared to Python
796,348
7
109
63,953
0
python,shell
The shell is available everywhere. If you stick to a relatively basic set of portable functionality, your scripts can run on cell phones, wireless routers, DVRs, netbooks, workstations, big iron servers, and the like. Python is not necessarily included out of the box on lots of systems, and depending on the environment it may be hard to get it installed. Learning some shell scripting can also help you learn some command line tricks, since the command line is, well, the shell. It's also good for taking some fairly long and complicated command line, and converting that into a more general script after you realize you'll need it some more. The shell also has some pretty powerful features; pipelines are a really interesting control construct that is native only to the shell, as far as I know.
0
1
0
1
2009-04-28T05:16:00.000
8
1
false
796,319
0
0
0
8
I tried to learn shell(bash) scripting few times but was driven away by the syntax. Then I found Python and was able to do most of the things a shell script can do in Python. I am now not sure whether I should invest my time in learning shell scripting anymore. So I want to ask: What are strengths of shell scripting that make it an indispensable tool as compared to Python? I am not a system administration by profession, but I am interested in setting up Linux systems for home users, hence I think learning shell scripting can become necessary.
Strengths of Shell Scripting compared to Python
3,074,616
11
109
63,953
0
python,shell
one doesn't have to learn shell scripting, as all the previous answers indicate; but learning is never a bad thing. it's really a question of personal priorities. it's very hard for someone else to tell you what is and isn't worth your time. most programmers find that learning new languages gets incrementally easier each time. (the same is largely true of natural languages too.) and the earlier you start, the better. plus: having learned a language enables you to extravagantly diss its limitations from a position of complete knowledge and familiarity. this probably won't get you laid, but might earn you a beer from your peers!
0
1
0
1
2009-04-28T05:16:00.000
8
1
false
796,319
0
0
0
8
I tried to learn shell(bash) scripting few times but was driven away by the syntax. Then I found Python and was able to do most of the things a shell script can do in Python. I am now not sure whether I should invest my time in learning shell scripting anymore. So I want to ask: What are strengths of shell scripting that make it an indispensable tool as compared to Python? I am not a system administration by profession, but I am interested in setting up Linux systems for home users, hence I think learning shell scripting can become necessary.
Strengths of Shell Scripting compared to Python
796,343
95
109
63,953
0
python,shell
Shell scripting has simpler notations for I/O redirection. It is simpler to create pipelines out of existing programs in shell. Shell scripting reuses entire programs. Shell is universally available (on anything like Unix) - Python is not necessarily installed. 'Tis true that you can do everything in Python that you can do in shell; 'tis also true that there are things that are easy in Python that are hard in shell (just as there are things that are easy in shell but hard in Python). Knowing both will be best in the long term.
0
1
0
1
2009-04-28T05:16:00.000
8
1.2
true
796,319
0
0
0
8
I tried to learn shell(bash) scripting few times but was driven away by the syntax. Then I found Python and was able to do most of the things a shell script can do in Python. I am now not sure whether I should invest my time in learning shell scripting anymore. So I want to ask: What are strengths of shell scripting that make it an indispensable tool as compared to Python? I am not a system administration by profession, but I am interested in setting up Linux systems for home users, hence I think learning shell scripting can become necessary.
How to call a Perl script from Python, piping input to it?
798,508
2
6
13,888
0
python,perl,domainkeys,dkim
I'm sure there's a reason you're going down the route you've chosen, but why not just do the signing in Python? How are you signing it? Maybe we could provide some assitance in writing a python implementation?
0
1
0
1
2009-04-28T14:58:00.000
6
0.066568
false
798,413
0
0
0
1
I'm hacking some support for DomainKeys and DKIM into an open source email marketing program, which uses a python script to send the actual emails via SMTP. I decided to go the quick and dirty route, and just write a perl script that accepts an email message from STDIN, signs it, then returns it signed. What I would like to do, is from the python script, pipe the email text that's in a string to the perl script, and store the result in another variable, so I can send the email signed. I'm not exactly a python guru, however, and I can't seem to find a good way to do this. I'm pretty sure I can use something like os.system for this, but piping a variable to the perl script is something that seems to elude me. In short: How can I pipe a variable from a python script, to a perl script, and store the result in Python? EDIT: I forgot to include that the system I'm working with only has python v2.3
Notifying container object: best practices
802,084
5
4
403
0
python,architecture,containers,notifications
You're over-thinking this. Seriously. Python isn't C++; your concerns are non-issues in Python. Just write what makes sense in your problem domain. " Not absolutely good because of circular references." Why not? Circularity is of no relevance here at all. Bidirectional relationships are great things. Use them. Python garbage collects them just fine without any thinking on your part. What possible problem do you have with mutual (birectional) relationships? "...operate only Accounts, and within them, internally, handler operators. This one is a bit limiting because in this case I cannot pass around references to operators. " What? Your Operators are Python objects, pass all you want. All Python objects are (in effect) references, don't sweat it. What possible problem do you have with manipulating Operator objects?
0
1
0
1
2009-04-29T11:31:00.000
5
1.2
true
801,931
0
0
0
2
I have two classes: Account and Operator. Account contains a list of Operators. Now, whenever an operator (in the list) receives a message I want to notify Account object to perform some business logic as well. I think of three alternatives on how to achieve this: 1) Hold a reference within Operator to the container [Account] object and call methods directly. Not absolutely good because of circular references. 2) Use events. As far as I know there is no built-in event handling mechanism in Python. So, this one is a bit tricky to implement. 3) Don't send messages to Operators directly. Instead, operate only Accounts, and within them, internally, handler operators. This one is a bit limiting because in this case I cannot pass around references to operators. I wonder which approach is the most advantageous from the architectural point of view. How do you usually handle this task? It would be great if you could point out snippets in Python.
Notifying container object: best practices
802,031
3
4
403
0
python,architecture,containers,notifications
There is no "one-size-fits-all" solution for the Observer pattern. But usually, it's better to define an EventManager object where interested parties can register themselves for certain events and post these events whenever they happen. It simply creates less dependencies. Note that you need to use a global EventManager instance, which can be problematic during testing or from a general OO point of view (it's a global variable). I strongly advise against passing the EventManager around all the time because that will clutter your code. In my own code, the "key" for registering events is the class of the event. The EventManager uses a dictionary (event class -> list of observers) to know which event goes where. In the notification code, you can then use dict.get(event.__class__, ()) to find your listeners.
0
1
0
1
2009-04-29T11:31:00.000
5
0.119427
false
801,931
0
0
0
2
I have two classes: Account and Operator. Account contains a list of Operators. Now, whenever an operator (in the list) receives a message I want to notify Account object to perform some business logic as well. I think of three alternatives on how to achieve this: 1) Hold a reference within Operator to the container [Account] object and call methods directly. Not absolutely good because of circular references. 2) Use events. As far as I know there is no built-in event handling mechanism in Python. So, this one is a bit tricky to implement. 3) Don't send messages to Operators directly. Instead, operate only Accounts, and within them, internally, handler operators. This one is a bit limiting because in this case I cannot pass around references to operators. I wonder which approach is the most advantageous from the architectural point of view. How do you usually handle this task? It would be great if you could point out snippets in Python.
Python "Task Server"
1,556,571
1
4
2,958
0
python
You can have a look at celery
0
1
0
0
2009-04-30T02:19:00.000
5
0.039979
false
805,120
0
0
1
2
My question is: which python framework should I use to build my server? Notes: This server talks HTTP with it's clients: GET and POST (via pyAMF) Clients "submit" "tasks" for processing and, then, sometime later, retrieve the associated "task_result" submit and retrieve might be separated by days - different HTTP connections The "task" is a lump of XML describing a problem to be solved, and a "task_result" is a lump of XML describing an answer. When a server gets a "task", it queues it for processing The server manages this queue and, when tasks get to the top, organises that they are processed. the processing is performed by a long running (15 mins?) external program (via subprocess) which is feed the task XML and which produces a "task_result" lump of XML which the server picks up and stores (for later Client retrieval). it serves a couple of basic HTML pages showing the Queue and processing status (admin purposes only) I've experimented with twisted.web, using SQLite as the database and threads to handle the long running processes. But I can't help feeling that I'm missing a simpler solution. Am I? If you were faced with this, what technology mix would you use?
Python "Task Server"
805,126
0
4
2,958
0
python
It seems any python web framework will suit your needs. I work with a similar system on a daily basis and I can tell you, your solution with threads and SQLite for queue storage is about as simple as you're going to get. Assuming order doesn't matter in your queue, then threads should be acceptable. It's important to make sure you don't create race conditions with your queues or, for example, have two of the same job type running simultaneously. If this is the case, I'd suggest a single threaded application to do the items in the queue one by one.
0
1
0
0
2009-04-30T02:19:00.000
5
0
false
805,120
0
0
1
2
My question is: which python framework should I use to build my server? Notes: This server talks HTTP with it's clients: GET and POST (via pyAMF) Clients "submit" "tasks" for processing and, then, sometime later, retrieve the associated "task_result" submit and retrieve might be separated by days - different HTTP connections The "task" is a lump of XML describing a problem to be solved, and a "task_result" is a lump of XML describing an answer. When a server gets a "task", it queues it for processing The server manages this queue and, when tasks get to the top, organises that they are processed. the processing is performed by a long running (15 mins?) external program (via subprocess) which is feed the task XML and which produces a "task_result" lump of XML which the server picks up and stores (for later Client retrieval). it serves a couple of basic HTML pages showing the Queue and processing status (admin purposes only) I've experimented with twisted.web, using SQLite as the database and threads to handle the long running processes. But I can't help feeling that I'm missing a simpler solution. Am I? If you were faced with this, what technology mix would you use?
Python's bz2 module not compiled by default
813,744
28
28
40,171
0
python,c,compiler-construction
Use your vendor's package management to add the package that contains the development files for bz2. It's usually a package called "libbz2-dev". E.g. on Ubuntu sudo apt-get install libbz2-dev
0
1
0
1
2009-05-01T19:03:00.000
4
1
false
812,781
0
0
0
3
It seems that Python 2.6.1 doesn't compile bz2 library by default from source. I don't have lib-dynload/bz2.so What's the quickest way to add it (without installing Python from scratch)? OS is Linux 2.4.32-grsec+f6b+gr217+nfs+a32+fuse23+tg+++opt+c8+gr2b-v6.194 #1 SMP Tue Jun 6 15:52:09 PDT 2006 i686 GNU/Linux IIRC I used only --prefix flag.
Python's bz2 module not compiled by default
813,112
33
28
40,171
0
python,c,compiler-construction
You need libbz2.so (the general purpose libbz2 library) properly installed first, for Python to be able to build its own interface to it. That would typically be from a package in your Linux distro likely to have "libbz2" and "dev" in the package name.
0
1
0
1
2009-05-01T19:03:00.000
4
1.2
true
812,781
0
0
0
3
It seems that Python 2.6.1 doesn't compile bz2 library by default from source. I don't have lib-dynload/bz2.so What's the quickest way to add it (without installing Python from scratch)? OS is Linux 2.4.32-grsec+f6b+gr217+nfs+a32+fuse23+tg+++opt+c8+gr2b-v6.194 #1 SMP Tue Jun 6 15:52:09 PDT 2006 i686 GNU/Linux IIRC I used only --prefix flag.
Python's bz2 module not compiled by default
6,848,047
9
28
40,171
0
python,c,compiler-construction
If you happen to be trying to compile Python on RHEL5 the package is called bzip2-devel, and if you have RHN set up it can be installed with this command: yum install bzip2-devel Once that is done, you don't need either of the --enable-bz2 or --with-bz2 options, but you might need --enable-shared.
0
1
0
1
2009-05-01T19:03:00.000
4
1
false
812,781
0
0
0
3
It seems that Python 2.6.1 doesn't compile bz2 library by default from source. I don't have lib-dynload/bz2.so What's the quickest way to add it (without installing Python from scratch)? OS is Linux 2.4.32-grsec+f6b+gr217+nfs+a32+fuse23+tg+++opt+c8+gr2b-v6.194 #1 SMP Tue Jun 6 15:52:09 PDT 2006 i686 GNU/Linux IIRC I used only --prefix flag.
Google App Engine - design considerations about cron tasks
815,113
3
1
1,473
1
python,database,google-app-engine,cron
I think you'll find that snapshotting every user's state every hour isn't something that will scale well no matter what your framework. A more ordinary environment will disguise this by letting you have longer running tasks, but you'll still reach the point where it's not practical to take a snapshot of every user's data, every hour. My suggestion would be this: Add a 'last snapshot' field, and subclass the put() function of your model (assuming you're using Python; the same is possible in Java, but I don't know the syntax), such that whenever you update a record, it checks if it's been more than an hour since the last snapshot, and if so, creates and writes a snapshot record. In order to prevent concurrent updates creating two identical snapshots, you'll want to give the snapshots a key name derived from the time at which the snapshot was taken. That way, if two concurrent updates try to write a snapshot, one will harmlessly overwrite the other. To get the snapshot for a given hour, simply query for the oldest snapshot newer than the requested period. As an added bonus, since inactive records aren't snapshotted, you're saving a lot of space, too.
0
1
0
0
2009-05-02T13:54:00.000
3
0.197375
false
814,896
0
0
1
1
I'm developing software using the Google App Engine. I have some considerations about the optimal design regarding the following issue: I need to create and save snapshots of some entities at regular intervals. In the conventional relational db world, I would create db jobs which would insert new summary records. For example, a job would insert a record for every active user that would contain his current score to the "userrank" table, say, every hour. I'd like to know what's the best method to achieve this in Google App Engine. I know that there is the Cron service, but does it allow us to execute jobs which will insert/update thousands of records?
how to get the n-th record of a datastore query
827,149
3
1
845
0
python,google-app-engine,google-cloud-datastore,custompaging
There is no efficient way to do this - in any DBMS. In every case, you have to at least read sequentially through the index records until you find the nth one, then look up the corresponding data record. This is more or less what fetch(count, offset) does in GAE, with the additional limitation of 1000 records. A better approach to this is to keep a 'bookmark', consisting of the value of the field you're ordering on for the last entity you retrieved, and the entity's key. Then, when you want to continue from where you left off, you can add the field's value as the lower bound of an inequality query, and skip records until you match or exceed the last one you saw. If you want to provide 'friendly' page offsets to users, what you can do is to use memcache to store an association between a start offset and a bookmark (order_property, key) tuple. When you generate a page, insert or update the bookmark for the entity following the last one. When you fetch a page, use the bookmark if it exists, or generate it the hard way, by doing queries with offsets - potentially multiple queries if the offset is high enough.
0
1
0
0
2009-05-05T20:14:00.000
2
1.2
true
826,724
0
0
1
1
Suppose that I have the model Foo in GAE and this query: query = Foo.all().order('-key') I want to get the n-th record. What is the most efficient way to achieve that? Will the solution break if the ordering property is not unique, such as the one below: query = Foo.all().order('-color') edit: n > 1000 edit 2: I want to develop a friendly paging mechanism that shows pages available (such as Page 1, Page 2, ... Page 185) and requires a "?page=x" in the query string, instead of a "?bookmark=XXX". When page = x, the query is to fetch the records beginning from the first record of that page.
design for handling exceptions - google app engine
833,840
0
6
1,603
0
python,google-app-engine,exception-handling,web-applications
Ad. #4: I usually treat query strings as non-essential. If anything is wrong with query string, I'd just present bare resource page (as if no query was present), possibly with some information to user what was wrong with the query string. This leads to the problem similar to your #3: how did the user got into this wrong query? Did my application produce wrong URL somewhere? Or was it outdated link in some external service, or saved bookmark? HTTP_REFERER might contain some clue, but of course is not authoritative, so I'd log the problematic query (with some additional HTTP headers) and try to investigate the case.
0
1
0
0
2009-05-06T16:51:00.000
2
0
false
830,597
0
0
1
1
I'm developing a project on google app engine (webapp framework). I need you people to assess how I handle exceptions. There are 4 types of exceptions I am handling: Programming exceptions Bad user input Incorrect URLs Incorrect query strings Here is how I handle them: I have subclassed the webapp.requesthandler class and overrode the handle_exceptions method. Whenever an exception occurs, I take the user to a friendly "we're sorry" page and in the meantime send a message with the traceback to the admins. On the client side I (will) use js and also validate on the server side. Here I figure (as a coder with non-web experience) in addition to validate inputs according to programming logic (check: cash input is of the float type?) and business rules (check: user has enough points to take that action?), I also have to check against malicious intentions. What measures should I take against malicious actions? I have a catch-all URL that handles incorrect URLs. That is to say, I take the user to a custom "page does not exist" page. Here I have no problems, I think. Incorrect query strings presumably raise exceptions if left to themselves. If an ID does not exist, the method returns None (an exception is on the way). if the parameter is inconvenient, the code raises an exception. Here I think I must raise a 404 and take the user to the custom "page does not exist" page. What should I do? What are your opinions? Thanks in advance..
Cannot access Python server running as Windows service
834,878
0
2
1,004
0
python,windows-services,tcp
Check to see that the service is running under the Nertwork Service account and not the Local System account. The later doesn't have network access and is the default user to run services under. You can check this by going to the services app under administrative tool in the start menu and looking for your service. If you right-click the service you can go to properties and change the user that it is run under.
0
1
0
0
2009-05-07T05:52:00.000
3
0
false
833,062
0
0
0
2
I have written a Python TCP/IP server for internal use, using win32serviceutil/py2exe to create a Windows service. I installed it on a computer running Windows XP Pro SP3. However, I can't connect to it when it's running as a service. I can confirm that it's binding to the address/port, because I get a conflict when I try to bind to that address/port with another application. Further, I have checked the Windows Firewall settings and have added appropriate exceptions. If I run the server as a simple console application, everything works as expected. However, when I run it as a service, it doesn't work. I vaguely remember running into this problem before, but for the life of me can't remember any of the details. Suggestions, anyone?
Cannot access Python server running as Windows service
904,114
1
2
1,004
0
python,windows-services,tcp
First of all, whenever you implement a Windows service, be sure to add proper logging. My worker threads were terminating because of the exception, "The socket operation could not complete without blocking." The solution was to simply call sock.setblocking(1) after accepting the connection.
0
1
0
0
2009-05-07T05:52:00.000
3
1.2
true
833,062
0
0
0
2
I have written a Python TCP/IP server for internal use, using win32serviceutil/py2exe to create a Windows service. I installed it on a computer running Windows XP Pro SP3. However, I can't connect to it when it's running as a service. I can confirm that it's binding to the address/port, because I get a conflict when I try to bind to that address/port with another application. Further, I have checked the Windows Firewall settings and have added appropriate exceptions. If I run the server as a simple console application, everything works as expected. However, when I run it as a service, it doesn't work. I vaguely remember running into this problem before, but for the life of me can't remember any of the details. Suggestions, anyone?
How to run a script without being in the tasktray?
833,364
0
1
109
0
python
Set the scheduled task to start the script as minimized.
0
1
0
0
2009-05-07T07:45:00.000
2
0
false
833,356
0
0
0
2
I have a scheduled task which runs a python script every 10 min so it turns out that a script pops up on my desktop every 10 min how can i make it invincible so my script will work in the background ? I've been told that pythonw will do the work, but I cant figure out how to use it any help ? thanks
How to run a script without being in the tasktray?
833,391
3
1
109
0
python
I've been told that pythonw will do the work, but I cant figure out how to use it Normally you just have to rename the file extension to .pyw. Then it will be executed by pythonw.
0
1
0
0
2009-05-07T07:45:00.000
2
0.291313
false
833,356
0
0
0
2
I have a scheduled task which runs a python script every 10 min so it turns out that a script pops up on my desktop every 10 min how can i make it invincible so my script will work in the background ? I've been told that pythonw will do the work, but I cant figure out how to use it any help ? thanks
datastore transaction restrictions
838,960
0
3
384
1
python,google-app-engine,transactions,google-cloud-datastore
After a through research, I have found that a distributed transaction layer that provides a solution to the single entity group restriction has been developed in userland with the help of some google people. But so far, it is not released and is only available in java.
0
1
0
0
2009-05-07T20:55:00.000
3
1.2
true
836,992
0
0
1
1
in my google app application, whenever a user purchases a number of contracts, these events are executed (simplified for clarity): user.cash is decreased user.contracts is increased by the number contracts.current_price is updated. market.no_of_transactions is increased by 1. in a rdms, these would be placed within the same transaction. I conceive that google datastore does not allow entities of more than one model to be in the same transaction. what is the correct approach to this issue? how can I ensure that if a write fails, all preceding writes are rolled back? edit: I have obviously missed entity groups. Now I'd appreciate some further information regarding how they are used. Another point to clarify is google says "Only use entity groups when they are needed for transactions. For other relationships between entities, use ReferenceProperty properties and Key values, which can be used in queries". does it mean I have to define both a reference property (since I need queriying them) and a parent-child relationship (for transactions)? edit 2: and finally, how do I define two parents for an entity if the entity is being created to establish an n-to-n relationship between 2 parents?
How do I invoke Python code from Ruby?
837,862
-1
5
6,499
0
python,ruby
For python code to run the interpreter needs to be launched as a process. So system() is your best option. For calling the python code you could use RPC or network sockets, got for the simplest thing which could possibly work.
0
1
0
1
2009-05-07T21:59:00.000
5
-0.039979
false
837,256
0
0
1
2
Does a easy to use Ruby to Python bridge exist? Or am I better off using system()?
How do I invoke Python code from Ruby?
837,296
2
5
6,499
0
python,ruby
I don't think there's any way to invoke Python from Ruby without forking a process, via system() or something. The language run times are utterly diferent, they'd need to be in separate processes anyway.
0
1
0
1
2009-05-07T21:59:00.000
5
0.07983
false
837,256
0
0
1
2
Does a easy to use Ruby to Python bridge exist? Or am I better off using system()?
Twisted and p2p applications
839,411
1
14
5,823
0
python,twisted,protocols,p2p
Yes, twisted was used to create the initial version of Bittorrent. There are some opensource libraries to start from.
0
1
0
0
2009-05-08T11:20:00.000
4
0.049958
false
839,384
0
0
0
1
Can you tell me: could I use twisted for p2p-applications creating? And what protocols should I choose for this?
Advice for Windows system scripting+programming
841,689
8
0
501
0
.net,ruby,powershell,scripting,ironpython
Except for the seventh item on your list this should be fairly trivial using Powershell and WMI, as this is perhaps the natural domain for Powershell. Since you won't need another language for the first six list items it shouldn't really matter what you use for the last one. You probably can use PS (I've never done IO with it, though) or whatever suits you. As for your second question: VBScript is probably not going to go away in the near future as the Windows Script Host is still a safer bet when writing small scripts for deployment, as it comes preinstalled on every Windows since 98. Powershell is only included in Windows 7 and later. That being said, Powershell is surely targeted at obsoleting WSH and CMD for automation purposes since it offers the same features of the aforementioned ones and much more (like easy .NET and WMI access). VB.NET on the other hand is one of the primary .NET languages marketed by Microsoft. It has little to no relation to VBScript, is no competitor to Powershell or WPF (heck, those are completely different technologies). You may see some convergence with C# going on as both languages still seem to struggle a little finding their intended market. Still, VB.NET is the easiest choice of switching to .NET when you're a VB programmer and there were/are lots of them MS didn't want to lose just because they created .NET.
0
1
0
1
2009-05-08T20:35:00.000
7
1
false
841,669
0
0
0
3
I just got a project where I have to do the following on a Windows OS: detect how many drives (C: D: E: ..etc) are connected to current system what the system labels are for each volume how much storage (both used and free) for each of the drives what format each drive is (NTFS/FAT32) how many files are in a given directory in any of those drives how big each file size is File processing (each file is about 2GB) where I have to do a lot of C-like fseek(), and binary data parsing, and big to little-endian conversion. Have to write some logic code as well. I'm an experienced C\C++ programmer, but I thought this would be a perfect time for me to start learning about scripting. Candidates that I thought of are: ( Python || Ruby ) && PowerShell. Are those the kinds of things I can accomplish with, say, IronPython+Powershell? Or are there better tools/languages out there? PS: Is PowerShell meant to replace VBScript? Also, what is VB.net good for anyway now that C#, WPF, and Powershell exist?
Advice for Windows system scripting+programming
844,915
0
0
501
0
.net,ruby,powershell,scripting,ironpython
As for Perl, Ruby too has access to all Win32 API and WMI functions.
0
1
0
1
2009-05-08T20:35:00.000
7
0
false
841,669
0
0
0
3
I just got a project where I have to do the following on a Windows OS: detect how many drives (C: D: E: ..etc) are connected to current system what the system labels are for each volume how much storage (both used and free) for each of the drives what format each drive is (NTFS/FAT32) how many files are in a given directory in any of those drives how big each file size is File processing (each file is about 2GB) where I have to do a lot of C-like fseek(), and binary data parsing, and big to little-endian conversion. Have to write some logic code as well. I'm an experienced C\C++ programmer, but I thought this would be a perfect time for me to start learning about scripting. Candidates that I thought of are: ( Python || Ruby ) && PowerShell. Are those the kinds of things I can accomplish with, say, IronPython+Powershell? Or are there better tools/languages out there? PS: Is PowerShell meant to replace VBScript? Also, what is VB.net good for anyway now that C#, WPF, and Powershell exist?
Advice for Windows system scripting+programming
842,535
0
0
501
0
.net,ruby,powershell,scripting,ironpython
I'll give you the unpopular answer then since no one else has added it: Perl. If you're comfortable with the Win32 API as a C/C++ programmer, Perl may be the easier way to go. It has modules for accessing the Win32 API and Perl is quite easy for C/C++ programmers to get up to speed in. Perl has always done the job for me in the past.
0
1
0
1
2009-05-08T20:35:00.000
7
0
false
841,669
0
0
0
3
I just got a project where I have to do the following on a Windows OS: detect how many drives (C: D: E: ..etc) are connected to current system what the system labels are for each volume how much storage (both used and free) for each of the drives what format each drive is (NTFS/FAT32) how many files are in a given directory in any of those drives how big each file size is File processing (each file is about 2GB) where I have to do a lot of C-like fseek(), and binary data parsing, and big to little-endian conversion. Have to write some logic code as well. I'm an experienced C\C++ programmer, but I thought this would be a perfect time for me to start learning about scripting. Candidates that I thought of are: ( Python || Ruby ) && PowerShell. Are those the kinds of things I can accomplish with, say, IronPython+Powershell? Or are there better tools/languages out there? PS: Is PowerShell meant to replace VBScript? Also, what is VB.net good for anyway now that C#, WPF, and Powershell exist?
Piping Batch File output to a Python script
842,139
1
3
9,251
0
python,windows,scripting,batch-file,io
Try subprocess.Popen(). It allows you to redirect stdout and stderr to files.
0
1
0
0
2009-05-08T22:41:00.000
3
0.066568
false
842,120
0
0
0
1
I'm trying to write a python script (in windows) that runs a batch file and will take the command line output of that batch file as input. The batch file runs processes that I don't have access to and gives output based on whether those processes are successful. I'd like to take those messages from the batch file and use them in the python script. Anyone have any ideas on how to do this ?
Is Twisted an httplib2/socket replacement?
847,014
0
6
1,969
0
python,networking,sockets,twisted,httplib2
Should new networking code (with the exception of small command line tools) be written with Twisted? Maybe. It really depends. Sometimes its just easy enough to wrap the blocking calls in their own thread. Twisted is good for large scale network code. Would you mix Twisted, http2lib or socket code in the same project? Sure. But just remember that Twisted is single threaded, and that any blocking call in Twisted will block the entire engine. Is Twisted pythonic for most libraries (it is more complex than alternatives, introduce a dependency to a non-standard package...)? There are many Twisted zealots that will say it belongs in the Python standard library. But many people can implement decent networking code with asyncore/asynchat.
0
1
1
0
2009-05-11T06:30:00.000
2
1.2
true
846,950
0
0
0
1
Many python libraries, even recently written ones, use httplib2 or the socket interface to perform networking tasks. Those are obviously easier to code on than Twisted due to their blocking nature, but I think this is a drawback when integrating them with other code, especially GUI one. If you want scalability, concurrency or GUI integration while avoiding multithreading, Twisted is then a natural choice. So I would be interested in opinions in those matters: Should new networking code (with the exception of small command line tools) be written with Twisted? Would you mix Twisted, http2lib or socket code in the same project? Is Twisted pythonic for most libraries (it is more complex than alternatives, introduce a dependency to a non-standard package...)? Edit: please let me phrase this in another way. Do you feel writing new library code with Twisted may add a barrier to its adoption? Twisted has obvious benefits (especially portability and scalability as stated by gimel), but the fact that it is not a core python library may be considered by some as a drawback.
Python M2Crypto EC Support
848,795
0
2
880
0
python,m2crypto
Possibly its looking up shared libs libssl.so and libcrypto.so and finding the old ones in /usr/lib if you add the new_path to the top of /etc/ld.so.conf so it gets searched first it would work. But this might break other OpenSSL applications expecting old OpenSSL.
0
1
0
1
2009-05-11T14:55:00.000
3
0
false
848,508
0
0
0
1
M2Crypto provides EC support for ECDSA/ECDH. I have installed OpenSSL 0.9.8i which contains support for EC. However when I run "from M2Crypto import EC,BIO" I get error saying EC_init() failed. So I added debug to print m2.OPENSSL_VERSION_TEXT value. It gets printed as "OpenSSL 0.9.7 19 Feb 2003". This version of OpenSSL doesnot support EC. I tried "python setup.py build build_ext --openssl="new_path where OpenSSL 0.9.8i is installed". Though M2Crypto is built again "Python setup.py install" , I still see that it points to "Old version of OpenSSL". Any Pointers on how to successfully get M2Crypto to use 0.9.8i will be useful.
How to establish communication between flex and python code build on Google App Engine
854,403
0
0
1,782
0
python,apache-flex,google-app-engine
Do a HTTP post from Flex to your AppEngine app using the URLRequest class.
0
1
1
0
2009-05-12T19:11:00.000
2
0
false
854,353
0
0
1
1
I want to communicate using flex client with GAE, I am able to communicate using XMl from GAE to FLex but how should I post from flex3 to python code present on App Engine. Can anyone give me a hint about how to send login information from Flex to python Any ideas suggest me some examples.....please provide me some help Regards, Radhika
Running django on OSX
856,033
5
1
1,602
0
python,django,macos
Unless you are planning on going to production with OS X you might not want to bother. If you must do it, go straight to mod_wsgi. Don't bother with mod_python or older solutions. I did mod_python on Apache and while it runs great now, it took countless hours to set up. Also, just to clarify something based on what you said: You're not going to find a mapping between the url path (like /polls) and a script that is being called. Django doesn't work like that. With Django your application is loaded into memory waiting for requests. Once a request comes in it gets dispatched through the url map that you created in urls.py. That boils down to a function call somewhere in your code. That's why for a webserver like Apache you need a module like mod_wsgi, which gives your app a spot in memory in which to live. Compare that with something like CGI where the webserver executes a specific script on demand at a location that is physically mapped between the url and the filesystem. I hope that's helpful and not telling you something you already knew. :)
0
1
0
0
2009-05-12T23:29:00.000
4
0.244919
false
855,408
0
0
1
2
I've just completed the very very nice django tutorial and it all went swimmingly. One of the first parts of the tutorial is that it says not to use their example server thingie in production, my first act after the tutorial was thus to try to run my app on apache. I'm running OSX 10.5 and have the standard apache (which refuses to run python) and MAMP (which begrudgingly allows it in cgi-bin). The problem is that I've no idea which script to call, in the tutorial it was always localhost:8000/polls but I've no idea how that's meant to map to a specific file. Have I missed something blatantly obvious about what to do with a .htaccess file or does the tutorial not actually explain how to use it somewhere else?
Running django on OSX
856,986
2
1
1,602
0
python,django,macos
Yet another option is to consider using a virtual machine for your development. You can install a full version of whatever OS your production server will be running - say, Debian - and run your Apache and DB in the VM. You can connect to the virtual disk in the Finder, so you can still use TextMate (or whatever) on OSX to do your editing. I've had good experiences doing this via VMWare Fusion.
0
1
0
0
2009-05-12T23:29:00.000
4
0.099668
false
855,408
0
0
1
2
I've just completed the very very nice django tutorial and it all went swimmingly. One of the first parts of the tutorial is that it says not to use their example server thingie in production, my first act after the tutorial was thus to try to run my app on apache. I'm running OSX 10.5 and have the standard apache (which refuses to run python) and MAMP (which begrudgingly allows it in cgi-bin). The problem is that I've no idea which script to call, in the tutorial it was always localhost:8000/polls but I've no idea how that's meant to map to a specific file. Have I missed something blatantly obvious about what to do with a .htaccess file or does the tutorial not actually explain how to use it somewhere else?
Python: Plugging wx.py.shell.Shell into a separate process
892,646
1
1
1,527
0
python,shell,wxpython,multiprocessing
First create the shell Decouple the shell from your app by making its locals empty Create your code string Compile the code string and get a code object Execute the code object in the shell from wx.py.shell import Shell frm = wx.Frame(None) sh = Shell(frm) frm.Show() sh.interp.locals = {} codeStr = """ from multiprocessing import Process, Queue def f(q): q.put([42, None, 'hello']) q = Queue() p = Process(target=f, args=(q,)) p.start() print q.get() # prints "[42, None, 'hello']" p.join() """ code = compile(codeStr, '', 'exec') sh.interp.runcode(code) Note: The codeStr I stole from the first poster may not work here due to some pickling issues. But the point is you can execute your own codeStr remotely in a shell.
0
1
0
0
2009-05-14T18:56:00.000
2
1.2
true
865,082
1
0
0
1
I would like to create a shell which will control a separate process that I created with the multiprocessing module. Possible? How? EDIT: I have already achieved a way to send commands to the secondary process: I created a code.InteractiveConsole in that process, and attached it to an input queue and an output queue, so I can command the console from my main process. But I want it in a shell, probably a wx.py.shell.Shell, so a user of the program could use it.
Creating a new terminal/shell window to simply display text
866,750
0
1
1,202
0
python,shell
You say "pipe" so I assume you're dealing with text output from the subprocesses. A simple solution may be to just write output to files? e.g. in the subprocess: Redirect output %TEMP%\output.txt On exit, copy output.txt to a directory your main process is watching. In the main process: Every second, examine directory for new files. When files found, process and remove them. You could encode the subprocess name in the output filename so you know how to process it.
0
1
0
0
2009-05-15T02:16:00.000
3
0
false
866,737
0
0
0
1
I want to pipe [edit: real-time text] the output of several subprocesses (sometimes chained, sometimes parallel) to a single terminal/tty window that is not the active python shell (be it an IDE, command-line, or a running script using tkinter). IPython is not an option. I need something that comes with the standard install. Prefer OS-agnostic solution, but needs to work on XP/Vista. I'll post what I've tried already if you want it, but it’s embarrassing.
Python program using os.pipe and os.fork() issue
871,515
6
13
16,065
0
python,pipe,fork
Using fcntl.fcntl(readPipe, fcntl.F_SETFL, os.O_NONBLOCK) Before invoking the read() solved both problems. The read() call is no longer blocking and the data is appearing after just a flush() on the writing end.
0
1
0
0
2009-05-16T01:36:00.000
4
1
false
871,447
0
0
0
1
I've recently needed to write a script that performs an os.fork() to split into two processes. The child process becomes a server process and passes data back to the parent process using a pipe created with os.pipe(). The child closes the 'r' end of the pipe and the parent closes the 'w' end of the pipe, as usual. I convert the returns from pipe() into file objects with os.fdopen. The problem I'm having is this: The process successfully forks, and the child becomes a server. Everything works great and the child dutifully writes data to the open 'w' end of the pipe. Unfortunately the parent end of the pipe does two strange things: A) It blocks on the read() operation on the 'r' end of the pipe. Secondly, it fails to read any data that was put on the pipe unless the 'w' end is entirely closed. I immediately thought that buffering was the problem and added pipe.flush() calls, but these didn't help. Can anyone shed some light on why the data doesn't appear until the writing end is fully closed? And is there a strategy to make the read() call non blocking? This is my first Python program that forked or used pipes, so forgive me if I've made a simple mistake.
How do you pass script arguments to pdb (Python)?
67,887,744
1
22
12,717
0
python,debugging,arguments,pdb
python3 -m pdb myscript.py -a val if using argparse with flag "a" and value "val"
0
1
0
0
2009-05-16T19:17:00.000
4
0.049958
false
873,089
1
0
0
1
I've got python script (ala #! /usr/bin/python) and I want to debug it with pdb. How can I pass arguments to the script? I have a python script and would like to debug it with pdb. Is there a way that I can pass arguments to the scripts?
Admin privileges for script
874,519
7
2
2,996
0
python,unix,root,sudo
The concept of "admin-privileges" in our day of fine grained privilege control is becoming hard to define. If you are running on unix with "traditional" access control model, getting the effective user id (available in os module) and checking that against root (0) could be what you are looking for. If you know accessing a file on the system requires the privileges you want your script to have, you can use the os.access() to check if you are privileged enough. Unfortunately there is no easy nor portable method to give. You need to find out or define the security model used, what system provided APIs are available to query and set privileges and try to locate (or possibly implement yourself) the appropriate python modules that can be used to access the API. The classic question, why do you need to find out? What if your script tries to do what it needs to do and "just" catches and properly handles failures?
0
1
0
1
2009-05-17T12:09:00.000
3
1
false
874,476
0
0
0
1
how can i check admin-privileges for my script during running?
How do I get 'real-time' information back from a subprocess.Popen in python (2.5)
2,328,115
1
47
13,436
0
python,subprocess,stdout,popen
I've been running into this problem as well. The problem occurs because you are trying to read stderr as well. If there are no errors, then trying to read from stderr would block. On Windows, there is no easy way to poll() file descriptors (only Winsock sockets). So a solution is not to try and read from stderr.
0
1
0
0
2009-05-17T15:20:00.000
10
0.019997
false
874,815
0
0
0
2
I'd like to use the subprocess module in the following way: create a new process that potentially takes a long time to execute. capture stdout (or stderr, or potentially both, either together or separately) Process data from the subprocess as it comes in, perhaps firing events on every line received (in wxPython say) or simply printing them out for now. I've created processes with Popen, but if I use communicate() the data comes at me all at once, once the process has terminated. If I create a separate thread that does a blocking readline() of myprocess.stdout (using stdout = subprocess.PIPE) I don't get any lines with this method either, until the process terminates. (no matter what I set as bufsize) Is there a way to deal with this that isn't horrendous, and works well on multiple platforms?
How do I get 'real-time' information back from a subprocess.Popen in python (2.5)
874,865
7
47
13,436
0
python,subprocess,stdout,popen
stdout will be buffered - so you won't get anything till that buffer is filled, or the subprocess exits. You can try flushing stdout from the sub-process, or using stderr, or changing stdout on non-buffered mode.
0
1
0
0
2009-05-17T15:20:00.000
10
1
false
874,815
0
0
0
2
I'd like to use the subprocess module in the following way: create a new process that potentially takes a long time to execute. capture stdout (or stderr, or potentially both, either together or separately) Process data from the subprocess as it comes in, perhaps firing events on every line received (in wxPython say) or simply printing them out for now. I've created processes with Popen, but if I use communicate() the data comes at me all at once, once the process has terminated. If I create a separate thread that does a blocking readline() of myprocess.stdout (using stdout = subprocess.PIPE) I don't get any lines with this method either, until the process terminates. (no matter what I set as bufsize) Is there a way to deal with this that isn't horrendous, and works well on multiple platforms?
How can I create an local webserver for my python scripts?
22,310,248
2
18
25,839
0
python,webserver,simplehttpserver
Best way is to make your own local server by using command prompt. Make a new folder say Project Make a new folder inside project & name it as "cgi-bin"(without quotes) Paste your .py file inside the cgi-bin folder Open cmd and change to the directory from which you want to run the server and type "python -m CGIHTTPServer"(without quotes) Minimize the cmd window & open your browser and type "localhost:8000/cgi-bin/yourpythonfilename.py"(without quotes).
0
1
0
0
2009-05-18T10:13:00.000
3
0.132549
false
877,033
0
0
0
1
I'm looking to use a local webserver to run a series of python scripts for the user. For various unavoidable reasons, the python script must run locally, not on a server. As a result, I'll be using HTML+browser as the UI, which I'm comfortable with, for the front end. I've been looking, therefore, for a lightweight web server that can execute python scripts, sitting in the background on a machine, ideally as a Windows service. Security and extensibility are not high priorities as it's all running internally on a small network. Should I run a native python webserver as a Windows service (in which case, how)? Or is it just as easy to install Apache onto the user's machine and run as CGI? Since this is all local, performance is not an issue either. Or am I missing something obvious?
How to redirect the output of .exe to a file in python?
880,929
10
18
15,233
0
python,redirect,io
Easiest is os.system("the.exe -a >thefile.txt"), but there are many other ways, for example with the subprocess module in the standard library.
0
1
0
0
2009-05-19T04:03:00.000
5
1
false
880,918
1
0
0
1
In a script, I want to run a .exe with some command line parameters, such as "-a", and then redirect the standard output of the program to a file. How can I implement that?
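Besides the os.system one-liner suggested above, the subprocess module can do the same redirection without involving the shell. A sketch, reusing the question's hypothetical the.exe and -a flag:

    import subprocess

    out = open('thefile.txt', 'w')
    try:
        subprocess.call(['the.exe', '-a'], stdout=out)   # stdout goes straight to the file
    finally:
        out.close()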
How to hide "cgi-bin", ".py", etc from my URLs?
882,444
3
13
10,240
0
python,cgi
Just use a good web framework, e.g. Django, and you can have such clean URLs. More than just URLs, you will get better infrastructure: templates, a DB ORM, etc.
0
1
0
1
2009-05-19T12:24:00.000
6
0.099668
false
882,430
0
0
0
1
Brand new to web design, using python. Got Apache up and running, test python script working in cgi-bin directory. Get valid results when I type in the URL explicitly: ".../cgi-bin/showenv.py" But I don't want the URL to look that way. Here at stackoverflow, for example, the URLs that display in my address bar never have the messy details showing the script that was used to run them. They're clean of cgi-bin, .py, etc. extensions. How do I do that? EDIT: Thanks for responses, every single one helpful, lots to learn. I'm going with URL Rewriting for now; example in the docs looks extremely close to what I actually want to do. But I'm committed to python, so will have to look at WSGI down the road.
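For a feel of the framework route suggested above, this is roughly what the URL mapping looks like in an old-style Django urls.py; the view name showenv is a hypothetical stand-in for the question's script, and with this in place neither "cgi-bin" nor ".py" appears in the address bar:

    from django.conf.urls.defaults import patterns

    urlpatterns = patterns('',
        (r'^env/$', 'myapp.views.showenv'),   # /env/ runs a plain Python view function
    )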
Django without shell access
886,561
1
0
702
0
python,django,shell
It is possible. Usually you will develop your application locally (where shell access is nice to have) and publish your work to your server. All you need for this is FTP access and some way to import a database dump from your development database (often hosters provide an installation of phpMyAdmin for this). As for "python (I assume via mod_python)": from my experience, you are most certainly wrong with that assumption. Many low-cost providers claim to support Python but in fact provide only an outdated version that can be used with CGI scripts. This setup will have pretty low performance for Django apps.
0
1
0
0
2009-05-20T07:07:00.000
2
0.099668
false
886,526
0
0
1
1
Is it possible to run django without shell access? My hoster supports the following for 5€/month: python (I assume via mod_python) mysql There is no shell nor cronjob support, which costs additional 10€/month, so I'm trying to avoid it. I know that Google Apps also work without shell access, but I assume that is possible because of their special configuration.
Safe Python Environment in Linux
5,289,924
1
2
920
0
python,linux,runtime,sandbox,restriction
Could you not just run it as a user which has no access to anything but the scripts in that directory?
0
1
0
0
2009-05-20T08:50:00.000
5
0.039979
false
886,895
1
0
0
4
Is it possible to create an environment to safely run arbitrary Python scripts under Linux? Those scripts are supposed to be received from untrusted people and may be too large to check them manually. A very brute-force solution is to create a virtual machine and restore its initial state after every launch of an untrusted script. (Too expensive.) I wonder if it's possible to restrict Python from accessing the file system and interacting with other programs and so on.
Safe Python Environment in Linux
887,091
2
2
920
0
python,linux,runtime,sandbox,restriction
You could run Jython and use the sandboxing mechanism from the JVM. The sandboxing in the JVM is very strong, very well understood and more or less well documented. It will take some time to define exactly what you want to allow and what you don't want to allow, but you should be able to get very strong security from that. On the other hand, Jython is not 100% compatible with CPython.
0
1
0
0
2009-05-20T08:50:00.000
5
0.07983
false
886,895
1
0
0
4
Is it possible to create an environment to safely run arbitrary Python scripts under Linux? Those scripts are supposed to be received from untrusted people and may be too large to check them manually. A very brute-force solution is to create a virtual machine and restore its initial state after every launch of an untrusted script. (Too expensive.) I wonder if it's possible to restrict Python from accessing the file system and interacting with other programs and so on.
Safe Python Environment in Linux
887,104
4
2
920
0
python,linux,runtime,sandbox,restriction
There are 4 things you may try: As you already mentioned, using a virtual machine or some other form of virtualisation (perhaps Solaris zones are lightweight enough?). If the script breaks the OS there then you don't care. Using chroot, which puts a shell session into a virtual root directory, separate from the main OS root directory. Using systrace. Think of this as a firewall for system calls. Using a "jail", which builds upon systrace, giving each jail its own process table etc. Systrace has been compromised recently, so be aware of that.
0
1
0
0
2009-05-20T08:50:00.000
5
0.158649
false
886,895
1
0
0
4
Is it possible to create an environment to safely run arbitrary Python scripts under Linux? Those scripts are supposed to be received from untrusted people and may be too large to check them manually. A very brute-force solution is to create a virtual machine and restore its initial state after every launch of an untrusted script. (Too expensive.) I wonder if it's possible to restrict Python from accessing the file system and interacting with other programs and so on.
Safe Python Environment in Linux
886,945
4
2
920
0
python,linux,runtime,sandbox,restriction
Consider using a chroot jail. Not only is this very secure, well-supported and tested but it also applies to external applications you run from python.
0
1
0
0
2009-05-20T08:50:00.000
5
1.2
true
886,895
1
0
0
4
Is it possible to create an environment to safely run arbitrary Python scripts under Linux? Those scripts are supposed to be received from untrusted people and may be too large to check them manually. A very brute-force solution is to create a virtual machine and restore its initial state after every launch of an untrusted script. (Too expensive.) I wonder if it's possible to restrict Python from accessing the file system and interacting with other programs and so on.
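A rough sketch of the chroot-jail idea from the accepted answer above, combined with privilege dropping. The sandbox path, uid/gid and script name are hypothetical, the process must be started as root, and the jail directory must already contain a usable Python installation:

    import os

    os.chroot('/var/sandbox')      # the script can no longer see the real filesystem
    os.chdir('/')
    os.setgid(65534)               # drop to an unprivileged group (e.g. nogroup)
    os.setuid(65534)               # drop to an unprivileged user (e.g. nobody)
    os.execv('/usr/bin/python', ['python', '/untrusted.py'])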
Change file creation date
887,652
2
15
23,873
0
python,linux,file,date
I am not a UNIX expert, so maybe I'm wrong, but I think that UNIX (or Linux) doesn't store the file creation time.
0
1
0
0
2009-05-20T12:12:00.000
5
0.07983
false
887,557
0
0
0
1
Can I change creation date of some file using Python in Linux?
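Since Linux keeps no creation time (as noted above), the closest you can get from Python is changing the access and modification times with os.utime; the path and timestamp below are placeholders:

    import os, time

    t = time.mktime((2009, 5, 20, 12, 0, 0, 0, 0, -1))   # 2009-05-20 12:00, local time
    os.utime('/tmp/somefile', (t, t))                     # sets (atime, mtime)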
Install older versions of Python for testing on Mac OS X
891,966
0
2
5,880
0
python,version,installation
You can also use the Fink package manager and simply run "fink install python2.3". If you need Python 2.3 to be your default, you can simply change /sw/bin/python and /sw/bin/pydoc to point to the version you want (they sit in /sw/bin/).
0
1
0
0
2009-05-21T00:00:00.000
4
0
false
890,827
1
0
0
2
I have Mac OS X 10.5.7 with Python 2.5. I need to test a package I am working on with Python 2.3 for compatibility. I don't want to downgrade my whole system so is there a way to do an install of Python 2.3 that does not change the system python?
Install older versions of Python for testing on Mac OS X
892,087
0
2
5,880
0
python,version,installation
One alternative is to use a virtual machine. With something like VMWare Fusion or Virtualbox you could install a complete Linux system with Python2.3, and do your testing there. The advantage is that it would be completely sand-boxed, so wouldn't affect your main system at all.
0
1
0
0
2009-05-21T00:00:00.000
4
0
false
890,827
1
0
0
2
I have Mac OS X 10.5.7 with Python 2.5. I need to test a package I am working on with Python 2.3 for compatibility. I don't want to downgrade my whole system so is there a way to do an install of Python 2.3 that does not change the system python?
Send emails from Google App Engine
892,427
2
0
1,502
0
python,django,google-app-engine,mail-server
The example code for the remote API gives you an interactive console from which you can access any of the modules in your application. I see no requirement that they be only datastore operations.
0
1
0
0
2009-05-21T10:31:00.000
2
0.197375
false
892,266
0
0
1
1
I have a web server with Django, hosted with Apache. I would like to configure Google App Engine as the email server. My web server should be able to use Google App Engine whenever it needs to send email using the EmailMessage or sendmail infrastructure of the Google Mail API. I learnt that by using the Remote API, I can access the Google App Engine server from my main web server. However, I could not access the Mail APIs supported by Google App Engine. Is the Remote API strictly for the Datastore? If so, can only the DB be read through it, with no other API calls possible?
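For reference, the in-App-Engine side of the Mail API call looks roughly like this (the addresses are placeholders, and the sender must be an address the app is allowed to send from); whether this can be driven from an external Django server through the remote API is exactly what the question asks, so treat it only as the call you would expose there:

    from google.appengine.api import mail

    mail.send_mail(sender='an.admin@example.com',     # must be an app administrator's address
                   to='user@example.com',
                   subject='Hello',
                   body='Sent via the App Engine Mail API.')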
How can I launch a python script on windows?
894,869
5
1
196
0
python
If you're looking to do recurring scheduled tasks, then the Task Scheduler (Vista) or Scheduled Tasks (XP and, I think, earlier) is the appropriate method on Windows.
0
1
0
0
2009-05-21T20:05:00.000
2
1.2
true
894,845
0
0
0
1
I have run a few using batch jobs, but I am wondering what would be the most appropriate? Maybe using time.strftime?
os.system() execute command under which linux shell?
905,294
5
9
35,436
0
python,linux,shell
os.system() just calls the system() system call ("man 3 system"). On most *nixes this means you get /bin/sh. Note that export VAR=val is technically not standard syntax (though bash understands it, and I think ksh does too). It will not work on systems where /bin/sh is actually the Bourne shell. On those systems you need to export and set as separate commands. (This will work with bash too.)
0
1
0
0
2009-05-25T03:30:00.000
4
1.2
true
905,221
0
0
0
1
I am using /bin/tcsh as my default shell. However, the tcsh-style command os.system('setenv VAR val') doesn't work for me, but os.system('export VAR=val') works. So my question is: how can I know which shell os.system() runs its command under?
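As the answer above says, os.system always goes through /bin/sh. If you really want tcsh syntax such as setenv, one option (a sketch, not the only way) is to name the shell explicitly via subprocess:

    import subprocess

    # Run the command string under tcsh instead of the default /bin/sh
    subprocess.call(['/bin/tcsh', '-c', 'setenv VAR val; echo $VAR'])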
show lyrics on ubuntu
1,450,802
2
0
1,143
0
python,plugins,gnome,rhythmbox
You can't import rhythmbox "built-in" modules from a standard python console. As far as I know they aren't real modules, they are just objects from the rhythmbox process exposed to plugins. So you can access them only if you are running your script from the rhythmbox process.
0
1
0
0
2009-05-25T12:16:00.000
2
0.197375
false
906,509
0
0
0
2
I'm writing a little script for Ubuntu. My intention is to call rhythmbox lyrics plug-in with a global short-cut (configuring gnome) . I can call it from rhythmbox python console, but I don't know how to import rhythmbox built-in modules (eg. rhythmdb). Any ideas?
show lyrics on ubuntu
4,594,514
0
0
1,143
0
python,plugins,gnome,rhythmbox
In this case I guess you'll have to write the whole plugin yourself, and then listen on D-Bus for song changes in Rhythmbox to detect which song is being played.
0
1
0
0
2009-05-25T12:16:00.000
2
0
false
906,509
0
0
0
2
I'm writing a little script for Ubuntu. My intention is to call rhythmbox lyrics plug-in with a global short-cut (configuring gnome) . I can call it from rhythmbox python console, but I don't know how to import rhythmbox built-in modules (eg. rhythmdb). Any ideas?
Making a Python script executable chmod755?
907,605
0
4
3,729
0
python,hosting
In addition to the other fine answers here, you should be aware that most FTP clients have a chmod command to allow you to set permissions on files at the server. You may not need this if permissions come across properly, but there's a good chance they do not.
0
1
0
0
2009-05-25T18:05:00.000
5
0
false
907,579
0
0
0
1
My hosting provider says my python script must be made to be executable (chmod 755). What does this mean & how do I do it? Cheers!
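If the FTP-client route mentioned above is not available, the same permission change can be done from Python itself on any file you can reach on the host; the file name here is a placeholder:

    import os, stat

    # 0755: owner read/write/execute, group and others read/execute
    os.chmod('script.py',
             stat.S_IRWXU | stat.S_IRGRP | stat.S_IXGRP | stat.S_IROTH | stat.S_IXOTH)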
Reinstall /Library/Python on OS X Leopard
917,897
1
0
2,889
0
python,macos,osx-leopard,reinstall
/Library/Python contains your python site-packages, which is the local software you've installed using commands like python setup.py install. The pieces here are third-party packages, not items installed by Apple - your actual Python installation is still safe in /System/Library/etc... In other words, the default OS leaves these directories mostly blank... nothing in there is critical (just a readme and a path file). In this case, you'll have to: (1) recreate the directory structure, and (2) re-install your third-party libraries. The directory structure on a default OS X install is: /Library/Python/2.3/site-packages and /Library/Python/2.5/site-packages
0
1
0
0
2009-05-27T20:31:00.000
3
1.2
true
917,876
1
0
0
2
I accidentally removed /Library/Python on OS X Leopard. How can I reinstall that?
Reinstall /Library/Python on OS X Leopard
917,890
1
0
2,889
0
python,macos,osx-leopard,reinstall
If you'd like, I'll create a tarball from a pristine installation. I'm using MacOSX 10.5.7, and it's only 12K.
0
1
0
0
2009-05-27T20:31:00.000
3
0.066568
false
917,876
1
0
0
2
I accidentally removed /Library/Python on OS X Leopard. How can I reinstall that?
Writing a kernel mode profiler for processes in python
922,814
7
2
2,476
0
python,kernel
It's going to be very difficult to do the process monitoring part in Python, since the python interpreter doesn't run in the kernel. I suspect there are two easy approaches to this: use the /proc filesystem if you have one (you don't mention your OS) Use dtrace if you have dtrace (again, without the OS, who knows.) Okay, following up after the edit. First, there's no way you're going to be able to write code that runs in the kernel, in python, and is portable between Linux and Windows. Or at least if you were to, it would be a hack that would live in glory forever. That said, though, if your purpose is to process Python, there are a lot of Python tools available to get information from the Python interpreter at run time. If instead your desire is to get process information from other processes in general, you're going to need to examine the options available to you in the various OS APIs. Linux has a /proc filesystem; that's a useful start. I suspect Windows has similar APIs, but I don't know them. If you have to write kernel code, you'll almost certainly need to write it in C or C++.
0
1
0
0
2009-05-28T19:37:00.000
4
1.2
true
922,788
1
0
0
1
I would like to seek some guidance in writing a "process profiler" which runs in kernel mode. The reason I am asking for a kernel mode profiler is that I run loads of applications and I do not want my profiler to be swapped out. By "process profiler" I mean something that would monitor resource usage by the process, including usage of threads and their statistics. And I wish to write this in python. Please point me to some modules or helpful resources and provide guidance/suggestions for doing it. Thanks. Edit: I would like to add that currently my interest is to write this only for Linux; however, after I build it I will have to support Windows.
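To make the /proc suggestion above concrete, here is a small user-space sketch (not kernel code) that samples per-process data on Linux; the pid used at the end is just an example:

    def read_proc_status(pid):
        """Return the fields of /proc/<pid>/status as a dict of strings."""
        info = {}
        for line in open('/proc/%d/status' % pid):
            key, _, value = line.partition(':')
            info[key] = value.strip()
        return info

    print read_proc_status(1).get('VmRSS')     # resident memory of pid 1
    print read_proc_status(1).get('Threads')   # its thread count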
Python Environment Variables in Windows?
2,318,893
0
0
1,306
0
python,windows,testing,environment-variables
You cannot use environment variables in this way. As you have discovered, they are not persistent after the setting application completes.
0
1
0
0
2009-05-28T22:48:00.000
3
0
false
923,586
1
0
0
1
I'm developing a script that runs a program with other scripts over and over for testing purposes. How it currently works is I have one Python script which I launch. That script calls the program and loads the other scripts. It kills the program after 60 seconds to launch the program again with the next script. For some scripts, 60 seconds is too long, so I was wondering if I am able to set a FLAG variable (not in the main script), such that when the script finishes, it sets FLAG, so the main script can read FLAG and kill the process? Thanks for the help, my writing may be confusing, so please let me know if you cannot fully understand.
How to start a process on a remote server, disconnect, then later collect output?
923,720
0
2
2,588
0
python,testing,scripting
Most commercial products install an "Agent" on the remote machines. In the linux world, you have numerous such agents. rexec and rlogin and rsh all jump to mind. These are all clients that communicate with daemons running on the remote hosts. If you don't want to use these agents, you can read about them and reinvent these wheels in pure Python. Essentially, the client (rexec for example) communicates with the server (rexecd) to send work requests.
0
1
0
1
2009-05-28T23:25:00.000
8
0
false
923,691
0
0
0
4
I am writing automation code in python to test the behavior of a network application. As such, my code needs to be able to start a process/script (say, tcpdump or a python script) on a server in the network, disconnect, run other processes and then later return and shutdown/evaluate the process started earlier. My network is a mix of windows and linux machines and all of the machines have sshd and python running (via Cygwin for the windows machines). I've considered a couple of ideas, namely: - Starting a process and moving it to the background via a trailing ampersand (&) - Using screen in some fashion - Using python threads What else should I be considering? In your experience what have you found to be the best way to accomplish a task like this?
How to start a process on a remote server, disconnect, then later collect output?
923,719
0
2
2,588
0
python,testing,scripting
As @Gandalf mentions, you'll need nohup in addition to the backgrounding &, or the process will be sent SIGHUP (and by default killed) when the login session terminates. If you redirect your output to a log file, you'll be able to look at it later easily (and not have to install screen on all your machines).
0
1
0
1
2009-05-28T23:25:00.000
8
0
false
923,691
0
0
0
4
I am writing automation code in python to test the behavior of a network application. As such, my code needs to be able to start a process/script (say, tcpdump or a python script) on a server in the network, disconnect, run other processes and then later return and shutdown/evaluate the process started earlier. My network is a mix of windows and linux machines and all of the machines have sshd and python running (via Cygwin for the windows machines). I've considered a couple of ideas, namely: - Starting a process and moving it to the background via a trailing ampersand (&) - Using screen in some fashion - Using python threads What else should I be considering? In your experience what have you found to be the best way to accomplish a task like this?
How to start a process on a remote server, disconnect, then later collect output?
923,703
3
2
2,588
0
python,testing,scripting
nohup for starters (at least on *nix boxes) - and redirect the output to some log file where you can come back and monitor it of course.
0
1
0
1
2009-05-28T23:25:00.000
8
0.07486
false
923,691
0
0
0
4
I am writing automation code in python to test the behavior of a network application. As such, my code needs to be able to start a process/script (say, tcpdump or a python script) on a server in the network, disconnect, run other processes and then later return and shutdown/evaluate the process started earlier. My network is a mix of windows and linux machines and all of the machines have sshd and python running (via Cygwin for the windows machines). I've considered a couple of ideas, namely: - Starting a process and moving it to the background via a trailing ampersand (&) - Using screen in some fashion - Using python threads What else should I be considering? In your experience what have you found to be the best way to accomplish a task like this?
How to start a process on a remote server, disconnect, then later collect output?
20,889,031
0
2
2,588
0
python,testing,scripting
If you are using python to run the automation... I would attempt to automate everything using paramiko. It's a versatile ssh library for python. Instead of going back to the output, you could collect multiple lines of output live and then disconnect when you no longer need the process and let ssh do the killing for you.
0
1
0
1
2009-05-28T23:25:00.000
8
0
false
923,691
0
0
0
4
I am writing automation code in python to test the behavior of a network application. As such, my code needs to be able to start a process/script (say, tcpdump or a python script) on a server in the network, disconnect, run other processes and then later return and shutdown/evaluate the process started earlier. My network is a mix of windows and linux machines and all of the machines have sshd and python running (via Cygwin for the windows machines). I've considered a couple of ideas, namely: - Starting a process and moving it to the background via a trailing ampersand (&) - Using screen in some fashion - Using python threads What else should I be considering? In your experience what have you found to be the best way to accomplish a task like this?
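A small sketch of the paramiko approach mentioned above; the host name, credentials and command are hypothetical placeholders:

    import paramiko

    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())   # accept unknown hosts (test lab only)
    client.connect('testbox.example.com', username='tester', password='secret')
    stdin, stdout, stderr = client.exec_command('uptime')          # run a remote command
    print stdout.read()                                            # collect its output
    client.close()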
Fedora Python Upgrade broke easy_install
926,006
2
0
1,989
0
python,fedora,easy-install
I suggest you create a virtualenv (or several) for installing packages into.
0
1
0
1
2009-05-29T13:29:00.000
3
0.132549
false
925,965
0
0
0
2
Fedora Core 9 includes Python 2.5.1. I can use YUM to get latest and greatest releases. To get ready for 2.6 official testing, I wanted to start with 2.5.4. It appears that there's no Fedora 9 YUM package, because 2.5.4 isn't an official part of FC9. I downloaded 2.5.4, did ./configure; make; make install and wound up with two Pythons. The official 2.5.1 (in /usr/bin) and the new 2.5.4. (in /usr/local/bin). None of my technology stack is installed in /usr/local/lib/python2.5. It appears that I have several choices for going forward. Anyone have any preferences? Copy /usr/lib/python2.5/* to /usr/local/lib/python2.5 to replicate my environment. This should work, unless some part of the Python libraries have /usr/bin/python wired in during installation. This is sure simple, but is there a down side? Reinstall everything by running easy_install. Except, easy_install is (currently) hard-wired to /usr/bin/python. So, I'd have to fix easy_install first, then reinstall everything. This takes some time, but it gives me a clean, new latest-and-greatest environment. But is there a down-side? [And why does easy_install hard-wire itself?] Relink /usr/bin/python to be /usr/local/bin/python. I'd still have to copy or reinstall the library, so I don't think this does me any good. [It would make easy_install work; but so would editing /usr/bin/easy_install.] Has anyone copied their library? Is it that simple? Or should I fix easy_install and simply step through the installation guide and build a new, clean, latest-and-greatest? Edit Or, should I Skip trying to resolve the 2.5.1 and 2.5.4 issues and just jump straight to 2.6?
Fedora Python Upgrade broke easy_install
926,636
2
0
1,989
0
python,fedora,easy-install
I've had similar experiences and issues when installing Python 2.5 on an older release of ubuntu that supplied 2.4 out of the box. I first tried to patch easy_install, but this led to problems with anything that wanted to use the os-supplied version of python. I was often fiddling with the tool chain to fix different errors that might crop up with every install. Installing any python software via apt, or installing any software from apt that had a python easy_install script as part of the install, was often amusing. I'm sure I could probably have been more vigilant in patching easy_install, but I gave up. Instead, I copied the library, and everything worked. As you say, there may be issues depending on what you have installed, but I didn't run into issues. Double-checking Python's site.py module, I did see that it operates entirely on relative paths, building absolute paths dynamically; this gave me some confidence to try the "copy everything" approach. I double-checked any .pth files, then went for it.
0
1
0
1
2009-05-29T13:29:00.000
3
0.132549
false
925,965
0
0
0
2
Fedora Core 9 includes Python 2.5.1. I can use YUM to get latest and greatest releases. To get ready for 2.6 official testing, I wanted to start with 2.5.4. It appears that there's no Fedora 9 YUM package, because 2.5.4 isn't an official part of FC9. I downloaded 2.5.4, did ./configure; make; make install and wound up with two Pythons. The official 2.5.1 (in /usr/bin) and the new 2.5.4. (in /usr/local/bin). None of my technology stack is installed in /usr/local/lib/python2.5. It appears that I have several choices for going forward. Anyone have any preferences? Copy /usr/lib/python2.5/* to /usr/local/lib/python2.5 to replicate my environment. This should work, unless some part of the Python libraries have /usr/bin/python wired in during installation. This is sure simple, but is there a down side? Reinstall everything by running easy_install. Except, easy_install is (currently) hard-wired to /usr/bin/python. So, I'd have to fix easy_install first, then reinstall everything. This takes some time, but it gives me a clean, new latest-and-greatest environment. But is there a down-side? [And why does easy_install hard-wire itself?] Relink /usr/bin/python to be /usr/local/bin/python. I'd still have to copy or reinstall the library, so I don't think this does me any good. [It would make easy_install work; but so would editing /usr/bin/easy_install.] Has anyone copied their library? Is it that simple? Or should I fix easy_install and simply step through the installation guide and build a new, clean, latest-and-greatest? Edit Or, should I Skip trying to resolve the 2.5.1 and 2.5.4 issues and just jump straight to 2.6?
How to get the owner and group of a folder with Python on a Linux machine?
927,888
0
25
26,467
0
python,linux,directory,owner
Use the os.stat function.
0
1
0
1
2009-05-29T20:04:00.000
6
0
false
927,866
0
0
0
2
How can I get the owner and group IDs of a directory using Python under Linux?
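Expanding on the os.stat answer above - the numeric ids live on the stat result, and the pwd/grp modules can turn them into names; the path is a placeholder:

    import os, pwd, grp

    st = os.stat('/some/directory')
    print st.st_uid, st.st_gid                 # numeric owner and group ids
    print pwd.getpwuid(st.st_uid).pw_name      # owner name
    print grp.getgrgid(st.st_gid).gr_name      # group name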
How to get the owner and group of a folder with Python on a Linux machine?
71,426,599
0
25
26,467
0
python,linux,directory,owner
If you are using Linux, it is much easier. Install tree with the command yum install tree. Then execute the command 'tree -a -u -g'
0
1
0
1
2009-05-29T20:04:00.000
6
0
false
927,866
0
0
0
2
How can I get the owner and group IDs of a directory using Python under Linux?
Why no pure Python SSH1 (version 1) client implementations?
936,816
1
4
1,865
0
python,ssh
Well, the main reason probably was that when people started getting interested in such things in VHLLs such as Python, it didn't make sense to them to implement a standard which they themselves would not find useful. I am not familiar with the protocol differences, but would it be possible for you to adapt an existing codebase to the older protocol?
0
1
0
1
2009-06-01T21:06:00.000
2
0.099668
false
936,783
0
0
0
2
There seem to be a few good pure Python SSH2 client implementations out there, but I haven't been able to find one for SSH1. Is there some specific reason for this other than lack of interest in such a project? I am fully aware of the many SSH1 vulnerabilities, but a pure Python SSH1 client implementation would still be very useful to those of us who want to write SSH clients to manage older embedded devices which only support SSH1 (Cisco PIX for example). I also know I'm not the only person looking for this. The reason I'm asking is because I'm bored, and I've been thinking about taking a stab at writing this myself. I've just been hesitant to start, since I know there are a lot of people out there who are much smarter than me, and I figured there might be some reason why nobody has done it yet.
Why no pure Python SSH1 (version 1) client implementations?
940,483
3
4
1,865
0
python,ssh
SSHv1 was considered deprecated in 2001, so I assume nobody really wanted to put the effort into it. I'm not sure if there's even an rfc for SSH1, so getting the full protocol spec may require reading through old source code. Since there are known vulnerabilities, it's not much better than telnet, which is almost universally supported on old and/or embedded devices.
0
1
0
1
2009-06-01T21:06:00.000
2
1.2
true
936,783
0
0
0
2
There seem to be a few good pure Python SSH2 client implementations out there, but I haven't been able to find one for SSH1. Is there some specific reason for this other than lack of interest in such a project? I am fully aware of the many SSH1 vulnerabilities, but a pure Python SSH1 client implementation would still be very useful to those of us who want to write SSH clients to manage older embedded devices which only support SSH1 (Cisco PIX for example). I also know I'm not the only person looking for this. The reason I'm asking is because I'm bored, and I've been thinking about taking a stab at writing this myself. I've just been hesitant to start, since I know there are a lot of people out there who are much smarter than me, and I figured there might be some reason why nobody has done it yet.
mounting an s3 bucket in ec2 and using transparently as a mnt point
6,308,720
0
4
8,589
1
python,django,amazon-s3,amazon-ec2
I'd suggest using a separately-mounted EBS volume. I tried doing the same thing for some movie files. Access to S3 was slow, and S3 has some limitations like not being able to rename files, no real directory structure, etc. You can set up EBS volumes in a RAID5 configuration and add space as you need it.
0
1
0
0
2009-06-05T16:39:00.000
5
0
false
956,904
0
0
1
1
I have a webapp (call it myapp.com) that allows users to upload files. The webapp will be deployed on an Amazon EC2 instance. I would like to serve these files back out to the webapp consumers via an s3-bucket-based domain (i.e. uploads.myapp.com). When the user uploads the files, I can easily drop them into a folder called "site_uploads" on the local ec2 instance. However, since my ec2 instance has finite storage, with a lot of uploads the ec2 file system will fill up quickly. It would be great if the ec2 instance could mount an s3 bucket as the "site_upload" directory, so that uploads to the EC2 "site_upload" directory automatically end up on uploads.myapp.com (and my webapp can use template tags to make sure the links for this uploaded content are based on that s3-backed domain). This also gives me scalable file serving, as requests for files hit s3 and not my ec2 instance. Also, it makes it easy for my webapp to perform scaling/resizing of the images that appear locally in "site_upload" but are actually on s3. I'm looking at s3fs, but judging from the comments, it doesn't look like a fully baked solution. I'm looking for a non-commercial solution. FYI, the webapp is written in django, not that that changes the particulars too much.
How can I create a variable that is scoped to a single request in app engine?
972,243
2
0
211
0
python,google-app-engine
If you're using the 'webapp' framework included with App Engine (or, actually, most other WSGI-based frameworks), a new RequestHandler is instantiated for each request. Thus, you can use instance attributes on your handler to store per-request data.
0
1
0
0
2009-06-08T00:08:00.000
4
0.099668
false
963,080
0
0
1
3
I'm creating a python app for google app engine and I've got a performance problem with some expensive operations that are repetitive within a single request. To help deal with this I'd like to create a sort of mini-cache that's scoped to a single request. This is as opposed to a session-wide or application-wide cache, neither of which would make sense for my particular problem. I thought I could just use a python global or module-level variable for this, but it turns out that those maintain their state between requests in non-obvious ways. I also don't think memcache makes sense because it's application wide. I haven't been able to find a good answer for this in google's docs. Maybe that's because it's either a dumb idea or totally obvious, but it seems like it'd be useful and I'm stumped. Anybody have any ideas?
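A sketch of the per-request idea from the first answer above, using the webapp framework; do_expensive_thing is a hypothetical helper standing in for the slow operation:

    from google.appengine.ext import webapp

    class MyHandler(webapp.RequestHandler):
        def get(self):
            self._cache = {}                        # exists only for this request
            value = self._cache.get('expensive')
            if value is None:
                value = self.do_expensive_thing()   # hypothetical costly operation
                self._cache['expensive'] = value
            self.response.out.write(str(value))

        def do_expensive_thing(self):
            return 42                               # placeholder for the real work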
How can I create a variable that is scoped to a single request in app engine?
963,706
0
0
211
0
python,google-app-engine
Use a local list to store data and do a model.put() at the end of your request processing; this saves multiple DB trips.
0
1
0
0
2009-06-08T00:08:00.000
4
0
false
963,080
0
0
1
3
I'm creating a python app for google app engine and I've got a performance problem with some expensive operations that are repetitive within a single request. To help deal with this I'd like to create a sort of mini-cache that's scoped to a single request. This is as opposed to a session-wide or application-wide cache, neither of which would make sense for my particular problem. I thought I could just use a python global or module-level variable for this, but it turns out that those maintain their state between requests in non-obvious ways. I also don't think memcache makes sense because it's application wide. I haven't been able to find a good answer for this in google's docs. Maybe that's because it's either a dumb idea or totally obvious, but it seems like it'd be useful and I'm stumped. Anybody have any ideas?
How can I create a variable that is scoped to a single request in app engine?
963,107
1
0
211
0
python,google-app-engine
Module variables may (or may not) persist between requests (the same app instance may or may not stay alive between requests), but you can explicitly clear them (del, or set to None, say) at the start of your handling a request, or when you know you're done with one. At worst (if your code is peculiarly organized) you need to set some function to always execute at every request start, or at every request end.
0
1
0
0
2009-06-08T00:08:00.000
4
0.049958
false
963,080
0
0
1
3
I'm creating a python app for google app engine and I've got a performance problem with some expensive operations that are repetitive within a single request. To help deal with this I'd like to create a sort of mini-cache that's scoped to a single request. This is as opposed to a session-wide or application-wide cache, neither of which would make sense for my particular problem. I thought I could just use a python global or module-level variable for this, but it turns out that those maintain their state between requests in non-obvious ways. I also don't think memcache makes sense because it's application wide. I haven't been able to find a good answer for this in google's docs. Maybe that's because it's either a dumb idea or totally obvious, but it seems like it'd be useful and I'm stumped. Anybody have any ideas?
python: find out if running in shell or not (e.g. sun grid engine queue)
967,383
6
7
1,762
0
python,shell,terminal,stdout
You can use os.getppid() to find out the process id for the parent-process of this one, and then use that process id to determine which program that process is running. More usefully, you could use sys.stdout.isatty() -- that doesn't answer your title question but appears to better solve the actual problem you explain (if you're running under a shell but your output is piped to some other process or redirected to a file you probably don't want to emit "interactive stuff" on it either).
0
1
0
0
2009-06-08T22:36:00.000
4
1
false
967,369
0
0
0
1
Is there a way to find out from within a Python program whether it was started in a terminal or, e.g., in a batch engine like Sun Grid Engine? The idea is to decide whether or not to print progress bars and other ASCII-interactive stuff. Thanks! p.
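A tiny sketch of the isatty() suggestion above:

    import sys

    if sys.stdout.isatty():
        print 'attached to a terminal - safe to draw progress bars'
    else:
        print 'redirected or running under a batch engine - keep the output plain'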
Eclipse + local CVS + PyDev
969,236
1
4
2,673
0
python,eclipse,cvs,pydev
I tried Eclipse+Subclipse and Eclipse+Bazaar plugin. Both work very well, but I have found that the Tortoise versions of those version control tools are so good that I gave up on the Eclipse plugins. On Windows, the Tortoise tools are my choice. They integrate with the shell (Explorer or TotalCommander), change the icon overlay when a file is changed, show logs, compare revisions, etc.
0
1
0
0
2009-06-09T09:43:00.000
8
0.024995
false
969,121
1
0
1
4
I tried several Python IDEs (on Windows platform) but finally I found only Eclipse + PyDev meeting my needs. This set of tools is really comfortable and easy to use. I'm currently working on a quite bigger project. I'd like to have a possibility to use CVS or any other version control system which would be installed on my local harddrive (I recently moved my house and don't have yet an access to internet.) It doesn't matter for me if it'd be CVS - can also be any other version control system. It'd be great if it will be not too hard to configure with Eclipse. Can anyone give me some possible solution? Any hints? Regards and thanks in advance for any clues. Please forgive my English ;)
Eclipse + local CVS + PyDev
969,642
4
4
2,673
0
python,eclipse,cvs,pydev
Last time I tried this, Eclipse did not support direct access to local repositories in the same way that command line cvs does because command line cvs has both client and server functionality whereas Eclipse only has client functionality and needs to go through (e.g.) pserver, so you would probably need to have a cvs server running. Turns out that I didn't really need it anyway as Eclipse keeps its own history of all changes so I only needed to do an occasional manual update to cvs at major milestones. [Eventually I decided not to use cvs at all with Eclipse under Linux as it got confused by symlinks and started deleting my include files when it "synchronised" with the repository.]
0
1
0
0
2009-06-09T09:43:00.000
8
1.2
true
969,121
1
0
1
4
I tried several Python IDEs (on Windows platform) but finally I found only Eclipse + PyDev meeting my needs. This set of tools is really comfortable and easy to use. I'm currently working on a quite bigger project. I'd like to have a possibility to use CVS or any other version control system which would be installed on my local harddrive (I recently moved my house and don't have yet an access to internet.) It doesn't matter for me if it'd be CVS - can also be any other version control system. It'd be great if it will be not too hard to configure with Eclipse. Can anyone give me some possible solution? Any hints? Regards and thanks in advance for any clues. Please forgive my English ;)
Eclipse + local CVS + PyDev
976,961
0
4
2,673
0
python,eclipse,cvs,pydev
Regarding "I recently moved my house and don't have yet an access to internet": CVS and SVN are centralized version control systems. Rather than having to install them on your local system just for single-user version control, you could use a DVCS like Mercurial or Git. When you clone a Mercurial repository, you literally have all versions of all the repo files available locally.
0
1
0
0
2009-06-09T09:43:00.000
8
0
false
969,121
1
0
1
4
I tried several Python IDEs (on Windows platform) but finally I found only Eclipse + PyDev meeting my needs. This set of tools is really comfortable and easy to use. I'm currently working on a quite bigger project. I'd like to have a possibility to use CVS or any other version control system which would be installed on my local harddrive (I recently moved my house and don't have yet an access to internet.) It doesn't matter for me if it'd be CVS - can also be any other version control system. It'd be great if it will be not too hard to configure with Eclipse. Can anyone give me some possible solution? Any hints? Regards and thanks in advance for any clues. Please forgive my English ;)
Eclipse + local CVS + PyDev
969,134
1
4
2,673
0
python,eclipse,cvs,pydev
If you don't mind a switch to Subversion, Eclipse has its SubClipse plugin.
0
1
0
0
2009-06-09T09:43:00.000
8
0.024995
false
969,121
1
0
1
4
I tried several Python IDEs (on Windows platform) but finally I found only Eclipse + PyDev meeting my needs. This set of tools is really comfortable and easy to use. I'm currently working on a quite bigger project. I'd like to have a possibility to use CVS or any other version control system which would be installed on my local harddrive (I recently moved my house and don't have yet an access to internet.) It doesn't matter for me if it'd be CVS - can also be any other version control system. It'd be great if it will be not too hard to configure with Eclipse. Can anyone give me some possible solution? Any hints? Regards and thanks in advance for any clues. Please forgive my English ;)
users module errors in Google App Engine
14,879,943
1
1
444
0
python,google-app-engine
Actually not my answer, but from the OP, who didn't act on S. Lott's comment: It works now! But I didn't actually change anything; it seems Google needs time (about 20 minutes) to update its database for App Engine.
0
1
0
0
2009-06-09T12:48:00.000
1
0.197375
false
969,877
0
0
1
1
I want to use user service of my domain in google App, but... Is it possible to solve this problem by my side? Traceback (most recent call last): File "/base/python_lib/versions/1/google/appengine/ext/webapp/__init__.py", line 501, in __call__ handler.get(*groups) File "/base/data/home/apps/myapp2009/1.334081739634584397/helloworld.py", line 13, in get self.redirect(users.create_login_url(self.request.uri)) File "/base/python_lib/versions/1/google/appengine/api/users.py", line 176, in create_login_url raise NotAllowedError NotAllowedError
Getting a list of all subdirectories in the current directory
35,261,270
1
824
1,160,495
0
python,directory,subdirectory
Use a filter function, os.path.isdir, over os.listdir(), something like this: filter(os.path.isdir, [os.path.join(os.path.abspath('PATH'), p) for p in os.listdir('PATH/')])
0
1
0
0
2009-06-10T02:48:00.000
32
0.00625
false
973,473
1
0
0
2
Is there a way to return a list of all the subdirectories in the current directory in Python? I know you can do this with files, but I need to get the list of directories instead.
Getting a list of all subdirectories in the current directory
973,489
41
824
1,160,495
0
python,directory,subdirectory
If you need a recursive solution that will find all the subdirectories in the subdirectories, use walk as proposed before. If you only need the current directory's child directories, combine os.listdir with os.path.isdir
0
1
0
0
2009-06-10T02:48:00.000
32
1
false
973,473
1
0
0
2
Is there a way to return a list of all the subdirectories in the current directory in Python? I know you can do this with files, but I need to get the list of directories instead.
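Putting the two answers above together - immediate child directories via os.listdir/os.path.isdir, and the recursive variant via os.walk:

    import os

    # Immediate subdirectories of the current directory
    subdirs = [name for name in os.listdir('.') if os.path.isdir(name)]
    print subdirs

    # All subdirectories, recursively
    all_subdirs = []
    for root, dirs, files in os.walk('.'):
        for d in dirs:
            all_subdirs.append(os.path.join(root, d))
    print all_subdirs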
how to integrate ZSH and (i)python?
1,070,597
11
11
8,591
0
python,shell,zsh,ipython
I asked this question on the zsh list and this answer worked for me. YMMV. In genutils.py, after the line "if not debug:", remove the line "stat = os.system(cmd)" and replace it with "stat = subprocess.call(cmd, shell=True, executable='/bin/zsh')". You see, the problem is that the "!" call uses os.system to run the command, which defaults to manky old /bin/sh. Like I said, it worked for me, although I'm not sure what got borked behind the scenes.
0
1
0
1
2009-06-10T03:11:00.000
2
1.2
true
973,520
0
0
0
1
I have been in love with zsh for a long time, and more recently I have been discovering the advantages of the ipython interactive interpreter over python itself. Being able to cd, to ls, to run or to ! is indeed very handy. But now it feels weird to have such a clumsy shell when in ipython, and I wonder how I could integrate my zsh and my ipython better. Of course, I could rewrite my .zshrc and all my scripts in python, and emulate most of my shell world from ipython, but it doesn't feel right. And I am obviously not ready to use ipython as a main shell anyway. So, here comes my question: how do you work efficiently between your shell and your python command-loop ? Am I missing some obvious integration strategy ? Should I do all that in emacs ?
Thinking in AppEngine
979,391
1
6
1,022
1
java,python,google-app-engine,data-modeling
Non-relational database design essentially involves denormalization wherever possible. Example: since BigTable doesn't provide enough aggregation features, the sum(cash) option you would have in the RDBMS world is not available. Instead the sum would have to be stored on the model, and the model's save method must be overridden to compute the denormalized field. The essential basic design that comes to mind is that each template has its own model, where all the required fields to be populated are present, denormalized, in the corresponding model; and you have an entire signals-update-bots complexity going on in the models.
0
1
0
0
2009-06-10T16:13:00.000
4
0.049958
false
976,639
0
0
1
2
I'm looking for resources to help migrate my design skills from traditional RDBMS data store over to AppEngine DataStore (ie: 'Soft Schema' style). I've seen several presentations and all touch on the the overarching themes and some specific techniques. I'm wondering if there's a place we could pool knowledge from experience ("from the trenches") on real-world approaches to rethinking how data is structured, especially porting existing applications. We're heavily Hibernate based and have probably travelled a bit down the wrong path with our data model already, generating some gnarly queries which our DB is struggling with. Please respond if: You have ported a non-trivial application over to AppEngine You've created a common type of application from scratch in AppEngine You've done neither 1 or 2, but are considering it and want to share your own findings so far.
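A sketch of the denormalization pattern described in the first answer above, using the old App Engine db API; the model and field names are hypothetical:

    from google.appengine.ext import db

    class Account(db.Model):
        cash_entries = db.ListProperty(float)          # the raw values
        cash_total = db.FloatProperty(default=0.0)     # denormalized sum, since there is no SUM() query

        def put(self):
            self.cash_total = sum(self.cash_entries)   # recompute on every save
            return db.Model.put(self)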
Thinking in AppEngine
978,757
1
6
1,022
1
java,python,google-app-engine,data-modeling
The timeouts are tight and performance was ok but not great, so I found myself using extra space to save time; for example I had a many-to-many relationship between trading cards and players, so I duplicated the information of who owns what: Card objects have a list of Players and Player objects have a list of Cards. Normally storing all your information twice would have been silly (and prone to get out of sync) but it worked really well. In Python they recently released a remote API so you can get an interactive shell to the datastore so you can play with your datastore without any timeouts or limits (for example, you can delete large swaths of data, or refactor your models); this is fantastically useful since otherwise as Julien mentioned it was very difficult to do any bulk operations.
0
1
0
0
2009-06-10T16:13:00.000
4
0.049958
false
976,639
0
0
1
2
I'm looking for resources to help migrate my design skills from traditional RDBMS data store over to AppEngine DataStore (ie: 'Soft Schema' style). I've seen several presentations and all touch on the the overarching themes and some specific techniques. I'm wondering if there's a place we could pool knowledge from experience ("from the trenches") on real-world approaches to rethinking how data is structured, especially porting existing applications. We're heavily Hibernate based and have probably travelled a bit down the wrong path with our data model already, generating some gnarly queries which our DB is struggling with. Please respond if: You have ported a non-trivial application over to AppEngine You've created a common type of application from scratch in AppEngine You've done neither 1 or 2, but are considering it and want to share your own findings so far.
Can I make Python 2.5 exit on ctrl-D in Windows instead of ctrl-Z?
978,669
0
7
5,212
0
python,windows,python-2.5
Run Cygwin Python if windowisms are bothering you... Unless what you are doing depends on pywin32 that is.
0
1
0
0
2009-06-10T16:40:00.000
4
0
false
976,796
1
0
0
2
I'm used to ending the python interactive interpreter using Ctrl-d using Linux and OS X. On windows though, you have to use CTRL+Z and then enter. Is there any way to use CTRL+D?
Can I make Python 2.5 exit on ctrl-D in Windows instead of ctrl-Z?
977,031
0
7
5,212
0
python,windows,python-2.5
You can change the key set that Idle should be using. Under Options->"Configure IDLE..." go to the "Keys" tab. On the right you can select the "IDLE Classic Unix" key set.
0
1
0
0
2009-06-10T16:40:00.000
4
0
false
976,796
1
0
0
2
I'm used to ending the python interactive interpreter using Ctrl-d using Linux and OS X. On windows though, you have to use CTRL+Z and then enter. Is there any way to use CTRL+D?
Can Windows drivers be written in Python?
981,251
0
13
13,794
0
python,windows,drivers
No they cannot. Windows drivers must be written in a language that can (1) interface with the C-based API and (2) compile down to machine code. Then again, there's nothing stopping you from writing a compiler that translates python to machine code ;)
0
1
0
0
2009-06-11T13:50:00.000
7
0
false
981,200
1
0
0
5
Can Windows drivers be written in Python?
Can Windows drivers be written in Python?
981,227
0
13
13,794
0
python,windows,drivers
Never say never, but eh... no. You might be able to hack something together to run user-mode parts of drivers in Python, but kernel-mode stuff can only be done in C or assembly.
0
1
0
0
2009-06-11T13:50:00.000
7
0
false
981,200
1
0
0
5
Can Windows drivers be written in Python?
Can Windows drivers be written in Python?
981,216
1
13
13,794
0
python,windows,drivers
Python runs in a virtual machine, so no. BUT: You could write a compiler that translates Python code to machine language. Once you've done that, you can do it.
0
1
0
0
2009-06-11T13:50:00.000
7
0.028564
false
981,200
1
0
0
5
Can Windows drivers be written in Python?
Can Windows drivers be written in Python?
981,320
3
13
13,794
0
python,windows,drivers
The definitive answer is not without embedding an interpreter in your otherwise C/assembly driver. Unless someone has a framework available, then the answer is no. Once you have the interpreter and bindings in place then the rest of the logic could be done in Python. However, writing drivers is one of the things for which C is best suited. I imagine the resulting Python code would look a whole lot like C code and defeat the purpose of the interpreter overhead.
0
1
0
0
2009-06-11T13:50:00.000
7
0.085505
false
981,200
1
0
0
5
Can Windows drivers be written in Python?
Can Windows drivers be written in Python?
981,268
1
13
13,794
0
python,windows,drivers
I don't know the restrictions on drivers on windows (memory allocation schemes, dynamic load of libraries and all), but you may be able to embed a python interpreter in your driver, at which point you can do whatever you want. Not that I think it is a good idea :)
0
1
0
0
2009-06-11T13:50:00.000
7
0.028564
false
981,200
1
0
0
5
Can Windows drivers be written in Python?
Run a Python project in Eclipse as root
982,463
3
10
4,783
0
python,eclipse,root,sudo,gksudo
It may not be an ideal solution, but the rare times that I need this same functionality I end up just running Eclipse as root.
0
1
0
0
2009-06-11T14:24:00.000
3
0.197375
false
981,411
0
0
0
1
I use Eclipse as my IDE, and when I run my application I wish the application itself to run as root. My program currently checks if it is root, and if not it restarts itself with gksudo. The output, however, isn't written to the console. I can't use sudo, since it doesn't give me a graphical prompt. (While my program is CLI, Eclipse doesn't permit console interaction afaict) What's the "right" way to be elevating my application?
windows command line and Python
981,705
3
1
8,771
0
python,windows,command-line
python myscript.py
0
1
0
0
2009-06-11T15:07:00.000
6
1.2
true
981,691
0
0
0
1
I have a python script that i want to run from the command line but unsure how to run it. Thanks :)
How to keep a Python script output window open?
1,031,891
59
214
389,804
0
python,windows
cmd /k is the typical way to open any console application (not only Python) with a console window that will remain after the application closes. The easiest way I can think to do that, is to press Win+R, type cmd /k and then drag&drop the script you want to the Run dialog.
0
1
0
0
2009-06-16T11:31:00.000
25
1
false
1,000,900
1
0
0
6
I have just started with Python. When I execute a python script file on Windows, the output window appears but instantaneously goes away. I need it to stay there so I can analyze my output. How can I keep it open?
How to keep a Python script output window open?
4,477,506
6
214
389,804
0
python,windows
I had a similar problem. With Notepad++ I used to use the command : C:\Python27\python.exe "$(FULL_CURRENT_PATH)" which closed the cmd window immediately after the code terminated. Now I am using cmd /k c:\Python27\python.exe "$(FULL_CURRENT_PATH)" which keeps the cmd window open.
0
1
0
0
2009-06-16T11:31:00.000
25
1
false
1,000,900
1
0
0
6
I have just started with Python. When I execute a python script file on Windows, the output window appears but instantaneously goes away. I need it to stay there so I can analyze my output. How can I keep it open?