Dataset columns (type and observed range):

Title                               string, length 15 to 150
A_Id                                int64, 2.98k to 72.4M
Users Score                         int64, -17 to 470
Q_Score                             int64, 0 to 5.69k
ViewCount                           int64, 18 to 4.06M
Database and SQL                    int64, 0 to 1
Tags                                string, length 6 to 105
Answer                              string, length 11 to 6.38k
GUI and Desktop Applications        int64, 0 to 1
System Administration and DevOps    int64, 1 to 1
Networking and APIs                 int64, 0 to 1
Other                               int64, 0 to 1
CreationDate                        string, length 23 to 23
AnswerCount                         int64, 1 to 64
Score                               float64, -1 to 1.2
is_accepted                         bool, 2 classes
Q_Id                                int64, 1.85k to 44.1M
Python Basics and Environment       int64, 0 to 1
Data Science and Machine Learning   int64, 0 to 1
Web Development                     int64, 0 to 1
Available Count                     int64, 1 to 17
Question                            string, length 41 to 29k
Prevent ftplib from Downloading a File in Progress?
375,716
0
4
1,585
0
python,ftp,ftplib
If you are dealing with multiple files, you could get the list of all the sizes at once, wait ten seconds, and see which are the same. Whichever are still the same should be safe to download.
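A minimal sketch of that size-comparison idea, assuming an already-connected ftplib.FTP instance and a server that supports the SIZE command (both assumptions, not guaranteed by the answer):

```python
import time
from ftplib import error_perm

def stable_files(ftp, delay=10):
    """Return names of files whose reported size did not change during `delay` seconds."""
    def snapshot():
        sizes = {}
        ftp.voidcmd("TYPE I")          # many servers only honour SIZE in binary mode
        for name in ftp.nlst():
            try:
                sizes[name] = ftp.size(name)
            except error_perm:
                sizes[name] = None     # directories or servers without SIZE support
        return sizes

    before = snapshot()
    time.sleep(delay)
    after = snapshot()
    return [name for name, size in after.items()
            if size is not None and before.get(name) == size]
```

As UPDATE2 in the question notes, some servers report the full size as soon as the upload starts, so this check is only a heuristic.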
0
1
0
1
2008-12-17T18:54:00.000
4
0
false
375,620
0
0
0
4
We have a ftp system setup to monitor/download from remote ftp servers that are not under our control. The script connects to the remote ftp, and grabs the file names of files on the server, we then check to see if its something that has already been downloaded. If it hasn't been downloaded then we download the file and add it to the list. We recently ran into an issue, where someone on the remote ftp side, will copy in a massive single file(>1GB) then the script will wake up see a new file and begin downloading the file that is being copied in. What is the best way to check this? I was thinking of grabbing the file size waiting a few seconds checking the file size again and see if it has increased, if it hasn't then we download it. But since time is of the concern, we can't wait a few seconds for every single file set and see if it's file size has increased. What would be the best way to go about this, currently everything is done via pythons ftplib, how can we do this aside from using the aforementioned method. Yet again let me reiterate this, we have 0 control over the remote ftp sites. Thanks. UPDATE1: I was thinking what if i tried to rename it... since we have full permissions on the ftp, if the file upload is in progress would the rename command fail? We don't have any real options here... do we? UPDATE2: Well here's something interesting some of the ftps we tested on appear to automatically allocate the space once the transfer starts. E.g. If i transfer a 200mb file to the ftp server. While the transfer is active if i connect to the ftp server and do a size while the upload is happening. It shows 200mb for the size. Even though the file is only like 10% complete. Permissions also seem to be randomly set the FTP Server that comes with IIS sets the permissions AFTER the file is finished copying. While some of the other older ftp servers set it as soon as you send the file. :'(
Prevent ftplib from Downloading a File in Progress?
375,705
0
4
1,585
0
python,ftp,ftplib
As you say you have zero control over the servers and can't make your clients post trigger files as suggested by S. Lott, you must live with an imperfect solution and risk incomplete file transmission, perhaps by waiting for a while and comparing file sizes before and after. You can try the rename trick you suggested, but since you have zero control you can't be sure that the FTP server administrator (or their successor) won't change platforms or FTP servers or restrict your permissions. Sorry.
0
1
0
1
2008-12-17T18:54:00.000
4
0
false
375,620
0
0
0
4
We have a ftp system setup to monitor/download from remote ftp servers that are not under our control. The script connects to the remote ftp, and grabs the file names of files on the server, we then check to see if its something that has already been downloaded. If it hasn't been downloaded then we download the file and add it to the list. We recently ran into an issue, where someone on the remote ftp side, will copy in a massive single file(>1GB) then the script will wake up see a new file and begin downloading the file that is being copied in. What is the best way to check this? I was thinking of grabbing the file size waiting a few seconds checking the file size again and see if it has increased, if it hasn't then we download it. But since time is of the concern, we can't wait a few seconds for every single file set and see if it's file size has increased. What would be the best way to go about this, currently everything is done via pythons ftplib, how can we do this aside from using the aforementioned method. Yet again let me reiterate this, we have 0 control over the remote ftp sites. Thanks. UPDATE1: I was thinking what if i tried to rename it... since we have full permissions on the ftp, if the file upload is in progress would the rename command fail? We don't have any real options here... do we? UPDATE2: Well here's something interesting some of the ftps we tested on appear to automatically allocate the space once the transfer starts. E.g. If i transfer a 200mb file to the ftp server. While the transfer is active if i connect to the ftp server and do a size while the upload is happening. It shows 200mb for the size. Even though the file is only like 10% complete. Permissions also seem to be randomly set the FTP Server that comes with IIS sets the permissions AFTER the file is finished copying. While some of the other older ftp servers set it as soon as you send the file. :'(
Test if executable exists in Python?
377,590
0
337
167,325
0
python,path
So basically you want to find a file in the mounted filesystems (not necessarily in PATH directories only) and check if it is executable. This translates to the following plan: enumerate all files in locally mounted filesystems, match the results against the name pattern, and for each file found check whether it is executable. I'd say doing this in a portable way will require a lot of computing power and time. Is that really what you need?
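For contrast, a sketch of the much cheaper PATH-only check (roughly what the Unix which command does); the helper name is mine, and on Windows you would also have to try the PATHEXT extensions:

```python
import os

def which(program):
    """Return the full path of `program` if it is on PATH and executable, else None."""
    for directory in os.environ.get("PATH", "").split(os.pathsep):
        candidate = os.path.join(directory, program)
        if os.path.isfile(candidate) and os.access(candidate, os.X_OK):
            return candidate
    return None

print(which("ls"))
```

On Python 3.3+ the standard library already provides this as shutil.which.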
0
1
0
1
2008-12-18T05:55:00.000
15
0
false
377,017
0
0
0
1
In Python, is there a portable and simple way to test if an executable program exists? By simple I mean something like the which command which would be just perfect. I don't want to search PATH manually or something involving trying to execute it with Popen & al and see if it fails (that's what I'm doing now, but imagine it's launchmissiles)
Good language to develop a game server in?
393,963
-1
15
16,457
0
c#,java,python,networking
C++ and Java are quite slow compared to C. The language should be a tool but not a crutch.
0
1
0
1
2008-12-25T08:25:00.000
15
-0.013333
false
392,624
1
0
0
9
I was just wondering what language would be a good choice for developing a game server to support a large (thousands) number of users? I dabbled in python, but realized that it would just be too much trouble since it doesn't spawn threads across cores (meaning an 8 core server=1 core server). I also didn't really like the language (that "self" stuff grossed me out). I know that C++ is the language for the job in terms of performance, but I hate it. I don't want to deal with its sloppy syntax and I like my hand to be held by managed languages. This brings me to C# and Java, but I am open to other languages. I love the simplicity of .NET, but I was wondering if, speed wise, this would be good for the job. Keep in mind since this will be deployed on a Linux server, it would be running on the Mono framework - not sure if that matters. I know that Java is syntax-wise very similar to .Net, but my experience with it is limited. Are there any frameworks out there for it or anthing to ease in the development? Please help me and my picky self arrive on a solution. UPDATE: I didn't mean to sound so picky, and I really don't think I was. The only language I really excluded was C++, Python I don't like because of the scalability problem. I know that there are ways of communicating between processes, but if I have an 8 core server, why should I need to make 8 processes? Is there a more elegant solution?
Good language to develop a game server in?
392,874
0
15
16,457
0
c#,java,python,networking
What are your objectives? Not the creation of the game itself, but why are you creating it? If you're doing it to learn a new language, then pick the one that seems the most interesting to you (i.e., the one you most want to learn). If it is for any other reason, then the best language will be the one that you already know best and enjoy using most. This will allow you to focus on working out the game logic and getting something up and running so that you can see progress and remain motivated to continue, rather than getting bogged down in details of the language you're using and losing interest. If your favorite language proves inadequate in some ways (too slow, not expressive enough, whatever), then you can rewrite the problem sections in a more suitable language when issues come up - and you won't know the best language to address the specific problems until you know what the problems end up being. Even if your chosen language proves entirely unsuitable for final production use and the whole thing has to be rewritten, it will give you a working prototype with tested game logic, which will make dealing with the new language far easier.
0
1
0
1
2008-12-25T08:25:00.000
15
0
false
392,624
1
0
0
9
I was just wondering what language would be a good choice for developing a game server to support a large (thousands) number of users? I dabbled in python, but realized that it would just be too much trouble since it doesn't spawn threads across cores (meaning an 8 core server=1 core server). I also didn't really like the language (that "self" stuff grossed me out). I know that C++ is the language for the job in terms of performance, but I hate it. I don't want to deal with its sloppy syntax and I like my hand to be held by managed languages. This brings me to C# and Java, but I am open to other languages. I love the simplicity of .NET, but I was wondering if, speed wise, this would be good for the job. Keep in mind since this will be deployed on a Linux server, it would be running on the Mono framework - not sure if that matters. I know that Java is syntax-wise very similar to .Net, but my experience with it is limited. Are there any frameworks out there for it or anthing to ease in the development? Please help me and my picky self arrive on a solution. UPDATE: I didn't mean to sound so picky, and I really don't think I was. The only language I really excluded was C++, Python I don't like because of the scalability problem. I know that there are ways of communicating between processes, but if I have an 8 core server, why should I need to make 8 processes? Is there a more elegant solution?
Good language to develop a game server in?
392,831
3
15
16,457
0
c#,java,python,networking
More details about this game server might help folks better answer your question. Is this a game server in the sense of something like a Counter-Strike dedicated server, which sits in the background and hosts multiplayer interactions, or are you writing something which will be hosted on an HTTP web server? Personally, if it were me, I'd be considering Java or C++. My personal preference and skill set would probably lead me towards C++ because I find Java clumsy to work with on both platforms (more so on Linux) and don't have the confidence that C# is ready for prime time on Linux yet. That said, you also need a pretty significant community hammering on said server before the performance of your language becomes a real problem. My advice would be to write it in whatever language you can at the moment and, if your game grows to sufficient size, invest in a rewrite at that time.
0
1
0
1
2008-12-25T08:25:00.000
15
0.039979
false
392,624
1
0
0
9
I was just wondering what language would be a good choice for developing a game server to support a large (thousands) number of users? I dabbled in python, but realized that it would just be too much trouble since it doesn't spawn threads across cores (meaning an 8 core server=1 core server). I also didn't really like the language (that "self" stuff grossed me out). I know that C++ is the language for the job in terms of performance, but I hate it. I don't want to deal with its sloppy syntax and I like my hand to be held by managed languages. This brings me to C# and Java, but I am open to other languages. I love the simplicity of .NET, but I was wondering if, speed wise, this would be good for the job. Keep in mind since this will be deployed on a Linux server, it would be running on the Mono framework - not sure if that matters. I know that Java is syntax-wise very similar to .Net, but my experience with it is limited. Are there any frameworks out there for it or anthing to ease in the development? Please help me and my picky self arrive on a solution. UPDATE: I didn't mean to sound so picky, and I really don't think I was. The only language I really excluded was C++, Python I don't like because of the scalability problem. I know that there are ways of communicating between processes, but if I have an 8 core server, why should I need to make 8 processes? Is there a more elegant solution?
Good language to develop a game server in?
392,645
1
15
16,457
0
c#,java,python,networking
It may depend a lot on what language your "game logic" (you may know this term as "business logic") is best expressed in. For example, if the game logic is best expressed in Python (or any other particular language) it might be best to just write it in Python and deal with the performance issues the hard way with either multi-threading or clustering. Even though it may cost you a lot of time to get the performance you want out of Python, it will be less than the time it will take you to express "player A now casts a level 70 Spell of Darkness in a radius of 7 units affecting all units that have spoken with player B and ..." in C++. Something else to consider is what protocol you will be using to communicate with the clients. If you have a complex binary protocol, C++ may be easier (especially if you already have experience doing it), while JSON (or similar) may be easier to parse in Python. Yes, I know C++ and Python aren't languages you are limited to (or even considering), but I'm referring to them generally here. It probably comes down to what language you are best at. A poorly written program which you hated writing will be worse than one written in a language you know and enjoy, even if the poorly written program was in an arguably more powerful language.
0
1
0
1
2008-12-25T08:25:00.000
15
0.013333
false
392,624
1
0
0
9
I was just wondering what language would be a good choice for developing a game server to support a large (thousands) number of users? I dabbled in python, but realized that it would just be too much trouble since it doesn't spawn threads across cores (meaning an 8 core server=1 core server). I also didn't really like the language (that "self" stuff grossed me out). I know that C++ is the language for the job in terms of performance, but I hate it. I don't want to deal with its sloppy syntax and I like my hand to be held by managed languages. This brings me to C# and Java, but I am open to other languages. I love the simplicity of .NET, but I was wondering if, speed wise, this would be good for the job. Keep in mind since this will be deployed on a Linux server, it would be running on the Mono framework - not sure if that matters. I know that Java is syntax-wise very similar to .Net, but my experience with it is limited. Are there any frameworks out there for it or anthing to ease in the development? Please help me and my picky self arrive on a solution. UPDATE: I didn't mean to sound so picky, and I really don't think I was. The only language I really excluded was C++, Python I don't like because of the scalability problem. I know that there are ways of communicating between processes, but if I have an 8 core server, why should I need to make 8 processes? Is there a more elegant solution?
Good language to develop a game server in?
392,764
2
15
16,457
0
c#,java,python,networking
You could as well use Java and compile the code using GCC to a native executable. That way you don't get the performance hit of the bytecode engine (Yes, I know - Java out of the box is as fast as C++. It must be just me who always measures a factor 5 performance difference). The drawback is that the GCC Java-frontend does not support all of the Java 1.6 language features. Another choice would be to use your language of choice, get the code working first and then move the performance critical stuff into native code. Nearly all languages support binding to compiled libraries. That does not solve your "python does not multithread well"-problem, but it gives you more choices.
0
1
0
1
2008-12-25T08:25:00.000
15
0.02666
false
392,624
1
0
0
9
I was just wondering what language would be a good choice for developing a game server to support a large (thousands) number of users? I dabbled in python, but realized that it would just be too much trouble since it doesn't spawn threads across cores (meaning an 8 core server=1 core server). I also didn't really like the language (that "self" stuff grossed me out). I know that C++ is the language for the job in terms of performance, but I hate it. I don't want to deal with its sloppy syntax and I like my hand to be held by managed languages. This brings me to C# and Java, but I am open to other languages. I love the simplicity of .NET, but I was wondering if, speed wise, this would be good for the job. Keep in mind since this will be deployed on a Linux server, it would be running on the Mono framework - not sure if that matters. I know that Java is syntax-wise very similar to .Net, but my experience with it is limited. Are there any frameworks out there for it or anthing to ease in the development? Please help me and my picky self arrive on a solution. UPDATE: I didn't mean to sound so picky, and I really don't think I was. The only language I really excluded was C++, Python I don't like because of the scalability problem. I know that there are ways of communicating between processes, but if I have an 8 core server, why should I need to make 8 processes? Is there a more elegant solution?
Good language to develop a game server in?
392,911
18
15
16,457
0
c#,java,python,networking
I might be going slightly off-topic here, but the topic interests me as I have (hobby-wise) worked on quite a few game servers (MMORPG servers) - on others' code as well as mine. There is literature out there that will be of interest to you, drop me a note if you want some references. One thing that strikes me in your question is the want to serve a thousand users off a multithreaded application. From my humble experience, that does not work too well. :-) When you serve thousands of users you want a design that is as modular as possible, because one of your primary goals will be to keep the service as a whole up and running. Game servers tend to be rather complex, so there will be quite a few show-stopping bugs. Don't make your life miserable with a single point of failure (one application!). Instead, try to build multiple processes that can run on a multitude of hosts. My humble suggestion is the following: Make them independent, so a failing process will be irrelevant to the service. Make them small, so that the different parts of the service and how they interact are easy to grasp. Don't let users communicate with the gamelogic OR DB directly. Write a proxy - network stacks can and will show odd behaviour on different architectures when you have a multitude of users. Also make sure that you can later "clean"/filter what the proxies forward. Have a process that will only monitor other processes to see if they are still working properly, with the ability to restart parts. Make them distributable. Coordinate processes via TCP from the start or you will run into scalability problems. If you have large landscapes, consider means to dynamically divide load by dividing servers by geography. Don't have every backend process hold all the data in memory. I have ported a few such engines written in C++ and C# for hosts operating on Linux, FreeBSD and also Solaris (on an old UltraSparc IIi - yes, mono still runs there :). From my experience, C# is well fast enough, considering on what ancient hardware it operates on that sparc machine. The industry (as far as I know) tends to use a lot of C++ for the serving work and embeds scripting languages for the actual game logic. Ah, written too much already - way cool topic.
0
1
0
1
2008-12-25T08:25:00.000
15
1
false
392,624
1
0
0
9
I was just wondering what language would be a good choice for developing a game server to support a large (thousands) number of users? I dabbled in python, but realized that it would just be too much trouble since it doesn't spawn threads across cores (meaning an 8 core server=1 core server). I also didn't really like the language (that "self" stuff grossed me out). I know that C++ is the language for the job in terms of performance, but I hate it. I don't want to deal with its sloppy syntax and I like my hand to be held by managed languages. This brings me to C# and Java, but I am open to other languages. I love the simplicity of .NET, but I was wondering if, speed wise, this would be good for the job. Keep in mind since this will be deployed on a Linux server, it would be running on the Mono framework - not sure if that matters. I know that Java is syntax-wise very similar to .Net, but my experience with it is limited. Are there any frameworks out there for it or anthing to ease in the development? Please help me and my picky self arrive on a solution. UPDATE: I didn't mean to sound so picky, and I really don't think I was. The only language I really excluded was C++, Python I don't like because of the scalability problem. I know that there are ways of communicating between processes, but if I have an 8 core server, why should I need to make 8 processes? Is there a more elegant solution?
Good language to develop a game server in?
392,650
7
15
16,457
0
c#,java,python,networking
What kind of performance do you need? twisted is great for servers that need lots of concurrency, as is erlang. Either supports massive concurrency easily and has facilities for distributed computing. If you want to span more than one core in a python app, do the same thing you'd do if you wanted to span more than one machine — run more than one process.
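A minimal sketch of the "run more than one process" point using the standard multiprocessing module; the worker function here is a stand-in for real game logic:

```python
from multiprocessing import Pool

def handle_request(payload):
    # Stand-in for per-request game logic; runs in a separate worker process.
    return payload.upper()

if __name__ == "__main__":
    pool = Pool(processes=8)                     # roughly one worker per core
    print(pool.map(handle_request, ["a", "b", "c", "d"]))
    pool.close()
    pool.join()
```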
0
1
0
1
2008-12-25T08:25:00.000
15
1
false
392,624
1
0
0
9
I was just wondering what language would be a good choice for developing a game server to support a large (thousands) number of users? I dabbled in python, but realized that it would just be too much trouble since it doesn't spawn threads across cores (meaning an 8 core server=1 core server). I also didn't really like the language (that "self" stuff grossed me out). I know that C++ is the language for the job in terms of performance, but I hate it. I don't want to deal with its sloppy syntax and I like my hand to be held by managed languages. This brings me to C# and Java, but I am open to other languages. I love the simplicity of .NET, but I was wondering if, speed wise, this would be good for the job. Keep in mind since this will be deployed on a Linux server, it would be running on the Mono framework - not sure if that matters. I know that Java is syntax-wise very similar to .Net, but my experience with it is limited. Are there any frameworks out there for it or anthing to ease in the development? Please help me and my picky self arrive on a solution. UPDATE: I didn't mean to sound so picky, and I really don't think I was. The only language I really excluded was C++, Python I don't like because of the scalability problem. I know that there are ways of communicating between processes, but if I have an 8 core server, why should I need to make 8 processes? Is there a more elegant solution?
Good language to develop a game server in?
392,627
21
15
16,457
0
c#,java,python,networking
I hate to say it, and I know I'm risking a down mod here, but it doesn't sound like there's a language out there for you. All programming languages have their quirks and programmers simply have to adapt to them. It's completely possible to write a working server in Python without classes (eliminating the "self" variable class references) and likewise just as easy to write C++ with clean syntax. If you're looking to deploy cross-platform and want to develop cross-platform as well, your best bet would probably be Java. It has shorter development cycles than compiled languages like C and C++, but higher performance (arguable, but I've always been anti-Java =P) than interpreted languages like Python and Perl, and you don't have to work with unofficial implementations like Mono that may from time to time not support all of a language's features.
0
1
0
1
2008-12-25T08:25:00.000
15
1
false
392,624
1
0
0
9
I was just wondering what language would be a good choice for developing a game server to support a large (thousands) number of users? I dabbled in python, but realized that it would just be too much trouble since it doesn't spawn threads across cores (meaning an 8 core server=1 core server). I also didn't really like the language (that "self" stuff grossed me out). I know that C++ is the language for the job in terms of performance, but I hate it. I don't want to deal with its sloppy syntax and I like my hand to be held by managed languages. This brings me to C# and Java, but I am open to other languages. I love the simplicity of .NET, but I was wondering if, speed wise, this would be good for the job. Keep in mind since this will be deployed on a Linux server, it would be running on the Mono framework - not sure if that matters. I know that Java is syntax-wise very similar to .Net, but my experience with it is limited. Are there any frameworks out there for it or anthing to ease in the development? Please help me and my picky self arrive on a solution. UPDATE: I didn't mean to sound so picky, and I really don't think I was. The only language I really excluded was C++, Python I don't like because of the scalability problem. I know that there are ways of communicating between processes, but if I have an 8 core server, why should I need to make 8 processes? Is there a more elegant solution?
Good language to develop a game server in?
392,844
2
15
16,457
0
c#,java,python,networking
The obvious candidates are Java and Erlang. Pro Java: ease of development; good development environments; stability and good stack traces; well-known (easy to find experienced programmers, lots of libraries, books, ...); quite fast, mature VM. Pro Erlang: proven in systems that need >99.9% uptime; ability to apply software updates without downtime; scalable (not only multi-core, but also multi-machine). Contra Erlang: unfamiliar syntax and programming paradigm; not so well known, so experienced programmers are hard to find; the VM is not nearly as fast as Java's. If your game server mainly works as an event dispatcher (with a bit of a database tucked on), Erlang's message-driven paradigm should be a good match. In this day and age, I would not consider using an unmanaged language (like C or C++); the marginal performance benefits simply aren't worth the hassle.
0
1
0
1
2008-12-25T08:25:00.000
15
0.02666
false
392,624
1
0
0
9
I was just wondering what language would be a good choice for developing a game server to support a large (thousands) number of users? I dabbled in python, but realized that it would just be too much trouble since it doesn't spawn threads across cores (meaning an 8 core server=1 core server). I also didn't really like the language (that "self" stuff grossed me out). I know that C++ is the language for the job in terms of performance, but I hate it. I don't want to deal with its sloppy syntax and I like my hand to be held by managed languages. This brings me to C# and Java, but I am open to other languages. I love the simplicity of .NET, but I was wondering if, speed wise, this would be good for the job. Keep in mind since this will be deployed on a Linux server, it would be running on the Mono framework - not sure if that matters. I know that Java is syntax-wise very similar to .Net, but my experience with it is limited. Are there any frameworks out there for it or anthing to ease in the development? Please help me and my picky self arrive on a solution. UPDATE: I didn't mean to sound so picky, and I really don't think I was. The only language I really excluded was C++, Python I don't like because of the scalability problem. I know that there are ways of communicating between processes, but if I have an 8 core server, why should I need to make 8 processes? Is there a more elegant solution?
Is there a simple way in Python to create a file which can be written to in one thread and read in a different one?
394,548
1
3
490
0
python
I think there is something wrong with the design if you already have a file-like object when you really want your data to end up in the subprocess. You should arrange for the data to be written into the subprocess in the first place, rather than having it written into something else file-like first. Whoever is writing the data should allow the flexibility to specify the output stream, and that should be the subprocess pipe. Alternatively, if the writer insists on creating its own stream object, you should let it finish writing, and only then start the subprocess, feeding it the result of the first write. E.g. if it is a StringIO object, take its value after writing and write it into the pipe; no need for thread synchronization here.
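A sketch of that second suggestion: let the writer finish filling a StringIO, then start the subprocess and feed it the accumulated value (the cat command is just a placeholder consumer):

```python
import io
import subprocess

buffer = io.StringIO()
buffer.write("line one\n")      # the writer fills the buffer first...
buffer.write("line two\n")

# ...and only afterwards do we start the subprocess and push the whole value in.
proc = subprocess.Popen(["cat"], stdin=subprocess.PIPE,
                        stdout=subprocess.PIPE, universal_newlines=True)
out, _ = proc.communicate(buffer.getvalue())
print(out)
```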
0
1
0
0
2008-12-27T00:25:00.000
4
0.049958
false
394,500
1
0
0
1
In the python program I'm writing, I've got a thread which iterates over a large structure in memory and writes it incrementally into a file-like object. I've got another thread which takes a file-like object and writes it to disk. Is there an easy way to connect the two, such that any data input from the first thread will be buffered for the second? Specifically, I'm trying to pass data to subprocess.Popen(). The process will read from stdin, but you cannot pass a "file-like" object to Popen because it calls stdin.fileno() and blows up unless you have a real file. Instead, you need to pass the PIPE argument to Popen, which allows you to use proc.stdin as a file-like object. But if you've already got a file-like object, there doesn't seem to be a great way to yolk the two of them together.
Prevent Python subprocess from passing fds on Windows?
408,049
-2
1
671
0
python,windows,subprocess,popen
I don't have a windows box around, so this is untested, but I'd be tempted to try the os.dup and os.dup2 methods; duplicate the file descriptors and use those instead of the parent ones.
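A rough sketch of that idea, untested on Windows as the answer says: hand the child duplicates of the parent's descriptors rather than the originals, then close the duplicates in the parent once the child has started. The child command is a placeholder.

```python
import os
import subprocess

out_copy = os.dup(1)              # duplicate our stdout descriptor
err_copy = os.dup(2)              # duplicate our stderr descriptor
try:
    proc = subprocess.Popen(["child.exe"], stdout=out_copy, stderr=err_copy)
finally:
    os.close(out_copy)            # the parent's copies can go away once the child has them
    os.close(err_copy)
```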
0
1
0
0
2009-01-02T21:08:00.000
2
-0.197375
false
408,039
0
0
0
1
Python's subprocess module by default passes all open file descriptors to any child processes it spawns. This means that if the parent process is listening on a port, and is killed, it cannot restart and begin listening again (even using SO_REUSEADDR) because the child is still in possession of that descriptor. I have no control over the child process. The subprocess POpen constructor does accept a close_fds argument, which would close descriptors on the child, just as I want. However, there is a restriction, only on Windows, that prevents it from being used if stdin/stdout are also overridden, which I need to do. Does anyone know of a work-around for this on Windows?
What versions of Python and wxPython correspond to each version of OSX?
409,702
3
0
412
0
python,macos,wxpython,compatibility
Tiger shipped with Python 2.3.5 and wxPython 2.5.3; Leopard ships with Python 2.5.1 and wxPython 2.8.4. wxPython was not shipped with earlier versions. OS X Lion ships with Python 2.7.1.
0
1
0
0
2009-01-03T19:37:00.000
1
1.2
true
409,677
0
0
0
1
I'd like to know what versions of Python and wxPython correspond to each version of OSX. I'm interested to know exactly how far back some of my apps will remain compatible on a mac before having to install newer versions of Python and wxPython.
How to bring program to front using python
413,073
4
1
2,418
0
python,qt
Check if KWin is configured to prevent focus stealing. There might be nothing wrong with your code -- but we linux people don't like applications bugging us when we work, so stealing focus is kinda frowned upon, and difficult under some window managers.
1
1
0
0
2009-01-05T03:36:00.000
3
0.26052
false
412,214
0
0
0
1
I would like to force my python app to the front if a condition occurs. I'm using Kubuntu & QT3.1 I've tried setActiveWindow(), but it only flashes the task bar in KDE. I think Windows has a function bringwindowtofront() for VB. Is there something similar for KDE?
How do you query the set of Users in Google App Domain within your Google App Engine project?
426,287
0
1
1,636
0
python,google-app-engine,google-apps,gql,gqlquery
Yeah, there's no way to get information about people who haven't logged into your application.
0
1
0
0
2009-01-07T04:33:00.000
4
0
false
419,197
0
0
1
1
If you have a Google App Engine project you can authenticate based on either a) anyone with a google account or b) a particular google app domain. Since you can connect these two entities I would assume there is some way to query the list of users that can be authenticated. The use case is outputting a roster of all members in an organization to a web page running on Google App Engine. Any thoughts?
Python and different Operating Systems
425,383
4
4
1,173
0
python,cross-platform
In general: Be careful with paths. Use os.path wherever possible. Don't assume that HOME points to the user's home/profile directory. Avoid using things like unix-domain sockets, fifos, and other POSIX-specific stuff. More specific stuff: If you're using wxPython, note that there may be differences in things like which thread certain events are generated in. Don't assume that events are generated in a specific thread. If you're calling a method which triggers a GUI-event, don't assume that event-handlers have completed by the time your method returns. (And vice versa, of course.) There are always differences in how a GUI will appear. Layouts are not always implemented in the exact same way.
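A couple of the path points above in code form (the directory and file names are only examples):

```python
import os

home = os.path.expanduser("~")                    # more reliable than reading HOME directly
config_dir = os.path.join(home, ".myapp")         # os.path.join picks the right separator
settings = os.path.join(config_dir, "settings.ini")
print(os.path.normpath(settings))
```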
0
1
0
0
2009-01-08T18:38:00.000
4
1.2
true
425,343
1
0
0
4
I am about to start a personal project using python and I will be using it on both Linux(Fedora) and Windows(Vista), Although I might as well make it work on a mac while im at it. I have found an API for the GUI that will work on all 3. The reason I am asking is because I have always heard of small differences that are easily avoided if you know about them before starting. Does anyone have any tips or suggestions that fall along these lines?
Python and different Operating Systems
425,403
1
4
1,173
0
python,cross-platform
You should take care of the Python version you are developing against. Especially, on a Mac, the default version of Python installed with the OS, is rather old (of course, newer versions can be installed) Don't use the OS specific libraries Take special care of 'special' UI elements, like taskbar icons (windows), ... Use forward slashes when using paths, avoid C:/, /home/..., ... Use os.path to work with paths.
0
1
0
0
2009-01-08T18:38:00.000
4
0.049958
false
425,343
1
0
0
4
I am about to start a personal project using python and I will be using it on both Linux(Fedora) and Windows(Vista), Although I might as well make it work on a mac while im at it. I have found an API for the GUI that will work on all 3. The reason I am asking is because I have always heard of small differences that are easily avoided if you know about them before starting. Does anyone have any tips or suggestions that fall along these lines?
Python and different Operating Systems
425,409
3
4
1,173
0
python,cross-platform
Some things I've noticed in my cross platform development in Python: OSX doesn't have a tray, so application notifications usually happen right in the dock. So if you're building a background notification service you may need a small amount of platform-specific code. os.startfile() apparently only works on Windows. Either that or Python 2.5.1 on Leopard doesn't support it. os.normpath() is something you might want to consider using too, just to keep your paths and volumes using the correct slash notation and volume names. icons are dealt with in fundamentally different ways in Windows and OSX, be sure you provide icons at all the right sizes for both (16x16, 24x24, 32x32, 48x48, 64x64, 128x128 and 256x256) and be sure to read up on setting up icons with wx widgets.
0
1
0
0
2009-01-08T18:38:00.000
4
0.148885
false
425,343
1
0
0
4
I am about to start a personal project using python and I will be using it on both Linux(Fedora) and Windows(Vista), Although I might as well make it work on a mac while im at it. I have found an API for the GUI that will work on all 3. The reason I am asking is because I have always heard of small differences that are easily avoided if you know about them before starting. Does anyone have any tips or suggestions that fall along these lines?
Python and different Operating Systems
425,465
0
4
1,173
0
python,cross-platform
Some filename problems: This.File and this.file are different files on Linux, but point to the same file on Windows. Troublesome if you manage some file repository and access it from both platforms. Less frequent related problem is that of names like NUL or LPT being files on Windows. Binary distribution code (if any) would likely use py2exe on Win, py2app on Mac and wouldn't be present on Linux.
0
1
0
0
2009-01-08T18:38:00.000
4
0
false
425,343
1
0
0
4
I am about to start a personal project using python and I will be using it on both Linux(Fedora) and Windows(Vista), Although I might as well make it work on a mac while im at it. I have found an API for the GUI that will work on all 3. The reason I am asking is because I have always heard of small differences that are easily avoided if you know about them before starting. Does anyone have any tips or suggestions that fall along these lines?
How to process a YAML stream in Python
429,305
2
6
2,564
0
python,command-line,streaming,yaml
All of the references to streams in the documentation seem to be referring to a stream of documents... I've never tried to use it in the way you describe, but it seems like chunking the data into such a stream of documents is a reasonable approach.
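A sketch of that stream-of-documents approach, assuming the producer separates each record with a "---" document marker so PyYAML's load_all generator can consume them lazily from stdin (buffering behaviour may still vary by platform):

```python
import sys
import yaml

# yaml.safe_load_all returns a generator; each iteration parses one document
# (one "--- ..." block) from the stream as it becomes available.
for document in yaml.safe_load_all(sys.stdin):
    print(document)
```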
0
1
0
0
2009-01-09T18:29:00.000
2
0.197375
false
429,162
0
0
0
1
I have a command line app the continuously outputs YAML data in the form: - col0: datum0 col1: datum1 col2: datum2 - col0: datum0 col1: datum1 col2: datum2 ... It does this for all of eternity. I would like to write a Python script that continuously reads each of these records. The PyYAML library seems best at taking fully loaded strings and interpreting those as a complete YAML document. Is there a way to put PyYAML into a "streaming" mode? Or is my only option to chunk the data myself and feed it bit by bit into PyYAML?
Equivalent of shell 'cd' command to change the working directory?
39,964,190
4
797
1,317,026
0
python,cd
If you would like to do something like "cd ..", just type os.chdir(".."); it is the same as cd .. in the Windows cmd shell. Of course, import os is necessary (e.g. put it on the first line of your code).
0
1
0
0
2009-01-10T20:28:00.000
14
0.057081
false
431,684
0
0
0
3
cd is the shell command to change the working directory. How do I change the current working directory in Python?
Equivalent of shell 'cd' command to change the working directory?
431,694
12
797
1,317,026
0
python,cd
os.chdir() is the Pythonic version of cd.
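For completeness, the call in context (the target directory is just an example):

```python
import os

print(os.getcwd())     # where we are now
os.chdir("/tmp")       # the equivalent of `cd /tmp`
print(os.getcwd())
```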
0
1
0
0
2009-01-10T20:28:00.000
14
1
false
431,684
0
0
0
3
cd is the shell command to change the working directory. How do I change the current working directory in Python?
Equivalent of shell 'cd' command to change the working directory?
431,695
12
797
1,317,026
0
python,cd
os.chdir() is the right way.
0
1
0
0
2009-01-10T20:28:00.000
14
1
false
431,684
0
0
0
3
cd is the shell command to change the working directory. How do I change the current working directory in Python?
Open document with default OS application in Python, both in Windows and Mac OS
818,083
2
157
118,227
0
python,windows,macos
If you want to specify the app to open the file with on Mac OS X, use this: os.system("open -a [app name] [file name]")
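The same idea with subprocess instead of os.system, which avoids shell-quoting problems when the application or file name contains spaces (both names here are placeholders):

```python
import subprocess

subprocess.call(["open", "-a", "TextEdit", "/path/to/document.txt"])
```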
0
1
0
0
2009-01-12T06:23:00.000
15
0.02666
false
434,597
1
0
0
2
I need to be able to open a document using its default application in Windows and Mac OS. Basically, I want to do the same thing that happens when you double-click on the document icon in Explorer or Finder. What is the best way to do this in Python?
Open document with default OS application in Python, both in Windows and Mac OS
24,895,085
5
157
118,227
0
python,windows,macos
os.startfile(path, 'open') is good under Windows because when the path contains spaces, os.system('start ' + path_name) can't open the file correctly, and when the path contains international (i18n) characters, os.system needs the Unicode converted to the codec of the Windows console.
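Putting the platform-specific pieces together, a hedged sketch of a small dispatcher; the xdg-open fallback for Linux is my addition, not part of the answer:

```python
import os
import subprocess
import sys

def open_with_default_app(path):
    if sys.platform.startswith("win"):
        os.startfile(path)                   # Windows: handles spaces and Unicode paths
    elif sys.platform == "darwin":
        subprocess.call(["open", path])      # Mac OS X
    else:
        subprocess.call(["xdg-open", path])  # most Linux desktops
```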
0
1
0
0
2009-01-12T06:23:00.000
15
0.066568
false
434,597
1
0
0
2
I need to be able to open a document using its default application in Windows and Mac OS. Basically, I want to do the same thing that happens when you double-click on the document icon in Explorer or Finder. What is the best way to do this in Python?
How to implement a python REPL that nicely handles asynchronous output?
439,403
-1
13
7,222
0
python,readline,read-eval-print-loop
I think you have two basic options: synchronize your output (i.e. block until it comes back), or separate your input and your (asynchronous) output, perhaps into two separate columns.
0
1
0
0
2009-01-12T21:11:00.000
6
-0.033321
false
437,025
1
0
0
1
I have a Python-based app that can accept a few commands in a simple read-eval-print-loop. I'm using raw_input('> ') to get the input. On Unix-based systems, I also import readline to make things behave a little better. All this is working fine. The problem is that there are asynchronous events coming in, and I'd like to print output as soon as they happen. Unfortunately, this makes things look ugly. The "> " string doesn't show up again after the output, and if the user is halfway through typing something, it chops their text in half. It should probably redraw the user's text-in-progress after printing something. This seems like it must be a solved problem. What's the proper way to do this? Also note that some of my users are Windows-based. TIA Edit: The accepted answer works under Unixy platforms (when the readline module is available), but if anyone knows how to make this work under Windows, it would be much appreciated!
Good Python networking libraries for building a TCP server?
442,079
1
12
12,949
0
python,networking,twisted
Just adding an answer to re-iterate other posters - it'll be worth it to use Twisted. There's no reason to write yet another TCP server that'll end up working not as well as one using twisted would. The only reason would be if writing your own is much faster, developer-wise, but if you just bite the bullet and learn twisted now, your future projects will benefit greatly. And, as others have said, you'll be able to do much more complex stuff if you use twisted from the start.
0
1
1
0
2009-01-14T03:51:00.000
5
0.039979
false
441,849
0
0
0
2
I was just wondering what network libraries there are out there for Python for building a TCP/IP server. I know that Twisted might jump to mind but the documentation seems scarce, sloppy, and scattered to me. Also, would using Twisted even have a benefit over rolling my own server with select.select()?
Good Python networking libraries for building a TCP server?
441,863
6
12
12,949
0
python,networking,twisted
The standard library includes SocketServer and related modules which might be sufficient for your needs. This is a good middle ground between a complex framework like Twisted, and rolling your own select() loop.
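A minimal sketch of that middle ground (the module is named SocketServer in Python 2 and socketserver in Python 3; the Python 3 spelling is used here, and the port is arbitrary):

```python
import socketserver

class EchoHandler(socketserver.StreamRequestHandler):
    def handle(self):
        # One handler instance per connection; echo every line back.
        for line in self.rfile:
            self.wfile.write(line)

if __name__ == "__main__":
    server = socketserver.ThreadingTCPServer(("localhost", 9000), EchoHandler)
    server.serve_forever()
```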
0
1
1
0
2009-01-14T03:51:00.000
5
1
false
441,849
0
0
0
2
I was just wondering what network libraries there are out there for Python for building a TCP/IP server. I know that Twisted might jump to mind but the documentation seems scarce, sloppy, and scattered to me. Also, would using Twisted even have a benefit over rolling my own server with select.select()?
Cleanest way to run/debug python programs in windows
445,607
3
24
35,429
0
python,windows,python-idle
However, I want to run programs in some other shell than the crappy windows command prompt, which can't be widened to more than 80 characters. Click on the system box (top-left) in the command prompt and click properties. In the layout tab you can set the width and height of the window and the width and height of the screen buffer. I recommend setting the screen buffer height to 9999 so you can scroll back through a long output.
0
1
0
0
2009-01-15T04:01:00.000
9
0.066568
false
445,595
1
0
0
3
Python for Windows by default comes with IDLE, which is the barest-bones IDE I've ever encountered. For editing files, I'll stick to emacs, thank you very much. However, I want to run programs in some other shell than the crappy windows command prompt, which can't be widened to more than 80 characters. IDLE lets me run programs in it if I open the file, then hit F5 (to go Run-> Run Module). I would rather like to just "run" the command, rather than going through the rigmarole of closing the emacs file, loading the IDLE file, etc. A scan of google and the IDLE docs doesn't seem to give much help about using IDLE's shell but not it's IDE. Any advice from the stack overflow guys? Ideally I'd either like advice on running programs using IDLE's shell advice on other ways to run python programs in windows outside of IDLE or "cmd". Thanks, /YGA
Cleanest way to run/debug python programs in windows
679,859
0
24
35,429
0
python,windows,python-idle
I replaced cmd with Cygwin and Poderosa. May be a little overkill though, if the only problem you have with cmd is that it's a pain to resize. Although you use Emacs instead of Vim, so I guess you're into overkill... ;-)
0
1
0
0
2009-01-15T04:01:00.000
9
0
false
445,595
1
0
0
3
Python for Windows by default comes with IDLE, which is the barest-bones IDE I've ever encountered. For editing files, I'll stick to emacs, thank you very much. However, I want to run programs in some other shell than the crappy windows command prompt, which can't be widened to more than 80 characters. IDLE lets me run programs in it if I open the file, then hit F5 (to go Run-> Run Module). I would rather like to just "run" the command, rather than going through the rigmarole of closing the emacs file, loading the IDLE file, etc. A scan of google and the IDLE docs doesn't seem to give much help about using IDLE's shell but not it's IDE. Any advice from the stack overflow guys? Ideally I'd either like advice on running programs using IDLE's shell advice on other ways to run python programs in windows outside of IDLE or "cmd". Thanks, /YGA
Cleanest way to run/debug python programs in windows
445,618
9
24
35,429
0
python,windows,python-idle
You can easily widen the Windows console by doing the following: click the icon for the console window in the upper right select Properties from the menu click the Layout tab change the Window Size > Width to 140 This can also be saved universally by changing the Defaults on the menu.
0
1
0
0
2009-01-15T04:01:00.000
9
1
false
445,595
1
0
0
3
Python for Windows by default comes with IDLE, which is the barest-bones IDE I've ever encountered. For editing files, I'll stick to emacs, thank you very much. However, I want to run programs in some other shell than the crappy windows command prompt, which can't be widened to more than 80 characters. IDLE lets me run programs in it if I open the file, then hit F5 (to go Run-> Run Module). I would rather like to just "run" the command, rather than going through the rigmarole of closing the emacs file, loading the IDLE file, etc. A scan of google and the IDLE docs doesn't seem to give much help about using IDLE's shell but not it's IDE. Any advice from the stack overflow guys? Ideally I'd either like advice on running programs using IDLE's shell advice on other ways to run python programs in windows outside of IDLE or "cmd". Thanks, /YGA
GAE - How to live with no joins?
446,471
13
13
2,112
1
python,google-app-engine,join,google-cloud-datastore
If you look at how the SQL solution you provided will be executed, it will go basically like this: Fetch a list of friends for the current user For each user in the list, start an index scan over recent posts Merge-join all the scans from step 2, stopping when you've retrieved enough entries You can carry out exactly the same procedure yourself in App Engine, by using the Query instances as iterators and doing a merge join over them. You're right that this will not scale well to large numbers of friends, but it suffers from exactly the same issues the SQL implementation has, it just doesn't disguise them as well: Fetching the latest 20 (for example) entries costs roughly O(n log n) work, where n is the number of friends.
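A framework-agnostic sketch of that merge join; posts_for(friend) stands in for a per-friend datastore query that yields posts newest-first, and the post.date attribute is assumed (heapq.merge grew its key/reverse arguments in Python 3.5):

```python
import heapq
import itertools

def latest_posts(friends, posts_for, limit=20):
    # Each per-friend iterator is already sorted newest-first, so a lazy
    # heap-based merge yields the combined newest-first order; islice stops
    # after `limit`, so only a prefix of each query is actually consumed.
    streams = [posts_for(friend) for friend in friends]
    merged = heapq.merge(*streams, key=lambda post: post.date, reverse=True)
    return list(itertools.islice(merged, limit))
```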
0
1
0
0
2009-01-15T06:07:00.000
4
1
false
445,827
0
0
1
2
Example Problem: Entities: User contains name and a list of friends (User references) Blog Post contains title, content, date and Writer (User) Requirement: I want a page that displays the title and a link to the blog of the last 10 posts by a user's friend. I would also like the ability to keep paging back through older entries. SQL Solution: So in sql land it would be something like: select * from blog_post where user_id in (select friend_id from user_friend where user_id = :userId) order by date GAE solutions i can think of are: Load user, loop through the list of friends and load their latest blog posts. Finally merge all the blog posts to find the latest 10 blog entries In a blog post have a list of all users that have the writer as a friend. This would mean a simple read but would result in quota overload when adding a friend who has lots of blog posts. I don't believe either of these solutions will scale. Im sure others have hit this problem but I've searched, watched google io videos, read other's code ... What am i missing?
GAE - How to live with no joins?
446,477
1
13
2,112
1
python,google-app-engine,join,google-cloud-datastore
"Load user, loop through the list of friends and load their latest blog posts." That's all a join is -- nested loops. Some kinds of joins are loops with lookups. Most lookups are just loops; some are hashes. "Finally merge all the blog posts to find the latest 10 blog entries" That's a ORDER BY with a LIMIT. That's what the database is doing for you. I'm not sure what's not scalable about this; it's what a database does anyway.
0
1
0
0
2009-01-15T06:07:00.000
4
0.049958
false
445,827
0
0
1
2
Example Problem: Entities: User contains name and a list of friends (User references) Blog Post contains title, content, date and Writer (User) Requirement: I want a page that displays the title and a link to the blog of the last 10 posts by a user's friend. I would also like the ability to keep paging back through older entries. SQL Solution: So in sql land it would be something like: select * from blog_post where user_id in (select friend_id from user_friend where user_id = :userId) order by date GAE solutions i can think of are: Load user, loop through the list of friends and load their latest blog posts. Finally merge all the blog posts to find the latest 10 blog entries In a blog post have a list of all users that have the writer as a friend. This would mean a simple read but would result in quota overload when adding a friend who has lots of blog posts. I don't believe either of these solutions will scale. Im sure others have hit this problem but I've searched, watched google io videos, read other's code ... What am i missing?
How to check if a file can be created inside given directory on MS XP/Vista?
450,297
4
6
3,469
0
python,windows,winapi,windows-vista,permissions
I recently wrote an app to pass a set of tests to obtain ISV status from Microsoft, and it included that condition. The way I understood it was that if the user is Least-Privileged then he won't have permission to write in the system folders. So I approached the problem the way Ishmaeel described: I try to create the file, catch the exception, and then inform the user that he doesn't have permission to write files to that directory. In my understanding a Least-Privileged user will not have the necessary permissions to write to those folders; if he does, then he is not a Least-Privileged user. Should I stop bothering just because Windows Vista itself won't allow the Least-Privileged user to save any files in %WINDIR%? In my opinion? Yes.
0
1
0
0
2009-01-16T12:01:00.000
4
1.2
true
450,210
0
0
0
2
I have a code that creates file(s) in user-specified directory. User can point to a directory in which he can't create files, but he can rename it. I have created directory for test purposes, let's call it C:\foo. I have following permissions to C:\foo: Traversing directory/Execute file Removing subfolders and files Removing Read permissions Change permissions Take ownership I don't have any of the following permissions to C:\foo: Full Control File creation Folder creation I have tried following approaches, so far: os.access('C:\foo', os.W_OK) == True st = os.stat('C:\foo') mode = st[stat.ST_MODE] mode & stat.S_IWRITE == True I believe that this is caused by the fact that I can rename folder, so it is changeable for me. But it's content - not. Does anyone know how can I write code that will check for a given directory if current user has permissions to create file in that directory? In brief - I want to check if current user has File creation and Folder creation permissions for given folder name. EDIT: The need for such code arisen from the Test case no 3 from 'Certified for Windows Vista' program, which states: The application must not allow the Least-Privileged user to save any files to Windows System directory in order to pass this test case. Should this be understood as 'Application may try to save file in Windows System directory, but shouldn't crash on failure?' or rather 'Application has to perform security checks before trying to save file?' Should I stop bothering just because Windows Vista itself won't allow the Least-Privileged user to save any files in %WINDIR%?
How to check if a file can be created inside given directory on MS XP/Vista?
450,259
4
6
3,469
0
python,windows,winapi,windows-vista,permissions
I wouldn't waste time and LOCs on checking for permissions. The ultimate test of file creation in Windows is the creation itself. Other factors may come into play (such as existing files (or worse, folders) with the same name, disk space, or background processes), and these conditions can even change between the time you make the initial check and the time you actually try to create your file. So, if I had a scenario like that, I would just design my method not to lose any data in case of failure, go ahead and try to create my file, and offer the user an option to change the selected directory and try again if creation fails.
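A sketch of that try-it-and-see approach (the "x" open mode, which fails if the file already exists, is Python 3.3+):

```python
import os

def try_create(directory, filename, data):
    path = os.path.join(directory, filename)
    try:
        with open(path, "xb") as handle:   # fails if the file exists or we lack permission
            handle.write(data)
        return True
    except OSError as exc:
        print("Could not create %s: %s" % (path, exc))
        return False
```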
0
1
0
0
2009-01-16T12:01:00.000
4
0.197375
false
450,210
0
0
0
2
I have a code that creates file(s) in user-specified directory. User can point to a directory in which he can't create files, but he can rename it. I have created directory for test purposes, let's call it C:\foo. I have following permissions to C:\foo: Traversing directory/Execute file Removing subfolders and files Removing Read permissions Change permissions Take ownership I don't have any of the following permissions to C:\foo: Full Control File creation Folder creation I have tried following approaches, so far: os.access('C:\foo', os.W_OK) == True st = os.stat('C:\foo') mode = st[stat.ST_MODE] mode & stat.S_IWRITE == True I believe that this is caused by the fact that I can rename folder, so it is changeable for me. But it's content - not. Does anyone know how can I write code that will check for a given directory if current user has permissions to create file in that directory? In brief - I want to check if current user has File creation and Folder creation permissions for given folder name. EDIT: The need for such code arisen from the Test case no 3 from 'Certified for Windows Vista' program, which states: The application must not allow the Least-Privileged user to save any files to Windows System directory in order to pass this test case. Should this be understood as 'Application may try to save file in Windows System directory, but shouldn't crash on failure?' or rather 'Application has to perform security checks before trying to save file?' Should I stop bothering just because Windows Vista itself won't allow the Least-Privileged user to save any files in %WINDIR%?
Python/Twisted - Sending to a specific socket object?
460,245
3
2
829
0
python,sockets,twisted,multiprocess
It sounds like you might need to keep a reference to the transport (or protocol) along with the bytes the just came in on that protocol in your 'event' object. That way responses that came in on a connection go out on the same connection. If things don't need to be processed serially perhaps you should think about setting up functors that can handle the data in parallel to remove the need for queueing. Just keep in mind that you will need to protect critical sections of your code. Edit: Judging from your other question about evaluating your server design it would seem that processing in parallel may not be possible for your situation, so my first suggestion stands.
0
1
1
0
2009-01-20T03:43:00.000
1
1.2
true
460,068
0
0
0
1
I have a "manager" process on a node, and several worker processes. The manager is the actual server who holds all of the connections to the clients. The manager accepts all incoming packets and puts them into a queue, and then the worker processes pull the packets out of the queue, process them, and generate a result. They send the result back to the manager (by putting them into another queue which is read by the manager), but here is where I get stuck: how do I send the result to a specific socket? When dealing with the processing of the packets on a single process, it's easy, because when you receive a packet you can reply to it by just grabbing the "transport" object in-context. But how would I do this with the method I'm using?
Python/Twisted - TCP packet fragmentation?
460,224
6
6
3,827
0
python,tcp,twisted,packet
In the dataReceived method you get back the data as a string of indeterminate length meaning that it may be a whole message in your protocol or it may only be part of the message that some 'client' sent to you. You will have to inspect the data to see if it comprises a whole message in your protocol. I'm currently using Twisted on one of my projects to implement a protocol and decided to use the struct module to pack/unpack my data. The protocol I am implementing has a fixed header size so I don't construct any messages until I've read at least HEADER_SIZE amount of bytes. The total message size is declared in this header data portion. I guess you don't really need to define a message length as part of your protocol but it helps. If you didn't define one you would have to have a special delimiter that determines when a message begins/ends. Sort of how the FIX protocol uses the SOH byte to delimit fields. Though it does have a required field that tells you how long a message is (just not how many fields are in a message).
0
1
0
0
2009-01-20T04:38:00.000
3
1.2
true
460,144
0
0
0
3
In Twisted when implementing the dataReceived method, there doesn't seem to be any examples which refer to packets being fragmented. In every other language this is something you manually implement, so I was just wondering if this is done for you in twisted already or what? If so, do I need to prefix my packets with a length header? Or do I have to do this manually? If so, what way would that be?
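A rough sketch of the length-prefix idea from the accepted answer, using a fixed 4-byte header packed with struct; the header format is an assumption, not something Twisted mandates.

    import struct
    from twisted.internet.protocol import Protocol

    HEADER = '!I'                        # 4-byte unsigned int, network byte order
    HEADER_SIZE = struct.calcsize(HEADER)

    class FramedProtocol(Protocol):
        def connectionMade(self):
            self._buffer = b''

        def dataReceived(self, data):
            # TCP hands us arbitrary chunks; accumulate until a whole message is in
            self._buffer += data
            while len(self._buffer) >= HEADER_SIZE:
                (length,) = struct.unpack(HEADER, self._buffer[:HEADER_SIZE])
                if len(self._buffer) < HEADER_SIZE + length:
                    break                # wait for the rest of the message
                message = self._buffer[HEADER_SIZE:HEADER_SIZE + length]
                self._buffer = self._buffer[HEADER_SIZE + length:]
                self.messageReceived(message)

        def messageReceived(self, message):
            raise NotImplementedError    # subclasses handle complete messages

        def sendMessage(self, message):
            self.transport.write(struct.pack(HEADER, len(message)) + message)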
Python/Twisted - TCP packet fragmentation?
461,477
6
6
3,827
0
python,tcp,twisted,packet
When dealing with TCP, you should really forget all notion of 'packets'. TCP is a stream protocol - you stream data in and data streams out the other side. Once the data is sent, it is allowed to arrive in as many or as few blocks as it wants, as long as the data all arrives in the right order. You'll have to manually do the delimitation as with other languages, with a length field, or a message type field, or a special delimiter character, etc.
0
1
0
0
2009-01-20T04:38:00.000
3
1
false
460,144
0
0
0
3
In Twisted when implementing the dataReceived method, there doesn't seem to be any examples which refer to packets being fragmented. In every other language this is something you manually implement, so I was just wondering if this is done for you in twisted already or what? If so, do I need to prefix my packets with a length header? Or do I have to do this manually? If so, what way would that be?
Python/Twisted - TCP packet fragmentation?
817,378
2
6
3,827
0
python,tcp,twisted,packet
You can also use a LineReceiver protocol
0
1
0
0
2009-01-20T04:38:00.000
3
0.132549
false
460,144
0
0
0
3
In Twisted when implementing the dataReceived method, there doesn't seem to be any examples which refer to packets being fragmented. In every other language this is something you manually implement, so I was just wondering if this is done for you in twisted already or what? If so, do I need to prefix my packets with a length header? Or do I have to do this manually? If so, what way would that be?
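For the LineReceiver suggestion above, a minimal sketch; the delimiter and the echo behaviour are just placeholders.

    from twisted.internet import reactor
    from twisted.internet.protocol import Factory
    from twisted.protocols.basic import LineReceiver

    class Echo(LineReceiver):
        delimiter = b'\r\n'          # LineReceiver splits the stream on this for you

        def lineReceived(self, line):
            self.sendLine(b'you said: ' + line)

    factory = Factory()
    factory.protocol = Echo
    reactor.listenTCP(8000, factory)
    reactor.run()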
Parallel processing from a command queue on Linux (bash, python, ruby... whatever)
463,981
-3
45
22,010
0
python,ruby,bash,shell,parallel-processing
Can you elaborate on what you mean by in parallel? It sounds like you need to implement some sort of locking in the queue so your entries are not selected twice, etc., and the commands run only once. Most queue systems cheat -- they just write a giant to-do list, then select e.g. ten items, work them, and select the next ten items. There's no parallelization. If you provide some more details, I'm sure we can help you out.
0
1
0
0
2009-01-21T02:54:00.000
12
-0.049958
false
463,963
1
0
0
3
I have a list/queue of 200 commands that I need to run in a shell on a Linux server. I only want to have a maximum of 10 processes running (from the queue) at once. Some processes will take a few seconds to complete, other processes will take much longer. When a process finishes I want the next command to be "popped" from the queue and executed. Does anyone have code to solve this problem? Further elaboration: There are 200 pieces of work that need to be done, in a queue of some sort. I want to have at most 10 pieces of work going on at once. When a thread finishes a piece of work it should ask the queue for the next piece of work. If there's no more work in the queue, the thread should die. When all the threads have died it means all the work has been done. The actual problem I'm trying to solve is using imapsync to synchronize 200 mailboxes from an old mail server to a new mail server. Some users have large mailboxes and take a long time to sync, others have very small mailboxes and sync quickly.
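A small sketch of the thread-pool-plus-queue pattern described in the question (the command list is a placeholder): each of the 10 workers pops a command, runs it, and exits when the queue is empty.

    import subprocess
    import threading
    try:
        from queue import Queue          # Python 3
    except ImportError:
        from Queue import Queue          # Python 2

    commands = ['echo job %d' % i for i in range(200)]   # placeholder commands

    q = Queue()
    for cmd in commands:
        q.put(cmd)

    def worker():
        while True:
            try:
                cmd = q.get_nowait()
            except Exception:            # queue is empty: this thread is done
                return
            subprocess.call(cmd, shell=True)
            q.task_done()

    threads = [threading.Thread(target=worker) for _ in range(10)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print('all work done')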
Parallel processing from a command queue on Linux (bash, python, ruby... whatever)
464,007
7
45
22,010
0
python,ruby,bash,shell,parallel-processing
GNU make (and perhaps other implementations as well) has the -j argument, which governs how many jobs it will run at once. When a job completes, make will start another one.
0
1
0
0
2009-01-21T02:54:00.000
12
1
false
463,963
1
0
0
3
I have a list/queue of 200 commands that I need to run in a shell on a Linux server. I only want to have a maximum of 10 processes running (from the queue) at once. Some processes will take a few seconds to complete, other processes will take much longer. When a process finishes I want the next command to be "popped" from the queue and executed. Does anyone have code to solve this problem? Further elaboration: There are 200 pieces of work that need to be done, in a queue of some sort. I want to have at most 10 pieces of work going on at once. When a thread finishes a piece of work it should ask the queue for the next piece of work. If there's no more work in the queue, the thread should die. When all the threads have died it means all the work has been done. The actual problem I'm trying to solve is using imapsync to synchronize 200 mailboxes from an old mail server to a new mail server. Some users have large mailboxes and take a long time to sync, others have very small mailboxes and sync quickly.
Parallel processing from a command queue on Linux (bash, python, ruby... whatever)
628,543
13
45
22,010
0
python,ruby,bash,shell,parallel-processing
PPSS (Parallel Processing Shell Script) was written for exactly this kind of job. Google for that name and you will find it; I won't linkspam.
0
1
0
0
2009-01-21T02:54:00.000
12
1
false
463,963
1
0
0
3
I have a list/queue of 200 commands that I need to run in a shell on a Linux server. I only want to have a maximum of 10 processes running (from the queue) at once. Some processes will take a few seconds to complete, other processes will take much longer. When a process finishes I want the next command to be "popped" from the queue and executed. Does anyone have code to solve this problem? Further elaboration: There are 200 pieces of work that need to be done, in a queue of some sort. I want to have at most 10 pieces of work going on at once. When a thread finishes a piece of work it should ask the queue for the next piece of work. If there's no more work in the queue, the thread should die. When all the threads have died it means all the work has been done. The actual problem I'm trying to solve is using imapsync to synchronize 200 mailboxes from an old mail server to a new mail server. Some users have large mailboxes and take a long time to sync, others have very small mailboxes and sync quickly.
unit testing for an application server
465,422
1
3
1,357
0
python,unit-testing,twisted
I think you chose the wrong direction. It's true that the Trial docs are very light, but Trial is based on unittest and only adds some machinery to deal with the reactor loop and asynchronous calls (it's not easy to write tests that deal with deferreds). Any of your tests that don't involve deferreds or asynchronous calls will be exactly like normal unittest tests. The trial command is a test runner (a bit like nose), so you don't have to write test suites for your tests; you will save time with it. On top of that, the trial command can output profiling and coverage information - just run trial -h for more info. In any case, the first thing you should ask yourself is which kind of tests you need the most: unit tests, integration tests or system tests (black-box). It's possible to do all of them with Trial, but it isn't always the best fit.
0
1
0
1
2009-01-21T09:15:00.000
4
0.049958
false
464,543
0
0
1
3
I wrote an application server (using python & twisted) and I want to start writing some tests. But I do not want to use Twisted's Trial due to time constraints and not having time to play with it now. So here is what I have in mind: write a small test client that connects to the app server and makes the necessary requests (the communication protocol is some in-house XML), store the received XML in a static way and then write some tests on those static data using unittest. My question is: Is this a correct approach and if yes, what kind of tests are covered with this approach? Also, using this method has several disadvantages, like: not being able to access the database layer in order to build/rebuild the schema, and when is the test client going to connect to the server: per each unit test or before running the test suite?
unit testing for an application server
464,870
1
3
1,357
0
python,unit-testing,twisted
"My question is: Is this a correct approach?" It's what you chose. You made a lot of excuses, so I'm assuming that your pretty well fixed on this course. It's not the best, but you've already listed all your reasons for doing it (and then asked follow-up questions on this specific course of action). "correct" doesn't enter into it anymore, so there's no answer to this question. "what kind of tests are covered with this approach?" They call it "black-box" testing. The application server is a black box that has a few inputs and outputs, and you can't test any of it's internals. It's considered one acceptable form of testing because it tests the bottom-line external interfaces for acceptable behavior. If you have problems, it turns out to be useless for doing diagnostic work. You'll find that you need to also to white-box testing on the internal structures. "not being able to access the database layer in order to build/rebuild the schema," Why not? This is Python. Write a separate tool that imports that layer and does database builds. "when will the test client going to connect to the server: per each unit test or before running the test suite?" Depends on the intent of the test. Depends on your use cases. What happens in the "real world" with your actual intended clients? You'll want to test client-like behavior, making connections the way clients make connections. Also, you'll want to test abnormal behavior, like clients dropping connections or doing things out of order, or unconnected.
0
1
0
1
2009-01-21T09:15:00.000
4
1.2
true
464,543
0
0
1
3
I wrote an application server (using python & twisted) and I want to start writing some tests. But I do not want to use Twisted's Trial due to time constraints and not having time to play with it now. So here is what I have in mind: write a small test client that connects to the app server and makes the necessary requests (the communication protocol is some in-house XML), store the received XML in a static way and then write some tests on those static data using unittest. My question is: Is this a correct approach and if yes, what kind of tests are covered with this approach? Also, using this method has several disadvantages, like: not being able to access the database layer in order to build/rebuild the schema, and when is the test client going to connect to the server: per each unit test or before running the test suite?
unit testing for an application server
464,596
0
3
1,357
0
python,unit-testing,twisted
I haven't used Twisted before, and the Twisted/Trial documentation isn't stellar from what I just saw, but it'll likely take you 2-3 days to implement the test system you describe above correctly. Now, like I said, I have no idea about Trial, but I guess you could probably get it working in 1-2 days, since you already have a Twisted application. If Trial gives you more coverage in less time, I'd go with Trial. But remember this is just an answer from a very cursory look at the docs.
0
1
0
1
2009-01-21T09:15:00.000
4
0
false
464,543
0
0
1
3
I wrote an application server (using python & twisted) and I want to start writing some tests. But I do not want to use Twisted's Trial due to time constraints and not having time to play with it now. So here is what I have in mind: write a small test client that connects to the app server and makes the necessary requests (the communication protocol is some in-house XML), store the received XML in a static way and then write some tests on those static data using unittest. My question is: Is this a correct approach and if yes, what kind of tests are covered with this approach? Also, using this method has several disadvantages, like: not being able to access the database layer in order to build/rebuild the schema, and when is the test client going to connect to the server: per each unit test or before running the test suite?
How can I return system information in Python?
466,755
1
25
29,587
0
python,operating-system
It looks like you want to get a lot more information than the standard Python library offers. If I were you, I would download the source code for 'ps' or 'top', or the Gnome/KDE version of the same, or any number of system monitoring/graphing programs which are more likely to have all the necessary Unix cross platform bits, see what they do, and then make the necessary native calls with ctypes. It's trivial to detect the platform. For example with ctypes you might try to load libc.so, if that throws an exception try to load 'msvcrt.dll' and so on. Not to mention simply checking the operating system's name with os.name. Then just delegate calls to your new cross-platform API to the appropriate platform-specific (sorry) implementation. When you're done, don't forget to upload the resulting package to pypi.
0
1
0
0
2009-01-21T19:40:00.000
7
0.028564
false
466,684
1
0
0
1
Using Python, how can information such as CPU usage, memory usage (free, used, etc), process count, etc be returned in a generic manner so that the same code can be run on Linux, Windows, BSD, etc? Alternatively, how could this information be returned on all the above systems with the code specific to that OS being run only if that OS is indeed the operating environment?
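A sketch of the delegation pattern suggested in the answer above: detect the platform once and dispatch to per-OS implementations. The Linux branch reads /proc purely as an example; the other branches are stubs you would fill in with ctypes or other native calls.

    import sys

    def _cpu_count_linux():
        with open('/proc/cpuinfo') as f:
            return sum(1 for line in f if line.startswith('processor'))

    def _cpu_count_windows():
        import multiprocessing
        return multiprocessing.cpu_count()

    def cpu_count():
        # delegate to the platform-specific implementation
        if sys.platform.startswith('linux'):
            return _cpu_count_linux()
        if sys.platform.startswith('win'):
            return _cpu_count_windows()
        raise NotImplementedError('no implementation for %s yet' % sys.platform)

    print(cpu_count())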
Python/Twisted multiuser server - what is more efficient?
474,353
2
2
1,148
0
python,twisted,multi-user
I think that B is problematic. The thread would only run on one CPU, and even if it runs a process, the thread is still running. A may be better. It is best to try and measure both in terms of time and see which one is faster and which one scales well. However, I'll reiterate that I highly doubt that B will scale well.
0
1
0
0
2009-01-23T02:21:00.000
2
1.2
true
471,660
1
0
0
1
In Python, if I want my server to scale well CPU-wise, I obviously need to spawn multiple processes. I was wondering which is better (using Twisted): A) The manager process (the one who holds the actual socket connections) puts received packets into a shared queue (the one from the multiprocessing module), and worker processes pull the packets out of the queue, process them and send the results back to the client. B) The manager process (the one who holds the actual socket connections) launches a deferred thread and then calls the apply() function on the process pool. Once the result returns from the worker process, the manager sends the result back to the client. In both implementations, the worker processes use thread pools so they can work on more than one packet at once (since there will be a lot of database querying).
Python and os.chroot
478,396
7
2
5,674
0
python,linux,chroot
Yes, there are pitfalls. Security-wise: if you run as root, there are always ways to break out. So first chroot(), then PERMANENTLY drop privileges to another user. Put nothing which isn't absolutely required into the chroot tree - especially no suid/sgid files, named pipes, unix domain sockets or device nodes. Python-wise, your whole module loading gets screwed up; Python is simply not made for such scenarios, and if your application is moderately complex you will run into module loading issues. I think much more important than chrooting is running as a non-privileged user and simply using the file system permissions to keep that user from reading anything of importance.
0
1
0
0
2009-01-25T21:39:00.000
2
1.2
true
478,359
0
0
0
1
I'm writing a web-server in Python as a hobby project. The code is targeted at *NIX machines. I'm new to developing on Linux and even newer to Python itself. I am worried about people breaking out of the folder that I'm using to serve up the web-site. The most obvious way to handle this is to filter requests for documents like /../../etc/passwd. However, I'm worried that there might be clever ways to go up the directory tree that I'm not aware of and that my filter consequently won't catch. I'm considering using os.chroot so that the root directory is the web-site itself. Is this a safe way of protecting against these jail-breaking attacks? Are there any potential pitfalls to doing this that will hurt me down the road?
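A sketch of the chroot-then-drop-privileges sequence from the answer above; it must start as root, and the path, uid and gid values are placeholders.

    import os

    def jail(root='/srv/mysite', uid=1000, gid=1000):   # placeholder values
        os.chroot(root)        # requires root
        os.chdir('/')          # make sure the cwd is inside the jail
        os.setgroups([])       # drop supplementary groups
        os.setgid(gid)         # drop group first, then user -
        os.setuid(uid)         # after setuid() we can no longer get root back

    jail()
    # ... start serving files; the process now cannot see anything outside root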
Locking a file in Python
490,032
8
193
216,892
0
python,file-locking
Coordinating access to a single file at the OS level is fraught with all kinds of issues that you probably don't want to solve. Your best bet is have a separate process that coordinates read/write access to that file.
0
1
0
0
2009-01-28T23:20:00.000
14
1
false
489,861
1
0
0
2
I need to lock a file for writing in Python. It will be accessed from multiple Python processes at once. I have found some solutions online, but most fail for my purposes as they are often only Unix based or Windows based.
Locking a file in Python
490,919
14
193
216,892
0
python,file-locking
Locking is platform and device specific, but generally you have a few options: Use flock(), or equivalent (if your OS supports it). This is advisory locking: unless you check for the lock, it's ignored. Use a lock-copy-move-unlock methodology, where you copy the file, write the new data, then move it (move, not copy - move is an atomic operation in Linux; check your OS), and you check for the existence of the lock file. Use a directory as a "lock". This is necessary if you're writing to NFS, since NFS doesn't support flock(). There's also the possibility of using shared memory between the processes, but I've never tried that; it's very OS-specific. For all these methods, you'll have to use a spin-lock (retry-after-failure) technique for acquiring and testing the lock. This does leave a small window for mis-synchronization, but it's generally small enough not to be a major issue. If you're looking for a solution that is cross-platform, then you're better off logging to another system via some other mechanism (the next best thing is the NFS technique above). Note that sqlite is subject to the same constraints over NFS that normal files are, so you can't write to an sqlite database on a network share and get synchronization for free.
0
1
0
0
2009-01-28T23:20:00.000
14
1
false
489,861
1
0
0
2
I need to lock a file for writing in Python. It will be accessed from multiple Python processes at once. I have found some solutions online, but most fail for my purposes as they are often only Unix based or Windows based.
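A sketch of the "directory as a lock" idea from the answer above: os.mkdir either succeeds or fails atomically and behaves the same on Unix and Windows. The lock name and retry delay are arbitrary.

    import os
    import time

    LOCKDIR = 'shared.txt.lock'    # arbitrary lock name next to the protected file

    def acquire(timeout=10.0, delay=0.05):
        deadline = time.time() + timeout
        while True:
            try:
                os.mkdir(LOCKDIR)          # atomic: only one process can succeed
                return
            except OSError:
                if time.time() > deadline:
                    raise RuntimeError('could not acquire lock')
                time.sleep(delay)          # spin-lock: retry after a short pause

    def release():
        os.rmdir(LOCKDIR)

    acquire()
    try:
        with open('shared.txt', 'a') as f:
            f.write('one line, written under the lock\n')
    finally:
        release()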
Best opensource IDE for building applications on Google App Engine?
498,183
0
15
11,377
0
python,google-app-engine,ide
For my recent GAE project I tried both eclipse with pydev and intellij with its python plugin. I use intellij for my "real" work and so I found it to be the most natural and easy to use, personally. It is not open source, but if you already have a license it is no extra cost. I found the eclipse plugin to be very good as well. You don't get as much intellisense as you would with java, but I was very impressed with what you do get from a dynamically typed language.
0
1
0
0
2009-01-30T14:03:00.000
10
0
false
495,579
0
0
1
5
Looking to dabble with GAE and python, and I'd like to know what are some of the best tools for this - thanks!
Best opensource IDE for building applications on Google App Engine?
497,470
0
15
11,377
0
python,google-app-engine,ide
I've been using gedit and am pretty happy with it; there are a couple of good plugins that make life easier (e.g. Class Browser). I tried eclipse but it's just not the same experience you get with Java.
0
1
0
0
2009-01-30T14:03:00.000
10
0
false
495,579
0
0
1
5
Looking to dabble with GAE and python, and I'd like to know what are some of the best tools for this - thanks!
Best opensource IDE for building applications on Google App Engine?
495,783
6
15
11,377
0
python,google-app-engine,ide
Netbeans has some very nice tools for Python development
0
1
0
0
2009-01-30T14:03:00.000
10
1
false
495,579
0
0
1
5
Looking to dabble with GAE and python, and I'd like to know what are some of the best tools for this - thanks!
Best opensource IDE for building applications on Google App Engine?
498,484
6
15
11,377
0
python,google-app-engine,ide
I use PyDev on Eclipse, and it works well for Django too!
0
1
0
0
2009-01-30T14:03:00.000
10
1
false
495,579
0
0
1
5
Looking to dabble with GAE and python, and I'd like to know what are some of the best tools for this - thanks!
Best opensource IDE for building applications on Google App Engine?
496,385
2
15
11,377
0
python,google-app-engine,ide
VIM (there are enough plug-ins to make it IDE-like), Komodo IDE ($$), Eclipse w/PyDev, NetBeans with Python support, WingIDE ($$), SPE (Stani's Python Editor)
0
1
0
0
2009-01-30T14:03:00.000
10
0.039979
false
495,579
0
0
1
5
Looking to dabble with GAE and python, and I'd like to know what are some of the best tools for this - thanks!
In Python - how to execute system command with no output
500,483
2
12
6,820
0
python
You can redirect output into temp file and delete it afterward. But there's also a method called popen that redirects output directly to your program so it won't go on screen.
0
1
0
0
2009-02-01T09:20:00.000
2
0.197375
false
500,477
0
0
0
1
Is there a built-in method in Python to execute a system command without displaying the output? I only want to grab the return value. It is important that it be cross-platform, so just redirecting the output to /dev/null won't work on Windows, and the other way around. I know I can just check os.platform and build the redirection myself, but I'm hoping for a built-in solution.
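One way to realize this without platform-specific redirection is the subprocess module combined with os.devnull (which is 'nul' on Windows and '/dev/null' elsewhere); the command shown is just an example.

    import os
    import subprocess

    def run_quietly(cmd):
        # swallow stdout and stderr, return only the exit code
        with open(os.devnull, 'w') as devnull:
            return subprocess.call(cmd, stdout=devnull, stderr=devnull)

    rc = run_quietly(['python', '--version'])   # example command, assumed to be on PATH
    print('return code: %d' % rc)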
How to distribute `.desktop` files and icons for a Python package in Gnome (with distutils or setuptools)?
501,624
1
10
3,003
0
python,packaging,setuptools,gnome,distutils2
In general, yes - everything is better than autotools when building Python projects. I have had good experiences with setuptools so far. However, installing files into fixed locations is not a strength of setuptools - after all, it's not something to build installers for Python apps with, but to distribute Python libraries. For the installation of files which are not application data files (like images, UI files etc.) but provide integration into the operating system, you are better off using a real packaging format (like RPM or deb). That said, nothing stops you from basing the build process on setuptools and using a small makefile for installing everything into its rightful place.
0
1
0
0
2009-02-01T21:18:00.000
4
0.049958
false
501,597
1
0
0
1
Currently I'm using the auto-tools to build/install and package a project of mine, but I would really like to move to something that feels more "pythonic". My project consists of two scripts, one module, two glade GUI descriptions, and two .desktop files. It's currently a pure python project, though that's likely to change soon-ish. Looking at setuptools I can easily see how to deal with everything except the .desktop files; they have to end up in a specific directory so that Gnome can find them. Is using distutils/setuptools a good idea to begin with?
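For the .desktop placement specifically, distutils/setuptools can install arbitrary files via data_files, although (as the first answer notes) a real package format may be a better fit. The file names and paths below are assumptions about a typical Gnome layout, not part of the original project.

    # setup.py (sketch)
    from setuptools import setup

    setup(
        name='mytool',                       # placeholder project metadata
        version='0.1',
        py_modules=['mytool'],
        scripts=['bin/mytool', 'bin/mytool-helper'],
        data_files=[
            # install .desktop files where Gnome looks for them
            ('share/applications', ['data/mytool.desktop', 'data/mytool-helper.desktop']),
            # glade UI descriptions alongside the application data
            ('share/mytool', ['data/main.glade', 'data/prefs.glade']),
        ],
    )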
Change directory to the directory of a Python script
23,595,382
14
15
17,931
0
python,scripting,directory
os.chdir(os.path.dirname(os.path.abspath(__file__))) should do it. os.chdir(os.path.dirname(__file__)) would not work if the script is run from the directory in which it is present.
0
1
0
0
2009-02-04T01:24:00.000
4
1
false
509,742
1
0
0
2
How do I change directory to the directory my Python script is in? So far I have figured out that I should use os.chdir and sys.argv[0]. I'm sure there is a better way than writing my own function to parse argv[0].
Change directory to the directory of a Python script
509,987
7
15
17,931
0
python,scripting,directory
Sometimes __file__ is not defined, in this case you can try sys.path[0]
0
1
0
0
2009-02-04T01:24:00.000
4
1
false
509,742
1
0
0
2
How do I change directory to the directory my Python script is in? So far I have figured out that I should use os.chdir and sys.argv[0]. I'm sure there is a better way than writing my own function to parse argv[0].
How to clear the interpreter console?
32,280,047
25
437
824,944
0
python,windows,console
Quickest and easiest way without a doubt is Ctrl+L. This is the same for OS X on the terminal.
0
1
0
0
2009-02-05T21:19:00.000
30
1
false
517,970
1
0
0
7
Like most Python developers, I typically keep a console window open with the Python interpreter running to test commands, dir() stuff, help() stuff, etc. Like any console, after a while the visible backlog of past commands and prints gets to be cluttered, and sometimes confusing when re-running the same command several times. I'm wondering if, and how, to clear the Python interpreter console. I've heard about doing a system call and either calling cls on Windows or clear on Linux, but I was hoping there was something I could command the interpreter itself to do. Note: I'm running on Windows, so Ctrl+L doesn't work.
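The question itself mentions the system-call route (cls on Windows, clear elsewhere); a tiny cross-platform helper along those lines, for use inside the interactive interpreter:

    import os

    def clear():
        # 'nt' covers Windows; everything else is assumed to have a 'clear' command
        os.system('cls' if os.name == 'nt' else 'clear')

    clear()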
How to clear the interpreter console?
23,730,811
2
437
824,944
0
python,windows,console
just use this.. print '\n'*1000
0
1
0
0
2009-02-05T21:19:00.000
30
0.013333
false
517,970
1
0
0
7
Like most Python developers, I typically keep a console window open with the Python interpreter running to test commands, dir() stuff, help() stuff, etc. Like any console, after a while the visible backlog of past commands and prints gets to be cluttered, and sometimes confusing when re-running the same command several times. I'm wondering if, and how, to clear the Python interpreter console. I've heard about doing a system call and either calling cls on Windows or clear on Linux, but I was hoping there was something I could command the interpreter itself to do. Note: I'm running on Windows, so Ctrl+L doesn't work.
How to clear the interpreter console?
518,401
4
437
824,944
0
python,windows,console
Use IDLE. It has many handy features. Ctrl+F6, for example, resets the console. Closing and opening the console are good ways to clear it.
0
1
0
0
2009-02-05T21:19:00.000
30
0.02666
false
517,970
1
0
0
7
Like most Python developers, I typically keep a console window open with the Python interpreter running to test commands, dir() stuff, help() stuff, etc. Like any console, after a while the visible backlog of past commands and prints gets to be cluttered, and sometimes confusing when re-running the same command several times. I'm wondering if, and how, to clear the Python interpreter console. I've heard about doing a system call and either calling cls on Windows or clear on Linux, but I was hoping there was something I could command the interpreter itself to do. Note: I'm running on Windows, so Ctrl+L doesn't work.
How to clear the interpreter console?
17,536,224
1
437
824,944
0
python,windows,console
OK, so this is a much less technical answer, but I'm using the Python plugin for Notepad++ and it turns out you can just clear the console manually by right-clicking on it and clicking "clear". Hope this helps someone out there!
0
1
0
0
2009-02-05T21:19:00.000
30
0.006667
false
517,970
1
0
0
7
Like most Python developers, I typically keep a console window open with the Python interpreter running to test commands, dir() stuff, help() stuff, etc. Like any console, after a while the visible backlog of past commands and prints gets to be cluttered, and sometimes confusing when re-running the same command several times. I'm wondering if, and how, to clear the Python interpreter console. I've heard about doing a system call and either calling cls on Windows or clear on Linux, but I was hoping there was something I could command the interpreter itself to do. Note: I'm running on Windows, so Ctrl+L doesn't work.
How to clear the interpreter console?
31,640,548
2
437
824,944
0
python,windows,console
If it is on mac, then a simple cmd + k should do the trick.
0
1
0
0
2009-02-05T21:19:00.000
30
0.013333
false
517,970
1
0
0
7
Like most Python developers, I typically keep a console window open with the Python interpreter running to test commands, dir() stuff, help() stuff, etc. Like any console, after a while the visible backlog of past commands and prints gets to be cluttered, and sometimes confusing when re-running the same command several times. I'm wondering if, and how, to clear the Python interpreter console. I've heard about doing a system call and either calling cls on Windows or clear on Linux, but I was hoping there was something I could command the interpreter itself to do. Note: I'm running on Windows, so Ctrl+L doesn't work.
How to clear the interpreter console?
29,520,444
1
437
824,944
0
python,windows,console
I am using Spyder (Python 2.7), and to clean the interpreter console I use either %clear, which forces the command line to go to the top so I will not see the previous old commands, or I click "option" on the console environment and select "Restart kernel", which removes everything.
0
1
0
0
2009-02-05T21:19:00.000
30
0.006667
false
517,970
1
0
0
7
Like most Python developers, I typically keep a console window open with the Python interpreter running to test commands, dir() stuff, help() stuff, etc. Like any console, after a while the visible backlog of past commands and prints gets to be cluttered, and sometimes confusing when re-running the same command several times. I'm wondering if, and how, to clear the Python interpreter console. I've heard about doing a system call and either calling cls on Windows or clear on Linux, but I was hoping there was something I could command the interpreter itself to do. Note: I'm running on Windows, so Ctrl+L doesn't work.
How to clear the interpreter console?
18,846,817
1
437
824,944
0
python,windows,console
I found the simplest way is just to close the window and run a module/script to reopen the shell.
0
1
0
0
2009-02-05T21:19:00.000
30
0.006667
false
517,970
1
0
0
7
Like most Python developers, I typically keep a console window open with the Python interpreter running to test commands, dir() stuff, help() stuff, etc. Like any console, after a while the visible backlog of past commands and prints gets to be cluttered, and sometimes confusing when re-running the same command several times. I'm wondering if, and how, to clear the Python interpreter console. I've heard about doing a system call and either calling cls on Windows or clear on Linux, but I was hoping there was something I could command the interpreter itself to do. Note: I'm running on Windows, so Ctrl+L doesn't work.
Insert Command into Bash Shell
524,104
3
1
2,348
0
python,linux,bash,shell,command
You can do this, but only if the shell runs as a subprocess of your Python program; you can't feed content into the stdin of your parent process. (If you could, UNIX would have a host of related security issues when folks run processes with fewer privileges than the calling shell!) If you're familiar with how Expect allows passthrough to interactive subprocesses (with specific key sequences from the user or strings received from the child process triggering matches and sending control back to your program), the same thing can be done from Python with pexpect. Alternately, as another post mentioned, the curses module provides full control over the drawing of terminal displays -- which you'll want if this history menu is happening within the window rather than in a graphical (X11/win32) pop-up.
0
1
0
0
2009-02-07T16:30:00.000
4
0.148885
false
524,068
0
0
0
1
Is there any way to inject a command into a bash prompt in Linux? I am working on a command history app - like the Ctrl+R lookup but different. I am using python for this. I will show a list of commands from history based on the user's search term - if the user presses enter, the app will execute the command and print the results. So far, so good. If the user chooses a command and then press the right or left key, I want to insert the command into the prompt - so that the user can edit the command before executing it. If you are on Linux, just fire up a bash console, press Ctrl+r, type cd(or something), and then press the right arrow key - the selected command will be shown at the prompt. This is the functionality I am looking for - but I want to know how to do that from within python.
Find and Replace Inside a Text File from a Bash Command
70,116,740
1
687
811,095
0
bash,replace,scripting,ironpython
For Mac users, in case you don't read the comments :) As mentioned by @Austin, if you get the 'invalid command code' error: for in-place replacements, BSD sed requires a file extension after the -i flag, used to save a backup file with the given extension. sed -i '.bak' 's/find/replace/' /file.txt You can use the empty string '' if you want to skip the backup. sed -i '' 's/find/replace/' /file.txt All credit to @Austin
0
1
0
1
2009-02-08T11:57:00.000
17
0.011764
false
525,592
0
0
0
2
What's the simplest way to do a find and replace for a given input string, say abc, and replace it with another string, say XYZ, in the file /tmp/file.txt? I am writing an app and using IronPython to execute commands through SSH - but I don't know Unix that well and don't know what to look for. I have heard that Bash, apart from being a command line interface, can be a very powerful scripting language. So, if this is true, I assume you can perform actions like these. Can I do it with bash, and what's the simplest (one line) script to achieve my goal?
Find and Replace Inside a Text File from a Bash Command
68,204,228
2
687
811,095
0
bash,replace,scripting,ironpython
The simplest way to replace multiple pieces of text in a file is the sed command: sed -i 's#a/b/c#D/E#g;s#/x/y/z#D:/X#g;' filename In the command above, s#a/b/c#D/E#g replaces a/b/c with D/E, and then after the ; we do the same kind of substitution again.
0
1
0
1
2009-02-08T11:57:00.000
17
0.023525
false
525,592
0
0
0
2
What's the simplest way to do a find and replace for a given input string, say abc, and replace it with another string, say XYZ, in the file /tmp/file.txt? I am writing an app and using IronPython to execute commands through SSH - but I don't know Unix that well and don't know what to look for. I have heard that Bash, apart from being a command line interface, can be a very powerful scripting language. So, if this is true, I assume you can perform actions like these. Can I do it with bash, and what's the simplest (one line) script to achieve my goal?
Windows System Idle Processes interfering with performance measurement
525,925
-1
1
4,707
0
windows,performance,python-idle
If you "system idle process" is taking up 100% then essentially you machine is bored, nothing is going on. If you add up everything going on in task manager, subtract this number from 100%, then you will have the value of "system idle process." Notice it consumes almost no memory at all and cannot be affecting performance.
0
1
0
0
2009-02-08T14:30:00.000
3
-0.066568
false
525,807
1
0
0
1
I am doing some performance measurement of my code on a Windows box and I am finding that I am getting dramatically different results between measurements. A quick bit of ad hoc exploration during a slow one shows in the task manager System Idle Processes taking up almost 100% CPU. Does anyone know what System Idle Processes actually means and what Windows features it may be running? NB: I am not measuring performance using the task manager, I just used it to take a look at what else was running during a particularly slow measurement. Please think before saying this is not programming related and closing the question. I would not ask it unless I thought there were grounds to say it is. In this case I believe it clearly is because it is detrimentally affecting my development and test environments and in order to sort it out I need to know a bit more about it. Programming does not start and end with the writing of the code.
How to deploy a Python application with libraries as source with no further dependencies?
528,064
8
13
7,725
0
python,deployment,layout,bootstrapping
I sometimes use the approach I describe below, for the exact same reason that @Boris states: I would prefer that the use of some code is as easy as a) svn checkout/update - b) go. But for the record: I use virtualenv/easy_install most of the time. I agree to a certain extent to the critisisms by @Ali A and @S.Lott Anyway, the approach I use depends on modifying sys.path, and works like this: Require python and setuptools (to enable loading code from eggs) on all computers that will use your software. Organize your directory structure this: project/ *.py scriptcustomize.py file.pth thirdparty/ eggs/ mako-vNNN.egg ... .egg code/ elementtree\ *.py ... In your top-level script(s) include the following code at the top: from scriptcustomize import apply_pth_files apply_pth_files(__file__) Add scriptcustomize.py to your project folder: import os from glob import glob import fileinput import sys def apply_pth_files(scriptfilename, at_beginning=False): """At the top of your script: from scriptcustomize import apply_pth_files apply_pth_files(__file__) """ directory = os.path.dirname(scriptfilename) files = glob(os.path.join(directory, '*.pth')) if not files: return for line in fileinput.input(files): line = line.strip() if line and line[0] != '#': path = os.path.join(directory, line) if at_beginning: sys.path.insert(0, path) else: sys.path.append(path) Add one or more *.pth file(s) to your project folder. On each line, put a reference to a directory with packages. For instance: # contents of *.pth file thirdparty/code thirdparty/eggs/mako-vNNN.egg I "kind-of" like this approach. What I like: it is similar to how *.pth files work, but for individual programs instead of your entire site-packages. What I do not like: having to add the two lines at the beginning of the top-level scripts. Again: I use virtualenv most of the time. But I tend to use virtualenv for projects where I have tight control of the deployment scenario. In cases where I do not have tight control, I tend to use the approach I describe above. It makes it really easy to package a project as a zip and have the end user "install" it (by unzipping).
0
1
0
0
2009-02-09T09:20:00.000
5
1.2
true
527,510
1
0
0
4
Background: I have a small Python application that makes life for developers releasing software in our company a bit easier. I build an executable for Windows using py2exe. The application as well as the binary are checked into Subversion. Distribution happens by people just checking out the directory from SVN. The program has about 6 different Python library dependencies (e.g. ElementTree, Mako) The situation: Developers want to hack on the source of this tool and then run it without having to build the binary. Currently this means that they need a python 2.6 interpreter (which is fine) and also have the 6 libraries installed locally using easy_install. The Problem This is not a public, classical open source environment: I'm inside a corporate network, the tool will never leave the "walled garden" and we have seriously inconvenient barriers to getting to the outside internet (NTLM authenticating proxies and/or machines without direct internet access). I want the hurdles to starting to hack on this tool to be minimal: nobody should have to hunt for the right dependency in the right version, they should have to execute as little setup as possible. Optimally the prerequisites would be having a Python installation and just checking out the program from Subversion. Anecdote: The more self-contained the process is the easier it is to repeat it. I had my machine swapped out for a new one and went through the unpleasant process of having to reverse engineer the dependencies, reinstall distutils, hunting down the libraries online and getting them to install (see corporate internet restrictions above).
How to deploy a Python application with libraries as source with no further dependencies?
527,934
0
13
7,725
0
python,deployment,layout,bootstrapping
I agree with the answers by Nosklo and S.Lott. (+1 to both) Can I just add that what you want to do is actually a terrible idea. If you genuinely want people to hack on your code, they will need some understanding of the libraries involved, how they work, what they are, where they come from, the documentation for each etc. Sure provide them with a bootstrap script, but beyond that you will be molly-coddling to the point that they are clueless. Then there are specific issues such as "what if one user wants to install a different version or implementation of a library?", a glaring example here is ElementTree, as this has a number of implementations.
0
1
0
0
2009-02-09T09:20:00.000
5
0
false
527,510
1
0
0
4
Background: I have a small Python application that makes life for developers releasing software in our company a bit easier. I build an executable for Windows using py2exe. The application as well as the binary are checked into Subversion. Distribution happens by people just checking out the directory from SVN. The program has about 6 different Python library dependencies (e.g. ElementTree, Mako) The situation: Developers want to hack on the source of this tool and then run it without having to build the binary. Currently this means that they need a python 2.6 interpreter (which is fine) and also have the 6 libraries installed locally using easy_install. The Problem This is not a public, classical open source environment: I'm inside a corporate network, the tool will never leave the "walled garden" and we have seriously inconvenient barriers to getting to the outside internet (NTLM authenticating proxies and/or machines without direct internet access). I want the hurdles to starting to hack on this tool to be minimal: nobody should have to hunt for the right dependency in the right version, they should have to execute as little setup as possible. Optimally the prerequisites would be having a Python installation and just checking out the program from Subversion. Anecdote: The more self-contained the process is the easier it is to repeat it. I had my machine swapped out for a new one and went through the unpleasant process of having to reverse engineer the dependencies, reinstall distutils, hunting down the libraries online and getting them to install (see corporate internet restrictions above).
How to deploy a Python application with libraries as source with no further dependencies?
527,872
8
13
7,725
0
python,deployment,layout,bootstrapping
"I dislike the fact that developers (or me starting on a clean new machine) have to jump through the distutils hoops of having to install the libraries locally before they can get started" Why? What -- specifically -- is wrong with this? You did it to create the project. Your project is so popular others want to do the same. I don't see a problem. Please update your question with specific problems you need solved. Disliking the way open source is distributed isn't a problem -- it's the way that open source works. Edit. The "walled garden" doesn't matter very much. Choice 1. You could, BTW, build an "installer" that runs easy_install 6 times for them. Choice 2. You can save all of the installer kits that easy_install would have used. Then you can provide a script that does an unzip and a python setup.py install for all six. Choice 3. You can provide a zipped version of your site-packages. After they install Python, they unzip your site-packages directory into `C:\Python2.5\lib\site-packages``. Choice 4. You can build your own MSI installer kit for your Python environment. Choice 5. You can host your own pypi-like server and provide an easy_install that checks your server first.
0
1
0
0
2009-02-09T09:20:00.000
5
1
false
527,510
1
0
0
4
Background: I have a small Python application that makes life for developers releasing software in our company a bit easier. I build an executable for Windows using py2exe. The application as well as the binary are checked into Subversion. Distribution happens by people just checking out the directory from SVN. The program has about 6 different Python library dependencies (e.g. ElementTree, Mako) The situation: Developers want to hack on the source of this tool and then run it without having to build the binary. Currently this means that they need a python 2.6 interpreter (which is fine) and also have the 6 libraries installed locally using easy_install. The Problem This is not a public, classical open source environment: I'm inside a corporate network, the tool will never leave the "walled garden" and we have seriously inconvenient barriers to getting to the outside internet (NTLM authenticating proxies and/or machines without direct internet access). I want the hurdles to starting to hack on this tool to be minimal: nobody should have to hunt for the right dependency in the right version, they should have to execute as little setup as possible. Optimally the prerequisites would be having a Python installation and just checking out the program from Subversion. Anecdote: The more self-contained the process is the easier it is to repeat it. I had my machine swapped out for a new one and went through the unpleasant process of having to reverse engineer the dependencies, reinstall distutils, hunting down the libraries online and getting them to install (see corporate internet restrictions above).
How to deploy a Python application with libraries as source with no further dependencies?
530,727
0
13
7,725
0
python,deployment,layout,bootstrapping
I'm not suggesting that this is a great idea, but usually what I do in situations like these is that I have a Makefile, checked into subversion, which contains make rules to fetch all the dependent libraries and install them. The makefile can be smart enough to only apply the dependent libraries if they aren't present, so this can be relatively fast. A new developer on the project simply checks out from subversion and then types "make". This approach might work well for you, given that your audience is already used to the idea of using subversion checkouts as part of their fetch process. Also, it has the nice property that all knowledge about your program, including its external dependencies, are captured in the source code repository.
0
1
0
0
2009-02-09T09:20:00.000
5
0
false
527,510
1
0
0
4
Background: I have a small Python application that makes life for developers releasing software in our company a bit easier. I build an executable for Windows using py2exe. The application as well as the binary are checked into Subversion. Distribution happens by people just checking out the directory from SVN. The program has about 6 different Python library dependencies (e.g. ElementTree, Mako) The situation: Developers want to hack on the source of this tool and then run it without having to build the binary. Currently this means that they need a python 2.6 interpreter (which is fine) and also have the 6 libraries installed locally using easy_install. The Problem This is not a public, classical open source environment: I'm inside a corporate network, the tool will never leave the "walled garden" and we have seriously inconvenient barriers to getting to the outside internet (NTLM authenticating proxies and/or machines without direct internet access). I want the hurdles to starting to hack on this tool to be minimal: nobody should have to hunt for the right dependency in the right version, they should have to execute as little setup as possible. Optimally the prerequisites would be having a Python installation and just checking out the program from Subversion. Anecdote: The more self-contained the process is the easier it is to repeat it. I had my machine swapped out for a new one and went through the unpleasant process of having to reverse engineer the dependencies, reinstall distutils, hunting down the libraries online and getting them to install (see corporate internet restrictions above).
Switching Printer Trays
545,183
1
5
4,316
0
python,winapi
That's not possible using plain PDF, as you have to create a new print job for each particular bin and tray combination (and not all printers allow you to do that; the Xerox 4x and DP Series allow such things). My best bet would be juggling with PostScript: convert the PDF to PostScript, where you have access to individual pages, then extract the pages you need and for each such page (or pages) create a new print job (e.g. using the Windows program lpr). To ease the task, I'd create a print queue for every combination of bin and tray you have to print to, then use these queues as printers.
0
1
0
0
2009-02-13T06:32:00.000
4
0.049958
false
544,923
0
0
0
1
I know this question has been asked before, but there was no clear answer. How do I change the printer tray programmatically? I am trying to use python to batch print some PDFs. I need to print different pages from different trays. The printer is a Ricoh 2232C. Is there a way to do it through an Acrobat Reader command line parameter? I am able to use the Win32 api to find out which bins correspond to which binnames, but that is about it. Any advice/shortcuts/etc?
Tool (or combination of tools) for reproducible environments in Python
545,839
0
9
1,538
0
python,continuous-integration,installation,development-environment,automated-deploy
I do exactly this with a combination of setuptools and Hudson. I know Hudson is a java app, but it can run Python stuff just fine.
0
1
0
0
2009-02-13T12:20:00.000
7
0
false
545,730
0
0
1
1
I used to be a Java developer and we used tools like Ant or Maven to manage our development/testing/UAT environments in a standardized way. This allowed us to handle library dependencies, setting OS variables, compiling, deploying, running unit tests, and all the required tasks. Also, the generated scripts guaranteed that all the environments were configured almost identically, and all the tasks were performed in the same way by all the members of the team. I'm starting to work in Python now and I'd like your advice on which tools I should use to accomplish the same as described for Java.
Cross-platform way to get PIDs by process name in python
557,021
0
58
78,794
0
python,cross-platform,jython,hp-ux
There isn't, I'm afraid. Processes are uniquely identified by pid, not by name. If you really must find a pid by name, then you will have to use something like you have suggested, but it won't be portable and probably will not work in all cases. If you only have to find the pids for a certain application and you have control over this application, then I'd suggest changing this app to store its pid in files in some location where your script can find it.
0
1
0
1
2009-02-15T10:23:00.000
9
0
false
550,653
0
0
0
3
Several processes with the same name are running on host. What is the cross-platform way to get PIDs of those processes by name using python or jython? I want something like pidof but in python. (I don't have pidof anyway.) I can't parse /proc because it might be unavailable (on HP-UX). I do not want to run os.popen('ps') and parse the output because I think it is ugly (field sequence may be different in different OS). Target platforms are Solaris, HP-UX, and maybe others.
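If you control the target application, the pidfile idea from the answer above is easy to sketch (the path is a placeholder):

    import os

    PIDFILE = '/tmp/myapp.pid'        # placeholder location

    # in the application itself, at startup:
    with open(PIDFILE, 'w') as f:
        f.write(str(os.getpid()))

    # in the monitoring script:
    with open(PIDFILE) as f:
        pid = int(f.read().strip())
    print('application pid is %d' % pid)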
Cross-platform way to get PIDs by process name in python
727,024
0
58
78,794
0
python,cross-platform,jython,hp-ux
For Jython, if Java 5 is used, you can get the Java process id as follows: from java.lang.management import * pid = ManagementFactory.getRuntimeMXBean().getName()
0
1
0
1
2009-02-15T10:23:00.000
9
0
false
550,653
0
0
0
3
Several processes with the same name are running on host. What is the cross-platform way to get PIDs of those processes by name using python or jython? I want something like pidof but in python. (I don't have pidof anyway.) I can't parse /proc because it might be unavailable (on HP-UX). I do not want to run os.popen('ps') and parse the output because I think it is ugly (field sequence may be different in different OS). Target platforms are Solaris, HP-UX, and maybe others.
Cross-platform way to get PIDs by process name in python
550,672
1
58
78,794
0
python,cross-platform,jython,hp-ux
I don't think you will be able to find a purely Python-based, portable solution without using /proc or command line utilities, at least not in Python itself. Parsing the output of a command like ps is not ugly - someone has to deal with the multiple platforms, be it you or someone else. Implementing it for the OS you are interested in should be fairly easy, honestly.
0
1
0
1
2009-02-15T10:23:00.000
9
0.022219
false
550,653
0
0
0
3
Several processes with the same name are running on host. What is the cross-platform way to get PIDs of those processes by name using python or jython? I want something like pidof but in python. (I don't have pidof anyway.) I can't parse /proc because it might be unavailable (on HP-UX). I do not want to run os.popen('ps') and parse the output because I think it is ugly (field sequence may be different in different OS). Target platforms are Solaris, HP-UX, and maybe others.
Detect script start up from command prompt or "double click" on Windows
558,808
2
8
3,077
0
python,windows
Good question. One thing you could do is create a shortcut to the script in Windows, and pass arguments (using the shortcut's Target property) that would denote the script was launched by double-clicking (in this case, a shortcut).
0
1
0
0
2009-02-17T21:20:00.000
5
0.07983
false
558,776
0
0
0
2
Is it possible to detect whether a Python script was started from the command prompt or by a user "double clicking" a .py file in the file explorer on Windows?
Detect script start up from command prompt or "double click" on Windows
558,804
3
8
3,077
0
python,windows
The command-prompt started script has a parent process named cmd.exe (or a non-existent process, in case the console has been closed in the mean time). The doubleclick-started script should have a parent process named explorer.exe.
0
1
0
0
2009-02-17T21:20:00.000
5
0.119427
false
558,776
0
0
0
2
Is it possible to detect whether a Python script was started from the command prompt or by a user "double clicking" a .py file in the file explorer on Windows?
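A sketch of the parent-process check described in the answer above. The standard library has no portable way to get the parent process name, so this assumes the third-party psutil package; the API shown is the modern method-based one and may differ in very old psutil releases.

    import os
    import psutil   # third-party; assumed to be installed

    def launched_by_double_click():
        parent = psutil.Process(os.getpid()).parent()
        return parent is not None and parent.name().lower().startswith('explorer')

    if launched_by_double_click():
        print('started from the file explorer')
    else:
        print('started from a console (parent is probably cmd.exe)')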
Anybody tried mosso CloudFiles with Google AppEngine?
564,966
1
1
310
0
python,google-app-engine,storage,cloud,mosso
It appears to implement a simple RESTful API, so there's no reason you couldn't use it from App Engine. Previously, you'd have had to write your own library to do so, using App Engine's urlfetch API, but with the release of SDK 1.1.9, you can now use urllib and httplib instead.
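A heavily hedged sketch of what such a call could look like with urlfetch; the URL and header names below are placeholders, not the real Cloud Files endpoints, so check the mosso documentation before relying on any of them:

    from google.appengine.api import urlfetch

    # Placeholder endpoint and headers: substitute the real Cloud Files
    # authentication URL and credentials from the mosso documentation.
    response = urlfetch.fetch(
        url='https://auth.example.com/v1.0',
        method=urlfetch.GET,
        headers={'X-Auth-User': 'your-username',
                 'X-Auth-Key': 'your-api-key'})

    if response.status_code == 204:
        storage_url = response.headers.get('X-Storage-Url')  # placeholder header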
0
1
0
0
2009-02-19T09:01:00.000
1
1.2
true
564,460
0
0
1
1
I'm wondering if anybody has tried to integrate mosso CloudFiles with an application running on Google AppEngine (mosso does not provide a testing sandbox, so I can't check for myself without registering)? Looking at the code it seems that this will not work due to httplib and urllib limitations in the AppEngine environment, but maybe somebody has patched cloudfiles?
How to check if there exists a process with a given pid in Python?
568,614
-2
127
136,481
0
python,process,pid
I'd say use the PID for whatever purpose you're obtaining it and handle the errors gracefully. Otherwise, it's a classic race: the PID may be valid when you check it, but the process can go away an instant later.
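If you still want a point-in-time check, a common POSIX-only idiom (an addition here, not part of the answer) is to send signal 0 and interpret the error; it stays racy for exactly the reason given above, and it does not cover Windows:

    import errno
    import os

    def pid_exists(pid):
        """POSIX-only: True if a process with this PID currently exists."""
        try:
            os.kill(pid, 0)   # signal 0: no signal is sent, only error checking
        except OSError, err:
            if err.errno == errno.ESRCH:    # no such process
                return False
            if err.errno == errno.EPERM:    # it exists, we just can't signal it
                return True
            raise
        return True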
0
1
0
0
2009-02-20T04:22:00.000
14
-0.028564
false
568,271
0
0
0
1
Is there a way to check to see if a pid corresponds to a valid process? I'm getting a pid from a different source other than from os.getpid() and I need to check to see if a process with that pid doesn't exist on the machine. I need it to be available in Unix and Windows. I'm also checking to see if the PID is NOT in use.
With Twisted, how can 'connectionMade' fire a specific Deferred?
570,561
0
2
1,036
0
python,connection,twisted,reactor
Looking at this some more, I think I've come up with a solution, although hopefully there is a better way; this seems kind of weird. Twisted has a class, ClientCreator that is used for producing simple single-use connections. It in theory does what I want; connects and returns a Deferred that fires when the connection is established. I didn't think I could use this, though, since I'd lose the ability to pass arguments to the protocol constructor, and therefore have no way to share state between connections. However, I just realized that the ClientFactory constructor does accept *args to pass to the protocol constructor. Or at least it looks like it; there is virtually no documentation for this. In that case, I can give it a reference to my factory (or whatever else, if the factory is no longer necessary). And I get back the Deferred that fires when the connection is established.
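A sketch of what that would look like; MyProtocol and shared_state are invented names, and the point is only that ClientCreator forwards the extra constructor arguments and connectSSL hands back a Deferred that fires with the connected protocol:

    from twisted.internet import reactor, ssl
    from twisted.internet.protocol import ClientCreator, Protocol

    class MyProtocol(Protocol):
        def __init__(self, shared_state):
            # whatever used to live on the factory can be passed in here
            self.shared_state = shared_state

    shared_state = {}
    creator = ClientCreator(reactor, MyProtocol, shared_state)
    d = creator.connectSSL('example.com', 443, ssl.ClientContextFactory())
    # d fires with the connected MyProtocol instance once the connection is up
    d.addCallback(lambda proto: proto.transport.write('hello\r\n'))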
0
1
0
0
2009-02-20T17:03:00.000
1
1.2
true
570,397
0
0
0
1
This is part of a larger program; I'll explain only the relevant parts. Basically, my code wants to create a new connection to a remote host. This should return a Deferred, which fires once the connection is established, so I can send something on it. I'm creating the connection with twisted.internet.interfaces.IReactorSSL.connectSSL. That calls buildProtocol on my ClientFactory instance to get a new connection (twisted.internet.protocol.Protocol) object, and returns a twisted.internet.interfaces.IConnector. When the connection is started, Twisted calls startedConnecting on the factory, giving it the IConnector. When the connection is actually made, the protocol's connectionMade callback is called, with no arguments. Now, if I only needed one connection per host/port, the rest would be easy. Before calling connectSSL, I would create a Deferred and put it in a dictionary keyed on (host, port). Then, in the protocol's connectionMade, I could use self.transport.getPeer() to retrieve the host/port, use it to look up the Deferred, and fire its callbacks. But this obviously breaks down if I want to create more than one connection. The problem is that I can't see any other way to associate a Deferred I created before calling connectSSL with the connectionMade later on.
Why doesn't Python release file handles after calling file.close()?
575,086
4
5
7,977
0
python
It does close them. Are you sure f.close() is getting called? I just tested the same scenario and Windows deletes the file for me.
0
1
0
0
2009-02-22T15:32:00.000
5
0.158649
false
575,081
1
0
0
2
I am on windows with Python 2.5. I have an open file for writing. I write some data. Call file close. When I try to delete the file from the folder using Windows Explorer, it errors, saying that a process still holds a handle to the file. If I shutdown python, and try again, it succeeds.
Why doesn't Python release file handles after calling file.close()?
30,957,893
0
5
7,977
0
python
I was looking for this, because the same thing happened to me. The question didn't help me, but I think I figured out what happened. In the original version of the script I wrote, I neglected to add a 'finally' clause to close the file in case of an exception. I was testing the script from the interactive prompt and got an exception while the file was open. What I didn't realize was that the file object wasn't immediately garbage-collected. After that, when I ran the script (still from the same interactive session), even though the new file objects were being closed, the first one still hadn't been, and so the file handle was still in use, from the perspective of the operating system. Once I closed the interactive prompt, the problem went away, at which point I remembered that exception occurring while the file was open and realized what had been going on. (Moral: Don't try to program on insufficient sleep. : ) ) Naturally, I have no idea if this is what happened in the case of the original poster, and even if the original poster is still around, they may not remember the specific circumstances, but the symptoms are similar, so I thought I'd add this as something to check for, for anyone caught in the same situation and looking for an answer.
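For completeness, a minimal sketch of the kind of guard that was missing, with a hypothetical file name; on Python 2.5 the with-statement form also works if you add the future import:

    f = open('out.txt', 'w')
    try:
        f.write('some data\n')
    finally:
        f.close()   # runs even if the write above raises

    # Python 2.5 equivalent:
    # from __future__ import with_statement
    # with open('out.txt', 'w') as f:
    #     f.write('some data\n')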
0
1
0
0
2009-02-22T15:32:00.000
5
0
false
575,081
1
0
0
2
I am on windows with Python 2.5. I have an open file for writing. I write some data. Call file close. When I try to delete the file from the folder using Windows Explorer, it errors, saying that a process still holds a handle to the file. If I shutdown python, and try again, it succeeds.
How to pause a script when it ends on Windows?
15,254,312
11
55
124,670
0
python,windows,cmd,command-line
The best option is os.system('pause'), which actually displays a message saying 'Press any key to continue . . .', whereas raw_input('') prints no message and only leaves a waiting cursor. Not related to the answer itself: os.system("some cmd command") is generally useful, since it can run any batch file or cmd command.
0
1
0
0
2009-02-23T12:20:00.000
13
1
false
577,467
1
0
0
4
I am running command-line Python scripts from the Windows taskbar by having a shortcut pointing to the Python interpreter with the actual script as a parameter. After the script has been processed, the interpreter terminates and the output window is closed which makes it impossible to read script output. What is the most straightforward way to keep the interpreter window open until any key is pressed? In batch files, one can end the script with pause. The closest thing to this I found in python is raw_input() which is sub-optimal because it requires pressing the return key (instead of any key).
How to pause a script when it ends on Windows?
4,130,571
44
55
124,670
0
python,windows,cmd,command-line
Try os.system("pause") — I used it and it worked for me. Make sure to include import os at the top of your script.
0
1
0
0
2009-02-23T12:20:00.000
13
1
false
577,467
1
0
0
4
I am running command-line Python scripts from the Windows taskbar by having a shortcut pointing to the Python interpreter with the actual script as a parameter. After the script has been processed, the interpreter terminates and the output window is closed which makes it impossible to read script output. What is the most straightforward way to keep the interpreter window open until any key is pressed? In batch files, one can end the script with pause. The closest thing to this I found in python is raw_input() which is sub-optimal because it requires pressing the return key (instead of any key).
How to pause a script when it ends on Windows?
41,732,172
3
55
124,670
0
python,windows,cmd,command-line
As to the "problem" of what key to press to close it, I (and thousands of others, I'm sure) simply use input("Press Enter to close").
0
1
0
0
2009-02-23T12:20:00.000
13
0.046121
false
577,467
1
0
0
4
I am running command-line Python scripts from the Windows taskbar by having a shortcut pointing to the Python interpreter with the actual script as a parameter. After the script has been processed, the interpreter terminates and the output window is closed which makes it impossible to read script output. What is the most straightforward way to keep the interpreter window open until any key is pressed? In batch files, one can end the script with pause. The closest thing to this I found in python is raw_input() which is sub-optimal because it requires pressing the return key (instead of any key).
How to pause a script when it ends on Windows?
577,488
58
55
124,670
0
python,windows,cmd,command-line
One way is to leave a raw_input() at the end so the script waits for you to press Enter before it terminates.
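A slightly fuller sketch of the same idea that also keeps the window open when the script dies with an exception, which is usually when you most want to read the output (the main() body is just a placeholder):

    import traceback

    def main():
        print 'real work goes here'   # placeholder

    if __name__ == '__main__':
        try:
            main()
        except Exception:
            traceback.print_exc()
        raw_input('Press Enter to close this window...')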
0
1
0
0
2009-02-23T12:20:00.000
13
1.2
true
577,467
1
0
0
4
I am running command-line Python scripts from the Windows taskbar by having a shortcut pointing to the Python interpreter with the actual script as a parameter. After the script has been processed, the interpreter terminates and the output window is closed which makes it impossible to read script output. What is the most straightforward way to keep the interpreter window open until any key is pressed? In batch files, one can end the script with pause. The closest thing to this I found in python is raw_input() which is sub-optimal because it requires pressing the return key (instead of any key).
Is there a Django apps pattern equivalent in Google App Engine?
591,169
3
3
715
0
python,django,design-patterns,google-app-engine,django-apps
The Django implementation of apps is closely tied to Django's operation as a framework - I mean plugging applications in using Django's url-mapping features (for mapping urls to view functions) and Django's application-component discovery (for discovering models and admin configuration). There are no such mechanisms in WebApp itself (I guess you mean the WebApp framework when you refer to AppEngine, which is really a platform) - you would have to write them yourself and then persuade people to write such applications in a way that will work with your url plugger and component discovery after plugging the app into the rest of the site code. There are generic pluggable modules ready to use with AppEngine, like sharded counters or the GAE utilities library, but they do not provide the same level of functionality as Django apps (django-registration, for example). I think this comes from the much greater freedom of design (basically, on GAE you can model your app after the Django layout or any other you might think of) and the lack of widely used conventions.
0
1
0
0
2009-02-25T23:09:00.000
2
1.2
true
588,342
0
0
1
1
Django has a very handy pattern known as "apps". Essentially, a self-contained plug-in that requires a minimal amount of wiring, configuring, and glue code to integrate into an existing project. Examples are tagging, comments, contact-form, etc. They let you build up large projects by gathering together a collection of useful apps, rather than writing everything from scratch. The apps you do end up writing can be made portable so you can recycle them in other projects. Does this pattern exist in Google App Engine? Is there any way to create self-contained apps that can be easily be dropped into an App Engine project? Right off the bat, the YAML url approach looks like it could require a significant re-imagining to the way its done in Django. Note: I know I can run Django on App Engine, but that's not what I'm interested in doing this time around.
Python Daemon Packaging Best Practices
40,901,455
0
24
16,678
0
python,packaging,setuptools,distutils
Correct me if I'm wrong, but I believe the question is how to DEPLOY the daemon. Set your app to install via pip and then make the entry_point a cli(daemon()). Then create an init script that simply runs $app_name &
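A sketch of the packaging half of that suggestion; every name below (mydaemon, mydaemon.cli, main) is a placeholder, and main() is assumed to be the function that starts your daemon loop:

    from setuptools import setup, find_packages

    setup(
        name='mydaemon',                       # placeholder project name
        version='0.1',
        packages=find_packages(),
        entry_points={
            'console_scripts': [
                # installs a 'mydaemon' command; an init script can then
                # simply run:  mydaemon &
                'mydaemon = mydaemon.cli:main',
            ],
        },
    )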
0
1
0
1
2009-02-26T01:32:00.000
10
0
false
588,749
1
0
0
2
I have a tool which I have written in python and generally should be run as a daemon. What are the best practices for packaging this tool for distribution, particularly how should settings files and the daemon executable/script be handled? Relatedly are there any common tools for setting up the daemon for running on boot as appropriate for the given platform (i.e. init scripts on linux, services on windows, launchd on os x)?
Python Daemon Packaging Best Practices
588,835
0
24
16,678
0
python,packaging,setuptools,distutils
On Linux systems, the system's package manager (Portage for Gentoo, Aptitude for Ubuntu/Debian, yum for Fedora, etc.) usually takes care of installing the program including placing init scripts in the right places. If you want to distribute your program for Linux, you might want to look into bundling it up into the proper format for various distributions' package managers. This advice is obviously irrelevant on systems which don't have package managers (Windows, and Mac I think).
0
1
0
1
2009-02-26T01:32:00.000
10
0
false
588,749
1
0
0
2
I have a tool which I have written in python and generally should be run as a daemon. What are the best practices for packaging this tool for distribution, particularly how should settings files and the daemon executable/script be handled? Relatedly are there any common tools for setting up the daemon for running on boot as appropriate for the given platform (i.e. init scripts on linux, services on windows, launchd on os x)?
Python persistent Popen
589,104
0
3
2,516
0
python,subprocess,popen
"For instance, can I make a call through it and then another one after it without having to concatenate the commands into one long string?" Sounds like you're using shell=True. Don't, unless you need to. Instead use shell=False (the default) and pass in a command/arg list. "Is there a way to do multiple calls in the same 'session' in Popen? For instance, can I make a call through it and then another one after it without having to concatenate the commands into one long string?" Any reason you can't just create two Popen instances and wait/communicate on each as necessary? That's the normal way to do it, if I understand you correctly.
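A two-line illustration of the shell=False advice; the command shown is arbitrary:

    from subprocess import Popen, PIPE

    # an argument list instead of one long shell string; shell defaults to False
    proc = Popen(['ls', '-l', '/tmp'], stdout=PIPE)
    output, _ = proc.communicate()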
0
1
0
0
2009-02-26T04:19:00.000
3
0
false
589,093
0
0
0
2
Is there a way to do multiple calls in the same "session" in Popen? For instance, can I make a call through it and then another one after it without having to concatenate the commands into one long string?
Python persistent Popen
589,282
3
3
2,516
0
python,subprocess,popen
You're not "making a call" when you use popen, you're running an executable and talking to it over stdin, stdout, and stderr. If the executable has some way of doing a "session" of work (for instance, by reading lines from stdin) then, yes, you can do it. Otherwise, you'll need to exec multiple times. subprocess.Popen is (mostly) just a wrapper around execvp(3)
0
1
0
0
2009-02-26T04:19:00.000
3
1.2
true
589,093
0
0
0
2
Is there a way to do multiple calls in the same "session" in Popen? For instance, can I make a call through it and then another one after it without having to concatenate the commands into one long string?
Minimal Linux For a Pylons Web App?
590,001
1
2
602
0
python,linux,pylons
If you want to be able to remove all the cruft but still be using a ‘mainstream’ distro rather than one cut down to aim at tiny devices, look at Slackware. You can happily remove stuff as low-level as sysvinit, cron and so on, without collapsing into dependency hell. And nothing in it relies on Perl or Python, so you can easily remove them (and install whichever version of Python your app prefers to use).
0
1
0
1
2009-02-26T04:33:00.000
7
0.028564
false
589,115
0
0
0
4
I am going to be building a Pylons-based web application. For this purpose, I'd like to build a minimal Linux platform, upon which I would then install the necessary packages such as Python and Pylons, and other necessary dependencies. The other reason to keep it minimal is because this machine will be virtual, probably over KVM, and will eventually be replicated in some cloud environment. What would you use to do this? I am thinking of using Fedora 10's AOS iso, but would love to understand all my options.
Minimal Linux For a Pylons Web App?
895,583
1
2
602
0
python,linux,pylons
"For this purpose, I'd like to build a minimal Linux platform..." So why not try ArchLinux (www.archlinux.org)? You can also use virtualenv and install Pylons inside it.
0
1
0
1
2009-02-26T04:33:00.000
7
0.028564
false
589,115
0
0
0
4
I am going to be building a Pylons-based web application. For this purpose, I'd like to build a minimal Linux platform, upon which I would then install the necessary packages such as Python and Pylons, and other necessary dependencies. The other reason to keep it minimal is because this machine will be virtual, probably over KVM, and will eventually be replicated in some cloud environment. What would you use to do this? I am thinking of using Fedora 10's AOS iso, but would love to understand all my options.
Minimal Linux For a Pylons Web App?
589,638
0
2
602
0
python,linux,pylons
debootstrap is your friend.
0
1
0
1
2009-02-26T04:33:00.000
7
0
false
589,115
0
0
0
4
I am going to be building a Pylons-based web application. For this purpose, I'd like to build a minimal Linux platform, upon which I would then install the necessary packages such as Python and Pylons, and other necessary dependencies. The other reason to keep it minimal is because this machine will be virtual, probably over KVM, and will eventually be replicated in some cloud environment. What would you use to do this? I am thinking of using Fedora 10's AOS iso, but would love to understand all my options.