Column schema (name: dtype, min / max):
Title: stringlengths, 15 / 150
A_Id: int64, 2.98k / 72.4M
Users Score: int64, -17 / 470
Q_Score: int64, 0 / 5.69k
ViewCount: int64, 18 / 4.06M
Database and SQL: int64, 0 / 1
Tags: stringlengths, 6 / 105
Answer: stringlengths, 11 / 6.38k
GUI and Desktop Applications: int64, 0 / 1
System Administration and DevOps: int64, 1 / 1
Networking and APIs: int64, 0 / 1
Other: int64, 0 / 1
CreationDate: stringlengths, 23 / 23
AnswerCount: int64, 1 / 64
Score: float64, -1 / 1.2
is_accepted: bool, 2 classes
Q_Id: int64, 1.85k / 44.1M
Python Basics and Environment: int64, 0 / 1
Data Science and Machine Learning: int64, 0 / 1
Web Development: int64, 0 / 1
Available Count: int64, 1 / 17
Question: stringlengths, 41 / 29k
The records below follow the same column order, one field per line.
How to make a batch file automatically run itself after being updated
42,663,541
0
0
451
0
python,batch-file,cmd
To run the batch file again after it has been edited or modified, you can write a small watcher script. That script can be kept running in the background as a daemon service (or via launchd on Mac OS X).
0
1
0
0
2017-03-08T05:14:00.000
2
0
false
42,663,496
0
0
0
1
I have edited the contents of my batch file in a Python program; however, when I try to execute the .bat from Python it doesn't follow the instructions. It opens the console and then closes, but nothing happens. Instead I am looking at an alternative route: automatically running the batch file after it has been saved or changed. The reason I need it to run is that it updates an mp3, so if it isn't running properly the mp3 doesn't change. I think one of the reasons may be down to not being able to run as administrator from Python. I did create a shortcut and set it to run as admin every time, but Python wouldn't accept the .lnk file for subprocess.Popen() and os.system().
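A minimal polling sketch of the "watcher script" idea from the answer above: re-run the batch file whenever its modification time changes. The path and the check interval are hypothetical.

```python
import os
import subprocess
import time

BAT = r"C:\scripts\update_mp3.bat"   # hypothetical path
last_mtime = os.path.getmtime(BAT)

while True:
    time.sleep(2)
    mtime = os.path.getmtime(BAT)
    if mtime != last_mtime:
        last_mtime = mtime
        subprocess.call(["cmd", "/c", BAT])  # re-run the batch file
```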
Is it okay to relocate packages from ./Library/Python/2.7/lib to /usr/local/lib?
42,671,889
0
0
158
0
python,bash,macos
Just change the path in your source command to match the location of the script, which should be where pip installed it: that is /usr/local/bin if you used sudo pip install to install it system-wide, or wherever the bin directory associated with your Python environment is located. That would be /path/to/virtualenv/local/bin if you are using a virtualenv, or /path/to/anaconda/bin if you are using Anaconda's Python distribution.
0
1
0
0
2017-03-08T12:46:00.000
1
0
false
42,671,780
1
0
0
1
So I'm trying to install virtualenvwrapper, and as a requirement for the task I'm trying to implement I'm supposed to update my .bash_profile file to contain the line source /usr/local/bin/virtualenvwrapper.sh. But after activating the changes to the file I get -bash: /usr/local/bin/virtualenvwrapper.sh: No such file or directory. That's because, using pip install virtualenv, the package gets installed in ./Library/Python/2.7/lib/python/site-packages. My question is: is it okay to manually relocate the packages? What would be the way to do so?
Is there any possible issue in having Anaconda Python 3.6 as default Python version in macOS?
46,385,845
0
0
61
0
python,macos,anaconda
I have been using Anaconda Python. I had a problem with the default Python installed on my Mac OS X 10.11 at one point, because of the numpy package. It was a problem when I tried to run a script in Anaconda Python which relied on a numpy version higher than the Mac's default version, and I wasn't able to get it working using conda install, pip install, or by changing PATH/PYTHONPATH. I was able to install the package, but Anaconda Python would not recognize the new version. I ended up removing the entire numpy that came with the Mac. But I do not think this would be a problem the other way around (i.e., using mostly the Mac Python but occasionally installing other packages for Anaconda Python), because the default Python does not look at the Anaconda package directory.
0
1
0
0
2017-03-08T19:03:00.000
1
0
false
42,679,780
1
0
0
1
I've just installed Anaconda on my macOS machine and it has changed my PATH so that Python 3.6 is now the default version (i.e. the Python 3.6 interpreter opens when I type python in the Terminal). I'm fine with this since this is the version I usually use, but I was wondering whether this could mess up system functionality that relies on having 2.7 as the default. I suppose there will be no problems since 2.7 is still in /usr/bin, but I would like to be sure.
Understanding pip and home-brew file structure
42,702,937
0
1
152
0
python,pip,homebrew
I found the answer in the Homebrew documentation: for Homebrew Python, you must use "pip3 install" instead of "python -m pip install". There were two other issues that complicated this. 1. I had previously manually installed Python 3.5, and the bash profile was configured to point to it before /usr/local/bin. 2. The pip documentation mentions that the CLI command "pip" points to the last version of Python that used it, so using "pip" alone was causing pip to load the modules into the 2.7 version of Python. To fix this, I deleted the manually installed version and removed the garbage from the bash profile, and then everything seemed to work.
0
1
0
0
2017-03-08T21:34:00.000
2
0
false
42,682,326
1
0
0
1
I have a Mac running OS X. Although it has Python 2.7 preinstalled, I used Homebrew to install Python 3.5, which works great. Now I'm looking to add modules using pip. Trouble is, when I use pip in the terminal it looks like the module was installed, but my Python 3.5 doesn't see it. After a bit of digging, I suspect the problem is that my pip is pointed at the Apple 2.7 Python version, and I realize the answer is that I need to change the pip configuration to point at the 3.5 version of Python, but I can't make any sense of the brew file structure in order to know where to point it. And, as I dig through the Cellar, I see multiple versions of pip, so I'm not even sure I'm using the right one, nor how to call the right one from the terminal. I'm sure this is very straightforward to experienced users, but I'm lost.
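A quick sketch of the underlying point in the answer above: run pip through the interpreter you actually want, so the modules land in that interpreter's site-packages. The package name is only an example.

```python
import subprocess
import sys

# which interpreter is this, and which pip does it use?
print(sys.executable)
subprocess.call([sys.executable, "-m", "pip", "--version"])

# installing via "<interpreter> -m pip" guarantees the right site-packages
subprocess.call([sys.executable, "-m", "pip", "install", "requests"])
```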
how to use command line to start celery when I install it both in python2,python3
42,686,288
1
0
60
0
python,celery
What you need is a virtual environment. A virtual environment encapsulates a Python install, along with all the pip packages and executable files such as celery. Check out the virtualenv and virtualenvwrapper Python packages.
0
1
0
0
2017-03-09T03:29:00.000
1
1.2
true
42,686,125
0
0
0
1
I used celery + requests first in Python 2.7, and it works fine, but I heard celery + aiohttp is faster, so I tested it in Python 3, and it really is fast. But then I found I can't use celery to start my program written in Python 2.7, because there are changes between them; when I use the command line to start celery I only get errors. I guess I should just uninstall the Python 3 celery? Is there a better way to do this? In fact, since there are many packages that work for both Python 2 and Python 3 and are started from the command line, there must be a good solution.
difference between "Python" file and "python2.7" file on macOS
42,697,960
0
0
33
0
python,macos,python-2.7
python is an alias for the current python binary: it is a symlink to some version of the python binary called Python, something like /Library/Frameworks/Python.framework/Version/2.7/Python or /Library/Frameworks/Python.framework/Version/3.5/Python. Code for 2.7+ and 3.0+ may conflict (e.g. print(x) instead of print x, or range in Python 3 behaving like 2.7's xrange, etc.). So if your scripts are not ported to the newest version, you will probably get a lot of errors when executing python my_cool_script.py, because you wrote the code for 2.7 and, after the installation, you are trying to execute it with the 3.5 version. You can change the symlink back to Version/2.7/Python and execute the same command; it will then work as you coded it and the version conflict will be solved.
0
1
0
0
2017-03-09T13:42:00.000
1
0
false
42,696,960
1
0
0
1
I have a macOS Sierra 10.12.3 and I have installed Python 2.7.13 by downloading it from the official Python site. When I type which python I get /Library/Frameworks/Python.framework/Version/2.7/bin/python. The python file referenced in this result is a shortcut for python2.7 file located in the same directory. I'm wondering what is the difference between Python (with the capital "P") file located in /Library/Frameworks/Python.framework/Version/2.7 and the one mentioned above? Thanks.
How to deploy python applications to remote machines running Windows
42,714,418
1
0
496
0
python,windows,deployment
I was in a similar position, and I combined pyinstaller with fabric. I build a "compiled" version of the project and, with fabric, deploy it the way the client wants. Fabric supports role definitions and several configurations for several clients.
0
1
0
0
2017-03-10T09:07:00.000
1
1.2
true
42,714,356
1
0
0
1
I am developing a distributed application which is based on RabbitMQ and multiple Python applications. The system is pretty complex, so it is very likely that we will need to update the deployed solution multiple times. The customer wants us to use his servers, which are running Windows. So the question is how to deploy and update the Python part of this system. And as a sub-question: is it better to deploy sources, or to use pyinstaller to get executables and then deploy them? On my test server I just use git pull when I have some changes, which is probably not suitable for a production system.
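A rough sketch of the pyinstaller + fabric idea from the answer above, assuming Fabric 2.x and SSH access to the target machines; the host names, remote paths and service-restart commands are all hypothetical.

```python
from fabric import Connection

HOSTS = ["client1.example.com", "client2.example.com"]

def deploy(dist_zip="dist/myapp.zip"):
    # push the PyInstaller build to each client host and restart the service
    for host in HOSTS:
        c = Connection(host)
        c.put(dist_zip, remote="C:/deploy/myapp.zip")
        c.run("powershell Expand-Archive -Force C:/deploy/myapp.zip C:/apps/myapp")
        c.run("powershell Restart-Service MyAppService")
```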
Is it possible to build a homepage with python on virtual server when the sudo command is disabled?
42,730,116
1
0
38
0
python,django
Have you thought about asking the admin to start a virtualenv for you and give you permissions to work in that environment?
0
1
0
1
2017-03-11T00:50:00.000
1
0.197375
false
42,730,059
0
0
1
1
I've studied Python and Django, building a homepage, and I've been using a virtual server on Ubuntu (apache2 2.4.18 / php-7.0 / MariaDB 10.0.28 with phpMyAdmin / FTP) offered for developers. The server hadn't allowed users to use Python, but I asked the server administrator for permission and got it. The problem, however, is that I am not allowed to use any sudo commands, nor even basic commands like apt-get and python. Only the administrator can do so, therefore it seems that I cannot install any necessary things (virtualenv, Django, and so on) by myself. Just to check whether a .py file works, I added <?php include_once"test.py" ?> to the header of index.php, where test.py contains only print "python test" (meaning only Python 2 is installed on this server). It works. So, I guess, all I can do is upload .py files with Filezilla. In this case, can I build a homepage with Python on this server efficiently? I was thinking about using the Bottle framework, but I'm not sure. I am also wondering whether I should just use PHP on this server and use Python on PythonAnywhere in the end. I am a beginner. Any advice will be appreciated :)
Does executing a python script load it into memory?
42,748,323
1
2
1,125
0
python,python-3.x
The "script" you use is only the human friendly representation you see. Python opens that script, reads lines, tokenizes them, creates a parse and ast tree for it and then emits bytecode which you can see using the dis module. The "script" isn't loaded, it's code object (the object that contains the instructions generated for it) is. There's no direct way to affect that process. I have never heard of a script being so big that you need to read it in chunks, I'd be surprised if you accomplished it.
0
1
0
0
2017-03-12T11:19:00.000
2
0.099668
false
42,746,745
1
0
0
2
I'm running a Python script using python3 myscript.py on Ubuntu 16.04. Is the script loaded into memory, or read and interpreted line by line from the HDD? If it's not loaded all at once, is there any way of knowing or controlling how big the chunks are that get loaded into memory?
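A small way to see the compile step the answer above describes: the whole file is read, compiled to a code object, and only then executed. The file name is hypothetical.

```python
import dis

source = open("myscript.py").read()           # read the whole script
code = compile(source, "myscript.py", "exec")  # parse + compile to a code object
dis.dis(code)                                  # show the generated bytecode
```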
Does executing a python script load it into memory?
42,746,771
5
2
1,125
0
python,python-3.x
It is loaded into memory in its entirety. This must be the case, because a syntax error near the end will abort the program straight away. Try it and see. There does not need to be any way to control or configure this. It is surely an implementation detail best left alone. If you have a problem related to this (e.g. your script is larger than your RAM), it can be solved some other way.
0
1
0
0
2017-03-12T11:19:00.000
2
1.2
true
42,746,745
1
0
0
2
I'm running a Python script using python3 myscript.py on Ubuntu 16.04. Is the script loaded into memory, or read and interpreted line by line from the HDD? If it's not loaded all at once, is there any way of knowing or controlling how big the chunks are that get loaded into memory?
Does GIL affect parallel processing of a python script in separate terminal windows?
42,757,357
2
1
160
0
python,python-3.x,terminal,parallel-processing,gil
Each terminal window will start a new python interpreter, each of which has its own GIL. The difference is probably due to contention for some resource at the OS level (disk i/o, memory, cpu cycles).
0
1
0
0
2017-03-13T05:36:00.000
1
1.2
true
42,757,209
1
0
0
1
I am trying to understand Python's GIL. I recently had an assignment where I had to compare the execution times of a certain task performed using different algorithms of different time complexities on multiple input files. I ran a Python script to do this, but I used separate terminal windows on macOS to run the same Python script for different input files. I also ran it all in one terminal window, one after the other, for each input file. The CPU time for this was lower for each execution compared to the previous approach with multiple windows, where each program took twice as long but they all ran at once. (Note: there were 4 terminal windows in the previous approach, and the Python script only ran an a.out executable compiled with clang on macOS and stored the output in different files.) Can anyone explain why running them in parallel led to each program being slower? Did they run on separate cores, or did the GIL lead to each program being slower than it would be if I ran them one at a time in one terminal window?
How to export PATH for sublime build tool?
42,765,548
2
1
668
0
python,tensorflow,sublimetext2,sublimetext3,sublimetext
OK, I got it: the problem was that the LD_LIBRARY_PATH variable was missing. I had only exported it in .bashrc. When I add export LD_LIBRARY_PATH=/usr/local/cuda-8.0/lib64\ ${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}} to ~/.profile it works (don't forget to restart). It also works if I start Sublime from the terminal with subl, which passes all the variables along.
0
1
0
0
2017-03-13T13:11:00.000
2
1.2
true
42,764,539
0
1
0
1
I wanted to create a new "build tool" for Sublime Text, so that I can run my Python scripts with an Anaconda env with tensorflow. On my other machines this works without a problem, but on my Ubuntu machine with GPU support I get an error. I think this is due to the missing paths: the path shown in the error message doesn't contain the CUDA paths, although I've included them in .bashrc. Update: I changed ~/.profile to export the paths, but tensorflow still won't start from Sublime. Running my script directly from the terminal is no problem. I get ImportError: libcudart.so.8.0: cannot open shared object file: No such file or directory, so somehow the GPU stuff (CUDA?) cannot be found. Thanks
Subprocess emulate user input after command
42,767,271
1
0
87
0
python,command-line,subprocess
Often, the tools you are calling have a -y flag to automatically answer such questions with yes.
0
1
0
0
2017-03-13T15:00:00.000
1
1.2
true
42,766,823
1
0
0
1
I have a script where a few command-line tools are utilised. However, I've hit an issue: I am trying to convert two videos into one video (which I can do), but this is meant to be an unattended process, and when I run the command with subprocess.call() it prompts me with 'A file with this name already exists, would you like to overwrite it [y/n]?', and now I am stuck on how to emulate a user's input of 'y' + Enter. It could be a case of running it as admin (somehow), or using pipes, or this stdout stuff I read about but didn't really understand. How would you approach this? What do you think is the best technique? Any help is immensely appreciated!
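A sketch of feeding "y" + Enter to a prompting tool via its stdin; the tool name and arguments are hypothetical, and if the real tool has a -y flag (as the answer above suggests), prefer that.

```python
import subprocess

proc = subprocess.Popen(["sometool", "a.mp4", "b.mp4", "out.mp4"],
                        stdin=subprocess.PIPE)
proc.communicate(input=b"y\n")   # answers the overwrite prompt
```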
Synchronize python files between my development computer and my raspberry
42,787,653
0
0
334
0
python,version-control,raspberry-pi
Following a couple of bad experiences where I lost code which was only on my Pi's SD card, I now run WinSCP on my laptop and edit files from the Pi there; they open in Notepad++ and WinSCP automatically saves the edits back to the Pi. I can also use WinSCP's folder-sync feature to copy the contents of an SD card folder to my laptop. Not perfect, but better than what I was doing before.
0
1
0
1
2017-03-14T13:38:00.000
3
0
false
42,787,560
0
0
0
2
I am writing a Python web application with the Tornado framework on a Raspberry Pi. What I currently do is connect to my Raspberry with ssh and write my source code with vi, on the Raspberry itself. What I want to do is write the source code on my development computer, but I do not know how to synchronize (transfer) this source code to the Raspberry. It is possible to do that with FTP, for example, but I would have to do something manual. I am looking for a setup where I can press F5 in my IDE and the IDE will transfer the modified source files. Do you know how I can do that? Thanks
Synchronize python files between my development computer and my raspberry
54,502,688
0
0
334
0
python,version-control,raspberry-pi
I have done this before using bitbucket as a standard repository and it is not too bad. If you set up cron scripts to git pull it's almost like continuous integration.
0
1
0
1
2017-03-14T13:38:00.000
3
0
false
42,787,560
0
0
0
2
I am writing a Python web application with the Tornado framework on a Raspberry Pi. What I currently do is connect to my Raspberry with ssh and write my source code with vi, on the Raspberry itself. What I want to do is write the source code on my development computer, but I do not know how to synchronize (transfer) this source code to the Raspberry. It is possible to do that with FTP, for example, but I would have to do something manual. I am looking for a setup where I can press F5 in my IDE and the IDE will transfer the modified source files. Do you know how I can do that? Thanks
Error import tornado in python 3
42,820,278
0
0
299
0
python,linux,raspberry-pi3
These days the Tornado website has some problems. I downloaded the tar.gz file from another site and installed from there. Instead of using the command python, use python3.
0
1
0
1
2017-03-14T16:27:00.000
1
0
false
42,791,422
0
0
0
1
I'm using a Raspberry Pi 3 to communicate with an Android app through a websocket. I installed Tornado on my Raspberry and the installation was successful: with Python 2.7 I have no problem at all, but I need to use it with Python 3, and when I just write import tornado I get ImportError: No module named 'tornado'. It is as if it is installed for Python 2 but not for 3. Both Python 2 and 3 are preinstalled on the Raspberry. Can somebody help me? Thanks in advance, and sorry for my bad English.
Celery Production Graceful Restart
55,646,504
0
2
3,057
0
python,celery
We have tasks that may run for up to 48 hours. The graceful restart you describe is very common when we have a new release and deploy the new version to production. What we do is simply send the SIGTERM (shutdown) signal to the running workers and then spin up a completely new set of workers in parallel.
0
1
0
0
2017-03-15T14:06:00.000
3
0
false
42,812,125
0
0
0
1
I need to restart the celery daemon but I need it to tell the current workers to shutdown as their tasks complete and then spin up a new set of workers while the old ones are still shutting down. The current graceful option on the daemon waits for all tasks to complete before restarting which is not useful when you have long running jobs. Please do not suggest autoreload as it is currently undocumented in 4.0.2.
Can someone help me in installing python package "Prophet" on windows 10
57,057,654
0
2
14,608
0
python-3.5
I faced the same issue and my solution was to: 1) create a new environment with Python 3.5 (conda create -n pht python=3.5 anaconda), and 2) install Prophet using conda install -c conda-forge fbprophet. I didn't install gcc, although this was advised before installing Prophet.
0
1
0
0
2017-03-15T23:53:00.000
6
0
false
42,822,902
1
0
0
1
Can someone help me install the Python package "Prophet" on Windows 10? I tried installing Python 3.5 and the dependency pystan, but I still get the error below: "The package setup script has attempted to modify files on your system that are not within the EasyInstall build area, and has been aborted. This package cannot be safely installed by EasyInstall, and may not support alternate installation locations even if you run its setup script by hand. Please inform the package's author and the EasyInstall maintainers to find out if a fix or workaround is available." Command "python setup.py egg_info" failed with error code 1 in c:\users\suman\appdata\local\temp\pip-build-aqoiqs\fbprophet\
Can't access Google Cloud SQL instance from different GCP project, despite setting IAM permissions
42,827,972
6
3
3,103
1
python,mysql,google-app-engine,google-cloud-sql
Figured it out eventually - perhaps this will be useful to someone else encountering the same problem. Problem: The problem was that the "Cloud SQL Editor" role is not a superset of the "Cloud SQL Client", as I had imagined; "Cloud SQL Editor" allows administration of the Cloud SQL instance, but doesn't allow basic connectivity to the database. Solution: Deleting the IAM entry granting Cloud SQL Editor permissions and replacing it with one granting Cloud SQL Client permissions fixed the issue and allowed the database connection to go through.
0
1
0
0
2017-03-16T06:14:00.000
1
1
false
42,826,560
0
0
1
1
I'm attempting to access a Google Cloud SQL instance stored on one Cloud Platform project from an App Engine application on another project, and it's not working. Connections to the SQL instance fail with this error: OperationalError: (2013, "Lost connection to MySQL server at 'reading initial communication packet', system error: 38") I followed the instructions in Google's documentation and added the App Engine service account for the second project to the IAM permissions list for the project housing the Cloud SQL instance (with "Cloud SQL Editor" as the role). The connection details and configuration I'm using in my app are identical to those being used in a perfectly functioning App Engine app housed in the same project as the Cloud SQL instance. The only thing that seems off about my configuration is that in my second GCP project, while an App Engine service account that looks like the default one ([MY-PROJECT-NAME]@appspot.gserviceaccount.com) appears in the IAM permissions list, this service account is not listed under the Service Accounts tab of IAM & Admin. The only service account listed is the Compute Engine default service account. I haven't deleted any service accounts; there's never been an App Engine default service account listed here, but apart from the MySQL connection the App Engine app runs fine. Not sure if it's relevant, but I'm running a Python 2.7 app on the App Engine Standard Environment, connecting using MySQLdb.
Can Python's argparse replace a single option by a group of options?
42,842,312
0
0
65
0
python,argparse
rsync has been around long enough that it (or many implementations) probably uses getopt for parsing the commands (if it doesn't do its own parsing). Python has a version of getopt. Neither the C version nor the Python one has a mechanism for replacing a -a command with -rlptgoD; any such replacement is performed after parsing. The primary purpose of a parser is to decode what the user wants; acting on that information is the responsibility of your code. I can imagine writing a custom Action class that would set multiple attributes at once, but it wouldn't save any coding work. It would look a lot like an equivalent function that is used after parsing.
0
1
0
0
2017-03-16T11:11:00.000
1
0
false
42,832,429
1
0
0
1
Some Linux commands provide a single option that is equivalent to a given group of options, for convenience. For example, rsync has an option -a which is equivalent to -rlptgoD. For a Python script, is it possible to implement this behaviour using argparse? Or should I just pass the -a option to my code and handle it there?
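A minimal sketch of the custom Action idea mentioned in the answer above: a single -a flag that sets several other flags at once. The flag names loosely mirror a few rsync options and are purely illustrative.

```python
import argparse

class ArchiveAction(argparse.Action):
    # -a sets the same attributes as giving -r -l -p -t individually
    def __call__(self, parser, namespace, values, option_string=None):
        for name in ("recursive", "links", "perms", "times"):
            setattr(namespace, name, True)

parser = argparse.ArgumentParser()
parser.add_argument("-r", dest="recursive", action="store_true")
parser.add_argument("-l", dest="links", action="store_true")
parser.add_argument("-p", dest="perms", action="store_true")
parser.add_argument("-t", dest="times", action="store_true")
parser.add_argument("-a", nargs=0, action=ArchiveAction,
                    help="equivalent to -rlpt")

print(parser.parse_args(["-a"]))
```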
After deploying python app on Google App Engine changes are reflected after several minutes
42,870,689
0
0
47
0
google-app-engine,google-app-engine-python
A common reason for not seeing your changes instantly after deploying is that you didn't change the application version. Instances running the same version will continue serving traffic until they die off, which could take a while. If instead you bump the default version, traffic will only be routed to instances that are running the newer version of the code.
0
1
0
0
2017-03-17T03:26:00.000
1
0
false
42,848,692
0
0
1
1
I have an app on Google App Engine, I used to deploy my application and see the deployed files and changes instantly. But recently I have to wait about 5 minutes to see if the files are changed. The only thing that I suspect is that I changed the application Zone. I am not sure what was the default Zone but now I set it to us-central1-a. How can I solve this issue? I want to see all changes instantly as before. Thanks!
how to run python file in mongodb using cmd
65,763,003
0
0
82
0
python,mongodb,pymongo
First you need to ensure that you are in the correct folder; for example, you can write cd name_of_folder. Then, to run the script, you need to type python your_file_name.py.
0
1
0
1
2017-03-17T09:13:00.000
1
0
false
42,853,347
0
0
0
1
I have a Python file named abc.py. I can run it against MongoDB with the help of Robomongo, but I couldn't run it from cmd. Can anyone tell me how to run a .py file that uses MongoDB from cmd?
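A minimal sketch of a standalone script that talks to MongoDB and is run from cmd with python abc.py, following the answer above; the connection details and database name are hypothetical.

```python
# abc.py
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
db = client["test"]
print(db.collection_names())   # list the collections to prove the connection works
```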
If I have multiple instance of the same python application running how to perform logging into file?
42,872,180
0
0
303
0
python,logging,distributed-computing,multiple-instances
I will use MySQL. This way I will have a standard tool for log analysis (MySQL Workbench), and it solves the problem of serializing log writes from multiple instances. The best way would probably be to write a handler for the standard logging module, but for the moment I'll send all messages through RabbitMQ to a service that stores them.
0
1
0
0
2017-03-17T13:40:00.000
1
1.2
true
42,859,075
1
0
0
1
I have a worker application written in Python for a distributed system. There are situations where I need to start multiple instances of this worker on a single server. Logging should be written to a file, and I suspect that I cannot write to the same file from different instances. So what should I do: pass the log-file name as a command-line argument to each instance? Is there a standard approach for such a situation?
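A simple sketch of the per-instance log file idea raised in the question: distinguish the file either by a name passed on the command line or by the process id. File name format and log format are arbitrary choices.

```python
import logging
import os
import sys

# one log file per instance: "python worker.py instance1" or fall back to the PID
suffix = sys.argv[1] if len(sys.argv) > 1 else str(os.getpid())
logging.basicConfig(
    filename="worker-%s.log" % suffix,
    level=logging.INFO,
    format="%(asctime)s %(process)d %(levelname)s %(message)s",
)
logging.info("worker started")
```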
Get a list of all mounted file systems in Linux with python
42,874,372
-1
2
1,490
0
python,json,linux,shell,automation
You can use the df command, which provides an option to display sizes in human-readable format (e.g., 1K, 1M, 1G) with -h. This is the most common command, but you can also check du and di; di in fact provides even more info than df.
0
1
0
0
2017-03-18T10:34:00.000
2
-0.099668
false
42,873,222
0
0
0
1
I am planning to automate the process of cleaning file systems in Linux using a set of Shell and Python scripts, and I'll create a simple dashboard using Node.js to allow a more visual approach. I have a Shell script which already cleans a file system on a specific server, but I have to log in and then issue the command. Now I am building a dashboard in HTML/CSS/JS to visualize all servers which are having space problems. My idea is: create a Python script that logs in, gets the list of filesystems and their usage, and updates a single JSON file; my dashboard then uses this JSON to feed the screen. My question is how to get the list of file systems in Linux and their usage.
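A sketch of the Python side of the plan in the question: on Linux, read the mount table from /proc/mounts, get usage with os.statvfs, and dump it to a JSON file for the dashboard. The output file name is arbitrary.

```python
import json
import os

entries = []
with open("/proc/mounts") as f:
    for line in f:
        device, mountpoint, fstype = line.split()[:3]
        try:
            st = os.statvfs(mountpoint)
        except OSError:
            continue  # skip mounts we cannot stat
        entries.append({
            "device": device,
            "mount": mountpoint,
            "fstype": fstype,
            "total_bytes": st.f_blocks * st.f_frsize,
            "free_bytes": st.f_bavail * st.f_frsize,
        })

with open("usage.json", "w") as out:
    json.dump(entries, out, indent=2)
```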
python - IO Error [Errno 2] No such file or directory when downloading package
42,988,360
0
1
1,408
0
python,file,windows-7
User letmaik was able to help me with this. It turned out that the error was caused by my version of pip being too old. The command "python -m pip install -U pip" did not work to upgrade pip; "easy_install -U pip" was required. This allowed rawpy to be installed successfully.
0
1
0
0
2017-03-21T00:49:00.000
2
0
false
42,916,551
0
0
0
1
I was trying to download a Python wrapper called rawpy on my Windows machine. I used the command "pip install rawpy". I have already looked at many other SO threads but could find no solution. The exact error is : IO Error: [Errno 2] No such file or directory: 'external/LibRawcmake/CMakeLists.txt' The only dependency for the wrapper is numpy, which I successfully installed. I would like to know how to fix this. Quite new to Python, so any information would help.
Compile a JIT based lang to Webassembly
42,920,349
4
3
763
0
javascript,python,c,webassembly
If you are actually implementing an interpreter then you don't need to generate machine code at runtime, so everything can stay within Wasm. What you actually seem to have in mind is a just-in-time compiler. For that, you indeed have to call back into the embedder (i.e., JavaScript in the browser) and create and compile new Wasm modules there on the fly, and link them into the running program -- e.g., by adding new functions to an existing table. The synchronous compilation/instantiation interface exists for this use case. In future versions it may be possible to invoke the compilation API directly from within Wasm, but for now going through JavaScript is the intended approach.
0
1
0
1
2017-03-21T05:50:00.000
1
1.2
true
42,919,339
0
0
0
1
Thinking about how an interpreter works: parse code -> produce machine/byte code -> allocate executable memory -> run. How can this be done in wasm? Thanks!
Trying to install kivy for python on mac os 10.12
46,702,178
1
1
1,854
0
macos,kivy,python-3.4
Just had this issue, and was able to fix it following the directions on the kivy mac OS X install page, with one modification as follows: $ brew install pkg-config sdl2 sdl2_image sdl2_ttf sdl2_mixer gstreamer $ pip3 install Cython==0.25.2 $ pip3 install kivy pip3 is my reference to pip for Python 3.6 as I have two different versions of python on my system. May just be pip install for you. Hope this helps!
1
1
0
0
2017-03-22T01:04:00.000
1
0.197375
false
42,940,941
0
0
0
1
So I am trying to install Kivy on my Mac. From their instructions page, I am on step 2 and have to enter the command $ USE_OSX_FRAMEWORKS=0 pip install kivy. However, when I put this in the terminal, I get the error error: command '/usr/bin/clang' failed with exit status 1, and as a result Failed building wheel for kivy. Does anyone know how to address this issue?
Which pool class should i use prefork, eventlet or gevent in celery?
43,895,350
20
13
6,858
0
python,celery,celery-task
Funny that this question scrolled by; we just switched from eventlet to gevent. Eventlet caused hanging broker connections, which ultimately stalled the workers. General tips: use a higher concurrency if you're I/O bound (I would start with 25, check the CPU load and tweak from there, aiming for 99.9% CPU usage for the process); you might want to use --without-gossip and --without-mingle if your workforce grows; don't use RabbitMQ as your result backend (Redis ftw!), but RabbitMQ is our first choice when it comes to a broker (the amqp emulation on Redis and celery's hacky async-redis solution are smelly and caused a lot of grief in our past). More advanced options to tune your celery workers: pin each worker process to one core to avoid the overhead of moving processes around (taskset is your friend); if one worker isn't always busy, consider core-sharing with one or two other processes; use nice if one process has priority.
0
1
0
1
2017-03-22T10:12:00.000
2
1
false
42,948,547
0
0
0
1
I have 3 remote workers, each running with the default pool (prefork) and a single task. A single task takes 2 to 5 minutes to complete, as it runs many different tools and inserts into the database in ELK. The worker command is: celery -A project worker -l info. Which pool class should I use to make processing faster? Is there any other method to improve performance?
Wing IDE not stopping at break points
42,989,924
1
2
406
0
python,debugging,wing-ide
Following the comments above, I copied the wingdbstub.py file (from the debugger packages of Wing IDE) to the folder I am currently running my project from, used import wingdbstub, and initiated the debug process. All went well; I can now debug modules.
0
1
0
0
2017-03-22T19:57:00.000
1
1.2
true
42,961,484
0
0
0
1
I am running a project that makes calls to C++ framework functions and Python modules, and I can run it in Wing IDE (Personal) with no problems. However, I cannot debug on the run: it only lets me debug a certain file, which is pretty useless. I make a call to a shell script to run the framework function via a Python file (init), and that function calls a Python module that I want to debug. I have had the same problem with PyCharm. I have spent quite a while trying to figure this out, something that should be very basic. How can I fix this problem and debug on the go?
Python + uwsgi - multiprocessing and shared app state
42,967,483
2
4
504
0
python,flask,multiprocessing,uwsgi
I think there are two routes you could go down. 1) Have an endpoint "/set_es_cluster" that gets hit by your SNS POST request; this endpoint then sets the key "active_es_cluster", which is read on every ES request by your other processes. The downside is that on each ES request you need to do a Redis lookup first. 2) Have a separate process that receives the POST request specifically (I assume the clusters are not changing often); the purpose of this process is to receive the POST request and have uWSGI gracefully restart your other Flask processes. The advantages of the second option: you don't have to hit Redis on every request, you let uWSGI handle the restarts for you (which it does well), and you already set up config pulling at runtime anyway, so it should "just work" with your existing application.
0
1
0
0
2017-03-23T04:26:00.000
1
1.2
true
42,967,242
0
0
1
1
We have a Flask app running behind uWSGI with 4 processes. It's an API which serves data from one of our two ElasticSearch clusters. On app bootstrap each process pulls config from an external DB to check which ES cluster is active and connects to it. Every now and then a POST request comes in (from the AWS SNS service) which informs all the clients to switch ES cluster. That triggers the same function as on bootstrap: pull config from the DB and reconnect to the active ES cluster. It works well running as a single process, but when we have more than one process running, only one of them gets updated (the one which picks up the POST request), while the other processes are still connected to the inactive cluster. Pulling config on each request to make sure the ES cluster we use is active would be too slow. I'm thinking of installing Redis locally and storing the active_es_cluster there... any other ideas?
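A sketch of the first route from the answer above: keep the active cluster name in Redis so every uWSGI worker reads the same value. The endpoint name, cluster names and hosts are hypothetical.

```python
import redis
from flask import Flask

app = Flask(__name__)
r = redis.StrictRedis(host="localhost", port=6379, db=0)

ES_CLUSTERS = {
    "blue": ["http://es-blue:9200"],
    "green": ["http://es-green:9200"],
}

@app.route("/set_es_cluster/<name>", methods=["POST"])
def set_es_cluster(name):
    # called by the SNS-triggered POST; visible to all workers via Redis
    r.set("active_es_cluster", name)
    return "", 204

def get_es_hosts():
    # looked up on each ES request
    name = (r.get("active_es_cluster") or b"blue").decode()
    return ES_CLUSTERS[name]
```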
SNS/PubSub notifications on a Python CLI Application?
43,002,548
1
0
80
0
python,push-notification,command-line-interface,amazon-sns,google-cloud-pubsub
It is not possible to push directly to CLI applications. The workarounds are: 1) have a web API and register its endpoint with SNS; SNS will push notifications to the web API, and from the web API you somehow pass them on to the CLI app, using RPC calls or some other mechanism; or 2) have SNS push the notifications to AWS SQS and then poll SQS from your CLI.
0
1
0
0
2017-03-23T07:07:00.000
1
0.197375
false
42,969,345
0
0
1
1
I am developing a Python application which will mainly be used as a command-line interface. I want to push notifications from Amazon SNS or Google PubSub to the Python application. Is this possible? If yes, what is the best solution? If no, is there a workaround? Thank you for the help.
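A sketch of the SQS-polling workaround from the answer above, using boto3 long polling; the region and queue URL are hypothetical.

```python
import boto3

sqs = boto3.client("sqs", region_name="us-east-1")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/my-notifications"

while True:
    resp = sqs.receive_message(QueueUrl=queue_url,
                               MaxNumberOfMessages=10,
                               WaitTimeSeconds=20)  # long polling
    for msg in resp.get("Messages", []):
        print("notification:", msg["Body"])
        sqs.delete_message(QueueUrl=queue_url,
                           ReceiptHandle=msg["ReceiptHandle"])
```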
Python: Can't install .whl package for two python versions on Windows
43,003,863
0
0
637
0
python,pip
You may have two versions of pip installed; I did too, and it was a pain, but I fixed it with the following command: pip2 download/install (enter your package here). That should fix the issue you have encountered.
0
1
0
0
2017-03-24T09:34:00.000
1
0
false
42,995,878
1
0
0
1
I have Python 2.7 and 3.4 on my work computer for compatibility with older scripts. Now I wanted to install "aenum" for Py2.7, but "pip" only installs the package for Py3.4, telling me "aenum-2.0.4-py2-none-any.whl is not a supported wheel on this platform". In the CMD terminal I changed to the designated Python's "site-packages" folder where it's installed for Py3.4. "pip" was updated beforehand and is installed in both Python folders. How can I set this up properly?
lftp show remote recursive directory size
43,010,672
1
1
323
0
python-2.7,lftp
It looks like lftp supports regular Linux commands. In case anyone else runs into this, just do a du -h on the remote directory.
0
1
0
0
2017-03-24T23:04:00.000
1
0.197375
false
43,010,515
0
0
0
1
I've been using SmartFTP (Windows only) to work out the size of remote directories before downloading them. I've switched over to Ubuntu and I've been looking around, but I can't tell whether lftp has this feature; maybe someone can show me a way to do this via the CLI, or with a Python script. Thanks
Python - Open a given path in a separate window and continue script
43,028,894
0
0
40
0
python,ubuntu,window,subprocess
The window placement is performed according to the placement policy of one's user interface. This can be influenced by add-ons, but depends on the user interface you use. As for continuing the script, you could call subprocess.Popen(...) in a thread you create for that purpose.
0
1
0
0
2017-03-26T12:01:00.000
1
0
false
43,028,435
0
0
0
1
My question is: how can I open a given path in a window and continue the script? I'd also like to select where to put that window. This is aimed at Ubuntu, where I can place a window in any corner by pressing Ctrl + Alt + 1/7/9/3. I've tried this so far, but apart from not being able to continue the script, I can't select where to position the window: import subprocess subprocess.Popen(["xdg-open", "/home/user/Desktop"]) Thanks
Is it correct to use a cron job for a notification programme?
43,035,495
0
0
244
0
python,notifications,cron,crontab
Since this is a personal project, that is OK, I would say. It is quick and simple, and uses pre-existing tools available to you (crontab in this case). The downside is that it makes the solution/programme OS-dependent. What if someone ever wants or needs to use this on Windows? It would not work, as crontabs are not available on that OS. To make it OS-independent/portable, you would need to include the ability to manage, control and trigger notifications in your program. This would of course require it to run as a server, keeping track of tasks and their notifications. How far do you want to go? That is the question.
0
1
0
0
2017-03-26T21:47:00.000
1
1.2
true
43,034,958
1
0
0
1
I'm making a small reminder/note-taking programme for myself, and I have a lot of it set up. All that I'm wondering is if it'd be correct for me to make a cron job for each note. This cron job would run notify-send whenever a note was set to take place. If this is the correct method, how would I go about doing this?
Intrepret bytecode with subprocess.call argument
43,040,077
0
0
59
0
python,bash
A quick thing to note: $ is a bash construct; it is what evaluates the variable and substitutes its value. This does not happen in general when calling one program from another program. So when you invoke myprogram, it is up to you to provide all the arguments in a form myprogram understands.
0
1
0
0
2017-03-27T07:06:00.000
2
0
false
43,039,917
0
0
0
1
I have a program that takes one argument. I need to call this program in my Python script, and I need to pass the argument as raw bytes (like \x42\x43). Directly in bash I can do it like this and it works: ./myprogram $'\x42\x43'. But with subprocess.call it doesn't work: subprocess.call(["myprogram", "$'\x42\x43'"]). The bytes are not interpreted. I tried calling my program through /bin/bash, but my program returns a segfault!
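A sketch of the point made in the answer above: $'...' is bash syntax and is passed literally when there is no shell, so pass the bytes yourself. This works as-is on Python 2 (where a plain str literal is a byte string); on Python 3 under POSIX you can pass b"\x42\x43" instead.

```python
import subprocess

# pass the two bytes 0x42 0x43 directly as the argument, no shell quoting needed
subprocess.call(["./myprogram", "\x42\x43"])
```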
How can I use Linux commands in python Windows
43,050,053
0
0
803
0
python,linux,windows
os.rename(src, dst): rename the file or directory src to dst. If dst is a directory, OSError will be raised. On Unix, if dst exists and is a file, it will be replaced silently if the user has permission. The operation may fail on some Unix flavors if src and dst are on different filesystems. If successful, the renaming will be an atomic operation (this is a POSIX requirement). On Windows, if dst already exists, OSError will be raised even if it is a file; there may be no way to implement an atomic rename when dst names an existing file. Or shutil.move(src, dst): recursively move a file or directory (src) to another location (dst). If the destination is an existing directory, then src is moved inside that directory. If the destination already exists but is not a directory, it may be overwritten depending on os.rename() semantics. If the destination is on the current filesystem, then os.rename() is used; otherwise, src is copied (using shutil.copy2()) to dst and then removed. If I understood you correctly, both will work for you. By the way, when you install Git you can enable Linux commands inside your CMD during the installation (pay attention to the checkbox there), but I'm not sure how it will behave and integrate with your scripts.
0
1
0
0
2017-03-27T14:32:00.000
2
0
false
43,049,256
0
0
0
2
Is it possible to use mv in Python on Windows? I want to use mv --backup=t *.pdf ..\ to make copies of existing files without overwriting them, and the Windows move command does not support suffixing existing files. I can run my script with the mv command in Windows Bash or Cygwin, but not in cmd or PowerShell. So is it possible to use Linux commands in Python on Windows? EDIT: I'm using Python 2.7.
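A pure-Python sketch of the mv --backup=t behaviour the question asks for, built on the os.rename/shutil.move calls from the answer above: if the destination already exists, rename it with a numeric suffix instead of overwriting. Names and suffix format are only illustrative.

```python
import glob
import os
import shutil

def move_with_backup(src, dst_dir):
    dst = os.path.join(dst_dir, os.path.basename(src))
    if os.path.exists(dst):
        n = 1
        while os.path.exists("%s.~%d~" % (dst, n)):
            n += 1
        os.rename(dst, "%s.~%d~" % (dst, n))   # back up the existing file
    shutil.move(src, dst)

for pdf in glob.glob("*.pdf"):
    move_with_backup(pdf, "..")
```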
How can I use Linux commands in python Windows
43,064,620
1
0
803
0
python,linux,windows
Well, I tried a different approach: rename the existing files with a random hex suffix on the name, and I'm pretty much satisfied with it :D if os.path.isfile('../%s.pdf' % name) == True: os.system('magick *.jpg pdf:"%s".pdf' % name_hex) else: os.system('magick *.jpg pdf:"%s".pdf' % name)
0
1
0
0
2017-03-27T14:32:00.000
2
1.2
true
43,049,256
0
0
0
2
Is it possible to use mv in Python on Windows? I want to use mv --backup=t *.pdf ..\ to make copies of existing files without overwriting them, and the Windows move command does not support suffixing existing files. I can run my script with the mv command in Windows Bash or Cygwin, but not in cmd or PowerShell. So is it possible to use Linux commands in Python on Windows? EDIT: I'm using Python 2.7.
ec2 run scripts every boot
43,056,995
0
2
4,018
0
python,linux,amazon-web-services,amazon-ec2
I read that the use of rc.local is getting deprecated. One thing to try is a line in /etc/crontab like this: @reboot full-path-of-script If there's a specific user you want to run the script as, you can list it after @reboot.
0
1
0
1
2017-03-27T20:35:00.000
3
0
false
43,056,007
0
0
0
1
I have followed a few posts on here trying to run either a Python or shell script on my EC2 instance after every boot, not just the first boot. I have tried: adding [scripts-user, always] to the /etc/cloud/cloud.cfg file, adding the script to the ./scripts/per-boot folder, and adding the script to /etc/rc.local (yes, the permissions were changed to 755 for /etc/rc.local). I am attempting to pipe the output of the script into a file located in the /home/ubuntu/ directory, and the file does not contain anything after boot. If I run the scripts (.sh or .py) manually, they work. Any suggestions, or requests for additional info that would help?
How to change default idle for python (Ubuntu)?
43,060,767
2
0
1,044
0
python,python-2.7,python-3.x,ubuntu-16.04,python-idle
Type whereis python2 on your terminal; you end up getting possibly one or more paths to python2. You can then copy-paste any of these paths onto your alias for python in .bash_aliases.
0
1
0
0
2017-03-28T02:33:00.000
2
0.197375
false
43,059,689
0
0
0
1
Okay, so I have Python 3.5 on my system (Ubuntu 16.04). Whenever I open a .py file, IDLE 3 starts, so pressing F5 will instantly run my code. However, I now need Python 2.7 for an assignment. In the terminal I've run apt-get install idle, so I can open idle and idle3 there easily. My problem is that I can't change my .py files' default application to idle: it only sees idle3, so I can't open my files with idle (2.7) as the default. I tried to make an alias in ~/.bash_aliases as alias python=/usr/local/bin/python2.7, but typing python --version into the terminal I get: -bash: /usr/local/bin/python2.7: No such file or directory. Typing python2 --version and python3 --version works fine. Is there any simple workaround for this?
google-cloud-sdk installation not finding right Python 2.7 version in CentOS /usr/local/bin
54,142,496
0
5
9,333
0
linux,python-2.7,google-app-engine,google-cloud-platform,centos6
If you are on Windows: this is a simple solution that worked for me. Open PowerShell as administrator and run this to add your Python folder to your environment's PATH: $env:Path += ";C:\python27_x64\" Then re-run the command that gave you the original error; it should work fine. Alternatively, you could run the original (error-causing) command within the Cloud SDK Shell. That also worked for me.
0
1
0
0
2017-03-28T08:39:00.000
4
0
false
43,064,633
0
0
0
1
Our server OS is CentOS 6.8 and I was trying to install google-cloud-sdk. Even though I installed Python 2.7 in /usr/local/bin, it is still looking at the old version, Python 2.6, in /usr/bin. I tried export PATH=/usr/local/bin:$PATH so that /usr/local/bin is searched before /usr/bin, but the problem persists. Please suggest a way to fix this.
How to delay the execution of a script in Python?
43,078,388
0
1
2,353
0
python,linux,bash
You should run sleep using subprocess.Popen before calling script.sh.
0
1
0
1
2017-03-28T19:26:00.000
3
0
false
43,078,256
0
0
0
1
I'm working with a Python script and I have a problem delaying the execution of a Bash script. My script.py lets the user choose a script.sh and, after having the chance to modify it, run it with various options. One of these options is the possibility of delaying the execution of the script by N seconds. I used time.sleep(N), but then script.py stops completely for N seconds; I just want to delay script.sh by N seconds while letting the user continue using script.py. I searched for answers without success; any ideas?
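A minimal sketch of a non-blocking delay in the spirit of the answer above: the sleep runs inside the child shell started by subprocess.Popen, so script.py keeps running while script.sh waits. The delay value and script path are illustrative.

```python
import subprocess

N = 30  # seconds to delay
# the child shell sleeps and then runs script.sh; Popen returns immediately
subprocess.Popen("sleep %d && ./script.sh" % N, shell=True)
print("script.sh scheduled; the user can keep using script.py")
```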
Subprocess not retaining all environment variables
43,118,394
0
0
882
0
python,shell,subprocess,tcsh
Since the subprocess inherits the full environment of the parent process, and both are supposed to run under the same environment, changing the shell script so that it does not set up any environment itself fixed it. This solves the problem of the environment being retained, but now the problem is that the process just hangs (which does not happen when it is run directly from the shell).
0
1
0
0
2017-03-29T14:18:00.000
1
0
false
43,096,197
0
0
0
1
I have a tcsh shell script that sets up all the necessary environment, including PYTHONPATH, and then runs an executable at the end. I also have a Python script that gets passed to the shell script as an input. So the following works perfectly fine when run from a terminal: path to shell script path to python script. Now, the problem occurs when I want to do the same thing from a subprocess. The Python script fails to run since it cannot find many of the modules that are supposed to be made available via the shell script. And clearly, the PYTHONPATH ends up missing many paths compared to the parent environment the subprocess was run from, or the shell script itself! It seems like the subprocess does not respect the environment the shell script sets up. I've tried all sorts of things already, but none help! cmd = [shell_script_path, py_script_path] process = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, env=os.environ.copy()) It makes no difference if env is not given either! Any idea how to fix this?!
undefined symbol: cudnnCreate in ubuntu google cloud vm instance
43,239,777
1
0
304
0
python,tensorflow,ubuntu-16.04,cudnn
Answering my own question: the issue was not that the library was not installed; the installed library was the wrong version, hence it could not be found. In this case it was cudnn 5.0. However, even after installing the right version it still didn't work, due to incompatibilities between the versions of the driver, CUDA and cudnn. I solved all these issues by re-installing everything, including the driver, taking into account the requirements of the tensorflow libraries.
0
1
0
0
2017-03-29T17:29:00.000
1
0.197375
false
43,100,290
0
1
0
1
I'm trying to run a tensorflow python script in a google cloud vm instance with GPU enabled. I have followed the process for installing GPU drivers, cuda, cudnn and tensorflow. However whenever I try to run my program (which runs fine in a super computing cluster) I keep getting: undefined symbol: cudnnCreate I have added the next to my ~/.bashrc export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/usr/local/cuda-8.0/lib64:/usr/local/cuda-8.0/extras/CUPTI/lib64:/usr/local/cuda-8.0/lib64" export CUDA_HOME="/usr/local/cuda-8.0" export PATH="$PATH:/usr/local/cuda-8.0/bin" but still it does not work and produces the same error
Why are my gunicorn Python/Flask workers exiting from signal term?
71,013,193
0
13
11,790
0
python,docker,flask,gunicorn,amazon-ecs
For me, it turned out that the worker was quitting because one of the containers in my Docker Swarm stack was failing repeatedly, triggering a rollback. The gunicorn process received the 'term' signal when the rollback process began.
0
1
0
0
2017-03-29T21:58:00.000
4
0
false
43,104,913
0
0
1
2
I have a Python/Flask web application that I am deploying via Gunicorn in a docker image on Amazon ECS. Everything is going fine, and then suddenly, including the last successful request, I see this in the logs: [2017-03-29 21:49:42 +0000] [14] [DEBUG] GET /heatmap_column/e4c53623-2758-4863-af06-91bd002e0107/ADA [2017-03-29 21:49:43 +0000] [1] [INFO] Handling signal: term [2017-03-29 21:49:43 +0000] [14] [INFO] Worker exiting (pid: 14) [2017-03-29 21:49:43 +0000] [8] [INFO] Worker exiting (pid: 8) [2017-03-29 21:49:43 +0000] [12] [INFO] Worker exiting (pid: 12) [2017-03-29 21:49:43 +0000] [10] [INFO] Worker exiting (pid: 10) ... [2017-03-29 21:49:43 +0000] [1] [INFO] Shutting down: Master And the processes die off and the program exits. ECS then restarts the service, and the docker image is run again, but in the meanwhile the service is interrupted. What would be causing my program to get a TERM signal? I can't find any references to this happening on the web. Note that this only happens in Docker on ECS, not locally.
Why are my gunicorn Python/Flask workers exiting from signal term?
43,105,563
16
13
11,790
0
python,docker,flask,gunicorn,amazon-ecs
It turned out that after adding a login page to the system, the health check was getting a 302 redirect to /login at /, which was failing the health check. So the container was periodically killed. Amazon support is awesome!
0
1
0
0
2017-03-29T21:58:00.000
4
1.2
true
43,104,913
0
0
1
2
I have a Python/Flask web application that I am deploying via Gunicorn in a docker image on Amazon ECS. Everything is going fine, and then suddenly, including the last successful request, I see this in the logs: [2017-03-29 21:49:42 +0000] [14] [DEBUG] GET /heatmap_column/e4c53623-2758-4863-af06-91bd002e0107/ADA [2017-03-29 21:49:43 +0000] [1] [INFO] Handling signal: term [2017-03-29 21:49:43 +0000] [14] [INFO] Worker exiting (pid: 14) [2017-03-29 21:49:43 +0000] [8] [INFO] Worker exiting (pid: 8) [2017-03-29 21:49:43 +0000] [12] [INFO] Worker exiting (pid: 12) [2017-03-29 21:49:43 +0000] [10] [INFO] Worker exiting (pid: 10) ... [2017-03-29 21:49:43 +0000] [1] [INFO] Shutting down: Master And the processes die off and the program exits. ECS then restarts the service, and the docker image is run again, but in the meanwhile the service is interrupted. What would be causing my program to get a TERM signal? I can't find any references to this happening on the web. Note that this only happens in Docker on ECS, not locally.
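One way to avoid the 302-on-health-check problem described in the answer above is to give the load balancer an unauthenticated health endpoint and point the ECS/ELB health check at it; a minimal Flask sketch (route name hypothetical):

```python
from flask import Flask

app = Flask(__name__)

@app.route("/health")
def health():
    # no login required, so the health check gets a 200 instead of a redirect
    return "ok", 200
```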
how do i setup django in wamp?
43,772,665
2
0
3,505
0
python,django,wamp
OK, the answer is basically ericeastwood.com/blog/3/django-setup-for-wamp combined with httpd.apache.org/docs/2.4/vhosts/name-based.html
0
1
0
0
2017-03-30T01:57:00.000
2
1.2
true
43,107,173
0
0
1
1
I want to test my Django app on my WAMP server. The idea is that I want to create a web app for aaa.com and aaa.co.uk: if the user enters the domain aaa.co.uk, my Django app will serve the UK version; if the user goes to aaa.com, the same Django app will serve the US version (different frontend). Basically I will be detecting the host of the request and serving the correct templates. How do I set up WAMP so I can test this? Right now I am using PyCharm's default server, which is 127.0.0.1:8000.
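A small sketch of the host-detection part described in the question (this illustrates the Django side, not the WAMP/Apache virtual-host setup the answer points to); the view and template paths are hypothetical.

```python
from django.shortcuts import render

def home(request):
    # serve a different template set depending on the requested domain
    if request.get_host().endswith(".co.uk"):
        return render(request, "uk/home.html")
    return render(request, "us/home.html")
```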
Possible to outsource computations to AWS and utilize results locally?
43,107,922
1
2
99
0
python,amazon-web-services,amazon-ec2,hpc,grid-computing
Possible: of course it is. You can use any kind of RPC to implement this. HTTPS requests, xml-rpc, raw UDP packets, and many more. If you're more interested in latency and small amounts of data, then something UDP based could be better than TCP, but you'd need to build extra logic for ordering the messages and retrying the lost ones. Alternatively something like Zeromq could help. As for the latency: only you can answer that, because it depends on where you're connecting from. Start up an instance in the region closest to you and run ping, or mtr against it to find out what's the roundtrip time. That's the absolute minimum you can achieve. Your processing time goes on top of that.
0
1
0
1
2017-03-30T03:08:00.000
3
0.066568
false
43,107,807
0
0
1
2
I'm working on a robot that uses a CNN that needs much more memory than my embedded computer (Jetson TX1) can handle. I was wondering if it would be possible (with an extremely low-latency connection) to outsource the heavy computations to EC2 and send the results back to be used in a Python script. If this is possible, how would I go about it, and what would the latency look like (not computation time, just sending to and from)?
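A tiny request/reply sketch of the RPC idea from the answer above, using the ZeroMQ option it mentions; the EC2 hostname and port are hypothetical, and a matching REP socket would run on the server side.

```python
import zmq

ctx = zmq.Context()
sock = ctx.socket(zmq.REQ)
sock.connect("tcp://my-ec2-host:5555")

sock.send(b"sensor frame or feature vector")  # ship the input to EC2
print(sock.recv())                             # result computed remotely
```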
Possible to outsource computations to AWS and utilize results locally?
43,107,931
1
2
99
0
python,amazon-web-services,amazon-ec2,hpc,grid-computing
I think it's certainly possible. You would need some scripts or a web server to transfer data back and forth. Here is how I think you might achieve it: 1) send all your training data to an EC2 instance; 2) train your CNN there; 3) save the weights and/or any other generated parameters you may need; 4) construct the CNN on your embedded system and load the weights from the EC2 instance -- since you won't be doing any training there and won't need to load the training set, memory usage will be minimal; 5) use your embedded device to predict whatever you may need. It's hard to give you an exact answer on latency because you haven't given enough information: it is highly dependent on your hardware, internet connection, amount of data you'd be transferring, software, etc. If you're only training once on an initial training set, you only need to transfer the weights once, so latency will be negligible. If you're constantly sending data and training, or doing predictions on the remote server, latency will be higher.
0
1
0
1
2017-03-30T03:08:00.000
3
1.2
true
43,107,807
0
0
1
2
I'm working on a robot that uses a CNN that needs much more memory than my embedded computer (Jetson TX1) can handle. I was wondering if it would be possible (with an extremely low-latency connection) to outsource the heavy computations to EC2 and send the results back to be used in a Python script. If this is possible, how would I go about it, and what would the latency look like (not computation time, just sending to and from)?
autotools: pass constant from configure.ac to python script
43,163,111
5
2
149
0
python,autotools,automake
Create a config.py.in with some contents like MYVAR = '''@MYVAR@''' and add it to AC_CONFIG_FILES in your configure.ac. You can then import config in your other Python scripts. This fulfills much the same function as config.h does for C programs.
0
1
0
1
2017-03-31T09:20:00.000
1
1.2
true
43,136,997
0
0
0
1
I want to pass a constant in a C-preprocessor style, but to a Python script. This constant is already declared with AC_DEFINE in my configure.ac file and used in my C program, and now I need to pass it to a Python script too. I tried a custom target in my Makefile.am with a sed call to preprocess a specific symbol in my Python script, but that seems like dirty coding to me. How can I achieve this?
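A sketch of the generated-module approach from the answer above: a config.py.in that configure turns into config.py. It assumes the file is listed in AC_CONFIG_FILES([config.py]) and that the value is made available for substitution (e.g. with AC_SUBST) so @MYVAR@ gets replaced.

```python
# config.py.in -- processed by configure into config.py
# @MYVAR@ is substituted with the value defined in configure.ac
MYVAR = '''@MYVAR@'''
```

Other Python scripts then simply do from config import MYVAR.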
How to return a process id of a lengthy process started using Thread in python before the thread completes its execution
43,145,849
0
1
254
0
python,multithreading,subprocess
If you are using subprocess.Popen simply to spin off another process, there is no reason you need to do so from another thread. A sub-process created this way does not block your main thread. You can continue to do other things while the sub-process is running. You simply keep a reference to the Popen object returned. The Popen object has all the facilities you need for monitoring / interacting with the sub-process. You can read and write to its standard input and output (via stdin and stdout members, if created with PIPE); you can monitor readability / writability of stdin and stdout (with select module); you can check whether the sub-process is still in existence with poll, reap its exit status with wait; you can stop it with terminate (or kill depending on how emphatic you wish to be). There are certainly times when it might be advantageous to do this from another thread -- for example, if you need significant interaction with the sub-process and implementing that in the main thread would over-complicate your logic. In that case, it would be best to arrange a mechanism whereby you signal to your other "monitoring" thread that it's time to shutdown and allow the monitoring thread to execute terminate or kill on the sub-process.
1
1
0
0
2017-03-31T12:55:00.000
1
0
false
43,141,252
0
0
0
1
How can I return the process id of a lengthy process started using Thread in Python before the thread completes its execution? I'm using a Tkinter GUI, so I can't start a lengthy process on the main thread; instead I start one on a separate thread. The thread in turn calls subprocess.Popen. This process should run for like 5-6 hours. But when I press the stop button, I need this process to stop, yet I am unable to return the process id of the process created using subprocess.Popen. Is there any solution to this?
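The answer above argues for keeping the Popen handle in the main (Tkinter) thread and stopping the child from the stop-button callback. A minimal sketch of that pattern, with the command line and widget wiring as placeholders:
```python
import subprocess

proc = None

def start_job():
    """Launch the long-running job without blocking the GUI."""
    global proc
    proc = subprocess.Popen(["python", "long_running_job.py"])  # placeholder command

def stop_job():
    """Stop-button callback: terminate the child if it is still running."""
    if proc is not None and proc.poll() is None:   # poll() is None while running
        proc.terminate()                           # or proc.kill() to be more emphatic
        proc.wait()                                # reap the exit status
        print("stopped pid", proc.pid)
```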
Dynamic task generation in an Airflow DAG
43,146,209
0
5
1,242
0
python,airflow
Trigger_dag concept: let the task that uses a database hook in a Python operator to generate a list be the task in the controller DAG, and pass each item in the list to the trigger_dag in the params section. You will find a reference in the examples folder of your Airflow installation. Good luck!
0
1
0
0
2017-03-31T15:48:00.000
1
0
false
43,144,802
0
0
0
1
I want to use Airflow to generate client reports, I would like to have one DAG that loops through all clients and launches a task to generate their report. The list of clients is gathered by the first task in the DAG and cannot be hardcoded in. Basically I have a task that uses a database hook in a python operator to generate a list. Then for each item in the list I would like to execute a task using a python operator with that item being passed as an argument to the python function. Is there a certain pattern I can use to achieve this?
Where should i save my tornado custom options
43,171,572
0
0
28
1
python,tornado
Yes, there is the tornado.options package, which does pretty much what you need. Keep in mind, however, that the values saved here are not persisted between requests; if you need that kind of functionality, you will have to implement an external persistence solution, which you already have done with SQLite.
0
1
0
0
2017-03-31T16:39:00.000
1
0
false
43,145,705
0
0
1
1
I am working on a Python/Tornado web application. I have several options to save in my app. Those options can be changed by the user, and those options will be accessed very often. I have created an SQLite database, but that involves some disk operations, and I am asking what the best location for those options is. Does Tornado embed a feature for custom user options? Thanks
How do I open Python IDLE (Shell WIndow) in WIndows 10?
47,840,739
2
12
68,871
0
python,shell
If you're using Windows 10, just type in "idle" where it says "Type here to search".
0
1
0
0
2017-04-01T16:46:00.000
5
0.07983
false
43,159,488
1
0
0
3
I am just starting to learn Python and I am using Windows 10. I downloaded and installed Python 3.4.3. But every time I open Python from my Desktop or from C:\Python\python.exe, it just opens a black command prompt without any menu options like the File menu, Edit menu, Format menu, etc. I can't see any colors in the code; it's just a black screen with white text. I searched about it on the internet and came to know that what I am opening is the Editor window and I need to open the Shell window in order to have access to all of those options and features. I can't figure out where the .exe of the Shell window is and what its name is. Please help me. P.S. I also tried to open pythonw.exe that was present in the Python folder where it was installed, but nothing opened.
How do I open Python IDLE (Shell WIndow) in WIndows 10?
56,585,937
4
12
68,871
0
python,shell
Start menu > type IDLE (Python 3.4.3 <bitnum>-bit). Replace <bitnum> with 32 if 32-bit, otherwise 64. Example: IDLE (Python 3.6.2 64-bit). I agree with the one who says: just type "IDLE" in the Start menu where it says "Type here to search" and press Enter.
0
1
0
0
2017-04-01T16:46:00.000
5
0.158649
false
43,159,488
1
0
0
3
I am just starting to learn Python and I am using Windows 10. I downloaded and installed Python 3.4.3. But every time I open Python from my Desktop or from C:\Python\python.exe, it just opens a black command prompt without any menu options like the File menu, Edit menu, Format menu, etc. I can't see any colors in the code; it's just a black screen with white text. I searched about it on the internet and came to know that what I am opening is the Editor window and I need to open the Shell window in order to have access to all of those options and features. I can't figure out where the .exe of the Shell window is and what its name is. Please help me. P.S. I also tried to open pythonw.exe that was present in the Python folder where it was installed, but nothing opened.
How do I open Python IDLE (Shell WIndow) in WIndows 10?
43,159,526
16
12
68,871
0
python,shell
In Windows you will need to right-click a .py file and press Edit to edit the file using IDLE, since the default action of double-clicking a .py is executing the file with python in a shell prompt. To open just IDLE, click on C:\Python36\Lib\idlelib\idle.bat
0
1
0
0
2017-04-01T16:46:00.000
5
1.2
true
43,159,488
1
0
0
3
I am just starting to learn Python and I am using Windows 10. I downloaded and installed Python 3.4.3. But every time I open Python from my Desktop or from C:\Python\python.exe, it just opens a black command prompt without any menu options like the File menu, Edit menu, Format menu, etc. I can't see any colors in the code; it's just a black screen with white text. I searched about it on the internet and came to know that what I am opening is the Editor window and I need to open the Shell window in order to have access to all of those options and features. I can't figure out where the .exe of the Shell window is and what its name is. Please help me. P.S. I also tried to open pythonw.exe that was present in the Python folder where it was installed, but nothing opened.
spark consume from stream -- considering data for longer period
43,182,382
2
1
35
0
python-3.x,apache-spark,pyspark
Your streaming job is not supposed to calculate the daily count/avg. Approach 1: you can store the data consumed from Kafka into persistent storage like a DB/HBase/HDFS, and then you can run a daily batch which will calculate all the statistics for you, like the daily count or avg. Approach 2: in order to get that information from streaming itself, you need to use accumulators which will hold the record count and sum, and calculate the avg accordingly. Approach 3: use a streaming window, but holding data for a day doesn't make any sense. If you need a 5/10 min avg, you can use this. I think the first method is preferable as it will give you more flexibility to calculate all the analytics you want.
0
1
0
0
2017-04-03T04:42:00.000
1
1.2
true
43,176,607
0
1
0
1
We have a Spark job running which consumes data from a Kafka stream, does some analytics and stores the result. Since data is consumed as it is produced to Kafka, if we want to get the count for the whole day, the count for an hour, or the average for the whole day, that is not possible with this approach. Is there any way we should follow to accomplish such a requirement? Appreciate any help. Thanks and Regards, Raaghu.K
Are there any negative consequences if a python script moves/renames its parent directory?
43,177,411
2
2
24
0
python,github,directory
As long as all of the code used by the script has been compiled and loaded into the Python VM there will be no issue with the source moving since it will remain resident in memory until the process ends or is replaced (or swapped out, but since it is considered dirty data it will be swapped in exactly the same). The operating system, though, may attempt to block the move operation if any files remain open during the process.
0
1
0
1
2017-04-03T05:47:00.000
1
0.379949
false
43,177,320
0
0
0
1
I have a GitHub project available to others. One of the scripts, update.py, checks GitHub every day (via cron) to see if there is a newer version available. Locally, the script is located at /home/user/.Project/update.py. If the version on GitHub is newer, then update.py moves /home/user/.Project/ to /home/user/.OldProject/, clones the GitHub repo and moves/renames the downloaded repo to /home/user/.Project/. It has worked perfectly for me about five times, but I just realized that the script is moving itself while it is still running. Are there any unforeseen consequences to this approach, and is there a better way?
import os, trying to refresh event variables after running a script
43,647,525
0
0
45
0
python,python-2.7
Since I know where it will be installed, you can set the env and then call subprocesses. The issue I was having is that a lot of these executables assign their own path variables, which is what I wanted to do. Since I can't relaunch a new console due to security issues, the best course of action would be to navigate to the new application's target bin folder (or otherwise) and then set the env, or pass it into subprocesses by appending it to the env variables.
0
1
0
0
2017-04-03T18:05:00.000
2
1.2
true
43,191,431
1
0
0
1
I install a program through Python, git in this case. Immediately after, I call os.system("git --version"), but the call doesn't go through because the snapshot of environment variables has not been updated. Is there a way to refresh the cmd prompt? Maybe just reimport os or something? The issue I am having is that after installing an application, the app-related cmd commands are not yet recognized. I have noticed this is a recurring issue in all of my platform configuration installs. I spent a while reading docs but I haven't seen anything really jumping out at me, other than the concept that the env is pulled at the time of importing os, so maybe that means I could dump and reimport it.
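A sketch of the workaround the answer above settles on: rather than refreshing the console's snapshot of the environment, extend PATH in-process (or call the new executable by its full path). The git install location is an assumption.
```python
import os
import subprocess

git_bin = r"C:\Program Files\Git\cmd"   # assumed install location

# Extend this process's PATH so the newly installed executable can be found
os.environ["PATH"] = git_bin + os.pathsep + os.environ.get("PATH", "")
subprocess.call(["git", "--version"])

# Or sidestep PATH entirely and call the executable by its full path
subprocess.call([os.path.join(git_bin, "git.exe"), "--version"])
```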
How to handle filepaths?
43,198,109
1
0
61
0
python
A module's location is always available in the __file__ variable. You can use the functions in os.path (I'm mainly thinking of dirname and join) to transform module-relative paths into absolute paths.
0
1
0
0
2017-04-04T04:11:00.000
2
0.099668
false
43,198,084
1
0
0
1
I discovered that a script's "current working directory" is, initially, not where the script is located, but rather where the user is when he/she runs the script. If the script is at /Desktop/Projects/pythonProject/myscript.py, but I'm at /Documents/Arbitrary in my terminal when I run the script, then that's going to be its present working directory, and an attempt at open('data.txt') is going to give File Not Found because it's not looking in the right directory. So how is a script supposed to open files if it can't know where it's being run from? How is this handled? My initial thought was to use absolute paths. Say my script needs to open data.txt which is stored alongside it in its package pythonProject. Then I would just say open('/Desktop/Projects/pythonProject/data.txt'). But then you can't ever move the project without editing every path in it, so this can't be the right solution. Or is the answer simply that you must be in the directory where the script is located whenever you run the script? That doesn't seem right either. Is there some simple manipulation for this that I'm not thinking of? Are you just supposed to os.chdir to the script's location at the beginning of the script?
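The answer above points at __file__ plus os.path; the standard idiom for the data.txt example in this question looks like the following sketch:
```python
import os

HERE = os.path.dirname(os.path.abspath(__file__))   # directory of this script
DATA_PATH = os.path.join(HERE, "data.txt")           # data.txt lives next to it

with open(DATA_PATH) as f:
    print(f.read())
```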
Broken Pipe Error Redis
43,210,008
8
5
9,490
0
python,sockets,redis,redis-py
Redis' String data type can be at most 512MB.
0
1
0
0
2017-04-04T10:22:00.000
2
1.2
true
43,204,496
0
1
0
1
We are trying to SET pickled object of size 2.3GB into redis through redis-py package. Encountered the following error. BrokenPipeError: [Errno 32] Broken pipe redis.exceptions.ConnectionError: Error 104 while writing to socket. Connection reset by peer. I would like to understand the root cause. Is it due to input/output buffer limitation at server side or client side ? Is it due to any limitations on RESP protocol? Is single value (bytes) of 2.3 Gb allowed to store into Redis ? import redis r = redis.StrictRedis(host='10.X.X.X', port=7000, db=0) pickled_object = pickle.dumps(obj_to_be_pickled) r.set('some_key', pickled_object) Client Side Error BrokenPipeError: [Errno 32] Broken pipe /usr/local/lib/python3.4/site-packages/redis/connection.py(544)send_packed_command() self._sock.sendall(item) Server Side Error 31164:M 04 Apr 06:02:42.334 - Protocol error from client: id=95 addr=10.2.130.144:36120 fd=11 name= age=0 idle=0 flags=N db=0 sub=0 psub=0 multi=-1 qbuf=16384 qbuf-free=16384 obl=42 oll=0 omem=0 events=r cmd=NULL 31164:M 04 Apr 06:07:09.591 - Protocol error from client: id=96 addr=10.2.130.144:36139 fd=11 name= age=9 idle=0 flags=N db=0 sub=0 psub=0 multi=-1 qbuf=40 qbuf-free=32728 obl=42 oll=0 omem=0 events=r cmd=NULL Redis Version : 3.2.8 / 64 bit
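The answer above only states the 512 MB string limit. One possible workaround (an assumption on my part, not something the answer prescribes) is to split the pickled blob into chunks below that limit and store each chunk under its own key:
```python
import pickle
import redis

r = redis.StrictRedis(host="10.0.0.1", port=7000, db=0)    # placeholder host
CHUNK = 256 * 1024 * 1024                                   # 256 MB per chunk

blob = pickle.dumps(obj_to_be_pickled)                      # object from the question
chunks = [blob[i:i + CHUNK] for i in range(0, len(blob), CHUNK)]
for idx, part in enumerate(chunks):
    r.set("some_key:%d" % idx, part)
r.set("some_key:count", len(chunks))
# To read it back, fetch some_key:0 .. some_key:N-1 in order, concatenate, then unpickle.
```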
How would I go about making a Python script into an executable?
43,219,241
0
0
492
0
python,windows,executable
You don't have shell scripts on Windows, you have batch or powershell. If your reading is teaching Unix things, get a virtual machine running (insert popular Linux distribution here). Regarding python, you just execute python script.py
0
1
0
1
2017-04-04T23:10:00.000
2
0
false
43,219,217
0
0
0
1
I'm reading from a "bookazine" which I purchased from WHSmiths today and its said during the setup I need to type in these commands into the terminal (or the Command Prompt in my case) in order to make a script without needing to do it manually. One of these commands is chmod +x (file name) but because this is based of Linux or Mac and I am on Windows I am not sure how to make my script executable, how do I? Thanks in advance.
Segmentation fault when I try to run Anaconda Navigator
53,868,885
1
6
9,783
0
python,ubuntu,anaconda,navigator
I had the same issue when I installed the OpenCV library using conda. Most probably downgrading something makes this issue happen. Just type: conda update --all
0
1
0
0
2017-04-04T23:58:00.000
5
0.039979
false
43,219,679
1
1
0
2
I have recently installed Anaconda for Python 3.6, but it shows the error "Segmentation fault" whenever I try to run Anaconda-Navigator. I've tried just writing Anaconda-Navigator in the terminal and also going to my Anaconda3 folder and trying to execute it inside bin. The only solution that works so far is accessing the previously mentioned bin folder as root. My problem is that I need to activate TensorFlow before I run anything in my console, but that is impossible as a root user. I've already tried to upgrade both Anaconda and Navigator and reinstall them, but nothing occurs. Does anyone here have any idea of what is happening?
Segmentation fault when I try to run Anaconda Navigator
47,718,983
0
6
9,783
0
python,ubuntu,anaconda,navigator
I had the same problem. I solved it by adding /lib to my LD_LIBRARY_PATH. Note: on my system the Anaconda installation path is /home/pushyamik/anaconda3.
0
1
0
0
2017-04-04T23:58:00.000
5
0
false
43,219,679
1
1
0
2
I have recently installed Anaconda for Python 3.6, but it shows the error "Segmentation fault" whenever I try to run Anaconda-Navigator. I've tried just writing Anaconda-Navigator in the terminal and also going to my Anaconda3 folder and trying to execute it inside bin. The only solution that works so far is accessing the previously mentioned bin folder as root. My problem is that I need to activate TensorFlow before I run anything in my console, but that is impossible as a root user. I've already tried to upgrade both Anaconda and Navigator and reinstall them, but nothing occurs. Does anyone here have any idea of what is happening?
How to delay the run of SCons source scanner?
43,432,166
0
0
180
0
python,build,dependencies,scons
Here's another potential solution which is kind of another workaround. Is it possible for the scanner to speculate the list of files that will be generated if the *.i swig interface files are passed to it as the "node" argument? This way the scanner doesn't actually need the files to be present to generate the list of dependencies. In general, I'm wondering if the solution to this problem is to just write logic to aggressively speculate the dependencies before the SWIG libraries are actually generated. I don't assume much info can be gained from looking at the "_*.so" files themselves.
0
1
0
0
2017-04-05T02:18:00.000
2
0
false
43,220,715
1
0
0
2
I have a SCons build system set up to build some libraries from C++, as well as Python wrappers for them via SWIG. Then the results are used for data processing, which is also a part of SCons build. The data processing is Python scripts that use the built SWIG-wrapped libraries. I've set up the dependencies such that data processing starts after all the libraries and wrappers are built, and that works out well. But there's a caveat (you guessed it, right? :) ). I want to add a source scanner, which also uses some of the SWIG libraries to expand the dependencies. The problem is that the scanner runs too soon. In fact, I see it running twice - once at some point early in the build and the other just before data processing starts. So the first scanner run in parallel build typically happens before all the necessary libraries are built, so it fails. How can I make the scanner itself depend on library targets? Or, can I delay the scanner run - or eliminate the first scanner run? Any other ideas?
How to delay the run of SCons source scanner?
43,220,910
1
0
180
0
python,build,dependencies,scons
One workaround I think would work is to turn the scanner into a builder that runs the scan process instead of the scanner and generates a file that lists all the dependencies. The data processing build would then simply have a scanner to parse that file. I'd expect SCons not to attempt to run it early, because it would be aware of the scanned source file being a target of some builder. Assuming it works, it is still a sub-par solution as it complicates the build setup and adds extra file I/O for a not-so-small file (the dependencies are thousands of files, with long paths).
0
1
0
0
2017-04-05T02:18:00.000
2
0.099668
false
43,220,715
1
0
0
2
I have a SCons build system set up to build some libraries from C++, as well as Python wrappers for them via SWIG. Then the results are used for data processing, which is also a part of SCons build. The data processing is Python scripts that use the built SWIG-wrapped libraries. I've set up the dependencies such that data processing starts after all the libraries and wrappers are built, and that works out well. But there's a caveat (you guessed it, right? :) ). I want to add a source scanner, which also uses some of the SWIG libraries to expand the dependencies. The problem is that the scanner runs too soon. In fact, I see it running twice - once at some point early in the build and the other just before data processing starts. So the first scanner run in parallel build typically happens before all the necessary libraries are built, so it fails. How can I make the scanner itself depend on library targets? Or, can I delay the scanner run - or eliminate the first scanner run? Any other ideas?
Do Docker containers share a single Python GIL?
43,245,303
2
5
948
0
python,python-3.x,docker,containers,virtualization
So does Docker share a common Python GIL lock among all containers? NO. The GIL is per Python process; a Docker container may have 1 or many Python processes, each with its own GIL. If you are not multi-threading, you should not even be aware of the GIL. Are you using threads at all?
0
1
0
0
2017-04-06T03:35:00.000
1
0.379949
false
43,245,220
1
0
0
1
When I run a Python script inside a Docker container, it completes one execution loop in ~1 minute. Now as I spin up 2 more containers from the same image and run Python scripts inside them, everything slows down to a crawl and starts requiring 5-6 minutes per loop. None of the scripts are resource bound; there are plenty of RAM and CPU cores sitting around idle. This happens when running 3 containers on a 64-core Xeon Phi system. So does Docker share a common Python GIL lock among all containers? What are my options to separate the GILs, so each process will run at its full potential speed? Thank you!
Celery: Is it better to store task results in MySQL or Redis?
43,264,780
2
2
931
1
python,mysql,django,redis,celery
Performance-wise it's probably going to be Redis but performance questions are almost always nuance based. Redis stores lists of data with no requirement for them to relate to one another so is extremely fast when you don't need to use SQL type queries against the data it contains.
0
1
0
0
2017-04-06T19:57:00.000
1
0.379949
false
43,264,701
0
0
0
1
Currently I am using Celery to build a scheduled database synchronization feature, which periodically fetches data from multiple databases. If I want to store the task results, would the performance be better if I store them in Redis instead of an RDB like MySQL?
Celery: When should you choose Redis as a message broker over RabbitMQ?
72,343,366
0
58
22,467
0
python,django,redis,rabbitmq,celery
The Redis broker gives tasks to workers in a fair round robin between different queues. Rabbit is FIFO always. For me, a fair round robin was preferable and I tried both. Rabbit seems a tad more stable though.
0
1
0
0
2017-04-06T20:06:00.000
2
0
false
43,264,838
0
0
0
2
My rough understanding is that Redis is better if you need the in-memory key-value store feature, however I am not sure how that has anything to do with distributing tasks? Does that mean we should use Redis as a message broker IF we are already using it for something else?
Celery: When should you choose Redis as a message broker over RabbitMQ?
48,627,555
75
58
22,467
0
python,django,redis,rabbitmq,celery
I've used both recently (2017-2018), and they are both super stable with Celery 4, so your choice can be based on the details of your hosting setup. If you must use Celery version 2 or version 3, go with RabbitMQ. Otherwise: if you are using Redis for any other reason, go with Redis; if you are hosting at AWS, go with Redis so that you can use a managed Redis as a service; if you hate complicated installs, go with Redis; if you already have RabbitMQ installed, stay with RabbitMQ. In the past, I would have recommended RabbitMQ because it was more stable and easier to set up with Celery than Redis, but I don't believe that's true any more. Update 2019: AWS now has a managed service that is equivalent to RabbitMQ called Amazon MQ, which could reduce the headache of running this as a service in production. Please comment below if you have any experience with this and celery.
0
1
0
0
2017-04-06T20:06:00.000
2
1.2
true
43,264,838
0
0
0
2
My rough understanding is that Redis is better if you need the in-memory key-value store feature, however I am not sure how that has anything to do with distributing tasks? Does that mean we should use Redis as a message broker IF we are already using it for something else?
How to restart a failed task on Airflow
43,330,451
98
54
35,603
0
python,hadoop,airflow
In the UI: go to the DAG, and the DAG run of the run you want to change; click on Graph View; click on task A; click "Clear". This will let task A run again, and if it succeeds, task C should run. This works because when you clear a task's status, the scheduler will treat it as if it hadn't run before for this DAG run.
0
1
0
0
2017-04-07T06:08:00.000
2
1.2
true
43,270,820
0
0
0
1
I am using a LocalExecutor and my DAG has 3 tasks where task(C) is dependent on task(A). Task(B) and task(A) can run in parallel, something like below: A-->C B So task(A) has failed, but task(B) ran fine. Task(C) is yet to run as task(A) has failed. My question is how do I rerun Task(A) alone so Task(C) runs once Task(A) completes and the Airflow UI marks them as success.
How to install multiple whl files in cmd
43,314,666
-3
6
9,927
0
python,cmd
For installing multiple packages on the command line, just pass them as a space-delimited list, e.g.: pip install numpy pandas
0
1
0
0
2017-04-10T03:23:00.000
5
-0.119427
false
43,314,517
1
0
0
1
I know how to install *.whl files through cmd (the code is simply python -m pip install *so-and-so-.whl). But since I accidentally deleted my OS and had no backups, I found myself in the predicament of having to reinstall all of my whl files for my work. This comes to around 50 files. I can do this manually, which is pretty simple, but I was wondering how to do this in a single line. I can't seem to find anything that would allow me to simply type in python -m pip install *so-and-so.whl to find all of the whl files in the directory and install them. Any ideas?
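The answer above passes packages as a space-delimited list; the same idea works for a folder full of local wheels by globbing them and handing the whole list to one pip call. The folder path is a placeholder.
```python
import glob
import subprocess
import sys

wheels = glob.glob(r"C:\wheels\*.whl")      # assumed folder holding the ~50 wheels
if wheels:
    subprocess.check_call([sys.executable, "-m", "pip", "install"] + wheels)
```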
How best to install Python + modules on Windows using InstallShield
43,726,757
1
2
847
0
python,python-2.7,installshield,python-module
I never received an answer here, so I forged ahead on my own. The Windows Python 2.7.13 installation includes pip and setuptools by default. That fact allowed me to switch from .exe module installers to wheel (.whl) installers. Since we have no Internet connection, I couldn't use a whl with unmet dependencies, but thankfully none of the modules I needed fell into that category. Once Python itself is installed, each pip installation is triggered right from the InstallShield code via LaunchAppAndWait(). The only "gotcha" was that the pywin32 module has a post-install script that must be run after the install by pip. That was handled automatically with the exe installer, so I didn't even know about it until things initially went wrong with the whl install.
0
1
0
0
2017-04-10T19:34:00.000
1
1.2
true
43,331,589
1
0
0
1
We have an existing InstallShield installer which installs the following: our product; Python 2.7.13 via the official Windows exe installer; 3 Python modules (pywin32, psycopg, and setuptools) via exe installers; 2 egg modules that we produce. Python is installed silently, but the 3 module installers bring up their own installer windows that block our install, look very unprofessional, and require the user to click through them. There appear to be no parameters that we can pass to force them to run silently. Our installer is 7 years old. I assume that advancements in how Python modules are installed on Windows have made exe-based module installers completely obsolete, but I can't seem to find a clear answer on what the recommended "modern" method of installation would be. Given the following limitations, what can we do to make the installer run to completion with no need to click through the module installers? The following conditions apply: we must continue to use InstallShield as the installation engine; we will not have an Internet connection during installation; the install is for all users on the machine.
How to switch Python versions in Terminal?
43,354,458
3
38
185,402
0
python,django,bash,macos,terminal
If you have various versions of Python installed, you can launch any of them using pythonx.x.x, where x.x.x represents your version.
0
1
0
0
2017-04-11T19:11:00.000
10
0.059928
false
43,354,382
1
0
0
2
My Mac came with Python 2.7 installed by default, but I'd like to use Python 3.6.1 instead. How can I change the Python version used in Terminal (on Mac OS)? Please explain clearly and offer no third party version manager suggestions.
How to switch Python versions in Terminal?
62,839,173
0
38
185,402
0
python,django,bash,macos,terminal
I have followed the below steps on my MacBook. Open a terminal, type nano ~/.bash_profile and hit Enter. Now add the line alias python=python3. Press CTRL + O to save it; it will prompt for a file name, just hit Enter, and then press CTRL + X. Now check the Python version by using the command: python --version
0
1
0
0
2017-04-11T19:11:00.000
10
0
false
43,354,382
1
0
0
2
My Mac came with Python 2.7 installed by default, but I'd like to use Python 3.6.1 instead. How can I change the Python version used in Terminal (on Mac OS)? Please explain clearly and offer no third party version manager suggestions.
What is the relationship between Celery and RabbitMQ?
43,379,719
1
6
3,260
0
python,rabbitmq,celery
Celery is the task management framework--the API you use to schedule jobs, the code that gets those jobs started, the management tools (e.g. Flower) you use to monitor what's going on. RabbitMQ is one of several "backends" for Celery. It's an oversimplification to say that Celery is a high-level interface to RabbitMQ. RabbitMQ is not actually required for Celery to run and do its job properly. But, in practice, they are often paired together, and Celery is a higher-level way of accomplishing some things that you could do at a lower level with just RabbitMQ (or another queue or message delivery backend).
0
1
1
0
2017-04-12T20:59:00.000
2
0.099668
false
43,379,554
0
0
0
1
Is Celery mostly just a high level interface for message queues like RabbitMQ? I am trying to set up a system with multiple scheduled workers doing concurrent http requests, but I am not sure if I would need either of them. Another question I am wondering is where do you write the actual task in code for the workers to complete, if I am using Celery or RabbitMQ?
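The answer above says Celery is where you define and schedule tasks while RabbitMQ merely delivers the messages. Below is a minimal sketch of what "writing the actual task" looks like; the broker URL and module name are assumptions.
```python
# tasks.py
import requests
from celery import Celery

app = Celery("tasks", broker="amqp://guest@localhost//")   # assumed RabbitMQ URL

@app.task
def fetch(url):
    """The work a Celery worker process executes."""
    return requests.get(url).status_code

# Enqueue from your scheduler / web app:   fetch.delay("https://example.com")
# Start workers with:                      celery -A tasks worker
```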
Connect from local GAE project to Google Cloud Datastore
43,394,458
-2
4
378
0
python,google-app-engine,google-cloud-datastore
You have to specify the path to your codebase. If you are running the command from the same folder, use . (the current directory): dev_appserver.py .
0
1
0
0
2017-04-13T13:59:00.000
1
-0.379949
false
43,394,318
0
0
1
1
I have a project on GAE which uses Google Cloud Datastore. Of course, I have a development environment on my local machine (with a local Datastore), and a stage environment and a production environment on Google Cloud, with two Datastores (stage & prod), one for each environment. When I run the project on my local machine, NDB connects me to my local Datastore. And that's a problem, because I want to connect to Google Cloud Datastore. How can I run the project on my local machine and connect it to Google Cloud Datastore (stage)? I use Python, and run the project via: dev_appserver.py app.yaml
Linux pip package installation error
43,401,090
3
1
3,160
0
linux,python-2.7,pip,installation
Seems there's a problem with your pip installation. I have two options for you. 1) Edit file /usr/lib/python2.7/site-packages/packaging/requirements.py and replace line MARKER_EXPR = originalTextFor(MARKER_EXPR())("marker") with MARKER_EXPR = originalTextFor(MARKER_EXPR)("marker") OR 2) Try and upgrade your pip installation with pip install -U pip setuptools
0
1
0
0
2017-04-13T19:54:00.000
3
0.197375
false
43,400,703
1
0
1
1
I am using python 2.7 and trying to install scrapy using pip but get this: Exception: Traceback (most recent call last): File "/usr/local/lib/python2.7/dist-packages/pip/basecommand.py", line 215, in main status = self.run(options, args) File "/usr/local/lib/python2.7/dist-packages/pip/commands/install.py", line 324, in run requirement_set.prepare_files(finder) File "/usr/local/lib/python2.7/dist-packages/pip/req/req_set.py", line 380, in prepare_files ignore_dependencies=self.ignore_dependencies)) File "/usr/local/lib/python2.7/dist-packages/pip/req/req_set.py", line 634, in _prepare_file abstract_dist.prep_for_dist() File "/usr/local/lib/python2.7/dist-packages/pip/req/req_set.py", line 129, in prep_for_dist self.req_to_install.run_egg_info() File "/usr/local/lib/python2.7/dist-packages/pip/req/req_install.py", line 412, in run_egg_info self.setup_py, self.name, File "/usr/local/lib/python2.7/dist-packages/pip/req/req_install.py", line 387, in setup_py import setuptools # noqa File "/root/.local/lib/python2.7/site-packages/setuptools/init.py", line 12, in import setuptools.version File "/root/.local/lib/python2.7/site-packages/setuptools/version.py", line 1, in import pkg_resources File "/root/.local/lib/python2.7/site-packages/pkg_resources/init.py", line 72, in import packaging.requirements File "/root/.local/lib/python2.7/site-packages/packaging/requirements.py", line 59, in MARKER_EXPR = originalTextFor(MARKER_EXPR())("marker") TypeError: call() takes exactly 2 arguments (1 given)
Run python script without typing python at the front
43,407,826
1
0
81
0
python
Are you running it from the same folder using the ./SleepCalc.py command? SleepCalc.py only will not work.
0
1
0
1
2017-04-14T08:17:00.000
1
0.197375
false
43,407,803
1
0
0
1
Usually I run a Python script with python myscript.py. I want to run the script directly without typing python. I already added the shebang #!/usr/bin/env python at the top of my script, and then gave the file permission to execute with chmod +x SleepCalc.py, but it still tells me "Command not found". Is there anything I need to change in csh? Or anything I did wrong?
Capture run time of python script executed inside shell script
43,416,717
0
0
884
0
python,bash,shell
Call the Python script with /usr/bin/time, i.e. prefix the invocation of the script with it. This allows you to track the CPU and wall-clock time of the script.
0
1
0
1
2017-04-14T17:49:00.000
2
0
false
43,416,606
1
0
0
1
I have a bash shell script which is internally calling a Python script. I would like to know how long the Python script is taking to execute. I am not allowed to make changes to the Python script. Any leads would be helpful, thanks in advance.
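The answer above relies on /usr/bin/time on the shell side. If a Python-side wrapper is acceptable instead (an alternative, not what the answer describes), a sketch that times the untouched script follows; the script name is a placeholder.
```python
# time_wrapper.py -- call this from the bash script instead of "python script.py"
import subprocess
import sys
import time

start = time.time()
ret = subprocess.call([sys.executable, "script.py"] + sys.argv[1:])  # placeholder name
sys.stderr.write("script.py took %.2f seconds\n" % (time.time() - start))
sys.exit(ret)
```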
Executing Python code lines in Cygwin
43,420,091
1
0
102
0
python,python-2.7
If you go to the Cygwin site, you can find the answers to all of your questions. Cygwin provides a collection of tools that give functionality similar to a Linux distro on Windows. Cygwin also provides substantial POSIX API functionality. When programmers launch their python scripts using Cygwin, they are using the tools provided within the Cygwin library. To avoid spoon-feeding while still answering your question, go into Cygwin and test it for yourself. What happens when you enter that command within Cygwin? Once you see the result, if you have any other questions, comment them on here.
0
1
0
0
2017-04-14T22:24:00.000
1
1.2
true
43,419,989
1
0
0
1
I am new to Python. I should say that I don't clearly understand the relation between Cygwin and Python. I've seen tutorials of programmers launching a Python script in Cygwin with the following line: python "file path". I think that this line makes Python build and run that script. My question is: is it possible to directly write "print("Hello World")" in Cygwin? By the way, are the three arrows (>>>) used to designate a Cygwin shell input line? Many thanks in advance! Nicola
Is it recommended to use TensorFlow under Ubuntu or under Windows?
43,425,655
2
2
8,589
0
python,ubuntu,tensorflow,deep-learning
I think it's easier for you to use Ubuntu if you have the possibility. Getting the LAPACK and BLAS libraries from sources is easier on Linux (you can get precompiled packages for Windows though). I prefer native pip, but for Windows, and for getting started, Anaconda should be the choice.
0
1
0
0
2017-04-15T11:43:00.000
2
1.2
true
43,425,621
1
1
0
1
I am a newbie to TensorFlow (and the whole deep learning as well). I have a machine with dual boot, Windows 10 and Ubuntu 16. Under which OS should I install and run TensorFlow? Windows or Ubuntu? Also, what is the recommended Python environment? Anaconda or native pip?
Capture the value of python -c "some code"
43,431,194
0
1
87
0
python
The option list starts after the code (which was passed as a string literal) according to the manual: "Specify the command to execute (see next section). This terminates the option list (following options are passed as arguments to the command)." It means that the name of the script will be replaced by -c. So python -c "import sys; print(sys.argv)" 1 2 3 results in ['-c', '1', '2', '3']. A possible solution is the usage of the inspect module, for example python3 -c "import sys; import inspect; inspect.getsource(sys.modules[__name__])", but it causes a TypeError because the __main__ module is a built-in one.
0
1
0
0
2017-04-15T20:11:00.000
4
0
false
43,430,790
1
0
0
2
When using sys.argv on python -c "some code" I only get ['-c'], how can I reliably access the code being passed to -c as a string?
Capture the value of python -c "some code"
43,437,447
0
1
87
0
python
This works: python -c "import sys; exec(sys.argv[1])" "print 'hello'" outputs hello
0
1
0
0
2017-04-15T20:11:00.000
4
0
false
43,430,790
1
0
0
2
When using sys.argv on python -c "some code" I only get ['-c'], how can I reliably access the code being passed to -c as a string?
Anaconda Prompt, where is the exe file saved on windows?
43,432,128
6
2
8,760
0
python,conda
Check your start menu, it should be there. Its a link named "Anaconda Prompt", that links to %windir%\system32\cmd.exe "/K" C:\...\Anaconda3\Scripts\activate.bat C:\...\Anaconda3, it's executed in C:\Users\...\AppData\Roaming\SPB_16.6
0
1
0
0
2017-04-15T22:51:00.000
1
1.2
true
43,432,038
1
0
0
1
I am looking for the exe file for Anaconda Prompt. I am looking in C:\Anaconda3\Scripts and don't know what it's named.
Processing Multiple files in hadoop python
46,250,897
0
0
212
0
python,file,hadoop,pyspark,bigdata
"How to handle files arriving at different times?" Doesn't matter unless your data is time-sensitive. If so, then your raw data should include the timestamp at which the record was written. "Should such large files be combined or processed separately?" Large, separate files are best. Take note of the HDFS block size; this size depends on your installation. "I want this solution to be implemented in python" You're welcome to use Spark Streaming to watch a directory for files, or Oozie+Spark to just schedule regular batches, but other tools are arguably simpler. Some you can research: Apache NiFi, Streamsets Data Collector, Apache Flume. Flume will require you to install agents on those 10 external servers. Each of the listed services can read data in near-real time, so you don't explicitly need 30-minute batches.
0
1
0
0
2017-04-16T09:56:00.000
1
0
false
43,435,955
0
0
0
1
I have a scenario where text-delimited files arrive from different servers (around 10) to a Hadoop system every 30 minutes. Each file has around 2.5 million records and may not arrive at the same time. I am looking for an approach where these files can be processed every 30 minutes. My questions are: How to handle files arriving at different times? I want the data to be aggregated across the 10 files. Should such large files be combined or processed separately? I want this solution to be implemented in Python, but solutions using any tools/techniques in Hadoop would be appreciated.
How do I run the Sample files included in CUDA 8.0?
43,450,571
3
2
7,147
0
python,cuda,tensorflow,installation
First copy the samples folder from the installation folder somewhere else, for example your home directory. Then navigate to the sample you wish to run, type make, and it should create an executable file. For example, in the folder samples/1_Utilities/deviceQuery you should get an executable file named deviceQuery and you can run it with ./deviceQuery. Edit: just noticed that you are more familiar with Python than C, therefore you should check out pyCUDA.
0
1
0
0
2017-04-17T09:52:00.000
2
0.291313
false
43,449,122
1
0
0
1
I'm installing CUDA 8.0 on my MacBook Pro running Sierra (by way of installing TensorFlow). Very new to GPU computing; I've only ever worked in Python at a very high level (lots of data analysis using numpy). Most of the language on the CUDA website assumes knowledge I don't have. Specifically, I have no idea how to 1) run the sample programs included in the Samples folder, and 2) "change library pathnames in my .bashrc file" (I'm fairly sure I don't have a .bashrc file, just .bash_history and .bash_profile). How do I do the above? And are there any good ground-up references online for someone very new to all this?
Python terminal output width
43,573,926
3
9
8,727
0
python,python-3.x,shell,unix,formatting
I have the same problem while using pandas. So if this is what you are trying to solve, I fixed mine by doing pd.set_option('display.width', pd.util.terminal.get_terminal_size()[0])
0
1
0
0
2017-04-20T08:26:00.000
4
0.148885
false
43,514,106
0
0
0
2
My Python 3.5.2 output in the terminal (on a mac) is limited to a width of ca. 80px, even if I increase the size of the terminal window. This narrow width causes a bunch of line breaks when outputting long arrays which is really a hassle. How do I tell python to use the full command line window width? For the record, i am not seeing this problem in any other program, for instance my c++ output looks just fine.
Python terminal output width
43,605,633
7
9
8,727
0
python,python-3.x,shell,unix,formatting
For numpy, it turns out you can enable the full output by setting np.set_printoptions(suppress=True,linewidth=np.nan,threshold=np.nan).
0
1
0
0
2017-04-20T08:26:00.000
4
1.2
true
43,514,106
0
0
0
2
My Python 3.5.2 output in the terminal (on a mac) is limited to a width of ca. 80px, even if I increase the size of the terminal window. This narrow width causes a bunch of line breaks when outputting long arrays which is really a hassle. How do I tell python to use the full command line window width? For the record, i am not seeing this problem in any other program, for instance my c++ output looks just fine.
AWS Device Farm- Appium Python - Order of tests
44,378,193
0
1
251
0
pytest,python-appium,aws-device-farm
I work for the AWS Device Farm team. This seems like an old thread, but I will answer so that it is helpful to everyone in the future. Device Farm parses the tests in a random order. In the case of Appium Python, it will be the order received from pytest --collect-only. This order may change across executions. The only way to guarantee an order right now is to wrap all the test calls in a new test which will be the only test called. Although not the prettiest solution, this is the only way to achieve this today. We are working on bringing more parity between your local environment and Device Farm in the coming weeks.
0
1
0
1
2017-04-20T13:12:00.000
1
0
false
43,520,574
0
0
1
1
I'm using Appium-Python with AWS Device Farm, and I noticed that AWS runs my tests in a random order. Since my tests are partly dependent on each other, I need to find a way to tell AWS to run my tests in a specific order. Any ideas about how I can accomplish that? Thanks
15-second idle delay loading Windows native Python module
43,545,428
1
1
50
0
python,windows,cython
This was caused by the Avira antivirus. Disabling its real-time protection fixed the problem. I eventually replaced it with Avast, which so far hasn't given me any trouble.
0
1
0
0
2017-04-21T14:36:00.000
1
1.2
true
43,545,427
1
0
0
1
I'm developing a native Python module (DLL or PYD) on Windows using Cython. Every time I rebuild it, the first time it's loaded blocks for 15 seconds, during which time the CPU and disk are completely idle. Subsequent attempts run normally, until I rebuild the module again. This happens with both the Cygwin and MSYS2 builds of Python.
Importing modules with a Python launch daemon (OSX)
43,718,280
1
1
197
0
python,macos
Have you tried to do a which python to see if the actual Python version used is the one installed through brew (I assume you did a brew install python because of the path under /usr/local)? If the Python executable is not the one under /usr/local then you might be in trouble; take into account that installing through brew won't replace the default system Python.
0
1
0
0
2017-04-22T11:40:00.000
2
0.099668
false
43,558,763
0
0
0
1
Good morning, I'm playing with launch daemons on my Mac running OSX El Capitan. I've made the script in Python that I would like to run when my machine boots (it should snap a picture through the webcam and save it to a directory I specify). I've made the appropriate plist, booted into recovery mode to disable csrutil, and then added the plist to /System/Library/LaunchDaemons. Upon reboot, I do not see any pictures (nor does the green webcam light turn on). I checked the error log for the script and found that the python script throws an error that it cannot import CV2 (ImportError: no module named cv2). However, I do have cv2 installed and it works once the system is booted. My script seems to be able to load other modules (os, datetime, and time) as they are imported before cv2. Is this an additional security feature? Is there a way to work around this? If there is a workaround, will it work even when csrutil is enabled? I don't want to be running around with that disabled, I just disable it to make the necessary changes to the LaunchDaemons directory, and re-enable it after. I did reboot with csrutil disabled and still received the import error, so it doesn't seem to be that (at least as far as I can tell). Thanks! Edit: Some more googling led me to discover that the python path specified in the plist for my daemon was not the one with which openCV was associated. However, a quick echo $PYTHONPATH gives me /usr/local/lib/python2.7/site-packages, which when put in the plist no longer gives an error on startup, but now doesn't seem to execute at all. Also, I've tried changing the directory I write to be /tmp/ since all users have access to that, but still to no avail.
how to run multiple mappers in single node simultaneously
43,590,222
0
1
263
0
python,mapreduce,hadoop2
If you are running mapreduce in local mode (e.g., from Eclipse), it will only run one mapper and one reducer at a time. If you are running it in distributed (or pseudo-distributed) mode (e.g., using the hadoop command from the terminal), it can run with more mappers. Make sure to set the max number of mappers to more than 1 in the configuration files. If you have 4 files and your Mac has at least 4 cores, then you should see at least 4 map tasks running simultaneously.
0
1
0
0
2017-04-22T18:37:00.000
1
0
false
43,563,128
0
0
0
1
I am using Hadoop 2.8.0 on my Mac. I want to run all the mappers simultaneously. I tried forcing more than one split of the input file and using more than one input file, so that multiple mappers are created. They are created, but they run sequentially. I see in the output something like this: starting task ****_m_0 ............... finishing task ****_m_0 starting task ****_m_1 Why do the mappers run one after another? How can I configure it so that they start at once?
launching cassandra cqlsh python not found
62,266,742
1
1
2,108
0
python-2.7,cqlsh,cassandra-2.2
For CentOS 8 and others similarly: install Python 2.7. Then, prior to invoking cqlsh, run: sudo alternatives --set python /usr/bin/python2
0
1
0
0
2017-04-23T19:15:00.000
2
0.099668
false
43,575,436
0
0
0
1
I am trying to install Cassandra version 2.2.0 and I found the compatible Python version for it is 2.7.10, so I installed it. When I type python2.7 --version in the terminal, I get Python 2.7.10. But when I launch the Cassandra server and want to start the Cassandra query language shell by typing root@eman:/usr/local/cassandra# bin/cqlsh I get bin/cqlsh: 19: bin/cqlsh: python: not found. How could I fix this issue? Thanks in advance.
How can i run a compiled python file like a shell script in Unix?
43,591,665
1
0
2,249
0
python-2.7,pyc
Is there a specific reason you're using the .pyc file? Normally, you'd just add a shebang to the top of your script like so: #!/usr/bin/env python, modify permissions (777 is not necessary, 755 or even 744 would work), and run it $ ./file.py
0
1
0
1
2017-04-24T15:00:00.000
1
0.197375
false
43,591,526
0
0
0
1
I have created a compiled Python file. When I execute the file using the python command, it works fine, like below: $ python file.pyc But when I put ./ before the filename (file.pyc), like running a .sh file, it does not work. It throws an error: $ ./file.pyc It has all the privileges (777). Is there any way to execute the test.pyc file like we do with a test.sh file? Regards, Sayantan
How can I set the pythonpath and path of an ipengine (using ipyparallel)?
43,619,292
0
0
175
0
python,python-3.x,environment-variables,ipython-parallel
Eventually, I managed to solve this using a startup script for the ipengines (see ipengine_config.py). The startup script defines the path, pythonpath etc prior to starting each ipengine. However, it is still unclear to me why the same result cannot be achieved by setting these variables prior to starting an ipengine (in the same environment).
0
1
0
1
2017-04-24T20:35:00.000
1
0
false
43,597,253
1
0
0
1
Using Windows / IPython v6.0.0, I am running ipcontroller and a couple of ipengines on a remote host, and all appears to work fine for simple cases. I am trying to adjust the pythonpath on the remote host (where the ipengines run) such that it can locate Python user packages installed on the remote host. For some reason the ipengine does not accept this. I can't figure out where each ipengine gets its pythonpath from. Starting a command prompt, changing the pythonpath and then starting an ipengine in that environment does not help. In fact, this does not seem to apply only to the pythonpath, but also to all other environment variables. All come from somewhere and apparently can't be changed such that the ipengine uses those values. The only option seems to be to add all packages, required binaries etc. to the directory where the ipengine is started from (since that directory is added to the pythonpath). This seems rather crude and not very elegant at all. Am I missing something here?
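The answer above mentions a startup script that fixes up PATH/PYTHONPATH before each engine starts. A sketch of what the body of such a script might contain; the directories are placeholders, and how the script gets registered is left to ipengine_config.py as the answer indicates.
```python
import os
import sys

EXTRA_SITE = r"C:\Users\me\AppData\Roaming\Python\Python36\site-packages"  # assumed
EXTRA_BIN = r"C:\tools\bin"                                                # assumed

# Make user packages importable by this engine process
if EXTRA_SITE not in sys.path:
    sys.path.insert(0, EXTRA_SITE)
# Make binaries and child processes see the same environment
os.environ["PATH"] = EXTRA_BIN + os.pathsep + os.environ.get("PATH", "")
os.environ["PYTHONPATH"] = EXTRA_SITE + os.pathsep + os.environ.get("PYTHONPATH", "")
```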
How to create an online Bottle server accessible from any system?
43,635,182
2
0
63
0
python-3.x,server,bottle
On PythonAnywhere, all you need to do is: Sign up for an account, and log in. Go to the "Web" tab Click the "Add a new web app" button Select "Bottle" Select the Python version you want to use Specify where you want your code files to be ...and then you'll have a bottle server up and running on the Internet, with simple "Hello world" code behind it. You can then change that to do whatever you want.
0
1
1
0
2017-04-25T14:40:00.000
1
0.379949
false
43,613,798
0
0
0
1
The project I'm doing requires a server. But, with Bottle I can create only a localhost server. I want to be able to access it anywhere. What do I use? I know about pythonanywhere.com, but I'm not sure as to how to go about it.
How to capture the microphone buffer raw data?
43,632,432
3
2
1,426
0
python,c++,c,linux,signal-processing
"I'm needing to capture the raw data (every few milliseconds) that the microphone provides" No, you don't. That wouldn't work. Even if you captured that data every millisecond, at exactly a multiple of 1000 microseconds (no jitter), you would have an audio quality that's utterly horrible. A sample frequency of 1000 Hz (once per millisecond) limits the Nyquist frequency to 500 Hz. That's horribly low. "I want to make real time maginitude analysis". Well, you're ignoring the magnitude of components above 500 Hz, which is about 98% of the audible frequencies. "real time fft" - same problem, that too would miss 98%. You can't handle raw audio like that. You must rely on the sound card to do the heavy lifting, to get the timing rights. It can sample sounds every 21 microseconds, with microsecond accuracy. You can talk to the audio card using ALSA or PulseAudio, or a few other options (that's sound on Linux for you). But recommendations there would be off-topic.
0
1
0
1
2017-04-26T10:28:00.000
1
1.2
true
43,631,564
0
0
0
1
I need to capture the raw data (every few milliseconds) that the microphone provides. Preferably in Python, but it can be in C/C++ too. I'm using Linux/macOS. How do I capture the audio wave (microphone input) and what kind of data will it be? Pure bytes? An array with some data? I want to do real-time magnitude analysis and (if the magnitude reaches a determined value) a real-time FFT of the microphone signal, but I don't know the concepts about what data and how much data the microphone provides me. I see a lot of code that is set to capture 44.1kHz audio, but does it capture all this data? Does the portion of data taken depend on how it was programmed?
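The answer above says to let the sound card drive the sampling and to talk to it through an audio library. One possible sketch using PyAudio (the library choice is an assumption; the answer only alludes to "a few other options") that reads fixed-size buffers and does the magnitude/FFT analysis with numpy:
```python
import numpy as np
import pyaudio

RATE = 44100        # samples per second
FRAMES = 1024       # ~23 ms of audio per buffer at 44.1 kHz

pa = pyaudio.PyAudio()
stream = pa.open(format=pyaudio.paInt16, channels=1, rate=RATE,
                 input=True, frames_per_buffer=FRAMES)
try:
    for _ in range(100):                                # ~2.3 s of audio
        raw = stream.read(FRAMES)                       # raw bytes from the card
        samples = np.frombuffer(raw, dtype=np.int16)    # -> int16 array
        magnitude = np.abs(samples).mean()              # crude loudness estimate
        if magnitude > 500:                             # arbitrary threshold
            spectrum = np.abs(np.fft.rfft(samples))     # FFT of this buffer
            peak_hz = np.argmax(spectrum) * RATE / FRAMES
            print("loud buffer, dominant frequency ~%.0f Hz" % peak_hz)
finally:
    stream.stop_stream()
    stream.close()
    pa.terminate()
```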
How to stop/kill Airflow tasks from the UI
53,409,092
6
55
83,312
0
python,hadoop,airflow
As mentioned by Pablo and Jorge, pausing the DAG will not stop the task from being executed if the execution already started. However, there is a way to stop a running task from the UI, but it's a bit hacky. When the task is in the running state, you can click on CLEAR; this will call job.kill(), the task will be set to shut_down and moved to up_for_retry immediately, hence it is stopped. Clearly Airflow did not mean for you to clear tasks in the Running state; however, since Airflow did not disable it either, you can use it as I suggested. Airflow meant CLEAR to be used with failed, up_for_retry etc... Maybe in the future the community will use this bug(?) and implement this as functionality with a "shut down task" button.
0
1
0
0
2017-04-26T10:33:00.000
5
1
false
43,631,693
0
0
0
2
How can I stop/kill a running task on the Airflow UI? I am using LocalExecutor. Even if I use CeleryExecutor, how can I kill/stop the running task?
How to stop/kill Airflow tasks from the UI
50,707,968
11
55
83,312
0
python,hadoop,airflow
from airflow gitter (@villasv) " Not gracefully, no. You can stop a dag (unmark as running) and clear the tasks states or even delete them in the UI. The actual running tasks in the executor won't stop, but might be killed if the executor realizes that it's not in the database anymore. "
0
1
0
0
2017-04-26T10:33:00.000
5
1
false
43,631,693
0
0
0
2
How can I stop/kill a running task on the Airflow UI? I am using LocalExecutor. Even if I use CeleryExecutor, how can I kill/stop the running task?