[{"Question":"I'm using RabbitMQ to manage multiple servers executing long lasting tasks. Each server can listen to one or more queues, but each server should process only one task at a time.\nEach time I start a consumer in a server, I configure it with channel.basic_qos(prefetch_count=1) so that only one tasks is processed for the respective queue.\nSuppose we have:\n - 2 queues: task1, task2.\n - 2 servers: server1, server2.\n - Both servers work with task1 and task2.\nIf the next messages are produced at the same time:\n - messageA for tasks1\n - messageB for tasks2\n - messageC for tasks1\nWhat I expect:\n - messageA gets processed by server1\n - messageB gets processed by server2\n . messageC stays queued until one of the servers is ready (finishes its current task).\nWhat I actually get:\n - messageA gets processed by worker1\n - messageB gets processed by worker2\n - messageC gets processed by worker2 (WRONG)\nI do not start consumers at the same time. In fact, working tasks are constantly being turned on\/off in each server. Most of the time servers work with different queues (server1: tasks1, tasks2, task3; server2: tasks1, tasks5; server3: tasks2, tasks5; and so on).\nHow could I manage to do this?\nEDIT\nBased on Olivier's answer:\nTasks are different. Each server is able to handle some of the tasks, not all of them. A server could process only one task at a time.\nI tried using exchanges with routing_keys, but I found two problems: all of the servers binded to the routing key task_i would process its tasks (I need it to be processed only once), and if there is no server binded to task_i, then its messages are dropped (I need to remain queued until some server can handle it).","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":539,"Q_Id":44116020,"Users Score":0,"Answer":"Seems from the description you provide that the issue is due to your servers connecting to multiple queues at the same time. 
\nAs your prefetch count is set to 1, a server connected to 3 queues will consume up to 3 messages even though he will only be processing one at a time (per your description of processing).\nIt's not clear from your question whether there is a need for multiple queues or whether you could have all tasks end up in a single queue:\n\ndo all the servers consume all the tasks\ndo you need to be able to stop the processing of certain tasks\n\nIf you need\/wish to be able to \"stop\" the processing of certain tasks, or control the distribution of processing throughout your servers, you'll need to manage consumers in your servers to only have one active consumer at a time (otherwise you're going to block\/consume some messages due to prefetch 1).\nIf you do not need to control the processing of the various tasks, would be far simpler to have all of the messages end up in a single queue, and single consumer to that queue setup with prefetch one for each of your servers.","Q_Score":0,"Tags":"python,rabbitmq","A_Id":44122840,"CreationDate":"2017-05-22T14:50:00.000","Title":"RabbitMQ: multiple queues\/one (long) task at a time","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"Goal\nTo track the program real time that user is interacting with.\nExpected output\nTo get the information of current process that user is interacting with.\nWhat I've done\nUsed psutil to list all the process and find the process that uses CPU most. But it resulted in returning python.exe, which was using most CPU because it was counting processes. \nQuestion\nIs there any other way around to do the task without this kind of mistake?\nOr any keyword that I can google for would be nice, too.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":25,"Q_Id":44117162,"Users Score":1,"Answer":"I suppose figuring out why your own app itself is using all the CPU would be your first priority. :) My psychic powers suggest that you are polling the system constantly without sleeping. Did you consider sleeping for a half second after enumerating processes?\nUsing CPU metrics isn't the best way to accomplish what you are after. You didn't mention what OS you are, but if you are on Windows, then you want to be tracking the window in the foreground, as that is what the user is interacting with. By getting the foreground HWND, you can likely map it back to process id and ultimately process name. Not sure about Mac, but I bet there's an equivalent call.","Q_Score":0,"Tags":"python,process","A_Id":44117271,"CreationDate":"2017-05-22T15:50:00.000","Title":"Recognizing active process","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a web service that accepts post requests. A post request specifies a specific job to be executed in the background, that modifies a database used for later analysis. The sender of the request does not care about the result, and only needs to receive a 202 acknowledgment from the web service. \nHow it was implemented so far:\nFlask Web service will get the http request , and add the necessary parameters to the task queue (rq workers), and return back an acknowledgement. 
A separate rq worker process listens on the queue and processes the job.\nWe have now switched to aiohttp, and realized that the web service can now schedule the actual job request in its own event loop, by using the aiohttp.ensure_future() method.\nThis however blurs the lines between the web-server and the task queue. On the positive side, it eliminates the need of having to manage the rq workers.\nIs this considered a good practice?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":381,"Q_Id":44139153,"Users Score":0,"Answer":"If your tasks are not CPU heavy - yes, it is good practice.\nBut if so, then you need to move them to separate service or use run_in_executor(). In other case your aiohttp event loop will be blocked by this tasks and server will not be able to accept new requests.","Q_Score":2,"Tags":"python-3.x,celery,aiohttp","A_Id":51969202,"CreationDate":"2017-05-23T15:30:00.000","Title":"Background processes in the aiohttp event loop","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I am from electrical engineering and currently working on a project using UP-Board, I have attached LEDs, switch, Webcam, USB flash drive with it. I have created an executable script that I want to run at startup. \nwhen I try to run the script in terminal using the code sudo \/etc\/init.d\/testRun start it runs perfectly. Now when I write this command in terminal sudo update-rc.d testRun defaults to register the script to be run at startup it gives me the following error\n\ninsserv: warning: script 'testRun' missing LSB tags and overrides\n\nPlease guide me how to resolve this? I am from Electrical engineering background, so novice in this field of coding. Thanks a lot :)","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":24,"Q_Id":44140535,"Users Score":0,"Answer":"The thing to remember is that you run the script as you but like chron startup does not, so you need to:\n\nEnsure that the executable flags are set for all users and that it is in a directory that everybody has access to.\nUse the absolute path for every thing, including the script.\nSpecify what to run it with, again with the absolute path.","Q_Score":0,"Tags":"python","A_Id":44140659,"CreationDate":"2017-05-23T16:39:00.000","Title":"Error while Registering the script to be run at start-up, how to resolve?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"So I have a filesystem that is downloading some data, storing it in memory, and representing only completed downloads as files to the user. \nHowever, each download may take time to complete, so I don't want the user to have to wait for all the files to finish downloading. The way I do this is by choosing which 'files' to list in readdir. \nWhen I open nautlius to see the files, I only see the first few and have to refresh to see the rest.\nWhen I monitor the inotify activity, I noticed there are no CREATE events for the newly completed downloads. 
What do I need to do to create this notification?","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":212,"Q_Id":44144949,"Users Score":1,"Answer":"Inotify is concerned with calls on vfs level only, if you call fuse operations within a fuse filesystem, inotify would not know about this.","Q_Score":1,"Tags":"python,fuse,inotify","A_Id":44539296,"CreationDate":"2017-05-23T21:01:00.000","Title":"inotify CREATE notification with fusepy","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"So I have a filesystem that is downloading some data, storing it in memory, and representing only completed downloads as files to the user. \nHowever, each download may take time to complete, so I don't want the user to have to wait for all the files to finish downloading. The way I do this is by choosing which 'files' to list in readdir. \nWhen I open nautlius to see the files, I only see the first few and have to refresh to see the rest.\nWhen I monitor the inotify activity, I noticed there are no CREATE events for the newly completed downloads. What do I need to do to create this notification?","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":212,"Q_Id":44144949,"Users Score":0,"Answer":"You need IN_CLOSE_WRITE. From inotify man page:\n\nIN_CLOSE_WRITE (+)\nFile opened for writing was closed.","Q_Score":1,"Tags":"python,fuse,inotify","A_Id":44150850,"CreationDate":"2017-05-23T21:01:00.000","Title":"inotify CREATE notification with fusepy","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"Trying to install cx_Oracle on Solaris11U3 but getting ld: fatal: file \/oracle\/database\/lib\/libclntsh.so: wrong ELF class: ELFCLASS64 error\npython setup.py build\nrunning build\nrunning build_ext\nbuilding 'cx_Oracle' extension\ncc -DNDEBUG -KPIC -DPIC -I\/oracle\/database\/rdbms\/demo -I\/oracle\/database\/rdbms\/public -I\/usr\/include\/python2.7 -c cx_Oracle.c -o build\/temp.solaris-2.11-sun4v.32bit-2.7-11g\/cx_Oracle.o -DBUILD_VERSION=5.2.1\n\"SessionPool.c\", line 202: warning: integer overflow detected: op \"<<\"\ncc -G build\/temp.solaris-2.11-sun4v.32bit-2.7-11g\/cx_Oracle.o -L\/oracle\/database\/lib -L\/usr\/lib -lclntsh -lpython2.7 -o build\/lib.solaris-2.11-sun4v.32bit-2.7-11g\/cx_Oracle.so\nld: fatal: file \/oracle\/database\/lib\/libclntsh.so: wrong ELF class: ELFCLASS64\nerror: command 'cc' failed with exit status 2\nTried all available information on the internet:\nInstalled gcc\nInstalled solarisstudio12.4\nInstalled instantclient-basic-solaris.sparc64-12.2.0.1.0, instantclient-odbc-solaris.sparc64-12.2.0.1.0\nSet LD_LIBRARY_PATH to oracle home directory:instantclient_12_2\/\nSame issue seen while installing DBD:Oracle perl module.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":268,"Q_Id":44155943,"Users Score":0,"Answer":"You cannot mix 32-bit and 64-bit together. Everything (Oracle client, Python, cx_Oracle) must be 32-bit or everything must be 64-bit. 
The error above looks like you are trying to mix a 64-bit Oracle client with a 32-bit Python.","Q_Score":0,"Tags":"python,cx-oracle","A_Id":44171743,"CreationDate":"2017-05-24T10:37:00.000","Title":"Python module cx_Oracle ld installation issue on Solaris11U3 SPARC: fatal: file \/oracle\/database\/lib\/libclntsh.so: wrong ELF class: ELFCLASS64 error","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am using Django 1.2 template tags over webapp framework in Google App engine. I intend to migrate over to Webapp2. I was looking for a way to this in webapp2, but did not find a template engine for webapp2. So, should I continue with webapp's template engine or use pure Django template engine.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":109,"Q_Id":44158025,"Users Score":2,"Answer":"Many people use Jinja templates with webapp2 in GAE, but you can also use Django templates. The two template systems are very similar so it is fairly easy to switch between the two.\nThe template system that you use is quite independent of webapp2. It works like this:\n\nRender your template to get a string representation of your HTML page\nTransmit the string with webapp2\n\nFeel free to use Jinja, Django, or any other templating system. Webapp2 doesn't provide templates because it is not part of its job.","Q_Score":0,"Tags":"python,google-app-engine,django-templates,webapp2","A_Id":44158851,"CreationDate":"2017-05-24T12:08:00.000","Title":"Is there any way to use Django templates in Webapp2 framework in Google App Engine","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I have a python script: \/usr\/bin\/doxypy.py\nI have added #!\/usr\/local\/bin\/python as first line and given full permission to script with chmod 777 \/usr\/bin\/doxypy.py.\nIf I want to run it as a linux command, let's say, I want to run it with only doxypy, is there any way to achieve this?","AnswerCount":3,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":1422,"Q_Id":44167842,"Users Score":2,"Answer":"Yes, rename it to \/usr\/bin\/doxypy","Q_Score":2,"Tags":"python,python-2.7","A_Id":44167881,"CreationDate":"2017-05-24T20:24:00.000","Title":"How can I run python script as linux command","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a python script: \/usr\/bin\/doxypy.py\nI have added #!\/usr\/local\/bin\/python as first line and given full permission to script with chmod 777 \/usr\/bin\/doxypy.py.\nIf I want to run it as a linux command, let's say, I want to run it with only doxypy, is there any way to achieve this?","AnswerCount":3,"Available Count":2,"Score":0.0665680765,"is_accepted":false,"ViewCount":1422,"Q_Id":44167842,"Users Score":1,"Answer":"Rename the file to doxpy and put it in a folder of $PATH, e.g. 
\/usr\/bin","Q_Score":2,"Tags":"python,python-2.7","A_Id":44167906,"CreationDate":"2017-05-24T20:24:00.000","Title":"How can I run python script as linux command","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I want to understand this difference between interpreted languages and compiled languages. Lot of explanation is found online and I understand all of them.\nBut question is, softwares are distributed as exe(on windows) as final products.\nSo if I write my program in Python\/Java and compile it to exe, would that work as fast as if I had written and compiled in C\/C++?","AnswerCount":3,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":791,"Q_Id":44173237,"Users Score":1,"Answer":"In practice no.\nThe languages C\/C++ were written as a better assembler. Their underlying operations have been designed to have a good fit with the processors of the 1970.\nSubsequently, processors have been driven to run quickly, and so they have been designed around instructions which can make C\/C++ faster.\nThis close linking with the semantics of the language and the silicon, has given a headstart to the C\/C++ community.\nAn example of the C\/C++ advantage, is how simple types and objects can be created on the stack. The stack is implemented as a processor stack, and objects will only be valid whilst their callstack is current. \nJava\/python implement all their objects on the free-store, having lambdas and closures which stretch the lifespan of their objects beyond the call-stack which create them. The free-store is a more-costly way of creating objects, and is one of the penalties the language take.\nJIT compiling the java\/python bytecode, can make up some of the difference, and (theoretically) beat the performance of the C\/C++ code.\nWhen JIT compiled, a language is compiled based on the processor on the box (with possibly better features than when the code was written), with knowledge of the exact data that is being used with the code. This means the Jit compiler is tuned to the exact usage of the code. 
rather than a compiler's best guess.","Q_Score":1,"Tags":"python,c++,compilation,interpreted-language","A_Id":44173717,"CreationDate":"2017-05-25T05:51:00.000","Title":"Does Python\/Java program work as fast as C if I convert both of them into exe?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"Where is the python installation location specified for the Python interpreter in Zeppelin?\nHow do I change the python installation location for the Python interpreter to a new python installation directory in Zeppelin?\nHelp would be much appreciated.\nThanks.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":81,"Q_Id":44213136,"Users Score":1,"Answer":"It worked.\nI set \"zeppelin.python\" to \/mnt\/reference\/softwares\/anaconda27\/bin\/python in the %python interpreter section.","Q_Score":0,"Tags":"python,apache-zeppelin","A_Id":44213558,"CreationDate":"2017-05-27T05:02:00.000","Title":"How to change the python installation location for Python interpreter in Zeppelin?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm running MacOS Sierra 10.12.4 and I've realized that homebrew python was upgraded to version 2.7.13. How can I switch back to 2.7.10?","AnswerCount":5,"Available Count":1,"Score":-0.0399786803,"is_accepted":false,"ViewCount":47780,"Q_Id":44217507,"Users Score":-1,"Answer":"This is not a direct answer to the question; rather, it explains a solution to avoid touching the system python. \nThe general idea is that you should always install an independent python for your projects. Each project needs its own python version (for compatibility reasons with libraries), and it's not practical to keep one python version and try to make it work with multiple projects. \nI assume this issue in your system happened because another project required a higher python version, and now for your other project you need a lower python version. \nThe best way to handle python versions is to use virtualenv. \nEach project will have its own python, so you can have projects that work with python 2.7 and python 3 and they never touch each other's dependencies. \nInstall different python versions with homebrew, and then for each project, when you create its virtualenv, you decide which python to pick. Every time you work with that project, the python version will be the one you picked when you created the virtualenv.","Q_Score":18,"Tags":"python,macos,homebrew","A_Id":44484184,"CreationDate":"2017-05-27T13:36:00.000","Title":"MacOS: How to downgrade homebrew Python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am trying to add the xmltodict package to Python on Linux.\nI have 2 installations of Python on my Linux build; Python 2.7 (default) and Python 3.5 (in the form of an Anaconda installation). I want to add xmltodict to the Python 3 installation, but when I use sudo apt-get install python-xmltodict it adds it to the default Python 2.7 installation. 
\nHow can I get apt to add this package to my Python 3 installation without changing the default or using pip? I do not want to have to rebuild my installation with a virtual environment either","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":138,"Q_Id":44241532,"Users Score":1,"Answer":"if you want to do it with pip or easy-install, you can run pip2\/pip2.7 and install it. It will be installed for Python which was used to run pip...","Q_Score":0,"Tags":"python,linux,apt-get,xmltodict","A_Id":44242318,"CreationDate":"2017-05-29T11:34:00.000","Title":"Add package to specific install of Python in Linux","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am trying to use cv2.VideoCapture to capture image from a docker container.\nimport cv2\nvid = cv2.VideoCapture('path\\to\\video')\nret, frame = vid.read()\nIn terms of the video file,\nI have tried \neither mount the file with docker -v\nor docker cp to copy the video file into container,\nbut both with no luck (ret returns False).\nShould I add any command when launching the container?\nThanks in advance.","AnswerCount":1,"Available Count":1,"Score":-0.1973753202,"is_accepted":false,"ViewCount":1491,"Q_Id":44242760,"Users Score":-1,"Answer":"There might be 2 problems:\n1) In your container opencv is not installed properly. To check that do print(ret, frame). If they come (false, None). Then opencv has not installed properly. \n2) The file you are using is corrupted. To check that try to copy any image file (jpg) into the container and use cv2.imread to read the image and then print it. If numpy array comes then while copying your file is not getting corrupted. \nThe best option is pull any opencv + python image and then create a container with it. The better option is use dockerfiles of these images to create the container.","Q_Score":2,"Tags":"python,opencv,docker,video-capture","A_Id":44807718,"CreationDate":"2017-05-29T12:40:00.000","Title":"cv2.VideoCapture doesn't work within docker container","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have couple of file with same name, and I wanted to get the latest file \n[root@xxx tmp]# ls -t1 abclog*\nabclog.1957\nabclog.1830\nabclog.1799\nabclog.1742\nI can accomplish that by executing below command.\n[root@xxx tmp]# ls -t1 abclog*| head -n 1\nabclog.1957\nBut when I am trying to execute the same in python , getting error :\n\n\n\nsubprocess.check_output(\"ls -t1 abclog* | head -n 1\",shell=True)\n ls: cannot access abclog*: No such file or directory\n ''\n\n\n\nSeems it does not able to recognize '*' as a special parameter. How can I achieve the same ?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":260,"Q_Id":44243853,"Users Score":0,"Answer":"Make sure you execute this in the directory where the files exist. 
If you just fire up Idle to run this code, you will not be in that directory.","Q_Score":0,"Tags":"python,shell,subprocess","A_Id":44243964,"CreationDate":"2017-05-29T13:35:00.000","Title":"how to use subprocess.check_output() in Python to list any file name as \"abc*\"","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have been struggling with how to pass requisite parameters to snakebite utility for it to be able to access a kerberized cluster. I have tried setting the necessary conf dir in the \/usr\/local\/etc\/hadoop path, as well as initialising and getting a ticket using kinit.\nAny help or working example in this regards would be greatly appreciated.\nNOTE: I have tested that the environment setup is proper by using 'hadoop' CLI to access the cluster from the same machine.","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":387,"Q_Id":44268109,"Users Score":1,"Answer":"You must kinit with proper keytab is enough. It will take automatically by the principal name to fetch results.","Q_Score":0,"Tags":"python,snakebite","A_Id":63066582,"CreationDate":"2017-05-30T17:09:00.000","Title":"How to access kerberized cluster using snakebite python client","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am currently using a angular4 projected generated by angular-CLI to create the project structure and I was able to serve it using ng-serve and develop and see changes. Now I would like to move it to being hosted on my own backend, using google app engine and webApp2 and run it using dev_appserver.py app.yaml. Currently the only way I can get it to work is by doing an ng-build and serving the files out of the dist folder. I would like to make it so I can easily make changes but not have to wait for it to rebuild each time.","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":468,"Q_Id":44271174,"Users Score":1,"Answer":"You can use enviroment on angular in order to point you python rest service and another enviroment to production.\nExample:\nenviroment.ts\n\nexport const environment = {\n production: false,\n urlServices: 'http:\/\/190.52.112.41:8075'\n};\n\nenviroment.prod.ts\n\nexport const environment = {\n production: true,\n urlServices: 'http:\/\/localhost:8080'\n};\n\nIn this way , you dont need to compile to test you aplication because angular always will be pointed to you python app.","Q_Score":0,"Tags":"python,angular,google-app-engine,angular-cli,google-app-engine-python","A_Id":44271728,"CreationDate":"2017-05-30T20:17:00.000","Title":"Angular4 and WebApp2 on google App engine","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"Greeting all!\nI'm learning Google's Python class on my own. I have one quick question about the Baby Names Exercise.\nIn the py file provided, there are lines like this:\nif not args:\n print 'usage: [--summaryfile] file [file ...]'\n sys.exit(1)\nI can understand it wants to show you what to type in cmd when using the code. 
However the format of \"[--summaryfile] file [file ...]\" confuses me. What does the square brackets in \"[--summaryfile]\" and \"[file ...]\" mean? Lists? Or something else? What should I type in the Windows cmd when running the code? Some examples would be very helpful.\nThank you very much in advance!","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":241,"Q_Id":44295930,"Users Score":0,"Answer":"In command line application documentation, square brackets usually denote optional arguments (that goes beyond Python).","Q_Score":0,"Tags":"python","A_Id":44295964,"CreationDate":"2017-05-31T22:58:00.000","Title":"Python usage: [--summaryfile] file [file ...]","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"Greeting all!\nI'm learning Google's Python class on my own. I have one quick question about the Baby Names Exercise.\nIn the py file provided, there are lines like this:\nif not args:\n print 'usage: [--summaryfile] file [file ...]'\n sys.exit(1)\nI can understand it wants to show you what to type in cmd when using the code. However the format of \"[--summaryfile] file [file ...]\" confuses me. What does the square brackets in \"[--summaryfile]\" and \"[file ...]\" mean? Lists? Or something else? What should I type in the Windows cmd when running the code? Some examples would be very helpful.\nThank you very much in advance!","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":241,"Q_Id":44295930,"Users Score":1,"Answer":"It means that those parts are optional. You can supply or omit the option name --summaryfile. Then you supply a list of as many files as you want.\nThis is ancient command description syntax, going back to the days of IBM mainframe operating systems in the 1960s. Most documentation assumes that you've seen (or can find) a description somewhere, and most supply it in the front of the global reference manual ... wherever that might be.","Q_Score":0,"Tags":"python","A_Id":44295966,"CreationDate":"2017-05-31T22:58:00.000","Title":"Python usage: [--summaryfile] file [file ...]","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"When I try to open .py files with IDLE (right click and \"edit with IDLE\"), Windows will ask whether to use python.exe to open this file. If I choose python, IDLE won't start properly.\nIf I use \"open with\" and navigate to Python27\\Lib\\idlelib\\idle.bat I'll get an error:\n\nWindows cannot find 'C:\\Python27\\Lib\\idlelib....\\pythonw.exe\n\nIf I start IDLE from start menu, Windows will open the installer and say \"Please wait while Windows configures Python 2.7.13 (64-bit)\" and if I have the Python installation file available, it will say \"Error writing to file C:\\Python27\\pythonw.exe\".\nStarting IDLE from CMD (eg. >>python.exe C:\\Python27\\Lib\\idlelib\\idle.py) works normally.\nI don't see pythonw.exe under C:\\Python27\\. Should it be there and how would it delete itself?\nI'm using Windows 10 and this issue started a while back. 
I did a complete reinstall of python and packages and the issue was fixed for a short while, but it returned today.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1122,"Q_Id":44305806,"Users Score":3,"Answer":"Okay, I've installed Process Monitor and set up filters to monitor the Python folder. Turns out Avast has been busy deleting pythonw.exefor a couple of months now.\nIt happened sparsely in the beginning, but last 12 deletes were from the past few days.\nFIXED","Q_Score":2,"Tags":"python,windows,python-2.7,windows-10,python-idle","A_Id":44307874,"CreationDate":"2017-06-01T11:03:00.000","Title":"Python IDLE doesn't work because python.exe is missing","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have windows-7 64 bit machine and Python3.6.1(32-bit) installed on it. I wanted to try spyder as IDE for python. I don't have Anaconda or anything like that. So, I installed spyder from command line (cmd.exe) and it did install successfully and prompt returned. \nI think it is installed because \n\nI can see spyder3.exe under C:\\Users\\UserName\\AppData\\Local\\Programs\\Python\\Python36-32\\Scripts\nFrom cmd.exe when I enter spyder3 it doesn't throw any error and a rotating circle appears which indicates something is processing. But nothing is launched.\nAfter running spyder3 from cmd.exe though nothing gets launched except the rotating circle for couple of seconds, I see spyder.lock folder under C:\\Users\\UserName\\.spyder-py3\nWhen I delete spyder.lock folder under C:\\Users\\UserName\\.spyder-py3 and run spyder3 again in cmd.exe the folder is created again.\n\nQuestion: How can I make spyder launch? Did I do something wrong while installing spyder or am I trying to launch it with an incorrect method?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":983,"Q_Id":44317175,"Users Score":3,"Answer":"I had to install PyQt5. It is a requirement for spyder on windows and I was missing it. After I installed it I can launch spyder3 from cmd.exe","Q_Score":1,"Tags":"windows,python-3.x,cmd","A_Id":44373756,"CreationDate":"2017-06-01T21:14:00.000","Title":"Issue in launching spyder that was installed using pip install on Python 3.6.1","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I want to start using the RIDE tool for automation but I am currently unable to launch it from the command prompt. The following are what I have installed.\n\nPython 2.7.11 (32bit) \nWx Python 2.8.12.1(unicode) for Python 2.7 \nrobotframework 3.0.2 (pip installed)\nrobotframework-ride 1.5.2.1 (pip installed)\n\nWhen I launch ride.py from cmd, it opens up a word file which has the same ride.py which is installed in the C:\\Python27\\Scripts folder.\nThe same setup works on a different machine. 
I don't understand why, on this machine, it opens up a Word document instead of launching RIDE.","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1827,"Q_Id":44335270,"Users Score":0,"Answer":"Do a right click on ride.py and choose python as the default program.","Q_Score":1,"Tags":"python-2.7,robotframework","A_Id":44404547,"CreationDate":"2017-06-02T18:24:00.000","Title":"Unable to Launch RIDE from command prompt","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have 3 devices connected to my computer and I want to run pytest in parallel on each of them.\nIs there any way to do it, either with pytest or with adb?\nThanks in advance.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":264,"Q_Id":44343975,"Users Score":0,"Answer":"I think you have to use Selenium Grid for running on multiple devices.","Q_Score":1,"Tags":"python,parallel-processing,automated-tests,adb,pytest","A_Id":70539100,"CreationDate":"2017-06-03T12:34:00.000","Title":"Running Pytest on Multiple Devices in Parallel","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a situation where if I run Apache with wsgi (now uninstalled), a test website works, but running the same server with runserver 0.0.0.0:8080 gives ERR_CONNECTION_REFUSED from local or remote (even with the apache2 service stopped).\nEdit: I don't think it's Apache; I've reproduced the problem on a clean server with no Apache installed, so unless Apache somehow modified something under source control it's not that.\nMy knowledge of web details is hazy, and I don't even know where to troubleshoot this problem - the devserver runs (runserver prints as expected and doesn't give any errors) but never receives a request, and I have nothing in iptables.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":246,"Q_Id":44358803,"Users Score":0,"Answer":"Sorry for anyone who read this; it would probably have been impossible to solve given my supplied information.\nWhat had actually happened was that I'd been having to modify my wsgi.py script in order to make it happy inside the Apache server, and I'd added a line which said \"os.system('\/bin\/bash --rcfile )\" to try and make sure that when running inside apache it got the virtualenv activated.\nThis line must have been causing some strange problem; another symptom was that I realised that when I was running \"runserver\", it wasn't crashing: the python process was backgrounding itself, where normally it runs inside that console window.\nThanks to everyone who asked questions helping me debug!","Q_Score":0,"Tags":"python,django,apache,wsgi","A_Id":44366981,"CreationDate":"2017-06-04T21:10:00.000","Title":"django devserver can't connect","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I want to get the eta of a celery task each time with a GET request. There is no direct api in celery to get a task's scheduled time (except inspect(), but that seems very costly to me).\nHow can I manage the eta of a particular task? 
The downside of storing eta time in Django model is not consistent ( either i couldnt store taks_id because i can't - dont know how get eta from task_id)\nI see on one question that there is no api, cause it somehow depends on brokers and etc. But i hope that there is some solution\nSo what's the best way manage task_id to get eta? \nBackend and broker is redis","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":429,"Q_Id":44367961,"Users Score":0,"Answer":"I don't think there is a magic way to do this.\nWhat I do in my app is just log the execution time for each task and return that as an ETA. If you wanted to get a little more accurate you could also factor in the redis queue size and the task consumption rate.","Q_Score":0,"Tags":"python,redis,celery,estimation","A_Id":44961359,"CreationDate":"2017-06-05T11:28:00.000","Title":"Celery best way manage\/get eta of task","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I have a few django micro-services.\nTheir main workload is constant background processes, not request handling.\nThe background processes constantly use Django's ORM, and since I needed to hack a few things for it to work properly (it did for quite a while), now I have problems with the DB connection, since Django is not really built for using DB connections a lot in the background I guess...\nCelery is always suggested in these cases, but before switching the entire design, I want to know if it really is a good solution.\nCan celery tasks (a lot of tasks, time-consuming tasks) use Django's ORM in the background without problems?","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":1506,"Q_Id":44368133,"Users Score":0,"Answer":"Celery was originally written specifically as an offline task processor for Django, and although it was later generalised to deal with any Python code it still works perfectly well with Django.\nHow many tasks there are and how long they take is pretty much irrelevant to the choice of technology; each Celery worker runs as a separate process, so the limiting resource will be your server capacity.","Q_Score":0,"Tags":"python,django,celery","A_Id":44368455,"CreationDate":"2017-06-05T11:38:00.000","Title":"Use Django's ORM in Celery","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I have a few django micro-services.\nTheir main workload is constant background processes, not request handling.\nThe background processes constantly use Django's ORM, and since I needed to hack a few things for it to work properly (it did for quite a while), now I have problems with the DB connection, since Django is not really built for using DB connections a lot in the background I guess...\nCelery is always suggested in these cases, but before switching the entire design, I want to know if it really is a good solution.\nCan celery tasks (a lot of tasks, time-consuming tasks) use Django's ORM in the background without problems?","AnswerCount":3,"Available Count":2,"Score":0.1973753202,"is_accepted":false,"ViewCount":1506,"Q_Id":44368133,"Users Score":3,"Answer":"Can celery tasks (a lot of tasks, time-consuming tasks) use Django's ORM in the background without 
problems?\n\nYes, depending on your definition of \u201cproblems\u201d :-)\nMore seriously: The Django ORM performance will be mostly limited by the performance characteristics of the underlying database engine.\nIf your chosen database engine is PostgreSQL, for example, you will be able to handle a high volume of concurrent connections.","Q_Score":0,"Tags":"python,django,celery","A_Id":44382660,"CreationDate":"2017-06-05T11:38:00.000","Title":"Use Django's ORM in Celery","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I would like to know whether I could return a pdf as a response to Google App Engine Endpoints for Python.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":40,"Q_Id":44404237,"Users Score":0,"Answer":"Responses from Google Cloud Endpoints APIs must be of content-type application\/json. You could bundle the binary data of the PDF in a JSON value by encoding it with base64.","Q_Score":0,"Tags":"python,google-app-engine,google-cloud-endpoints","A_Id":44442742,"CreationDate":"2017-06-07T05:30:00.000","Title":"What are the supported return types for Google App Engine Endpoints for Python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"Why we add 2 directory path in windows environment variableC:\\Python27\\;C:\\Python27\\Scripts\\ ?\nIs C:\\Python27\\ is not sufficient ?\nAny helpful answer will be appreciated !","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":54,"Q_Id":44407686,"Users Score":1,"Answer":"sys does not do a recursive lookup because that is wasteful. Explicitly specify the complete path to ONLY the directories\/sub-directories you want to include in your PATH, and python will only look for modules to import there.","Q_Score":1,"Tags":"python,python-2.7,python-3.x","A_Id":44407755,"CreationDate":"2017-06-07T08:40:00.000","Title":"Why we add 2 directories path in Python in Windows environment variable","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am developing a django application which handles lots of file uploads from multiple clients periodically. Each file is around 1 to 10 megabytes.\nSince uploads are thread blocking I can only serve a number of requests equivalent to the number of uwsgi workers\/processes (4 in my case).\nWhat should I do to increase throughput?\nIs it advisable to increase number of processes\/workers in uwsgi? \nWhat should be the upper limit? \nIs there any other solution that I can use in this scenario?\nStack: django+uwsgi+nginx running on amazon ec2 and s3 buckets used for storing zip files.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":700,"Q_Id":44411632,"Users Score":0,"Answer":"What should be the upper limit?\n\nThat depends on your hardware such as quantity of core in your server\n\nIs there any other solution that I can use in this scenario?\n\nConsider using Celery\/rabbitMQ. 
Celery could be used to process asynchronously file or upload to S3 and notify the events to rabbitMQ","Q_Score":1,"Tags":"python,django,amazon-s3,uwsgi","A_Id":44416796,"CreationDate":"2017-06-07T11:40:00.000","Title":"Django - Handling several file upload requests at a time?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I am trying to understand how docker is useful outside of the webapp space.\nIf for example someone wants to run a python script which downloads global weather data every 12 hours, why would they use docker?\nWhat is the advantage of using docker to Linux LXC\/LXD containers?\nI am struggling to understand the benefits of using Docker.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":2216,"Q_Id":44416700,"Users Score":10,"Answer":"If for example someone wants to run a python script which downloads global weather data every 12 hours, why would they use docker?\n\nI wouldn't, in this case. set up a cron job to run the script.\n\nWhat is the advantage of using docker to Linux LXC\/LXD containers?\n\nDocker was originally built on top of LXC containers. Since then, it has moved to a newer standard, libcontainer.\nThe major benefit here, is cross-platform compatibility with a much larger ecosystem.\nThe world of linux containers with lxc probably still has a place, but Docker is quickly bringing containers to everyone and not just linux users.\n\nI am struggling to understand the benefits of using Docker.\n\nfor me, the big advantage i see in docker is in my development efforts. i no longer have to worry about older projects that require older runtime libraries and dependencies. it's all encapsulated in docker.\nthen there's the production scaling and deployment story. with the community and user base around docker, there are simple solutions for nearly every scenario - from one server deployments, to auto-scaling and netflix level stuff that i'll never get near.\n\nI'm just finding it difficult to understand Docker outside of a webapp server context\n\nthink slightly more broadly to any app or process that runs continuously, providing an API or service for other applications to consume. it's typically web based services, yes, but any TCP\/IP or UDP enabled process should be able to work.\ndatabase systems, cache systems, key-value stores, web servers... anything with an always running process that provides an API over TCP\/IP or UDP.\nthe big benefit here is encapsulating the service and all of it's runtime dependencies, like i was saying before.\nneed to run MongoDB 2.3 and 3.2 on your server? no problem. they are both in separate containers, can both run independently.\nwant to run mysql for this app, and mongo for that app? done.\ncontainerization is powerful in helping keep apps separate from each other, and in helping to reduce the \"works on my machine\" problem.","Q_Score":7,"Tags":"python,docker,containers,lxc,lxd","A_Id":44417108,"CreationDate":"2017-06-07T15:18:00.000","Title":"How is docker useful for non webapp applications (e.g. Python scripts)? 
What is the advantage of using it over LXC\/LXD?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"Having the command\ngit --no-pager log -m --first-parent --no-renames --reverse --name-status --pretty=oneline --full-index\n\nis there any way to also get the blob hash for each file at that particular commit, next to the \"name status\"?\nThe command is used in a deployment pipeline for some huge repositories, so whatever the solution, I aim at keeping it fast, meaning: not spawning new processes.\nIf not possible, an acceptable approach would be to use a python library \/ binding. If you think that's the best approach, then please point to some key API calls which I'd need.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":426,"Q_Id":44457336,"Users Score":1,"Answer":"If I remove --name-status and add --raw, I see a format where each individual blob has a before... after... hash.","Q_Score":0,"Tags":"python,git","A_Id":44457468,"CreationDate":"2017-06-09T12:02:00.000","Title":"git log show file blob id","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I was researching on whether or not Python can replace Bash for shell scripting purposes. I have seen that Python can execute Linux commands using subprocess.call() or os.system(). But I've read somewhere (forgot the link of the article) that using these is a bad thing. Is this really true? \nIf yes, then why is it a bad thing? \nIf not, then is it safe to say that Python can indeed replace Bash for scripting since I could just execute Linux commands using either of the 2 function calls?\nNote: If I'm not mistaken, os.system() is deprecated and subprocess.call() should be used instead but that is not the main point of the question.","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":3744,"Q_Id":44459737,"Users Score":1,"Answer":"In general it is not a bad thing to create another process from your own process.\nPeople do this constantly on the bash.\nHowever, one always should ask oneself what is the best environment to do the task you need to do.\nFor instance I could easily call a python script to cut (the linux tool) a column from a file. However, the overhead to first open the python interpreter, then save the output from cut, and then save that again is possibly higher than checking how to use the bash-tool with man.\nHowever, collecting output from another \"serious\" program to do further calculations on that output, yes, you can do that nicely with subprocesses (though I would opt for storing that output in a file and then just read in the file if I need to rerun my script).\nAnd this is where launching a subprocess may get tricky: depending on how you open a new subprocess, you can not rely anymore on environment variables.\nEspecially when dealing with large input data, the output from the subprocess does not get piped further and therefore is collected in memory until the program finished, which might lead into a memory problem.\nTo put it short: if using python solves your problem faster than combining bash-only tools, sure, do it. If that involves launching serious subprocesses, ok. 
However, if you want to replace bash with python, do not do that.","Q_Score":6,"Tags":"python,linux,scripting","A_Id":44460033,"CreationDate":"2017-06-09T13:58:00.000","Title":"Is it bad to use subprocess.call() or os.system() when writing Python shell scripts?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I wrote !cat array.txt , and the result was: \n\nERROR: 'cat' is not recognized as an internal or external command,\n operable program or batch file.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":464,"Q_Id":44461622,"Users Score":0,"Answer":"Try this command\n%cat array.txt","Q_Score":0,"Tags":"python,jupyter","A_Id":53287898,"CreationDate":"2017-06-09T15:30:00.000","Title":"Jupyter notebook is not able to execute unix command (i.e. ls,pwd etc)?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I want to write a Python script to automatically unzip lots of apk files and then do static analysis. However, when I unzip some apk files, unzip prompted that \"Press 'Q' to quit, or any other key to continue\". \nBecause it's a script and I haven't press any key then the script hangs. Any command option can solve this problem? Or do I have to handle it in Python? Thanks in advance :D","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":292,"Q_Id":44479568,"Users Score":0,"Answer":"I just ran into the same thing and figured out what's causing it. Turns out, if a zip file has a zip comment attached, it will be shown, along with a prompt that hangs your script.\nPassing -q to unzip will avoid showing the comment and any hangs, though you will lose the list of files being unzipped too. 
As for the unzip binary itself: I haven't figured out how to prevent just the comment from showing up and not the rest of the stuff unzip prints.","Q_Score":1,"Tags":"python,unzip","A_Id":52304543,"CreationDate":"2017-06-11T01:25:00.000","Title":"Unzip prompted \"Press 'Q' to quit, or any other key to continue\"","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"Tried to \nconda install -c conda-forge requests-futures=0.9.7\nbut failed with\nconda is not recognized as an internal or external command.\nC:\\Users\\user_name\\Anaconda3\\Scripts has been set in Path in the environment variables, under both user and System variables.\nI installed Python 3.5 as well and it is on Path. I am using Win10 X64.\nHow do I fix the issue?","AnswerCount":5,"Available Count":2,"Score":0.0798297691,"is_accepted":false,"ViewCount":39426,"Q_Id":44488349,"Users Score":2,"Answer":"I had a similar problem when using cmd.\nFrom your command prompt, go to 'C:\\Users\\zkdur\\anaconda3\\Scripts'.\nNow try\nconda init --help\nconda init --verbose\nAfter that, restart your command prompt and conda will be working.","Q_Score":9,"Tags":"python,python-3.x,anaconda,conda","A_Id":68669748,"CreationDate":"2017-06-11T20:16:00.000","Title":"Windows 10 conda is not recognized as an internal or external command","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"Tried to \nconda install -c conda-forge requests-futures=0.9.7\nbut failed with\nconda is not recognized as an internal or external command.\nC:\\Users\\user_name\\Anaconda3\\Scripts has been set in Path in the environment variables, under both user and System variables.\nI installed Python 3.5 as well and it is on Path. I am using Win10 X64.\nHow do I fix the issue?","AnswerCount":5,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":39426,"Q_Id":44488349,"Users Score":0,"Answer":"After installing Anaconda on Windows 10, you can use the Anaconda prompt from the start menu to activate a conda-enabled terminal window.","Q_Score":9,"Tags":"python,python-3.x,anaconda,conda","A_Id":69712123,"CreationDate":"2017-06-11T20:16:00.000","Title":"Windows 10 conda is not recognized as an internal or external command","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"Whenever I use python manage.py runserver on Windows PowerShell it causes Python to crash: a \"Python has stopped working\" window pops up. Any idea why?\nI've tried rebooting, which causes a different type of error:\n\npython.exe: can't open file '.\\maanage.py': [Errno 2] No such file or directory.\n\nI created another project and tried runserver, and again it caused the first error. All installation commands ran smoothly, but why am I facing this error?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":39,"Q_Id":44508145,"Users Score":0,"Answer":"Make sure you're spelling the file name correctly. 
\nmaanage.py != manage.py","Q_Score":0,"Tags":"python,django","A_Id":44508252,"CreationDate":"2017-06-12T20:25:00.000","Title":"Python localhost server","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm using python Tornado to perform asynchronous requests to crawl certain websites and one of the things I want to know is if a URL results in a redirect or what its initial status code is (301, 302, 200, etc.). However, right now I can't figure out a way to find that information out with a Tornado response. I know a requests response object has a history attribute which records the redirect history; is there something similar for Tornado?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":65,"Q_Id":44508737,"Users Score":0,"Answer":"Tornado's HTTP clients do not currently provide this information. Instead, you can pass follow_redirects=False and handle and record redirects yourself.","Q_Score":0,"Tags":"python,tornado","A_Id":44511338,"CreationDate":"2017-06-12T21:08:00.000","Title":"Is there a way to get the redirect history from a Tornado response?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"How can I read files within a python module while executing in docker? I have a python module which I import in my code. Normally, in order to fetch the relative path of the module, one can do <>.__path__. However, this approach does not work in Docker but works locally.\nIs there a common way in which I can read the files from the module in Docker as well as locally?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":789,"Q_Id":44510246,"Users Score":1,"Answer":"There are a few things to consider:\n\nWhere is python installed in the container and what version? Does that match your dev environment? \nLook at your dockerfile - what is your working directory? Did you set one? Perhaps you are running your python code from one directory, but trying to import a module from another. \nIs your PYTHONPATH set in your container? \nHave you installed the modules in the container that you're attempting to use? Perhaps with a requirements.txt file or manually? If so, are you executing your python code with the same python version\/path that you installed the modules with? \nAre you using a virtual environment? Has it been sourced?\nWhat user is your container running as? Does it have access to the python modules? You may need to chown the site-packages path or run as a different user or root.","Q_Score":0,"Tags":"python,docker,dockerfile,docker-swarm","A_Id":44510805,"CreationDate":"2017-06-12T23:38:00.000","Title":"how to read files from a python module inside docker","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a python application that's run inside a virtualenv on CentOS. This application needs a python library that's distributed and installed as an rpm. When the application runs I just get \nno module named ....
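One quick way to see why such an import fails is to inspect the interpreter's module search path inside and outside the virtualenv (a minimal sketch; the point is that an rpm typically installs into the system site-packages, which an isolated virtualenv does not include):

import sys
import pprint

# Run this both inside and outside the virtualenv: if the system
# site-packages directory is missing from the in-venv output, the
# rpm-installed module cannot be found there.
pprint.pprint(sys.path)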
\nI've verified that the rpm is installed correctly, and I've also installed the rpm in the site-packages directory of the virtualenv but that didn't help. What is the correct way to install an rpm so that an application running in a virtual environment has access to it?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":599,"Q_Id":44513019,"Users Score":3,"Answer":"By default, virtual environments don't have access to modules in the global site-packages. You either need to allow such access (toggleglobalsitepackages in virtualenvwrapper) or recreate your virtualenv allowing such access with the --system-site-packages option.","Q_Score":0,"Tags":"python,centos,virtualenv,rpm","A_Id":44519576,"CreationDate":"2017-06-13T05:29:00.000","Title":"Application can't find python library installed as rpm","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"So I tried to download pip inside a docker container by first copying the installation file via\n\ndocker cp get-pip.py dock:get-pip.py\n\nand then I went into the container\n\ndocker exec -it 58 bash \n\nI then tried to run python get-pip.py and I got the following error.\n\nRetrying (Retry(total=4, connect=None, read=None, redirect=None)) after connection broken by 'NewConnectionError(': Failed to establish a new connection: [Errno -3] Temporary failure in name resolution',)': \/simple\/pip\/","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1840,"Q_Id":44530305,"Users Score":1,"Answer":"The problem was that the Docker process had problems connecting to the internet, so the manual pip installation failed.\nSolution Process:\n\nRestart the docker process. (Not working) \nRestart the computer. (problem solved)","Q_Score":1,"Tags":"python,docker,pip,containers","A_Id":44545311,"CreationDate":"2017-06-13T19:49:00.000","Title":"Error Installing pip Inside Docker Container","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm a complete noob in Python, so please forgive this question if it's completely stupid. I had Canopy 1.4.7 installed on my system, working with Python 2.7. I just upgraded to Canopy 2.1.2, with Python 3.5.\nI'd been using Rodeo 2.5.2 as my environment. It worked like a charm with 1.4.7, but since the upgrade, I haven't been able to get it to work. All I get is a message saying \"Unable to execute.\" The Rodeo terminal then has to be restarted.\nAs a matter of fact, any code input doesn't work. I tried to put code into the Rodeo terminal; it doesn't even register the input. I can't press \"Enter,\" nothing happens. I tried to install a package; nothing happened. I've tried reinstalling both Canopy and Rodeo, but to no effect. I've also tried turning it off and on again (thanks, Roy). Mind you, I tried the same codes in the Canopy environment, and they worked fine. So I'm assuming it's an issue in Rodeo.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":942,"Q_Id":44532112,"Users Score":0,"Answer":"I got this error attempting to run a source file having more than 500 lines (code, comments and empty lines) in Rodeo 2.5.2. I am not sure what the actual maximum
number of lines allowed is (if any), but removing some comments and thus reducing the total number of lines to 514 allowed that file to run.","Q_Score":4,"Tags":"python,python-2.7,python-3.x,rodeo","A_Id":50547525,"CreationDate":"2017-06-13T21:46:00.000","Title":"\"Unable to execute\" message in Rodeo with Python 3.5","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a standard appengine app currently running. I am currently developing another flask app which will use the flexible runtime. I am wondering whether it is possible to have both apps in the same project?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":460,"Q_Id":44539192,"Users Score":0,"Answer":"There is an even easier way to do this that doesn't require creating a separate service :)\nSince you are only developing your second application, you do not need to set it as the default version to serve traffic to the public. You only need to access it yourself (or perhaps whoever you give the URL to). The only drawback really is that you will not be able to access your project at the default url, usually .appspot.com\nTherefore, you can simply deploy your flexible app engine project, but make sure you don't give it any traffic sharing or promote it to the main version!! You can then access your version simply by clicking on the version in the google cloud console, or by visiting http:\/\/-dot-.appspot.com\/ (It was not possible to access individual versions like this for flexible for quite some time, but this has now been resolved.)","Q_Score":0,"Tags":"google-app-engine,google-cloud-platform,google-app-engine-python,app-engine-flexible","A_Id":44711051,"CreationDate":"2017-06-14T08:15:00.000","Title":"Is it possible to have both appengine flexible and standard in a single project","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"Which time measurement is used for the timeout by the Python 3 subprocess module on UNIX\/Linux OSes?\nUNIX-like OSes report 3 different times for process execution: real, user, and system. Even with processes that will be alive for only a few milliseconds, the real time is often several hundred percent longer than the user and system time.\nI'm making calls using subprocess.call() and subprocess.check_output() with the timeout set to a quarter of a second for processes that the time utility reports taking 2-18 milliseconds for the various times reported.
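A minimal sketch of such a call, which also demonstrates which clock the timeout tracks (assuming a Unix system with a sleep binary on the PATH):

import subprocess

# sleep consumes almost no user or system CPU time, yet the call still
# times out, so the limit must be wall-clock (real) time.
try:
    subprocess.call(["sleep", "60"], timeout=0.25)
except subprocess.TimeoutExpired:
    print("timed out on wall-clock time")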
There is no problem and my enquiry is purely out of interest.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":56,"Q_Id":44550304,"Users Score":1,"Answer":"This is wall-clock time (real), not time spent in either userland (user) or the kernel (system).\nYou can test this yourself by running a process such as sleep 60, which uses almost no user or system time at all, and observing that it still times out.","Q_Score":0,"Tags":"python-3.x,subprocess","A_Id":44550449,"CreationDate":"2017-06-14T16:39:00.000","Title":"Which 'time' is used for the timeout by the subprocess module on UNIX\/Linux OSes?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a self-installed python in my user directory in a corporate UNIX SUSE computer (no sudo privilege):\nwhich python\n\/bin\/python\/Python-3.6.1\/python\nI have an executable (chmod 777) sample.py file with this line at the top of the file:\n#!\/bin\/python\/Python-3.6.1\/python\nI can execute the file like this:\npython sample.py\nBut when I run it by itself I get an error:\n\/full\/path\/sample.py\n \/full\/path\/sample.py: Command not found\nI have no idea why it's not working. I'm discombobulated as to what might be going wrong since the file is executable, the python path is correct, and the file executes if I put a python command in the front. What am I missing?\nEDIT:\nI tried putting this on top of the file:\n#!\/usr\/bin\/env python\nNow, I get this error:\n: No such file or directory\nI tried this to make sure my env is correct:\nwhich env\n \/usr\/bin\/env\nEDIT2:\nYes, I can run the script fine using the shebang command like this:\n\/bin\/python\/Python-3.6.1\/python \/full\/path\/sample.py","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1040,"Q_Id":44555995,"Users Score":2,"Answer":"Your file has DOS line endings (CR+LF). It works if you run python sample.py but doesn't work if you run .\/sample.py. Recode the file so it has Unix line endings (pure LF at the end of every line).","Q_Score":1,"Tags":"python,linux,unix","A_Id":44556361,"CreationDate":"2017-06-14T22:59:00.000","Title":"Executable .py file with shebang path to which python gives error, command not found","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I created a .exe file using Py2exe on Windows 10 but when I try to run it on a Windows 7 computer it says that the os version is wrong. \nCan anyone tell me how to fix this? (like using another Python or Py2exe version or setting a specific configuration inside setup.py)","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":850,"Q_Id":44577583,"Users Score":0,"Answer":"I solved the problem myself and I'm going to share the answer in case someone ever has the same problem.
I just had to download a 32-bit version of Canopy (with Python 2.7) and py2exe in order for them to work on Windows 7.","Q_Score":5,"Tags":"python,windows,py2exe","A_Id":44632191,"CreationDate":"2017-06-15T21:49:00.000","Title":"Py2exe - Can't run a .exe created on Windows 10 with a Windows 7 computer","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am using CentOS 7.2 and trying to provision a VM in Azure through Ansible using the module \"azure_rm_virtualmachine\" and getting the error \"No module named packaging.version\". Below is my error: \nTraceback (most recent call last):\n File \"\/tmp\/ansible_7aeFMQ\/ansible_module_azure_rm_virtualmachine.py\", line 445, in \n from ansible.module_utils.azure_rm_common import *\n File \"\/tmp\/ansible_7aeFMQ\/ansible_modlib.zip\/ansible\/module_utils\/azure_rm_common.py\", line 29, in \nImportError: No module named packaging.version\nfatal: [localhost]: FAILED! => {\n \"changed\": false,\n \"failed\": true,\n \"module_stderr\": \"Traceback (most recent call last):\\n File \\\"\/tmp\/ansible_7aeFMQ\/ansible_module_azure_rm_virtualmachine.py\\\", line 445, in \\n from ansible.module_utils.azure_rm_common import *\\n File \\\"\/tmp\/ansible_7aeFMQ\/ansible_modlib.zip\/ansible\/module_utils\/azure_rm_common.py\\\", line 29, in \\nImportError: No module named packaging.version\\n\",\n \"module_stdout\": \"\",\n \"msg\": \"MODULE FAILURE\",\n \"rc\": 0\n}\nBelow is my playbook; I am using Ansible version 2.3.0.0, Python 2.7.5 and pip 9.0.1 \n\nname: Provision new VM in azure\nhosts: localhost\nconnection: local\ntasks:\n\nname: Create VM\nazure_rm_virtualmachine:\nresource_group: xyz\nname: ScriptVM\nvm_size: Standard_D1\nadmin_username: xxxx\nadmin_password: xxxx\nimage:\noffer: CentOS\npublisher: Rogue Wave Software\nsku: '7.2'\nversion: latest\n\n\nI am running the playbook from the Ansible host and I tried to create a resource group through Ansible, but I get the same error, \"No module named packaging.version\".","AnswerCount":2,"Available Count":2,"Score":1.0,"is_accepted":false,"ViewCount":4088,"Q_Id":44583740,"Users Score":6,"Answer":"The above error occurs because your environment doesn't have the packaging module.\nSolve this issue by installing the packaging module:\npip install packaging\nThe above command will install version 16.8 of the packaging module","Q_Score":4,"Tags":"python-2.7,azure,ansible","A_Id":45960185,"CreationDate":"2017-06-16T07:49:00.000","Title":"No module named packaging.version for Ansible VM provisioning in Azure","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am using CentOS 7.2 and trying to provision a VM in Azure through Ansible using the module \"azure_rm_virtualmachine\" and getting the error \"No module named packaging.version\". Below is my error: \nTraceback (most recent call last):\n File \"\/tmp\/ansible_7aeFMQ\/ansible_module_azure_rm_virtualmachine.py\", line 445, in \n from ansible.module_utils.azure_rm_common import *\n File \"\/tmp\/ansible_7aeFMQ\/ansible_modlib.zip\/ansible\/module_utils\/azure_rm_common.py\", line 29, in \nImportError: No module named packaging.version\nfatal: [localhost]: FAILED!
=> {\n \"changed\": false,\n \"failed\": true,\n \"module_stderr\": \"Traceback (most recent call last):\\n File \\\"\/tmp\/ansible_7aeFMQ\/ansible_module_azure_rm_virtualmachine.py\\\", line 445, in \\n from ansible.module_utils.azure_rm_common import *\\n File \\\"\/tmp\/ansible_7aeFMQ\/ansible_modlib.zip\/ansible\/module_utils\/azure_rm_common.py\\\", line 29, in \\nImportError: No module named packaging.version\\n\",\n \"module_stdout\": \"\",\n \"msg\": \"MODULE FAILURE\",\n \"rc\": 0\n}\nBelow is my playbook and I am using a ansible version 2.3.0.0 and python version of 2.7.5 and pip 9.0.1 \n\nname: Provision new VM in azure\nhosts: localhost\nconnection: local\ntasks:\n\nname: Create VM\nazure_rm_virtualmachine:\nresource_group: xyz\nname: ScriptVM\nvm_size: Standard_D1\nadmin_username: xxxx\nadmin_password: xxxx\nimage:\noffer: CentOS\npublisher: Rogue Wave Software\nsku: '7.2'\nversion: latest\n\n\nI am running the playbook from the ansible host and I tried to create a resource group through ansible but I get the same error as \"No module named packaging.version\" .","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":4088,"Q_Id":44583740,"Users Score":0,"Answer":"You may try this, it solved for me\n\nsudo pip install -U pip setuptools\n\nFYI: My ENVs are \n\nUbuntu 16.04.2 LTS on Windows Subsystem for Linux (Windows 10 bash)\nPython 2.7.12\npip 9.0.1\nansible 2.3.1.0\nazure-cli (2.0.12)","Q_Score":4,"Tags":"python-2.7,azure,ansible","A_Id":45527155,"CreationDate":"2017-06-16T07:49:00.000","Title":"No module named packaging.version for Ansible VM provisioning in Azure","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"windows power shell or cmd uses anaconda python instead of the default windows installation\nhow to make them use the default python installation?\nmy os is win 8.1\npython 3.6\nanaconda python 3.6","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1228,"Q_Id":44586049,"Users Score":0,"Answer":"Set the environment path variable of your default python interpreter in system properties.\nor if this doesn't work do:\nin cmd C:\\Python27\\python.exe yourfilename.py\nin the command first part is your interpreter location and second is your file name","Q_Score":2,"Tags":"python,anaconda,default","A_Id":44586285,"CreationDate":"2017-06-16T09:43:00.000","Title":"How to change cmd python from anaconda to default python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm on Docker for past weeks and I can say I love it and I get the idea. But what I can't figure out is how can I \"transfer\" my current set-up on Docker solution. I guess I'm not the only one and here is what I mean.\nI'm Python guys, more specifically Django. So I usually have this:\n\nDebian installation\nMy app on the server (from git repo).\nVirtualenv with all the app dependencies\nSupervisor that handles Gunicorn that runs my Django app.\n\nThe thing is when I want to upgrade and\/or restart the app (I use fabric for these tasks) I connect to the server, navigate to the app folder, run git pull, restart the supervisor task that handles Gunicorn which reloads my app. 
Boom, done.\nBut what is the right (better, more Docker-ish) approach to modify this setup when I use Docker? Should I connect to docker image bash somehow everytime I want upgrade the app and run the upgrade or (from what I saw) should I like expose the app into folder out-of docker image and run the standard upgrade process?\nHope you get the confusion of old school dude. I bet Docker guys were thinking about that.\nCheers!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":284,"Q_Id":44591775,"Users Score":3,"Answer":"For development, docker users will typically mount a folder from their build directory into the container at the same location the Dockerfile would otherwise COPY it. This allows for rapid development where at most you need to bounce the container rather than rebuild the image.\nFor production, you want to include everything in the image and not change it, only persistent data goes in the volumes, your code is in the image. When you make a change to the code, you build a new image and replace the running container in production.\nLogging into the container and manually updating things is something I only do to test while developing the Dockerfile, not to manage a developing application.","Q_Score":1,"Tags":"python,django,git,docker","A_Id":44591974,"CreationDate":"2017-06-16T14:27:00.000","Title":"Docker vs old approach (supervisor, git, your project)","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I read that message queue is preferred over subprocess.Popen(). It is said that message queue is scalable solution. I want to understand how is it so.\nI just want to list the benefits of message queue over subeprocess.Popen() so that I can convince my superiors to use message queue instead of subprocess","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1304,"Q_Id":44592972,"Users Score":4,"Answer":"These are completely separate and different things.\nsubprocess.Popen() simply spawns (by calling fork and exec) new OS process for a specific command you passed to it. So it's perfect for cases when you need something to execute in separate process and (optionally) get result of execution (in somewhat awkward way, via pipe).\nQueues (like Celery or ActiveJob) gives you two main things:\n\nStorage (more precisely, an interface to some existing storage, like PostgreSQL or MongoDB) for your tasks (or messages), that are going to be serialized automatically before going to that storage.\nWorkers that polling this storage and actually perform those tasks (deserializing them before performing, also automatically).\n\nSo, it's possible to have a lot of workers, even maybe in distributed environment. It gives you not only a vertical scalability but a horizontal one (by keeping your workers on separate machines).\nOn the other hand, queues are more suited for asynchronous processing (i.e. 
for jobs that need to be executed later and you don't need results right now) and are more heavyweight than simple process spawning.\nSo, if you have simple one-off jobs just to be executed somewhere outside your main process - use processes.\nIf you have a bunch of different jobs that are needed to be executed asynchronously and you want to have an ability to scale that process, you should use queues, they'll make life easier.","Q_Score":1,"Tags":"python,django,celery,django-celery","A_Id":44593395,"CreationDate":"2017-06-16T15:29:00.000","Title":"python subprocess.Popen() vs message queue (celery)","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I would like to get the unix file type of a file specified by path (find out whether it is a regular file, a named pipe, a block device, ...)\nI found in the docs os.stat(path).st_type but in Python 3.6, this seems not to work.\nAnother approach is to use os.DirEntry objects (e. g. by os.listdir(path)), but there are only methods is_dir(), is_file() and is_symlink().\nAny ideas how to do it?","AnswerCount":3,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1711,"Q_Id":44595736,"Users Score":1,"Answer":"Python 3.6 has pathlib and its Path objects have methods:\n\nis_dir()\nis_file()\nis_symlink()\nis_socket()\nis_fifo()\nis_block_device()\nis_char_device()\n\npathlib takes a bit to get used to (at least for me having come to Python from C\/C++ on Unix), but it is a nice library","Q_Score":6,"Tags":"python,unix,operating-system","A_Id":44595853,"CreationDate":"2017-06-16T18:24:00.000","Title":"Get unix file type with Python os module","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I downloaded Python 3.6 from Python's website (from the download page for Windows) and it seems only the interpreter is available. I don't see anything else (Standard Library or something) in my system. Is it included in the interpreter and hidden or something?\nI tried to install ibm_db 2.0.7 as an extension of Python DB API, but the installation instructions seem too old. The paths defined don't exist in my Win10.\nBy the way, I installed the latest Platform SDK as instructed (which is the predecessor of Windows 10 SDK, so I had to to install Windows 10 SDK instead). I also installed .NET SDK V1.1, which is told to include Visual C++ 2003 (Visual C++ 2003 is not available today on its own). I considered to install VS2017, but because it was too big (12.some GB) I passed on that option.\nI am stuck and can't proceed for I don't know which changes happened from the point the installation instructions have been written and what else I need to do. How can I install python with the ibm_db package on Windows 10?","AnswerCount":4,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":4969,"Q_Id":44610150,"Users Score":0,"Answer":"ibm_db does not support Python 3.6 you need to downgrade to Python 3.5.4. 
That is how i did it.","Q_Score":1,"Tags":"python,windows,db2","A_Id":47904208,"CreationDate":"2017-06-17T22:54:00.000","Title":"How to install Python libraries\/packages and ibm_db 2.0.7 installation instructions?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I downloaded Python 3.6 from Python's website (from the download page for Windows) and it seems only the interpreter is available. I don't see anything else (Standard Library or something) in my system. Is it included in the interpreter and hidden or something?\nI tried to install ibm_db 2.0.7 as an extension of Python DB API, but the installation instructions seem too old. The paths defined don't exist in my Win10.\nBy the way, I installed the latest Platform SDK as instructed (which is the predecessor of Windows 10 SDK, so I had to to install Windows 10 SDK instead). I also installed .NET SDK V1.1, which is told to include Visual C++ 2003 (Visual C++ 2003 is not available today on its own). I considered to install VS2017, but because it was too big (12.some GB) I passed on that option.\nI am stuck and can't proceed for I don't know which changes happened from the point the installation instructions have been written and what else I need to do. How can I install python with the ibm_db package on Windows 10?","AnswerCount":4,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":4969,"Q_Id":44610150,"Users Score":0,"Answer":"for python libraries you can always use pip, which is a python package installer.\nif you are using python 3.6 then you already have pip. just make sure you're using pip from the main command line not the python terminal","Q_Score":1,"Tags":"python,windows,db2","A_Id":44611663,"CreationDate":"2017-06-17T22:54:00.000","Title":"How to install Python libraries\/packages and ibm_db 2.0.7 installation instructions?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have looked for Python 2.7.12 in my apps and docs but I can't find it... \nI'm using a macbook pro.\nI can see Python 3.6 in my applications so I don't know why the terminal isn't referring to this one. \nI want to get started learning django but I don't think it will be possible if I don't use Python 3.5 or later. \nis there a way to tell the terminal to use 3.6 instead?","AnswerCount":4,"Available Count":4,"Score":0.0,"is_accepted":false,"ViewCount":2694,"Q_Id":44615007,"Users Score":0,"Answer":". ~\/.bashrc\nalias python='\/usr\/bin\/python3.4'","Q_Score":0,"Tags":"python,django,macos,terminal,installation","A_Id":44615048,"CreationDate":"2017-06-18T12:33:00.000","Title":"Downloaded Python 3.6 but terminal is still saying I'm using python 2.7.12","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":1},{"Question":"I have looked for Python 2.7.12 in my apps and docs but I can't find it... \nI'm using a macbook pro.\nI can see Python 3.6 in my applications so I don't know why the terminal isn't referring to this one. 
\nI want to get started learning django but I don't think it will be possible if I don't use Python 3.5 or later. \nis there a way to tell the terminal to use 3.6 instead?","AnswerCount":4,"Available Count":4,"Score":0.049958375,"is_accepted":false,"ViewCount":2694,"Q_Id":44615007,"Users Score":1,"Answer":"Open the text editor like nano , vim or gedit and open the .bashrc file ,\nnano ~\/.bashrc\nand create the bash alias,\nTo do so add the following line into the .bashrc file:\nalias python='\/usr\/bin\/python3.6'\nSave the file and re-open the terminal.\nEdit:\nSimilarly, if you don't want to create the direct alias.\nAs @exprator suggested above you can also use python command for python 2 and python3 to use Python 3 version","Q_Score":0,"Tags":"python,django,macos,terminal,installation","A_Id":44615199,"CreationDate":"2017-06-18T12:33:00.000","Title":"Downloaded Python 3.6 but terminal is still saying I'm using python 2.7.12","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":1},{"Question":"I have looked for Python 2.7.12 in my apps and docs but I can't find it... \nI'm using a macbook pro.\nI can see Python 3.6 in my applications so I don't know why the terminal isn't referring to this one. \nI want to get started learning django but I don't think it will be possible if I don't use Python 3.5 or later. \nis there a way to tell the terminal to use 3.6 instead?","AnswerCount":4,"Available Count":4,"Score":0.0,"is_accepted":false,"ViewCount":2694,"Q_Id":44615007,"Users Score":0,"Answer":"By the way, you shouldn't use the default environment to develop. Instead, you have to use Virtualenv","Q_Score":0,"Tags":"python,django,macos,terminal,installation","A_Id":44618123,"CreationDate":"2017-06-18T12:33:00.000","Title":"Downloaded Python 3.6 but terminal is still saying I'm using python 2.7.12","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":1},{"Question":"I have looked for Python 2.7.12 in my apps and docs but I can't find it... \nI'm using a macbook pro.\nI can see Python 3.6 in my applications so I don't know why the terminal isn't referring to this one. \nI want to get started learning django but I don't think it will be possible if I don't use Python 3.5 or later. \nis there a way to tell the terminal to use 3.6 instead?","AnswerCount":4,"Available Count":4,"Score":0.049958375,"is_accepted":false,"ViewCount":2694,"Q_Id":44615007,"Users Score":1,"Answer":"Just use python in terminal for python 2.7 and type python3 to use python 3.6 when you need","Q_Score":0,"Tags":"python,django,macos,terminal,installation","A_Id":44615243,"CreationDate":"2017-06-18T12:33:00.000","Title":"Downloaded Python 3.6 but terminal is still saying I'm using python 2.7.12","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":1},{"Question":"So I have written a script that scrapes betting data from an odds aggregating site and outputs everything into a CSV. My script works perfectly, however, I can only run it from within Spyder. Whenever I double click the PY file a terminal opens up and closes quickly. 
After messing around with it for a while I also discovered that I can run it through the command line. \nI have the program\/script line pointing to my python3:\nC:\\Users\\path\\AppData\\Local\\Continuum\\Anaconda3\\python.exe\nAnd my argument line points to the script\n\\networkname\\path\\moneylineScraper.py\nBest case scenario I would like to be able to run this script through task scheduler, but I also cannot even run it when I double click the Py file. Any help would be appreciated!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":203,"Q_Id":44631802,"Users Score":0,"Answer":"An alternative is, to create a bat file and then execute it.\nA new bat file:\n-- change directory to that of python script file.\n-- using full path execute python with script file as argument.\n-- End the batch file.\nMake that bat file executable with sufficient previlages and execute that.","Q_Score":3,"Tags":"python,python-2.7,python-3.x,csv,scheduled-tasks","A_Id":45740149,"CreationDate":"2017-06-19T13:23:00.000","Title":"Able to run Python3 Script through IDE, Command Line, but not by double clicking or Task Scheduler","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm using google cloudSQL for applying advance search on people data to fetch the list of users. In datastore, there are data already stored there with 2 model. First is used to track current data of users and other model is used to track historical timeline. The current data is stored on google cloudSQL are more than millions rows for all users. Now I want to implement advance search on historical data including between dates by adding all history data to cloud. \nIf anyone can suggest the better structure for this historical model as I've gone through many of the links and articles. But cannot find proper solution as I have to take care of the performance for search (In Current search, the time is taken to fetch result is normal but when history is fetched, It'll scan all the records which causes slowdown of queries because of complex JOINs as needed). The query that is used to fetch the data from cloudSQL are made dynamically based on the users' need. For example, A user want the employees list whose manager is \"xyz.123@abc.in\" , by using python code, the query will built accordingly. Now a user want to find users whose manager WAS \"xyz.123@abc.in\" with effectiveFrom 2016-05-02 to 2017-01-01.\nAs I've find some of the usecases for structure as below:\n1) Same model as current structure with new column flag for isCurrentData (status of data whether it is history or active)\nDisadv.:\n - queries slowdown while fetching data as it will scan all records.\n Duplication of data might increase.\nThese all disadv. will affect the performance of advance search by increasing time.\n Solution to this problem is to partition whole table into diff tables.\n2) Partition based on year.\n As time passes, this will generate too many tables.\n3) 2 tables might be maintained.\n 1st for current data and second one for history. 
But when user want to search data on both models will create complexity of build query.\nSo, need suggestions for structuring historical timeline with improved performance and effective data handling.\nThanks in advance.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":81,"Q_Id":44654127,"Users Score":0,"Answer":"@Kevin Malachowski : Thanks for guiding me with your info and questions as It gave me new way of thinking. \nHistorical data records will be more than 0.3-0.5 million(maximum). Now I'll use BigQuery for historical advance search.\nFor live data-cloudSQL will be used as we must focus on perfomance for fetched data.\nSome of performance issue will be there for historical search, when a user wants both results from live as well as historical data. (BigQuery is taking time near about 5-6 sec[or more] for worst case) But it will be optimized as per data and structure of the model.","Q_Score":1,"Tags":"python,google-app-engine,google-cloud-sql","A_Id":44715500,"CreationDate":"2017-06-20T13:14:00.000","Title":"Google CloudSQL : structuring history data on cloudSQL","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I'm using google cloudSQL for applying advance search on people data to fetch the list of users. In datastore, there are data already stored there with 2 model. First is used to track current data of users and other model is used to track historical timeline. The current data is stored on google cloudSQL are more than millions rows for all users. Now I want to implement advance search on historical data including between dates by adding all history data to cloud. \nIf anyone can suggest the better structure for this historical model as I've gone through many of the links and articles. But cannot find proper solution as I have to take care of the performance for search (In Current search, the time is taken to fetch result is normal but when history is fetched, It'll scan all the records which causes slowdown of queries because of complex JOINs as needed). The query that is used to fetch the data from cloudSQL are made dynamically based on the users' need. For example, A user want the employees list whose manager is \"xyz.123@abc.in\" , by using python code, the query will built accordingly. Now a user want to find users whose manager WAS \"xyz.123@abc.in\" with effectiveFrom 2016-05-02 to 2017-01-01.\nAs I've find some of the usecases for structure as below:\n1) Same model as current structure with new column flag for isCurrentData (status of data whether it is history or active)\nDisadv.:\n - queries slowdown while fetching data as it will scan all records.\n Duplication of data might increase.\nThese all disadv. will affect the performance of advance search by increasing time.\n Solution to this problem is to partition whole table into diff tables.\n2) Partition based on year.\n As time passes, this will generate too many tables.\n3) 2 tables might be maintained.\n 1st for current data and second one for history. 
But when a user wants to search data across both models, building the query becomes complex.\nSo I need suggestions for structuring the historical timeline with improved performance and effective data handling.\nThanks in advance.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":81,"Q_Id":44654127,"Users Score":0,"Answer":"Depending on how often you want to do live queries vs historical queries and the size of your data set, you might want to consider placing the historical data elsewhere.\nFor example, if you need quick queries for live data and do many of them, but can handle higher-latency queries and only execute them sometimes, you might consider periodically exporting data to Google BigQuery. BigQuery can be useful for searching a large corpus of data but has much higher latency and doesn't have a wire protocol that is MySQL-compatible (although its query language will look familiar to those who know any flavor of SQL). In addition, while for Cloud SQL you pay for data storage and the amount of time your database is running, in BigQuery you mostly pay for data storage and the amount of data scanned during your query executions. Therefore, if you plan on executing many of these historical queries it may get a little expensive.\nAlso, if you don't have a very large data set, BigQuery may be a bit of overkill. How large is your \"live\" data set and how large do you expect your \"historical\" data set to grow over time? Is it possible to just increase the size of the Cloud SQL instance as the historical data grows until the point at which it makes sense to start exporting to BigQuery?","Q_Score":1,"Tags":"python,google-app-engine,google-cloud-sql","A_Id":44662852,"CreationDate":"2017-06-20T13:14:00.000","Title":"Google CloudSQL : structuring history data on cloudSQL","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"Is there a way to install the python documentation that would make it available as if it was a manpage? (I know you can download the sourcefiles for the documentation and read them in vim, using less or whatever, but I was thinking about something a bit less manual. Don't want to roll my own.)","AnswerCount":7,"Available Count":2,"Score":1.0,"is_accepted":false,"ViewCount":17218,"Q_Id":44697036,"Users Score":10,"Answer":"On Debian (and derived distributions, like Ubuntu) install the pydoc package. Then you can use the pydoc whatever command.","Q_Score":12,"Tags":"python,python-3.x,python-2.7","A_Id":44697197,"CreationDate":"2017-06-22T10:39:00.000","Title":"Reading python documentation in the terminal?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"Is there a way to install the python documentation that would make it available as if it was a manpage? (I know you can download the sourcefiles for the documentation and read them in vim, using less or whatever, but I was thinking about something a bit less manual. Don't want to roll my own.)","AnswerCount":7,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":17218,"Q_Id":44697036,"Users Score":0,"Answer":"I will answer since I'm not satisfied with the accepted answer.
Probably because I don't use IDLE.\nNote that I use Ubuntu (terminal).\nProbably in other OSs it works the same.\nIs there a way to install that? No need.\nI found that it comes by default.\nHow to access it?\nUse the help() command in the Python shell.\n\nIn the shell, type the command help().\nNow that you're in the help utility, enter anything that you want to read the documentation for. Press q to quit the documentation and type quit to quit the help utility.","Q_Score":12,"Tags":"python,python-3.x,python-2.7","A_Id":72360631,"CreationDate":"2017-06-22T10:39:00.000","Title":"Reading python documentation in the terminal?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I edited my dispatch.yaml and deployed it on App Engine using\n appcfg.py update_dispatch .\nBut when I go and see the source code under StackDriver debug, I don't see the change.\nWhy don't the changes get reflected? When I deploy the complete app with appcfg.py update . the changes do get reflected.\nBut if I only want to update the dispatch file, how do I do that?","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":306,"Q_Id":44698229,"Users Score":1,"Answer":"Try\n\ngcloud app deploy dispatch.yaml\n\n...to connect services to dispatch rules.","Q_Score":0,"Tags":"python,google-app-engine","A_Id":62299368,"CreationDate":"2017-06-22T11:33:00.000","Title":"dispatch.yaml not getting updated","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I'm working on a Flask API, where one of the endpoints receives a message and publishes it to PubSub. Currently, in order to test that endpoint, I have to manually spin up a PubSub emulator from the command line and keep it running during the test. It works just fine, but it isn't ideal for automated testing. \nI wonder if anyone knows a way to spin up a test PubSub emulator from Python? Or if anyone has a better solution for testing such an API?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":768,"Q_Id":44708430,"Users Score":0,"Answer":"This is how I usually do it:\n1. I create a python client class which does publish and subscribe with the topic, project and subscription used in the emulator.\nNote: You need to set PUBSUB_EMULATOR_HOST=localhost:8085 as an env in your python project.\n2. I spin up a pubsub-emulator as a docker container.\nNote: You need to set some envs, mount volumes and expose port 8085.\nSet the following envs for the container:\n\nPUBSUB_EMULATOR_HOST\nPUBSUB_PROJECT_ID\nPUBSUB_TOPIC_ID\nPUBSUB_SUBSCRIPTION_ID\n\n\nWrite whatever integration tests you want to. Use the publisher or subscriber from the client depending on your test requirements.","Q_Score":1,"Tags":"python,google-cloud-platform,google-cloud-pubsub","A_Id":64239668,"CreationDate":"2017-06-22T20:03:00.000","Title":"How to boot up a test pubsub emulator from python for automated testing","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I am trying to install pip for python in Windows 7.
I installed it and I added \"C:\\PythonXX\\Scripts\" to the windows path variables. But when I type \"pip\" in the command prompt, it says that pip is not recognized as an internal or external command.\nIs there any way to fix this problem?","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":6397,"Q_Id":44711255,"Users Score":0,"Answer":"If you have already installed Python (of whatever version), you can skip ahead to step 4 or step 6.\n\nDownload and install Python; the default installation is c:\\python27\nCreate a new System Variable with the name PYTHON_HOME and the value c:\\Python27 (or whatever your installation path was)\nFind the system variable called Path and click Edit\nAdd the following text to the end of the Variable value:;%PYTHON_HOME%\\;%PYTHON_HOME%\\Scripts\\\nVerify a successful environment variable update by opening a new command prompt window (important!) and typing python from any location\nDownload get-pip.py to a folder on your computer. Open a command prompt window and navigate to the folder containing get-pip.py. Then run python get-pip.py. This will install pip.\nVerify your pip installation: open a command prompt and type 'pip freeze' (without quotation marks); if it shows something like\nantiorm==1.1.1\nenum34==1.0\nrequests==2.3.0\nvirtualenv==1.11.6\nthen you are successful.\nIf the above steps failed, then update the environment variable:\ngo to Control Panel\\System and Security\\System,\nselect Advanced system settings, then select Environment Variables and add c:\\python27\\scripts to the Path variable, and it will be fine.\nI have tested it successfully on my PC.","Q_Score":0,"Tags":"python,pip","A_Id":44713819,"CreationDate":"2017-06-23T00:26:00.000","Title":"Error found when installing pip on Windows","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am trying to install pip for python in Windows 7. I installed it and I added \"C:\\PythonXX\\Scripts\" to the windows path variables. But when I type \"pip\" in the command prompt, it says that pip is not recognized as an internal or external command.\nIs there any way to fix this problem?","AnswerCount":2,"Available Count":2,"Score":-0.0996679946,"is_accepted":false,"ViewCount":6397,"Q_Id":44711255,"Users Score":-1,"Answer":"I know it's a hassle to install pip in Windows.
With latest Python you don't need to install pip, it's now prebuilt and you can access it by python -m pip","Q_Score":0,"Tags":"python,pip","A_Id":44713170,"CreationDate":"2017-06-23T00:26:00.000","Title":"Error found when installing pip on Windows","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am trying to install mysqlclient, but I get this error message:\n\n_mysql.c:40:20: fatal error: Python.h: No such file or directory\n compilation terminated.\n error: command 'x86_64-linux-gnu-gcc' failed with exit status 1\n\nCould anyone help me to resolve this?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1400,"Q_Id":44720075,"Users Score":3,"Answer":"You need to install the python development package (which contains the C headers files) for your OS (on debian-based distros it's named 'pythonX.X-dev' where 'X.X' is python version).","Q_Score":3,"Tags":"python","A_Id":44720171,"CreationDate":"2017-06-23T11:14:00.000","Title":"fatal error: Python.h: No such file or directory error: command 'x86_64-linux-gnu-gcc' failed with exit status 1","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"There are a lot of threads from Python users on Windows who lose the \"Edit with IDLE\" option on the context menu (right click on a .py file in the File Explorer). I do have the menu item, but most of the time it appears to do nothing. \nChecking the running applications and processes in Task Manager reveals nothing, except I think IDLE or its launcher or something runs very briefly, so quickly it usually never shows up in the Task Manager list. \nThanks to all who suggested splitting this into question and answer.\nMy solution (for now) will be posted next","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":152,"Q_Id":44733040,"Users Score":0,"Answer":"Eventually it became clear that it was launching IDLE, but IDLE was exiting as soon as it got to the point of waiting for user input. Windows CMD scripts do that sometimes when they aren't feeling conversational.\nI found the relevant script at F:\\Python35\\Lib\\idlelib\\Idle.bat. If you add a \"\/W\" switch to the command, it waits for input. Problem solved, at least so far. The entire Idle.bat file now reads as follows. The only change was inserting \"\/W\" in the main command line:\n@echo off\nrem Start IDLE using the appropriate Python interpreter\nset CURRDIR=%~dp0\nstart \"IDLE\" \"%CURRDIR%....\\pythonw.exe\" \/W \"%CURRDIR%idle.pyw\" %1 %2 %3 %4 %5 %6 %7 %8 %9","Q_Score":0,"Tags":"python,windows-7,contextmenu,python-idle","A_Id":44738281,"CreationDate":"2017-06-24T05:19:00.000","Title":"Python's \"EDIT with IDLE\" runs but exits immediately (Win7)","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I installed Python 3.6.1 on my Ubuntu 16 server and cannot find the install location. I have looked in \/usr\/bin and there are reference to all other versions except 3.6.1. 
Where can I find the executable for this version?","AnswerCount":2,"Available Count":2,"Score":0.2913126125,"is_accepted":false,"ViewCount":6396,"Q_Id":44734164,"Users Score":3,"Answer":"In your terminal, type : which python3","Q_Score":3,"Tags":"python-3.x,installation,ubuntu-16.04","A_Id":44744454,"CreationDate":"2017-06-24T07:46:00.000","Title":"Python 3.6.1 install location","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I installed Python 3.6.1 on my Ubuntu 16 server and cannot find the install location. I have looked in \/usr\/bin and there are reference to all other versions except 3.6.1. Where can I find the executable for this version?","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":6396,"Q_Id":44734164,"Users Score":4,"Answer":"Use command \"whereis python3.6.1\"","Q_Score":3,"Tags":"python-3.x,installation,ubuntu-16.04","A_Id":44734300,"CreationDate":"2017-06-24T07:46:00.000","Title":"Python 3.6.1 install location","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I've seen a few posts on this topic with odd hard to reproduce behaviours. Here's a new set of data points.\n\nCurrently the following works\n\ncd .\/hosts\n.\/ec2.py --profile=dev\n\n\nAnd this fails\n\nAWS_PROFILE=dev; ansible-playbook test.yml\nThese were both working a couple days ago. Something in my environment changed. Still investigating. Any guesses?\nError message:\nERROR! The file .\/hosts\/ec2.py is marked as executable, but failed to execute correctly. If this is not supposed to be an executable script, correct this withchmod -x .\/hosts\/ec2.py.\nERROR! Inventory script (.\/hosts\/ec2.py) had an execution error: ERROR: \"Error connecting to AWS backend.\nYou are not authorized to perform this operation.\", while: getting EC2 instances\nERROR! .\/hosts\/ec2.py:3: Error parsing host definition ''''': No closing quotation\n\nNote that the normal credentials error is:\nERROR: \"Error connecting to AWS backend.\nYou are not authorized to perform this operation.\", while: getting EC2 instances\n...\nHmmm. Error message has shifted. \n\nAWS_PROFILE=dev; ansible-playbook test.yml\nERROR! ERROR! .\/hosts\/tmp:2: Expected key=value host variable assignment, got: {","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1445,"Q_Id":44737529,"Users Score":0,"Answer":"Looks like the problem was a temporary file in the hosts folder. After removing it the problems went away. 
It looks like std ansible behaviour: Pull in ALL files in the hosts folder.","Q_Score":0,"Tags":"python,amazon-ec2,ansible","A_Id":44737817,"CreationDate":"2017-06-24T14:36:00.000","Title":"Ansible ec2.py runs standalone but fails in playbook","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I'm trying to send date to azure documentdb and I have to send a huge document (100 000+ lines), however when I send it I get a Request size is too large error.\nI guess it should be possible to change this request size limit (which should be stored in a variable somewhere) but I can't find it, does someone know this ?\nThanks !\n(I'm using pydocumentdb by the way)","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":85,"Q_Id":44756937,"Users Score":0,"Answer":"DocumentDB allows only documents upto 2MB to be sent\/inserted. What's the size of your document and how complex\/nested it is?","Q_Score":0,"Tags":"python,azure-cosmosdb","A_Id":44772468,"CreationDate":"2017-06-26T09:46:00.000","Title":"How to change pydocumentdb request size limit?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"For reference. The absolute path is the full path to some place on your computer. The relative path is the path to some file with respect to your current working directory (PWD). For example:\nAbsolute path: \nC:\/users\/admin\/docs\/stuff.txt\nIf my PWD is C:\/users\/admin\/, then the relative path to stuff.txt would be: docs\/stuff.txt\nNote, PWD + relative path = absolute path.\nCool, awesome. Now, I wrote some scripts which check if a file exists.\nos.chdir(\"C:\/users\/admin\/docs\")\n os.path.exists(\"stuff.txt\")\nThis returns TRUE if stuff.txt exists and it works.\nNow, instead if I write,\nos.path.exists(\"C:\/users\/admin\/docs\/stuff.txt\")\nThis also returns TRUE. \nIs there a definite time when we need to use one over the other? Is there a methodology for how python looks for paths? Does it try one first then the other?\nThanks!","AnswerCount":2,"Available Count":1,"Score":1.0,"is_accepted":false,"ViewCount":29634,"Q_Id":44772007,"Users Score":6,"Answer":"The biggest consideration is probably portability. If you move your code to a different computer and you need to access some other file, where will that other file be? If it will be in the same location relative to your program, use a relative address. If it will be in the same absolute location, use an absolute address.","Q_Score":13,"Tags":"python,path,relative-path,absolute-path,pwd","A_Id":44772227,"CreationDate":"2017-06-27T03:54:00.000","Title":"When to use Absolute Path vs Relative Path in Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have B* instances running on App Engine(Python env) to serve user facing requests. 
Sometimes I see B* instances getting terminated due to Exceeded soft private memory limit.\nI understand that increasing the instance class will solve the issue, but\nI have a few queries regarding requests that are present in the Instance Pending Queue!\nAssume we have 2 instances of the B* instance class; let's call them I-1 and I-2. \n\nWhat will happen to the requests that are in I-1's Instance Request Pending Queue after the I-1 instance gets terminated for some reason? Will those requests get evicted from the instance queue because the instance was terminated?\nOr will the requests in I-1's Instance Pending Queue be dequeued and put in I-2's Request Queue by the Request Scheduler as soon as it finds that I-1 is shutting down?\n\nAny help in understanding these things will be highly appreciated!","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":34,"Q_Id":44795589,"Users Score":1,"Answer":"Based on my external observations of how things work, I suspect there is only a single ingress queue (per service\/module) from which requests are only handed to instances which can immediately handle them.\nThe actual parameters of this single queue (depth, waiting time, etc) would be the indicators driving the automatic\/basic instance scaling logic for that service\/module - starting and stopping instances.\nIn such an architecture the death of an instance has absolutely no impact on the queued requests; they would simply be dispatched to other instance(s), either already running or specifically started to handle such requests.\nNote: this is just a theory, though.","Q_Score":0,"Tags":"python,google-app-engine,google-cloud-platform","A_Id":44805435,"CreationDate":"2017-06-28T06:58:00.000","Title":"What happens to requests in Instance Request Pending Queue when a backend instance gets terminated due to \"Exceeded soft private memory limit\"?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"Are there any TFTP libraries in Python to allow the PUT transfer of a binary file to an IP address?\nIdeally I would rather use a built-in library; if this is not possible, then calling the command via Python would be acceptable.\nUsually, if TFTP is installed on Windows, the command in the command prompt would be:\n\ntftp -i xxx.xxx.xxx.xxx put example_filename.bin\n\nOne thing to note is that Python is 32-bit and running on a 64-bit machine. I've been unable to run tftp using subprocess.","AnswerCount":1,"Available Count":1,"Score":0.537049567,"is_accepted":false,"ViewCount":2164,"Q_Id":44796047,"Users Score":3,"Answer":"You can use TFTPy.\nTFTPy is a pure Python implementation of the Trivial FTP protocol.\nTFTPy is a TFTP library for the Python programming language. It includes client and server classes, with sample implementations. Hooks are included for easy inclusion in a UI for populating progress indicators. 
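A rough sketch of the PUT from the question (a hedged example; host and filenames are placeholders, API per TFTPy's TftpClient):\nimport tftpy\nclient = tftpy.TftpClient('xxx.xxx.xxx.xxx', 69)  # server address and port\nclient.upload('example_filename.bin', 'example_filename.bin')  # remote name, local file\n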
It supports RFCs 1350, 2347, 2348 and the tsize option from RFC 2349.","Q_Score":2,"Tags":"python,tftp","A_Id":44796119,"CreationDate":"2017-06-28T07:22:00.000","Title":"Running TFTP client\/library in python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"When I backfill a dag for specific dates, I want it to run sequentially, i.e. I want it to be run day by day,\ncompleting all the tasks for one day and then moving on to the next day, and so on. I have used the depends_on_past argument, but it only helps me set the dependency on tasks, not on dag runs. \nExample: Dag_A has 4 tasks, and I use backfill with depends_on_past.\nAfter executing the first task in Dag_A (first day), it triggers the first task of Dag_A (second day). I don't want that.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":522,"Q_Id":44819809,"Users Score":0,"Answer":"There is an option to set the maximum number of runs per dag in the global airflow.cfg file. The parameter to set is max_active_runs_per_dag.","Q_Score":1,"Tags":"python,workflow,pipeline,airflow,apache-airflow","A_Id":44886741,"CreationDate":"2017-06-29T08:19:00.000","Title":"airflow backfilling dag run dependency","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a Flask application where I submit tasks to Celery (a worker) to execute, so that I can get the webpage back after submitting. Can I achieve the same if I submit the task to Jenkins instead? Just wanted an opinion: why would I use Celery when I can ask Jenkins to schedule\/execute the job through the Jenkins API and still get my webpage back? I may be wrong with my approach, but any light shed on this would be really appreciated. \nThe main aim is that the user submits the form, which is actually a task to execute; after hitting submit, the task detaches from the web and the form reloads. Meanwhile the task runs in the background, which Celery does efficiently, but can it be done via Jenkins?\nThanks","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1256,"Q_Id":44843864,"Users Score":0,"Answer":"A couple of points below to consider when comparing Celery and Jenkins.\n\nCelery is specifically designed and built for running resource-intensive tasks in the background, whereas Jenkins is a more general tool for automation.\nJenkins is built on Java, so native integration is there (although plugins are available), whereas Celery is built with Python, so you can program the tasks directly in Python and send them to Celery, or just call your shell tasks from Python.\nMessage queuing - again, Jenkins does not have built-in support for message brokers, so queuing might be difficult for you. 
Celery uses RabbitMQ by default to queue the tasks, so your tasks never get lost.\nCelery also provides simple callbacks, so when a task is completed you can run some function after it.\nNow if you ask about CPU consumption, celery is not heavy at all.","Q_Score":0,"Tags":"python,jenkins,flask,celery","A_Id":44846900,"CreationDate":"2017-06-30T10:05:00.000","Title":"Flask async job\/task submit to Celery or Jenkins","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"Been trying for the last two hours and it keeps telling me that the job already exists:\nFrom the logs:\nBackup \"datastore_backup_2017_06_30\" already exists. (\/base\/data\/home\/runtimes\/python27\/python27_lib\/versions\/1\/google\/appengine\/ext\/datastore_admin\/backup_handler.py:853) Traceback (most recent call last): File \"\/base\/data\/home\/runtimes\/python27\/python27_lib\/versions\/1\/google\/appengine\/ext\/datastore_admin\/backup_handler.py\", line 839, in _ProcessPostRequest raise BackupValidationError('Backup \"%s\" already exists.' % backup) BackupValidationError: Backup \"datastore_backup_2017_06_30\" already exists.\nHowever, there is not a job running as far as I can tell. The last time I did one was this morning:\nStarted: June 30, 2017, 9:27 a.m.\nCompleted: June 30, 2017, 9:28 a.m.\nAnyone had anything similar, or have a solution?","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":19,"Q_Id":44850538,"Users Score":2,"Answer":"The logs are telling you exactly what the problem is. The backup you did today at 9:27AM already has the name \"datastore_backup_2017_06_30\". To do another, it must have a unique name. Try adding a time to the backup filename, or change it to \"datastore_backup_2017_06_30_2\".","Q_Score":0,"Tags":"python,google-app-engine,google-cloud-datastore","A_Id":44851226,"CreationDate":"2017-06-30T15:55:00.000","Title":"Can't do manual backup of ndb datastore from datastore admin","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I have defined a DAG in a file called tutorial_2.py (actually a copy of the tutorial.py provided in the airflow tutorial, except with the dag_id changed to tutorial_2).\nWhen I look inside my default, unmodified airflow.cfg (located in ~\/airflow), I see that dags_folder is set to \/home\/alex\/airflow\/dags. \nI do cd \/home\/alex\/airflow; mkdir dags; cd dags; cp [...]\/tutorial_2.py tutorial_2.py. Now I have a dags folder matching the path set in airflow.cfg, containing the tutorial_2.py file I created earlier.\nHowever, when I run airflow list_dags, I only get the names corresponding with the default, tutorial DAGs.\nI would like to have tutorial_2 show up in my DAG list, so that I can begin interacting with it. Neither python tutorial_2.py nor airflow resetdb has caused it to appear in the list.\nHow do I remedy this?","AnswerCount":4,"Available Count":4,"Score":1.0,"is_accepted":false,"ViewCount":34425,"Q_Id":44856214,"Users Score":6,"Answer":"In my understanding, AIRFLOW_HOME should point to the directory where airflow.cfg is stored. Then, airflow.cfg can apply and set the dag directory to the value you put in it. 
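For example (paths are illustrative):\nexport AIRFLOW_HOME=\/home\/alex\/airflow\nairflow list_dags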
\nThe important point is: airflow.cfg is useless if your AIRFLOW_HOME is not set.","Q_Score":35,"Tags":"python,airflow,apache-airflow","A_Id":44878992,"CreationDate":"2017-07-01T00:04:00.000","Title":"How to add new DAGs to Airflow?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have defined a DAG in a file called tutorial_2.py (actually a copy of the tutorial.py provided in the airflow tutorial, except with the dag_id changed to tutorial_2).\nWhen I look inside my default, unmodified airflow.cfg (located in ~\/airflow), I see that dags_folder is set to \/home\/alex\/airflow\/dags. \nI do cd \/home\/alex\/airflow; mkdir dags; cd dags; cp [...]\/tutorial_2.py tutorial_2.py. Now I have a dags folder matching the path set in airflow.cfg, containing the tutorial_2.py file I created earlier.\nHowever, when I run airflow list_dags, I only get the names corresponding with the default, tutorial DAGs.\nI would like to have tutorial_2 show up in my DAG list, so that I can begin interacting with it. Neither python tutorial_2.py nor airflow resetdb has caused it to appear in the list.\nHow do I remedy this?","AnswerCount":4,"Available Count":4,"Score":1.2,"is_accepted":true,"ViewCount":34425,"Q_Id":44856214,"Users Score":17,"Answer":"I think the reason for this is because you haven't exported AIRFLOW_HOME.\nTry doing:\nAIRFLOW_HOME=\"\/home\/alex\/airflow\/dags\" airflow list_dags.\nIf that's not working, then do it in two steps:\n\nexport AIRFLOW_HOME=\"\/home\/alex\/airflow\/dags\"\nairflow list_dags\n\nI believe this should work. Give it a go?","Q_Score":35,"Tags":"python,airflow,apache-airflow","A_Id":44856401,"CreationDate":"2017-07-01T00:04:00.000","Title":"How to add new DAGs to Airflow?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have defined a DAG in a file called tutorial_2.py (actually a copy of the tutorial.py provided in the airflow tutorial, except with the dag_id changed to tutorial_2).\nWhen I look inside my default, unmodified airflow.cfg (located in ~\/airflow), I see that dags_folder is set to \/home\/alex\/airflow\/dags. \nI do cd \/home\/alex\/airflow; mkdir dags; cd dags; cp [...]\/tutorial_2.py tutorial_2.py. Now I have a dags folder matching the path set in airflow.cfg, containing the tutorial_2.py file I created earlier.\nHowever, when I run airflow list_dags, I only get the names corresponding with the default, tutorial DAGs.\nI would like to have tutorial_2 show up in my DAG list, so that I can begin interacting with it. 
Neither python tutorial_2.py nor airflow resetdb has caused it to appear in the list.\nHow do I remedy this?","AnswerCount":4,"Available Count":4,"Score":0.049958375,"is_accepted":false,"ViewCount":34425,"Q_Id":44856214,"Users Score":1,"Answer":"The issue is that you might have two airflow configs existing in your directories, so check for \/root\/airflow\/dags; if it exists, you need to change the dags_folder path in both airflow.cfg files.","Q_Score":35,"Tags":"python,airflow,apache-airflow","A_Id":55957328,"CreationDate":"2017-07-01T00:04:00.000","Title":"How to add new DAGs to Airflow?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have defined a DAG in a file called tutorial_2.py (actually a copy of the tutorial.py provided in the airflow tutorial, except with the dag_id changed to tutorial_2).\nWhen I look inside my default, unmodified airflow.cfg (located in ~\/airflow), I see that dags_folder is set to \/home\/alex\/airflow\/dags. \nI do cd \/home\/alex\/airflow; mkdir dags; cd dags; cp [...]\/tutorial_2.py tutorial_2.py. Now I have a dags folder matching the path set in airflow.cfg, containing the tutorial_2.py file I created earlier.\nHowever, when I run airflow list_dags, I only get the names corresponding with the default, tutorial DAGs.\nI would like to have tutorial_2 show up in my DAG list, so that I can begin interacting with it. Neither python tutorial_2.py nor airflow resetdb has caused it to appear in the list.\nHow do I remedy this?","AnswerCount":4,"Available Count":4,"Score":0.049958375,"is_accepted":false,"ViewCount":34425,"Q_Id":44856214,"Users Score":1,"Answer":"I might be using the latest airflow; the command has changed. What works for me is:\n\nexport AIRFLOW_HOME=\"~\/airflow\"\nThen run airflow dags list","Q_Score":35,"Tags":"python,airflow,apache-airflow","A_Id":69834752,"CreationDate":"2017-07-01T00:04:00.000","Title":"How to add new DAGs to Airflow?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am trying to create a python script (widget.py) that will get a JSON feed from an arbitrary resource. I can't get the python script to execute on localhost. Here are the steps I have followed:\n\nIn etc\/apache2\/httpd.conf I enabled LoadModule cgi_module libexec\/apache2\/mod_cgi.so\nRestarted Apache sudo apachectl restart\nAdded a .htaccess file to my directory:\n<Directory ...>\n Options +ExecCGI\n AddHandler cgi-script .py\n<\/Directory>\nNOTE: I will eventually need to deploy this on a server where I won't have access to the apache2 directory.\nNavigated to http:\/\/localhost\/~walter\/widget\/widget.py\n\nI get a 500 server error. 
Log file contents:\n[Sat Jul 01 08:51:00.922413 2017] [core:info] [pid 75403] AH00096: removed PID file \/private\/var\/run\/httpd.pid (pid=75403)\n[Sat Jul 01 08:51:00.922446 2017] [mpm_prefork:notice] [pid 75403] AH00169: caught SIGTERM, shutting down\nAH00112: Warning: DocumentRoot [\/usr\/docs\/dummy-host.example.com] does not exist\nAH00112: Warning: DocumentRoot [\/usr\/docs\/dummy-host2.example.com] does not exist\n[Sat Jul 01 08:51:01.449227 2017] [mpm_prefork:notice] [pid 75688] AH00163: Apache\/2.4.25 (Unix) PHP\/5.6.30 configured -- resuming normal operations\n[Sat Jul 01 08:51:01.449309 2017] [core:notice] [pid 75688] AH00094: Command line: '\/usr\/sbin\/httpd -D FOREGROUND'\nDo I need to enable cgi in \/etc\/apache2\/users\/walter\/http.conf? Should I?","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":3573,"Q_Id":44862432,"Users Score":0,"Answer":"1, 2. Ok.\n\nAre .htaccess files allowed in \/etc\/apache2\/httpd.conf?\n\n \u2014 I think you want .\n\nLook into the error log \u2014 what is the error?","Q_Score":1,"Tags":"python,apache,.htaccess,httpd.conf,macos-sierra","A_Id":44862745,"CreationDate":"2017-07-01T15:12:00.000","Title":"Executing python script on Apache in MacOS Sierra","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I am trying to create a python script (widget.py) that will get a JSON feed from an arbitrary resource. I can't get the python script to execute on localhost. Here are the steps I have followed:\n\nIn etc\/apache2\/httpd.conf I enabled LoadModule cgi_module libexec\/apache2\/mod_cgi.so\nRestarted Apache sudo apachectl restart\nAdded a .htaccess file to my directory:\n<Directory ...>\n Options +ExecCGI\n AddHandler cgi-script .py\n<\/Directory>\nNOTE: I will eventually need to deploy this on a server where I won't have access to the apache2 directory.\nNavigated to http:\/\/localhost\/~walter\/widget\/widget.py\n\nI get a 500 server error. Log file contents:\n[Sat Jul 01 08:51:00.922413 2017] [core:info] [pid 75403] AH00096: removed PID file \/private\/var\/run\/httpd.pid (pid=75403)\n[Sat Jul 01 08:51:00.922446 2017] [mpm_prefork:notice] [pid 75403] AH00169: caught SIGTERM, shutting down\nAH00112: Warning: DocumentRoot [\/usr\/docs\/dummy-host.example.com] does not exist\nAH00112: Warning: DocumentRoot [\/usr\/docs\/dummy-host2.example.com] does not exist\n[Sat Jul 01 08:51:01.449227 2017] [mpm_prefork:notice] [pid 75688] AH00163: Apache\/2.4.25 (Unix) PHP\/5.6.30 configured -- resuming normal operations\n[Sat Jul 01 08:51:01.449309 2017] [core:notice] [pid 75688] AH00094: Command line: '\/usr\/sbin\/httpd -D FOREGROUND'\nDo I need to enable cgi in \/etc\/apache2\/users\/walter\/http.conf? Should I?","AnswerCount":3,"Available Count":2,"Score":0.2605204458,"is_accepted":false,"ViewCount":3573,"Q_Id":44862432,"Users Score":4,"Answer":"Got it to work. 
Here are the steps that I followed:\n\nIn etc\/apache2\/httpd.conf I uncommented:\nLoadModule cgi_module libexec\/apache2\/mod_cgi.so\nRestarted Apache: sudo apachectl restart\nAdded a .htaccess file to my directory with the following contents:\nOptions ExecCGI\n AddHandler cgi-script .py\n Order allow,deny\n Allow from all\nAdded #!\/usr\/bin\/env python to the top of my python script\nIn the terminal, enabled execution of the python script using: chmod +x widget.py","Q_Score":1,"Tags":"python,apache,.htaccess,httpd.conf,macos-sierra","A_Id":44865681,"CreationDate":"2017-07-01T15:12:00.000","Title":"Executing python script on Apache in MacOS Sierra","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I have Python scripts on a Linux server.\nI have multiple scripts in a directory, for example \/home\/python\/scripts;\nall users use the same username \"python\" to log in to the linux server.\nIf multiple users run the same script, are there any issues?\nLike if one user starts executing one script and, before this script finishes, another user also starts the same script. Do variables get overwritten?\nWhat is the best way to handle this kind of thing.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":917,"Q_Id":44866952,"Users Score":1,"Answer":"So long as the state is not shared in any way between the different interpreters executing the scripts (each user running the script gets a different Python interpreter process), there should be no problem. However if there is some shared context (such as a log file each process is simultaneously reading\/writing from assuming mutual exclusivity), you will very likely have trouble. The trouble could be mitigated in many ways, whether through mutexes or other synchronized access.","Q_Score":0,"Tags":"python,linux","A_Id":44866968,"CreationDate":"2017-07-02T02:23:00.000","Title":"Python script to run same script by multiple users","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"Is there any Python\/shell script to drive memory to 100% usage for 20 minutes?\nThe memory size is very big: 4 TB. \nOperating System: Linux.\nPython version: 2.7","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":259,"Q_Id":44882144,"Users Score":0,"Answer":"\nIs there any Python\/shell script to drive memory to 100% usage for 20 minutes?\n\nTo be technical, we need to be precise. 100% usage of the whole memory by a single process isn't technically possible. Your memory is shared with other processes. The fact that the kernel is in-memory software debunks the whole idea. 
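If the goal is simply a process that grabs a lot of RAM and holds it for a while, a rough Python sketch (chunk count and size are assumptions; scale them to the machine):\nimport time\nhog = []\nfor _ in range(100):\n    hog.append(bytearray(1024 * 1024 * 1024))  # keep ~1 GiB chunks referenced\ntime.sleep(20 * 60)  # hold the allocation for 20 minutes\n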
\nPlus, a process might start another process; say you run Python from the shell, now you have two processes (the shell and Python), each having its own memory area.\nIf you mean a process that can consume most of the RAM, then yes, that's not impossible.","Q_Score":0,"Tags":"python,python-2.7,shell","A_Id":44884687,"CreationDate":"2017-07-03T09:31:00.000","Title":"shell\/Python script to utilize memory 100% for 20 mins","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a .c file that has to be executed only at a particular time, say 9pm.\nSo, please let me know if there is any possibility of achieving this via Python scripting.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":140,"Q_Id":44882981,"Users Score":0,"Answer":"What previous answers miss is your question about how to trigger at a specific time, e.g. 9pm. \nYou can use cron for this. It's a *nix utility that executes arbitrary commands on a time schedule you specify. \nYou could set cron to run your C program directly, or to launch a Python wrapper that then starts your C executable. \nTo get started: type crontab -e and follow the examples.","Q_Score":0,"Tags":"python,c","A_Id":44883166,"CreationDate":"2017-07-03T10:13:00.000","Title":"Python script to execute a C program at particular time interval","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I would like to install python 3.5 packages so they would be available in Jupyter notebook with the pyspark3 kernel.\nI've tried to run the following script action:\n#!\/bin\/bash\nsource \/usr\/bin\/anaconda\/envs\/py35\/bin\/activate py35\nsudo \/usr\/bin\/anaconda\/envs\/py35\/bin\/conda install -y keras tensorflow theano gensim\n\nbut the packages get installed on python 2.7 and not in 3.5","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":2656,"Q_Id":44902885,"Users Score":0,"Answer":"Have you tried installing using pip? \nIn some cases where you have both Python 2 and Python 3, you have to run pip3 instead of just pip to invoke pip for Python 3.","Q_Score":1,"Tags":"python,azure,pyspark,jupyter-notebook,azure-hdinsight","A_Id":44903656,"CreationDate":"2017-07-04T10:02:00.000","Title":"how to install python package on azure hdinsight pyspark3 kernel?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I've been developing a set of scripts in PyCharm using the C:\/Python27\/python.exe interpreter...\nI'm creating a batch file and then running all of the scripts through this file from the cmd shell; however, the shell is not recognizing most modules because it is using the wrong interpreter (the path for anaconda instead)...\nHow do I change the shell to use the C:\/Python27\/python.exe interpreter as the default all the time instead?\nI've tried looking this up but it all points to just adding the interpreter path, which I have... 
but the shell still uses the anaconda interpreter.\nAny help appreciated.","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":185,"Q_Id":44907749,"Users Score":0,"Answer":"Found a solution to my problem...\nI needed to use:\nset path=%path%;C:\\Python27\nThis switched the interpreter.\nCheers","Q_Score":0,"Tags":"python","A_Id":44908141,"CreationDate":"2017-07-04T13:49:00.000","Title":"Choosing specific python interpreter in the shell","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm on Linux. I wrote a Python script that is located at, let's say, \/home\/python\/main.py. Every time I run it, it asks for a path using raw_input. I run it from a different place every time.\nSay I run it from \/home\/test like this: python \/home\/python\/main.py from the terminal.\nHow can I give it the path I'm currently in, if possible? I don't want to hardcode paths. So I want to give it the path from which I'm using the terminal. I don't want to do pwd and copy paths. \nI'm wondering if there's something like ~, which always points to the home directory. Something similar which points to the user's current directory.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":44,"Q_Id":44914376,"Users Score":3,"Answer":"Use os.getcwd - it returns the current working directory. Requires import os.\nAlso it's slightly unclear what exactly you want. You don't really need the current working directory in python - every relative path in python will be interpreted as starting in the current directory. So simply use relative paths (in linux those are the ones NOT starting with \/) - this will work as long as you don't change the current directory (which is a bad habit anyway).","Q_Score":0,"Tags":"python,bash,python-2.7","A_Id":44914400,"CreationDate":"2017-07-04T21:49:00.000","Title":"How to find the user current path?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I struggle with a very basic problem in the Atom editor. I simply want to run a python file but am unable to just open a command line or console. In other forums I read that I need to install Script. I tried to do it in Packages -> Settings View. But if I type 'script' in the install packages field, only this response appears: Searching for \u201cscript\u201d failed. Show output\u2026\nHow can I make Atom find these packages?\nThx","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":4217,"Q_Id":44926214,"Users Score":0,"Answer":"press ctrl+, to open your preferences\ngo to install\nsearch for script\npress install","Q_Score":1,"Tags":"python,command-line,installation,atom-editor","A_Id":44926753,"CreationDate":"2017-07-05T12:24:00.000","Title":"Installing Script in Atom","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a CentOS 7 machine which has 2 Python versions: python gives version 2.7.5 and python2.7 gives version 2.7.13. I want to make 2.7.13 the default version, such that when I check python --version it gives 2.7.13 and not 2.7.5. 
I have added both to PATH.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":97,"Q_Id":44939999,"Users Score":1,"Answer":"If you set Python 2.7.13 in your PATH and not 2.7.5, the Python used should be 2.7.13.\nOr you can try to set the PYTHONPATH variable","Q_Score":0,"Tags":"linux,python-2.7,centos7","A_Id":44940122,"CreationDate":"2017-07-06T04:46:00.000","Title":"Different python versions in centos","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am very new to python, but it seems like I should be able to activate the python environment by simply typing python. Unfortunately, I keep getting the error \"'python' is not recognized as an internal or external command,\noperable program or batch file.\" I can access python through its path at C:\\Python27\\python, which works fine, but I can't seem to get the shortcut to work. \nDetails: I manually installed Python 2.7, and I run Windows 10. I have tried going to the Advanced System Settings to add the PATH manually, but with no result. None of the tutorials or help articles have suggestions for what to do if adding it manually fails. Is there any way I can fix this? It seems like a small thing, but it's really bugging me.","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":381,"Q_Id":44940094,"Users Score":0,"Answer":"Your path is bugged:\nC:\\Program Files\\Git\\cmd;\"C:\\Windows;C:\\Windows\\System32;C:\\Python27\";C:\\Python27\\Scripts\nshould not have those random \" characters.","Q_Score":0,"Tags":"python,python-2.7","A_Id":44940434,"CreationDate":"2017-07-06T04:56:00.000","Title":"Unable to add python to PATH","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am very new to python, but it seems like I should be able to activate the python environment by simply typing python. Unfortunately, I keep getting the error \"'python' is not recognized as an internal or external command,\noperable program or batch file.\" I can access python through its path at C:\\Python27\\python, which works fine, but I can't seem to get the shortcut to work. \nDetails: I manually installed Python 2.7, and I run Windows 10. I have tried going to the Advanced System Settings to add the PATH manually, but with no result. None of the tutorials or help articles have suggestions for what to do if adding it manually fails. Is there any way I can fix this? 
It seems like a small thing, but it's really bugging me.","AnswerCount":3,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":381,"Q_Id":44940094,"Users Score":1,"Answer":"Your path configuration is incorrect; your path should look like this:\n\nC:\\ProgramData\\Oracle\\Java\\javapath;C:\\WINDOWS\\system32;C:\\WINDOWS;C:\\WINDOWS\\System32\\Wbem;C:\\WINDOWS\\System32\\WindowsPowerShell\\v1.0\\;C:\\Program Files (x86)\\MiKTeX 2.9\\miktex\\bin\\;C:\\Program Files\\jEdit;C:\\Program Files\\Git\\cmd;C:\\Python27\\;C:\\Python27\\Scripts;C:\\Users\\Maria\\AppData\\Local\\Microsoft\\WindowsApps;\n\nAfter changing the path, make sure you restart the command prompt or any other application that needs to use Python (or you can just restart the computer).","Q_Score":0,"Tags":"python,python-2.7","A_Id":44940481,"CreationDate":"2017-07-06T04:56:00.000","Title":"Unable to add python to PATH","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm currently working on a larger Apache Beam pipeline with the Python API which reads data from BigQuery and in the end writes it back to another BigQuery task. \nOne of the transforms needs to use a binary program to transform the data, and for that it needs to load a 23GB file with binary lookup data. So starting and running the program takes a lot of overhead (takes about 2 minutes to load\/run each time) and RAM, and it wouldn't make sense to start it up for just a single record. Plus the 23GB file would need to be copied locally from Cloud Storage every time.\nThe workflow for the binary would be:\n\nCopy the 23GB file from cloud storage if it's not there already\nSave records to a file\nrun the binary with call()\nread the output of the binary and return it\n\nThe number of records the program can process at a time is basically unlimited, so it would be nice to get a somewhat-distributed Beam Transform, where I could specify a number of records to be processed at once (say 100'000 at a time), but still have it distributed so it can run for 100'000 records at a time on multiple nodes.\nI don't see Beam supporting this behaviour; it might be possible to hack something together as a KeyedCombineFn operation that collects records based on some split criterion\/key and then runs the binary in the merge_accumulators step over the accumulated records. But this seems very hackish to me.\nOr is it possible to GroupByKey and process groups as batches? Does this guarantee that each group is processed at once, or can groups be split behind the scenes by Beam?\nI also saw there's a GroupIntoBatches in the Java API, which sounds like what I'd need, but isn't available in the Python SDK as far as I can tell.\nMy two questions are: what's the best way (performance-wise) to achieve this use case in Apache Beam, and if there isn't a good solution, is there some other Google Cloud service that might be better suited and could be used like Beam --> Other Service --> Beam?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1299,"Q_Id":44942445,"Users Score":1,"Answer":"Groups cannot be split behind the scenes, so using a GroupByKey should work. 
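In the Python SDK that could look like (a hedged sketch; names are illustrative):\nimport random\nimport apache_beam as beam\nbatches = (records\n           | beam.Map(lambda x: (random.randint(1, 1000), x))  # assign a random key\n           | beam.GroupByKey())  # all values for a key arrive together\n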
In fact, this is a requirement since each individual element must be processed on a single machine and after a GroupByKey all values with a given key are part of the same element.\nYou will likely want to assign random keys. Keep in mind that if there are too many values with a given key it may also be difficult to pass all of those values to your program -- so you may also want to limit how many of the values you pass to the program at a time and\/or adjust how you assign keys.\nOne trick for assigning random keys is to generate the random number in start_bundle (say 1 to 1000) and then in process just increment this, wrapping 1001 back around to 1. This avoids generating a random number for every element, and still ensures a good distribution of keys.\nYou could create a PTransform for this logic (dividing a PCollection into a PCollection of chunks for processing), and that would be potentially reusable in similar situations.","Q_Score":0,"Tags":"python,google-cloud-platform,google-cloud-dataflow,apache-beam","A_Id":44953748,"CreationDate":"2017-07-06T07:31:00.000","Title":"Batch Processing in Apache Beam with large overhead","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"How can I set an environment variable with the location of the pytest.ini, tox.ini or setup.cfg for running pytest by default?\nI created a docker container with a volume pointing to my project directory, so every change I make is also visible inside the docker container. The problem is that I have a pytest.ini file on my project root which won't apply to the docker container. \nSo I want to set an environment variable inside the docker container to specify where to look for the pytest configuration. Does anyone have any idea how I could do that?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":7501,"Q_Id":44958993,"Users Score":6,"Answer":"There is no way to do that. You can use a different pytest configuration using pytest -c but tox.ini and setup.cfg must reside in the top-level directory of your package, next to setup.py.","Q_Score":5,"Tags":"python,docker,pytest,pytest-django","A_Id":44959229,"CreationDate":"2017-07-06T21:16:00.000","Title":"pytest: environment variable to specify pytest.ini location","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am looking to use the KCL on Spark Streaming with PySpark.\nAny pointers would be helpful.\nI tried a few given by the Spark Kinesis Integration link.\nBut I get an error for a JAVA class reference.\nIt seems Python is using a JAVA class.\nI tried linking\nspark-streaming-kinesis-asl-assembly_2.10-2.0.0-preview.jar\nwhile trying to apply the KCL app on Spark,\nbut still have the error.\nPlease let me know if anyone has done it already.\nIf I search online I get more about Twitter and Kafka.\nI am not able to get much help with regard to Kinesis.\nSpark version used: 1.6.3","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":374,"Q_Id":44962107,"Users Score":0,"Answer":"I encountered the same problem. 
The kinesis-asl jar had several files missing.\nTo overcome this problem, I included the following jars in my spark-submit:\n\namazon-kinesis-client-1.9.0.jar\naws-java-sdk-1.11.310.jar\njackson-dataformat-cbor-2.6.7.jar\n\nNote: I am using Spark 2.3.0, so the jar versions listed might not be the same as those you should be using for your spark version.\nHope this helps.","Q_Score":0,"Tags":"python-2.7,spark-streaming,amazon-kcl","A_Id":49734017,"CreationDate":"2017-07-07T03:31:00.000","Title":"using Kinesis Client library with Spark Streaming PySpark","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm running Ubuntu 17.04 (fresh install) and already installed pip. However, when I try to install anything I get this:\n\nCommand \"python setup.py egg_info\" failed with error code 1 in\n \/tmp\/pip-build-kBfUEp\/kivy\/\n\nDepending on what I'm installing, I get the same thing but slightly different. For example:\n\nCommand \"python setup.py egg_info\" failed with error code 1 in\n \/tmp\/pip-build-zqj5Ka\/pypiwin32\/\n\nI've tried everything and I have absolutely no idea how to solve this.\nThanks.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":2087,"Q_Id":44981793,"Users Score":0,"Answer":"I've had the same error; I used the following commands to get past it:\n(I was working on CentOS 7)\n1) sudo yum install MySQL-devel\n2) sudo yum install openssl-devel\n3) sudo yum install python-devel\nI hope this works for you.","Q_Score":0,"Tags":"python,installation,pip","A_Id":47015021,"CreationDate":"2017-07-08T01:59:00.000","Title":"python setup.py egg_info failed with error code 1","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a .bat file which outputs data to the console; then I need to read that data with a Python script and run it through \"if cycles\".\nThe question is: how do I read that data with a Python script?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":24,"Q_Id":44986563,"Users Score":0,"Answer":"There are two approaches you might take:\n\nUse a pipe (the | character) in your command line to redirect the output that is going to the console to the Python script, and code the Python script to read from stdin.\nRedirect the output that is going to the console to a file (with >filename) and then code the Python script to read from that file.","Q_Score":0,"Tags":"python,c++","A_Id":44987001,"CreationDate":"2017-07-08T13:07:00.000","Title":"Redirect data from console to the python script","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have lately been migrating my personal hobby python web application from 127.0.0.1 to Cloud 9, but found myself completely new to the idea of setting up an ssl certificate. I did some online research on openssl and its python wrapper but still couldn't find any definitive guide on how to set it up in practice, specifically for the Cloud 9 IDE platform.\nCould someone please give a walkthrough, or point out some reference links here? 
Thanks.\nBy the way, I'm using CherryPy for the python server.\nEDIT: specifically, I have the following questions:\n\nis it required to run openssl from the server (in my case, the cloud9 bash), or can I run openssl from my local laptop and then upload the generated key and cert?\ndoes it make any sense to use a passphrase to protect the key? I don't see any point here; correct me if I'm wrong please\nhow do I install it on cloud9?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":184,"Q_Id":44996563,"Users Score":0,"Answer":"Cloud9 runs your app behind an https proxy, so you need to just use http, since the cloud9 proxy won't accept your self-signed certificate.","Q_Score":0,"Tags":"python,ssl,cloud9-ide","A_Id":44998382,"CreationDate":"2017-07-09T12:49:00.000","Title":"Could someone please provide a walkthrough on how to setup a self signing ssl certificate on cloud 9?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"FindPythonLibs.cmake is somehow finding Python versions that don't exist\/were uninstalled. \nWhen I run find_package(PythonLibs 3 REQUIRED) CMake properly finds my Python3.6 installation and adds its include path, but then I get the error\nNo rule to make target 'C:\/Users\/ultim\/Anaconda2\/libs\/python27.lib', needed by 'minotaur-cpp.exe'. Stop.\nThis directory doesn't exist, and I recently uninstalled Anaconda and the python that came with it. I've looked through my environment variables and registry, but find no reference to this location.\nWould anyone know where there might still be a reference to this location?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":114,"Q_Id":44997969,"Users Score":0,"Answer":"Since the \"REQUIRED\" option to find_package() is not working, you can be explicit about which Python library to use via CMake options with cache variables:\ncmake -DPYTHON_INCLUDE_DIR=C:\\Python36\\include -DPYTHON_LIBRARY=C:\\Python36\\libs\\python36.lib ..","Q_Score":1,"Tags":"python,cmake,anaconda","A_Id":44999337,"CreationDate":"2017-07-09T15:20:00.000","Title":"CMake's find packages finds nonexisting python library","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"For whatever reason, when I click on launch for any app in the Navigator, it will say launching app, with the status bar moving, but it will eventually stop and the app never opens. I have tried starting with administrative privileges and I have even tried uninstalling and reinstalling and then rebooting, but still the apps never launch.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":1414,"Q_Id":45002397,"Users Score":0,"Answer":"I just had this problem! 
The cause was a corrupted registry entry affecting the launch of CMD.EXE (used by Navigator to launch applications).\nSolution: Empty the HKEY_CURRENT_USER\\Software\\Microsoft\\Command Processor\\AutoRun key\n(It contained several occurrences of \"if exist\"!?)\nIf you used \"conda init\", then you will need to re-run the command.","Q_Score":3,"Tags":"python,anaconda,navigator","A_Id":67814968,"CreationDate":"2017-07-10T00:48:00.000","Title":"Anaconda Navigator Applications not launching apps","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"For whatever reason, when I click on launch for any app in the Navigator, it will say launching app, with the status bar moving, but it will eventually stop and the app never opens. I have tried starting with administrative privileges and I have even tried uninstalling and reinstalling and then rebooting, but still the apps never launch.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":1414,"Q_Id":45002397,"Users Score":0,"Answer":"If you are using a Debian-based distribution, you can simply run the following command in your shell after installing the Anaconda distribution:\nexport PATH=YOUR_ANACONDA_INSTALLATION_FOLDER_LOCATION\/bin:$PATH\nthen run\nanaconda-navigator\nFor example, my anaconda3 was installed at \/root\/anaconda3, so I used the following commands:\n\u250c\u2500\u2500[root@kali]\u2500[\/]\n\u2514\u2500\u2500\u257c # export PATH=\/root\/anaconda3\/bin:$PATH\n\u250c\u2500[root@kali]\u2500[\/]\n\u2514\u2500\u2500\u257c # anaconda-navigator\nand that worked fine for me :)","Q_Score":3,"Tags":"python,anaconda,navigator","A_Id":59505434,"CreationDate":"2017-07-10T00:48:00.000","Title":"Anaconda Navigator Applications not launching apps","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I was on a freelancer website and I found this work proposal:\n\nProject Description\nHello\nWe need experience developer in python.\nOnly bit that person who has a experience in python and Linux.\nI want to execute python code in Booting time before execute Operating\n System Desktop.\n\nI know that unless I apply, I won't have any detail about the project, but anyway it seems odd to me. \nFrom my understanding python is interpreted, which means that it needs a virtual environment, and that's what makes it platform independent. Therefore how can a python script (which doesn't convert 1:1 to machine instructions) run before the operating system? Since I know little about what's going on at boot time (I guess some pre-defined instructions lying in the motherboard ROM are executed, then the bootloader loads the OS into RAM, and the program counter holds the address for the entry point of the OS itself, but I am just supposing) I ask you whether such a thing could be possible.","AnswerCount":1,"Available Count":1,"Score":0.537049567,"is_accepted":false,"ViewCount":112,"Q_Id":45014226,"Users Score":3,"Answer":"Linux, being a UNIX-type OS, has the concept of runlevels. Each runlevel has a certain number of services stopped or started, giving the user control over the behavior of the machine. As far as I know, for Linux seven runlevels exist, numbered from zero to six. 
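(Roughly: 0 is halt, 1 single-user, 3 multi-user with networking, 5 multi-user with a graphical login, and 6 reboot; exact meanings vary by distribution.) 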
The \"Operating System Desktop\" becomes available at run level 5. At boot time the system will pass through several other runlevels before getting to 5. At level 3 the system will be have Multi-User Mode with Networking, and this would be a good level to run what ever python script you need. Maybe check into configuring Linux init scripts.","Q_Score":2,"Tags":"python,boot","A_Id":45014631,"CreationDate":"2017-07-10T14:07:00.000","Title":"Python before booting?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am trying to create a function using Lambda in Python on Linux. I have tried to use pyad, but it gave me Exception: Must be running Windows in order to use pyad.\nWhat other way can i create user and group in AD?\nThanks","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1731,"Q_Id":45014964,"Users Score":0,"Answer":"I can see 2 aspects that you should pay attention.\n\nLambda runs in a Linux environment. So, if you have some library that uses internal resources from windows, it won't work on AWS Lambda environment. You should search another option, like python-ldap or something similar.\nLambda environment provides only basic python modules. For sure pyad or python-ldap is not included. So, if you want to use it, make sure you will add this module in your zip lambda file.","Q_Score":0,"Tags":"python,linux,lambda,active-directory,aws-lambda","A_Id":45064312,"CreationDate":"2017-07-10T14:38:00.000","Title":"How to create user and group in AD using Lambda","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm thinking of starting to use Apache Airflow for a project and am wondering how people manage continuous integration and dependencies with airflow. More specifically\nSay I have the following set up\n3 Airflow servers: dev staging and production.\nI have two python DAG'S whose source code I want to keep in seperate repos.\nThe DAG's themselves are simple, basically just use a Python operator to call main(*args, **kwargs). However the actually code that's run by main is very large and stretches several files\/modules.\nEach python code base has different dependencies\nfor example,\nDag1 uses Python2.7 pandas==0.18.1, requests=2.13.0\nDag2 uses Python3.6 pandas==0.20.0 and Numba==0.27 as well as some cythonized code that needs to be compiled\nHow do I manage Airflow running these two Dag's with completely different dependencies?\nAlso, how do I manage the continuous integration of the code for both these Dags into each different Airflow enivornment (dev, staging, Prod)(do I just get jenkins or something to ssh to the airflow server and do something like git pull origin BRANCH)\nHopefully this question isn't too vague and people see the problems i'm having.","AnswerCount":1,"Available Count":1,"Score":0.537049567,"is_accepted":false,"ViewCount":1922,"Q_Id":45015116,"Users Score":3,"Answer":"We use docker to run the code with different dependencies and DockerOperator in airflow DAG, which can run docker containers, also on remote machines (with docker daemon already running). 
We actually have only one airflow server to run jobs, but more machines with the docker daemon running, which the airflow executors call.\nFor continuous integration we use GitLab CI with the GitLab container registry for each repository. This should be easily doable with Jenkins.","Q_Score":6,"Tags":"python,airflow","A_Id":45117737,"CreationDate":"2017-07-10T14:46:00.000","Title":"Apache Airflow Continuous Integration Workflow and Dependency management","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"In Python, I want to read a file that is located inside an RPM package. Can I open the package, extract the file, and save its contents in a Python variable (without extracting the whole package to \/tmp)?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":986,"Q_Id":45036243,"Users Score":0,"Answer":"Most solutions are variants of rpm2cpio | cpio, which has the disadvantage that the entire cpio ball is unpacked on the file system rather than extracting a single file onto a pipeline that can be read into a python variable. This is largely a limitation of the cpio(1) archive file selection mechanism.\nThese days GNU tar can\/will handle cpio formats, and so a GNU tar (which can extract a single file from an archive) rpm2cpio | tar pipeline might be able to extract a single file without unpacking the entire archive.","Q_Score":0,"Tags":"python,extract,rpm","A_Id":45277220,"CreationDate":"2017-07-11T13:41:00.000","Title":"python get file in rpm","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm running ODOO 10 from source code in Eclipse on Windows 10. It's running OK in the web interface (on localhost).\nI want to control odoo via the command line at the same time. Can I do so while it's running in the web interface?\nIf so, how do I invoke the odoo commands on the server?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1605,"Q_Id":45043181,"Users Score":0,"Answer":"You can try the following:\npython .\/odoo-bin -c odoo.conf\nHope this helps.","Q_Score":1,"Tags":"python,windows,eclipse,odoo-10","A_Id":56924563,"CreationDate":"2017-07-11T19:37:00.000","Title":"How to control odoo 10 from command line while its running in the web","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"So I've decided to learn Python and after getting a handle on the basic syntax of the language, decided to write a \"practice\" program that utilizes various modules.\nI have a basic curses interface made already, but before I get too far I want to make sure that I can redirect standard input and output over a network connection. In effect, I want to be able to \"serve\" this curses application over a TCP\/IP connection.\nIs this possible and if so, how can I redirect the input and output of curses over a network socket?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":779,"Q_Id":45062817,"Users Score":2,"Answer":"This probably won't work well. 
curses has to know what sort of terminal (or terminal emulator, these days) it's talking to, in order to choose the appropriate control characters for working with it. If you simply redirect stdin\/stdout, it's going to have no way of knowing what's at the other end of the connection.\nThe normal way of doing something like this is to leave the program's stdin\/stdout alone, and just run it over a remote login. The remote access software (telnet, ssh, or whatever) will take care of identifying the remote terminal type, and letting the program know about it via environment variables.","Q_Score":2,"Tags":"python,sockets,curses","A_Id":45063103,"CreationDate":"2017-07-12T16:09:00.000","Title":"Python3 - curses input\/output over a network socket?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"Trying to start Anaconda Navigator in Linux. Getting this error:\nbyte indices must be integers or slices, not str","AnswerCount":2,"Available Count":1,"Score":0.2913126125,"is_accepted":false,"ViewCount":1155,"Q_Id":45068082,"Users Score":3,"Answer":"Found a solution.\nThis is what to do:\nrun $ source activate root and then \n$ anaconda-navigator","Q_Score":2,"Tags":"python,anaconda","A_Id":45068095,"CreationDate":"2017-07-12T21:50:00.000","Title":"Anaconda-navigator: byte indices must be integers or slices, not str","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I need to use an old-fashioned DOS\/Windows executable (the source is not available). It uses two input files and produces one output file. \nI have to run this several thousand times, using different input files. I wrote a simple python script looping over input files to automate this. \nThe problem is that this exe finishes every single run with the immortal \"press Enter\".\nI start the script, keep the key pressed, 'returns' accumulate in the buffer and the script runs for a while producing several outputs. \nIs there any more elegant way to proceed (i.e. without using the finger and staring at the monitor)? \nI have already tried some obvious solutions (e.g. os.system('return'), os.system('\\n')) but they do not work.\nNext day edit:\n@Eric, many thanks for the code, it works. I also thank the others who contributed, and sorry for the sloppily written question and unformatted code in the comment (it was 3.30 am :)","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":4732,"Q_Id":45069855,"Users Score":0,"Answer":"Have you tried os.system(\u2018\\r\\n\u2019)? I think that\u2019s the newline character on Windows.\nEdit: Your answer also used a forward slash instead of a backslash--definitely try the other way too, unless that\u2019s just a typo.","Q_Score":4,"Tags":"python,operating-system","A_Id":45069902,"CreationDate":"2017-07-13T01:07:00.000","Title":"Python (os): how to simulate pressing Enter while executing an external application","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I need to use an old-fashioned DOS\/Windows executable (the source is not available). 
It uses two input files and produces one output file. \nI have to run this several thousand times, using different input files. I wrote a simple python script looping over input files to automate this. \nThe problem is that this exe finishes every single run with the immortal \"press Enter\".\nI start the script, keep the key pressed, 'returns' accumulate in the buffer and the script runs for a while producing several outputs. \nIs there any more elegant way to proceed (i.e. without using the finger and staring at the monitor)? \nI have already tried some obvious solutions (e.g. os.system('return'), os.system('\\n')) but they do not work.\nNext day edit:\n@Eric, many thanks for the code, it works. I also thank others who contributed, and sorry for the sloppily written question and unformatted code in the comment (it was 3.30 am :)","AnswerCount":3,"Available Count":2,"Score":0.0665680765,"is_accepted":false,"ViewCount":4732,"Q_Id":45069855,"Users Score":1,"Answer":"Use Python's subprocess module and run your executable with Popen.\nThen you can send \"enter\" to the process with communicate.","Q_Score":4,"Tags":"python,operating-system","A_Id":45070691,"CreationDate":"2017-07-13T01:07:00.000","Title":"Python (os): how to simulate pressing Enter while executing an external application","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"Is there a way within Python to set an OS environment var that lives after the Python script has ended?.. So if I assign a var within the Python script and the script ends I want it to be available once I run a \"printenv\" via terminal. I've tried using the sh library using os.system but once the program finishes that var is not available via \"printenv\".","AnswerCount":2,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":264,"Q_Id":45090277,"Users Score":2,"Answer":"You cannot do that. Child processes inherit the environment of parent processes by default, but the opposite is not possible.\nIndeed, after the child process has started, the memory of all other processes is protected from it (segregation of context execution). This means it cannot modify the state (memory) of another process.\nAt best it can send IPC signals... \nMore technical note: you need to debug a process (such as with gdb) in order to control its internal state. Debuggers use special kernel APIs to read and write memory or control execution of other processes.","Q_Score":1,"Tags":"python","A_Id":45090303,"CreationDate":"2017-07-13T20:23:00.000","Title":"Set OS env var via python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a docker image that takes about 45min to build. As I'm working with it, I'm finding that I sometimes need to add python packages to it for the code I'm working on. I want to be able to install these packages such that they persist.\nWhat's the best way to achieve this?\nG","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":185,"Q_Id":45107012,"Users Score":2,"Answer":"docker builds the container image from cache if nothing is changed. When it finds a change in a line, it executes all the lines from that change again.
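A short sketch of the Popen/communicate approach from the answer above; the executable and file names are placeholders:

```python
import subprocess

# Launch the legacy tool and satisfy its final "press Enter" prompt by
# writing a newline to its stdin; communicate() also waits for it to exit.
proc = subprocess.Popen(
    ["legacy.exe", "input1.dat", "input2.dat"],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    stderr=subprocess.STDOUT,
)
output, _ = proc.communicate(input=b"\r\n")
```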
So, if you need to add libraries, just add more lines at the end of the Dockerfile.","Q_Score":0,"Tags":"python,docker,docker-compose","A_Id":45107280,"CreationDate":"2017-07-14T15:46:00.000","Title":"Best way to enhance a Docker image for small changes that need to be persisted","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"We have a front-end server that is executing dags directly with the dag API (DagBag(), get_dag() and then dag_run())\nDags run fine; the problem is that we could not find a way to execute such dags with specific arguments.\nThe closest solution was to use the Variable API, which uses set() and get() methods, but these variables are global and might have conflicts when working in concurrent operations that might use the same variable names.\nHow could we run a dag and set arguments available to its execution? We are mostly using PythonOperator.\nEdit 1:\nOur program is a Python Django front end server. So, we are speaking with Airflow through another Python program. This means we trigger dags through Python, hence, using DagBag.get_dag() to retrieve information from the airflow service. run_dag() does not have a way to pass direct parameters though","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1227,"Q_Id":45110781,"Users Score":0,"Answer":"If you use trigger_dag_run (through command line or from another dag) to trigger a dag, you can pass any json as payload.\nAnother option would be to store the argument list in a file, and store that file location as a Variable. The DAG can then pass this file location to the python operator, and the operator can then handle reading that file and parsing arguments from it.\nIf neither of these solutions works for your use case, giving more details about your dag and the kind of arguments might help.","Q_Score":0,"Tags":"python,airflow","A_Id":45120003,"CreationDate":"2017-07-14T20:02:00.000","Title":"How to submit parameters to a Python program in Airflow?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I am using a form of Lubuntu called GalliumOS (optimized for Chromebooks). I installed pip using $ sudo apt-get install python-pip. I then used pip install --user virtualenv and pip install virtualenv, and then when I tried to subsequently use virtualenv venv I experienced the message bash: virtualenv: command not found.\nBetween the pip installs above, I used pip uninstall virtualenv to get back to square one. The error remained after a reinstall.\nI read several other posts, but all of them seemed to deal with similar problems on MacOS. One that came close was installing python pip and virtualenv simultaneously. Since I had already installed pip, I didn't think that these quite applied to my issue. Why is pip install virtualenv not working this way on Lubuntu \/ GalliumOS?","AnswerCount":3,"Available Count":2,"Score":0.1325487884,"is_accepted":false,"ViewCount":4437,"Q_Id":45112111,"Users Score":2,"Answer":"Are you sure pip install is \"failing\"?
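A sketch of the trigger-with-payload route described in the Airflow answer above; the dag id, keys, and values here are hypothetical, and provide_context=True is assumed on the operator so the context reaches the callable:

```python
# Triggered with, e.g.:
#   airflow trigger_dag my_dag --conf '{"input_file": "data.csv"}'

def my_task(**context):
    # dag_run.conf holds the JSON payload for this specific run, so
    # concurrent runs do not collide the way global Variables would.
    conf = context["dag_run"].conf or {}
    input_file = conf.get("input_file", "default.csv")
    print("processing %s" % input_file)

# task = PythonOperator(task_id="my_task", python_callable=my_task,
#                       provide_context=True, dag=dag)
```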
To me, it sounds like the directory to which pip is installing modules on your machine is not in your PATH environment variable, so when virtualenv is installed, your computer has no idea where to find it when you just type in virtualenv.\nFind where pip is installing things on your computer, and then check if the directory where the virtualenv executable is placed is in your PATH variable (e.g. by doing echo $PATH to print your PATH variable). If it's not, you need to update your PATH variable by adding the following to your .bashrc or .bash_profile or etc.:\nexport PATH=\"PATH_TO_WHERE_PIP_PUTS_EXECUTABLES:$PATH\"","Q_Score":1,"Tags":"python,linux,pip,virtualenv,apt-get","A_Id":45112548,"CreationDate":"2017-07-14T21:53:00.000","Title":"bash: virtualenv: command not found \"ON Linux\"","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am using a form of Lubuntu called GalliumOS (optimized for Chromebooks). I installed pip using $ sudo apt-get install python-pip. I then used pip install --user virtualenv and pip install virtualenv, and then when I tried to subsequently use virtualenv venv I experienced the message bash: virtualenv: command not found.\nBetween the pip installs above, I used pip uninstall virtualenv to get back to square one. The error remained after a reinstall.\nI read several other posts, but all of them seemed to deal with similar problems on MacOS. One that came close was installing python pip and virtualenv simultaneously. Since I had already installed pip, I didn't think that these quite applied to my issue. Why is pip install virtualenv not working this way on Lubuntu \/ GalliumOS?","AnswerCount":3,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":4437,"Q_Id":45112111,"Users Score":2,"Answer":"What finally worked for me was this. I used \n$ sudo apt-get install python-virtualenv. \nI was then able to create a virtual environment using $ virtualenv venv. \nI was seeking to avoid using $ sudo pip install virtualenv, because of admonitions in other posts not to do this, and I agreed, because of experiences I'd had with subsequent difficulties when doing so.","Q_Score":1,"Tags":"python,linux,pip,virtualenv,apt-get","A_Id":45114004,"CreationDate":"2017-07-14T21:53:00.000","Title":"bash: virtualenv: command not found \"ON Linux\"","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am developing some scripts that I am planning to use in my LAB. Currently I installed Python and all the required modules only locally on the station that I am working with (the development station).\nI would like to be able to run the scripts that I develop on each of my LAB stations.\nWhat is the best practice to do that?\nWill I need to install the same environment, except for the IDE of course, on all my stations?
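One hedged way to locate the directory the answer above is talking about, when pip was invoked with --user: console scripts usually land under the user base. A minimal sketch:

```python
# Print the directory where `pip install --user` typically puts console
# scripts on Linux/macOS, so it can be added to PATH.
import os
import site

print(os.path.join(site.USER_BASE, "bin"))
```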
If yes, then what is the recommended way to do that?\nBy the way, is it mostly recommended to run those scripts from the command line screen (Windows) or is there any other elegant way to do that?","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":4897,"Q_Id":45129728,"Users Score":0,"Answer":"If you want to run a single script on multiple computers without installing Python everywhere you can \"compile\" the script to .exe using py2exe, cx_Freeze or PyInstaller. The \"compilation\" actually packs Python and libraries into the generated .exe or accompanying files.\nBut if you plan to run many Python scripts you'd better install Python everywhere and distribute your scripts and libraries as Python packages (eggs or wheels).","Q_Score":4,"Tags":"python,python-3.x,automation","A_Id":45130847,"CreationDate":"2017-07-16T14:34:00.000","Title":"What is the best practice for running a python script?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I think that this problem is due to the python version. I used Anaconda with python 3.6 for learning django. Now I have to work on google app engine using python2.7. I uninstalled anaconda. Now when I run \"python\" I get:\n\"Python 3.6.1 |Continuum Analytics, Inc.| (default, May 11 2017, 13:09:58)\".\nIs there a way to default back to python2.7?\nI'm on ubuntu 16.04\nEdit: the problem is not due to the python version","AnswerCount":3,"Available Count":1,"Score":-0.0665680765,"is_accepted":false,"ViewCount":3516,"Q_Id":45130283,"Users Score":-1,"Answer":"For those who are using Windows and still facing the same issue, the easiest way is to remove all the other python versions except version 2.7.x.","Q_Score":1,"Tags":"python,google-app-engine,ubuntu,anaconda,google-app-engine-python","A_Id":54668398,"CreationDate":"2017-07-16T15:28:00.000","Title":"When I run localserver on googleappengine error is \"File \"~\/dev_appserver.py\", line 102, in assert sys.version_info[0] == 2 AssertionError\"","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I am going to 3.6 now....\n1) I see for my worker servers ...in 2.7 I used gevent with great success for running one worker per core with N gevent threads per core...\n2) For my web dev..for low level..close to CGI as possible I used bottle with nginx\/uWSGI with the gevent loop\n3) For api's I used Flask with nginx\/uWSGI with the gevent loop\nMy api apps are screaming fast...and faster than nodejs for async calls to my backend databases...\nEnter 3.6 ... I am confused....\n1) It appears I can run my workers using asyncio since not dependent on a framework...so here I am OK\n2) It appears that gevent is available for 3.6 and I assume I can still use gevent for flask with the nginx\/uWSGI with the gevent loop \n3) uWSGI supports asyncio\n4) flask support for asyncio does not seem to be widely supported\n5) I refuse to use Django ...so don't even go there.. :)\nSo my question is that if I want to embrace asyncio with 3.6 is it bye-bye Flask in favor of e.g.
aiohttp or sanic?\nIn other words...those that build async api's for python 2.7, how did you transition to 3.6 while maintaining non-blocking calls?\nIt appears that I can still use gevent with flask with python 3 but this is a monkey patch to force async non-blocking calls, whereas asyncio is native and part of the STL...\nThanks","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":840,"Q_Id":45136647,"Users Score":0,"Answer":"It's better to go with asyncio, preferably aiohttp, which is more mainstream.","Q_Score":0,"Tags":"python,python-3.x,flask,python-asyncio,gevent","A_Id":45175533,"CreationDate":"2017-07-17T05:38:00.000","Title":"New to python 3.6 from 2.7 - is flask still relevant for async calls with gevent?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I am going to 3.6 now....\n1) I see for my worker servers ...in 2.7 I used gevent with great success for running one worker per core with N gevent threads per core...\n2) For my web dev..for low level..close to CGI as possible I used bottle with nginx\/uWSGI with the gevent loop\n3) For api's I used Flask with nginx\/uWSGI with the gevent loop\nMy api apps are screaming fast...and faster than nodejs for async calls to my backend databases...\nEnter 3.6 ... I am confused....\n1) It appears I can run my workers using asyncio since not dependent on a framework...so here I am OK\n2) It appears that gevent is available for 3.6 and I assume I can still use gevent for flask with the nginx\/uWSGI with the gevent loop \n3) uWSGI supports asyncio\n4) flask support for asyncio does not seem to be widely supported\n5) I refuse to use Django ...so don't even go there.. :)\nSo my question is that if I want to embrace asyncio with 3.6 is it bye-bye Flask in favor of e.g. aiohttp or sanic?\nIn other words...those that build async api's for python 2.7, how did you transition to 3.6 while maintaining non-blocking calls?\nIt appears that I can still use gevent with flask with python 3 but this is a monkey patch to force async non-blocking calls, whereas asyncio is native and part of the STL...\nThanks","AnswerCount":2,"Available Count":2,"Score":0.1973753202,"is_accepted":false,"ViewCount":840,"Q_Id":45136647,"Users Score":2,"Answer":"Flask + gevent works like a charm for python 3.6. There is no comparably close solution to Flask-Admin and other robust, time-tested libraries (like SQLAlchemy). For real applications I can get the same amount of rps from Flask as from aiohttp or sanic or whatever.","Q_Score":0,"Tags":"python,python-3.x,flask,python-asyncio,gevent","A_Id":45570572,"CreationDate":"2017-07-17T05:38:00.000","Title":"New to python 3.6 from 2.7 - is flask still relevant for async calls with gevent?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"Can anyone let me know how to set PYTHONPATH?\nDo we need to set it in the environment variables (is it system specific) or can we independently set the PYTHONPATH and use it to run any independent python application? \nI need to pick the module from a package available in a directory which is different from the directory from which I am running my application.
How do I include these packages in my application?","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":371,"Q_Id":45144525,"Users Score":1,"Answer":"I assume you are using Linux.\nBefore executing your application you can set PYTHONPATH=path before the execution command.\nAnother, more elegant way is using virtualenv, where you can have different packages for each application.\nBefore execution say workon env, and deactivate afterwards.\nPython 3 has virtualenv (venv) built in by default.","Q_Score":0,"Tags":"python,python-3.x,pythonpath","A_Id":45145568,"CreationDate":"2017-07-17T12:37:00.000","Title":"how to use PYTHONPATH for independent python application","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"When I try to create a Windows AppX with Advanced Installer, at the digital signing stage the program stops saying \"The application calls the SetDllDirectory function which is currently not supported by windows UWP applications\". A digitally unsigned exe or msi installer works perfectly but the AppX, as it is not digitally signed, does not run! Is there a workaround to this problem? I searched the Pyinstaller docs and also asked a question on the Pyinstaller Google groups. They did not even list my question.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":89,"Q_Id":45150223,"Users Score":1,"Answer":"according to Microsoft the SetDllDirectory function which is used by Pyinstaller is currently not supported by UWP, and according to Pyinstaller experts there is no provision to change this in the near future. So right now this is not the way to go. If there is anyone who knows something better, now is the time to speak up..","Q_Score":2,"Tags":"python-2.7,pyinstaller,advanced-installer","A_Id":45391696,"CreationDate":"2017-07-17T17:25:00.000","Title":"Digital signing halts with the error \"SetDllDirectory function in Pyinstaller is not currently supported by Windows UWP applications\"","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm using a queue trigger to pass in some data about a job that I want to run with Azure Functions (I'm using Python). Part of the data is the name of a file that I want to pull from blob storage. Because of this, declaring a file path\/name in an input binding doesn't seem like the right direction, since the function won't have the file name until it gets the queue trigger. \nOne approach I've tried is to use the azure-storage sdk, but I'm unsure of how to handle authentication from within the Azure Function.\nIs there another way to approach this?","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1135,"Q_Id":45155616,"Users Score":0,"Answer":"Storing secrets can (also) be done using App Settings.\nIn Azure, go to your Azure Functions App Service, then click \"Application Settings\". Then, scroll down to the \"App Settings\" list. This list consists of Key-Value pairs. Add your key, for example MY_CON_STR, and the actual connection string as the value.\nDon't forget to click save at this point.\nNow, in your application (your Function for this example), you can load the stored value using its key.
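Besides setting PYTHONPATH before launching, the effect asked about in the PYTHONPATH question above can also be had from inside the script itself; a minimal sketch, with a hypothetical directory and package name:

```python
import sys

# Make packages in a non-standard directory importable for this process
# only, without touching the global PYTHONPATH environment variable.
sys.path.insert(0, "/opt/shared/python-packages")

# import mypackage  # hypothetical package living in that directory
```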
For example, in python, you can use:\nos.environ['MY_CON_STR']\nNote that since the setting isn't saved locally, you have to execute it from within Azure. Unfortunately, Azure Functions applications do not contain a web.config file.","Q_Score":2,"Tags":"python,azure,azure-blob-storage,azure-functions","A_Id":45165615,"CreationDate":"2017-07-18T00:30:00.000","Title":"Access Blob storage without binding?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"Python 3.5.2 is installed, and I need to ensure it doesn't upgrade to 3.6 due to some other dependencies.\nWhen I install OpenCV 3 via brew (see below), brew invokes python3 and upgrades to Python 3.6, the latest build:\nbrew install opencv3 --with-python3\nHow can I install OpenCV 3 without changing my Python build?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":150,"Q_Id":45156570,"Users Score":0,"Answer":"Try using pip. \nYou can create a pip-reqs.txt file and pin things to a specific version.\nThen just run \n\npip install -r pip-reqs.txt\n\npip will then take care of installing opencv for you for the python version that is currently configured","Q_Score":0,"Tags":"python,opencv3.0","A_Id":45159452,"CreationDate":"2017-07-18T02:37:00.000","Title":"Install OpenCV 3 Without Upgrading Python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am coding Python3 in Vim and would like to enable autocompletion.\nI must use different computers without internet access. Every computer is running Linux with Vim preinstalled.\nI don't want to have something to install, I just want the simplest way to enable python3 completion (even if it is not the best completion), just something easy to enable from scratch on a new Linux computer.\nMany thanks","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":2519,"Q_Id":45168222,"Users Score":0,"Answer":"by default, Ctrl-P in insert mode does autocompletion with all the words already present in the file you're editing","Q_Score":4,"Tags":"python,linux,vim","A_Id":45168297,"CreationDate":"2017-07-18T13:29:00.000","Title":"Enable python autocompletion in Vim without any install","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a project for which I want to create a tar.gz with python setup.py install. \nProblem is, I only get egg files in my dist\/ folder when running python setup.py install. I need the project.tar.gz file so that I can easily make it installable from conda.\nHow do I make python setup.py install create a tar.gz (I do not need any egg files, really).\nWhat I ultimately want is a tar.gz archive showing on pypi with a download link and md5, which I used to get before the PYPI update.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":6897,"Q_Id":45168408,"Users Score":12,"Answer":"python setup.py sdist creates a .tar.gz source archive and doesn't create eggs.
That's probably what you want instead of python setup.py install.","Q_Score":5,"Tags":"python,pypi","A_Id":45175607,"CreationDate":"2017-07-18T13:36:00.000","Title":"Creating tar.gz in dist folder with python setup.py install","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I need to store json objects on the google cloud platform. I have considered a number of options:\n\nStore them in a bucket as a text (.json) file.\nStore them as text in datastore using json.dumps(obj).\nUnpack it into a hierarchy of objects in datastore.\n\nOption 1: Rejected because it has no organising principles other than the filename and cannot be searched across.\nOption 2: Is easy to implement, but you cannot search using dql.\nOption 3: Got it to work after a lot of wrangling with the key and parent key structures. While it is searchable, the resulting objects have been split up and held together by the parent key relationships. It is really ugly!\nIs there any way to store and search across a deeply structured json object on the google cloud platform - other than to set up mongodb in a compute instance?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1421,"Q_Id":45184482,"Users Score":0,"Answer":"I don't know what your exact searching needs are, but the datastore API allows for querying that is decently good, provided you give the datastore the correct indexes. Plus it's very easy to take the entities in the datastore and pull them back out as .json files.","Q_Score":1,"Tags":"json,python-2.7,google-app-engine,google-cloud-datastore","A_Id":45204510,"CreationDate":"2017-07-19T08:05:00.000","Title":"Storing json objects in google datastore","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":1},{"Question":"What will be the effect if I use the same python virtualenv in different OS systems?\n\nIf not possible, why?\n\nAlso, what can confirm whether the virtualenv can be used on a given OS?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":48,"Q_Id":45208088,"Users Score":0,"Answer":"the purpose of virtualenv is to separate your python environment, so one virtualenv may have, let's say, Django version 1.11 with Python 3, while another virtualenv may have Django 1.10 with Python 2.7. Using the same python virtualenv in a different OS system is doable; pip inside the virtualenv will handle the libraries, while python will handle OS differences when you install python 2.7 or python 3.0 inside the virtualenv.\nOne example use of these two virtualenvs is, let's say, your Nginx server using a different python environment for each domain (one nginx server can handle many domains).","Q_Score":0,"Tags":"python,virtualenv","A_Id":45208247,"CreationDate":"2017-07-20T07:26:00.000","Title":"What will be the effect if I use the same python virtualenv in different OS system","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"Inside a TFS Project, I store some set of python scripts inside a specific folder.
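For the Datastore question above, a middle ground between options 2 and 3 is to keep the JSON blob whole while duplicating just the fields you query on into indexed properties. A sketch with the App Engine Python 2 ndb client; the model and field names are hypothetical:

```python
from google.appengine.ext import ndb

class Document(ndb.Model):
    # Indexed copies of the fields you actually search on...
    category = ndb.StringProperty()
    created = ndb.DateTimeProperty(auto_now_add=True)
    # ...plus the full object, stored intact (JsonProperty is not indexed).
    payload = ndb.JsonProperty()

def save(obj):
    Document(category=obj.get("category"), payload=obj).put()

recent = Document.query(Document.category == "invoice").fetch(10)
```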
So I want to trigger a Jenkins build if a change is made to scripts in that particular folder and not others. Is there any way to do this?","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":683,"Q_Id":45246185,"Users Score":0,"Answer":"You could create a Jenkins job that checks the modified time (if the scripts are under version control, the commit id) periodically, and if there's a change since the last build, kick off a new build.","Q_Score":0,"Tags":"python,jenkins,tfs","A_Id":45246242,"CreationDate":"2017-07-21T20:25:00.000","Title":"Trigger a Jenkins build when a checkin is made on TFS","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"Inside a TFS Project, I store some set of python scripts inside a specific folder. So I want to trigger a Jenkins build if a change is made to scripts in that particular folder and not others. Is there any way to do this?","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":683,"Q_Id":45246185,"Users Score":0,"Answer":"You can use the Jenkins service hook in TFS:\n\nGo to the team project admin page\nSelect Service Hooks\nClick + to add a new service\nSelect Jenkins > Next\nSelect the Code checked in event and specify a filter path > Next\nSpecify the Jenkins base URL, User name, Password, Build job etc\u2026 > Finish\n\nAfter that, it will trigger a Jenkins build when a checkin is made on the specified path.","Q_Score":0,"Tags":"python,jenkins,tfs","A_Id":45294907,"CreationDate":"2017-07-21T20:25:00.000","Title":"Trigger a Jenkins build when a checkin is made on TFS","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"Sorry, I don't really have much technical background and I know it sounds like a confused question. However, I will try my best to explain what I want to do here.\nMy daily routine tasks involve lots of digital marketing data (very large, >20GB) from different types of platforms. As you can see, when I try to analyze these data, I need to aggregate them into a similar format. The tedious part of my job is that it normally involves lots of manual downloads, lots of data cleansing, and lots of uploads (I upload the cleaned data to Google Cloud Storage so I can use BigQuery!).\nI feel doing these tasks manually is extremely inefficient, and I think the only logical choice is to automate these tasks on Google Cloud Platform.\nAfter months of effort, I have managed to do these tasks in a semi-auto fashion: I wrote some python programs and made a schtask batch for the following:\n\nDownload (A python program makes API calls to download platform data to my local drive)\nCleansing (A python program that cleanses these data locally) \nUpload to Cloud Storage (A python program that uploads \"cleaned\" data using gsutil)\n\nAlthough it saved lots of my time, everything is still done locally on my desktop PC.\nHere are my real questions: I am sure there is a way to manage all these tasks (download, cleansing, upload) in Google Cloud without touching my local PC, so where should I start?\n\nHow can I run these Python programs on Google Cloud?
I know that I can deploy these Python programs in App Engine; however, to allow these programs to do their jobs, do I also need a compute engine, or would a simple deployment do the job?\nHow do I schedule tasks for these apps on Google Cloud?\nI know Cloud Storage is only one of many ways to store the data on GCP; since I have these data from different types of platforms, and they are all in different formats and metrics, what would be the best way to store these data efficiently on Google Cloud? CloudSQL, Datastore or BigTable?\n\nThanks!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":118,"Q_Id":45254244,"Users Score":1,"Answer":"We need more information and to see some code to be able to help you better, but in general the work you describe should be able to get done via http and you don't need any special C libraries; hence you could go with appengine and create task queues for your jobs. \nBe prepared that using only appengine can be trickier than having an operating system that you can leverage. There is no operating system with appengine once you've deployed; you must use only the functionality supplied in appengine. \nBut yes, as far as I can tell from the information you provide, an appengine app should be able to do the work you describe. Try writing some code, deploy the appengine app, and get back here and ask if you have specific trouble. \nYou can always add compute engine to your appengine project if you need it later.","Q_Score":2,"Tags":"python,google-app-engine,google-cloud-datastore,google-cloud-platform,google-cloud-storage","A_Id":45258584,"CreationDate":"2017-07-22T11:53:00.000","Title":"Google Cloud: Do we need a compute engine to run a deployed python code?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I need to update to the latest version of Appium; alas, in the Russian forums there is not much information on it. I have 1.1.0 beta","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":83,"Q_Id":45264225,"Users Score":0,"Answer":"If you are using the appium command line tool, then you can update to the desired version by using the below mentioned command:\nnpm update -g appium\nIf you want to install version 1.6.3:\nCommand: npm install -g appium@1.6.3\nAlso, if you want to update through the appium UI, then you can update by clicking on the 'Check for Updates' button available under the 'File' menu.","Q_Score":0,"Tags":"android,appium,python-appium","A_Id":45297308,"CreationDate":"2017-07-23T10:49:00.000","Title":"How to update appium 1.1.0 beta","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm developing with Apache and Django a web application where users interact with a data model (a C++ implementation wrapped into Python). \nTo avoid loading \/ saving data in a file or database after each user operation, I prefer to keep the data model in memory as long as the user is connected to the app. \nUntil now, data models are stored in a variable attached to the web service. As Python running under Apache sometimes has strange behavior, I'd prefer to execute user operations in a separate python process, today on the same server, maybe tomorrow on a different node.
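A sketch of the task-queue approach the GCP answer above suggests (App Engine Python 2 runtime); the handler URL and parameters are hypothetical:

```python
from google.appengine.api import taskqueue

# Enqueue one download/clean/upload job as a push task; App Engine will
# POST these params to the /tasks/clean handler in the background.
taskqueue.add(url="/tasks/clean",
              params={"platform": "adwords", "date": "2017-07-22"})
```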
\nI'm under the impression that distributed computing libraries (dispy, dask.distributed) do not enable keeping memory attached to a node. Does anyone have a solution \/ idea about what libraries I could use?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":65,"Q_Id":45277365,"Users Score":0,"Answer":"Simple answer: stop wasting your time trying to do complicated things that will never work right with your typical web server, and store your data in a database (doesn't have to be a mysql database FWIW). \nLonger answer: in a production environment you typically have several parallel (sub)processes handling incoming requests, and any of those processes can serve any user at any time, so keeping your data in memory in a process will never work reliably. This is by design, and it is a sane design, so trying to fight against it is just a waste of time and energy. Web server processes are not meant to persist data between requests; that's what your database is for, so use it.","Q_Score":0,"Tags":"python,django,apache,dispy,dask-distributed","A_Id":45279560,"CreationDate":"2017-07-24T09:50:00.000","Title":"Node process with dedicated memory in Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"We are using Airflow as a scheduler. I want to invoke a simple bash operator in a DAG. The bash script needs a password as an argument to do further processing.\nHow can I store a password securely in Airflow (config\/variables\/connection) and access it in the dag definition file?\nI am new to Airflow and Python so a code snippet will be appreciated.","AnswerCount":8,"Available Count":1,"Score":0.049958375,"is_accepted":false,"ViewCount":49127,"Q_Id":45280650,"Users Score":2,"Answer":"In this case I would use a PythonOperator from which you are able to get a Hook on your database connection using\nhook = PostgresHook(postgres_conn_id=postgres_conn_id). You can then call get_connection on this hook which will give you a Connection object from which you can get the host, login and password for your database connection. \nFinally, use for example subprocess.call(your_script.sh, connection_string) passing the connection details as a parameter.\nThis method is a bit convoluted but it does allow you to keep the encryption for database connections in Airflow. Also, you should be able to pull this strategy into a separate Operator class inheriting the base behaviour from PythonOperator but adding the logic for getting the hook and calling the bash script.","Q_Score":37,"Tags":"python,airflow","A_Id":45298761,"CreationDate":"2017-07-24T12:27:00.000","Title":"Store and access password using Apache airflow","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"Using MacPorts to try and install OLA for work. Came across this error when trying to build OLA after getting it from github:\n\nchecking for python module: google.protobuf... no\nconfigure: error: failed to find required module google.protobuf\n\nTried googling around to see if there was a solution, didn't find one.
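A minimal sketch of the hook-based strategy described in the Airflow answer above, keeping the password in Airflow's connection store instead of in plaintext; the connection id and script path are hypothetical:

```python
import subprocess

from airflow.hooks.base_hook import BaseHook

def run_script(**context):
    # Connections live in the Airflow metadata DB (encrypted when a
    # Fernet key is configured); "my_db" is a placeholder connection id.
    conn = BaseHook.get_connection("my_db")
    subprocess.check_call(
        ["/path/to/task.sh", conn.host, conn.login, conn.password])
```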
As the title implies, I'm on a MacBook Air running Sierra; the Python version is 2.7.10.","AnswerCount":2,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":593,"Q_Id":45288631,"Users Score":2,"Answer":"MacPorts has a port for OLA. Installing it will automatically install the required protobuf module.\nThe command is sudo port install ola\nOr is there some reason that you need to install a version from GitHub?","Q_Score":0,"Tags":"python,protocol-buffers,macports","A_Id":45290029,"CreationDate":"2017-07-24T19:31:00.000","Title":"macOS 10.12; Configure: error: failed to find required module google.protobuf","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"How do you run a python script from inside DXL\/DOORS? I attempted using the system() command but only got errors.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1617,"Q_Id":45290682,"Users Score":0,"Answer":"I would recommend trying your command string C:\\\\myPython.exe H:\\\\myscript.py in the command line first, to see if that works. If it does then it could be a permissions error (can't tell unless you have an error code to go with the CreateProcess error, as there are several types). \nIt might then be worth checking that you can run any kind of command in the command line from DOORS (system(\"notepad\") should do the trick).\nIf that doesn't work, running DOORS as an admin may fix your issue; you can do this by right-clicking doors.exe, going to Properties -> Compatibility and selecting \"Run this program as an administrator\".","Q_Score":0,"Tags":"python,ibm-doors","A_Id":45373289,"CreationDate":"2017-07-24T21:53:00.000","Title":"Running Python Script from inside DXL\/DOORS","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"None of the current error inquiries addressed my specific situation (it seems like a pretty well rounded problem). I am trying to install Pillow (the image module for Python). I have the correct version of the whl file, and the correct installation of Python 3.6. My paths have been confirmed.
\nSteps that I took:\n\nDownloaded the whl file \nOpened downloads in a command window \nTyped the pip path, install, and then my whl file.\n\nThen I got the error: \"Fatal error in launcher: Unable to create process using\"","AnswerCount":1,"Available Count":1,"Score":0.6640367703,"is_accepted":false,"ViewCount":4784,"Q_Id":45290798,"Users Score":4,"Answer":"use 'python -m pip install package-name' instead.","Q_Score":2,"Tags":"python,module,directory,pip,command-prompt","A_Id":45666971,"CreationDate":"2017-07-24T22:02:00.000","Title":"What does \"Fatal error in launcher: Unable to create process using\" mean?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"There are mainly two questions that I would like to ask, thanks in advance.\n(1) How can I open an external program in Linux?\nI know in Windows there is a command os.startfile() to open another program; the equivalent for Ubuntu is open(), but there's no response after I run the code, and the alternative one is subprocess.call(). This works well in Windows, but in Ubuntu it fails. Could someone provide a standard template I can use? (Similar to double-clicking the icon of a program)\n(2) How can I have the code open the terminal and write several commands in the terminal automatically using python?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":3833,"Q_Id":45318019,"Users Score":0,"Answer":"os.system can do this work. For example, if you want to run 'ls' under a shell: want_run = 'ls'; os.system('bash -c ' + want_run)","Q_Score":1,"Tags":"linux,python-2.7,ubuntu","A_Id":45318450,"CreationDate":"2017-07-26T05:18:00.000","Title":"How can I open an external program using Python in Ubuntu?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am currently trying to figure out how to set up using python 3 on my machine (Windows 10 pro 64-bit), but I keep getting stuck.\nI used the Python 3.6 downloader to install python, but whenever I try to use Command Prompt it keeps saying \"'python' is not recognized as an internal or external command, operable program or batch file\" as if I have not yet installed it.\nUnlike answers to previous questions, I have already added \";C:\\Python36\" to my Path environment variable, so what am I doing wrong? \nI am relatively new to python, but know how to use it on my Mac, so please let me know if I'm just fundamentally confused about something.","AnswerCount":3,"Available Count":3,"Score":0.0665680765,"is_accepted":false,"ViewCount":262,"Q_Id":45335812,"Users Score":1,"Answer":"In environment variables under Path, add your python path... you said you already did, so please ensure there is a semicolon separating it from the previous path entries.\nOnce added, save the environment variables dialog, then close all command prompt windows and reopen one. \nOnly then will the command prompt pick up your python config.\nMain thing: if you enter python, it may mean python 2.
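For the Ubuntu question further up, the closest analogue of os.startfile() is the desktop helper xdg-open; a hedged sketch using subprocess (the file path and terminal emulator are assumptions, and gnome-terminal's `--` syntax applies to recent versions):

```python
import subprocess

# Open a file or application with the desktop's default handler,
# much like double-clicking its icon (Linux desktops ship xdg-open).
subprocess.Popen(["xdg-open", "/home/user/report.pdf"])

# Part (2): run several shell commands in a visible terminal window.
subprocess.Popen(["gnome-terminal", "--", "bash", "-c", "ls; sleep 5"])
```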
\nFor python3 type, python3 then it should work","Q_Score":0,"Tags":"python","A_Id":45335902,"CreationDate":"2017-07-26T19:26:00.000","Title":"Downloading python 3 on windows","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am currently trying to figure out how to set up using python 3 on my machine (Windows 10 pro 64-bit), but I keep getting stuck.\nI used the Python 3.6 downloader to install python, but whenever I try to use Command Prompt it keeps saying \"'python' is not recognized as an internal or external command, operable program or batch file\" as if I have not yet installed it.\nUnlike answers to previous questions, I have already added \";C:\\Python36\" to my Path environment variable, so what am I doing wrong? \nI am relatively new to python, but know how to use it on my Mac, so please let me know if I'm just fundamentally confused about something.","AnswerCount":3,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":262,"Q_Id":45335812,"Users Score":0,"Answer":"Thanks everyone, I ended up uninstalling and then re-downloading python, and selecting the button that says \"add to environment variables.\" Previously, I typed the addition to Path myself, so I thought it might make a difference if I included it in the installation process instead. Then, I completely restarted my computer rather than just Command Prompt itself. I'm not sure which of these two things did it, but it works now!","Q_Score":0,"Tags":"python","A_Id":45361096,"CreationDate":"2017-07-26T19:26:00.000","Title":"Downloading python 3 on windows","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am currently trying to figure out how to set up using python 3 on my machine (Windows 10 pro 64-bit), but I keep getting stuck.\nI used the Python 3.6 downloader to install python, but whenever I try to use Command Prompt it keeps saying \"'python' is not recognized as an internal or external command, operable program or batch file\" as if I have not yet installed it.\nUnlike answers to previous questions, I have already added \";C:\\Python36\" to my Path environment variable, so what am I doing wrong? \nI am relatively new to python, but know how to use it on my Mac, so please let me know if I'm just fundamentally confused about something.","AnswerCount":3,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":262,"Q_Id":45335812,"Users Score":0,"Answer":"Why are you using command prompt? I just use the python shell that comes with IDLE. It\u2019s much simpler.\nIf you have to use command prompt for some reason, you\u2019re problem is probably that you need to type in python3. Plain python is what you use for using Python 2 in the command prompt.","Q_Score":0,"Tags":"python","A_Id":45338244,"CreationDate":"2017-07-26T19:26:00.000","Title":"Downloading python 3 on windows","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a subprocess in Python that I kick off that produces a log file. 
I need another subprocess to tail this log file as it is being generated and obtain the results at the end of my first subprocess (the thing that generates the log file).\nThis needs to be achieved on Windows boxes so I cannot use tail. I have looked into Get-Content but am not entirely sure whether I can make Get-Content persist and return only when my first subprocess (the log generator) finishes execution.\nHow would I achieve this?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":307,"Q_Id":45376218,"Users Score":0,"Answer":"I'm not sure, but I think you can use tail -f filename even with a log file inside a Windows batch","Q_Score":0,"Tags":"python,windows,powershell,logging,cmd","A_Id":45376375,"CreationDate":"2017-07-28T14:37:00.000","Title":"Tail a log file as subprocess in Python [Windows box]","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I've successfully used subprocess.check_output to call plenty of Windows programs.\nYet, I'm having trouble calling icacls. \nFrom cmd, this works:\ncmd>icacls \"C:\\my folder\" \/GRANT *S-1-1-0:F\nI've tried:\nsubprocess.check_output(['C:\\\\Windows\\\\System32\\\\icacls.exe','\"C:\\\\my folder\"','\/GRANT *S-1-1-0:F'],shell=True,stderr=subprocess.STDOUT)\nbut the return code is 123 (according to Microsoft, invalid file name).\nI've also tried (which also works from cmd)\nsubprocess.check_output(['C:\\\\Windows\\\\System32\\\\icacls.exe','\"C:\/my folder\"','\/GRANT *S-1-1-0:F'],shell=True,stderr=subprocess.STDOUT)\nbut the return code is also 123.\nAny idea?","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1557,"Q_Id":45421302,"Users Score":0,"Answer":"@Jean-Fran\u00e7ois Fabre gave me the clue:\nQuoting my target argument made sense, since it has blanks, so quoting is required when calling from cmd. However, subprocess passes each list element as its own argument, so the embedded quotes were forwarded literally to icacls; dropping them (passing 'C:\\\\my folder' rather than '\"C:\\\\my folder\"') fixes the error.\nThank you all for your help!!!","Q_Score":1,"Tags":"python,windows,cmd,icacls","A_Id":45433548,"CreationDate":"2017-07-31T16:46:00.000","Title":"calling windows' icacls from python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I can not install the Python package Gurobipy. I get the following output when I try it:\n\nrunning install\nrunning build running build_py running install_lib running\ninstall_egg_info Removing\n\/usr\/local\/lib\/python2.7\/dist-packages\/gurobipy-7.5.1.egg-info Writing\n\/usr\/local\/lib\/python2.7\/dist-packages\/gurobipy-7.5.1.egg-info\nremoving 'build\/lib.linux-x86_64-2.7' (and everything under it)\n'build\/bdist.linux-x86_64' does not exist -- can't clean it\n'build\/scripts-2.7' does not exist -- can't clean it removing 'build'\n\nI run Ubuntu 16.04, Python 2.7, and Gurobi 7.5.1. gurobi.sh is working fine...","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1240,"Q_Id":45437617,"Users Score":0,"Answer":"You can test gurobipy via the command from gurobipy import *.
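Since tail is not available on stock Windows, a pure-Python follow loop is one workaround for the log-tailing question above; a minimal sketch:

```python
import time

def follow(path):
    """Yield lines appended to `path`, roughly like `tail -f`."""
    with open(path, "r") as f:
        f.seek(0, 2)  # jump to the current end of the file
        while True:
            line = f.readline()
            if not line:
                time.sleep(0.25)  # nothing new yet; poll again shortly
                continue
            yield line

# for line in follow("worker.log"):  # "worker.log" is a placeholder name
#     print(line.rstrip())
```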
If that gives no error, then the installation worked, and you can ignore the messages from running setup.py.","Q_Score":0,"Tags":"python,linux,ubuntu,gurobi","A_Id":45439246,"CreationDate":"2017-08-01T12:19:00.000","Title":"Install gurobipy package - error","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"My Python program uses the terminal (system) command to perform tasks on files and scripts. I am going to convert this Python program into a Mac OS application and a Linux application using pyinstaller. I am going to pass the application installer file to my friends. However, I have the following questions.\nIf a script or file my program is trying to access doesn't have the proper permissions, will Python get an error?\nRunning some scripts or opening some files will require root permission. So is there an option that will prompt the user for the root (admin) password or run my application with root privileges?\nThanks","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":1860,"Q_Id":45453243,"Users Score":1,"Answer":"Try chmod 777 filename.py; this will give the file all rights for execution and editing. There are also other modes for chmod like 755, which will also work for your case.","Q_Score":0,"Tags":"python,linux,macos,root,file-permissions","A_Id":45453549,"CreationDate":"2017-08-02T06:38:00.000","Title":"Python application permission denied","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"Newbie to linux. I thought that apt-get purge is usually used to remove a pkg totally, but today it nearly crashed my whole system. I wanted to remove a previously installed python 3.4 distribution, but I wasn't sure which pkg it belongs to, so I used find \/usr -type f -name \"python3.4\" to find it. The command returned several lines; the first one was \/usr\/bin\/python3.4, so I then typed dpkg -S \/usr\/bin\/python3.4 to determine which pkg python3.4 belongs to. It returned python-minimal, so I typed sudo apt-get purge python-minimal, but then a lot of pkgs were removed, and also some installed. I'm totally confused, and I saw even the app store disappeared; a lot of the system was removed... Can someone help me?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":3227,"Q_Id":45473744,"Users Score":3,"Answer":"When you run apt purge or apt remove you are not only instructing apt to remove the named package, but any other package that depends on it. Of course apt doesn't perform that unexpected operation without first asking for your consent, so I imagine it showed the list of packages that it was going to remove, and when you pressed Y it removed all of them.\nSo, to undo the mess, if you still have the window where you ran the purge, then check which packages it told you it was going to remove, and manually apt install them.
If you don't have the list around, then you need to manually install every package that is not working properly.\nIf it is the window manager that got damaged, try apt-get install ubuntu-gnome-desktop or the appropriate package for your distribution\/window manager.\nRule of thumb when deleting\/updating packages: always read the list of packages affected; sometimes there is unexpected stuff.","Q_Score":0,"Tags":"python,linux","A_Id":45473958,"CreationDate":"2017-08-03T02:18:00.000","Title":"What did apt-get purge do?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I would like a python GUI to have an OCaml process in the background. I would like to keep a single session throughout the program's lifetime, and depending on user inputs, call some OCaml commands and retrieve OCaml's output. Some OCaml variables and structures may be defined along the way so I would like to maintain a single ongoing session.\nMy solution was to hold an OCaml toplevel process using popen and interact with its stdin and stdout. This works poorly for me, for several reasons:\n1. I don't know when the OCaml calculation is done and can't tell if its output is complete or there is more to come (especially so if the evaluation takes some time, and if multiple OCaml commands were invoked).\n2. I have no inherent way of telling whether the OCaml command ran smoothly or maybe there were OCaml warnings or errors.\n3. I lose the structure of OCaml's output. For example, if the output spreads over several lines, I can't tell which lines were broken due to line size, and which were originally separate lines. \nI know there are some discussions and some packages for combining python with OCaml, but they all run python commands from OCaml, and I need the opposite.","AnswerCount":3,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":345,"Q_Id":45479510,"Users Score":1,"Answer":"As a complementary remark, it is perfectly possible to run a separate toplevel process, send it input phrases, and read the corresponding output. The trick to detect the end of a toplevel output is to add a guard phrase after every input phrase: rather than sending just f ();; to the toplevel process, one can send f ();; \"end_of_input\";; and then watch for the toplevel output corresponding to \"end_of_input\";; (aka - : string = \"end_of_input\").
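A rough Python sketch of this guard-phrase technique, driving one long-lived toplevel so definitions persist between calls (the prompt characters and startup banner are glossed over for brevity, and `text=True` assumes Python 3.7+):

```python
import subprocess

SENTINEL = '"end_of_input";;'
SENTINEL_ECHO = '- : string = "end_of_input"'

# One toplevel for the whole session, so `let` bindings survive.
top = subprocess.Popen(["ocaml"], stdin=subprocess.PIPE,
                       stdout=subprocess.PIPE,
                       stderr=subprocess.STDOUT, text=True, bufsize=1)

def send(phrase):
    top.stdin.write(phrase + "\n" + SENTINEL + "\n")
    top.stdin.flush()
    lines = []
    while True:
        line = top.stdout.readline()
        if SENTINEL_ECHO in line:  # toplevel has finished our phrase
            return "".join(lines)
        lines.append(line)

print(send("let x = 1 + 2;;"))
```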
My experience is that errors and warnings are generally quite easy to detect or parse from the toplevel output; so the only missing point is the formatting of the code.","Q_Score":3,"Tags":"python,ocaml,integration","A_Id":45495926,"CreationDate":"2017-08-03T08:59:00.000","Title":"Integrating OCaml in python - How to hold an ocaml session from python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"When I try to execute Jupyter Kernel Gateway on Docker, I get the below error:\n2017-08-03T11:00:51.732015249Z [KernelGatewayApp] Kernel shutdown: 27351426-2078-4101-b3f3-86da41d6e141\n2017-08-03T11:00:51.735665285Z Traceback (most recent call last):\n2017-08-03T11:00:51.735690921Z File \"\/opt\/conda\/bin\/jupyter-kernelgateway\", line 11, in \n2017-08-03T11:00:51.735699387Z sys.exit(launch_instance())\n2017-08-03T11:00:51.735705691Z File \"\/opt\/conda\/lib\/python3.4\/site-packages\/jupyter_core\/application.py\", line 267, in launch_instance\n2017-08-03T11:00:51.735711902Z return super(JupyterApp, cls).launch_instance(argv=argv, **kwargs)\n2017-08-03T11:00:51.735717618Z File \"\/opt\/conda\/lib\/python3.4\/site-packages\/traitlets\/config\/application.py\", line 591, in launch_instance\n2017-08-03T11:00:51.735723686Z app.initialize(argv)\n2017-08-03T11:00:51.735731330Z File \"\/opt\/conda\/lib\/python3.4\/site-packages\/kernel_gateway\/gatewayapp.py\", line 212, in initialize\n2017-08-03T11:00:51.735737468Z self.init_configurables()\n2017-08-03T11:00:51.735742836Z File \"\/opt\/conda\/lib\/python3.4\/site-packages\/kernel_gateway\/gatewayapp.py\", line 241, in init_configurables\n2017-08-03T11:00:51.735748923Z self.kernel_pool = KernelPool(self.prespawn_count, self.kernel_manager)\n2017-08-03T11:00:51.735755996Z File \"\/opt\/conda\/lib\/python3.4\/site-packages\/kernel_gateway\/services\/kernels\/pool.py\", line 27, in init\n2017-08-03T11:00:51.735762895Z kernel_id = kernel_manager.start_kernel(kernel_name=self.kernel_manager.parent.seed_notebook['metadata']['kernelspec']['name'])\n2017-08-03T11:00:51.735772782Z File \"\/opt\/conda\/lib\/python3.4\/site-packages\/kernel_gateway\/services\/kernels\/manager.py\", line 71, in start_kernel\n2017-08-03T11:00:51.735779471Z raise RuntimeError('Error seeding kernel memory')\n2017-08-03T11:00:51.735785063Z RuntimeError: Error seeding kernel memory","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":398,"Q_Id":45482450,"Users Score":1,"Answer":"This error is due to compile errors in the respective notebook.\nTo try, I commented everything and just annotated a single cell with a GET request.\nIt worked!","Q_Score":1,"Tags":"python-3.x,docker,jupyter-notebook","A_Id":51353255,"CreationDate":"2017-08-03T11:07:00.000","Title":"RuntimeError in Jupyter Kernel Gateway on Docker","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have to SSH into 120 machines and make a dump of a table in databases and export this back on to my local machine every day, (same database structure for all 120 databases).\nThere isn't a field in the database that I can extract the name from to be able to identify which one it comes from, it's vital that it can be identified, as it's 
for data analysis.\nI'm using the Python tool Fabric to automate the process and export the CSV on to my machine..\n\nfab -u PAI -H 10.0.0.35,10.0.0.XX,10.0.0.0.XX,10.0.0.XX -z 1\n cmdrun:\"cd \/usr\/local\/mysql\/bin && .\/mysql -u root -p -e 'SELECT *\n FROM dfs_va2.artikel_trigger;' >\n \/Users\/admin\/Documents\/dbdump\/dump.csv\"\n download:\"\/Users\/johnc\/Documents\/Imports\/dump.csv\"\n\nAbove is what I've got working so far but clearly, they'll all be named \"dump.csv\" is there any awesome people out there can give me a good idea on how to approach this?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":62,"Q_Id":45508137,"Users Score":0,"Answer":"You can try to modify your command as follow:\n\nmysql -uroot -p{your_password} -e 'SELECT * FROM dfs_va2.artikel_trigger;' > \/Users\/admin\/Documents\/dbdump\/$(hostname)_dump.csv\" download:\"\/Users\/johnc\/Documents\/Imports\/$(hostname)_dump.csv\"\n\nhostname returns current machine name so all your files should be unique (of course if machines have unique names)\nAlso you don't need to navigate to \/bin\/mysql every time, you can use simply mysql or absolute path \/usr\/local\/mysql\/bin\/mysql","Q_Score":0,"Tags":"python,automation,fabric,devops","A_Id":45508534,"CreationDate":"2017-08-04T13:28:00.000","Title":"Best way to automate file names of multiple databases","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I've installed python via homebrew. It is located in:\n\/usr\/local\/Cellar\/python\/2.7.13_1\nwhich should be right.\nNow I am trying to use this python installation, but \"which python\" only shows the macOS python installation at \"\/usr\/bin\/python\". So i am checking the $PATH and I see that everything should be ok.\n\"echo $PATH\" results in this: \/usr\/local\/bin:\/usr\/bin:\/bin:\/usr\/sbin:\/sbin\nI restarted the terminal window and this occurs every time. I also did the\n\"brew doctor\" and no warnings appeared. 
\nWhat I am using:\nStandard macOS Terminal-App\nHas anybody a clue how this problem could be solved?","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":434,"Q_Id":45530573,"Users Score":0,"Answer":"Update $PATH in your .bashrc file.\nExample add the following line in your .bashrc\nexport PATH=\/usr\/local\/Cellar\/python\/2.7.13_1\/bin:$PATH","Q_Score":2,"Tags":"python,macos,homebrew","A_Id":45530632,"CreationDate":"2017-08-06T09:25:00.000","Title":"Homebrew macOS - Python installation","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"How can I make a swi-prolog program that executes a Python file score.py and gets the output?\nI've read about process_create\/3 and exec\/1 but I can't find much documentation","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":403,"Q_Id":45533640,"Users Score":2,"Answer":"You need to use the stdout\/1 and stderr\/1 options of process_create\/3.\nFor example, here is a simple predicate that simply copies the process output to standard\u00a0output:\n\noutput_from_process(Exec, Args) :-\n process_create(Exec, Args, [stdout(pipe(Stream)),\n stderr(pipe(Stream))]),\n copy_stream_data(Stream, current_output),\n % the process may terminate with any exit code.\n catch(close(Stream), error(process_error(_,exit(_)), _), true).\n\nYou can adapt the copy_stream_data\/2 call to write the output to any other stream.","Q_Score":4,"Tags":"python,prolog,swi-prolog","A_Id":45533749,"CreationDate":"2017-08-06T15:36:00.000","Title":"How to get output value from python script executed in swi-prolog","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"Please any one can help how to install python or update with new version on centos 6.5.\ni am getting below error while installing through .tat.gz after running make command \nmake: *** No targets specified and no makefile found. Stop.\nkindly any one can help ..\nRegards,\nSriram","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":98,"Q_Id":45559360,"Users Score":0,"Answer":"Make sure you have extracted the archive and are in the directory where the Makefile resides.","Q_Score":0,"Tags":"python,linux,python-2.7,redhat,centos6","A_Id":45559463,"CreationDate":"2017-08-08T04:34:00.000","Title":"Not able to install python new version 2.7.8 on centos","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am trying to install Python 3.6.2 on a windows vps I have but I need admin rights to do it.\nI tried a various different methods but none of them worked.\nThere is no MSI version for python 3 so that does not work either.\nAny ideas?","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":4368,"Q_Id":45585796,"Users Score":0,"Answer":"You could try the embeddable version (Really a zipped portable version), but I'm not sure about dependencies management (i.e. 
pip) and path variables and whatnot.","Q_Score":1,"Tags":"python","A_Id":45588633,"CreationDate":"2017-08-09T08:46:00.000","Title":"Install Python 3.6.2 on Windows without admin rights","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"My book states:\n\nEvery program that runs on your computer has a current working directory, or cwd. Any filenames or paths that do not begin with the root folder are assumed to be under the current working directory\n\nAs I am on OSX, my root folder is \/. When I type in os.getcwd() in my Python shell, I get \/Users\/apple\/Documents. Why am I getting the Documents folder in my cwd? Is it saying that Python is using Documents folder? Isn't there any path heading to Python that begins with \/ (the root folder)? Also, does every program have a different cwd?","AnswerCount":5,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":16071,"Q_Id":45591428,"Users Score":0,"Answer":"os.getcwd() has nothing to do with OSX in particular. It simply returns the directory\/location of the source-file. If my source-file is on my desktop it would return C:\\Users\\Dave\\Desktop\\ or let say the source-file is saved on an external storage device it could return something like G:\\Programs\\. It is the same for both unix-based and Windows systems.","Q_Score":7,"Tags":"python,working-directory","A_Id":45591819,"CreationDate":"2017-08-09T13:00:00.000","Title":"What exactly is current working directory?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"My book states:\n\nEvery program that runs on your computer has a current working directory, or cwd. Any filenames or paths that do not begin with the root folder are assumed to be under the current working directory\n\nAs I am on OSX, my root folder is \/. When I type in os.getcwd() in my Python shell, I get \/Users\/apple\/Documents. Why am I getting the Documents folder in my cwd? Is it saying that Python is using Documents folder? Isn't there any path heading to Python that begins with \/ (the root folder)? Also, does every program have a different cwd?","AnswerCount":5,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":16071,"Q_Id":45591428,"Users Score":0,"Answer":"Python is usually (except if you are working with virtual environments) accessible from any of your directory. You can check the variables in your path and Python should be available. So the directory you get when you ask Python is the one in which you started Python. 
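A quick way to see this for yourself (a minimal illustration; the printed paths depend on where the interpreter is launched):

    import os
    print(os.getcwd())   # the directory your shell was in when Python started
    os.chdir('..')       # a process may change its own cwd while running
    print(os.getcwd())   # now the parent directory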
Change directory in your shell before starting Python and you will see the cwd change accordingly.","Q_Score":7,"Tags":"python,working-directory","A_Id":45591529,"CreationDate":"2017-08-09T13:00:00.000","Title":"What exactly is current working directory?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have installed Git Bash, python 3.6 and Anaconda for the course which requires me to use Unix commands within Jupyter, such as !ls, !cat, !head etc.\nHowever, for each of these commands I get (e.g.):\n\n'ls' is not recognized as an internal or external command,\n operable program or batch file.\n\nI am using Windows 10. What can I do to be able to proceed with the course? \nThanks!","AnswerCount":3,"Available Count":2,"Score":0.1325487884,"is_accepted":false,"ViewCount":6860,"Q_Id":45599187,"Users Score":2,"Answer":"If you are running Jupyter Notebook on Windows, run conda install posix.\nIt worked for me.","Q_Score":6,"Tags":"python,windows,git,jupyter","A_Id":63553489,"CreationDate":"2017-08-09T19:31:00.000","Title":"Python - Unix commands not recognized in Jupyter","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have installed Git Bash, python 3.6 and Anaconda for the course which requires me to use Unix commands within Jupyter, such as !ls, !cat, !head etc.\nHowever, for each of these commands I get (e.g.):\n\n'ls' is not recognized as an internal or external command,\n operable program or batch file.\n\nI am using Windows 10. What can I do to be able to proceed with the course? \nThanks!","AnswerCount":3,"Available Count":2,"Score":1.0,"is_accepted":false,"ViewCount":6860,"Q_Id":45599187,"Users Score":11,"Answer":"Please don't use !ls as mentioned in the course. \nUse %ls in the jupyter notebook and it works fine.\nHope it helps.","Q_Score":6,"Tags":"python,windows,git,jupyter","A_Id":45751003,"CreationDate":"2017-08-09T19:31:00.000","Title":"Python - Unix commands not recognized in Jupyter","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"Imagine that I've written a celery task, and put the code on the server; however, when I want to send the task to the server, I need to reuse the code written before.\nSo my question is: are there any methods to separate the code between server and client?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":222,"Q_Id":45608490,"Users Score":0,"Answer":"Try a web server like flask that forwards requests to the celery workers. Or try a server that reads from a queue (SQS, AMQP,...) and does the same.\nNo matter the solution you choose, you end up with 2 services: the celery worker itself and the \"server\" that calls the celery tasks. 
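A minimal sketch of that layout (module name, broker URL and task body are illustrative, not taken from the question):

    # tasks.py -- the one module both services import
    from celery import Celery

    app = Celery('tasks', broker='amqp://localhost')

    @app.task
    def process(job_id):
        return 'processed %s' % job_id

    # caller side:  from tasks import process; process.delay(42)
    # worker side:  celery -A tasks worker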
They both share the same code but are launched with different command lines.\nAlternately, if the task code is small enough, you could just import the git repository in your code and call it from there","Q_Score":1,"Tags":"python,celery","A_Id":45608632,"CreationDate":"2017-08-10T08:36:00.000","Title":"How to separate celery code into server and client side?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I am new to Oozie. I have a couple of questions on Oozie job scheduling.\n\nCan we get a list of jobs which are scheduled on the Oozie server for everyday runs using some programmatic approach? Consider that there are multiple jobs scheduled to run every day, maybe for the next couple of months or a year.\nHow do we know programmatically that a scheduled job failed to run at day end, for reporting purposes?\nCan we rank Oozie scheduled jobs on the basis of their execution time?\n\nThanks much for any help on this.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":170,"Q_Id":45633650,"Users Score":0,"Answer":"You can use the Cloudera Hue or Apache Ambari tools, which will give you all the info about Oozie.\nIf you are looking for more, you can write your own program using some of the APIs exposed by Oozie.","Q_Score":0,"Tags":"python,hadoop,oozie,oozie-workflow","A_Id":45682834,"CreationDate":"2017-08-11T11:03:00.000","Title":"Apache Oozie workflows","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"As a sys admin I am trying to simulate a user on a virtual machine for testing log file monitoring. \nThe simulated user will be automated to perform various tasks that should show up in bash history, \"ls\", \"cd\", \"touch\" etc. It is important that they show up in bash history because the bash history is logged. \nI have thought about writing directly to the bash history but would prefer to more accurately simulate a user's behavior. The reason being that the bash history is not the only log file being watched and it would be better if logs for the same event remained synchronized. \nDetails\nI am working on CentOS Linux release 7.3.1611\nPython 2.7.5 is installed \nI have already tried to use pexpect.run('ls') or pexpect.spawn('ls'); 'ls' does not show up in the bash history with either command.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1718,"Q_Id":45646104,"Users Score":0,"Answer":"The short answer appears to be no, you cannot do this with pexpect. \nAuditd is a possible alternative for tracking user input, but I have not figured out how to get it to record commands such as 'ls' and 'cd' because they do not call the system execve() command. The best workaround I have found is to use the script command, which opens another interactive terminal where every command entered in the prompt is recorded. 
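Once recorded, a rough sketch of pulling the typed commands back out of that log (it assumes the prompt is literally '$ '; real sessions also contain control characters that may need stripping):

    # scan a log produced by script(1) for the commands a user typed
    with open('typescript') as log:
        for line in log:
            if line.startswith('$ '):        # naive prompt detection
                print(line[2:].rstrip())     # the command that followed the prompt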
You can use the file the script command outputs to (by default is typescript) to log all user commands.","Q_Score":0,"Tags":"python,linux,bash,shell,pexpect","A_Id":45718528,"CreationDate":"2017-08-12T03:05:00.000","Title":"Is it possible to use pexpect to generate bash shell commands that will show up in bash history?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I ran pip install pyaudio in my terminal and got this error:\n\nCommand \"\/home\/oliver\/anaconda3\/bin\/python -u -c \"import setuptools,\n tokenize;file='\/tmp\/pip-build-ub9alt7s\/pyaudio\/setup.py';f=getattr(tokenize, 'open', open)(file);code=f.read().replace('\\r\\n',\n '\\n');f.close();exec(compile(code, file, 'exec'))\" install\n --record \/tmp\/pip-e9_md34a-record\/install-record.txt --single-version-externally-managed --compile\" failed with error code 1 in \/tmp\/pip-build-ub9alt7s\/pyaudio\/\n\nSo I ran sudo apt-install python-pyaudio python3-pyaudio\nwhich seemed to work. \nThen in jupyter:\nimport pyaudio\nerror: \n\nModuleNotFoundError: No module named 'pyaudio'\n\nCan anyone help me work out this problem? I am not familiar with Ubuntu and it's commands paths etc as I've only been using it a few months.\nIf you need more information, let me know what, and how. Thanks","AnswerCount":3,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":5269,"Q_Id":45676247,"Users Score":0,"Answer":"Check in the documentation of pyaudio if it is compatible with your python version\nSome modules which are not compatible may be installed without issues, yet still won't work when trying to access them","Q_Score":1,"Tags":"python,python-3.x,pyaudio","A_Id":45676889,"CreationDate":"2017-08-14T13:57:00.000","Title":"ModuleNotFoundError: No module named 'pyaudio'","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I ran pip install pyaudio in my terminal and got this error:\n\nCommand \"\/home\/oliver\/anaconda3\/bin\/python -u -c \"import setuptools,\n tokenize;file='\/tmp\/pip-build-ub9alt7s\/pyaudio\/setup.py';f=getattr(tokenize, 'open', open)(file);code=f.read().replace('\\r\\n',\n '\\n');f.close();exec(compile(code, file, 'exec'))\" install\n --record \/tmp\/pip-e9_md34a-record\/install-record.txt --single-version-externally-managed --compile\" failed with error code 1 in \/tmp\/pip-build-ub9alt7s\/pyaudio\/\n\nSo I ran sudo apt-install python-pyaudio python3-pyaudio\nwhich seemed to work. \nThen in jupyter:\nimport pyaudio\nerror: \n\nModuleNotFoundError: No module named 'pyaudio'\n\nCan anyone help me work out this problem? I am not familiar with Ubuntu and it's commands paths etc as I've only been using it a few months.\nIf you need more information, let me know what, and how. 
Thanks","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":5269,"Q_Id":45676247,"Users Score":0,"Answer":"if you are using windows then these command on the terminal:\npip install pipwin\npipwin install pyaudio","Q_Score":1,"Tags":"python,python-3.x,pyaudio","A_Id":59344854,"CreationDate":"2017-08-14T13:57:00.000","Title":"ModuleNotFoundError: No module named 'pyaudio'","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"Is there a way to execute a Python script, yet stay in the Python shell thereafter, so that variable values could be inspected and such?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":184,"Q_Id":45676676,"Users Score":0,"Answer":"Metaphox nailed it:\n\nI think you are looking for python -i .\/file.py, where the -i flag\n will enter interactive mode after executing the file. If you are\n already in the console, then execfile. \u2013 Metaphox 2 mins ago\n\nBut I want to thank for the other suggestions as well, which go beyond the original question yet are useful!","Q_Score":0,"Tags":"python","A_Id":45677292,"CreationDate":"2017-08-14T14:20:00.000","Title":"Run Python console after script execution in same environment","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm trying to play with a security tool using scapy to spoof ASCII characters in a UDP checksum. I can do it, but only when I hardcode the bytes in Hex notation. But I can't convert the ASCII string word into binary notation. This works to send the bytes of \"He\" (first two chars of \"Hello world\"):\nsr1(IP(dst=server)\/UDP(dport=53, chksum=0x4865)\/DNS(rd=1,qd=DNSQR(qname=query)),verbose=0)\nBut whenever I try to use a variable of test2 instead of 0x4865, the DNS packet is not transmitted over the network. This should create binary for this ASCII:\ntest2 = bin(int(binascii.hexlify('He'),16))\nsr1(IP(dst=server)\/UDP(dport=53, chksum=test2)\/DNS(rd=1,qd=DNSQR(qname=query)),verbose=0)\nWhen I print test2 variable is shows correct binary notation representation.\nHow do I convert a string such as He so that is shows in the checksum notation accepted by scapy, of 0x4865 ??","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":179,"Q_Id":45677057,"Users Score":0,"Answer":"I was able to get this working by removing the bin(). 
This works:\ntest2 = int(binascii.hexlify('He'),16)","Q_Score":0,"Tags":"python,scapy","A_Id":45693049,"CreationDate":"2017-08-14T14:39:00.000","Title":"Spoofing bytes of a UDP checksum over network","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"While running odoo at the first time it shows ImportError: No module named openerp\n\nC:\\Python27\\python.exe E:\/workspaces\/odoo-10.0-20170812\/odoo.py -c\nE:\\workspaces\\odoo-10.0-20170812\\odoo.conf Traceback (most recent call\n last): \nFile \"E:\/workspaces\/odoo-10.0-20170812\/odoo.py\", line 160, in\n \n main()\n File \"E:\/workspaces\/odoo-10.0-20170812\/odoo.py\", line 156, in main\nimport openerp\nImportError: No module named openerp\nProcess finished with exit code 1","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":2238,"Q_Id":45678932,"Users Score":1,"Answer":"Odoo 10 source code does not contain the .\/odoo.py file, it is probably from <=9.0, where the now odoo module was named openerp. You should've got the wrong source, or mixed up the two.","Q_Score":2,"Tags":"python,django,openerp","A_Id":45716439,"CreationDate":"2017-08-14T16:26:00.000","Title":"ImportError: No module named openerp","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"While running odoo at the first time it shows ImportError: No module named openerp\n\nC:\\Python27\\python.exe E:\/workspaces\/odoo-10.0-20170812\/odoo.py -c\nE:\\workspaces\\odoo-10.0-20170812\\odoo.conf Traceback (most recent call\n last): \nFile \"E:\/workspaces\/odoo-10.0-20170812\/odoo.py\", line 160, in\n \n main()\n File \"E:\/workspaces\/odoo-10.0-20170812\/odoo.py\", line 156, in main\nimport openerp\nImportError: No module named openerp\nProcess finished with exit code 1","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":2238,"Q_Id":45678932,"Users Score":4,"Answer":"import openerp won't work in Odoo 10 because openerp is replaced with odoo. Upto version 9 it was openerp but in 10 it changed.\nSo try:\nimport odoo instead of import openerp.\nOdoo 10 source code does not contain an import openerp anywhere, maybe you have downloaded from the wrong source.","Q_Score":2,"Tags":"python,django,openerp","A_Id":45686661,"CreationDate":"2017-08-14T16:26:00.000","Title":"ImportError: No module named openerp","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I am using python virtualenv to run robot framework in linux.\nMy doubt is about the system date for virtualenv, is it possible to change the date of virtualenv with out changing the OS level system date.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":147,"Q_Id":45686418,"Users Score":1,"Answer":"No, you cannot change the date in a virtualenv separate from the system time. 
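You can verify this quickly; the result is identical inside and outside any env:

    import datetime
    print(datetime.datetime.now())   # always the host clock, virtualenv or not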
A virtualenv is nothing more than environment variables and symbolic links to some folders, it is not an isolated system.","Q_Score":1,"Tags":"python,virtualenv,robotframework","A_Id":45724271,"CreationDate":"2017-08-15T04:17:00.000","Title":"Python virtualenv date different from OS","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I want to find the command line arguments that my program was called with, i.e. sys.argv, but I want to do that before Python makes sys.argv available. This is because I'm running code in usercustomize.py which is imported by the site module, which is imported before Python populates sys.argv. (If you're curious, the reason I'm doing this is to start my debugger without changing my program code.)\nIs there any way to find the command line arguments without sys.argv?\nAlso: The solution needs to work for Python 2.6 :(","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":299,"Q_Id":45692841,"Users Score":0,"Answer":"Nope. Python uses such libraries as sys, os, and etc to get access to system variables and os functional. It can't do it with just core functions. So, in any case, you need to import sys.","Q_Score":7,"Tags":"python,python-2.6","A_Id":52295075,"CreationDate":"2017-08-15T12:28:00.000","Title":"Python: Find `sys.argv` before the `sys` module is loaded","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have been working through the initial tutorial and ran into a load of issues with my anaconda install using python 2.7. In the end it wouldn't launch the server.\nAnyway, I decided to change up on my machine to python3. That said, I am now getting strange results which are:\nIf I use the terminal command $python -m django --version I get the following:\n\"..\/Contents\/MacOS\/Python: No module named django\"\nIf I change to \"$python3 -m django --version\" terminal gives me back: \"1.11.4\"\nNow, when I am in the tutorial and starting again from the beginning I do the following: \"$django-admin startproject mysite\"\nThis seemed to work.\nHowever, when I tried: \"$python manage.py runserver\" I get the following:\nTraceback (most recent call last):\n File \"manage.py\", line 17, in \n \"Couldn't import Django. Are you sure it's installed and \"\nImportError: Couldn't import Django. Are you sure it's installed and available on your PYTHONPATH environment variable? Did you forget to activate a virtual environment?\nIf I change to include 3, so \"$python3 manage.py runserver\" all is well.\nMy question is do I need to always use python3 in every command now? I does not say that in the tutorial.\nMy Mac OSx has a native install of 2.7 which I believe is required by my machine for other apps dependency.\nAny help would be really appreciated! I am sure given I am new to python I am being a complete moron!","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":10200,"Q_Id":45700003,"Users Score":0,"Answer":"Yes. 
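A quick check of which interpreter a given command actually launches:

    import sys
    print(sys.executable)   # full path of the interpreter that is running
    print(sys.version)      # e.g. 2.7.x under python, 3.6.x under python3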
Python 3's binaries are installed with a suffix of \"3\", so python will launch a Python 2 interpreter and you need to run python3 to specifically use Python 3.","Q_Score":2,"Tags":"python,django,python-2.7,python-3.x","A_Id":45700170,"CreationDate":"2017-08-15T19:20:00.000","Title":"\"$python manage.py runserver\" not working. Only \"python3 manage.py runserver\"","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"As we all know, Apple ships OSX with Python, but it locks it away.\nThis forces me, and anyone else who uses Python, to install another version and start the painful process of installing with pip with 100 tricks and cheats.\nNow, I would like to understand how to do this right; and sorry, but I can't go with the route of the virtualenv, due to the fact that I run this for a build server running Jenkins, and I have no idea how to set that up correctly.\nCould you please clarify these for me?\n\nHow do you tell OSX to run the python from brew, instead of the system one?\nWhere is the official python living, and where are the packages installed, when I run pip install with and without the -U and\/or the --user option?\nIn which order should I install a bunch of packages, starting from scratch on a fresh OSX machine, so I can set it up reliably every time?\n\nMostly I use opencv, scikit-image, numpy, scipy and pillow. These are giving me so many issues and I can't get a reliable setup so that Jenkins is happy to run the python code, using these libraries.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":37,"Q_Id":45726623,"Users Score":0,"Answer":"Brew installs packages into \/usr\/local\/Cellar and then links them to \/usr\/local\/bin (i.e. \/usr\/local\/bin\/python3). In my case, I just make sure to have \/usr\/local\/bin in my PATH prior to \/usr\/bin. \nexport PATH=\/usr\/local\/bin:$PATH\nBy using brew, your new packages will be installed to: \n\/usr\/local\/Cellar\/python\nor \n\/usr\/local\/Cellar\/python3\nPackage install order shouldn't matter.","Q_Score":1,"Tags":"python","A_Id":45726701,"CreationDate":"2017-08-17T04:32:00.000","Title":"Questions about double install of Python on OSX","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am running the airflow pipeline; the code looks good, but I'm actually getting airflow.exceptions.AirflowException: Cycle detected in DAG. Faulty task: \nCan you please help resolve this issue?","AnswerCount":2,"Available Count":2,"Score":1.0,"is_accepted":false,"ViewCount":10290,"Q_Id":45731858,"Users Score":7,"Answer":"Without the code, it's kind of hard to help you. However, this means that you have a loop in your DAG. Generally, this error happens when one of your tasks has a downstream task whose own downstream chain includes it again (A calls B calls C calls D calls A again, for example). \nThat's not permitted by Airflow (and DAGs in general).","Q_Score":10,"Tags":"python-2.7,apache-airflow","A_Id":45876209,"CreationDate":"2017-08-17T09:51:00.000","Title":"airflow.exceptions.AirflowException: Cycle detected in DAG. 
Faulty task","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"i am running the airflow pipeline but codes looks seems good but actually i'm getting the airflow.exceptions.AirflowException: Cycle detected in DAG. Faulty task: \ncan u please help to resolve this issue","AnswerCount":2,"Available Count":2,"Score":1.0,"is_accepted":false,"ViewCount":10290,"Q_Id":45731858,"Users Score":24,"Answer":"This can happen due to duplicate task_id'a in multiple tasks.","Q_Score":10,"Tags":"python-2.7,apache-airflow","A_Id":54864435,"CreationDate":"2017-08-17T09:51:00.000","Title":"airflow.exceptions.AirflowException: Cycle detected in DAG. Faulty task","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a bash script, in Python that runs on a Ubuntu server. Today, I mistakenly closed the Putty window after monitoring that the script ran correctly. \nThere is some usefull information that was printed during the scrip running and I would like to recover them. \nIs there a directory, like \/var\/log\/syslog for system logs, for Python logs? \n\nThis scripts takes 24 hours to run, on a very costly AWS EC2 instance, and running it again is not an option.\nYes, I should have printed usefull information to a log file myself, from the python script, but no, I did not do that.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":389,"Q_Id":45732437,"Users Score":1,"Answer":"Unless the script has an internal logging mechanism like e.g. using logging as mentioned in the comments, the output will have been written to \/dev\/stdout or \/dev\/stderr respectively, in which case, if you did not log the respective data streams to a file for persistent storage by using e.g. tee, your output is lost.","Q_Score":0,"Tags":"python,linux,bash,ubuntu,logging","A_Id":45801164,"CreationDate":"2017-08-17T10:17:00.000","Title":"Recover previous Python output to terminal","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"Just wondering - as a fail safe backup, I'm setting up a python cronjob script that I can print various things to the terminal.\nI was wondering, once the cronjob has finished - am I able to take a terminal dump for the last output? Even if it errors out...\nProbably going to be running on a Linux VPS - CentOS (not sure if that 100% matters).","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":27,"Q_Id":45736070,"Users Score":0,"Answer":"You can redirect all prints \/dev\/tty to print in all terminals python exec.py &> \/dev\/tty). 
But cron jobs execute detached from any terminal, so for a cron run a log file like the one sketched above is the only place the output will reliably end up.","Q_Score":0,"Tags":"python,linux,cron,command-line-interface","A_Id":45736226,"CreationDate":"2017-08-17T13:12:00.000","Title":"Python print() Terminal dump after cronjob has finished","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I recently booted up my Pyzo IDE with the intention of doing some programming; however, upon starting up the python shell it gave the following error:\n\nThe given path was not found\nThe process failed to start (invalid command?). (1)\n\nI am not able to run any code with this error. If I try to run it nothing happens and the error re-appears.\nI have tried reinstalling the whole thing without success, I have tried reading the log but there was no error message, and I have also tried looking for posts regarding the same problem without success. I was hoping someone could explain what my problem is and a possible solution, thanks.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":6080,"Q_Id":45742377,"Users Score":0,"Answer":"This is how I fixed it:\n\nI went to the Miniconda3 folder in C:\\Users\\\\Miniconda3 (it might be somewhere else; the point is that you need to find the Miniconda3 folder)\nFind the Python application\nRename it to \"python.exe\"\nThen go to the shell configuration and replace the path to the operable python program in \"exe\" with your path (for me it was \nC:\\Users\\\\Miniconda3\\python.exe)","Q_Score":0,"Tags":"python,pyzo","A_Id":46764383,"CreationDate":"2017-08-17T18:16:00.000","Title":"Getting a \"The process failed to start (invalid command?). (1)\" error when starting up Pyzo","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"In Python, how do you check that an external program is running? I'd like to track my use of some programs, so I can see the amount of time I've spent with them. For example, if I launch my program, I want to be able to see if Chrome has already been launched, and if so, start a timer which would end when I exit Chrome.\nI've seen that the subprocess module can launch external programs, but this is not what I'm looking for.\nThanks in advance.","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":47,"Q_Id":45772510,"Users Score":0,"Answer":"In my case I would try something using Task Manager data, probably using subprocess.check_output('ps') (for me that looks good), but you can also use the psutil library.\nTell us what you did later :)","Q_Score":0,"Tags":"python","A_Id":45772896,"CreationDate":"2017-08-19T14:05:00.000","Title":"python: how to check the use of an external program","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a large manifest file containing about 460,000 entries (all S3 files) that I wish to load to Redshift. Due to issues beyond my control a few (maybe a dozen or more) of these entries contain bad JSON that will cause a COPY command to fail if I pass in the entire manifest at once. 
Using COPY with a key prefix will also fail in the same way.\nTo get around this I have written a Python script that will go through the manifest file one URL at a time and issue a COPY command for each one using psycopg2. The script will additionally catch and log any errors to ensure that the script runs even when it comes across a bad file, and allows us to locate and fix the bad files.\nThe script has been running for a little more than a week now on a spare EC2 instance and is only around 75% complete. I'd like to lower the run time, because this script will be used again. \nMy understanding of Redshift is that COPY commands are executed in parallel, and with that I had an idea - will splitting the manifest file into smaller chunks and then running the script each of those chunks reduce the time it takes to load all the files?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":792,"Q_Id":45785730,"Users Score":1,"Answer":"COPY command can load multiple files in parallel very fast and efficiently. So when you run one COPY command for each file in your python file, that's going to take a lot of time since you are not taking advantage of parallel loading.\nSo maybe you can write a script to find bad JSONs in your manifest and kick them out and run a single COPY with the new clean manifest? \nOr like you suggested, I would recommend splitting manifest file into small chunks so that COPY can run for multiple files at a time. (NOT a single COPY command for each file)","Q_Score":1,"Tags":"python,amazon-web-services,amazon-s3,amazon-redshift","A_Id":45787036,"CreationDate":"2017-08-20T18:49:00.000","Title":"Using multiple manifest files to load to Redshift from S3?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"The executable i created works completely fine on my system,but as soon as opened in an other system the cmd opens for a very brief time and then closes.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":31,"Q_Id":45786134,"Users Score":0,"Answer":"Open a command prompt window in windows and execute the .exe from there at the system were it does not work - then you will see the error message why it does not work, and that might help to figure out where the problem is.\nIf you just double-click the exe the error shows as well, but the cmd window is closed immediately since the process terminates","Q_Score":0,"Tags":"python-2.7,py2exe","A_Id":45798385,"CreationDate":"2017-08-20T19:40:00.000","Title":"why does the executable i created using py2exe only runs on my computer and not on others?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"The documentation tells me to type python3 -m venv myenv into the command prompt, assuming the directory I'd like is called myenv. However, when I do this, I get:\n\"python3 is not recognized as an internal or external command, operable program or batch file.\"\nI have not seen this addressed on here, or in the documentation. 
My installation seems to have run correctly, because simply typing python shows me what it's supposed to show.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":164,"Q_Id":45790373,"Users Score":0,"Answer":"Python3 is not in your \"search path\"\nYou need to alter the Windows PATH value so the Python3.exe module is found.","Q_Score":0,"Tags":"python","A_Id":45790671,"CreationDate":"2017-08-21T06:15:00.000","Title":"Trying to create a virtual env on Windows 7, using Python 3.6.2","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am running pytest on a Jenkins machine; although I am not sure which Python it is actually running.\nThe machine is running OSX; and I did install various libraries (like numpy and others), on top of another Python install via Brew, so I keep things separated.\nWhen I run the commands from console; I specify python2.6 -m pytest mytest.py, which works, but when I run the same via shell in Jenkins, it fail, because it can't find the right libraries (which are the extra libraries I did install, after installing Python via Brew).\nIs there a way to know what is Jenkins using, so I can force it to run the correct python binary, which has access to my extra libraries?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":171,"Q_Id":45806967,"Users Score":0,"Answer":"You can use which python to find which python Jenkins use.\nYou can use ABSPATH\/OF\/python to run your pytest","Q_Score":0,"Tags":"python,jenkins","A_Id":45807814,"CreationDate":"2017-08-21T23:45:00.000","Title":"how to find out which Python is called when I run pytest via Jenkins","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I installed Tensorflow on my macOS Sierra using pip install tensorflow. \nIm getting the following error:\nOSError: [Errno 1] Operation not \npermitted:'\/var\/folders\/zn\/l9gmn4613677f6mlrh6prtb00000gn\/T\/pip-xv3AU6-uninstall\/System\/Library\/Frameworks\/Python.framework\/Versions\/2.7\/Extras\/lib\/python\/numpy-1.8.0rc1-py2.7.egg-info'\nIs there anyway to resolve this?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":109,"Q_Id":45824737,"Users Score":0,"Answer":"Use Python EasyInstall, is super easy:\nsudo easy_install pip","Q_Score":0,"Tags":"python-2.7,tensorflow","A_Id":45825713,"CreationDate":"2017-08-22T18:39:00.000","Title":"Error in installing Tensorflow on Mac","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I built a GUI in JavaFX with FXML for running a bunch of different Python scripts. The Python scripts continuously collect data from a device and print it to the console as it's collected in a loop at anywhere from around 10 to 70 Hz depending on which script was being run, and they don't stop on their own.\nI want the end-user to be able to click a button on my GUI which launches the scripts and lets them see the output. 
Currently, the best I have done was using Runtime.exec() with the command \"cmd \/c start cmd \/k python some_script.py\" which opens the windows command prompt, runs python some_script.py in it, and keeps the command prompt open so that you can see the output. The problem with this is that it only works on Windows (my OS) but I need to have universal OS support and that it relies on Java starting an external program which I hear is not very elegant.\nI then tried to remedy this by executing the python some_script.py command in Java, capturing the process output with BufferedReader, creating a new JavaFX scene with just a TextArea in an AnchorPane to be a psuedo-Java-console and then calling .setText() on that TextArea to put the script output in it.\nThis kinda worked, but I ran into many problems in that the writing to the JavaFX console would jump in big chunks of several dozens of lines instead of writing to it line by line as the Python code was making Print() calls. Also, I got a bunch of NullPointerException and ArrayIndexOutOfBoundsException somewhat randomly in that Java would write a couple of hundred lines correctly but then throw those errors and freeze the program. I'm pretty sure both of these issues were due to having so much data at such high data rates which overflowed the BufferedReader buffer and\/or the TextArea.setText() cache or something similar.\nWhat I want to know is what approach I should take at this. I cannot migrate the Python code to Java since it relies on someone else's Python library to collect its data. Should I try to keep with the pseudo-Java-console idea and see if I can make that work? Should I go back to opening a command prompt window from Java and running the Python scripts and then add support for doing the same with Terminal in Mac and Linux? Is there a better approach I haven't thought of? Is the idea of having Java code call Python code and handle its output just disgusting and a horrible idea?\nPlease let me know if you would like to see any code (there is quite a lot) or if I can clarify anything, and I will try my best to respond quickly. Thank you!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":250,"Q_Id":45852491,"Users Score":0,"Answer":"My solution was to still call the Python code from the Java Processbuilder, but use the -u option like python -u scriptname.py to specify unbuffered Python output.","Q_Score":0,"Tags":"java,python,user-interface,javafx,console","A_Id":45989477,"CreationDate":"2017-08-24T03:33:00.000","Title":"JavaFX show looping Python print output","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"How to produce kafka topic using message batch or buffer with pykafka. I mean one producer can produce many message in one produce process. i know the concept using message batch or buffer message but i dont know how to implement it. I hope someone can help me here","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":4131,"Q_Id":45859384,"Users Score":0,"Answer":"Just use the send() method. You do not need to manage it by yourself.\n\nsend() is asynchronous. When called it adds the record to a buffer of\n pending record sends and immediately returns. 
This allows the producer\n to batch together individual records for efficiency.\n\nYour only task is to configure two properties: batch_size and linger_ms.\n\nThe producer maintains buffers of unsent records for each partition.\n These buffers are of a size specified by the \u2018batch_size\u2019 config.\n Making this larger can result in more batching, but requires more\n memory (since we will generally have one of these buffers for each\n active partition).\n\nThe two properties interact as described below: \n\nonce we get batch_size worth of records for a partition it will be sent immediately regardless of this setting, however if we have fewer than this many bytes accumulated for this partition we will \u2018linger\u2019 for the specified time waiting for more records to show up.","Q_Score":1,"Tags":"python,apache-kafka,kafka-producer-api,pykafka","A_Id":45862977,"CreationDate":"2017-08-24T10:37:00.000","Title":"How to produce kafka topic using message batch or buffer with pykafka","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have been using Anaconda(4.3.23) on my GuestOS ubuntu 14.04 which is installed on Vmware on HostOS windows 8.1. I have set up an environment in anaconda and have installed many libraries, some of which were very hectic to install (not straightforward pip installs). A few libraries had inner dependencies and had to be built together from their git source.\nProblem\nI am going to use a cloud-based VM (Azure GPU instance) to use the GPU, but I don't want to get into the hectic installation again, as I don't want to waste money on the time it will take to install all the packages and libraries again.\nIs there any way to transfer\/copy my existing env (which has everything already installed) to the Cloud VM?","AnswerCount":5,"Available Count":1,"Score":0.0399786803,"is_accepted":false,"ViewCount":24187,"Q_Id":45864595,"Users Score":1,"Answer":"You can probably get away with copying the whole Anaconda installation to your cloud instance.","Q_Score":18,"Tags":"python,ubuntu,anaconda,virtualenv,conda","A_Id":55539151,"CreationDate":"2017-08-24T14:44:00.000","Title":"How to transfer Anaconda env installed on one machine to another? [Both with Ubuntu installed]","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm installing anaconda python 3, version 4.4.0 on Windows machines. The install finishes normally. But I'm getting errors when I try to use conda to update or to create virtual environments. 
Package resolution completes and downloads the packages but then hangs for a long time before throwing out a load of errors like so:\nconda create -n py2 python=2.7 anaconda\nINFO menuinst_win32:__init__(182): Menu: name: 'Anaconda${PY_VER} ${PLATFORM}',\nprefix: 'c:\\anaconda\\envs\\py2', env_name: 'py2', mode: 'None', used_mode: 'system'\nINFO menuinst_win32:__init__(182): Menu: name: 'Anaconda${PY_VER} ${PLATFORM}',\nprefix: 'c:\\anaconda\\envs\\py2', env_name: 'py2', mode: 'None', used_mode: 'system'\nINFO menuinst_win32:__init__(182): Menu: name: 'Anaconda${PY_VER} ${PLATFORM}',\nprefix: 'c:\\anaconda\\envs\\py2', env_name: 'py2', mode: 'None', used_mode: 'system'\nINFO menuinst_win32:__init__(182): Menu: name: 'Anaconda${PY_VER} ${PLATFORM}',\nprefix: 'c:\\anaconda\\envs\\py2', env_name: 'py2', mode: 'None', used_mode: 'system'\nINFO menuinst_win32:__init__(182): Menu: name: 'Anaconda${PY_VER} ${PLATFORM}',\nprefix: 'c:\\anaconda\\envs\\py2', env_name: 'py2', mode: 'None', used_mode: 'system'\nINFO menuinst_win32:__init__(182): Menu: name: 'Anaconda${PY_VER} ${PLATFORM}',\nprefix: 'c:\\anaconda\\envs\\py2', env_name: 'py2', mode: 'None', used_mode: 'system'\nINFO menuinst_win32:__init__(182): Menu: name: 'Anaconda${PY_VER} ${PLATFORM}',\nprefix: 'c:\\anaconda\\envs\\py2', env_name: 'py2', mode: 'None', used_mode: 'system'\nThe environment will still have been created but this hanging and waiting is really becoming a problem. I assume this is a fairly new bug because I've been installing anaconda and using conda for quite a while and never seen this error before.","AnswerCount":1,"Available Count":1,"Score":0.537049567,"is_accepted":false,"ViewCount":2583,"Q_Id":45879771,"Users Score":3,"Answer":"I posted this to the anaconda github page. Apparently this is an issue with the displayed output but isn't actually an error in the install. The virtual environment installations and updates do work, although they are slower than normal.","Q_Score":4,"Tags":"python,anaconda,conda","A_Id":46000463,"CreationDate":"2017-08-25T10:52:00.000","Title":"Anaconda3 conda command error menuinst_win32","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I need to create an Azure webjob that runs a python script which uses pyodbc.\nThe Azure compiler does not recognize pyodbc.\nHow do I install it or reference it in some way?","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":962,"Q_Id":45881542,"Users Score":1,"Answer":"Get the working PyODBC first. To use PyODBC you should compile it in its 32bits version. Or install Python 2.7 or 3.4 (32-Bit) and type the command \"pip install pyodbc\"\nTo use it in Azure WebJob, put the PyODBC.pyd file in the root directory of your job and it should work.","Q_Score":1,"Tags":"python,azure,azure-sql-database","A_Id":45882593,"CreationDate":"2017-08-25T12:33:00.000","Title":"How to install a python library for an Azure webjob?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have built a photo booth on a raspberry pi. It works fantastic! But after some coding I now have a problem organizing my scripts. At the moment all scripts are launched via \"lxterminal -e\". 
So every script has its own terminal window and everything runs simultaneously. I ask myself if this can be done in a more efficient way.\nThe basic function of the photo booth: People press a remote button, take a picture, picture is being shown on the built-in tft.\n\nstart.sh --> is being executed automatically after booting. It prepares the system, sets up the camera and brings it into tethered mode. After all that, it launches the following scripts:\nsystem-watchdog.sh --> checks continuously if one of the physical buttons on the photo booth is being pressed, to reboot or go into setup mode. It's an ever-lasting-while-loop.\nsync.sh --> syncs the captured photo to some folders, where they are modified for being printed. Also an ever-lasting-while-loop.\nbackup.sh --> copies all taken pictures to a USB device as a backup. This is a cronjob, every 5 minutes.\ntemp-logger.sh --> Logs the temperature of the CPU continuously, because I had heat-problems.\n\nThe cpu is running constantly at about 20-40%. Maybe with some optimization I could run fewer scripts and use less cpu.\nAny suggestions what I could use to organize the scripts in a better way?\nThanks for your suggestions!","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":48,"Q_Id":45883224,"Users Score":1,"Answer":"sync.sh --> syncs the captured photo to some folders, where they are modified for 1. being shown on the second screen, 2. upload to\n dropbox and 3. being printed. Also an ever-lasting-while-loop.\nterminal-sync.sh --> copies the taken photos to the\n second-screen-terminal, where they are shown in a gallery. It's also\n an ever-lasting-while-loop.\n\nFor these, you can use inotifywait to wait for file availability before processing the file.\nYou should check, using top, which script is actually consuming CPU and why. Once you identify the script and why it consumes CPU, you can start finding an optimized way to do the same job","Q_Score":0,"Tags":"python,linux,bash,scripting,organization","A_Id":45884636,"CreationDate":"2017-08-25T14:06:00.000","Title":"Bash \/ Python: Organization of several scripts","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have been stuck on this for a while. 
I've read many blogs but still struggling to understand how to set the path in my advanced settings in environment variables so that I can run my scripts in both cmd and python interpreter.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":38,"Q_Id":45899505,"Users Score":0,"Answer":"So the solution for this would be, setting the environment variable for python by following the steps below:\nClick on:- My Computer > Properties > Advanced System Settings > Environment Variables \nThen under system variables, create a new Variable called PyPath.\nIn this variable I have C:\\Python27\\Lib;C:\\Python27\\DLLs;C:\\Python27\\Lib\\lib-tk;\nThis would work for windows","Q_Score":0,"Tags":"python","A_Id":45899652,"CreationDate":"2017-08-26T20:41:00.000","Title":"How do I configure the path of my python scripts so I can open it with cmd and python interpreter","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm fairly new to Python and I have a python script that I would like to ultimately convert to a Windows executable (which I already know how to do). Is there a way I can write something in the script that would make it run as a background process in Windows instead of being visible in the foreground?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":3312,"Q_Id":45911894,"Users Score":0,"Answer":"You can run the file using pythonw instead of python means run the command pythonw myscript.py instead of python myscript.py","Q_Score":7,"Tags":"python,windows,background-process","A_Id":60206875,"CreationDate":"2017-08-28T04:43:00.000","Title":"How to put a Python script in the background without pythonw.exe?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"So I want to install opencv in my raspberry pi 3b. When I sudo update, upgrade and finally reboot my rasp pi, I noticed that my LCD touch is now disabled. Good thing I have a back-up of the OS to make the LCD touch enabled again. How will I avoid this?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":36,"Q_Id":45913845,"Users Score":0,"Answer":"Before installing any new update in your Raspberry, check first provided drivers for your devices. Otherwise keep a copy of your drivers and reinstall them after update and upgrade.","Q_Score":0,"Tags":"python,raspberry-pi3","A_Id":45914010,"CreationDate":"2017-08-28T07:30:00.000","Title":"LCD not working after sudo update and upgrade","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"Trying to install Google Cloud SDK(Python) on Windows 10 for All Users. Getting the following error. \nThis is new machine and start building fresh. Installed python 2.7 version prior to this. \nPlease help me to resolve this. \n\nOutput folder: C:\\Program Files (x86)\\Google\\Cloud SDK Downloading\n Google Cloud SDK core. Extracting Google Cloud SDK core. Create Google\n Cloud SDK bat file: C:\\Program Files (x86)\\Google\\Cloud\n SDK\\cloud_env.bat Installing components. 
Welcome to the Google Cloud\n SDK! This will install all the core command line tools necessary for\n working with the Google Cloud Platform. Traceback (most recent call\n last): File \"C:\\Program Files (x86)\\Google\\Cloud\n SDK\\google-cloud-sdk\\bin\\bootstrapping\\install.py\", line 214, in\n \n main() File \"C:\\Program Files (x86)\\Google\\Cloud SDK\\google-cloud-sdk\\bin\\bootstrapping\\install.py\", line 192, in main\n Install(pargs.override_components, pargs.additional_components) File \"C:\\Program Files (x86)\\Google\\Cloud\n SDK\\google-cloud-sdk\\bin\\bootstrapping\\install.py\", line 134, in\n Install\n InstallOrUpdateComponents(to_install, update=update) File \"C:\\Program Files (x86)\\Google\\Cloud\n SDK\\google-cloud-sdk\\bin\\bootstrapping\\install.py\", line 177, in\n InstallOrUpdateComponents\n ['--quiet', 'components', verb, '--allow-no-backup'] + component_ids) File \"C:\\Program Files (x86)\\Google\\Cloud\n SDK\\google-cloud-sdk\\lib\\googlecloudsdk\\calliope\\cli.py\", line 813, in\n Execute\n self._HandleAllErrors(exc, command_path_string, specified_arg_names) File \"C:\\Program Files (x86)\\Google\\Cloud\n SDK\\google-cloud-sdk\\lib\\googlecloudsdk\\calliope\\cli.py\", line 787, in\n Execute\n resources = args.calliope_command.Run(cli=self, args=args) File \"C:\\Program Files (x86)\\Google\\Cloud\n SDK\\google-cloud-sdk\\lib\\googlecloudsdk\\calliope\\backend.py\", line\n 754, in Run\n resources = command_instance.Run(args) File \"C:\\Program Files (x86)\\Google\\Cloud\n SDK\\google-cloud-sdk\\lib\\surface\\components\\update.py\", line 99, in\n Run\n version=args.version) File \"C:\\Program Files (x86)\\Google\\Cloud SDK\\google-cloud-sdk\\lib\\googlecloudsdk\\core\\updater\\update_manager.py\",\n line 850, in Update\n command_path='components.update') File \"C:\\Program Files (x86)\\Google\\Cloud\n SDK\\google-cloud-sdk\\lib\\googlecloudsdk\\core\\updater\\update_manager.py\",\n line 591, in _GetStateAndDiff\n command_path=command_path) File \"C:\\Program Files (x86)\\Google\\Cloud\n SDK\\google-cloud-sdk\\lib\\googlecloudsdk\\core\\updater\\update_manager.py\",\n line 574, in _GetLatestSnapshot\n *effective_url.split(','), command_path=command_path) File \"C:\\Program Files (x86)\\Google\\Cloud\n SDK\\google-cloud-sdk\\lib\\googlecloudsdk\\core\\updater\\snapshots.py\",\n line 165, in FromURLs\n for url in urls] File \"C:\\Program Files (x86)\\Google\\Cloud SDK\\google-cloud-sdk\\lib\\googlecloudsdk\\core\\updater\\snapshots.py\",\n line 186, in _DictFromURL\n response = installers.ComponentInstaller.MakeRequest(url, command_path) File \"C:\\Program Files (x86)\\Google\\Cloud\n SDK\\google-cloud-sdk\\lib\\googlecloudsdk\\core\\updater\\installers.py\",\n line 285, in MakeRequest\n return ComponentInstaller._RawRequest(req, timeout=timeout) File \"C:\\Program Files (x86)\\Google\\Cloud\n SDK\\google-cloud-sdk\\lib\\googlecloudsdk\\core\\updater\\installers.py\",\n line 329, in _RawRequest\n should_retry_if=RetryIf, sleep_ms=500) File \"C:\\Program Files (x86)\\Google\\Cloud\n SDK\\google-cloud-sdk\\lib\\googlecloudsdk\\core\\util\\retry.py\", line 155,\n in TryFunc\n return func(*args, kwargs), None File \"C:\\Program Files (x86)\\Google\\Cloud\n SDK\\google-cloud-sdk\\lib\\googlecloudsdk\\core\\url_opener.py\", line 73,\n in urlopen\n return opener.open(req, data, timeout) File \"c:\\users\\cpa8161\\appdata\\local\\temp\\tmpxcdivh\\python\\lib\\urllib2.py\",\n line 429, in open\n response = self._open(req, data) File 
\"c:\\users\\cpa8161\\appdata\\local\\temp\\tmpxcdivh\\python\\lib\\urllib2.py\",\n line 447, in _open\n '_open', req) File \"c:\\users\\cpa8161\\appdata\\local\\temp\\tmpxcdivh\\python\\lib\\urllib2.py\",\n line 407, in _call_chain\n result = func(*args) File \"C:\\Program Files (x86)\\Google\\Cloud SDK\\google-cloud-sdk\\lib\\googlecloudsdk\\core\\url_opener.py\", line 58,\n in https_open\n return self.do_open(build, req) File \"c:\\users\\cpa8161\\appdata\\local\\temp\\tmpxcdivh\\python\\lib\\urllib2.py\",\n line 1195, in do_open\n h.request(req.get_method(), req.get_selector(), req.data, headers) File\n \"c:\\users\\cpa8161\\appdata\\local\\temp\\tmpxcdivh\\python\\lib\\httplib.py\",\n line 1042, in request\n self._send_request(method, url, body, headers) File \"c:\\users\\cpa8161\\appdata\\local\\temp\\tmpxcdivh\\python\\lib\\httplib.py\",\n line 1082, in _send_request\n self.endheaders(body) File \"c:\\users\\cpa8161\\appdata\\local\\temp\\tmpxcdivh\\python\\lib\\httplib.py\",\n line 1038, in endheaders\n self._send_output(message_body) File \"c:\\users\\cpa8161\\appdata\\local\\temp\\tmpxcdivh\\python\\lib\\httplib.py\",\n line 882, in _send_output\n self.send(msg) File \"c:\\users\\cpa8161\\appdata\\local\\temp\\tmpxcdivh\\python\\lib\\httplib.py\",\n line 844, in send\n self.connect() File \"C:\\Program Files (x86)\\Google\\Cloud SDK\\google-cloud-sdk\\lib\\third_party\\httplib2__init__.py\", line 1081,\n in connect\n raise SSLHandshakeError(e)\n **httplib2.SSLHandshakeError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:661) Failed to install.","AnswerCount":6,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":13684,"Q_Id":45927259,"Users Score":2,"Answer":"I just spent hours trying to make the installer run trying to edit ca cert files but the installer keeps wiping the directories as part of the installation process. In order to make the bundle gcloud sdk installer work, I ended up having to create an environment variable SSL_CERT_FILE and setting the path to a ca cert text file that contained the Google CAs + my company's proxy CA cert. Then the installer ran without issue. It seems that env variable is used by the python http client for CA validation.\nThen you need to run gcloud config set custom_ca_certs_file before running gcloud init","Q_Score":2,"Tags":"google-cloud-platform,google-cloud-sdk,google-cloud-python","A_Id":56367158,"CreationDate":"2017-08-28T20:56:00.000","Title":"google cloud python sdk installation error - SSL Certification Error","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a bunch of Python scripts which are to be run by several different users. The scripts are placed in a Windows server environment. 
\nWhat I wish to achieve is to protect these scripts in such a way that standard users are allowed to run them but do not have the rights to read\/modify\/move them.\nIs this even possible, and if so, what is the optimal strategy?\nThanks in advance.","AnswerCount":3,"Available Count":2,"Score":-0.0665680765,"is_accepted":false,"ViewCount":381,"Q_Id":45931604,"Users Score":-1,"Answer":"There is no way to make them executable but not readable at the same time.","Q_Score":0,"Tags":"python,windows,security,server","A_Id":45931678,"CreationDate":"2017-08-29T05:55:00.000","Title":"Protect python script on Windows server","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a bunch of Python scripts which are to be run by several different users. The scripts are placed in a Windows server environment. \nWhat I wish to achieve is to protect these scripts in such a way that standard users are allowed to run them but do not have the rights to read\/modify\/move them.\nIs this even possible, and if so, what is the optimal strategy?\nThanks in advance.","AnswerCount":3,"Available Count":2,"Score":0.0665680765,"is_accepted":false,"ViewCount":381,"Q_Id":45931604,"Users Score":1,"Answer":"You can compile Python modules into native libraries with Cython and provide the compiled files only, though it involves a lot of hassle and doesn't work in some cases. They can still be decompiled to C code, but it will be mostly unreadable.\nPros: 1. compiled libraries can be imported as normal Python modules.\nCons: 1. requires additional setup; 2. doesn't work in some cases, e.g. celery tasks cannot reside in compiled modules; 3. you lose introspection abilities; 4. tracebacks are basically unreadable.","Q_Score":0,"Tags":"python,windows,security,server","A_Id":45932177,"CreationDate":"2017-08-29T05:55:00.000","Title":"Protect python script on Windows server","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a Windows 7 system on which I installed VirtualBox 5.1.26.\nIn this VirtualBox I installed a Debian 64-bit Linux server (I think I configured it correctly; it gets enough memory).\nWhen I run a Python script on it (a web-scraping script that processes around 1000 pages and stores them in a database), I always get the same error message after a few minutes:\n\nUnable to allocate and lock memory. The virtual machine will be paused. 
Please close applications to free up memory or close the VM.\nOr sometimes an error message about running out of time (when it tries to load a website).\n\nOn the Windows 7 system my script works without any problem, so I am a little confused now: what is the problem here?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":192,"Q_Id":45934277,"Users Score":0,"Answer":"First check the parameters of your virtual machine: you might have given it more RAM or processors than you have (or not enough).\nIf this is not the case, close everything in the VM and start only the script.\nThese errors generally say that you don't have the resources to perform the operation.\nCheck if your syntax is ok and if you are using the same version of Python on both systems.\nNote that the VM is a guest system and can't have as many resources as your main OS, because otherwise the main OS would die in some circumstances.","Q_Score":1,"Tags":"python,web-scraping,debian,virtualbox","A_Id":45934419,"CreationDate":"2017-08-29T08:31:00.000","Title":"How to run a python script successfully with a debian system on the VirtualBox?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"Does anyone know how to get the current user from Airflow? We have our backend set to airflow\/contrib\/auth\/backends\/ldap_auth.py, so users log in via that authentication, and I want to know how to get the current user that clicks on something (a custom view we have as a plugin).","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1335,"Q_Id":45935428,"Users Score":1,"Answer":"You can get it by calling {{ current_user.user.username }} or {{ current_user.user }} in your HTML Jinja template.","Q_Score":0,"Tags":"python,airflow,flask-login,apache-airflow","A_Id":46274739,"CreationDate":"2017-08-29T09:28:00.000","Title":"Airflow: Get user logged in with ldap","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"If you are using a custom Python layer - and assuming you wrote the class correctly in Python - let's say the name of the class is \"my_ugly_custom_layer\", and you execute caffe in the Linux command line interface, \nhow do you make sure that caffe knows how to find the file where you wrote the class for your layer? 
Do you just place the .py file in the same directory as the train.prototxt?\nOr,\nif you wrote a custom class in Python, do you need to use the Python wrapper interface?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":263,"Q_Id":45942883,"Users Score":2,"Answer":"Your Python layer has two parameters in the prototxt: layer: where you define the Python class name implementing your layer, and module: where you define the .py file name where the layer class is implemented.\nWhen you run caffe (either from the command line or via the Python interface) you need to make sure your module is in the PYTHONPATH","Q_Score":1,"Tags":"python,caffe,layer","A_Id":45944107,"CreationDate":"2017-08-29T15:22:00.000","Title":"Berkeley caffe command line interface","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I know for integers one can use htonl and ntohl, but what about pickle byte streams?\nIf I know that the next 150 bytes that are received are a pickle object, do I still have to reverse the byte order just in case one machine uses big-endian and the other is little-endian?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":358,"Q_Id":45992329,"Users Score":0,"Answer":"So I think I figured out the answer.\n\nBy default, the pickle data format uses a printable ASCII\n representation.\n\nSince ASCII is a single-byte representation, the endianness does not matter.","Q_Score":0,"Tags":"python,sockets","A_Id":45992812,"CreationDate":"2017-09-01T01:52:00.000","Title":"Does Pickle.dumps and loads used for sending network data require change in byte order?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I configured the AWS CLI on a Linux system.\nWhile running any command like \"aws ec2 describe-instances\" it shows the error \"Invalid IPv6 URL\"","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":6082,"Q_Id":45999285,"Users Score":6,"Answer":"Ran into the same error. \nRunning this command fixed the error for me:\nexport AWS_DEFAULT_REGION=us-east-1\nYou might also try specifying the region when running any command:\naws s3 ls --region us-east-1\nHope this helps!\nOr run aws configure and enter a valid region for the default region name","Q_Score":0,"Tags":"python-2.7,amazon-ec2,aws-cli","A_Id":46288793,"CreationDate":"2017-09-01T11:27:00.000","Title":"Invalid IPv6 URL while running commands using AWS CLI","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I configured the AWS CLI on a Linux system.\nWhile running any command like \"aws ec2 describe-instances\" it shows the error \"Invalid IPv6 URL\"","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":6082,"Q_Id":45999285,"Users Score":1,"Answer":"I ran into this issue due to the region being wrongly typed. 
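A quick way to see exactly what ended up in the config file is to print the value with repr(), which exposes any stray characters (a sketch, assuming Python 3's configparser; on Python 2 the module is named ConfigParser):\nimport configparser\nimport os\n\nconfig = configparser.ConfigParser()\nconfig.read(os.path.expanduser('~\/.aws\/config'))\n# repr() makes invisible or invalid characters in the region value visible\nprint(repr(config.get('default', 'region', fallback=None)))\n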
When you run aws configure during initial setup, if you try to delete a mistaken entry, it will end up having invalid characters in the region name.\nHopefully, running aws configure again will resolve your issue.","Q_Score":0,"Tags":"python-2.7,amazon-ec2,aws-cli","A_Id":50748839,"CreationDate":"2017-09-01T11:27:00.000","Title":"Invalid IPv6 URL while running commands using AWS CLI","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I tried making a cronjob to run my script every second. I used *\/60 * * * * as the parameter to run every second, but it didn't work. Please suggest how I should run my script every second.","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1441,"Q_Id":46024476,"Users Score":0,"Answer":"Well, no. Your argument - *\/60 * * * * - means \"run every 60 minutes\". And you can't specify a shorter interval than 1 minute, not in standard Unix cron anyway. (The usual workaround for sub-minute schedules is a long-running script that does its work in a loop with time.sleep(1).)","Q_Score":1,"Tags":"python,cron","A_Id":46024560,"CreationDate":"2017-09-03T14:34:00.000","Title":"I want to make a cronjob such that it runs a python script every second","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am migrating my Java project from RabbitMQ to Kafka (for various reasons).\nHowever, I am facing one difficulty. \nIn the current workflow, I post all the messages to a RabbitMQ exchange, and based on the routing key of the messages, the messages are redirected to one or more queues.\nI want to retain the same functionality in Kafka as well. (I know Kafka is not originally suited for this, but I want a workaround.)\nBasically, I want something like this: whenever a message is received by a topic, based on the metadata present in the message, it should be redirected to another set of topics.\nWhat is the fastest way to achieve this? I would prefer a Python or Java solution.\nThanks","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":391,"Q_Id":46048165,"Users Score":0,"Answer":"If you publish Kafka messages with keys they will be directed to topic partitions such that all similar keys go to the same partition.\nAlternatively you can use Kafka Streams to read an input topic and route messages to a set of output topics based on the keys provided with the messages.","Q_Score":0,"Tags":"java,python,apache-kafka,workflow","A_Id":46048325,"CreationDate":"2017-09-05T06:18:00.000","Title":"How to add workflow to Kafka messages?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"Is it possible to install Python from cmd on Windows? 
If so, how do I do it?","AnswerCount":3,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":66195,"Q_Id":46056161,"Users Score":1,"Answer":"For Windows\nI was unable to find a way to download Python using just CMD, but if you have python.exe on your system then you can use the method below to install it (you can also make a .bat file to automate it).\n\nDownload the python.exe file on your computer from the official site.\n\nOpen CMD and change your directory to the path where you have python.exe\n\nPaste this command in your command prompt; make sure to change the name to your file version in the code below (e.g. python-3.8.5.exe)\n\n\npython-3.6.0.exe \/quiet InstallAllUsers=1 PrependPath=1 Include_test=0\nIt will also set the path variables.","Q_Score":7,"Tags":"python,cmd,installation,python-install","A_Id":63153547,"CreationDate":"2017-09-05T13:26:00.000","Title":"How to install Python using Windows Command Prompt","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am freshly reinstalling Python 2.7 (python, pip) and 3.6 (python3, pip3). However, when I installed pipenv and virtualenv for python3 using pip3, the corresponding bash commands were not added, so simple things like $ virtualenv --version\nfail. \nWhat is going on here? Can anyone help, please? \nThanks","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":373,"Q_Id":46062103,"Users Score":1,"Answer":"From your Python version directory, pip installs packages to '.\/lib\/python\/site-packages\/' and creates the binary in '.\/bin\/'. If you install a package to your User directory with:\npip install --user [packagename]\nthe Python version directory is: \n\/Users\/[username]\/Library\/Python\/[version]\/\notherwise the directory is usually: \n\/Library\/Frameworks\/Python.framework\/Versions\/[version].\nCreate a symbolic link from the virtualenv binary in \/Users\/[username]\/Library\/Python\/3.6\/bin\/ to \/usr\/local\/bin\/ in your path with ln -s:\nln -s \/Users\/[username]\/Library\/Python\/3.6\/bin\/virtualenv \/usr\/local\/bin\/virtualenv\nand you should be all set.\nIf you need to delete the symbolic link simply use rm:\nrm \/usr\/local\/bin\/virtualenv","Q_Score":0,"Tags":"python,bash,macos","A_Id":46064025,"CreationDate":"2017-09-05T19:31:00.000","Title":"osx pip python3 - installing packages does not create alias","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am performing some operations using DataProcPySparkOperator. This operator only takes a cluster name as a parameter; there is no option to specify the region, and by default it assumes the cluster is in the global region.\nFor clusters with regions other than global, the following error occurs:\n\ngoogleapiclient.errors.HttpError: https:\/\/dataproc.googleapis.com\/v1\/projects\/\/regions\/global\/jobs:submit?alt=json returned \"No current cluster for project id '' with name ''`\n\nAm I missing anything, or is it just a limitation of these operators?","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":822,"Q_Id":46067848,"Users Score":1,"Answer":"We were running into the same issue using Google Composer, which was running Airflow 1.9. 
We upgraded to Airflow 1.10 and this fixed the issue. Google just released it. Now, when I run the operator it can see the cluster - it looks in the correct region. Previously it was always looking in global.","Q_Score":1,"Tags":"python,airflow,google-cloud-dataproc","A_Id":53117725,"CreationDate":"2017-09-06T06:01:00.000","Title":"Airflow DataProcPySparkOperator not considering cluster other than global region","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm highly confused about this. Python 3 is installed by default on the MacBook.\nwhich python3 will output \/Library\/Frameworks\/Python.framework\/Versions\/3.6\/bin\/python3\nGreat, so that's my SDK to put into IntelliJ IDEA, one should think.\nI have the Python plugin for IDEA. However, I can't run Python files. So I try to change the configuration and set it to the above path for the Python interpreter.\nHowever, still nothing. Trying to run the Python file inside IDEA just prompts for a new configuration.\nI can run the script just fine with python3 script.py in the terminal. I know the path for the Python 3 library, yet IDEA doesn't recognise it at all and doesn't save the configuration.\nWhat am I doing wrong in this process? This should be fairly easy to set up but turns out it isn't :) \nI even tried to create a Python 3.6.2 virtual environment with the IDEA internal tool - same thing. It doesn't allow me to run the Python 3 script from inside IDEA. \nShould I use Python from \/usr\/bin? If I cd there, I can see python3. But inside IDEA, I only have access to Python 2.","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":5298,"Q_Id":46070112,"Users Score":2,"Answer":"Try this in the IDEA menu: File -> Settings -> Project: Name of project -> Project Interpreter, and at the top of the window you can choose the interpreter version or a virtualenv.","Q_Score":1,"Tags":"python,macos,intellij-idea","A_Id":46070218,"CreationDate":"2017-09-06T08:17:00.000","Title":"How do I set up Python 3 with IntelliJ IDEA on OSX?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a fresh Ubuntu 16.04 setup for production.\nInitially, typing\npython --version gives me Python 2.7 and python3 --version gives me Python 3.5,\nbut I want python to point to python3 by default, so I put\nalias python=python3 in my ~\/.bashrc and ran source ~\/.bashrc.\nAfter that I installed pip using sudo apt-get install python-pip, and when I type pip --version it prints pip 8.1.1 from \/usr\/lib\/python2.7\/dist-packages (python 2.7); instead I want packages to be installed into, and loaded from, \/usr\/local\/lib\/python3.5\/dist-packages.\nI have a Django application which is written with Python 3 compatible code. \nUpdate: I want to install other packages which have to load from the python3 dist-packages, not just pip. 
I don't want to remove Python 2.7 from Ubuntu, as it will break other programs; I thought alias python=python3 would make packages install into the python3.5 dist-packages as well.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":421,"Q_Id":46107451,"Users Score":0,"Answer":"You need to use pip3 as the command:\npip3 install coolModule\nBe sure to add this to your bash profile:\nalias pip3=\"python3 -m pip\"","Q_Score":1,"Tags":"python,ubuntu","A_Id":46107476,"CreationDate":"2017-09-08T01:51:00.000","Title":"python3 loading dist-packages from python2 on ubuntu","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":1},{"Question":"Excuse the awkward question wording.\nI've made a script. I would like for others to download it from github, and run it by typing programName argument1 argument2, similar to any other popular app used through the terminal such as Jupyter or even opening Atom\/Sublime\/etc. (ex: jupyter notebook, atom .). However, unlike Jupyter or Sublime, my script isn't launching another app; it's a small app meant to be used in the shell. \nCurrently, to use my script, one must type into the command line python programName.py arg1 etc from within the file's directory. \nHow do I allow others to download it and use it from anywhere (not having to be within the directory), without having to type out the whole python programName.py part, and only having to type programName arg1?","AnswerCount":4,"Available Count":1,"Score":0.049958375,"is_accepted":false,"ViewCount":78,"Q_Id":46111984,"Users Score":1,"Answer":"You can simply add your script's directory to the PATH variable in order to launch it from anywhere.\nIn Linux distros, you can do this with the bash command PATH=$PATH:\/path\/to\/your\/script.\nMake sure you don't have a space around the \"=\" operator.\nNow, the second thing is that you don't want your script to be named pythonProgram.py. Rename the file to drop the .py extension, make it executable with chmod +x, and add a single line to the start of your script.\nOpen up your script and at the very beginning type #!\/usr\/bin\/python. This should be the first line of your code. This line is called a shebang and tells the shell which interpreter should be used to run the script.\nIf everything went right, you will be able to run your script as pythonProgram arg1.","Q_Score":1,"Tags":"python,python-2.7,python-3.x,shell,command-line","A_Id":46112425,"CreationDate":"2017-09-08T08:27:00.000","Title":"How would I allow others to use my script from anywhere in the shell without having to type out the file extension?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a server process which receives requests from web clients. \nThe server has to call an external worker process ( another .py ) which streams data to the server and the server streams back to the client.\nThe server has to monitor these worker processes and send messages to them ( basically kill them or send messages to control which kind of data gets streamed ). These messages are asynchronous ( e.g. 
depend on the web client )\nI thought of using ZeroMQ sockets over an ipc:\/\/ transport class, but the socket.recv() call is blocking.\nShould I use two sockets ( one for streaming data to the server and another to receive control messages from the server )?","AnswerCount":2,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":563,"Q_Id":46139185,"Users Score":2,"Answer":"Using a separate socket for signalling and messaging is always better\nWhile a Poller-instance will help a bit, the cardinal step is to use a separate socket for signalling and another one for data-streaming. Always. The point is that in such a setup, both the Poller.poll() and the event-loop can remain socket-specific and spend not more than a predefined amount of time, during a real-time controlled code-execution.\nSo, do not hesitate to set up a somewhat richer signalling\/messaging infrastructure as an environment where you will only enjoy the increased simplicity of control, separation of concerns and clarity of intents.\nZeroMQ is an excellent tool for doing this - including per-socket IO-thread affinity, so indeed fine-grained performance tuning is available at your fingertips.","Q_Score":3,"Tags":"python,sockets,ipc,zeromq","A_Id":46140815,"CreationDate":"2017-09-10T09:27:00.000","Title":"ZeroMQ bidirectional async communication with subprocesses","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am trying to use Python to find a faster way to sift through a large directory (approx 1.1TB) containing around 9 other directories, finding files larger than, say, 200GB or something like that, on multiple Linux servers, and it has to be Python.\nI have tried many things like calling du -h with the script, but du is just way too slow to go through a directory as large as 1TB. \nI've also tried the find command, like find .\/ +200G, but that is also going to take forever.\nI have also tried os.walk() and doing .getsize(), but it's the same problem - too slow.\nAll of these methods take hours and hours and I need help finding another solution if anyone is able to help me. Because not only do I have to do this search for large files on one server, but I will have to ssh through almost 300 servers and output a giant list of all the files > 200GB, and the three methods that I have tried will not be able to get that done. \nAny help is appreciated, thank you!","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":2409,"Q_Id":46144952,"Users Score":0,"Answer":"It is hard to imagine that you will find a significantly faster way to traverse a directory than os.walk() and du. Parallelizing the search might help a bit in some setups (e.g. SSD), but it won't make a dramatic difference.\nA simple approach to making things faster is to automatically run the scan in the background every hour or so, and have your actual script just pick up the results. 
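If you do stay with an in-Python traversal, an os.scandir()-based loop (Python 3.6+ for the context-manager form) is about as lean as pure Python gets, because each DirEntry reuses information gathered during the directory scan instead of issuing a separate os.path.getsize() call per file - a minimal sketch, with the 200GB threshold as an illustrative default:\nimport os\n\ndef big_files(root, threshold=200 * 1024 ** 3):\n    stack = [root]\n    while stack:\n        path = stack.pop()\n        try:\n            entries = os.scandir(path)\n        except OSError:\n            continue  # permission denied, vanished directory, ...\n        with entries:\n            for entry in entries:\n                if entry.is_dir(follow_symlinks=False):\n                    stack.append(entry.path)\n                elif entry.is_file(follow_symlinks=False) and entry.stat(follow_symlinks=False).st_size > threshold:\n                    yield entry.path\n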
Pre-computing the results this way won't help if they need to be current, but it might work for many monitoring setups.","Q_Score":0,"Tags":"python,linux","A_Id":46145014,"CreationDate":"2017-09-10T19:51:00.000","Title":"Faster way to find large files with Python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I would like to import some Python files as dependencies in Airflow and use a function in python_callable in PythonOperator. I tried placing the dependency Python file in the dags folder, but it doesn't seem to work. I'm assuming the DAG is being moved to some other folder before being executed. Help appreciated!!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":996,"Q_Id":46154966,"Users Score":3,"Answer":"Check your environment variable \"AIRFLOW_HOME\". If it is not declared, it points to your home directory by default. In Airflow, the Python scripts are normally placed at \"AIRFLOW_HOME\"\/airflow\/dags.\nYou can place the Python script and its dependencies there, but I strongly recommend creating a package for the dependencies and installing it in your Python environment alongside Airflow, to avoid the unnecessary clutter of files in your dag folders.","Q_Score":1,"Tags":"python,airflow,apache-airflow","A_Id":46170951,"CreationDate":"2017-09-11T11:44:00.000","Title":"Airflow: External python in python_callable of PythonOperator","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am designing a backend service that sends messages from my server to some mobile app users. \nHaving retrieved their device token using a webhook, should I store these tokens in the DB and call create_platform_endpoint() every time I need to send a message? \nOr is storing the device token on the backend needless and excessive - once the ARN has been obtained from create_platform_endpoint(), is there no need to store mobile device tokens on the backend?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":359,"Q_Id":46161694,"Users Score":1,"Answer":"I would store the device token (and I do). I was able to use it when I needed to transparently migrate a few million endpoints from a US region to one in Asia. Might also come in handy if you also wanted to migrate off of AWS at some point. \n The only reason I wouldn't store it is because of GDPR, but if you're not worried about that then it's not like it's a lot of data.\nAlso you only need to call create_platform_endpoint() once, storing the result ARN. Watch out for a change to the device token. If it changes, the device will need to contact your server and notify it, and you'll call create_platform_endpoint() again. I've never actually seen this happen, however.","Q_Score":0,"Tags":"python,amazon-web-services,push-notification,amazon-sns","A_Id":46165313,"CreationDate":"2017-09-11T17:56:00.000","Title":"Storing endpoint ARN vs. 
Device Token at backend","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"Code:\nsh 'python .\/selenium\/xy_python\/run_tests.py'\nError:\nTraceback (most recent call last):\n File \".\/selenium\/xy_python\/run_tests.py\", line 6, in \n import nose\nImportError: No module named nose","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":5801,"Q_Id":46166480,"Users Score":0,"Answer":"I recommend explicitly activating a Python env before you run your script in your Jenkinsfile, to ensure you are in an environment which has nose installed.\nPlease check out virtualenv, tox, or conda for information on how to do so.","Q_Score":2,"Tags":"python,python-3.x,dsl","A_Id":46199573,"CreationDate":"2017-09-12T01:22:00.000","Title":"Calling a Python Script from Jenkins Pipeline DSL causing import error","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I see a lot of examples of how to use multiprocessing, but they all talk about spawning workers and controlling them while the main process is alive. My question is how to control background workers in the following way:\nStart 5 workers from the command line:\n\nmanager.py --start 5\n\nAfter that, I will be able to list and stop workers on demand from the command line:\n\nmanager.py --start 1 #will add 1 more worker\nmanager.py --list\nmanager.py --stop 2\nmanager.py --sendmessagetoall \"hello\"\nmanager.py --stopall\n\nThe important point is that manager.py should exit after every run. What I don't understand is how to get a list of already-running workers from a newly created manager.py program and communicate with them.\nEdit: Bilkokuya suggested that I have (1) a manager process that manages a list of workers... and also listens for incoming commands, and (2) a small command-line tool that sends messages to the first manager process... Actually it sounds like a good solution. But still, the question remains the same - how do I communicate with another process from a newly created command-line program (process 2)? All the examples I see (of Queue, for example) work only when both processes are running all the time","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":223,"Q_Id":46174679,"Users Score":0,"Answer":"The most portable solution I can suggest (although this will still involve further research for you), is to have a long-running process that manages the \"background worker\" processes. This shouldn't ever be killed off, as it handles the logic for piping messages to each subprocess.\nManager.py can then implement logic to communicate with that long-running process (whether that's via pipes, sockets, HTTP or any other method you like). So manager.py effectively just passes on a message to the 'server' process: \"hey please stop all the child processes\" or \"please send a message to process 10\" etc.\nThere is a lot of work involved in this, and a lot to research. But the main thing you'll want to look up is how to handle IPC (Inter-Process Communication). 
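As a minimal sketch of that pattern using the standard library's multiprocessing.connection module (Python 3 syntax; the address, authkey and the handle() dispatcher are illustrative placeholders, not an existing API):\n# manager_daemon.py - the long-running process that owns the workers\nfrom multiprocessing.connection import Listener\n\ndef handle(cmd):\n    # dispatch ('start', 5), ('list',), ('stop', 2), ... to your worker pool\n    return 'ok: %r' % (cmd,)\n\nwith Listener(('localhost', 6000), authkey=b'secret') as listener:\n    while True:\n        with listener.accept() as conn:\n            conn.send(handle(conn.recv()))\n\n# manager.py - the short-lived command line tool that exits after every run\nfrom multiprocessing.connection import Client\n\nwith Client(('localhost', 6000), authkey=b'secret') as conn:\n    conn.send(('start', 1))\n    print(conn.recv())\n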
This will allow your Manager.py script to interact with an existing\/long-running process that can better manage each background worker.\nThe alternative is to rely fully on your operating system's process management APIs. But I'd suggest from experience that this is a much more error-prone and troublesome solution.","Q_Score":0,"Tags":"python,multiprocessing,python-multiprocessing","A_Id":46175030,"CreationDate":"2017-09-12T10:59:00.000","Title":"using python multiprocessing to control independent background workers after the spawning process has been closed","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm trying to connect to a PostgreSQL database on Google Cloud using SQLAlchemy. Making a connection to the database requires specifying a database URL of the form: dialect+driver:\/\/username:password@host:port\/database\nI know what the dialect + driver is (postgresql), I know my username and password, and I know the database name. But I don't know how to find the host and port on the Google Cloud console. I've tried using the instance connection name, but that doesn't seem to work. Anyone know where I can find this info on Google Cloud?","AnswerCount":3,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":6507,"Q_Id":46178062,"Users Score":3,"Answer":"The hostname is the public IP address of the instance.","Q_Score":3,"Tags":"python,postgresql,google-cloud-platform,google-cloud-storage,google-cloud-sql","A_Id":64040093,"CreationDate":"2017-09-12T13:42:00.000","Title":"What is the hostname for a Google Cloud PostgreSQL instance?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"On my local system Elasticsearch works perfectly, but when I try to search on the server system the console shows: \"ConnectionTimeout caused by - ReadTimeoutError(HTTPConnectionPool(host=u'localhost', port=9200): Read timed out. (read timeout=10))\"","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":3169,"Q_Id":46188842,"Users Score":0,"Answer":"It appears that the localhost value you are trying to connect to is a Unicode string, host=u'localhost'. Not sure how you are getting\/assigning that value into a variable, but you should try to encode\/convert it to ASCII so that it can be properly interpreted during the HTTP connection routine.","Q_Score":0,"Tags":"python,django,elasticsearch","A_Id":46230007,"CreationDate":"2017-09-13T04:25:00.000","Title":"ConnectionTimeout caused by - ReadTimeoutError(HTTPConnectionPool(host=u'localhost', port=9200): Read timed out. (read timeout=10))","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I have installed Anaconda on my Windows 10 system. Now I want to use Python in the newly installed Atom IDE. Atom cannot find the Python directory as it's not added to the PATH environment variable. \nI installed Python 3.6 separately and added it to the path variables to overcome this issue. However, I still run into issues like missing .dll files. 
I found that this will continue as long as Anaconda is installed on the system. \nIs there a way I can add the Anaconda Python path to Atom, or should I just add the Anaconda library to the path variables (which is not recommended by Anaconda)?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1003,"Q_Id":46201898,"Users Score":0,"Answer":"If you want to run Python scripts inside Atom, install the script package from the Atom packages. Then open an Anaconda prompt, cd into your preferred directory and run atom . Now when you press Ctrl+Shift+B to run your Python scripts, the script package will run them with the Anaconda Python.","Q_Score":1,"Tags":"anaconda,atom-editor,path-variables,sublime-anaconda,python-install","A_Id":64047273,"CreationDate":"2017-09-13T15:49:00.000","Title":"Using Anaconda python directory in Atom IDE","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have the following scenario. Data (packets in this case) are received and processed by a Python function in real time as each datum streams by. So each datum is received and translated into a Python object. A lightweight algorithm is run on that object, which returns an output (a small dictionary). Then the object is discarded and the next one is handled. I have that program running. \nNow, for each object the algorithm will produce a small dictionary of output data. This dictionary needs to be processed (also in real time) by a separate, second algorithm. I envision my code running two processes. I need to have the second process \"listen\" for the outputs of the first. \nSo how do I write this second algorithm in Python so it can listen for and accept the data that is produced by the first? For a concrete example, suppose the first algorithm applies a timestamp, then passes the result to a buffer, and the second algorithm listens - it grabs from the buffer and processes it. If there is nothing in the buffer, then as soon as something appears it processes it.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":395,"Q_Id":46202639,"Users Score":0,"Answer":"If you need to use different processes (as opposed to multiple functions in a single process), perhaps a messaging queue would work well for you, whereby your first process does whatever it does and puts the results in a message queue that your second process is listening to.\nThere are obviously a lot of options available, but based on your description this sounds like a reasonable approach.","Q_Score":0,"Tags":"python,multiprocessing,buffer","A_Id":46202744,"CreationDate":"2017-09-13T16:31:00.000","Title":"How to send, buffer, and receive Python objects between two python programs?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I had a similar issue when a Python script called from a scheduled task on a Windows server tried to access a network shared drive. It would run from IDLE on the server but not from the task. When I switched to using a local drive it worked fine. This script works if run from the console or IDLE on the server, and partially executes when run as a scheduled task. It pulls data from an MSSQL database and creates a local CSV. 
That part works when called from the task, but the part that uploads the file to Google Drive does not. I have, as before, tried other methods of calling it outside of the scheduled task (e.g. PowerShell, a bat file...) but got the same results. I am using google-api-python-client (1.6.2) and can't find anything. Thanks in advance!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":33,"Q_Id":46206424,"Users Score":0,"Answer":"I found my answer. In the optional \"Start in\" field of the Windows Scheduled Task Action dialog, I added the path to the Python scripts folder, and the script now runs perfectly.","Q_Score":0,"Tags":"python-2.7,google-api-python-client","A_Id":46245436,"CreationDate":"2017-09-13T20:37:00.000","Title":"Python27 to upload a file to google drive does not work when run as a windows scheduled task. Why?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am attempting to update pip for IDLE (Python 3.5) on mac using the terminal.\nIt tells me that pip is up to date in anaconda:\nDaniels-MacBook-Pro-3:~ danielsellers$ pip install --upgrade pip\nRequirement already up-to-date: pip in .\/anaconda\/lib\/python3.6\/site-packages\nDaniels-MacBook-Pro-3:~ danielsellers$ \nBut IDLE is recommending I update pip, which I am inclined to do because it keeps crashing while trying to install modules.\nHow do I update the version of pip which IDLE is running? I'm somewhat new to Python, thanks in advance","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":112,"Q_Id":46287979,"Users Score":0,"Answer":"If you use Anaconda you are fine; if you don't, uninstall it. Systems get very confused about what you want to update\/use, so just pick one and use it!","Q_Score":0,"Tags":"python,macos","A_Id":46288007,"CreationDate":"2017-09-18T20:49:00.000","Title":"Pip update is failing in terminal because Anaconda is up to date - idle is not","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"In Python CGI, when I call name = os.popen('whoami').read(), the name returns as Apache. How can I get the original login name of the user logged in to this machine? For example, in a terminal window, when I run whoami, the login name returns as \"operator\". In the Apache server, is there a way to get the login name as \"operator\"?\nThanks!\nTom Wang","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":183,"Q_Id":46289627,"Users Score":0,"Answer":"A Python CGI script gets executed when Apache gets a request. Apache redirects the request to Python. Since user 'apache' is running this script, you get that as the id. You can only get the id as operator if user 'operator' is running the script. Users connect to your script using a web browser, and the request is intercepted by Apache. There is no way to determine which OS user is making the request from the web browser, as they never log in to the machine where Apache is running. 
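If, however, Apache itself authenticates the client (e.g. HTTP Basic auth or an LDAP auth module), the authenticated name is exposed to the script through the standard CGI environment - a minimal sketch:\nimport os\n\n# Only populated when the web server authenticated the request:\nuser = os.environ.get('REMOTE_USER')\n# The client's address is always provided by the CGI environment:\naddr = os.environ.get('REMOTE_ADDR')\nprint('Content-Type: text\/plain')\nprint('')\nprint('user=%s addr=%s' % (user, addr))\n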
You can get their IP and port from the same CGI environment (REMOTE_ADDR, REMOTE_PORT).","Q_Score":0,"Tags":"python,apache","A_Id":46290001,"CreationDate":"2017-09-18T23:35:00.000","Title":"How to get original login name while in Python CGI Apache web server?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have searched many links but didn't find any solution to my problem. I have seen the option to pass keys\/variables in the Airflow UI, but it is really confusing for the end user to work out which key is associated with which DAG. Is there any way to implement functionality like this:\nWhile running an Airflow job, the end user will be asked for the values of some parameters, and after entering those details Airflow will run the job.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1915,"Q_Id":46294026,"Users Score":7,"Answer":"Unfortunately, it's not possible to wait for user input in, say, the Airflow UI. DAGs are programmatically authored, which means defined as code, and they should not be dynamic, since they are imported by the web server, scheduler and workers at the same time and have to be the same.\nThere are two workarounds I came up with, and we have used the first in production for a while.\n1) Create a small wrapper around Variables. For each DAG, load the Variables and compose arguments which are then passed into Operators via default_arguments.\n2) Add a Slack operator which can be programmatically configured to wait for user input. Afterwards, propagate that information via XCOM into the next Operator.","Q_Score":8,"Tags":"python,parameter-passing,airflow,apache-airflow","A_Id":46296899,"CreationDate":"2017-09-19T07:06:00.000","Title":"In airflow can end user pass parameters to keys which are associated with some specific dag","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am getting the below error while running cqlsh in Cassandra 2.2.10. Can somebody help me get past this hurdle?\n\n[root@rac1 site-packages]# $CASSANDRA_PATH\/bin\/cqlsh\nPython Cassandra driver not installed, or not on PYTHONPATH. You might\n try \u201cpip install cassandra-driver\u201d.\nPython: \/usr\/local\/bin\/python Module load path:\n [\u2018\/opt\/cassandra\/apache-cassandra-2.2.10\/bin\/..\/lib\/six-1.7.3-py2.py3-none-any.zip\u2019,\n \u2018\/opt\/cassandra\/apache-cassandra-2.2.10\/bin\/..\/lib\/futures-2.1.6-py2.py3-none-any.zip\u2019,\n \u2018\/opt\/cassandra\/apache-cassandra-2.2.10\/bin\/..\/lib\/cassandra-driver-internal-only-3.5.0.post0-d8d0456.zip\/cassandra-driver-3.5.0.post0-d8d0456\u2019,\n \u2018\/opt\/cassandra\/apache-cassandra-2.2.10\/bin\u2019,\n \u2018\/usr\/local\/lib\/python2.7\/site-packages\u2019,\n \u2018\/usr\/local\/lib\/python27.zip\u2019, \u2018\/usr\/local\/lib\/python2.7\u2019,\n \u2018\/usr\/local\/lib\/python2.7\/plat-linux2\u2019,\n \u2018\/usr\/local\/lib\/python2.7\/lib-tk\u2019, \u2018\/usr\/local\/lib\/python2.7\/lib-old\u2019,\n \u2018\/usr\/local\/lib\/python2.7\/lib-dynload\u2019]\nError: can\u2019t decompress data; zlib not available","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":2223,"Q_Id":46314983,"Users Score":0,"Answer":"Cassandra uses the Python driver bundled in-tree in a zip file. 
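You can check whether the interpreter on your PATH has zlib compiled in with a two-line test; the import fails immediately on a runtime built without it:\nimport zlib  # raises ImportError on a Python built without zlib\nprint(zlib.ZLIB_VERSION)\n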
If your Python runtime was not built with zlib support, it cannot use the zip archive in the PYTHONPATH. Either install the driver directly (pip install) as suggested, or put a correctly configured Python runtime in your path.","Q_Score":1,"Tags":"python,linux,cassandra","A_Id":47167910,"CreationDate":"2017-09-20T06:42:00.000","Title":"Python Cassandra driver not installed, or not on PYTHONPATH","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"So I have a rather large JSON database that I'm maintaining with Python. It's basically scraping data from a website on an hourly basis, and I'm running daily restarts on the system (Linux Mint) via crontab. My issue is that if the system happens to restart during the database updating process I get corrupted JSON files. \nMy question is if there is any way to delay the system restart in my script to ensure the system shuts down at a safe time? I could issue the restart command inside the script itself, but if I decide to run multiple scripts that are similar to this in the future I'll obviously have a problem.\nAny help here would be greatly appreciated. Thanks\nEdit: Just to clarify, I'm not using the Python jsondb package. I am doing all file handling myself","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":33,"Q_Id":46319434,"Users Score":0,"Answer":"So my solution to this was quite simple (just protect data integrity):\nBefore write - back up the file\nOn successful write - delete the backup (avoids doubling the size of the DB)\nWherever a corrupted file is encountered - revert to the backup\nThe idea being that if the system closes the script during the file backup, it doesn't matter, we still have the original; and if the system closes the script during a write to the original file, the backup never gets deleted and we can just use that instead. All in all it was just an extra 6 lines of code and appears to have solved the issue.","Q_Score":0,"Tags":"python-2.7,jsondb","A_Id":46363482,"CreationDate":"2017-09-20T10:19:00.000","Title":"Delaying system shutdown during json DB update in python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have an Ubuntu 16.04 system with an Anaconda installation. I want to compile and install OpenCV 3.3 and also use the Python bindings. I used the following CMake command:\n\ncmake -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=\/usr\/local -D WITH_CUDA=ON -D WITH_FFMPEG=1 -D WITH_CUBLAS=ON -D WITH_TBB=ON -D WITH_V4L=ON -D WITH_QT=ON -D WITH_OPENGL=ON -D INSTALL_PYTHON_EXAMPLES=ON -D INSTALL_C_EXAMPLES=OFF -D OPENCV_EXTRA_MODULES_PATH=~\/opencv_contrib-3.3.0\/modules -D BUILD_EXAMPLES=ON -D BUILD_TIFF=ON -D PYTHON_EXECUTABLE=\/home\/guel\/anaconda2\/envs\/py27\/bin\/python -D PYTHON2_LIBRARIES=\/home\/guel\/anaconda2\/envs\/py27\/lib\/libpython2.7.so -D PYTHON2_PACKAGES_PATH=\/home\/guel\/anaconda2\/envs\/py27\/lib\/python2.7\/site-packages -DWITH_EIGEN=OFF -D BUILD_opencv_cudalegacy=OFF ..\n\nThe command does the job but then, of course, OpenCV is installed only for a specific conda environment that I created. 
However, I want to be able to use it from other environments as well, without having to go through the compilation for each and every environment. Is there a way to achieve that in a simple way? Since the OpenCV libraries are actually installed in \/usr\/local, I can imagine that there must be a simple way to link the libraries into each new conda environment, but I couldn't figure out exactly how.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":2609,"Q_Id":46339134,"Users Score":0,"Answer":"You are pointing the Python package and library paths at an environment-specific location; to make OpenCV available to every environment, try using the base anaconda\/bin and lib paths instead. (Can't make this a comment due to low reputation.)","Q_Score":3,"Tags":"python,opencv,anaconda,conda","A_Id":46645511,"CreationDate":"2017-09-21T08:34:00.000","Title":"Installing OpenCV for all conda environments","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm using xmlrunner in combination with unittest in Python for testing purposes, by running \nxmlrunner.XMLTestRunner(outsuffix=\"\",).run(suite)\nwhere suite is a standard unittest.suite.TestSuite.\nWhen I run the tests on my Windows machine I get output from the standard print() function in my tests. Unfortunately I don't get any output to my terminal when running the tests on my Fedora machine. The output is correctly logged to an XML file, but I would like to have the output go directly to stdout \/ the terminal.\nDid I miss something that explains this behaviour?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":249,"Q_Id":46346745,"Users Score":0,"Answer":"OK, I found the reason. On my Fedora 24 machine an old version of xmlrunner (1.14.0 - something) was installed. I used pip to install the latest xmlrunner (1.7.7) for python3 and now I do get the output directly on the terminal.","Q_Score":0,"Tags":"python,linux,unit-testing","A_Id":46359839,"CreationDate":"2017-09-21T14:34:00.000","Title":"No print to stdout when running xmlrunner in python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am having trouble installing any PIP module. \nSteps\/Precautions I have taken: \n\nI uninstalled Python and downloaded the most recent Python 3.6.2.\nPIP seems to be installed already C:\\Users\\Danc2>C:\\Users\\Danc2\\AppData\\Local\\Programs\\Python\\Python36-32\\scripts\\pip3.6 (also included are files: pip, pip3).\npip install pyperclip returns \n\n\n'pip' is not recognized as an internal or external command, operable program or batch file.\n\nUsing many different forums and typing commands into CMD, I come up with results like: \"'pip' is not recognized as an internal or external command,\noperable program or batch file.\" \nWhen trying to refer to my folder location: \"C:\\Users\\Danc2>C:\\Users\\Danc2>C:\\Users\\Danc2\\AppData\\Local\\Programs\\Python\\Python36-32\\scripts\nAccess is denied.\"\n\nSorry for the common question, but I just cannot figure it out for my individual problem. 
I appreciate any kind effort to help.\nDaniel.","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":147,"Q_Id":46353305,"Users Score":0,"Answer":"I think you should restart your computer. If that doesn't work, go to Control Panel -> System -> Advanced Settings -> Environment Variables.\nIn the system variables you should go to Path and add the folder containing the pip.exe to your path.","Q_Score":0,"Tags":"python,pip,pyperclip","A_Id":46353348,"CreationDate":"2017-09-21T21:17:00.000","Title":"Installing PIP Modules","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am having trouble installing any PIP module. \nSteps\/Precautions I have taken: \n\nI uninstalled Python and downloaded the most recent Python 3.6.2.\nPIP seems to be installed already C:\\Users\\Danc2>C:\\Users\\Danc2\\AppData\\Local\\Programs\\Python\\Python36-32\\scripts\\pip3.6 (also included are files: pip, pip3).\npip install pyperclip returns \n\n\n'pip' is not recognized as an internal or external command, operable program or batch file.\n\nIn using many different forums and typing commands into CMD I come up with results like: \"'pip' is not recognized as an internal or external command,\noperable program or batch file.\" \nWhen trying to refer to my folder location: \"C:\\Users\\Danc2>C:\\Users\\Danc2>C:\\Users\\Danc2\\AppData\\Local\\Programs\\Python\\Python36-32\\scripts\nAccess is denied.\"\n\nSorry for the common question, but I just cannot figure it out for my individual problem. I appreciate any kind effort to help.\nDaniel.","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":147,"Q_Id":46353305,"Users Score":0,"Answer":"If your Python installation works at all with the command line, then replacing pip with python -m pip in the command line is likely to fix the issue for you.","Q_Score":0,"Tags":"python,pip,pyperclip","A_Id":46353369,"CreationDate":"2017-09-21T21:17:00.000","Title":"Installing PIP Modules","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm using Visual Studio Code (Version 1.16.1) with Python Extension (Don Jayamanne version 0.7.0). As I finish debugging a script, I consistently get an error - \"Debug adapter process has terminated unexpectedly\". This happens regardless of execution process (Integrated or External terminal\/console).\nI'm an instructor for a Python class and all of my students are having to clear this error every time they debug. 
I and my students would appreciate any help with this.\nThanks,\nJohn","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":613,"Q_Id":46354897,"Users Score":0,"Answer":"In my case, removing the .vscode folder in my user folder and reinstalling the debugger extension helped","Q_Score":1,"Tags":"python","A_Id":48916162,"CreationDate":"2017-09-21T23:59:00.000","Title":"Debug adapter process has terminated unexpectedly","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have to do some work with a company with security restrictions on their computers.\nI need to set up a Python 2.7 virtualenv on their Windows 10 machine but can't add python to the Windows path. I installed Python through the Windows Software Centre. The interpreter is in the usual C:\\Python27\\python.exe but it is not added to the Windows path. When I run python in CMD it is not recognized, although C:\\Python27\\python opens the interpreter. \nThe problem is that to add it to the Windows path I need admin privileges. It is simply not possible. I know the obvious answer is to contact the admin but again it is not an option. \nSo the problem is: with this setup I need to install virtualenv, create my whole environment inside it, and work on it. \nI can't find a way to do it without Python in the path.","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":4036,"Q_Id":46363175,"Users Score":1,"Answer":"You could run: SET PATH=%PATH%;C:\\Python27\\ from a command prompt and it will add python to the path temporarily (i.e. it will be gone when the command prompt is closed)","Q_Score":0,"Tags":"python,windows,virtualenv","A_Id":46363515,"CreationDate":"2017-09-22T11:00:00.000","Title":"working with virtualenv without Python in Windows path","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I tried installing the z3 theorem prover.\nI am using Ubuntu 16.04.\nAnd I am using Python 2.7.12\nI did the installation in two ways:\n\nI used sudo apt-get install z3\nBut when I tried to import z3 by opening python from the terminal using from z3 import * and also using import z3 as z I got an error saying No Module named z3\nI used\npython scripts\/mk_make.py\ncd build\nmake\nsudo make install\n\nand also added build\/python to PYTHONPATH and build to LD_LIBRARY_PATH but I got the same problem when I tried to import z3 the same way.\nNow I tried running examples.py\nwhich is in the folder build\/python\nAnd lo!!! No Error!!!\nI also tried running other example files and I didn't get any errors for them either.\nCan anybody help me understand why I cannot import z3 when I open Python from a terminal or any folder outside of build\/python?\nEDIT:\nI found out that I have to add the folders to the path every time I open a terminal outside of build\/python","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1158,"Q_Id":46369458,"Users Score":1,"Answer":"I found out that I have to add the paths every time I open a new terminal window. 
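For reference, a minimal sketch of what each new session effectively needs (the build path below is an assumption and will differ on your machine):

    import sys

    # Hypothetical build location; use wherever you ran mk_make.py.
    sys.path.append('\/home\/user\/z3\/build\/python')

    # Note: libz3.so is resolved via LD_LIBRARY_PATH, which the loader reads
    # when the interpreter starts, so that part still has to be exported in
    # the shell (e.g. in ~\/.bashrc) rather than set from inside Python.
    import z3
    print(z3.get_version_string())
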
Then only z3 can be imported from anywhere.","Q_Score":1,"Tags":"python,python-2.7,z3,z3py","A_Id":47165922,"CreationDate":"2017-09-22T16:38:00.000","Title":"Cannot import z3 in python-2.7.12 in ubuntu even after installation","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am trying to understand the way Gevent\/Greenlet chooses the next greenlet to be run. Threads use the OS Scheduler. Go Runtime uses 2 hierarchical queues. \nBy default, Gevent uses libevent for its plumbing. But how does libevent choose the next greenlet to run, if many are ready?\nIs it random?\nI have already read the docs and looked at the source code. I still do not know.\nUpdated: Text changed to recognize that Gevent uses libevent. The question still applies to libevent.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":174,"Q_Id":46384607,"Users Score":0,"Answer":"Its underlying dispatch model is the event loop in libevent, which uses the event base to monitor for the different events and react to them accordingly. From what I gleaned, it takes the greenlets, does some juggling with semaphores, and then dispatches them onto libevent.","Q_Score":0,"Tags":"python,concurrency,gevent,cpython,greenlets","A_Id":46387137,"CreationDate":"2017-09-23T21:52:00.000","Title":"How does libevent choose the next Gevent greenlet to run?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have tasks A -> B -> C in Airflow and when I run the DAG and all complete with success, I'd like to be able to clear B alone (while leaving C marked as success). B clears and gets put into the 'no_status' state but then when I try to re-run B, nothing happens. I've tried --ignore_dependencies, --ignore_depends_on_past and --force but to no avail. B seems to only re-run if C is also cleared and then everything re-runs as expected.\nThe reason why I'd like to be able to re-run B specifically without changing the pipeline is that some of B's external inputs may change slightly (a file changed, or a tweak) and I'd like to run it and evaluate its output before restarting the downstream tasks (to mitigate any potential interruption).","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":2112,"Q_Id":46397373,"Users Score":0,"Answer":"In the UI, when you clear a task instance, the downstream option is checked by default.\nIf you uncheck it, it will clear only this one and not re-run the downstream tasks","Q_Score":2,"Tags":"python,airflow","A_Id":46435566,"CreationDate":"2017-09-25T03:55:00.000","Title":"Airflow force re-run of upstream task when cleared even though downstream is marked success","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a core set of bash aliases defined in my .bash_profile (Mac). But when I activate a pipenv with pipenv shell, my aliases don't work and the bash alias command returns nothing. 
\nIs there a configuration step needed to spawn pipenv shells that inherit bash aliases from the parent shell?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":3809,"Q_Id":46416052,"Users Score":20,"Answer":"Aliases are never inherited. .bash_profile is only sourced for login shells, and pipenv apparently creates a nonlogin interactive shell. Aliases should be defined in .bashrc, and on a Mac (where terminal emulators starts login shells by default), add [[ -f ~\/.bashrc ]] && source ~\/.bashrc to the end of your .bash_profile.","Q_Score":14,"Tags":"python,bash,pipenv","A_Id":46416282,"CreationDate":"2017-09-26T00:25:00.000","Title":"pipenv and bash aliases","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"Currently, I am running two versions of python on Mac. The native one (2.7.10) (\/usr\/bin\/python), and another one, which has been downloaded via home-brew (2.7.14).\nI want to download two versions of pip and download packages depending on the python version I want to use.\nIs this possible?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":2371,"Q_Id":46431306,"Users Score":0,"Answer":"It's worth mentioning (for Windows Users), once you have multiple versions of Python installed, you can easily manage packages for each specific version by calling pip. from a cmd window.\nfor example, I have Python 2.7, 3.6 and 3.7 currently installed, and I can manage my packages for each installation using pip2.7, pip3.6 and pip3.7 respectively ...\nOn Windows 10, $ pip3.7 install works for me - haven't tested it with venv instances yet though","Q_Score":1,"Tags":"python,macos,python-2.7,pip,python-2.x","A_Id":54506145,"CreationDate":"2017-09-26T16:12:00.000","Title":"How to install pip associated to different versions of Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a script that i run every day and want to make a schedule for it, i have already tried a batch file with: \nstart C:\\Users\\name\\Miniconda3\\python.exe C:\\script.py\nAnd im able to run some basic python commands in it, the problem is that my actual script uses some libraries that were installed with Anaconda, and im unable to use them in the script since Anaconda will not load.\nIm working on windows and can't find a way to start Anaconda and run my script there automatically every day.","AnswerCount":4,"Available Count":2,"Score":1.0,"is_accepted":false,"ViewCount":27782,"Q_Id":46437863,"Users Score":7,"Answer":"I had a similar problem a few days ago. \nWhat I discovered is that anaconda prompt is nothing but your usual cmd prompt after running an 'activate.bat' script which is located in the anaconda 'Scripts' folder. \nSo to run your python scripts in anaconda all you need to do is write 2 lines in a batch file. (Open notepad and write the lines mentioned below. 
Save the file with .bat extension)\n\ncall C:\\....path to anaconda3\\Scripts\\activate.bat\ncall python C:\\path to your script\\Script.py\n\nThen you schedule this batch file to run as you wish and it will run without problems.","Q_Score":26,"Tags":"python,batch-file,scheduled-tasks,anaconda,conda","A_Id":57192970,"CreationDate":"2017-09-27T00:54:00.000","Title":"Schedule a Python script via batch on windows (using Anaconda)","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a script that i run every day and want to make a schedule for it, i have already tried a batch file with: \nstart C:\\Users\\name\\Miniconda3\\python.exe C:\\script.py\nAnd im able to run some basic python commands in it, the problem is that my actual script uses some libraries that were installed with Anaconda, and im unable to use them in the script since Anaconda will not load.\nIm working on windows and can't find a way to start Anaconda and run my script there automatically every day.","AnswerCount":4,"Available Count":2,"Score":0.049958375,"is_accepted":false,"ViewCount":27782,"Q_Id":46437863,"Users Score":1,"Answer":"Found a solution, i copied the \"activate.bat\" file in \"C:\\Users\\yo\\Miniconda3\\Scripts\" and renamed it as schedule.bat and added my script (copy pasted it) on the end of the file.\nThen i can schedule a task on windows that executes schedule.bat everyday","Q_Score":26,"Tags":"python,batch-file,scheduled-tasks,anaconda,conda","A_Id":46438132,"CreationDate":"2017-09-27T00:54:00.000","Title":"Schedule a Python script via batch on windows (using Anaconda)","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am using Jupyter Notebooks (through Anaconda) on a Mac OS X Sierra. Now I would like to access a notebook saved in my iCloud Drive. However, I have been unable to find a way to access iCloud via the Juypter interface. I can of course upload a file with the \"Upload\" button, but not access the file sitting in my iCloud directly, which I would much prefer. How can I do that?","AnswerCount":5,"Available Count":2,"Score":0.1194272985,"is_accepted":false,"ViewCount":8894,"Q_Id":46444903,"Users Score":3,"Answer":"Mac: Access your iCloud Documents folder with Jupyter Notebook or JupyterLab\nHi!\nIn my case what I do it's:\n\nOpen a Terminal in Your Mac\nType:\n\n\ncd ~\/Library\/Mobile*Documents\/com~apple~CloudDocs\/Documents\n\nVerify you are at the right folder. Type\n\n\npwd\n\nOpen Jupyter Notebook or JupyterLab. Type:\n\n\njupyter notebook\n\n\nor type:\n\njupyter lab\n\n\nYour browser will open a Jupyter Notebook (\/Lab) and you'll see your iCloud Documents Folder and all the subfolders included on it","Q_Score":7,"Tags":"python,jupyter-notebook,icloud","A_Id":57729567,"CreationDate":"2017-09-27T09:58:00.000","Title":"Access iCloud Drive in Jupyter Notebook","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am using Jupyter Notebooks (through Anaconda) on a Mac OS X Sierra. Now I would like to access a notebook saved in my iCloud Drive. 
However, I have been unable to find a way to access iCloud via the Juypter interface. I can of course upload a file with the \"Upload\" button, but not access the file sitting in my iCloud directly, which I would much prefer. How can I do that?","AnswerCount":5,"Available Count":2,"Score":1.0,"is_accepted":false,"ViewCount":8894,"Q_Id":46444903,"Users Score":12,"Answer":"While someone can launch the Jupyter Lab or Jupyter Notebook software from the iCloud directory, as described in the accepted response, this will need to be done each and every time you want to access Jupyter notebooks within iCloud. Instead I prefer to create a symbolic link (symlink...a shortcut) to my iCloud folder, in my home directory. This will always be present. In doing this I can launch Jupyter Lab or a Jupyter Notebook from any folder or from Anaconda Navigator and I will always see a \"folder\" to iCloud.\nTo do this you need to open a Terminal and copy and paste the statement below into the Terminal window and hit return:\nln -s ~\/Library\/Mobile*Documents\/com~apple~CloudDocs\/ iCloud\nAfter creating this symlink, there will always be an iCloud folder that you can browse in your home directory. You can use whatever name you want by replacing the \"iCloud\" at the on the statement above with your own name.","Q_Score":7,"Tags":"python,jupyter-notebook,icloud","A_Id":59936184,"CreationDate":"2017-09-27T09:58:00.000","Title":"Access iCloud Drive in Jupyter Notebook","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"When I start debugging (hitting the bug button to top-right), it gets connected and below message is shown: \n\nConnected to pydev debugger (build 172.3968.37)\n\nBut it doesn't stop at break points. Can anyone help me with this problem?\nI am using PyCharm CE on a Mac with python 3.6.2","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":2634,"Q_Id":46481314,"Users Score":0,"Answer":"Possibilities:\n\nAn unhandled exception was raised by code long before we ever got to the code containing a breakpoint. That is, the code which gets executed before the code containing a break-point contains an error.\nThe code containing a break-point is never executed, even when the script runs from start to finish. For example, if the break-point is inside an if-block and the condition of the if-statement is never true, then the breakpoint will never be reached.\nYou are not running the script you think you are running. Look in the upper right corner of the UI. What is the name of the file next to the run button (green triangle)?","Q_Score":4,"Tags":"python-3.x,debugging,pycharm","A_Id":50509969,"CreationDate":"2017-09-29T03:20:00.000","Title":"Debugger doesn't stop at break points","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I plan on using python in the LiClipse IDE to play around with AI. However i require a few libraries. The libraries can be installed with pip. They mention the commands to install and upgrade pip(e.g. 
python -m pip install -U pip), however I am not sure where I should write this command, because it does not work in either the CMD or the Python shell.\nIs there any condition I should think about when using these commands?\nThank you","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":43,"Q_Id":46487871,"Users Score":0,"Answer":"Open cmd\nGo to D:\\python\\Scripts (your installation directory)\nRun the command pip install -U pip","Q_Score":0,"Tags":"python","A_Id":46487918,"CreationDate":"2017-09-29T11:29:00.000","Title":"How to (and where) install\/upgrade pip for Python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm integrating MicroPython into a microcontroller and I want to add a debug step-by-step execution mode to my product (via a connection to a PC).\nThankfully, MicroPython includes a REPL aka Python shell functionality: I can feed it one line at a time and execute.\nI want to use this feature to single-step on the PC-side and send in the lines in the Python script one-by-one. \nIs there ANY difference, besides possibly timing, between running a Python script one line at a time vs python my_script.py?","AnswerCount":3,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":1057,"Q_Id":46495647,"Users Score":1,"Answer":"I don't know whether MicroPython has compile() and exec() built-in.\nBut when embedded Python has them and when the MCU has enough RAM, then I do the following:\n\nSend a line to the embedded shell to start the creation of a variable with a multiline string.\n'_code = \"\"\"\\'\nSend the code I wish to execute (line by line or however)\nClose the multiline string with \"\"\"\nSend an exec command to run the transferred code stored in the variable on the MCU and pick up the output.\n\nIf your RAM is small and you cannot transfer the whole code at once, you should transfer it in blocks that would be executed. Like functions, loops, etc.\nIf you can compile bytecode for MicroPython on a PC, then you should be able to transfer it and prepare it for execution. This would use a lot less RAM.\nBut whether you can inject the raw bytecode into the shell and run it depends on how much MicroPython resembles CPython.\nAnd yep, there are differences. As explained in another answer, line by line execution can be tricky. So blocks of code are your best bet.","Q_Score":1,"Tags":"python,micropython","A_Id":46496039,"CreationDate":"2017-09-29T19:22:00.000","Title":"Run Python script line-by-line in shell vs atomically","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a requirement to set up Hadoop to store files, not just text files: they can be images, videos, PDFs. And there will be a web application from where users can add files and access files whenever needed.\nIs it possible to implement? Also, the web application will need to be developed by me. Thank You.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":139,"Q_Id":46559124,"Users Score":1,"Answer":"If your application is written in Java, this is easily possible using the DFS Client libraries which can read and write files in HDFS in a very similar way to a standard filesystem. 
Basically can open an input or output stream and read whatever data you want.\nIf you are planning to use python to build the web application, then you could look at webHDFS, which provides a HTTP based API to put and get files from HDFS.","Q_Score":2,"Tags":"php,python-3.x,hadoop,hdfs,bigdata","A_Id":46566798,"CreationDate":"2017-10-04T07:18:00.000","Title":"Can I access HDFS files from Custom web application","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"\/lib64\/libc.so.6: version `GLIBC_2.22' not found (required by \/var\/task\/pyhull\/_pyhull.so).\nNot able to fix this error on Aws Lambda any help please ?","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":3248,"Q_Id":46568074,"Users Score":0,"Answer":"It was related to Lambda Server lib problem.Solved by making zip in aws ec2 server by installing all python libraries there in ec2","Q_Score":0,"Tags":"python,amazon-web-services,lambda,glibc","A_Id":49010419,"CreationDate":"2017-10-04T14:59:00.000","Title":"\/lib64\/libc.so.6: version `GLIBC_2.22' not found in Aws Lambda Server","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"\/lib64\/libc.so.6: version `GLIBC_2.22' not found (required by \/var\/task\/pyhull\/_pyhull.so).\nNot able to fix this error on Aws Lambda any help please ?","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":3248,"Q_Id":46568074,"Users Score":0,"Answer":"The \/var\/task\/pyhull\/_pyhull.so was linked against GLIBC-2.22 or later.\nYou are running on a system with GLIBC-2.21 or earlier.\nYou must either upgrade your AWS system, or get a different _pyhull.so build.","Q_Score":0,"Tags":"python,amazon-web-services,lambda,glibc","A_Id":46575657,"CreationDate":"2017-10-04T14:59:00.000","Title":"\/lib64\/libc.so.6: version `GLIBC_2.22' not found in Aws Lambda Server","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"Setup: Google App Engine application on Python standard environment.\nCurrently, the app uses the NDB library to read\/write from its Datastore. It uses async tasklets for parallel, asynchronous reads from Datastore, and memcache.\nIf I would like to use Firestore as a replacement for Datastore, it seems that I would have to use the Google Cloud Client Library for Python. I believe that the google-cloud lib doesn't support a mechanism like tasklets. But I wonder: Does the lib use a thread-safe cache-mechanism for its requests to the Firestore API, and maybe even GAE's memcache?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":217,"Q_Id":46585670,"Users Score":2,"Answer":"The Cloud Firestore server-side client libraries are not optimized for App Engine Standard. 
They don't integrate with a caching solution like GAE's memcache; you'd have to write that layer yourself.","Q_Score":1,"Tags":"google-app-engine,firebase,google-app-engine-python,google-cloud-python,google-cloud-firestore","A_Id":46625646,"CreationDate":"2017-10-05T12:29:00.000","Title":"Does Cloud Python lib in GAE use caching or memcache for access to Cloud Firestore data?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I'm working with a small company currently that stores all of their app data in an AWS Redshift cluster. I have been tasked with doing some data processing and machine learning on the data in that Redshift cluster.\nThe first task I need to do requires some basic transforming of existing data in that cluster into some new tables based on some fairly simple SQL logic. In an MSSQL environment, I would simply put all the logic into a parameterized stored procedure and schedule it via SQL Server Agent Jobs. However, sprocs don't appear to be a thing in Redshift. How would I go about creating a SQL job and scheduling it to run nightly (for example) in an AWS environment? \nThe other task I have involves developing a machine learning model (in Python) and scoring records in that Redshift database. What's the best way to host my python logic and do the data processing if the plan is to pull data from that Redshift cluster, score it, and then insert it into a new table on the same cluster? It seems like I could spin up an EC2 instance, host my python scripts on there, do the processing on there as well, and schedule the scripts to run via cron?\nI see tons of AWS (and non-AWS) products that look like they might be relevant (AWS Glue\/Data Pipeline\/EMR), but there's so many that I'm a little overwhelmed. Thanks in advance for the assistance!","AnswerCount":3,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":2154,"Q_Id":46618762,"Users Score":1,"Answer":"The 2 options for running ETL on Redshift\n\nCreate some \"create table as\" type SQL, which will take your source\ntables as input and generate your target (transformed table)\nDo the transformation outside of the database using an ETL tool. For\nexample EMR or Glue.\n\nGenerally, in an MPP environment such as Redshift, the best practice is to push the ETL to the powerful database (i.e. option 1). \nOnly consider taking the ETL outside of Redshift (option 2) where SQL is not the ideal tool for the transformation, or the transformation is likely to take a huge amount of compute resource.\nThere is no inbuilt scheduling or orchestration tool. 
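To illustrate option 1, a rough sketch of a nightly \"create table as\" job that cron could run; the connection details and table names are made up, and psycopg2 is just one of several PostgreSQL drivers that work against Redshift:

    import psycopg2

    # Hypothetical cluster and credentials.
    conn = psycopg2.connect(host='mycluster.example.redshift.amazonaws.com',
                            port=5439, dbname='analytics',
                            user='etl_user', password='...')
    conn.autocommit = True
    cur = conn.cursor()

    # Push the transformation down to the cluster itself.
    cur.execute("DROP TABLE IF EXISTS daily_summary")
    cur.execute("""
        CREATE TABLE daily_summary AS
        SELECT user_id, DATE_TRUNC('day', event_ts) AS day, COUNT(*) AS events
        FROM raw_events
        GROUP BY 1, 2
    """)
    conn.close()
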
Apache Airflow is a good option if you need something more full featured than cron jobs.","Q_Score":1,"Tags":"python,database,amazon-web-services,amazon-redshift","A_Id":46640656,"CreationDate":"2017-10-07T09:41:00.000","Title":"AWS Redshift Data Processing","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I'm specifically interested in avoiding conflicts when multiple users upload (upload_file) slightly different versions of the same python file or zip contents.\nIt would seem this is not really a supported use case as the worker process is long-running and subject to the environment changes\/additions of others.\nI like the library for easy, on-demand local\/remote context switching, so would appreciate any insight on what options we might have, even if it means some seamless deploy-like step for user-specific worker processes.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":77,"Q_Id":46627188,"Users Score":0,"Answer":"Usually the solution to having different user environments is to launch and destroy networks of different Dask workers\/schedulers on the fly on top of some other job scheduler like Kubernetes, Marathon, or Yarn.\nIf you need to reuse the same set of dask workers then you could also be careful about specifying the workers= keyword consistently, but this would be error prone.","Q_Score":1,"Tags":"python,dask,dask-distributed","A_Id":46631210,"CreationDate":"2017-10-08T03:15:00.000","Title":"What options exist for segregating python environments in a multi-user dask.distributed cluster?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"How do I write\/execute a Python script fullscreen at the terminal?\nI want to write a small program which should be shown like \"vim\", \"sl\", or \"nano\".","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":675,"Q_Id":46636125,"Users Score":0,"Answer":"As far as I can understand from your question, you can make the terminal fullscreen by pressing F11 (at least in Ubuntu)","Q_Score":0,"Tags":"python,terminal","A_Id":46636156,"CreationDate":"2017-10-08T21:13:00.000","Title":"Execute Python script in terminal fullscreen","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm currently writing some integration tests which should run on different physical machines and VMs with different OSes.\nFor one type of test I have to find out if an NVIDIA graphics card is installed on the running machine. 
I don't need any other information - only the vendor name (and it would be OK if I only knew if it is an NVIDIA graphic card or not - not interested in other vendors).\nI can only use the python standard lib so I think the best way is to use subprocesses and using the shell.\nAre there some commands for Windows(Win10x64) and Linux(Fedora, CentOS, SUSE) (without installing any tools or external libs) to find out the gpu vendor?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":794,"Q_Id":46641080,"Users Score":3,"Answer":"Following solution:\nOn Linux I'm using lsmod (or \/sbin\/lsmod; thanks to n00dl3) to see any occurence of \"nvidia\" and on Windows I'm using wmic path win32_VideoController get name to get some gpu information.","Q_Score":1,"Tags":"python,linux,windows,command,gpu","A_Id":46664350,"CreationDate":"2017-10-09T07:39:00.000","Title":"Getting gpu vendor name on windows and linux","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a python zosftplib function call that submits a MVS job successfully, but it does not recognize that the job completed and it does not receive the JES output from the job. I can successfully make the MVS FTP connection and can upload and download files.\nThe code looks like this:\njob = Myzftp.submit_wait_job(jcl)\nThe call eventually displays the following error message.\nFile \"C:\\Python27\\lib\\site-packages\\zosftplib.py\", line 410, in submit_wait_job %(msg, resp))\nZftpError: 'submit_wait_job error: 550 JesPutGet aborted, job not found (last response:250 Transfer completed successfully.)'\nAny suggestions would be helpful on how I can resolve this.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":694,"Q_Id":46648039,"Users Score":0,"Answer":"Working with MVS FTP and JES can be very specific. For example my MVS ID was MVSIDD. My jobcard had a jobname of MVSIDDXY. So the submit_wait_job() function would submit the job correctly and it would run successfully. The problem came with returning the JES output back to FTP. It was expecting a jobname with my id and a single character not two. By changing the jobname in the jobcard to MVSIDDX the function worked as expected and waited until the job was over and then returned all the JES output with it.","Q_Score":0,"Tags":"python,zos,mvs","A_Id":46654808,"CreationDate":"2017-10-09T13:46:00.000","Title":"zosftplib submit_wait_job(jcl) function does not receive JES output","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I have python webjobs running on Azure. There are times when the script hangs because of which I need to force restart it so next iteration can pick it up.\nHow do I configure force stopping a webjob if running longer than 1 hour? Basically I want to mimic task scheduler's behavior.\nMy files on the webjob:\n\nrun.cmd\nD:\\home\\Python35\\python.exe main.py\nmain.py\njust another python file\nsettings.job\n{\"is_singleton\":true}\n\nAt a given time, I want only 1 instance of the job running.\nEdit (Answer): As a workaround, I changed continuous webjob to triggered one. 
And added this in app setting:\n\nWEBJOBS_IDLE_TIMEOUT = 120\n\nI'm printing something on the console every now and then. If no CPU activity is detected for 2 mins, the job will be aborted.","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":431,"Q_Id":46657070,"Users Score":1,"Answer":"You might want a second webjob that you can incorporate a healthcheck and restart the primary webjob if it detects no activity in your processes.\nAnother idea could be to use azure automation and have a powershell script that just restarts the webjob every hour.","Q_Score":1,"Tags":"python,azure,azure-webjobs","A_Id":46657105,"CreationDate":"2017-10-10T00:51:00.000","Title":"Azure WebJob: Force stop if running longer than X mins","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I have just spent 2 days trying to build Tensorflow from source, and finally succeeded when I realized that sudo pip (even with the -H flag) was not finding my anaconda pip, but instead finding a pip installed with apt. Running, then, sudo -H ~\/anaconda3\/bin\/pip ... fixed my problem.\nIn order to avoid this kind of issue ever again (I had several issues in this process with the \"wrong\" python being used), is it possible for me to completely remove python from my system, keeping only Anaconda? Is it advisable?","AnswerCount":2,"Available Count":1,"Score":0.2913126125,"is_accepted":false,"ViewCount":417,"Q_Id":46663950,"Users Score":3,"Answer":"This is not just a Ubuntu issue but also a linux world wide issue. The system python is at the core of apt-get and yum package managers. Also the modern grub is based on python so removing it can make your machine unbootable.\nIn short, this will affect RHEL related distributions (CentOS\/Fedora) and Debian related distributions (Debian\/Ubuntu).","Q_Score":1,"Tags":"python,anaconda,ubuntu-16.04","A_Id":46665023,"CreationDate":"2017-10-10T10:02:00.000","Title":"Ubuntu completely remove python that is not Anaconda","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"Lets say I have directories like:\nfoo\/bar\/\nbar is chmod 777 and foo is 000.\nWhen I call os.path.isdir('foo\/bar') it returns just False, without any Permission Denied Exception or anything, why is it like that? Shouldn't it return True?","AnswerCount":3,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":1262,"Q_Id":46666941,"Users Score":3,"Answer":"If you are not root then you cannot access foo. 
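A quick way to see this for yourself (a sketch; run as an unprivileged user in an empty directory):

    import os

    os.makedirs('foo\/bar')
    os.chmod('foo', 0o000)

    print(os.path.isdir('foo\/bar'))  # False: stat() fails with EACCES,
                                      # and isdir() swallows the error
    try:
        os.listdir('foo')             # this, by contrast, raises an error
    except OSError as e:
        print(e)

    os.chmod('foo', 0o755)            # restore access so cleanup is possible
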
Therefore you can't check if foo\/bar exists and it returns False because it cannot find a directory with that name (as it cannot access the parent directory).","Q_Score":7,"Tags":"python,unix","A_Id":46667023,"CreationDate":"2017-10-10T12:34:00.000","Title":"os.path.isdir() returns false on unaccessible, but existing directory","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"Lets say I have directories like:\nfoo\/bar\/\nbar is chmod 777 and foo is 000.\nWhen I call os.path.isdir('foo\/bar') it returns just False, without any Permission Denied Exception or anything, why is it like that? Shouldn't it return True?","AnswerCount":3,"Available Count":2,"Score":0.1325487884,"is_accepted":false,"ViewCount":1262,"Q_Id":46666941,"Users Score":2,"Answer":"os.path.isdir can return True or False, but cannot raise an exception.\nSo if the directory cannot be accessed (because parent directory doesn't have traversing rights), it returns False.\nIf you want an exception, try using os.chdir or os.listdir that are designed to raise exceptions.","Q_Score":7,"Tags":"python,unix","A_Id":46666984,"CreationDate":"2017-10-10T12:34:00.000","Title":"os.path.isdir() returns false on unaccessible, but existing directory","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"How can I identify a remote host OS (Unix\/Windows) using python ? One solution I found is to check whether the port22 is open but came to know that some Windows hosts also having Port 22 open but connections refused. Please let me know the efficient way to do the same. Thanks in advance.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":3377,"Q_Id":46669453,"Users Score":1,"Answer":"For security reasons, most operating systems do not advertise information over the network. While tools such as nmap can deduce the OS running on a remote system by scanning ports over the network the only way to reliably know the OS is to login to the system. In many cases the OS will be reported as part of the login process so establishing a connection over the network will suffice to determine the OS. Running \"uname -a\" on the remote system will also retrieve the OS type on linux systems.\nThis will retrieve the welcome string from HOST which usually includes the OS type. Substitute a valid user name for UNAME and host name for HOST.\n\n #!\/usr\/bin\/env python3\n\n import sys\n import subprocess\n\n CMD=\"uname -a\"\n\n conn = subprocess.Popen([\"ssh\", \"UNAME@HOST\", CMD],\n shell=False,\n stdout=subprocess.PIPE,\n stderr=subprocess.PIPE)\n res = conn.stdout.readlines()\n print(res)","Q_Score":2,"Tags":"python,linux,windows,remote-access,remote-server","A_Id":46670748,"CreationDate":"2017-10-10T14:36:00.000","Title":"Efficient way of finding remote host's operating system using Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"This problem is that when I run my python programs through python launcher python launcher tries to run it in python 2.7, causing the print command(?) 
to have brackets around them and numerous other broken things.\nI downloaded Python Launcher with python 3.6 from the python.org website.\nWhen opening Python Launcher > Preferences, the 'interpreter' drop-down field has the following options:\n\/usr\/local\/bin\/pythonw\n\/usr\/local\/bin\/python\n\/usr\/bin\/pythonw\n\/usr\/bin\/python\n\/sw\/bin\/pythonw\nI don't know what the difference between python or pythonw is, or even what any of them mean, but no matter which one I select it always tries to run in python 2.7.\nWhat makes it even more baffling to me is when choosing to open my script in IDLE it says right at the top: (python 3.6.3) and runs a window called 'Python 3.6.3 shell'\nHow can I get the program to run using python 3.6.3 through Python Launcher?","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":7380,"Q_Id":46671386,"Users Score":1,"Answer":"None of the above worked for me with Python 3.7 installed on OS X Mojave. But simply changing the Interpreter to \"python3\" in the Python Launcher preferences solved the problem.","Q_Score":2,"Tags":"python,macos,python-2.7,python-3.x","A_Id":57103379,"CreationDate":"2017-10-10T16:16:00.000","Title":"Python Launcher preferences in mac OSX not allowing selection of python 3.6 interpreter","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have seen many examples of dockerfiles with conda commands in them. And there are pre-build anaconda and miniconda containers. I must be missing something. \nDoesn't docker REPLACE virtualenv and conda? Shouldn't I have all of my dependencies right in my dockerfile? I don't understand what I gain from adding anaconda here. In fact it seems like it makes my container unnecessarily bigger if I have to pull a miniconda container if Im not using all of miniconda's included modules.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":7393,"Q_Id":46678454,"Users Score":11,"Answer":"Docker does not replace anything. It is simply one way to do things.\nNo, you should not have all of your dependencies right in your Dockerfile. I, for one, will be running pip install from a virtualenv without ever touching Docker\/*conda unless I have a good reason. Your lack of requirements.txt is not a good reason :)\nConda came out in 2012 - well before Docker. Since Python has such a strong following in the non-programmer community, I rarely expect intelligible code, much less some type of DevOps ability. Conda was the perfect solution for this group.\nWith Docker, you can have a functional Docker environment with FROM python:xx, COPY . \/workdir, and RUN pip install -r requirements.txt (supposing you're using that file *ahem), but your developers will probably need a volume so they can work (so they need to know --volume. Also, if you're running Django you'll need ports configured (now they need --port and you need EXPOSE). Oh, also Django might need a database. 
Now you need another container and you're writing a docker-compose file.\nBut consider the following, from almost all of my professional (DevOps) experience IF you just include requirements.txt-\n\nI can use that file in my Docker container\nThe requirements are all in one place\nI can develop on my local with a venv if I want\nTravis can install from requirements.txt and test on multiple versions without using Tox\nSetuptools handles that automatically, so my thing works with pip\nI can reuse those Dockerfiles (or parts) with ECS, Kubernetes, etc\nI can deploy to EC2 without using Docker\nI can install the package locally via pip\n\nHTH - don't get too locked in to one piece of technology!","Q_Score":18,"Tags":"python,docker,containers,anaconda,conda","A_Id":46678633,"CreationDate":"2017-10-11T01:40:00.000","Title":"what is the purpose of conda inside a container?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm trying to install pip for python 3.6 on windows 10. I run get-pip.py. when I try to use pip on terminal I get an error message Pip command is not recognized. \nI already add C:\\Python36\\Scripts to the environmental variable. \nIs there anything I missed ?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":131,"Q_Id":46690417,"Users Score":0,"Answer":"Solved :\nthe problem is that python wasn't install in the right place (C:\\python) but just for one user. \nI uninstall and re-install python using \"custom\" configuration.","Q_Score":0,"Tags":"python,pip,windows-10","A_Id":46693783,"CreationDate":"2017-10-11T14:09:00.000","Title":"unable to use pip even after add to environmental variable","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"After upgrading to macOS high sierra, vim began to crash with the plugins need python. I am getting the below error whenever i activated a python plugin. For example, i use tern for vim for javascript files. When i activate this plugin, vim opens successfully but it crashes when i open a javascript file.\nI have reinstall vim and python with brew. It did not work. I have also build vim from source, it did not work either.\n\nVim: Caught deadly signal SEGV","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":316,"Q_Id":46695289,"Users Score":0,"Answer":"I had vim crashing after a brew upgrade which also upgraded the python version. Reinstallation did not help, but reinstalling all the plugins (and therefore updating them) did help.\nEspecially rebuilding YouCompleteMe was key.","Q_Score":2,"Tags":"python,vim,macos-high-sierra","A_Id":49448621,"CreationDate":"2017-10-11T18:35:00.000","Title":"vim with python plugins crashes on macOS high sierra","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am working on a Windows machine with admin rights and Python 2.7. 
I would like to use my locally downloaded python to call a script on the shared drive from the command line.\nUnfortunately, this is not working\nC:\\python27\\python.exe net use S:file_path\\python_script.py\nWhat is the right way to call a shared python script but run it with a local copy of python?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":5274,"Q_Id":46711553,"Users Score":2,"Answer":"Try using the \\\\ after the mapped drive letter. I have a shared drive mapped locally on my windows machine under Z. When I run my local python interpreter and give the above shared path, it works as expected.\nFor your example, it should be:\nC:\\path\\to\\python\\python.exe s:\\\\file_path\\python_script.py\nMy example\nC:\\Users\\david.mcmahon>python z:\\\\Test\\hooks\\my_app.py\nRunning from shared drive..\nHope this helps.","Q_Score":0,"Tags":"python-2.7,command-line","A_Id":46711822,"CreationDate":"2017-10-12T13:58:00.000","Title":"Use local python to run python script from shared drive","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a Python Script, which uses win32com.client.dispatch and redemption in order to connect to an instance of Outlook and harvest some data from a public folder. \nWhen I execute this script on the command line it works just fine.\nAdding it as a scheduled task, it appears to get hung at the line Outlook = win32com\nI added Event Log statements along the way to see where it is getting hung; other than that I don't have much in the way of error logs (since it doesn't actually fail). \nAre there any sort of security settings I should be concerned about or anything I am not thinking of? Everything works fine with a standard python call in the CMD.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":596,"Q_Id":46712780,"Users Score":0,"Answer":"Scheduler runs as a service, and office apps (Outlook included) should not be used in a service.","Q_Score":0,"Tags":"python,outlook,win32com,outlook-redemption","A_Id":46713942,"CreationDate":"2017-10-12T14:55:00.000","Title":"Python - Win32Com Client Dispatch Hanging as Scheduled Tasks","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have created a single Pool (size Standard_D32_v3) with a single Job. I have set the pool property max_tasks_per_node=32.\nI then have a list which contains 27000 objects.\nSince I cannot add more than 100 tasks at a time to a Job, I \"Chunk\" my list so that I have a list of lists, each with 100 tasks.\nFinally, I insert each \"Chunk\" of tasks. \nIn the StartTask, I mount a File Share (Not BLOB), which contains files needed for processing.\nMy File Share has folders: 2012, 2013, 2014, 2015, 2016, 2017.\nI have found that for some reason, Azure Batch is deleting all files & folders except for 2017. This is the 2nd time it has happened.\nNowhere in my code do I delete from the file share or anywhere else.\nI do delete the Pool, Job and Tasks when finished.\nWhat the *&^% is going on?\nUPDATE\nThis is still happening.\nWhen the File Share is mounted, it is done so via Bash as a command passed into the StartTask. 
Azure portal gives the connection information for the File Share and provides the following CHMOD configuration: dir_mode=0777,file_mode=0777'\nI thought that I would be clever and change the CHMOD properties to 444 (read only).\nUnfortunately, I then get a \"Permission Denied\" error. I then changed to 555 (read and execute) and files were once again deleted.\nThis is 100% an issue with Azure Batch. Microsoft does not do any logging whatsoever of File Shares (or even allow users to). I was hoping to see delete requests\/operations and from which IP and time the request originated, but alas, it is impossible...","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":136,"Q_Id":46719690,"Users Score":0,"Answer":"A fix was deployed to all Azure regions on 2017-10-19 that should prevent this behavior from happening. You will need to redeploy your pool to get this fix - if you've already mounted something under $AZ_BATCH_NODE_ROOT_DIR, then it is recommended to remotely log in to the node and unmount the device before deleting the pool.\nOn a side note: it is not recommended to mount any resource under a task directory. Because task directories are cleaned up when deleted, this can lead to deletion of mounted resources.","Q_Score":0,"Tags":"python,azure-batch","A_Id":46853245,"CreationDate":"2017-10-12T22:13:00.000","Title":"Azure Batch is deleting files from File Share","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I've constructed a python script python_script.py in Linux. Is there a way to do a cron job which will be compatible with Linux and Windows? In fact, even though I have implemented this script in Linux, it will be run under a cron job in Windows.\nOtherwise, assume the script works well on Linux and Windows. How could we create an automatic task on Windows (similar to a cron job on Linux)?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":400,"Q_Id":46721382,"Users Score":0,"Answer":"Create an automatic task using Task Scheduler in Windows. Create a bat file to run the Python script and schedule that task from the Windows scheduler. Hope this helps","Q_Score":0,"Tags":"python,excel","A_Id":46721674,"CreationDate":"2017-10-13T02:00:00.000","Title":"Cron job which will work under Linux and Windows","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am testing out a very basic Pub\/Sub subscription. I have the push endpoint set to an App I have deployed through a Python Flex service in App Engine. The service is in a project with Identity-Aware Proxy enabled. The IAP is configured to allow through users authenticated with our domain.\nI do not see any of the push requests being processed by my app.\nI turned off the IAP protection and then I see that the requests are processed. I turn it back on and they are no longer processed.\nI had similar issues with IAP when trying to get a Cron service running; that issue resolved itself after I deployed a new test app in the same project.\nHas anyone had success with configuring a push subscription through IAP? 
I also experimented with putting different service accounts on the IAP access list and none of them worked.","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":890,"Q_Id":46729915,"Users Score":1,"Answer":"I had a pretty similar issue - a GAE 2nd generation standard application in project A, which is wired under IAP, that cannot receive the pushed pub\/sub message from project B.\nMy workaround is: \n\nSet up a Cloud Function (HTTP triggered) in project A;\nSet up the subscription of the project B Pub\/Sub topic to push the message to the above Cloud Function endpoint;\nThe above Cloud Function works like a proxy to filter (needed in my case, ymmv) and forward the Pub\/Sub message in an HTTP request to the GAE app;\nSince the Cloud Function is within the same project as the GAE app, you only need to add the IAP authentication for the above HTTP request (which fetches the token assigned from the specific SA).\nThere should be a project A service account set up in project B's IAM, which should have at least the Pub\/Sub Subscriber and Pub\/Sub Viewer roles.\n\nHope this could be an option for your case.","Q_Score":2,"Tags":"python,google-app-engine,push,publish-subscribe,google-iap","A_Id":59740146,"CreationDate":"2017-10-13T12:25:00.000","Title":"Google Pub\/Sub push subscription into IAP-protected App Engine","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I'm running into a performance issue with the Google Cloud Bigtable Python Client. I'm working on a flask API that writes to and reads from a GCP Bigtable instance. The API uses the python client to communicate with Bigtable, and was deployed to the GCP App Engine flexible environment.\nUnder low traffic, the API works fine. However during a load test, the endpoints that read and write to Bigtable suffer a huge performance decrease compared to a similar endpoint that doesn't communicate with Bigtable. Also, a large percentage of requests sent to the endpoint receive a 502 Bad Gateway, even when health check was turned off in App Engine.\nI'm aware that the client is currently in Alpha. I wonder if the performance issue is known, or if anyone else ran into the same issue.\nUpdate\nI found documentation from Google stating:\n\nThere are issues with the network connection. Network issues can\n reduce throughput and cause reads and writes to take longer than\n usual. In particular, you'll see issues if your clients are not\n running in the same zone as your Cloud Bigtable cluster.\n\nIn my case, my client was in a different region; moving it to the same region gave a huge increase in performance. However the performance issue still exists, and the recommendation from the documentation is to put the client in the same zone as Bigtable.\nI also considered using Container Engine or Compute Engine, where it is easier to specify the zone, but I want to stay with App Engine for its autoscale functionality and managed services.","AnswerCount":1,"Available Count":1,"Score":0.537049567,"is_accepted":false,"ViewCount":503,"Q_Id":46740127,"Users Score":3,"Answer":"The Bigtable client takes somewhere between 3 ms and 20 ms to complete each request, and because Python is single-threaded, during that period of time it will just wait until the response comes back. The best solution we found was, for any writes, to publish the request to Pubsub, then use Dataflow to write to Bigtable. It is significantly faster because publishing a message in Python takes well below 1 ms to complete, and because Dataflow can be set to exactly the same region as Bigtable, and it is easy to parallelize, it can write much faster. 
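The publish side is tiny; a sketch with a hypothetical project and topic, using the google-cloud-pubsub client (the Dataflow job subscribing on the other end performs the actual Bigtable writes):

    from google.cloud import pubsub_v1

    # Hypothetical project and topic names.
    publisher = pubsub_v1.PublisherClient()
    topic_path = publisher.topic_path('my-project', 'bigtable-writes')

    def enqueue_write(row_key, payload):
        # payload must be bytes; attributes such as row_key must be strings.
        # Returns almost immediately; the future resolves once Pub\/Sub has
        # accepted the message.
        return publisher.publish(topic_path, data=payload, row_key=row_key)
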
It is significantly faster because publishing a message in Python takes well below 1 ms to complete, and because Dataflow can be set to exactly the same region as Bigtable and is easy to parallelize, it can write much faster.\nIt doesn't, however, solve the scenario where frequent reads or writes need to be instantaneous.","Q_Score":1,"Tags":"google-app-engine,google-cloud-platform,bigtable,google-cloud-bigtable,google-cloud-python","A_Id":47776406,"CreationDate":"2017-10-14T02:03:00.000","Title":"Google Cloud Bigtable Python Client Performance Issue","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I want to be able to pull up the name python-3.5, python3.6m, etc.\nWhen I do a \"python --version\" it's not in the correct format and it could be subject to change. Is there any way to find the names generally in the \/usr\/bin\/* folder? Or should I just grep for it and assume that it'll always be in that directory for other users?\nI am using the command \"$pkg-config python-3.6m --ldlibs --cflags\" and I would like to have a dynamic way to find the \"python-3.6m\" in that line so that the user doesn't have to change it every time they run it on a different version of Python.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":101,"Q_Id":46751743,"Users Score":0,"Answer":"You can use 'python-' + sysconfig.get_config_var('LDVERSION'). There are numerous other variables in that module.\nNote that this won't work on non-CPython implementations, but they usually require special cases for building anyway.","Q_Score":0,"Tags":"linux,python-3.x","A_Id":46751883,"CreationDate":"2017-10-15T04:36:00.000","Title":"How can I find the python package name in any linux distribution?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a python script that contains a Linux shell command. I'm using subprocess.check_output. My question is about the fastest Python method to execute a Linux shell command from a Python script, such as os.system().","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":107,"Q_Id":46768006,"Users Score":0,"Answer":"I like subprocess.Popen, but it has trouble dealing with '>' (maybe it can't at all), which is inconvenient if you have a '>' in the command line; otherwise, use subprocess.check_output.","Q_Score":1,"Tags":"python,linux,python-2.7","A_Id":46769357,"CreationDate":"2017-10-16T10:21:00.000","Title":"execute linux shell command from python script","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm trying to find a way to log all queries done on Cassandra from Python code, specifically logging them as they finish executing via a BatchStatement.\nAre there any hooks or callbacks I can use to log this?","AnswerCount":3,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":2113,"Q_Id":46773522,"Users Score":1,"Answer":"Have you considered creating a decorator for your execute or equivalent (e.g.
execute_concurrent) that logs the CQL query used for your statement or prepared statement?\nYou can write this in such a way that the CQL query is only logged if the query was executed successfully.","Q_Score":10,"Tags":"python,cassandra,cassandra-python-driver","A_Id":46839220,"CreationDate":"2017-10-16T15:12:00.000","Title":"Logging all queries with cassandra-python-driver","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm trying to run a dockerized Pants build for a Scala project and it fails with the error message \"error in cryptography setup command: Invalid environment marker: python_version < '3' \".\nI haven't manually specified anything to install cryptography. In the documentation of cryptography I could see that this happens when either pip or setuptools is out of date. I tried updating these as well, but in the case of Pants I'm not very sure where I should specify this. I specified it in the pants file and in the third-party \"requirements.txt\" file, but no difference. It was working fine but suddenly failed one day.\nI use the following versions: \nUbuntu -14.04\npython -2.7.4\npants -1.0.0 (tried upgrading to 1.1.0 but no difference)","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":689,"Q_Id":46784908,"Users Score":0,"Answer":"Explicitly mentioning the following versions in requirements.txt for Pants builds will resolve the issue:\npycparser==2.17\ncryptography==2.0.1","Q_Score":0,"Tags":"python,python-2.7,pants","A_Id":47027841,"CreationDate":"2017-10-17T07:38:00.000","Title":"Fails to install cryptography when a pants build is run","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm using the python kubernetes 3.0.0 library and kubernetes 1.6.6 on AWS.\nI have pods that can disappear quickly. Sometimes when I try to exec into them I get an ApiException with a Handshake status 500 error.\nThis is happening with in-cluster configuration as well as kube config.\nWhen the pod\/container doesn't exist I get a 404 error, which is reasonable, but 500 is Internal Server Error.
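A sketch of the decorator approach suggested in the cassandra-python-driver answer above; wrapping session.execute this way is an assumption about where you would hook in:

```python
import functools
import logging

log = logging.getLogger("cql")

def log_cql(execute_fn):
    """Wrap an execute-style function so successful queries get logged."""
    @functools.wraps(execute_fn)
    def wrapper(query, *args, **kwargs):
        result = execute_fn(query, *args, **kwargs)  # raises on failure
        log.info("CQL executed: %s", query)          # reached only on success
        return result
    return wrapper

# session = cluster.connect(...) created elsewhere; then:
# session.execute = log_cql(session.execute)
```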
I don't get any 500 errors in kube-apiserver.log, where I do find 404 ones.\nWhat does this mean, and can someone point me in the right direction?","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":1730,"Q_Id":46789946,"Users Score":0,"Answer":"For me, the reason for the 500 was basically the pod being unable to pull the image from GCR.","Q_Score":2,"Tags":"python,kubernetes","A_Id":69754077,"CreationDate":"2017-10-17T12:16:00.000","Title":"kubectl exec returning `Handshake status 500`","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm using the python kubernetes 3.0.0 library and kubernetes 1.6.6 on AWS.\nI have pods that can disappear quickly. Sometimes when I try to exec into them I get an ApiException with a Handshake status 500 error.\nThis is happening with in-cluster configuration as well as kube config.\nWhen the pod\/container doesn't exist I get a 404 error, which is reasonable, but 500 is Internal Server Error. I don't get any 500 errors in kube-apiserver.log, where I do find 404 ones.\nWhat does this mean, and can someone point me in the right direction?","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":1730,"Q_Id":46789946,"Users Score":0,"Answer":"For me the reason was that I had two pods with the same label attached; one pod was in the Evicted state and the other was running. I deleted the Evicted pod and the issue was fixed.","Q_Score":2,"Tags":"python,kubernetes","A_Id":70119831,"CreationDate":"2017-10-17T12:16:00.000","Title":"kubectl exec returning `Handshake status 500`","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a python project and I want to deploy it on an AWS EC2 instance. My project has dependencies on other Python libraries and uses programs installed on my machine. What are the alternatives for deploying my project on an AWS EC2 instance?\nFurther details: my project consists of a celery periodic task that uses ffmpeg and blender to create short videos.\nI have checked Elastic Beanstalk but it seems to be tailored for web apps. I don't know if containerizing my project via docker is a good idea...\nThe manual way and the cheapest way to do it would be:\n1- Launch a spot instance\n2- git clone the project\n3- Install the libraries via pip\n4- Install all dependent programs\n5- Launch the periodic task\nI am looking for a more automatic way to do it.\nThanks.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":467,"Q_Id":46791798,"Users Score":2,"Answer":"Beanstalk is certainly an option. You don't necessarily have to use it for web apps, and you can configure all of the dependencies needed via .ebextensions.\nContainerization is usually my go-to strategy now. If you get it working within Docker locally, then you have several deployment options, and the whole thing gets much easier since you don't have to worry about setting up all the dependencies within the AWS instance. \nOnce you have it running in Docker you could use Beanstalk, ECS or CodeDeploy.","Q_Score":1,"Tags":"python,amazon-web-services,amazon-ec2,deployment,ffmpeg","A_Id":46792730,"CreationDate":"2017-10-17T13:48:00.000","Title":"What are the ways to deploy python code on aws ec2?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"This program listens to Redis queues. If there is data in Redis, workers start doing their jobs. All these jobs have to run simultaneously; that's why each worker listens to one particular Redis queue.\nMy question is: is it common to run more than 20 workers listening to Redis? \npython \/usr\/src\/worker1.py\npython \/usr\/src\/worker2.py\npython \/usr\/src\/worker3.py\npython \/usr\/src\/worker4.py\npython \/usr\/src\/worker5.py\n....\n....\npython \/usr\/src\/worker6.py","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":834,"Q_Id":46808342,"Users Score":0,"Answer":"If your workers need to do a long task with data, it's a solution, but each piece of data must be handled by a single worker.\nThis way, you can easily (without threads, etc.)
distribute your tasks; it's also better if your workers don't all run on the same server.","Q_Score":0,"Tags":"python,redis,queue,worker","A_Id":46808422,"CreationDate":"2017-10-18T10:44:00.000","Title":"Is it common to run 20 python workers which uses Redis as Queue ?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"This program listens to Redis queues. If there is data in Redis, workers start doing their jobs. All these jobs have to run simultaneously; that's why each worker listens to one particular Redis queue.\nMy question is: is it common to run more than 20 workers listening to Redis? \npython \/usr\/src\/worker1.py\npython \/usr\/src\/worker2.py\npython \/usr\/src\/worker3.py\npython \/usr\/src\/worker4.py\npython \/usr\/src\/worker5.py\n....\n....\npython \/usr\/src\/worker6.py","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":834,"Q_Id":46808342,"Users Score":0,"Answer":"Having multiple worker processes (and by \"multiple\" I mean hundreds or more), possibly running on different machines, fetching jobs from a job queue is indeed a common pattern nowadays. There are even whole packages\/frameworks devoted to such workflows, for example Celery. \nWhat is less common is trying to write the whole task queue system from scratch in a seemingly ad-hoc way instead of using a dedicated task queue system like Celery, ZeroMQ or something similar.","Q_Score":0,"Tags":"python,redis,queue,worker","A_Id":46808475,"CreationDate":"2017-10-18T10:44:00.000","Title":"Is it common to run 20 python workers which uses Redis as Queue ?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am trying to make an application in python that has 1 topic (demo-topic) and 1 partition.\nMessages are pushed to this topic randomly.\nI have 1 consumer (consumer1) in a group (demo-group) that uses these messages to make some background calculations (that take some time).\nHaving this application on Amazon, I want to be able to scale it (when computation takes too long) so that the newly created machine runs another consumer (consumer2) from the same group (demo-group) reading from the same topic (demo-topic), with the two splitting the load (consumer1 takes some load and consumer2 takes the rest, but they never get the same messages).\nAfter the surge of data comes to a halt, the second machine is decommissioned and \nconsumer1 takes all the load again.\nIs this even possible to do (without adding more partitions beforehand)? Is there a workaround?\nThank you","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":1106,"Q_Id":46812351,"Users Score":0,"Answer":"You could do this, but shouldn't.\nThe basic unit of parallelism in Kafka is the partition: in a consumer group, each consumer reads from one or more partitions and consumers do not share partitions.
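To make the consumer-group mechanics concrete, here is a minimal kafka-python sketch, assuming a local broker and the topic/group names from the question:

```python
from kafka import KafkaConsumer

# server1 and server2 would each run this with the same group_id; Kafka
# assigns every partition of demo-topic to exactly one consumer in the group.
consumer = KafkaConsumer(
    "demo-topic",
    group_id="demo-group",
    bootstrap_servers="localhost:9092",
)
for message in consumer:
    do_background_calculation(message.value)  # hypothetical long-running work
```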
In order to share a partition, you would need to use a tool like ZooKeeper to lock access to the partition (and keep track of each consumer's position).\nThe use case that you're describing is better served by SQS and an Auto Scaling group.","Q_Score":0,"Tags":"python,amazon-web-services,apache-kafka,multiprocessing,scale","A_Id":46812799,"CreationDate":"2017-10-18T14:16:00.000","Title":"Multiprocessing with python kafka consumers","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am trying to make an application in python that has 1 topic (demo-topic) and 1 partition.\nMessages are pushed to this topic randomly.\nI have 1 consumer (consumer1) in a group (demo-group) that uses these messages to make some background calculations (that take some time).\nHaving this application on Amazon, I want to be able to scale it (when computation takes too long) so that the newly created machine runs another consumer (consumer2) from the same group (demo-group) reading from the same topic (demo-topic), with the two splitting the load (consumer1 takes some load and consumer2 takes the rest, but they never get the same messages).\nAfter the surge of data comes to a halt, the second machine is decommissioned and \nconsumer1 takes all the load again.\nIs this even possible to do (without adding more partitions beforehand)? Is there a workaround?\nThank you","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":1106,"Q_Id":46812351,"Users Score":1,"Answer":"You cannot have multiple consumers within the same group consuming from the same partition at the same time.\nIf you subscribe a second consumer within the same group to the same partition, it will act as a hot standby and won't consume any messages until the first one stops.\nThe best solution is to add partitions to your topic. That way, you can add consumers when you see a surge in traffic and remove them when traffic slows down. Kafka will do all the load balancing for you.","Q_Score":0,"Tags":"python,amazon-web-services,apache-kafka,multiprocessing,scale","A_Id":46812746,"CreationDate":"2017-10-18T14:16:00.000","Title":"Multiprocessing with python kafka consumers","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am writing a small chat app in python using bash commands. I'm using nc to accomplish this, but I want to be able to prepend a username to the user's message. How do I go about doing this without breaking the connection?\nThe command I'm using to connect is just \nnc -l -p 1234 -q 0\nand the desired outcome is that when the person sends something it would look like: <username> Hello\nThank you!","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":232,"Q_Id":46814272,"Users Score":0,"Answer":"It's hard to understand the context and the application's structure, but I would assume that putting the connection in a separate thread would help.
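A rough sketch of that threading idea with Python's socket module; the username handling is an assumption about the desired output:

```python
import socket
import threading

USERNAME = "alice"  # hypothetical peer name

def reader(conn):
    # Runs in its own thread, so the connection stays open while we print.
    for line in conn.makefile():
        print("<%s> %s" % (USERNAME, line), end="")

server = socket.socket()
server.bind(("", 1234))  # same port the nc example listens on
server.listen(1)
conn, addr = server.accept()
threading.Thread(target=reader, args=(conn,)).start()
```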
That way the connection is always open and the messages can be processed in whatever fashion you prefer.","Q_Score":0,"Tags":"python,bash,sockets,networking,scripting","A_Id":46846940,"CreationDate":"2017-10-18T15:48:00.000","Title":"How to print text while netcat is listening at a port?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have some files on a remote Windows system that I need to retrieve. It's running Windows Server 2008 R2. I would like to retrieve those files to my Ubuntu system, and I would like to do it using Python. I have tried wmi-client-wrapper, paramiko and wmic; none of them seems to work. Some guidance would be appreciated.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":133,"Q_Id":46826969,"Users Score":0,"Answer":"The easiest way is to use Samba. This way you don't have to install anything on Windows.\n\nYou can do something like mount \/\/windowspc\/share \/mnt, then use Python to copy files from \/mnt to wherever you want.\nYou can call smbclient from Python as a system command.\n\nAlternatively, you can install rsync or ssh from Cygwin, or even something like FTP.\nPS: this is not really a Python question, but basic file sharing.","Q_Score":0,"Tags":"python,linux,windows","A_Id":46827088,"CreationDate":"2017-10-19T09:39:00.000","Title":"Python on Linux remotely connect to a Windows PC to retrieve files","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have some CouchDB documents that need to be removed. Is there a good way\/set of steps to back up, delete, and back out those documents from CouchDB?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":225,"Q_Id":46843096,"Users Score":0,"Answer":"Using CouchDB 1.X, you can just save the *.couch files corresponding to your databases. Later you'll be able to use these files to recover your data.\nUsing 2.X, I have no idea how to achieve that, since each database is split into shards and the _dbs.couch seems to be needed to restore data. So you can have a complete backup, but not a single-database backup.
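For the 1.X case, a minimal sketch; the data directory path is an assumption (check your CouchDB configuration):

```python
import glob
import shutil

# CouchDB 1.X keeps one self-contained file per database; copy each one
# out of the data directory (both paths below are assumptions).
for db_file in glob.glob("/var/lib/couchdb/*.couch"):
    shutil.copy2(db_file, "/backups/couchdb/")
```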
If someone knows how to do a single-database backup on 2.X, I need the answer too :)","Q_Score":0,"Tags":"javascript,couchdb,backup,couchdb-futon,couchdb-python","A_Id":47079805,"CreationDate":"2017-10-20T05:55:00.000","Title":"Is there good way to backup and delete and backout the document from couchdb?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I downloaded Python Anaconda 2.7.\nI used to work with the regular Python from python.org, but I was asked to work with Anaconda.\nAnyhow, I have 2 problems:\n\nright click -> \"Edit with IDLE\" does not exist.\ncan't run a .py file as a program (e.g. from cmd).","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":53,"Q_Id":46848067,"Users Score":0,"Answer":"After installing Anaconda, the PATH environment variable is usually overridden, so the system now refers to the Anaconda Python interpreter. A quick fix is to correct the PATH environment variable (this depends on the type of OS you're running).","Q_Score":0,"Tags":"python,anaconda","A_Id":46848373,"CreationDate":"2017-10-20T11:20:00.000","Title":"anaconda python running py file","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"My first gut reaction is that Luigi isn't suited for this sort of thing, but I would like the \"pipeline\" functionality and everything keeps pointing me back to Luigi\/Airflow. I can't use Airflow as it is a Windows environment.\nMy use-case:\nCurrently, from my \u201csource\u201d folder, we have 20 or so machines that produce XML data. Over time, some process puts these files into a folder continually on each machine (it's log data). On any given day these machines could have 0 files in the folder or 100k+ files (each machine). Eventually someone will go delete all of the files. \nOne part of this process is to watch all of these directories on all of these machines, and copy the files down to an archive folder if they are new.\nMy current process makes a listing of all the files on each machine every 5 minutes, grabs a listing of the files, and loops over the source checking if each file is available at the destination. It copies the file if it doesn't exist at the destination and skips it if it does. \nIt seems that Luigi wants to work with only \"a\" (singular) file in its output and\/or target. The issue is, I could have 1 new file or several thousand files that show up. \nThis same exact issue happens through the entire process, as the next step in my pipeline is to add the files and their metadata (size, filename, directory location) to db records. At that point another process reads all of the metadata rows and puts them into a content-extraction table of the XML log data. \nIs Luigi even suited for something like this? Luigi seems to want to deal with one thing, do some work on it, and then emit that information out to another single file.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":252,"Q_Id":46854766,"Users Score":0,"Answer":"I can tell you that my workflow handles 10K log files every day without any glitches.
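A minimal sketch of what a per-file Luigi task can look like; the class name and the copy logic are assumptions:

```python
import shutil

import luigi

class ArchiveFile(luigi.Task):
    """Hypothetical task: copy one source file into the archive folder."""
    src_path = luigi.Parameter()
    dst_path = luigi.Parameter()

    def output(self):
        # The archived copy doubles as the completion marker, so files
        # that already exist at the destination are skipped.
        return luigi.LocalTarget(self.dst_path)

    def run(self):
        shutil.copy(self.src_path, self.output().path)
```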
The main thing that makes it work is that I created one task to handle each file, much as in the sketch above.","Q_Score":0,"Tags":"python,luigi","A_Id":47312020,"CreationDate":"2017-10-20T17:58:00.000","Title":"Is Luigi suited for building a pipeline around lots of small files (100k+)","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"Why is the virtualenvwrapper mkproject command not working on Windows 10? And what file do you edit for shell startup on Windows 10 to set up virtualenvwrapper?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":418,"Q_Id":46864680,"Users Score":0,"Answer":"mkproject is included in v1.2.4 of virtualenvwrapper-win.\nThere is no need to call a setup script like you would on Linux. The Linux version implements virtualenvwrapper as shell functions, while virtualenvwrapper-win implements them as .bat files located in your Python's Scripts directory.","Q_Score":1,"Tags":"python,windows-10,virtualenvwrapper","A_Id":47361363,"CreationDate":"2017-10-21T15:08:00.000","Title":"virtualenvwrapper mkproject and shell startup in windows issue?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"If I'm just updating, say, my main.py file, is there a better way to update the app than running gcloud app deploy, which takes several minutes? I wouldn't think I need to completely blow up and rebuild the environment if I'm just updating one file.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":50,"Q_Id":46867161,"Users Score":1,"Answer":"You must redeploy the service. App Engine isn't like a standard hosting site where you FTP single files; rather, you upload a service that becomes containerized and can scale out to run on many instances. For a small site, this might feel weird, but consider a site serving huge amounts of traffic that might have hundreds of instances of code running that is automatically load balanced. How would you replace that single file in that situation across all your instances? So you upload a new version of a service, and then you can migrate traffic to the new version either immediately or gradually. \nWhat you might consider an annoyance is part of the tradeoff that makes App Engine hugely powerful in not having to worry about how your app scales or is networked.","Q_Score":0,"Tags":"python,google-app-engine","A_Id":46868601,"CreationDate":"2017-10-21T19:25:00.000","Title":"How can I update one (or a couple) files in a Google App Engine Flask App?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"Using os.system('ping 127.0.0.1 -t >> new.txt'), I am able to get the ping result in the new.txt document.
How can I get the ping result in the command window and the text file at the same time for streaming output like this...?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":62,"Q_Id":46887360,"Users Score":0,"Answer":"It works with os.system('powershell \"ping 127.0.0.1 -t | tee ping.txt\"')","Q_Score":0,"Tags":"python,ping,command-window,stream-operators","A_Id":46953793,"CreationDate":"2017-10-23T10:38:00.000","Title":"How to get stream output in command window and text document simultaneously","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I accidentally downloaded python2.6.6 on my CentOS virtual machine from python.org's official download package and compiled it from source.\nNow in \/usr\/local\/bin I have a python2.6 shell available, and if I use which python it gives me the \/usr\/local\/bin path instead of the original python2.7 path, which is \/usr\/bin.\nSince I installed it from source, yum doesn't recognise python2.6.6 as a package, and I want to get rid of it.\nIf I do rpm -q python it gives me a result of python-2.7.5-48.0.1.el7.x86_64\nIs it possible to uninstall python2.6.6 so I can just re-point my python system variable to \/usr\/bin again?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":2370,"Q_Id":46897967,"Users Score":1,"Answer":"Sure, but you'll have to do it the hard way. Dig through \/usr\/local looking for anything Python-related and remove it. The python in \/usr\/bin should be revealed once the one in \/usr\/local\/bin is removed.\nAlso, next time use make altinstall. It will install a versioned executable that won't get in the way of the native executable.","Q_Score":2,"Tags":"python,linux,centos,yum,system-configuration","A_Id":46924884,"CreationDate":"2017-10-23T20:29:00.000","Title":"Uninstall python 2.6 without yum","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I realized that while using the ReadFromDatastore PTransform, if the query has a limit set, the query won't be split across workers. The documentation for the Python class says:\n\"... when the query is configured with a limit ..., then all the returned results will be read by a single worker in order to ensure correct data. Since data is read from a single worker, this could have significant impact on the performance of the job.\"\nIn my case, I need to specify the limit because there are many more entities matching the query in Datastore than I need for this job. However, the performance hit is severe enough that specifying a limit doesn't give me results any faster (or fast enough). What can I do to somehow finish the job and flush the pipeline when I have processed a certain number of entities, without taking the performance hit?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":177,"Q_Id":46898800,"Users Score":0,"Answer":"You could omit the limit and filter by something else (date?)
and then do a top-N on Dataflow instead.","Q_Score":0,"Tags":"python,google-cloud-datastore,google-cloud-dataflow","A_Id":46920522,"CreationDate":"2017-10-23T21:26:00.000","Title":"Datastore query splitting behaviour when specifying limit on Dataflow pipeline","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"Today I messed up the Python versions on my CentOS machine. Even yum no longer works properly. I made the mistake of removing the default \/usr\/bin\/python, which led to this situation. How can I get back to a clean Python environment? I thought removing everything and reinstalling Python might work, but I don't know how to do it. I wish somebody could help!","AnswerCount":2,"Available Count":1,"Score":-0.0996679946,"is_accepted":false,"ViewCount":19404,"Q_Id":46915455,"Users Score":-1,"Answer":"To install Python on CentOS: sudo yum install python2\/3 (select the version as per your requirement)\nTo uninstall Python on CentOS: sudo yum remove python2\/3 (select the version as per your requirement)\nTo check the version for python3 (which you installed): python3 --version\nTo check the version for python2 (which you installed): python2 --version","Q_Score":3,"Tags":"python,centos,rpm,yum","A_Id":63481093,"CreationDate":"2017-10-24T16:20:00.000","Title":"Remove clearly and reinstall python on CentOS","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am trying to install bluepy 1.0.5. However, I get the error below. Any idea how I can solve it? (I am using Mac OS X El Capitan)\n\n40:449: execution error: The directory '\/Users\/isozyesil\/Library\/Caches\/pip\/http' or its parent directory is not owned by the current user and the cache has been disabled. Please check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.\n The directory '\/Users\/isozyesil\/Library\/Caches\/pip' or its parent directory is not owned by the current user and caching wheels has been disabled. check the permissions and owner of that directory.
If executing pip with sudo, you may want sudo's -H flag.\n Command \/usr\/bin\/python -u -c \"import setuptools, tokenize;__file__='\/private\/var\/folders\/95\/f900ttf95g1b7h02y2_rtk400000gn\/T\/pycharm-packaging669\/bluepy\/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\\r\\n', '\\n');f.close();exec(compile(code, __file__, 'exec'))\" install --record \/var\/folders\/95\/f900ttf95g1b7h02y2_rtk400000gn\/T\/pip-djih0T-record\/install-record.txt --single-version-externally-managed --compile\n failed with error code 1 in \/private\/var\/folders\/95\/f900ttf95g1b7h02y2_rtk400000gn\/T\/pycharm-packaging669\/bluepy\/\n (1)","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":2751,"Q_Id":46918646,"Users Score":0,"Answer":"What part of the diagnostic message did you find unclear?\nDid you consider ensuring writable self-owned files by doing sudo chown -R isozyesil \/Users\/isozyesil\/Library\/Caches\/pip?\nDid you consider sudo pip install bluepy?","Q_Score":2,"Tags":"python,serial-port,pycharm,signals,signal-processing","A_Id":46918772,"CreationDate":"2017-10-24T19:32:00.000","Title":"Bluepy Installation Error","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm trying a Django tutorial. For some reason my existing superuser got deleted; creating it went fine, but I can't make another one. This also happens when I try to use pip.\nI didn't change anything in the libraries, so I'm not sure why this happens now but didn't before. I'm on Windows 7 (Python 3.6.3 and Django 1.11). I've seen similar, but not exactly the same, problems for Windows. Still, I checked the file and there seems to be a PathLike class.\nI've also tried to repair my Python installation, but it didn't help. Any ideas?","AnswerCount":4,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":3463,"Q_Id":46920188,"Users Score":0,"Answer":"Update your Python and Django versions and it will work perfectly.","Q_Score":3,"Tags":"python,django,attributeerror","A_Id":64591309,"CreationDate":"2017-10-24T21:19:00.000","Title":"manage.py createsuperuser: AttributeError: module 'os' has no attribute 'PathLike'","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I'm trying a Django tutorial. For some reason my existing superuser got deleted; creating it went fine, but I can't make another one. This also happens when I try to use pip.\nI didn't change anything in the libraries, so I'm not sure why this happens now but didn't before. I'm on Windows 7 (Python 3.6.3 and Django 1.11). I've seen similar, but not exactly the same, problems for Windows. Still, I checked the file and there seems to be a PathLike class.\nI've also tried to repair my Python installation, but it didn't help. Any ideas?","AnswerCount":4,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":3463,"Q_Id":46920188,"Users Score":0,"Answer":"Seems like you may have modified the settings.py file.
But as MrName mentioned, you need to share the full stack trace.","Q_Score":3,"Tags":"python,django,attributeerror","A_Id":46932401,"CreationDate":"2017-10-24T21:19:00.000","Title":"manage.py createsuperuser: AttributeError: module 'os' has no attribute 'PathLike'","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I have been researching for a few hours now, but I cannot confirm whether, as of October 2017, you can run Airflow on Windows. I have installed it using the Python package (\"pip install airflow\"), but I cannot initialize it or even see the version, so I assume it cannot run on Windows.","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":11045,"Q_Id":46922950,"Users Score":14,"Answer":"We make heavy use of Airflow, and we use VMs running Linux to get it running. We have Windows machines, but have to use VMs or mount drives on Linux\/Mac boxes to get it to work. As far as I know, it's not even on the roadmap to have Airflow run on Windows. \nSo, long story short: no, even as of October 2017 Airflow runs only on Unix-based systems (it uses some Python libraries underneath that only work on Unix), and it's unlikely that it will support Windows anytime soon.","Q_Score":3,"Tags":"python,airflow,apache-airflow","A_Id":46962458,"CreationDate":"2017-10-25T02:25:00.000","Title":"Can I run Airflow on Windows?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am running a uwsgi application on Linux Mint. It works with a database and serves the results on my localhost. I run it on IP 127.0.0.1 and port 8080, and I want to test its performance with ab (Apache Benchmark).\nWhen I run the app with the command uwsgi --socket 0.0.0.0:8080 --protocol=http -w wsgi and test it, it works correctly but slowly.\nSo I want to run the app with more than one thread to speed it up, using the --threads option, for example uwsgi --socket 0.0.0.0:8080 --protocol=http -w wsgi --threads 8. \nBut when I run ab to test it, after 2 or 3 requests my application stops with some errors, and I don't know how to fix it. Every time I run it, the type of error is different.
Some of the errors look like these:\n\n(Traceback (most recent call last): 2014, 'Command Out of Sync')\n\nor \n\n(Traceback (most recent call last): File \".\/wsgi.py\", line 13, in\n application\n return show_description(id) File \".\/wsgi.py\", line 53, in show_description\n cursor.execute(\"select * from info where id = %s;\" %id) File \"\/home\/mohammadhossein\/myFirstApp\/myappenv\/local\/lib\/python2.7\/site-packages\/pymysql\/cursors.py\",\n line 166, in execute\n result = self._query(query) File \"\/home\/mohammadhossein\/myFirstApp\/myappenv\/local\/lib\/python2.7\/site-packages\/pymysql\/cursors.py\",\n line 322, in _query\n conn.query(q) File \"\/home\/mohammadhossein\/myFirstApp\/myappenv\/local\/lib\/python2.7\/site-packages\/pymysql\/connections.py\",\n line 856, in query\n self._affected_rows = self._read_query_result(unbuffered=unbuffered) 'Packet sequence number\n wrong - got 1 expected 2',) File\n \"\/home\/mohammadhossein\/myFirstApp\/myappenv\/local\/lib\/python2.7\/site-packages\/pymysql\/connections.py\",\n line 1057, in _read_query_result\n\nor \n\n('Packet sequence number wrong - got 1 expected 2',) Traceback (most\n recent call last):\n\nor \n\n('Packet sequence number wrong - got 1 expected 2',) Traceback (most\n recent call last): File \".\/wsgi.py\", line 13, in application\n return show_description(id) File \".\/wsgi.py\", line 52, in show_description\n cursor.execute('UPDATE info SET views = views+1 WHERE id = %s;', id) File\n \"\/home\/mohammadhossein\/myFirstApp\/myappenv\/local\/lib\/python2.7\/site-packages\/pymysql\/cursors.py\",\n line 166, in execute\n result = self._query(query)\n\nPlease help me run my uwsgi application safely with more than one thread. Any help will be welcome.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":238,"Q_Id":46927517,"Users Score":0,"Answer":"It has been solved.\nThe point is that you should create a separate connection for each completely separate query, to avoid losing data during query execution.","Q_Score":0,"Tags":"python,multithreading,server,uwsgi","A_Id":47568008,"CreationDate":"2017-10-25T08:26:00.000","Title":"uwsgi application stops with error when running it with multi thread","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"How can I open my .py script from anywhere on my PC directly in Python IDLE from the command prompt?\nIs there any way I can type idle test.py in cmd so that it opens the test.py file in the current directory, and, if test.py is not available, creates a new file and opens it in IDLE?","AnswerCount":3,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":2809,"Q_Id":47001108,"Users Score":3,"Answer":"You can do that by adding the directory where you have installed the IDLE editor to the PATH environment variable.\nHow to do that depends on your operating system: just search the Internet for \"add directory to path\" plus your operating system, e.g.
Windows\/Ubuntu, etc.\nAfter changing the environment variable, it may be a good idea to restart your PC (to make sure that all programs use the updated version).","Q_Score":3,"Tags":"python,windows,python-idle","A_Id":47001239,"CreationDate":"2017-10-29T14:07:00.000","Title":"Python IDLE from cmd","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a Raspberry Pi running Linux. My plan is that I can plug a USB drive into the robot and have it run the Python files. The reason I chose this method is that it allows for easy editing and debugging of the scripts.\nIs there a way to execute my files when the USB drive is inserted?","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":2024,"Q_Id":47005067,"Users Score":1,"Answer":"Try using os.path.exists in an infinite loop to detect whether the pendrive is there; when it is detected, execute the code on the pendrive using os.system and break out of the loop (see the sketch below).","Q_Score":0,"Tags":"python,usb,autorun","A_Id":58617987,"CreationDate":"2017-10-29T20:44:00.000","Title":"Auto run python file when usb inserted","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I like to change my operating system. I also frequently format it when it becomes cluttered and restore needed files from backup. I started to develop a tiny script in bash that automates some of those tasks, like adding repositories, installing software, setting up the wallpaper and panels, and so on. Unfortunately, this bash script is getting less and less readable. I was wondering what language I could pick so that, after reinstalling the operating system, I will be able to copy my little program from a pendrive, run it, and let it do the whole job for me.\nMost programming languages require installing some kind of runtime environment (take Java and the JRE as an example). This is why I am focusing on languages that can run immediately after installing the operating system. As I am only using GNU\/Linux systems, bash was an obvious choice, but readability is a downside. I thought about Python, but some operating systems have 2.X and some 3.X.\nWhat can I do to create a tiny generic program that will work on most Linux-based operating systems?\nI know that this is a pretty hard question without specifying those operating systems, but I simply do not know what operating system I will use in the future (besides the fact that it will be a mainstream Linux OS). We can assume that it is enough if it can run on at least 80 of the 100 operating systems listed on distrowatch.com.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":37,"Q_Id":47060759,"Users Score":0,"Answer":"You should definitely learn basic C++.
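Back to the USB question above, here is a minimal polling sketch of that os.path.exists approach; the mount point and script name are assumptions:

```python
import os
import time

MOUNT_POINT = "/media/usb"                     # assumed mount point
SCRIPT = os.path.join(MOUNT_POINT, "main.py")  # hypothetical entry script

# Poll until the pendrive shows up, run its script once, then stop.
while True:
    if os.path.exists(SCRIPT):
        os.system("python " + SCRIPT)
        break
    time.sleep(1)
```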
\nYou really just have to learn <iostream>'s std::cout, std::cin and std::endl, and the <cstdlib> header's system() function for executing commands.\nPros:\n\nIt's available for every Linux system.\nIt's far more extensible than Bash.\nYou can compile your program using static linking so that it doesn't need ANY dependency on a new system.\n\nCons:\n\nHarder to learn.\nHave to compile the code.","Q_Score":0,"Tags":"python,linux,bash","A_Id":47060870,"CreationDate":"2017-11-01T17:44:00.000","Title":"Bootstraping operating system settings after reinstall","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm writing a python script that generates a text file report of CPU usage per core. Really, what I want is the information that top provides once you type 1. \nHowever, optimally this would be returned to the terminal (just like running top -b) so I can grep etc. \nIs there a way of getting this information, either with top or another command, in a format that I can then grep and handle within my python script? Thanks very much!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":134,"Q_Id":47061792,"Users Score":0,"Answer":"You can run the \"top\" command from within a python script using \"subprocess.run()\" and get the output in the returned \"CompletedProcess\" instance.","Q_Score":0,"Tags":"python,linux","A_Id":47061983,"CreationDate":"2017-11-01T18:55:00.000","Title":"Generate report of Linux CPU info per core using top","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I created a django app with a celery task that organises database tables. The celery task takes about 8 seconds to do the job (quite heavy table moves and lookups over more than 1 million rows).\nThis Celery task is started by a button click from a registered user. I created a task view in views.py and a separate task URL in urls.py. When navigating to this task URL, the task starts. \nMy problem:\n\nWhen you refresh the task URL while the task is not finished, you fire up a new task.\nWith this set-up, you can still navigate to another page or reload the view.\n\nHow do I prevent the task from firing each time on refresh, and prevent navigating to another page? Ideally, a user can only run one task of this particular type.\nMight this be doable with jQuery\/Ajax? Could some hero point me in the right direction, as I am not an expert and have no experience with jQuery\/Ajax?\nThanks","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1196,"Q_Id":47075629,"Users Score":2,"Answer":"\"How do I prevent the task from firing up each time on refresh\"\n\nFirst make sure you use the \"POST\/redirect\/GET\" pattern: your view should only fire the task on a \"POST\" request (GET requests must not have side effects), and then return an HttpResponseRedirect. \nThis won't prevent the user from firing a second task while the first is still running, but it does prevent re-submitting the form each time you refresh (\"GET\") the page...\n\nIdeally, a user can only run one task of this particular type.\n\nWhen firing a task, store its id in the session, and before firing a task check the session for a task id. If there's one, use it to check the task's state.
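A condensed sketch of that flow; the view name, task name, and redirect targets are hypothetical:

```python
from celery.result import AsyncResult
from django.shortcuts import redirect

def start_organize(request):                    # hypothetical view name
    if request.method != "POST":
        return redirect("status-page")          # GET stays side-effect free
    task_id = request.session.get("task_id")
    if task_id is not None:
        state = AsyncResult(task_id).state
        # Caveat: PENDING can also mean "unknown task" - see the note below.
        if state not in ("SUCCESS", "FAILURE", "REVOKED"):
            return redirect("status-page")      # a task may still be running
    result = organize_tables.delay()            # hypothetical celery task
    request.session["task_id"] = result.id
    return redirect("status-page")              # POST/redirect/GET
```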
If it's done (whatever the result), remove the task id and proceed with the new task; else just display a message telling your user they already have a task running.\nJust make sure you correctly handle the case of a \"ghost\" task_id in the session (it might be a long-finished task for which celery already discarded the results, or a task that got lost in a worker or broker crash - yeah, sh!t happens - etc). A working solution is to actually store a (timestamp, task_id) pair, and if celery has no state for the task_id \n - actually \"no state\" is (mis)named celery.states.PENDING, which really means \"I don't have a clue about this task's state ATM\" - check the timestamp. If it's way older than it should be, then you can probably consider the task long dead and six feet under. \n\nand prevent navigating to another page. \n\nWhy would you want to prevent your user from doing something else in the meantime, actually? From a UI\/UX point of view, once the task is fired, your user should be redirected to a \"please wait\" page with (as much as possible) some progress bar or similar feedback. The simple (if a bit heavy) solution here is to do some polling using ajax: you set up a view that takes a task_id, checks results\/progress and returns them as json, and the 'please wait' page calls this view (using ajax) every X seconds to update itself (and possibly redirect the user to the next page when the task is done).\nNow if there are some operations (apart from re-launching the same task) your user shouldn't be able to do while a task is running, you can use the same \"check session for a current task\" mechanism for those operations (making it a view decorator really helps).","Q_Score":0,"Tags":"jquery,python,ajax,django,celery","A_Id":47076457,"CreationDate":"2017-11-02T12:43:00.000","Title":"run celery task only once on button click","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"Using a dockerbuild file, how can I do something there like:\nexport PYTHONPATH=\/:$PYTHONPATH\nusing the RUN directive or another option?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":2449,"Q_Id":47079459,"Users Score":7,"Answer":"In your Dockerfile, either of these should work:\n\nUse the ENV instruction (ENV PYTHONPATH=\"\/:$PYTHONPATH\")\nUse a prefix during the RUN instruction (RUN export PYTHONPATH=\/:$PYTHONPATH && <command>)\n\nThe former will persist the changes across layers. The latter will take effect in that layer\/RUN command.","Q_Score":2,"Tags":"python,docker","A_Id":47079624,"CreationDate":"2017-11-02T15:50:00.000","Title":"Adding path to pythonpath in dockerbuild file","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"What's the best way to manage multiple Python installations (long-term) if I've already installed Python 3 via brew? In the past, Python versions were installed here, there, and everywhere, because I used different tools to install various updates. As you can imagine, this eventually became a problem.\nI was once in a situation where a package used in one of my projects only worked with Python 3.4, but I had recently updated to 3.6. My code no longer ran, and I had to scour the system for Python 3.4 to actually fire up the project.
It was a huge PITA. \nI recently wiped my computer and would like to avoid some of my past mistakes. Perhaps this is na\u00efve, but I'd like to limit version installation to brew. (Unless that's nonsensical \u2014 I'm open to other suggestions!) Furthermore, I'd like to know how to resolve my past version management woes (i.e. situations like the one above). I've heard of pyenv, but would that conflict with brew?\nThanks!","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":432,"Q_Id":47082736,"Users Score":0,"Answer":"I agree with using virtualenv; it allows you to manage different Python versions separately for different projects and clients. \nThis basically allows each project to have its own dependencies, isolated from the others.","Q_Score":0,"Tags":"python,macos,homebrew","A_Id":47084214,"CreationDate":"2017-11-02T18:58:00.000","Title":"Managing multiple Python versions on OSX","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"What's the best way to manage multiple Python installations (long-term) if I've already installed Python 3 via brew? In the past, Python versions were installed here, there, and everywhere, because I used different tools to install various updates. As you can imagine, this eventually became a problem.\nI was once in a situation where a package used in one of my projects only worked with Python 3.4, but I had recently updated to 3.6. My code no longer ran, and I had to scour the system for Python 3.4 to actually fire up the project. It was a huge PITA. \nI recently wiped my computer and would like to avoid some of my past mistakes. Perhaps this is na\u00efve, but I'd like to limit version installation to brew. (Unless that's nonsensical \u2014 I'm open to other suggestions!) Furthermore, I'd like to know how to resolve my past version management woes (i.e. situations like the one above). I've heard of pyenv, but would that conflict with brew?\nThanks!","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":432,"Q_Id":47082736,"Users Score":2,"Answer":"Use virtualenvs to reduce package clashes between independent projects. After activating the venv, use pip to install packages. This way each project has an independent view of the package space. \nI use brew to install both Python 2.7 and 3.6. The venv utility from each of these will build a 2 or 3 venv respectively. \nI also have pyenv installed from brew, which I use if I want a specific version that is not the latest in brew. After activating a specific version in a directory, I will then create a venv and use this to manage the package isolation. \nI can't really say what is best. Let's see what other folks say.","Q_Score":0,"Tags":"python,macos,homebrew","A_Id":47083151,"CreationDate":"2017-11-02T18:58:00.000","Title":"Managing multiple Python versions on OSX","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"The doc says:\n\nWith this option you can configure the maximum number of tasks a worker can execute before it\u2019s replaced by a new process.\n\nUnder what conditions will a worker be replaced by a new process?
Does this setting mean that a worker, even with multiple processes, can only process one task at a time?","AnswerCount":1,"Available Count":1,"Score":1.0,"is_accepted":false,"ViewCount":3499,"Q_Id":47089489,"Users Score":7,"Answer":"It means that when Celery has executed more tasks than the limit on one worker (the \"worker\" is a process if you use the default process pool), it will restart the worker automatically.\nSay you use Celery for database manipulation and you forget to close the database connection; the auto-restart mechanism will help you close all pending connections.","Q_Score":4,"Tags":"python-3.x,celery","A_Id":47645248,"CreationDate":"2017-11-03T06:10:00.000","Title":"What does [Max tasks per child setting] exactly mean in Celery?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"As in the title, when running the appserver I get a DistributionNotFound exception for google-cloud-storage:\n\nFile \"\/home\/[me]\/Desktop\/apollo\/lib\/pkg_resources\/init.py\", line 867, in resolve\n raise DistributionNotFound(req, requirers)\n DistributionNotFound: The 'google-cloud-storage' distribution was not found and is required by the application\n\nRunning pip show google-cloud-storage finds it just fine, in the site-packages dir of my venv. Everything seems to be in order with python -c \"import sys; print('\\n'.join(sys.path))\" too; the cloud SDK dir is in there too, if that matters.\nNot sure what to do next.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1810,"Q_Id":47104930,"Users Score":0,"Answer":"The solution for me was that both google-cloud-storage and pkg_resources need to be in the same directory.\nIt sounds like your google-cloud-storage is in the venv and your pkg_resources is in the lib folder.","Q_Score":7,"Tags":"google-cloud-platform,google-cloud-storage,google-cloud-python","A_Id":65637184,"CreationDate":"2017-11-03T21:56:00.000","Title":"google-cloud-storage distribution not found despite being installed in venv","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"What is the difference between the \"AWS Command Line Interface\" and the \"AWS Elastic Beanstalk Command Line Interface\"? \nDo I need both to deploy a Django project through AWS Elastic Beanstalk?\nThank you!","AnswerCount":3,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":1072,"Q_Id":47106848,"Users Score":2,"Answer":"You should start with the EBCLI and then involve the AWSCLI where the EBCLI falls short.\nThe AWSCLI (aws) allows you to run commands for a bunch of different services, whereas the EBCLI (eb) is specific to Elastic Beanstalk. The EBCLI makes a lot of tedious tasks easier because it is less hands-on than the AWS CLI.
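Returning to the max-tasks-per-child answer above, here is a minimal way to set it, assuming Celery 4's lowercase setting names and a hypothetical broker URL:

```python
from celery import Celery

app = Celery("tasks", broker="redis://localhost:6379/0")  # broker is an assumption

# After a pool process has executed 100 tasks it is replaced by a fresh
# process, which releases leaked resources such as open DB connections.
app.conf.worker_max_tasks_per_child = 100
```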
I have observed that, for most of my tasks, the EBCLI is sufficient; I use the AWS CLI and the AWS SDKs otherwise.\nConsider deploying your Django app.\n\nYou could start off by performing eb init, which would take you through an interactive set of menus, from which you would choose your region and solution stack (Python).\nNext, you would perform eb create, which creates an application version and subsequently an Elastic Beanstalk environment for you.\n\nThe above two EBCLI steps translate to half a dozen or more AWSCLI steps. Furthermore, a lot of the processes that the EBCLI hides from you involve multiple AWS services, which can make the task of replicating the EBCLI through the AWS CLI all the more tedious and error-prone.","Q_Score":3,"Tags":"python,django,amazon-web-services,aws-cli,amazon-elastic-beanstalk","A_Id":47276695,"CreationDate":"2017-11-04T02:40:00.000","Title":"What is the difference between AWSCLI and AWSEBCLI?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"What is the difference between \"AWS Command Line Interface\" and \"AWS Elastic Beanstalk Command Line Interface\"? \nDo I need both to deploy a Django project through AWS Elastic Beanstalk?\nThank you!","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":1072,"Q_Id":47106848,"Users Score":0,"Answer":"You only need eb to deploy and control Elastic Beanstalk. You can use aws to control any other resource in AWS. You can also use aws for lower-level control of Elastic Beanstalk.","Q_Score":3,"Tags":"python,django,amazon-web-services,aws-cli,amazon-elastic-beanstalk","A_Id":47106874,"CreationDate":"2017-11-04T02:40:00.000","Title":"What is the difference between AWSCLI and AWSEBCLI?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"For example, I want to count the number of files in a directory in real time. As the Python script will be running continuously in the background, when I add a file to the directory, the value of the count variable should update in my script.\nSo, I need a solution with which the Python script keeps running and updates the value at runtime.","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":152,"Q_Id":47110412,"Users Score":0,"Answer":"You can use 'Task Scheduler' if you are using Windows, or 'crontab' if you are using Linux; those will run the script in the background for you.\nThere are different ways to do the thing that you are asking for.\nFor example, you can run the script that counts the files, and each time it runs, it writes the number of files it found to a txt file somewhere.","Q_Score":0,"Tags":"python,python-3.x","A_Id":47110895,"CreationDate":"2017-11-04T11:50:00.000","Title":"Python script to keep a Python file running all the time","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"For example, I want to count the number of files in a directory in real time. 
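The counting idea from the accepted answer above can also be sketched as a single long-running poller; the directory path and interval below are illustrative placeholders, not values from the question:

```python
# Poll a directory at an interval and record the current file count.
import os
import time

WATCH_DIR = "/path/to/watch"  # hypothetical directory

while True:
    count = sum(
        os.path.isfile(os.path.join(WATCH_DIR, name))
        for name in os.listdir(WATCH_DIR)
    )
    with open("count.txt", "w") as f:  # write the count for other tools to read
        f.write(str(count))
    time.sleep(5)  # avoid busy-waiting; re-check every 5 seconds
```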
As the Python script will be running continuously in the background, when I add a file to the directory, the value of the count variable should update in my script.\nSo, I need a solution with which the Python script keeps running and updates the value at runtime.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":152,"Q_Id":47110412,"Users Score":0,"Answer":"You can use the standard \"threading\" module (e.g., threading.Timer), which can re-run the code every n seconds (minutes, hours, etc.).","Q_Score":0,"Tags":"python,python-3.x","A_Id":47110985,"CreationDate":"2017-11-04T11:50:00.000","Title":"Python script to keep a Python file running all the time","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I need a way to ensure only one Python process is processing a directory.\nThe lock\/semaphore should be local to the machine (Linux operating system). \nNetworking or NFS is not involved.\nI would like to avoid file-based locks, since I don't know where I should put the lock file.\nThere are libraries on PyPI which provide POSIX IPC.\nIs there no way to use Linux semaphores with Python without a third-party library?\nThe lock provided by multiprocessing.Lock does not help, since the two Python interpreters don't share the same parent.\nThreading is not involved. All processes have only one thread.\nI am using Python 2.7 on Linux.\nHow to synchronize two Python scripts on Linux (without file-based locking)?\nRequired feature: If one process dies, then the lock\/semaphore should get released by the operating system.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1095,"Q_Id":47136965,"Users Score":0,"Answer":"I would like to avoid file-based locks, since I don't know where I should put the lock file.\n\nYou can lock the existing file or directory (the one being processed).\n\nRequired feature: If one process dies, then the lock\/semaphore should get released by the operating system. \n\nThat is exactly how file locks work.","Q_Score":2,"Tags":"python,linux,ipc,python-multiprocessing","A_Id":47141169,"CreationDate":"2017-11-06T12:22:00.000","Title":"Linux IPC: Locking, but not file-based locking","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have been using pywinauto to open a command prompt (MinGW-64) and pass commands using type_keys.\nIt was working properly on my local system but, when I hosted my code on an RDP server, I am not able to restore the window and pass the commands when RDP is in a minimized state. \nPlease give me a proper solution and let me know if any package serves the same purpose.\nThanks!","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":1048,"Q_Id":47137372,"Users Score":1,"Answer":"There are several points to improve.\n\nIt's better to use the standard Python module subprocess with stdin redirection to communicate with a command-line application. I'd highly recommend this approach, which is resistant to RDP minimizing.\nRDP doesn't provide GUI context in minimized state (any GUI automation tool will give up here). 
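The IPC answer's suggestion (lock the file or directory being processed itself, so no separate lock-file location is needed) might look like the sketch below on Linux; the path is hypothetical, and flock locks are released by the kernel if the holding process dies:

```python
# Sketch: take an exclusive, non-blocking flock on the work directory.
import fcntl
import os
import sys

dir_fd = os.open("/data/dir-to-process", os.O_RDONLY)  # hypothetical path
try:
    fcntl.flock(dir_fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
except IOError:
    sys.exit("another process is already working on this directory")

# ... process the directory; the lock vanishes automatically on exit or crash
```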
To work around it, simply switch RDP from full-screen mode to a restored window state (non-minimized!), run your GUI automation script inside the RDP window, and quickly switch to your local machine (to another window) to continue your work without affecting the automation script. Just don't ever minimize RDP. It's a quick manual hack if you only do it rarely.\nThe third thing, to automate this, is using the command psexec with the -i (interactive) key. This way you can run remote commands with GUI context automatically, without manual hacks. Just find and download PsExec (part of the Sysinternals PsTools suite; recommended) or learn similar commands for PowerShell.\nTo eliminate this problem entirely, just use VNC server software like TightVNC instead of RDP. If you used RDP at least once, you have to reboot the remote machine though. One more possible pitfall is the fact that the VNC display is not virtual (unlike an RDP session), hence it requires relevant display drivers for your video card. Otherwise you may be faced with a black screen or a small resolution. The big plus of VNC is that it keeps GUI context even if you disconnect from the current session (e.g., you closed your laptop before going home).","Q_Score":0,"Tags":"python-2.7,pywinauto","A_Id":47142927,"CreationDate":"2017-11-06T12:45:00.000","Title":"How to restore a window in RDP using pywinauto, when RDP is in Minimized state","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have installed Python3 and pip3 on my MacBook Pro. \nRunning python --version shows Python 3.6.3\nRunning pip --version shows pip 9.0.1 from \/usr\/local\/lib\/python3.6\/site-packages (python 3.6)\nHowever, running aws --version shows aws-cli\/1.11.170 Python\/2.7.10 Darwin\/16.7.0 botocore\/1.7.28\nLooks like it's using Python 2. How do I fix this?","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":3869,"Q_Id":47146493,"Users Score":2,"Answer":"Why is it really an issue?\nI assume you installed the AWS CLI tool by downloading the installer directly. If you want to \"fix\" it then uninstall the CLI tool, and then install it through pip with pip install awscli.","Q_Score":2,"Tags":"python,amazon-web-services,boto3,aws-cli","A_Id":47146546,"CreationDate":"2017-11-06T21:54:00.000","Title":"AWS CLI is using Python 2 instead of Python 3","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I'm trying to create a Python script to help me manage my car radio's music library. The idea is the following: I have a USB flash drive with 2-hour podcast mp3 files. Since I never drive such long journeys, the script splits the files into 5-minute fragments and removes the originals.\nNow, the next thing I would want to do is automatically remove the ones I've already played. 
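The pywinauto answer's first point (drive the console application through subprocess instead of GUI automation) could be sketched as below; the shell path is a hypothetical example, not from the question:

```python
# Talk to a command-line shell via pipes; this keeps working even when the
# RDP window is minimized, since no GUI context is involved.
import subprocess

proc = subprocess.Popen(
    ["C:\\msys64\\usr\\bin\\bash.exe"],  # hypothetical shell location
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    stderr=subprocess.STDOUT,
)
out, _ = proc.communicate(b"echo hello\nexit\n")  # send commands, read output
print(out.decode())
```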
My first idea was something like DRM or a self-destructing file that erases itself once it's played, but from what I have found online that's pretty much impossible.\nSo, the question is: can I check with pydub if the file has already been played, so that when I arrive home I can plug the USB drive into the computer, run the script, detect the played files, and erase them?\nThanks, and sorry if it's a dumb question!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":57,"Q_Id":47146728,"Users Score":0,"Answer":"No. Playing an audio file doesn't modify the file.\nUnless your car stereo writes some sort of log to the flash drive of what it's played -- which is unlikely; I've never seen or heard of a stereo that did that -- there's no way to determine which files have been played.","Q_Score":0,"Tags":"python,audio,pydub","A_Id":47147058,"CreationDate":"2017-11-06T22:13:00.000","Title":"Can pydub know if a file has ever been played?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"After running sudo pip install google.cloud.pubsub\nI am running the following Python code on an Ubuntu Google Compute Engine instance:\nimport google.cloud.pubsub_v1\nI get the following error when importing this:\n\nImportError: No module named pubsub_v1 attribute 'SubscriberClient'\n\nCan anyone tell me how to fix this?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":558,"Q_Id":47193190,"Users Score":1,"Answer":"Update your google-cloud-pubsub to the latest version. It should resolve the issue.","Q_Score":2,"Tags":"python,cloud,publish-subscribe","A_Id":51531222,"CreationDate":"2017-11-09T03:02:00.000","Title":"AttributeError: 'module' object has no attribute 'SubscriberClient'","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I wanted to know whether the win32com.client package can be used in a Python app if I want to run that application on a Unix server. My goal is to set up a cron job on the Unix server to automate some mail-related tasks using win32com.client. All I want to know is whether this whole win32com setup will work smoothly on the Unix server.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":624,"Q_Id":47194826,"Users Score":0,"Answer":"No. 
win32com can only be installed and used on a Windows OS.","Q_Score":0,"Tags":"python,unix,win32com","A_Id":47194889,"CreationDate":"2017-11-09T05:56:00.000","Title":"Small query regarding win32com.client","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I've been using gcloud and gsutil for a while, but now suddenly for any gsutil command I run I get errors:\nTraceback (most recent call last):\n File \"\/Users\/julian\/google-cloud-sdk\/bin\/bootstrapping\/gsutil.py\", line 12, in \n import bootstrapping\n File \"\/Users\/julian\/google-cloud-sdk\/bin\/bootstrapping\/bootstrapping.py\", line 22, in \n from googlecloudsdk.core.credentials import store as c_store\n File \"\/Users\/julian\/google-cloud-sdk\/lib\/googlecloudsdk\/core\/credentials\/store.py\", line 27, in \n from googlecloudsdk.core import http\n File \"\/Users\/julian\/google-cloud-sdk\/lib\/googlecloudsdk\/core\/http.py\", line 31, in \n from googlecloudsdk.core.resource import session_capturer\n File \"\/Users\/julian\/google-cloud-sdk\/lib\/googlecloudsdk\/core\/resource\/session_capturer.py\", line 32, in \n from googlecloudsdk.core.resource import yaml_printer\n File \"\/Users\/julian\/google-cloud-sdk\/lib\/googlecloudsdk\/core\/resource\/yaml_printer.py\", line 17, in \n from googlecloudsdk.core.resource import resource_printer_base\n File \"\/Users\/julian\/google-cloud-sdk\/lib\/googlecloudsdk\/core\/resource\/resource_printer_base.py\", line 38, in \n from googlecloudsdk.core.resource import resource_projector\n File \"\/Users\/julian\/google-cloud-sdk\/lib\/googlecloudsdk\/core\/resource\/resource_projector.py\", line 34, in \n from google.protobuf import json_format as protobuf_encoding\nImportError: cannot import name json_format\n\nI tried gcloud update and gcloud reinstall but still get the same problem. Is there a conflict with the Python installation? Any other ideas?","AnswerCount":5,"Available Count":2,"Score":0.0399786803,"is_accepted":false,"ViewCount":2067,"Q_Id":47200623,"Users Score":1,"Answer":"I had the same issue. I am using a Mac. 
\nLooking into \/usr\/local\/lib\/python2.7\/site-packages, I found a Homebrew protobuf link.\nI removed it with \"rm homebrew-protobuf.pth\".\nThen gsutil started working.","Q_Score":9,"Tags":"python,gcloud,gsutil,google-cloud-sdk","A_Id":49879820,"CreationDate":"2017-11-09T11:24:00.000","Title":"gsutil no longer works?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I've been using gcloud and gsutil for a while, but now suddenly for any gsutil command I run I get errors:\nTraceback (most recent call last):\n File \"\/Users\/julian\/google-cloud-sdk\/bin\/bootstrapping\/gsutil.py\", line 12, in \n import bootstrapping\n File \"\/Users\/julian\/google-cloud-sdk\/bin\/bootstrapping\/bootstrapping.py\", line 22, in \n from googlecloudsdk.core.credentials import store as c_store\n File \"\/Users\/julian\/google-cloud-sdk\/lib\/googlecloudsdk\/core\/credentials\/store.py\", line 27, in \n from googlecloudsdk.core import http\n File \"\/Users\/julian\/google-cloud-sdk\/lib\/googlecloudsdk\/core\/http.py\", line 31, in \n from googlecloudsdk.core.resource import session_capturer\n File \"\/Users\/julian\/google-cloud-sdk\/lib\/googlecloudsdk\/core\/resource\/session_capturer.py\", line 32, in \n from googlecloudsdk.core.resource import yaml_printer\n File \"\/Users\/julian\/google-cloud-sdk\/lib\/googlecloudsdk\/core\/resource\/yaml_printer.py\", line 17, in \n from googlecloudsdk.core.resource import resource_printer_base\n File \"\/Users\/julian\/google-cloud-sdk\/lib\/googlecloudsdk\/core\/resource\/resource_printer_base.py\", line 38, in \n from googlecloudsdk.core.resource import resource_projector\n File \"\/Users\/julian\/google-cloud-sdk\/lib\/googlecloudsdk\/core\/resource\/resource_projector.py\", line 34, in \n from google.protobuf import json_format as protobuf_encoding\nImportError: cannot import name json_format\n\nI tried gcloud update and gcloud reinstall but still get the same problem. Is there a conflict with the Python installation? Any other ideas?","AnswerCount":5,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":2067,"Q_Id":47200623,"Users Score":0,"Answer":"For CentOS 7.5 (and probably earlier as well) using the Google Cloud SDK RPM install, removing the protobuf-python package with yum remove protobuf-python will solve this.","Q_Score":9,"Tags":"python,gcloud,gsutil,google-cloud-sdk","A_Id":51340051,"CreationDate":"2017-11-09T11:24:00.000","Title":"gsutil no longer works?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm wondering what the proper reaction is to events that lead to the error_cb callback being called.\nInitially our code was always throwing an Exception from the error_cb regardless of anything. We're running our stuff in Kubernetes, so restarting a consumer\/producer is (technically) not a big deal. But the number of restarts was quite significant, so we added a couple of exceptions, which we just log without quitting:\n\nKafkaError._MSG_TIMED_OUT (both consumer and producer)\nKafkaError._TRANSPORT (consumer)\n\nThese are the ones that we see a lot, and confluent-kafka-python seems to be able to recover from them without any extra help. 
\nNow I'm wondering if we were right to throw any exceptions in error_cb to begin with. Should we start treating error_cb just as a logging function, and only react to exceptions thrown explicitly by poll and flush?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1415,"Q_Id":47215245,"Users Score":2,"Answer":"librdkafka will do its best to automatically recover from any errors it hits, so the error_cb is mostly informational and it is generally not advisable for the application to do anything drastic upon such an error.\n\n_MSG_TIMED_OUT and _TIMED_OUT- Kafka protocol requests timed out, typically due to network or broker issues. The requests will be retried according to the retry configuration, or the corresponding API \/ functionality willl propagate a more detailed error (e.g., failure to commit offsets). This error can safely be ignored.\n_TRANSPORT - the broker connection went down or could not be established, again this is a temporary network or broker problem and may too be safely ignored.","Q_Score":0,"Tags":"python,apache-kafka,confluent-platform","A_Id":47280609,"CreationDate":"2017-11-10T03:31:00.000","Title":"error_cb in confluent-kafka-python producers and consumers","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I want to read from standard input chunk by chunk until EOF. For example, I could have a very large file, and I want to read in and process 1024 bytes at a time from STDIN until EOF is encountered. I've seen sys.stdin.read() which saves everything in memory at once. This isn't feasible because there might not be enough space available to store the entire file. There is also for \"line in sys.stdin\", but that separates the input by newline only, which is not what I'm looking for. Is there any way to accomplish this in Python?","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":2344,"Q_Id":47232558,"Users Score":0,"Answer":"You can read stdin (or any file) in chunks using f.read(n), where n is the integer number of bytes you want to read as an argument. It will return the empty string if there is nothing left in the file.","Q_Score":2,"Tags":"python,python-3.x","A_Id":47232633,"CreationDate":"2017-11-10T23:16:00.000","Title":"Python: How to read from stdin by byte chunks until EOF?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am working with python3 on Debian Stable Linux. I see that I can install most packages from Debian repository as well as using pip install packagename command. \nIs there any difference between these two approaches and should I prefer one over the other? Are the location of packages installed different in two methods? 
Thanks for your answers\/comments.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":44,"Q_Id":47235661,"Users Score":0,"Answer":"It is much easier to install or uninstall modules with the pip or pip3 command.","Q_Score":0,"Tags":"python,linux,python-3.x,package","A_Id":47235704,"CreationDate":"2017-11-11T07:48:00.000","Title":"Installing Python packages with pip versus installing from the Linux repository","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have two scripts inside a bigger system I'm developing. Let's call one foo.py and the other one bar.py.\nfoo creates files that bar will afterwards read and delete. Now, if foo is running and working on a file, bar will mess things up by working on the same file before foo is finished.\nfoo and bar get started automatically by other stuff, not manually by someone that runs them. How can I make sure that, if foo is working on a file, bar will not start?\nI have thought about editing both of them so that foo writes a 1 to a text file at the beginning of the execution and a 0 when it finishes, and making bar read the text file to check whether it can start. But I'm not really sure this is the most elegant solution, and also I don't want to block bar if foo ever fails in the middle of execution, leaving the file at 1.","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":878,"Q_Id":47240282,"Users Score":0,"Answer":"A possible strategy:\n\nfoo builds todo.foo.\nIt renames it to todo.bar when done.\nbar looks for todo.bar, processes it, and deletes it when done.","Q_Score":2,"Tags":"python,python-3.x","A_Id":47240379,"CreationDate":"2017-11-11T16:41:00.000","Title":"Prevent python script from running if another one is being executed","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"CentOS release 6.9 (Final) : Linux version 2.6.32-696.10.2.el6.i686 (mockbuild@c1bl.rdu2.centos.org) (gcc version 4.4.7 20120313 (Red Hat 4.4.7-18) (GCC) ) #1 SMP Tue Sep 12 13:54:13 UTC 2017\nInstall command: \nwget https:\/\/www.python.org\/ftp\/python\/3.5.4\/Python-3.5.4.tgz\ntar -zxvf Python-3.5.4.tgz\ncd Python-3.5.4\nmkdir \/usr\/local\/python3.5\n.\/configure --prefix=\/usr\/local\/python3.5\nError step:\nmake\ngcc -pthread -c -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -Werror=declaration-after-statement -I. -I.\/Include -DPy_BUILD_CORE \\\n -DGITVERSION=\"\\\"LC_ALL=C \\\"\" \\\n -DGITTAG=\"\\\"LC_ALL=C \\\"\" \\\n -DGITBRANCH=\"\\\"LC_ALL=C \\\"\" \\\n -o Modules\/getbuildinfo.o .\/Modules\/getbuildinfo.c\n\/bin\/sh: \/usr\/bin\/gcc: Permission denied\nmake: *** [Modules\/getbuildinfo.o] Error 126\n\nThe permissions of \/bin\/sh and \/usr\/bin\/gcc: \n[root@iZ2814clj1uZ Python-3.5.4]# ll \/bin\/sh\nlrwxrwxrwx 1 root root 4 May 11 2017 \/bin\/sh -> bash\n[root@iZ2814clj1uZ Python-3.5.4]# ll \/bin\/bash\n-rwxr-xr-x 1 root root 872372 Mar 23 2017 \/bin\/bash\n[root@iZ2814clj1uZ Python-3.5.4]# ll \/usr\/bin\/gcc\n-rwxr-xr-x 2 root root 234948 Mar 22 2017 \/usr\/bin\/gcc\n\nI have tried chmod 777 \/bin\/sh, \/bin\/bash, \/usr\/bin\/gcc and rebooting the system, but it doesn't work. 
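The rename strategy in the answer above works because os.rename is atomic on POSIX filesystems (for a source and destination on the same filesystem), so bar can never observe a half-written file; a sketch with illustrative file names:

```python
import os

def foo_finish():
    # foo writes under its working name, then atomically hands off.
    with open("todo.foo", "w") as f:
        f.write("work item\n")
    os.rename("todo.foo", "todo.bar")  # atomic on the same filesystem

def bar_poll():
    # bar only ever looks for the finished name.
    if os.path.exists("todo.bar"):
        with open("todo.bar") as f:
            data = f.read()  # process the completed file
        os.remove("todo.bar")
        return data
    return None
```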
Has anyone else had this problem?\nThanks for any help\/suggestions.\n\nUpdate 2017-11-13: SELinux audit log \ntype=USER_AUTH msg=audit(1510502012.108:840): user pid=4273 uid=0 auid=4294967295 ses=4294967295 msg='op=PAM:authentication acct=\"root\" exe=\"\/usr\/sbin\/sshd\" hostname=[ip address1] addr=[ip address1] terminal=ssh res=success'\ntype=USER_ACCT msg=audit(1510502012.108:841): user pid=4273 uid=0 auid=4294967295 ses=4294967295 msg='op=PAM:accounting acct=\"root\" exe=\"\/usr\/sbin\/sshd\" hostname=[ip address1] addr=[ip address1] terminal=ssh res=success'\ntype=CRYPTO_KEY_USER msg=audit(1510502012.108:842): user pid=4273 uid=0 auid=4294967295 ses=4294967295 msg='op=destroy kind=session fp=? direction=both spid=4274 suid=74 rport=31432 laddr=[ip address] lport=5676 exe=\"\/usr\/sbin\/sshd\" hostname=? addr=[ip address1] terminal=? res=success'\ntype=USER_AUTH msg=audit(1510502012.108:843): user pid=4273 uid=0 auid=4294967295 ses=4294967295 msg='op=success acct=\"root\" exe=\"\/usr\/sbin\/sshd\" hostname=? addr=[ip address1] terminal=ssh res=success'\ntype=CRED_ACQ msg=audit(1510502012.109:844): user pid=4273 uid=0 auid=4294967295 ses=4294967295 msg='op=PAM:setcred acct=\"root\" exe=\"\/usr\/sbin\/sshd\" hostname=[ip address1] addr=[ip address1] terminal=ssh res=success'\ntype=LOGIN msg=audit(1510502012.109:845): pid=4273 uid=0 old auid=4294967295 new auid=0 old ses=4294967295 new ses=106\ntype=USER_START msg=audit(1510502012.111:846): user pid=4273 uid=0 auid=0 ses=106 msg='op=PAM:session_open acct=\"root\" exe=\"\/usr\/sbin\/sshd\" hostname=[ip address1] addr=[ip address1] terminal=ssh res=success'\ntype=USER_LOGIN msg=audit(1510502012.189:847): user pid=4273 uid=0 auid=0 ses=106 msg='op=login id=0 exe=\"\/usr\/sbin\/sshd\" hostname=[ip address1] addr=[ip address1] terminal=ssh res=success","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":162,"Q_Id":47248056,"Users Score":0,"Answer":"Today I found the answer. I had installed security software which prohibited creating and executing binary files in the system directories. So I uninstalled it and my system is back to normal.","Q_Score":1,"Tags":"python,linux,bash,gcc","A_Id":47265562,"CreationDate":"2017-11-12T11:07:00.000","Title":"CentOS 6.9: install Python 3.5 displays permission denied error after make","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I've got a Google App Engine application that loads time-series data in real time into a Google Datastore NoSQL-style table. I was hoping to get some feedback around the right type of architecture to pull this data into a web-application-style chart (and ideally something I could also plug into a content management system like WordPress).\nMost of my server-side code is Python. What's a reasonable client-server setup to pull the data from the Datastore database and display it on my webpage? Ideally I'd have something that scales and doesn't cause an unnecessary number of reads on my database (potentially using Google App Engine's built-in caching, etc.).\nI'm guessing this is a common use case, but I'd like to get an idea of what might be some best practices around this. 
I've seen some examples using client-side JavaScript\/AJAX with server-side PHP to read the DB - is this really the best way?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":70,"Q_Id":47254930,"Users Score":0,"Answer":"Welcome to \"it depends\".\nYou have some choices. Imagine the classic four-quadrant chart. Along one axis is data size, along the other is staleness\/freshness.\nIf your time-series data changes rapidly but is small enough to safely be retrieved within a request, you can query for it on demand, convert it to JSON, and squirt it to the browser to be rendered by the JavaScript charting package of your choice. If the data is large, your app will need to do some sort of server-side pre-processing so that when the data is needed, it can be retrieved in sufficiently few requests that the request won't time out. This might involve something data-dependent like pre-bucketing the time series. \nIf the data changes slowly, you have the option of generating your chart on the server side, perhaps using matplotlib. When new data is ingested, or perhaps at intervals, spawn off a task to generate and cache the chart (or JSON to hand to the front-end) as a blob in the datastore. If the data is sufficiently large that a task will time out, you might need to use a backend process. If the data is sufficiently large and you don't pre-process, you're in the quadrant of unhappiness.\nIn my experience, GAE memcache is best for caching data between requests where the time between requests is very short. Don't rely on generating artifacts, stuffing them in memcache, and hoping that they'll be there a few minutes later. I've rarely seen that work.","Q_Score":0,"Tags":"python,wordpress,google-app-engine,charts,google-cloud-datastore","A_Id":47257054,"CreationDate":"2017-11-12T22:55:00.000","Title":"Load chart data into a webapp from google datastore","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"Non-zero exit code (1): \n _pydevd_bundle\/pydevd_cython.c:13:20: fatal error: Python.h: No such file or directory\n compilation terminated.\n error: command 'x86_64-linux-gnu-gcc' failed with exit status 1\n\nPlease help me resolve this error when trying to install Cython in PyCharm.","AnswerCount":4,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":8886,"Q_Id":47257766,"Users Score":2,"Answer":"For Python 3.7, sudo apt install libpython3.7-dev solved my problem.","Q_Score":38,"Tags":"python,python-3.x,compiler-errors,pycharm,cython","A_Id":61422234,"CreationDate":"2017-11-13T05:43:00.000","Title":"Compile Cython Extensions Error - PyCharm IDE","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm using VSCode on OSX to start Python development. 
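The answer's "cache the JSON as a blob in the datastore" option could be sketched as below, assuming the first-generation App Engine Python runtime where google.appengine.ext.ndb is available; the model and key names are illustrative, not from the question:

```python
import json
from google.appengine.ext import ndb  # first-generation GAE runtime only

class ChartCache(ndb.Model):
    payload = ndb.TextProperty()

def refresh_chart(points):
    # Run from a task (or cron) whenever new data is ingested.
    ChartCache(id="latest", payload=json.dumps(points)).put()

def chart_json():
    # Request handlers serve the pre-built blob instead of re-querying.
    entity = ChartCache.get_by_id("latest")
    return entity.payload if entity else "[]"
```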
I'm supposed to be using Python 3.6.xxxx, but when I use python -V I get 2.7.10\nMy path variable is\n\nbash: \/Library\/Frameworks\/Python.framework\/Versions\/3.6\/bin:\/usr\/local\/bin:\/usr\/bin:\/bin:\/usr\/sbin:\/sbin:\/usr\/local\/share\/dotnet:\/Library\/Frameworks\/Mono.framework\/Versions\/Current\/Commands:\/Users\/leonardo\/.rbenv\/shims:\/Library\/Frameworks\/Python.framework\/Versions\/3.6\/bin:\/Users\/leonardo\/bin:\/Users\/leonardo\/bin: No such file or directory\n\nI also can't get pylint to work in VSCode because, when I try to install it, it complains that pip is unavailable...\nWhat should I do?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":420,"Q_Id":47264196,"Users Score":1,"Answer":"I managed to do it by going to the VS Code Command Palette (Command+Shift+P), using the command Python: Select Interpreter, and choosing 3.6.","Q_Score":1,"Tags":"python,macos,visual-studio-code","A_Id":47381497,"CreationDate":"2017-11-13T12:16:00.000","Title":"How to force Python 3.6 on Mac OSX","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am working on a Windows computer and have been using Git Bash up until now without a problem. However, Git Bash seems to be missing some commands that Cygwin can provide, so I switched to Cygwin.\nI need to use the AWS CLI with Cygwin, but any time I input any aws command, I get the following error:\n\nC:\\users\\myusername\\appdata\\local\\programs\\python\\python36\\python.exe:\n can't open file\n '\/cygdrive\/c\/Users\/myusername\/AppData\/Local\/Programs\/Python\/Python36\/Scripts\/aws':\n [Errno 2] No such file or directory\n\nI've seen other questions about getting Cygwin working with AWS, but they seem to talk about the AWS CLI being incompatible with Windows' Anaconda version of Python (which mine doesn't seem to be). Any thoughts on how to fix this? Thanks.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":942,"Q_Id":47289424,"Users Score":0,"Answer":"Ok, so I spent ages trying to do this as well because I wanted to get set up for the fast.ai course. Nothing seemed to work. However, I uninstalled Anaconda3 and installed Anaconda2 instead. That did the trick!","Q_Score":0,"Tags":"python,amazon-web-services,cygwin,aws-cli","A_Id":47595992,"CreationDate":"2017-11-14T15:24:00.000","Title":"AWS CLI not working in Cygwin","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am working on a Windows computer and have been using Git Bash up until now without a problem. However, Git Bash seems to be missing some commands that Cygwin can provide, so I switched to Cygwin.\nI need to use the AWS CLI with Cygwin, but any time I input any aws command, I get the following error:\n\nC:\\users\\myusername\\appdata\\local\\programs\\python\\python36\\python.exe:\n can't open file\n '\/cygdrive\/c\/Users\/myusername\/AppData\/Local\/Programs\/Python\/Python36\/Scripts\/aws':\n [Errno 2] No such file or directory\n\nI've seen other questions about getting Cygwin working with AWS, but they seem to talk about the AWS CLI being incompatible with Windows' Anaconda version of Python (which mine doesn't seem to be). Any thoughts on how to fix this? 
Thanks.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":942,"Q_Id":47289424,"Users Score":0,"Answer":"You are mixing a Cygwin POSIX path with a non-Cygwin Python.\nC:\\users\\myusername\\appdata\\local\\programs\\python\\python36\\python.exe \nis not the Cygwin Python, so it can't open the file: \/cygdrive\/c\/Users\/myusername\/AppData\/Local\/Programs\/Python\/Python36\/Scripts\/aws is not a Windows path that it can understand. Only Cygwin programs understand it.\nTwo possible solutions:\n1. Use a Windows path\n2. Use a Cygwin Python","Q_Score":0,"Tags":"python,amazon-web-services,cygwin,aws-cli","A_Id":47312502,"CreationDate":"2017-11-14T15:24:00.000","Title":"AWS CLI not working in Cygwin","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"Currently I have the following in place in my TFS:\n\nA TFS job calls PowerShell script1.\nPowerShell script1 calls PowerShell script2 (which contains a module).\nThe PowerShell module calls the Python script using Start-Process, which returns the standard output through -RedirectStandardOutput. The same is captured and returned to TFS.\n\nBut the thing is that all the output from the Python script is returned in one go, so I am not able to get line-by-line logs into TFS instantly.\nCan anyone suggest whether there is a way to return the output from the Python script to TFS instantly, as line-by-line logs?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":138,"Q_Id":47291361,"Users Score":1,"Answer":"You'd have to make the Start-Process call asynchronously, either using Jobs or Runspaces, and then constantly monitor the job\/runspace child session for output to display.","Q_Score":0,"Tags":"python,powershell,tfs","A_Id":47295271,"CreationDate":"2017-11-14T17:01:00.000","Title":"Capturing the output from a Python script called through PowerShell","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I installed Anaconda (Python 3) on a Windows 10 machine. Whilst import pdb works inside a script, I can't use pdb