[{"Question":"I use CentOS 7 with OpenSSL 1.0.2k (openssl-1.0.2k-19.el7.x86_64.rpm)\nI've tried to upgrade to OpenSSL 1.1.1c by:\nyum install openssl11\nwhich basically installs: openssl11-1.1.1c-2.el7.x86_64.rpm and openssl11-libs-1.1.1c-2.el7.x86_64.rpm\nIt doesn't upgrade openssl 1.0.2 to openssl 1.1.1; instead it installs openssl11 and its RPMs alongside the existing openssl version.\nWhen I try to uninstall openssl 1.0.2, it causes some dependency issues, which makes perfect sense since openssl11 provides openssl11 and not openssl:\nopenssl is needed by (installed) python2-cryptography-1.7.2-2.el7.x86_64\nopenssl is needed by (installed) pcs-0.9.162-5.el7.centos.x86_64\nopenssl is needed by (installed) rng-tools-6.3.1-3.el7.x86_64\n\/usr\/bin\/openssl is needed by (installed) authconfig-6.2.8-30.el7.x86_64\nWhat should I do to overcome this?\nThanks","AnswerCount":1,"Available Count":1,"Score":1.0,"is_accepted":false,"ViewCount":23805,"Q_Id":63508872,"Users Score":13,"Answer":"You can't uninstall openssl 1.0.2: CentOS requires it, and you cannot upgrade it until they decide to (but they won't; you'll have to upgrade to CentOS 8).\nopenssl11 is for \"spot\" usage in specific environments, if you need it.","Q_Score":11,"Tags":"python-2.7,openssl,centos7","A_Id":63525186,"CreationDate":"2020-08-20T15:56:00.000","Title":"Upgrading CentOS 7 to OpenSSL 1.1.1 by yum install openssl11","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am writing index.py for my web page test on my Ubuntu laptop.\nI wrote #!usr\/bin\/env python3 at the top of the code and used some print() lines below it.\nWhen I open the webpage under my IP address... 
the page is not printing what I intended and shows the whole code directly.\nCan you see what I did wrong?\nThanks!\nPaul","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":33,"Q_Id":63538389,"Users Score":0,"Answer":"Browsers don't understand Python and they shouldn't run any file on your PC, so they will show the file as if it were HTML.\nTo make a website with Python, you can either use a web framework like Django or just write the output of your Python script into an HTML\/text file.","Q_Score":0,"Tags":"python,html,shebang","A_Id":63538634,"CreationDate":"2020-08-22T16:16:00.000","Title":"shebang python line on ubuntu is not working for my webpage","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"Background\nBelow, I detail two different ways of running Python files - I am confused about the difference between them.\n\nRunning the Python file as an executable.\n\nTo run a Python file as an executable, I must first set a shebang in my file (#!\/usr\/bin\/env python3), then run $ chmod +x filename.py at the command line, then run $ .\/filename.py at the command line to execute the file.\n\nRunning the Python file through the python3 command line command.\n\nTo run a Python file through the python3 command, I open my command line and run $ python3 filename.py.\nMy Question\nI understand that, when running the Python file as an executable, the shebang directs the computer to launch the (in this case) python3 interpreter which will interpret the subsequent code in the file and therefore run the file. When running the file through the python3 command, I understand that this is just another way of directing the computer to launch python3 to interpret the code in the file. To me, these two techniques therefore seem identical.\nAm I missing something? 
What's the difference, if any, between these two ways of running a Python file?","AnswerCount":3,"Available Count":2,"Score":0.0665680765,"is_accepted":false,"ViewCount":1820,"Q_Id":63543484,"Users Score":1,"Answer":"Nope, you have pretty much captured it.\nA practical consequence is that the shebang relieves you from having to remember whether it's python3 frobnicate or python frobnicate or sh frobnicate or bash frobnicate or awk frobnicate or perl frobnicate or...\nThis also makes it easy down the line to change your mind. Many tools of mine have started life as simple shell scripts, then been rewritten in Python or something else; but the calling interface doesn't change.\nBefore Unix, there was an unbridgeable gap between system utilities (which you invoke simply by name) and user scripts (which before the introduction of the shebang always had to be called with an explicit interpreter). You still see remnants of this division in lesser systems. An important consequence was that users were able to easily and transparently wrap or replace standard commands with their own versions. This in some sense democratized the system, and empowered users to try out and evaluate improvement ideas for the system on their own. (Figuring out why your brilliant theory wasn't so great in practice is also an excellent way to learn and improve.) I don't think the importance of this versatility and flexibility can be overstated; it's one of those things which converted us from mere users to enthusiasts.","Q_Score":2,"Tags":"python,python-3.x","A_Id":63544153,"CreationDate":"2020-08-23T04:45:00.000","Title":"Difference Between Running Python File as Executable vs. 
Running from Command Line?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"Background\nBelow, I detail two different ways of running Python files - I am confused about the difference between them.\n\nRunning the Python file as an executable.\n\nTo run a Python file as an executable, I must first set a shebang in my file (#!\/usr\/bin\/env python3), then run $ chmod +x filename.py at the command line, then run $ .\/filename.py at the command line to execute the file.\n\nRunning the Python file through the python3 command line command.\n\nTo run a Python file through the python3 command, I open my command line and run $ python3 filename.py.\nMy Question\nI understand that, when running the Python file as an executable, the shebang directs the computer to launch the (in this case) python3 interpreter which will interpret the subsequent code in the file and therefore run the file. When running the file through the python3 command, I understand that this is just another way of directing the computer to launch python3 to interpret the code in the file. To me, these two techniques therefore seem identical.\nAm I missing something? What's the difference, if any, between these two ways of running a Python file?","AnswerCount":3,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":1820,"Q_Id":63543484,"Users Score":2,"Answer":"In practice, they're identical.\nThe shebang is just a convention that tells the OS what to do with the script file that you've chmod-ed to be executable -- i.e., executable with what. Without it, the OS just treats the file as a text file and will try to execute it as a shell script.","Q_Score":2,"Tags":"python,python-3.x","A_Id":63543511,"CreationDate":"2020-08-23T04:45:00.000","Title":"Difference Between Running Python File as Executable vs. 
Running from Command Line?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I want to open the start menu and open some applications like Paint, MS Word, etc. Is there any module in Python which can do these tasks?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":97,"Q_Id":63549848,"Users Score":0,"Answer":"You can do it directly with the Python subprocess module.\nimport subprocess\nsubprocess.Popen([r'C:\\Windows\\system32\\mspaint.exe'])\nBut if you want to open the start menu and click on the applications, then the PyAutoGUI module can be used.","Q_Score":1,"Tags":"python,windows,user-interface,automation","A_Id":63549956,"CreationDate":"2020-08-23T17:11:00.000","Title":"Want to interact with windows applications using python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am using Dask Futures to speed up a Monte Carlo process in python, and am looking to improve the code by displaying a \"time remaining\" feature to the user. My idea is to somehow grab the time to completion from the previously completed tasks executed by futures to estimate this time remaining value. I see this timing information displayed in the dask scheduler GUI, but is there some way of grabbing this in python code so I can utilize it? Any feedback on this would be greatly appreciated. 
Thank you.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":70,"Q_Id":63563323,"Users Score":0,"Answer":"If you have a scheduler local to your process (this is commonly the case if you are using LocalCluster at client.cluster.scheduler) then you might want to look at the .total_occupancy attribute. If you want to go into more detail then check out .task_groups","Q_Score":0,"Tags":"python,time,dask,dask-distributed","A_Id":63784710,"CreationDate":"2020-08-24T14:46:00.000","Title":"Access Time to completion from Dask Task Stream","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am starting to learn Python. When I want to erase all the results in the shell, I need to close the shell. Can I do this with some command or keyboard shortcut?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":131,"Q_Id":63569746,"Users Score":0,"Answer":"You can use os.system('clear') to talk to the operating system, and ask the system to run the clear command. Of course you have to add import os at the beginning.","Q_Score":0,"Tags":"python,python-3.x,developer-tools","A_Id":63569832,"CreationDate":"2020-08-24T22:58:00.000","Title":"How to clean Python's shell?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"If you accidentally install a dependency in poetry as a main dependency (i.e. poetry add ...), is there a way to quickly transfer it to dev dependencies (i.e. 
poetry add --dev ...), or do you have to uninstall it and reinstall with poetry add --dev?","AnswerCount":2,"Available Count":2,"Score":0.3799489623,"is_accepted":false,"ViewCount":2774,"Q_Id":63574469,"Users Score":4,"Answer":"You can also poetry add -D and poetry remove in either order. Just be sure to use the same version constraint. Poetry stops\/warns you if you use different constraints as they'd conflict.","Q_Score":19,"Tags":"python-3.x,python-poetry","A_Id":64814045,"CreationDate":"2020-08-25T08:00:00.000","Title":"Transfer dependency to --dev in poetry","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"If you accidentally install a dependency in poetry as a main dependency (i.e. poetry add ...), is there a way to quickly transfer it to dev dependencies (i.e. poetry add --dev ...), or do you have to uninstall it and reinstall with poetry add --dev?","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":2774,"Q_Id":63574469,"Users Score":24,"Answer":"You can move the corresponding line in the pyproject.toml from the [tool.poetry.dependencies] section to [tool.poetry.dev-dependencies] by hand and run poetry lock --no-update afterwards.","Q_Score":19,"Tags":"python-3.x,python-poetry","A_Id":63578471,"CreationDate":"2020-08-25T08:00:00.000","Title":"Transfer dependency to --dev in poetry","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"Fairly new 'programmer' here, trying to understand how Python interacts with Windows when multiple unrelated scripts are run simultaneously, for example from Task Manager or just starting them manually from IDLE. 
The scripts just make HTTP calls and write files to disk, and the environment is Python 3.6.\nIs the interpreter able to draw resources from the OS (processor\/memory\/disk) independently such that the time to complete each script is more or less the same as it would be if it were the only script running (assuming the scripts cumulatively get nowhere near using up all the CPU or memory)? If so, what are the limitations (number of scripts, etc.)?\nPardon mistakes in terminology. Note the quotes on 'programmer'.","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":571,"Q_Id":63583023,"Users Score":2,"Answer":"how Python interacts with Windows\nPython is an executable, a program. When a program is executed, a new process is created.\npython myscript.py starts a new python.exe process where the first argument is your script.\nwhen multiple unrelated scripts are run simultaneously\nThey are multiple processes.\nIs the interpreter able to draw resources from the OS (processor\/memory\/disk) independently?\nYes. Each process may access the OS API however it wishes, to the extent that it is possible.\nWhat are the limitations?\nMost likely RAM. The same limitations as any other process might encounter.","Q_Score":3,"Tags":"python","A_Id":63583166,"CreationDate":"2020-08-25T16:14:00.000","Title":"Performance running multiple Python scripts simultaneously","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am trying to upgrade to the most recent version of Python, which is currently 3.8.5.\nWhen I type python3 --version, I get:\nPython 3.7.0\nSo it appears my Python is not up-to-date. 
I then type brew upgrade python and get a warning:\nWarning: python 3.8.5 already installed\nSo, again, I type python3 --version, and again I get:\nPython 3.7.0\nWhy is python3 --version not returning Python 3.8.5?\n(PS - if I type python --version I get Python 2.7.11 as expected for my Mac)","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":1728,"Q_Id":63586773,"Users Score":1,"Answer":"You may have multiple python binaries on your system but the system path is finding 3.7.0. The command which python3 will show the path of your current python binary.\nIt might be worthwhile taking a look at the system path with echo $PATH to see where your system is looking for Python.\nUpdating your system environment using export as below should enable the system to find python3.8.5:\nPATH=\"\/path\/to\/python3.8.5\/bin:${PATH}\"\nexport PATH","Q_Score":0,"Tags":"python,python-3.x,macos,terminal,homebrew","A_Id":63587137,"CreationDate":"2020-08-25T20:41:00.000","Title":"python3 --version command not returning correct version","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"Sorry, I'm a noob in iptables.\nI have a VPN app which binds on local port 1080, while it goes to destination port 1194 (openvpn). The app does not support privileged port binding (which needs root, which I have). I want the app to bind on local port 25. I have browsed Google and the answer seems to be iptables. 
I have seen many posts, many of which say the SNAT target is the one I should use.\nI have tried this code:\niptables -I POSTROUTING -o wlan0 -t nat -p tcp --destination 195.123.216.159 -m tcp --dport 1194 -j SNAT --to-source 192.168.43.239:25\nAnd these:\niptables -I FORWARD -p tcp -d 192.168.43.239 -m tcp --dport 25 -j ACCEPT\niptables -I FORWARD -p tcp -s 192.168.43.239 -m tcp --sport 25 -j ACCEPT\niptables -I OUTPUT -o wlan0 -p tcp -m tcp --sport 25 -j ACCEPT\niptables -I INPUT -i wlan0 -p tcp -m tcp --dport 25 -j ACCEPT\nWhat I want is for the output to be something like this when I run the netstat command:\n\ntcp 0 0 192.168.43.239:25 195.123.216.159:1194 ESTABLISHED\n\nBut instead, after running all the commands, the output of netstat becomes this:\n\ntcp 0 0 192.168.43.239:1080 195.123.216.159:5000 ESTABLISHED\n\nIs it impossible to change the binding port using iptables? Please help me to understand the concepts of networking.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":109,"Q_Id":63599735,"Users Score":0,"Answer":"Turns out iptables was just doing its job correctly. Translated packets turn out not to be tracked by netstat. I was lost and completely didn't understand that iptables doesn't alter IPv6 traffic, which the app was using. And the forward rules were not necessary since the chain policy was to accept the packets.","Q_Score":0,"Tags":"sockets,tcp,iptables,python-iptables","A_Id":68606109,"CreationDate":"2020-08-26T14:22:00.000","Title":"Is there a way to change local port bound using iptables?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I was using virtualenv in the terminal when my terminal crashed; all my libraries had been working fine. All my packages\/libraries are in the \/Library\/Python\/2.7\/site-packages directory. 
I am using Python 3.8. I have tried export PYTHONPATH=\/Library\/Python\/2.7\/site-packages but this also does not work when I run my programs. I also get this error on certain programs: ImportError: Missing required dependencies ['numpy'], although NumPy is there in the path I mentioned earlier. After the terminal crashed I can no longer create virtualenvs either; something is very wrong.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":97,"Q_Id":63616389,"Users Score":0,"Answer":"Use python -m pip install [package] to install the packages.","Q_Score":0,"Tags":"python,macos,terminal,package,libraries","A_Id":65349834,"CreationDate":"2020-08-27T12:51:00.000","Title":"Python Libraries not working in Terminal on Mac","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"Hello, I need to install Anbox to run some Android apps on my Parrot machine. I tried to install Anbox following the documentation they provide. To install the kernel modules I need to add a PPA repo, but it gives an exception.\nsudo add-apt-repository ppa:morphis\/anbox-support\ngives exception\naptsources.distro.NoDistroTemplateException: Error: could not find a distribution template for Parrot\/n\/a\nCan you help me install Anbox on my Parrot machine?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1122,"Q_Id":63617916,"Users Score":0,"Answer":"Parrot OS does not support the add-apt-repository command.\nTry adding the repository manually to the sources file instead.","Q_Score":0,"Tags":"python,android,linux,parrot-os","A_Id":64254397,"CreationDate":"2020-08-27T14:11:00.000","Title":"How to install Anbox in Parrot OS?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System 
Administration and DevOps":1,"Web Development":0},{"Question":"I would like to install pip for the default installation of Python on Mac OS.\nPlease don't recommend brew, I already have it and installed Python 3 with it, but it seems that Automator only knows how to use the default version of Python located in \/usr\/bin\/python That's the reason behind my specific request\nI did my homework first, or tried to, before asking the question, but what I found confusing is that the recommended method seems to be using get-pip.py, but the pip documentation says\n\nWarning Be cautious if you are using a Python install that is managed\nby your operating system or another package manager. get-pip.py does\nnot coordinate with those tools, and may leave your system in an\ninconsistent state.\n\nThis threw me off, as I don't want to risk breaking the default Python on Mac OS, as I understood that might mess my system.\nI also didn't want to use the deprecated easy_install.\nAnd I couldn't find an answer to my question, as usually the answers just recommend installing a different version of Python with brew.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":175,"Q_Id":63625520,"Users Score":0,"Answer":"Problem\nSeems like Automator isn\u2019t loading \/usr\/local\/Cellar\/bin into your PATH. You can echo $PATH in Automator to confirm this.\nSolution\nReinstall using brew and ensure that you run brew link python.\nYou can export PATH=... 
before running your script, or move \/usr\/bin\/python to \/usr\/bin\/pythonx.x (where x.x is the default version installed), then symlink \/usr\/bin\/python to your brew-installed python in \/usr\/local\/bin\/.","Q_Score":0,"Tags":"python,macos,pip","A_Id":63751388,"CreationDate":"2020-08-27T23:40:00.000","Title":"Installing pip with default python on Mac OS 10.14 Mojave","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am building a custom Linux image in Yocto Zeus (previously I used Yocto Thud). I have moved all the required code to Python3 and hence don't require Python2 anymore. Is there a way we can remove python2 and its modules completely from the image?","AnswerCount":2,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":796,"Q_Id":63628282,"Users Score":2,"Answer":"Start from an image inheriting core-image-minimal and add packages manually. Only the packages that are specified to be installed explicitly in your image recipe and the packages specified in RDEPENDS and RRECOMMENDS of those packages will be installed in the image.\nSome packages are also pulled in because of configuration files (machine, distro or local.conf).\nIf there are RRECOMMENDS you don't want, you can use BAD_RECOMMENDATIONS in your image recipe to ask the image to not pull them in.\nIf it's an RDEPENDS that you don't want, maybe it's pulled in because of a selected PACKAGECONFIG that you don't need; in that case, create a bbappend for that recipe and set PACKAGECONFIG accordingly.\nIf that still does not do it, you'll have to dig deeper into what can be removed from RDEPENDS and for which reason (is it a mistake? is it safe in one specific configuration in which the RDEPENDS is not needed?).\nThe way to know which package is pulling which package is to use -g as an argument to bitbake. 
Do not attempt to create a scheme\/drawing\/image from the dot files; they are too big for dot to handle properly (it takes hours and the result is unusable). \"recipeA:do_foo\" => \"recipeB:do_bar\" means that the do_foo task from recipeA depends on do_bar from recipeB.\nPACKAGE_EXCLUDE in one of the configuration files (local.conf or distro.conf) should also make it easier to identify which recipe needs the one recipe you don't want.","Q_Score":2,"Tags":"python-3.7,embedded-linux,yocto,bitbake,openembedded","A_Id":63631141,"CreationDate":"2020-08-28T06:01:00.000","Title":"Remove Python2 and related components completely in Yocto","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have made a timer; when the time is up, it shows a notification using the plyer module. But when I turn it into an executable with PyInstaller using the command pyinstaller timer.pyw and run it, the notification does not show and the window just crashes. I think it's because PyInstaller does not support plyer. I've tried using the --hidden-import option but it still does not work. 
How do I solve this problem?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":137,"Q_Id":63636533,"Users Score":1,"Answer":"I just went to Python's site-packages folder (for me it was c:\\python\\Lib\\site-packages\\) and copied the plyer folder to the PyInstaller directory.","Q_Score":0,"Tags":"python,pyinstaller","A_Id":64448486,"CreationDate":"2020-08-28T15:13:00.000","Title":"problem with plyer when using PyInstaller","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"After upgrading my system interpreter to python@3.8 with brew, everything got out of order. I think brew had pip symlinked to pip3, which enabled the direct use of pip install instead of pip3; after the upgrade, this symlink stopped existing. Whenever I call pip, I get bad interpreter: No such file or directory, and I wasn't able to install any new packages unless I referred to pip3.8 by its full path, which is in \/usr\/local\/Cellar\/python@3.8. So I symlinked pip to pip3.8, which is not the best practice ever, and most packages that depend on python3.7 are currently crashing, e.g. jupyter notebook: when I start jupyter notebook from the terminal, it opens Jupyter in the browser, but as soon as I open a notebook, I get a kernel error because for some reason it depends on python3.7, which is not there. Even when I tried python3.8 -m jupyter notebook I got the same error. Needless to mention, most of the packages that are launched from the terminal, e.g. scrapy, are looking for the python3.7 interpreter. So I installed python@3.7 using brew, which is deliberately not installed in \/usr\/local\/bin or \/usr\/bin to avoid clashing with the system.\nSo ... 
you get the idea: I have a mess, and as I do not understand the dynamics of how the Python installation lives within the macOS system, I was thinking the best way is to wipe all of python3.x, including packages, pip and various components, break the dependencies and let them depend on the native python2.7 maybe, or, I don't know, I'm sure there is some clean way to do it, and then do a clean install of python3.8. Any suggestions on how to do this without creating more mess, so that I can clean everything up and do that clean python3.8 install at the end?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":358,"Q_Id":63642939,"Users Score":1,"Answer":"You might try using which python or which python3.8 to find where they are and remove them if that's what you want to do.\nYou can also try alias python=python3.8 or whatever version you end up needing. To make it so you don't have to type that every time you open a new terminal, you can edit ~\/.bashrc and add it in there; that way it's the default.","Q_Score":0,"Tags":"python,macos,homebrew,upgrade,python-3.8","A_Id":63644232,"CreationDate":"2020-08-29T02:46:00.000","Title":"How to wipe the entire python3.x with all installed pip packages and pip on macOS","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have installed git, GitHub Desktop, Python 3.7.0 and 3.8.1, and have py.exe in my c:\\windows folder. Nevertheless, when I try to type a command in PowerShell in GitHub Desktop for a repo that I have cloned, it says that \"pip\" is not recognized. Any idea how to fix this?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":102,"Q_Id":63646827,"Users Score":0,"Answer":"Executables are searched for on your PATH environment variable, which probably doesn't include c:\\windows. 
You need to edit this environment variable and add c:\\windows to it.","Q_Score":1,"Tags":"python,git,github,github-for-windows,github-desktop","A_Id":63646920,"CreationDate":"2020-08-29T12:13:00.000","Title":"\"pip\" is not recognized in GITHUB desktop","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am on a macOS system and I am not able to install PyAudio on a Mac, even using the portaudio and Homebrew technique. What should I do? Please help.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":37,"Q_Id":63655597,"Users Score":0,"Answer":"PyAudio is a Python package, so you need to install it with pip. Try running pip install pyaudio or pip3 install pyaudio depending on which version of Python you are using.","Q_Score":0,"Tags":"python,macos,pyaudio","A_Id":63655709,"CreationDate":"2020-08-30T08:43:00.000","Title":"Unable To Install PyAudio On A Mac Even With HomeBrew Installed","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm trying to install the package 'backtesting' in my Anaconda prompt by typing 'pip install backtesting' but I am getting an EnvironmentError:\nERROR: Could not install packages due to an EnvironmentError: [WinError 5] Access is denied: 'c:\\users\\james\\appdata\\local\\continuum\\anaconda3\\lib\\site-packages\\~il\\_imaging.cp36-win_amd64.pyd'\nConsider using the --user option or check the permissions.\nHow do I 'Consider using the --user option or check the permissions.'? 
Many thanks in advance.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":313,"Q_Id":63669050,"Users Score":0,"Answer":"The above answer is the correct '--user' usage.\nI find that when I cannot install with the conda or pip commands, I simply add the module to the environment variables.\nYou can do this by accessing the search field on your taskbar and typing 'Environment variable' (minus the single quotes), which will produce a window allowing you to edit environment variables.\nOr you can go directly to 'System Properties' and select the 'Advanced' tab. On the bottom right is a button for environment variables. Clicking on environment variables takes you to a new window.\nSelecting 'New' from the 'System Variables' field, you want to add the full path of your project solution, and once saved, your program can 'see' the module. The clincher is that you must restart your computer after adding the variable; once the computer is back up, the edited environment variable becomes effective. You will be able to use the import statement like you always do at the start of the program without the error.","Q_Score":0,"Tags":"python-3.x,installation,package","A_Id":64549457,"CreationDate":"2020-08-31T10:27:00.000","Title":"Trouble installing a new package using Anaconda Prompt","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have several registered tasks in my app and it's working as expected, but I moved my async tasks to another path recently, and I realized that the old tasks that were registered by the old app were calling the same old path and raising errors because the tasks were not there any more. 
Now I'm looking for a solution to register tasks independently of their location, or a way to update the task paths.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":70,"Q_Id":63669382,"Users Score":0,"Answer":"I finally found a workaround. I left the tasks in the old location, made a copy of them in the new location, and imported the tasks from the new location under the old tasks. This way Celery creates the new tasks from the new location, while it can still find tasks in their old location and drain the old tasks. The old tasks can be removed after they are all done.","Q_Score":1,"Tags":"python,python-3.x,celery","A_Id":63811142,"CreationDate":"2020-08-31T10:49:00.000","Title":"Celery dependency to tasks location","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm trying to get my Arduino to talk to PyCharm via serial port but it keeps throwing up the following error:\n[Errno 20] could not open port \/dev\/tty\/ACM0: [Errno 20] Not a directory: '\/dev\/tty\/ACM0'\nThis is definitely the right port for the Arduino Uno, as confirmed by the Arduino IDE. In fact, the script works perfectly without issue using 'COM4' or similar on Windows.
Unfortunately I need to move it over to Linux and it doesn't seem to be as simple as substituting 'COM4' for '\/dev\/tty\/ACM0'.\nI've run python -m serial.tools.list_ports to check that ports are found and it returns 2 ports: \/dev\/tty\/ACM0 and \/dev\/ttyS0, which is a good sign.\nI've scoured the internet but can't seem to find any threads where someone has had this specific error code.\nI'm not sure what it means by 'Not a directory' and what the workaround would be for this.\nAny help would be greatly appreciated.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":752,"Q_Id":63674089,"Users Score":0,"Answer":"For archive purposes I'll post the solution to my own question: I realised that I mistyped \/dev\/tty\/ACM0 instead of \/dev\/ttyACM0 in the port name in the following line:\narduino = serial.Serial('portname', 115200, timeout=.1)","Q_Score":0,"Tags":"python,arduino,pycharm,pyserial","A_Id":63674127,"CreationDate":"2020-08-31T15:52:00.000","Title":"Serial port with arduino on PyCharm pyserial error - \/dev\/tty\/ACM0 not a directory","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I know I should port to PY3, but let's say I am unable to.\nHow much longer will apps written in PY2.7 remain working on GAE before support is completely removed and existing 2.7 apps won't work any more?","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":41,"Q_Id":63686358,"Users Score":1,"Answer":"There is no official statement from Google as of now as to when Python 2 will be deprecated; however, official support for Python 2 has been stopped since January of this year, so it is advised to move to Python 3 as soon as possible, although Google always gives notice well in advance.
So I would advise starting the migration when you can and waiting for a communication from Google for an exact date.","Q_Score":0,"Tags":"python-2.7,google-app-engine","A_Id":63690198,"CreationDate":"2020-09-01T11:00:00.000","Title":"When is GAE standard python 2.7 runtime going away?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I know I should port to PY3, but let's say I am unable to.\nHow much longer will apps written in PY2.7 remain working on GAE before support is completely removed and existing 2.7 apps won't work any more?","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":41,"Q_Id":63686358,"Users Score":1,"Answer":"I don't work for Google and only they can say, but from what I've seen, I expect that we have at least a few years.\nMany people are in your position, and Google likely doesn't want to screw us over, so to speak.","Q_Score":0,"Tags":"python-2.7,google-app-engine","A_Id":63686711,"CreationDate":"2020-09-01T11:00:00.000","Title":"When is GAE standard python 2.7 runtime going away?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm trying to delete older messages from Kafka but it didn't work as expected. I have also configured the Kafka\nretention.ms and log.cleanup properties. But it didn't delete older messages after 5 mins. Here is the configuration, and within 5 mins new messages were also published even though older records were present in the Kafka topic. Can you please help me figure out what I'm missing in this configuration?
Because it increases storage cost.\n--config retention.bytes=-1 --config cleanup.policy=delete --config retention.ms=300000","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":268,"Q_Id":63704272,"Users Score":0,"Answer":"First, it is important to understand that the LogCleaner will only delete data on old segments of the topic, as described in the configuration description of cleanup.policy:\n\n\"A string that is either \"delete\" or \"compact\" or both. This string designates the retention policy to use on old log segments.\"\n\nIt is likely that all your data is still in one segment, so you need to reduce the segment.bytes configuration for your topic such that you actually get \"old\" segments. This configuration defaults to 1GB and is described as:\n\n\"This configuration controls the segment file size for the log. Retention and cleaning is always done a file at a time so a larger segment size means fewer files but less granular control over retention.\"\n\nIf you don't want to wait until a segment is filled up, feel free to also reduce the configuration segment.ms from the default value of 7 days to something more fitting for your case. This config is described as:\n\n\"This configuration controls the period of time after which Kafka will force the log to roll even if the segment file isn't full to ensure that retention can delete or compact old data.\"\n\nIf you have short retention times like 5 minutes you may also want to reduce the broker-wide configuration log.cleaner.delete.retention.ms from its default of 1 day to a lower value.
This configuration is described as:\n\n\"How long are delete records retained?\"","Q_Score":1,"Tags":"python,amazon-web-services,apache-kafka","A_Id":63704566,"CreationDate":"2020-09-02T11:11:00.000","Title":"How to delete older message not recent messages from kafka","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"My company recently migrated a large project from TFS to GIT and while the migration is over we have about 40 repos that need the yaml file updated, we broke up the project into a bunch of repos. During the initial migration we had a boilerplate yaml we added in manually but it was very bare bones.\nThrough trial and error we came up with a pipeline that would work for each repo but I was wondering if there was a way to mass update the yaml files in these repos to save time, I'm also pretty sure there will be tweaks made to the pipelines and I don't really want to update each yaml manually every time.\nI looked into the Azure CLI but don't believe that has what I'm looking for. I'm relatively new to Azure DevOps and wanted to know if there was a tool or plugin that already does this and if not has anyone come across an example of a script that did something similar?\nThanks for any help you can give.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":202,"Q_Id":63711873,"Users Score":0,"Answer":"After doing a lot of googling and talking with an SRE from another team I'm going to solve this via templates.\nI'm going to create a central repository for scripts, yaml templates, etc. and then put a very basic pipeline in each repository that basically passes in parameters and that's it. 
That way, any global change I make is just a change to the template.\nNot 100% the way I wanted to solve it, but it will fix most of my issues.","Q_Score":0,"Tags":"python,git,azure,azure-devops,yaml","A_Id":63728660,"CreationDate":"2020-09-02T18:49:00.000","Title":"Is there a way to script a mass update of yaml files in Azure Repos","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a set of python exes created using pyinstaller. I want to package these exes in an .rpm. How do I do this? The reason I need to do this is to enable me to install the rpm on a Red Hat Linux server\nPS: I don't have an option to switch from RPMs","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":94,"Q_Id":63725536,"Users Score":0,"Answer":"Fortunately I found the answer after hours of trying: you can wrap all the exes in a tar in the %install stage of the rpm spec and pass the name of the tar file in %file. If you have a single exe, you do not need to create a tar","Q_Score":0,"Tags":"python,redhat,rpm,packaging","A_Id":63757731,"CreationDate":"2020-09-03T14:17:00.000","Title":"How to create an rpm from a python exe?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"Here is the idea\/workflow:\nI have a bunch of network devices that I need to communicate with.\na. client (flask\/python) sends request: \"mac-table sw1\" to Celery.\nb.
Celery looks at the available workers\/tasks, and if no worker responsible for sw1 exists, it will create one and pass the job to it.\nAll subsequent requests for sw1 will be forwarded to the existing worker automatically (thus eliminating establishing a session for every request and limiting concurrent sessions to the device).\nc. if a worker is idle for some time, it closes the connection to the device and exits.\nQuestions: is Celery good for this workflow? Do you know any examples of similar workflows I can get ideas from?\nThank you!","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":494,"Q_Id":63731582,"Users Score":0,"Answer":"If I understood well, what you want to achieve can be accomplished by dynamically creating a queue for a particular job, picking one of the available workers, and subscribing it to that queue. It also requires some cleanup task that would remove unused queues (it can be a task that runs every N minutes that inspects queues, checks whether there is anything running there, and if not, unsubscribes the worker from that queue).","Q_Score":1,"Tags":"python,celery","A_Id":63741578,"CreationDate":"2020-09-03T21:00:00.000","Title":"use celery for dynamic tasks","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I started studying the socket library in Python for the first time, and I realized that there is a constant called AF_UNIX that does the same thing as AF_INET but is faster, establishes connections only within the same machine, and uses the filesystem instead of an internet connection,\nbut what I didn't understand is why? Why would I establish communication with my own machine?
What use would that be, and in what situation would it really be useful to use AF_UNIX?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":315,"Q_Id":63733211,"Users Score":3,"Answer":"Your question has many answers, I'll try to cover a few use cases for AF_UNIX:\n\nYou could use it to expose a private channel for other processes to communicate securely using files (since it'll respect the filesystem hierarchy and *nix permission model). PostgreSQL is known to expose a unix socket for its psql CLI to connect securely.\n\nYou could implement exchangeable communications using a socket protocol, as it's easy to move between AF_INET and AF_UNIX and expand its communication throughout different hosts (redis and docker do this).\n\nYou could create an extendable and fast way for different processes and technologies to communicate without the device overhead AF_INET would include, even if you're using the loopback interface.\n\n\nThe possibilities are actually limitless, as it includes a bit of personal preference.
But the fact that AF_UNIX uses a similar specification as AF_INET makes it a powerful tool to extend communications between processes without having to rely on heavily different tools (FIFOs, shared mem, internal queues, files, to name a few).","Q_Score":3,"Tags":"python,sockets,python-3.7,local","A_Id":63733290,"CreationDate":"2020-09-04T00:09:00.000","Title":"When and why to use AF_UNIX, why work with the same machine","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am getting the following error in the VS Code output terminal:\n'C:\\Users\\S' is not recognized as an internal or external command,\noperable program or batch file.\nAny help in resolving this would be appreciated...","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":211,"Q_Id":63739354,"Users Score":1,"Answer":"The user \"S\" doesn't exist on your computer. Please change the username settings in VSCode\nReg","Q_Score":0,"Tags":"python,vscode-settings","A_Id":63739380,"CreationDate":"2020-09-04T10:26:00.000","Title":"I am getting an error in output terminal of VS Code","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am on a windows machine, and I am attempting to set up plaidml after installing using the \"plaidml-setup\" command, but it does not work. I keep getting the \"'plaidml-setup' is not recognized as an internal or external command, operable program or batch file.\" error. 
I have already installed plaidml using the \"pip install -U plaidml-keras\" command, and have tried uninstalling and reinstalling it.\nI've searched online to solve the issue but cannot find why the command will not work and allow me to choose my AMD gpu. The only information I was able to find was people repeating the \"pip install -U plaidml-keras\" and \"plaidml-setup\" commands.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":271,"Q_Id":63748356,"Users Score":0,"Answer":"The console doesn't recognize the command, because you are trying to call a program which does not exist in the location you are in. You are either in the wrong location (go to the correct one by typing cd [path]) or call the program with the correct name.","Q_Score":2,"Tags":"python,plaidml","A_Id":65267175,"CreationDate":"2020-09-04T21:44:00.000","Title":"Plaidml wont setup: 'plaidml-setup' is not recognized as an internal or external command","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I made a Python program that I do NOT want to turn into a .exe or anything else.\nHowever, since the program needs python.exe to run, it runs python.exe openly.\nI want to know if I can add a piece of code to the program I made so that it runs in the background.\nBTW, I do not want to run it through cmd. I want it to be a background process automatically.\nBasically, I want to add some code to my program so that as soon as it is clicked, it runs in the background.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":289,"Q_Id":63756319,"Users Score":0,"Answer":"I found out the way to do this easily: I just renamed my file.py into file.pyw\nNow, it runs without a console.
Basically I typed this in the cmd:\nren file.py file.pyw","Q_Score":0,"Tags":"python,windows,background-process","A_Id":63770595,"CreationDate":"2020-09-05T16:48:00.000","Title":"How to make a python file run in the background when minimized?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm getting a strange error \"Process finished with exit code -1073741819 (0xC0000005)\" running python 3.7 on a windows machine. The process just crashes. The error comes at random times. It seems to appear inside a thread.\nIs there some way to get more information about where exactly the error comes from? Right now my only solution is to add logging, but that is really time consuming.\nThanks a lot for any hint.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":3196,"Q_Id":63762702,"Users Score":0,"Answer":"I had the same issue not long ago and I solved it with the following solution:\nreinstall python \u2013 you don't have python33.dll in c:\\WINDOWS\\system32\\\nMaybe you have multiple python versions \u2013 look at the folders in the root of c:\nIf yes, then point to your version of python.exe in PyCharm > Settings > Project Interpreter","Q_Score":2,"Tags":"python,crash,python-3.7","A_Id":65817279,"CreationDate":"2020-09-06T09:36:00.000","Title":"Python: Process finished with exit code -1073741819 (0xC0000005).
How to Debug?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm defining an update procedure on the air-gapped standalone RedHat7 server with the preinstalled python3 and\nbasic packages.\nThe python3 applications are developed on a host with an Internet access, and are delivered to this standalone\nserver using DOK.\nAdditional packages can't be installed using pip, but must be trasfered from development host to the local user's home directory.\nI'm looking for a proper way to update and activate python3 applications on this standalone server, without appending a new package path using sys.path.append(\"\/home\/user\/packages\/pack_N\") before importing it.\nThanks\nZeev","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":21,"Q_Id":63768729,"Users Score":0,"Answer":"I'm assuming you meant without using \"PyPi\".\nDownloading the packages can be done with pip wheel after installing the wheel package.\nInstalling them locally without using pip's servers (PyPi) can be done like so:\npip install --no-index --find-links=","Q_Score":0,"Tags":"python-3.x","A_Id":63768807,"CreationDate":"2020-09-06T20:42:00.000","Title":"Using user's dir to store copied python3 packages","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"When i type 'python \u2212\u2212version' on CMD; it does not show me the version of python i am using instead it displays:python: can't open file '\u0393\u00ea\u00c6\u0393\u00ea\u00c6version': [Errno 2] No such file or directory","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":59,"Q_Id":63769573,"Users Score":0,"Answer":"There are 
multiple ways to find the python version which you can try:\n\nIn the CMD: python -V\nIn a python document you can import sys and print out sys.version\nIn a python document you can import platform and print out platform.python_version()\n\nHope that helps. If you have an issue with your python installation, you could try reinstalling it from scratch as well. If there is no issue with your python installation then check if it is in your path and accessible from your command line, if not then either add it to your path or open your command line into the folder with the python.exe file and run your commands directly in there.","Q_Score":0,"Tags":"python,django","A_Id":63769612,"CreationDate":"2020-09-06T22:55:00.000","Title":"I can't see the version of python in cmd","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am the beginner to tornado(python based web server). I have to create an application which will have public chat rooms and private messaging between two users.so, I have been looking for a good tutorial about tornado to implement the same but what i found is we can just create the websockets and once we have connected to socket we can send message to server and we can open multiple tabs of browser to replicate multiple users. So all users can send messages to server and every other user and can see all those messages but i need to create private message chat between two users like whatsapp. So can i do the same with tornado ? Please help me out. Any help would be appreciable.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":96,"Q_Id":63783183,"Users Score":1,"Answer":"If you can form sockets, from client to the server then yes!\nSockets are just data streams. 
You will have to add chat room request data and authentication to the sockets so the server can direct each client to the appropriate chat 'room' (or drop the connection if authentication fails).\nafter that it's the same as what you have implemented already.\nFor secure chat, you'll need some form of encryption on top of all this - at least so that clients know they are talking to the correct server. From there it's adding encryption for clients to know they are talking to the right clients.\nThe final step would be to implement peer to peer capabilities after authenticating at the server.","Q_Score":0,"Tags":"python,websocket,tornado,django-channels,sockjs-tornado","A_Id":63783494,"CreationDate":"2020-09-07T19:22:00.000","Title":"Private messaging in tornado","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"My goal is to install a package to a specific directory on my machine so I can package it up to be used with AWS Lambda.\nHere is what I have tried:\npip install snowflake-connector-python -t .\npip install --system --target=C:\\Users\\path2folder --install-option=--install-scripts=C:\\Users\\path2folder --upgrade snowflake-connector-python\nBoth of these options have returned the following error message:\nERROR: Can not combine '--user' and '--target'\nIn order for the AWS Lambda function to work, I need to have my dependencies installed in a specific directory to create a .zip file for deployment. I have searched through Google and StackOverflow, but have not seen a thread that has answered this issue.\nUpdate: This does not seem to be a problem on Mac. 
The issue described is on Windows 10.","AnswerCount":3,"Available Count":3,"Score":1.0,"is_accepted":false,"ViewCount":11156,"Q_Id":63783587,"Users Score":8,"Answer":"We had the same issue just in a Python course: The error comes up if Python is installed as an app from the Microsoft app store. In our case it was resolved after re-installing Python by downloading and using the installation package directly from the Python website.","Q_Score":11,"Tags":"python,pip,command-line-interface","A_Id":63812237,"CreationDate":"2020-09-07T20:03:00.000","Title":"PIP Install: Cannot combine --user and --target","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":1},{"Question":"My goal is to install a package to a specific directory on my machine so I can package it up to be used with AWS Lambda.\nHere is what I have tried:\npip install snowflake-connector-python -t .\npip install --system --target=C:\\Users\\path2folder --install-option=--install-scripts=C:\\Users\\path2folder --upgrade snowflake-connector-python\nBoth of these options have returned the following error message:\nERROR: Can not combine '--user' and '--target'\nIn order for the AWS Lambda function to work, I need to have my dependencies installed in a specific directory to create a .zip file for deployment. I have searched through Google and StackOverflow, but have not seen a thread that has answered this issue.\nUpdate: This does not seem to be a problem on Mac. The issue described is on Windows 10.","AnswerCount":3,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":11156,"Q_Id":63783587,"Users Score":0,"Answer":"I got a similar error recently. 
Adding my solution so that it might help someone facing the error due to the same reason.\nI was facing an issue where all my pip installed packages were going to an older python brew installation folder.\nAs part of debugging, I was trying to install awscli-local package to user site-package using:\npip install --user awscli-local\nThen I got:\nERROR: cannot combine --user and --target\nIn my case, it was due to the changes in pip config I had set some time back for some other reason.\nI had set the 'target' config globally - removing which removed this error and my actual issue I was debugging for.\n\nCheck the following if solutions given above doesn't resolve your issue:\ntry the command:\npip config edit --editor \nFor me:\npip config edit --editor sublime\nThis will open the current config file where you can check if there's any conflicting configuration like the 'target' set in my case.","Q_Score":11,"Tags":"python,pip,command-line-interface","A_Id":69517869,"CreationDate":"2020-09-07T20:03:00.000","Title":"PIP Install: Cannot combine --user and --target","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":1},{"Question":"My goal is to install a package to a specific directory on my machine so I can package it up to be used with AWS Lambda.\nHere is what I have tried:\npip install snowflake-connector-python -t .\npip install --system --target=C:\\Users\\path2folder --install-option=--install-scripts=C:\\Users\\path2folder --upgrade snowflake-connector-python\nBoth of these options have returned the following error message:\nERROR: Can not combine '--user' and '--target'\nIn order for the AWS Lambda function to work, I need to have my dependencies installed in a specific directory to create a .zip file for deployment. 
I have searched through Google and StackOverflow, but have not seen a thread that has answered this issue.\nUpdate: This does not seem to be a problem on Mac. The issue described is on Windows 10.","AnswerCount":3,"Available Count":3,"Score":1.0,"is_accepted":false,"ViewCount":11156,"Q_Id":63783587,"Users Score":18,"Answer":"We encountered the same issue when running pip install --target .\/py_pkg -r requirements.txt --upgrade with Microsoft store version of Python 3.9.\nAdding --no-user to the end of it seems solves the issue. Maybe you can try that in your command and let us know if this solution works?\npip install --target .\/py_pkg -r requirements.txt --upgrade --no-user","Q_Score":11,"Tags":"python,pip,command-line-interface","A_Id":67259534,"CreationDate":"2020-09-07T20:03:00.000","Title":"PIP Install: Cannot combine --user and --target","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":1},{"Question":"I have a topic from which I need to consume and process data and I am using kafka-python package to do so.\nI am facing a lot of issues related to rebalancing, slow data consumption across a few partitions, and want to confirm if it because of any compatibility issues.\nSo, how can I check what is the underlying Kafka version used in kafka-python so that I can eliminate the possibility\nThanks in advance","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":88,"Q_Id":63791469,"Users Score":0,"Answer":"kafka-python is a native Python library, so there is no \"Kafka version\" such as the native Java library it depends on.\nThe native binary Kafka TCP protocol has no discernible version\nIf you were to use confluent kafka Python library, then that would depend on a version of librdkafka C 
bindings.","Q_Score":0,"Tags":"python,apache-kafka,kafka-python","A_Id":63797142,"CreationDate":"2020-09-08T09:56:00.000","Title":"Underlying kafka version in kafka-python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"In my application I have 4 camera modules (one mic in each) (same vendorid and productid) connected to an Ubuntu linux system.\nI want to connect to all 4 mics and identify which channel corresponds to the specific camera module connected on a particular USB physical path (e.g. 2-1.3 -> USB bus 2 - Port 1 - Port 3).\nHow can I (in python code) get the input_device_index for a particular USB device based on its route?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":52,"Q_Id":63792877,"Users Score":0,"Answer":"At least four years ago there was no way to do that, not on Linux nor Windows. I could only imagine somehow kicking out the audio driver and reading sound directly from the USB endpoints, but this is a crazy solution...","Q_Score":1,"Tags":"python,usb,pyaudio","A_Id":63846641,"CreationDate":"2020-09-08T11:24:00.000","Title":"How can I open specific device in pyaudio based on USB physical port?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I was using the Django python framework for my web app. When I build a strategy in the UI it gets converted into a .py script file, and every 5 mins (variable based on my candle interval) the file gets executed.
I was using celery beat to invoke the file execution, and the execution happens on the same machine using celery.\nHere the problem is actually with scalability: if I have more strategies, my CPU and memory usage goes above 90%. How do I design the server architecture so that it can scale? Thank you.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":123,"Q_Id":63793935,"Users Score":1,"Answer":"When one Celery worker is no longer enough, you create more. This is quite easy if you are on a cloud platform where you can easily create more virtual machines.\nIf you can't create more, then you have to live with the current situation and try to spread the execution of your strategies across a longer period of time (throughout the day I suppose).","Q_Score":0,"Tags":"python,server,architecture,celery,algorithmic-trading","A_Id":63794819,"CreationDate":"2020-09-08T12:26:00.000","Title":"How to design architecture for scalable algo trading application?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I tried this way but it still does not work.\nControl Panel >> System and Security >> System >> Advanced System Settings >> Advanced >> Environment variables.\nAdd MongoDB's bin folder path to the path variable in Environment variables\nError message:\n'mongorestore' is not recognized as an internal or external command, operable program or batch file.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":3178,"Q_Id":63794153,"Users Score":0,"Answer":"Make sure you're using the msi version, not the rpm or tgz, if you're using Windows. Also, add the directory of the files you want to restore to your path.
The mongorestore command was not being recognized there until I added it, it was only being recognized in the directory I had it installed.","Q_Score":1,"Tags":"python,mongodb","A_Id":67964062,"CreationDate":"2020-09-08T12:40:00.000","Title":"'mongorestore' is not recognized as an internal or external command, operable program or batch file","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I just updated my python3 using homebrew and my python3 was messed up badly. I followed the instructions on other threads and was able to cleanly install python3 but some linking still exists that I am unable to figure out.\nProblem:\npython3 -version\ndyld: Library not loaded: \/usr\/local\/Cellar\/python\/3.6.5_1\/Frameworks\/Python.framework\/Versions\/3.6\/Python\nReferenced from: \/Users\/abcd\/.ansible\/py3\/bin\/python3\nReason: image not found\nzsh: abort python3 -version\nPython paths:\nabcd@abcd-ltm Cellar % which python\n\/Users\/abcd\/.ansible\/py3\/bin\/python\nabcd@abcd-ltm Cellar % which python3\n\/Users\/abcd\/.ansible\/py3\/bin\/python3\necho $PATH:\n\/Users\/abcd\/.ansible:\/Users\/abcd\/.ansible\/py3\/bin:\/Users\/abcd\/.ansible\/bin:\/Users\/abcd\/bin:\/usr\/local\/bin:\/usr\/bin:\/bin:\/usr\/sbin:\/sbin:\/Users\/abcd\/.ansible:\/Library\/Apple\/usr\/bin\nI dont know from where it is still refering and getting this error at any python3 command\ndyld: Library not loaded: \/usr\/local\/Cellar\/python\/3.6.5_1\/Frameworks\/Python.framework\/Versions\/3.6\/Python\nI can confirm there is no folder named python inside \/usr\/local\/Cellar\/. There is one newly created python@3.8 though which should be correct. 
Any pointers how I can find where is it picking up the incorrect python path and how I can fix it?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":159,"Q_Id":63817793,"Users Score":1,"Answer":"It appears you have an ansible installation that built a virtualenv associated with the Python that was upgraded in Homebrew. You should rebuild those virtualenvs or remove them from your PATH if you don't need them.","Q_Score":0,"Tags":"python,python-3.x,macos,homebrew","A_Id":63818009,"CreationDate":"2020-09-09T18:46:00.000","Title":"python3 issues after homebrew based upgrade","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I would like to make my python file open on the \"Windows Terminal\" program instead of the python application that is opened by default. Is there a way to do this?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":166,"Q_Id":63848471,"Users Score":0,"Answer":"use the windows start command ie:\nc:>start python","Q_Score":0,"Tags":"python,windows,terminal,windows-terminal","A_Id":63848533,"CreationDate":"2020-09-11T13:53:00.000","Title":"Make a python file run on the new \"windows terminal\" by default (Windows)","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am currently working on automating commands for a Docker container with a Python script on the host machine. 
This Python script, for now, builds and runs a docker-compose file, with the commands for the containers written into the docker-compose file and the Dockerfile itself.\nWhat I want to do is have the Python script action all commands to run within the container, so if I have different scripts I want to run, I am not changing the container. I have tried 2 ways.\nThe first was to run an os.system() command within the Python script; however, this works only as far as opening the shell for the container, the os.system() command does not execute code in the Docker container itself.\nThe second way uses CMD within the Dockerfile; however, this is limited and is hard coded to the container. If I have multiple scripts I have to change the Dockerfile, and I don't want this. What I want is to build a default container with all services running, then run Python scripts on the host to run a sequence of commands on the container.\nI am fairly new to Docker and think there must be something I am overlooking to run scripted commands on the container. One possible solution I have come across is nsenter. Is this a reliable solution and how does it work? Or is there a much simpler way? I have also used docker-volume to copy the python files into the container to be run on build; however, I still cannot find a way to automate accessing and running these python scripts from the host machine.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":4651,"Q_Id":63877747,"Users Score":1,"Answer":"If the scripts need to be copied into a running container, you can do this via the docker cp command, e.g. docker cp myscript.sh mycontainer:\/working\/dir.\nOnce the scripts are in the container, you can run them via a docker exec command, e.g. docker exec -it mycontainer \/working\/dir\/myscript.sh.\nNote, this isn't a common practice. Typically the script(s) you need would be built (not copied) into container image(s). 
Then, when you want to execute the script(s) within a container, you would run the container via a docker run command, e.g. docker run -it mycontainerimage \/working\/dir\/myscript.sh","Q_Score":0,"Tags":"python-3.x,docker,docker-compose,nsenter","A_Id":63879306,"CreationDate":"2020-09-14T03:34:00.000","Title":"Python Script to run commands in a Docker container","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I started learning python recently and I know my problem may not be something complicated. I issued the below command from my Windows cmd to install the pytest framework and its required dependencies\npy -3 -m pip install pytest\nand then issued:\npy -3 -m pip install pytest-pep8\nto install the pep8 plug-in and required dependencies. Both commands were done successfully.\nBut when I try to run pytest by the py.test --pep8 exp1.py command, I get the mentioned error.\nAny ideas?","AnswerCount":4,"Available Count":1,"Score":0.049958375,"is_accepted":false,"ViewCount":10909,"Q_Id":63878489,"Users Score":1,"Answer":"I am learning Python with the Head First Python book. I had been trying to use the pytest module but it was not working, and even after searching online everywhere I could not find the solution to the problem. Somehow, I managed to figure it out.\nSo here's the solution: the current version of pytest is 6.2.2. Unfortunately, if you use it with\npy.test --pep8\nit will not work, because this usage has been deprecated. The simple solution is to use this version:\npip install pytest==2.9.1\nWhen it's successfully installed and you try\npy.test --pep8\nit will work.","Q_Score":3,"Tags":"python,pytest,pep8","A_Id":67713338,"CreationDate":"2020-09-14T05:21:00.000","Title":"'py.test' is not recognized as an internal or external command, operable program or batch file","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I use subprocess.getstatus() to get the output of a bash command running inside a python script, in my case the top linux command:\noutput = subprocess.getoutput(\"top -bc -n 1 | grep some_process_name\")\nUnfortunately, the output string of the function is limited to 80 chars. If the string is longer, I just get the first 80 chars.\nIs there any alternate way to get long shell command outputs in full?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":48,"Q_Id":63888639,"Users Score":1,"Answer":"You could pipe the output to a text file, and then display the text file.","Q_Score":0,"Tags":"python,linux,subprocess","A_Id":63888729,"CreationDate":"2020-09-14T16:49:00.000","Title":"subprocess.getstatus() return limited-length string","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"For background, I work mostly in the terminal and edit code using vim. For long-term python projects I manage venvs and lock files in source control using poetry.\nThere are some modules I like to have installed in almost every venv I work in, such as ipython\/ptpython, (pytest-)icdiff, and other \"quality of life\" extensions that I need not foist on project collaborators who don't use my workflow. I can install ptpython in the global scope using my distro's package manager (or pipx), but then when I run it, it does not run inside the local venv and local dependencies are not accessible. This gets obnoxious since I'm periodically recreating venvs as the lock files change. 
Right now I have a shell script that installs the things, but that feels like a hack.\nAn ideal solution might be a way to create something like a venv template, similar to the git templatedir option. Is there anything of such for Python?","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":88,"Q_Id":63889768,"Users Score":0,"Answer":"Solution\nThis problem isn\u2019t specific to Python venv, it\u2019s present in rvm and nvm also. Just install the package under the global Python namespace and add it to your PYTHONPATH so that if the package isn\u2019t installed in the local repository Python falls back to your global Python namespace without modifying the repository lockfile.","Q_Score":0,"Tags":"python,python-3.x,python-venv,ptpython","A_Id":63890080,"CreationDate":"2020-09-14T18:14:00.000","Title":"Persisting module installation across new python virtual environments","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"If I'm in the Python IDLE editor and the shell is not open, is there some way to open the shell without running a program? I expect it's something simple that I just can't find.\nThanks","AnswerCount":2,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":272,"Q_Id":63894130,"Users Score":2,"Answer":"For windows:\n\nWin+R to open run window\ncmd to open, well, the command line\npython to run python. 
Make sure you've added the python.exe file to PATH","Q_Score":2,"Tags":"python,python-idle","A_Id":63895712,"CreationDate":"2020-09-15T02:19:00.000","Title":"Python IDLE how do I get from editor to shell","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I deleted Python's directory; now I can't reinstall it, it tries to upgrade.\nI deleted HKEY_CURRENT_USER\/Software\/Python\nI looked at HKEY_LOCAL_MACHINE\/SOFTWARE but there was no Python.\nplatform: Windows 7 64 bit","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":273,"Q_Id":63896763,"Users Score":1,"Answer":"Under Windows, you should go to your Control Panel and choose Uninstall a Program. Then you should find your current Python installation and remove it with all dependencies.\nUnder Ubuntu, you have to go to the terminal and execute the command sudo apt-get remove python3* for Python 3.n versions.","Q_Score":1,"Tags":"python,python-3.x","A_Id":63896902,"CreationDate":"2020-09-15T07:07:00.000","Title":"Can't reinstall Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am having trouble installing pip. I downloaded the latest version of python (3.8.5) and then I got the get-pip.py file and saved it in my documents folder. After that I went into the command prompt to the documents folder and typed \"python get-pip.py\" as well as \"py get-pip.py\" and nothing happened. According to the tutorials that I watched, I should've seen a progress bar and some other technical information. 
I installed this on my Mac machine a while back and was able to do this easily.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":261,"Q_Id":63904988,"Users Score":0,"Answer":"So to solve this, all I did was install Python in an easy-to-access location. Before, it was under users\\appdata..... and a bunch of other useless locations. Then I made sure pip was checked in the installation wizard. I also had to add it to the path myself. For that I went to Control Panel -> System and Security -> System -> Advanced system settings (left hand menu) -> Environment Variables (button at bottom of first screen) -> under System variables scroll until you find Path, then double click or select and click Edit -> click New -> type in the path to the file","Q_Score":0,"Tags":"python,windows,installation,pip","A_Id":63930477,"CreationDate":"2020-09-15T15:20:00.000","Title":"Pip not installing windows 10","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am having trouble installing pip. I downloaded the latest version of python (3.8.5) and then I got the get-pip.py file and saved it in my documents folder. After that I went into the command prompt to the documents folder and typed \"python get-pip.py\" as well as \"py get-pip.py\" and nothing happened. According to the tutorials that I watched, I should've seen a progress bar and some other technical information. 
I installed this on my Mac machine a while back and was able to do this easily.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":261,"Q_Id":63904988,"Users Score":0,"Answer":"You need to add the path of the pip install to your PATH system variable via Control Panel or by using the setx command\nGenerally it is installed to C:\\Python38\\Scripts\\pip by default.\nIf you are unsure of whether or not it is in your path you can type echo %path% into a CMD prompt.","Q_Score":0,"Tags":"python,windows,installation,pip","A_Id":63906392,"CreationDate":"2020-09-15T15:20:00.000","Title":"Pip not installing windows 10","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am trying to install package inside a docker container(python:rc-slim).\nAs of now I see that most recent azureml-core wheel uploaded to PyPI is:\n\nazureml_core-1.13.0-py3-none-any.whl\n\nbut when I run pip install azureml-core==1.13.0 I get following error:\n\nERROR: Could not find a version that satisfies the requirement\nazureml-core==1.13.0 (from versions: 0.1.50, 0.1.57, 0.1.58, 0.1.59,\n0.1.65, 0.1.68, 0.1.74, 0.1.80, 1.0rc83, 1.0rc85, 1.0.2, 1.0.6, 1.0.8, 1.0.10, 1.0.15, 1.0.17, 1.0.17.1, 1.0.18, 1.0.21, 1.0.23, 1.0.30, 1.0.33, 1.0.33.1, 1.0.39, 1.0.41, 1.0.41.1, 1.0.43, 1.0.43.1, 1.0.45, 1.0.48, 1.0.53, 1.0.55, 1.0.57, 1.0.57.1, 1.0.60, 1.0.62, 1.0.62.1, 1.0.65, 1.0.65.1, 1.0.69, 1.0.72, 1.0.74, 1.0.76, 1.0.76.1, 1.0.79, 1.0.81, 1.0.81.1, 1.0.83, 1.0.85, 1.0.85.1, 1.0.85.2, 1.0.85.3, 1.0.85.4, 1.0.85.5, 1.0.85.6, 1.1.0rc0, 1.1.1rc0, 1.1.1.1rc0, 1.1.1.2rc0, 1.1.2rc0, 1.1.5, 1.1.5.1, 1.1.5.2, 1.1.5.3, 1.1.5.4, 1.1.5.5, 1.1.5.6, 1.1.5.7)\n\nWhen installing packages from 'apt-get' I usually have to update the index first but I can't find a comparable command to do that with 
pip.","AnswerCount":2,"Available Count":1,"Score":-0.0996679946,"is_accepted":false,"ViewCount":166,"Q_Id":63927086,"Users Score":-1,"Answer":"Two possibilities:\n\nthe package needs you to use an underscore (since hyphens don't behave) so pip can download it: run pip install azureml_core==1.13.0\nRun with the --no-cache-dir argument, so pip install --no-cache-dir azureml_core==1.13.0. This argument forces pip to refresh its package cache.","Q_Score":0,"Tags":"python,docker,pip","A_Id":63927228,"CreationDate":"2020-09-16T19:51:00.000","Title":"pip using out of date package index","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I want to clone my existing venv to another PC but simply copy-pasting is not working. When I copy the venv and paste it to the second machine and run\n\npip list\n\nIt only lists pip and setuptools as the installed dependencies.\nI tried another way to clone the packages.\nI created a new venv on the second machine and copied all the files of the first venv to that new venv, skipping the existing files with the same name in the new venv. Now, when I run\n\npip list\n\nIt shows all the dependencies, but when I try to launch the jupyter notebook as\n\njupyter notebook\n\nIt gives the following error.\n\nFatal error in launcher: Unable to create process using '\"f:\\path\\to\\first_venv\\on_first_machine\\scripts\\python.exe\"\n\"C:\\path\\to\\new_venv\\on_the_second_machine\\Scripts\\jupyter.exe\" notebook': The system cannot find the file specified.\n\nI don't know how to make things work. Please help!\nEdit\nThe problem is I don't have an internet connection on the second machine. Actually it's a remote machine with some security protocols applied, and having no internet connection is part of the security! 
My bad :'(","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":2265,"Q_Id":63932002,"Users Score":10,"Answer":"You can't copy-paste venvs from one machine to another since scripts in them may refer to system locations. (The same stands for attempting to move venvs within a machine.)\nInstead, recreate the environment on the new machine:\n\nOn the old machine, run pip freeze -l > packages.txt in the virtualenv.\nMove packages.txt over to the new machine.\nCreate a new virtualenv on the new machine and enter it.\nInstall the packages from the txt file: pip install -r packages.txt.\n\nEDIT: If you don't have internet access on the second machine, you can continue from step 2 with:\n\nRun pip wheel -w wheels -r packages.txt in the venv on the first machine. This will download and build *.whl packages for all the packages you require. Note that this assumes both machines are similar in OS and architecture!\nCopy the wheel files over to the new machine.\nCreate a new virtualenv on the new machine and enter it.\nInstall the packages from wheels in the new virtualenv: pip install *.whl.","Q_Score":4,"Tags":"python,jupyter-notebook,python-venv","A_Id":63932038,"CreationDate":"2020-09-17T05:52:00.000","Title":"Unable to clone Python venv to another PC","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am trying to setup Atom as a Python IDE.\nI have installed atom-ide-ui, and ide-python packages. Python and pyls are installed as well to the latest versions. Nonetheless I am not able to use any of the functions the packages should provide (e.g. 
autocomplete, highlighting, etc.), they just do not seem to be active.\nI have tried to set the python-ide Python Executable path to the actual install path (C:\\Users\\user\\AppData\\Local\\Programs\\Python\\Python38-32\\python), as I thought it could be a problem with the defaults. Still nothing.\nI am wondering if I am missing any important step in the setup, or if I am using something incorrectly.\nThanks in advance.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":180,"Q_Id":63940568,"Users Score":0,"Answer":"Did you load the language-python extension? It's necessary to run Python. Was your file in color after opening valid Python code?\nYou should also load the extension named Script. Script is required to run your Python file. After you start Atom, under the Packages menu you'll see the word Script. Click on this and you'll get another pane that says run script. Click on this to run your Python code.","Q_Score":0,"Tags":"python,atom-editor","A_Id":63973995,"CreationDate":"2020-09-17T14:44:00.000","Title":"Can't use any Atom Python IDE functions","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have Kali Linux, amd64, the latest python3.8 version, and when I try to install pyaudio I just get this: ERROR: Failed building wheel for PyAudio Running setup.py clean for PyAudio Failed to build PyAudio Installing collected packages: PyAudio Running setup.py install for PyAudio ... 
error ERROR: Command errored out with exit status 1: command: \/usr\/bin\/python3 -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '\"'\"'\/tmp\/pip-install-l87_4qp_\/PyAudio\/setup.py'\"'\"'; __file__='\"'\"'\/tmp\/pip-install-l87_4qp_\/PyAudio\/setup.py'\"'\"';f=getattr(tokenize, '\"'\"'open'\"'\"', open)(__file__);code=f.read().replace('\"'\"'\\r\\n'\"'\"', '\"'\"'\\n'\"'\"');f.close();exec(compile(code, __file__, '\"'\"'exec'\"'\"'))' install --record \/tmp\/pip-record-2u86ewfj\/install-record.txt --single-version-externally-managed --compile --install-headers \/usr\/local\/include\/python3.8\/PyAudio cwd: \/tmp\/pip-install-l87_4qp_\/PyAudio\/ Complete output (16 lines): running install running build running build_py creating build creating build\/lib.linux-x86_64-3.8 copying src\/pyaudio.py -> build\/lib.linux-x86_64-3.8 running build_ext building '_portaudio' extension creating build\/temp.linux-x86_64-3.8 creating build\/temp.linux-x86_64-3.8\/src x86_64-linux-gnu-gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -g -fwrapv -O2 -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I\/usr\/include\/python3.8 -c src\/_portaudiomodule.c -o build\/temp.linux-x86_64-3.8\/src\/_portaudiomodule.o src\/_portaudiomodule.c:29:10: fatal error: portaudio.h: \u041d\u0435\u0442 \u0442\u0430\u043a\u043e\u0433\u043e \u0444\u0430\u0439\u043b\u0430 \u0438\u043b\u0438 \u043a\u0430\u0442\u0430\u043b\u043e\u0433\u0430 29 | #include \"portaudio.h\" | ^~~~~~~~~~~~~ compilation terminated. 
error: command 'x86_64-linux-gnu-gcc' failed with exit status 1 ---------------------------------------- ERROR: Command errored out with exit status 1: \/usr\/bin\/python3 -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '\"'\"'\/tmp\/pip-install-l87_4qp_\/PyAudio\/setup.py'\"'\"'; __file__='\"'\"'\/tmp\/pip-install-l87_4qp_\/PyAudio\/setup.py'\"'\"';f=getattr(tokenize, '\"'\"'open'\"'\"', open)(__file__);code=f.read().replace('\"'\"'\\r\\n'\"'\"', '\"'\"'\\n'\"'\"');f.close();exec(compile(code, __file__, '\"'\"'exec'\"'\"'))' install --record \/tmp\/pip-record-2u86ewfj\/install-record.txt --single-version-externally-managed --compile --install-headers \/usr\/local\/include\/python3.8\/PyAudio Check the logs for full command output\nIdk what to do with that","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":211,"Q_Id":63941593,"Users Score":0,"Answer":"I solved it by installing portaudio19-dev","Q_Score":1,"Tags":"python,pyaudio","A_Id":63941767,"CreationDate":"2020-09-17T15:41:00.000","Title":"Cant install PyAudio python3.8","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"Now i am trying to rename the contents of a folder that contains txt files using python:\nThe original files naming starts from 0.txt to 100.txt\nand i want to change their name to start from 10 instead of 0 (so the files would be 10.txt to 110.txt for example)\nI have 2 lists, one contains the original names path and another one with the new names path,and i am trying to use os.rename() or shutil.move() to rename the files.\nHowever, when i try using os.rename(), i get the error that i cannot create a file that already exists.\nWhen i try to use shutil.move(), it deletes every single repeated folder, giving me only the last 9 folders(101 to 110).\nIs there a way around this 
??","AnswerCount":5,"Available Count":4,"Score":0.0,"is_accepted":false,"ViewCount":1664,"Q_Id":63942474,"Users Score":0,"Answer":"It sounds like you need to start at the end of your list. Try moving 100->110 first, then 99->109, etc. Since 11 is going to already exist when you try moving 1->11 if you start at the beginning of your lists.\nEDIT: I'd actually recommend Tarranoth's answer above","Q_Score":2,"Tags":"python,rename,shutil","A_Id":63942560,"CreationDate":"2020-09-17T16:33:00.000","Title":"os.rename cannot create a file that already exists","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"Now i am trying to rename the contents of a folder that contains txt files using python:\nThe original files naming starts from 0.txt to 100.txt\nand i want to change their name to start from 10 instead of 0 (so the files would be 10.txt to 110.txt for example)\nI have 2 lists, one contains the original names path and another one with the new names path,and i am trying to use os.rename() or shutil.move() to rename the files.\nHowever, when i try using os.rename(), i get the error that i cannot create a file that already exists.\nWhen i try to use shutil.move(), it deletes every single repeated folder, giving me only the last 9 folders(101 to 110).\nIs there a way around this ??","AnswerCount":5,"Available Count":4,"Score":0.0798297691,"is_accepted":false,"ViewCount":1664,"Q_Id":63942474,"Users Score":2,"Answer":"use os.replace() to replace a file that already exists. 
To check if file exists, use os.path.isfile('file path')","Q_Score":2,"Tags":"python,rename,shutil","A_Id":63942561,"CreationDate":"2020-09-17T16:33:00.000","Title":"os.rename cannot create a file that already exists","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"Now i am trying to rename the contents of a folder that contains txt files using python:\nThe original files naming starts from 0.txt to 100.txt\nand i want to change their name to start from 10 instead of 0 (so the files would be 10.txt to 110.txt for example)\nI have 2 lists, one contains the original names path and another one with the new names path,and i am trying to use os.rename() or shutil.move() to rename the files.\nHowever, when i try using os.rename(), i get the error that i cannot create a file that already exists.\nWhen i try to use shutil.move(), it deletes every single repeated folder, giving me only the last 9 folders(101 to 110).\nIs there a way around this ??","AnswerCount":5,"Available Count":4,"Score":0.0399786803,"is_accepted":false,"ViewCount":1664,"Q_Id":63942474,"Users Score":1,"Answer":"You accurately defined the problem. For instance, when you deal with 02.txt, you rename it to 12.txt. Ten iterations later, you rename the same file to 22.txt, then 23.txt, ... until you finish with 102.txt. You do this with the first ten files, systematically wiping out your other 90 files.\nYou have to start at the end of the name space where you have expansion room. Work from 100.txt downward. Rename 100.txt to 110.txt, then 99.txt to 109.txt, etc. This way, you always have an interval of 10 unused file names for your movements.\nCoding is left as an exercise for the reader. 
:-)","Q_Score":2,"Tags":"python,rename,shutil","A_Id":63942617,"CreationDate":"2020-09-17T16:33:00.000","Title":"os.rename cannot create a file that already exists","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"Now i am trying to rename the contents of a folder that contains txt files using python:\nThe original files naming starts from 0.txt to 100.txt\nand i want to change their name to start from 10 instead of 0 (so the files would be 10.txt to 110.txt for example)\nI have 2 lists, one contains the original names path and another one with the new names path,and i am trying to use os.rename() or shutil.move() to rename the files.\nHowever, when i try using os.rename(), i get the error that i cannot create a file that already exists.\nWhen i try to use shutil.move(), it deletes every single repeated folder, giving me only the last 9 folders(101 to 110).\nIs there a way around this ??","AnswerCount":5,"Available Count":4,"Score":1.2,"is_accepted":true,"ViewCount":1664,"Q_Id":63942474,"Users Score":2,"Answer":"I'd advise to move the files and rename them in a temporary directory, delete the original files and move the files back from the temporary directory. And then you can optionally delete the temporary directory you created.","Q_Score":2,"Tags":"python,rename,shutil","A_Id":63942546,"CreationDate":"2020-09-17T16:33:00.000","Title":"os.rename cannot create a file that already exists","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"as below:\nMicrosoft Windows [Version XXXXX]\n(c) 2019 Microsoft Corporation. 
All rights reserved.\nC:\\Users\\XXX\\Desktop\\Projects and Learning>\"C:\/Users\/XXX\/AppData\/Local\/Programs\/Python\/Python38-32\/python.exe\" \"c:\/Users\/XXXX\/Desktop\/Projects and Learning\/Project1.py\"\nWelcome XXX!\nC:\\Users\\XXXX\\Desktop\\Projects and Learning>","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":62,"Q_Id":63943486,"Users Score":0,"Answer":"You can type clear or if you are looking for a fresh new terminal you can click on terminal up on the menu bar and select new terminal or use the Mac shortcut ^[shift] and `","Q_Score":0,"Tags":"python,visual-studio-code","A_Id":63943825,"CreationDate":"2020-09-17T17:41:00.000","Title":"How to clear Terminal of command prompt after installing VSCode for python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"as below:\nMicrosoft Windows [Version XXXXX]\n(c) 2019 Microsoft Corporation. 
All rights reserved.\nC:\\Users\\XXX\\Desktop\\Projects and Learning>\"C:\/Users\/XXX\/AppData\/Local\/Programs\/Python\/Python38-32\/python.exe\" \"c:\/Users\/XXXX\/Desktop\/Projects and Learning\/Project1.py\"\nWelcome XXX!\nC:\\Users\\XXXX\\Desktop\\Projects and Learning>","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":62,"Q_Id":63943486,"Users Score":0,"Answer":"If I\u2019m understanding correctly, you are just trying to clear what\u2019s currently displayed in the terminal.\nCtrl + L clears the terminal.\nOr just type clear and press Enter to clear the window","Q_Score":0,"Tags":"python,visual-studio-code","A_Id":63943686,"CreationDate":"2020-09-17T17:41:00.000","Title":"How to clear Terminal of command prompt after installing VSCode for python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"Facing issues when trying to downgrade the python version from 3.7.3 to 3.6.5_1. I have done a lot of research before reaching out to the Stack Overflow family, and all of it suggested the same resolution steps that I have tried as below, but found no luck. I am looking for the python 3.6.5_1 version because Python 3.7 is unsupported by Keras\/TensorFlow and thus is not a good choice for OpenCV either:\nOption 1:\nbrew unlink python\nbrew install --ignore-dependencies https:\/\/raw.githubusercontent.com\/Homebrew\/homebrew-core\/f2a764ef944b1080be64bd88dca9a1d80130c558\/Formula\/python.rb\nError: Calling Installation of python from a GitHub commit URL is disabled! 
Use 'brew extract python' to stable tap on GitHub instead.\nOption 2:\nbrew switch python 3.6.5\nError: python does not have a version \"3.6.5\" in the Cellar.\nAfter a couple of tries, I realized that the problem is with the git URL that Homebrew uses to get the Python 3.6.5_1 version; it may have changed, or the commit URL might not be the latest.\nmacOS version: Catalina 10.15.6\nSo I am seeking your guidance and suggestions on how to solve this problem. Also, please let me know if I missed any info that is required to post here (which could've helped in solving the problem).\nThanks in advance","AnswerCount":2,"Available Count":1,"Score":1.0,"is_accepted":false,"ViewCount":2982,"Q_Id":63948556,"Users Score":10,"Answer":"Using brew install sashkab\/python\/python@3.6 works. Credit to @Jean-Pierre Matsumoto.","Q_Score":1,"Tags":"python,python-3.x,macos,opencv,homebrew","A_Id":65308770,"CreationDate":"2020-09-18T02:35:00.000","Title":"Unable to downgrade python version in macos using Homebrew","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have an NI USB data logger. When I plug in the device, it sends the same 10-byte data every 100 ms, before the software is even opened. I captured this data using a USB packet sniffer (assume this data is a device ID). My question is: how can the device send data without its software being open? And how can I find out on which endpoint these packets are being sent?\nthank you","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":56,"Q_Id":63969316,"Users Score":0,"Answer":"I can't answer your first question (why a particular device could be sending 10-byte packets before its accompanying application software is launched). 
There are numerous possibilities that are in line with the USB spec.\nTo your 2nd question, you may use tracing software like Wireshark or Microsoft Event Analyzer (the latter has recently reached EOL, though) to find out which endpoints the transfers belong to, and even decode the content to produce easily readable output (for devices belonging to standard classes).","Q_Score":1,"Tags":"python,c#,c,usb,libusb","A_Id":64883324,"CreationDate":"2020-09-19T13:24:00.000","Title":"is there a way to send a USB data packet without host permission?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"In a terminal emulator with readline support, I can use key binding Ctrl-X Ctrl-E to bring up $EDITOR to edit a command.\nHow do I do that in iPython to bring up $EDITOR to edit half-finished code?\nP.S. My $EDITOR is set to \"vim -u \".","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":62,"Q_Id":63976712,"Users Score":1,"Answer":"Shortcuts used:\n'g' to launch gvim with the content of the current cell (you can replace gvim with whatever text editor you like).\nSo, when you want to edit the cell with your preferred editor, hit 'g', make the changes you want to the cell, save the file in your editor (and then quit), then press 'u'.","Q_Score":0,"Tags":"ipython,editor","A_Id":63976963,"CreationDate":"2020-09-20T07:49:00.000","Title":"iPython shell - how to use $EDITOR to edit commands","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am new to python and I read online that pip is a good tool to install. 
I searched online on how to install pip and the first step listed was to check if python was installed properly by opening cmd in admin mode and typing python and it took me to the Microsoft store. I searched online about this and I followed some guide to editing PATH but it did not work. Some other posts said to try typing py and it worked. Then I followed the guide to install a file called get-pip.py. I downloaded it, went into cmd and into the directory where it was downloaded, and typed python get-pip.py and the command prompt just skipped one line. Now I tried pip -V and cmd did not recognize it. So I tried moved the get-pip.py file into where my python.exe file existed and tried py get-pip.exe and got the error C:\\Users\\[*myname*]\\AppData\\Local\\Programs\\Python\\Python38-32\\python.exe: can't open file 'get-pip.py': [Errno 2] No such file or directory. Any help would be appreciated and sorry if this has already been asked I couldn't find good solutions.\nThanks in advance","AnswerCount":4,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":76,"Q_Id":63978266,"Users Score":0,"Answer":"The easiest way to install pip is by modifying the Python installation.\nStep 1 - Open Apps & Features\nStep 2 - Find Python and click on it\nStep 3 - Press Modify\nStep 4 - Select pip\nStep 5 - Select Add Python to environment variables and install everything\nThat will install pip and add it to your environment variables so you can run pip tasks in Command Prompt or PowerShell from anywhere.","Q_Score":0,"Tags":"python,python-3.x,pip","A_Id":64043279,"CreationDate":"2020-09-20T11:03:00.000","Title":"Trouble with installing pip","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am new to python and I read online that pip is a good tool to install. 
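One more option worth noting alongside the answers to this question (not taken from them, just a commonly available fallback): Python ships with the stdlib ensurepip module, so pip can usually be bootstrapped and invoked without get-pip.py or any PATH editing by going through the interpreter itself. A minimal sketch, assuming python (or py on Windows) already runs:

```python
import subprocess
import sys

# Bootstrap pip with the stdlib ensurepip module (no get-pip.py needed).
subprocess.run([sys.executable, "-m", "ensurepip", "--upgrade"], check=False)

# Invoke pip through the interpreter, so pip itself never needs to be on PATH.
result = subprocess.run(
    [sys.executable, "-m", "pip", "--version"],
    capture_output=True, text=True,
)
print(result.stdout.strip())
```

Calling pip as python -m pip sidesteps the "pip is not recognized" problem entirely, because only the interpreter has to be findable.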
I searched online on how to install pip and the first step listed was to check if python was installed properly by opening cmd in admin mode and typing python and it took me to the Microsoft store. I searched online about this and I followed some guide to editing PATH but it did not work. Some other posts said to try typing py and it worked. Then I followed the guide to install a file called get-pip.py. I downloaded it, went into cmd and into the directory where it was downloaded, and typed python get-pip.py and the command prompt just skipped one line. Now I tried pip -V and cmd did not recognize it. So I tried moved the get-pip.py file into where my python.exe file existed and tried py get-pip.exe and got the error C:\\Users\\[*myname*]\\AppData\\Local\\Programs\\Python\\Python38-32\\python.exe: can't open file 'get-pip.py': [Errno 2] No such file or directory. Any help would be appreciated and sorry if this has already been asked I couldn't find good solutions.\nThanks in advance","AnswerCount":4,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":76,"Q_Id":63978266,"Users Score":0,"Answer":"If you are new, start by using Google Colab first to learn how to code in Python; most of the packages are already installed, and there is no need to install the basic packages.","Q_Score":0,"Tags":"python,python-3.x,pip","A_Id":63978527,"CreationDate":"2020-09-20T11:03:00.000","Title":"Trouble with installing pip","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am new to python and I read online that pip is a good tool to install. I searched online on how to install pip and the first step listed was to check if python was installed properly by opening cmd in admin mode and typing python and it took me to the Microsoft store. 
I searched online about this and I followed some guide to editing PATH but it did not work. Some other posts said to try typing py and it worked. Then I followed the guide to install a file called get-pip.py. I downloaded it, went into cmd and into the directory where it was downloaded, and typed python get-pip.py and the command prompt just skipped one line. Now I tried pip -V and cmd did not recognize it. So I tried moved the get-pip.py file into where my python.exe file existed and tried py get-pip.exe and got the error C:\\Users\\[*myname*]\\AppData\\Local\\Programs\\Python\\Python38-32\\python.exe: can't open file 'get-pip.py': [Errno 2] No such file or directory. Any help would be appreciated and sorry if this has already been asked I couldn't find good solutions.\nThanks in advance","AnswerCount":4,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":76,"Q_Id":63978266,"Users Score":0,"Answer":"You may either:\n\ninclude your pip location (\\Python37\\Scripts) in PATH; or\ncall pip by typing the whole path, like \\Python37\\Scripts\\pip -V","Q_Score":0,"Tags":"python,python-3.x,pip","A_Id":63984084,"CreationDate":"2020-09-20T11:03:00.000","Title":"Trouble with installing pip","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"When I try to install gensim through cmd prompt, it gives me the following error:\n\n\"ERROR: Could not install packages due to an EnvironmentError:\n[WinError 5] Access is denied:\n'c:\\programdata\\anaconda3\\lib\\site-packages\\pycache\\cython.cpython-38.pyc'\nConsider using the --user option or check the permissions. 
\"\n\nI'm unable to sort this issue, please help me out!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":181,"Q_Id":63979393,"Users Score":0,"Answer":"Open windows command prompt with administrative permissions (Right-click on cmd and choose Run as administrator option).\n\nor\nTry using the --user flag\nExample :\npip install gensim --user","Q_Score":0,"Tags":"python-3.x,gensim","A_Id":63979448,"CreationDate":"2020-09-20T13:16:00.000","Title":"Unable to install 'gensim'","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I want to make an app that launches different exe's but I have a problem, one app requires to run as admin, so my question is how can I run that app as admin from a python script?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":178,"Q_Id":63983579,"Users Score":1,"Answer":"I used the elevate module in python! That worked! Thanks anyway!","Q_Score":0,"Tags":"python,windows,admin,launcher,python-os","A_Id":64042751,"CreationDate":"2020-09-20T20:56:00.000","Title":"How to run an exe as admin using os module (Windows)?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am wondering how to make an entry point to call a python script, similar to black.\ne.g.:\nblack my_script.py\nSay I have a python file called fix_newline.py.\nInstead of calling python fix_newline.py path\/to\/my_script.py in the directory of fix_newline.py, I'd like to assign the name fix_newline to python path\/to\/fix_newline.py.\nThe ultimate goal is to call fix_newline from anywhere in my directory tree, as long as I am in the same environment (e.g. 
~\/.bash_profile).","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":179,"Q_Id":63983596,"Users Score":0,"Answer":"Add path\/to to your PATH variable in .bash_profile. (If you have a lot of scripts, consider installing them in a fixed location like ~\/bin\/, so that you don't add a lot of unnecessary directories to your PATH.)\n\nMake sure your script is executable and has an appropriate shebang.\n\nDrop the .py from the script name.","Q_Score":0,"Tags":"python,python-3.x,alias,argv","A_Id":63983639,"CreationDate":"2020-09-20T20:58:00.000","Title":"How to make alias for python script, similar to black?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"Running Sublime Text 3 on Ubuntu here, Version 3.2.2 build 3211.\nProblem: When running a script with Python3 or Latex, the process stays in memory after it's finished. I have to kill them manually. I discovered this after my computer froze at least twice, and it was caused by python processes from Sublime eating up all RAM and swap. Also, I had problems with matplotlib complaining that all available resources for new windows were taken.\nExpectation: when a job is finished, the process should be killed, freeing up memory.\nTests: I didn't test with other languages besides Latex or Python. I tried in Sublime build 3210 and 3209, and it had the same behavior. I tried looking in the Sublime forums and here on Stack Overflow, and I couldn't find anything related.\nThanks in advance for any help!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":129,"Q_Id":63985492,"Users Score":0,"Answer":"The problem was a \"-i\" flag in the build configuration for Python3... I copied this config somewhere and forgot to check. 
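The three steps in the alias answer above (directory on PATH, executable bit plus shebang, no .py suffix) can be sketched end-to-end on a POSIX system; the fix_newline script body below is a hypothetical stand-in, and a temp directory stands in for ~/bin:

```python
import os
import stat
import subprocess
import tempfile

# Install a shebang script on PATH without a .py suffix, then call it by name.
bindir = tempfile.mkdtemp()                   # stand-in for ~/bin
script = os.path.join(bindir, "fix_newline")  # note: no .py extension
with open(script, "w") as f:
    f.write("#!/usr/bin/env python3\nprint('newlines fixed')\n")
os.chmod(script, os.stat(script).st_mode | stat.S_IXUSR)  # chmod +x

# Equivalent of adding the directory to PATH in ~/.bash_profile.
os.environ["PATH"] = bindir + os.pathsep + os.environ.get("PATH", "")

# The command is now callable from anywhere, like `black`.
out = subprocess.run(["fix_newline"], capture_output=True, text=True)
print(out.stdout.strip())
```

The shebang is what lets the kernel hand the file to python3 even though the filename no longer hints at Python.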
Now the processes are correctly closed!\nAbout the Latex one, I will uninstall Latex packages and try to install them again.","Q_Score":0,"Tags":"python,build,sublimetext3","A_Id":64038207,"CreationDate":"2020-09-21T02:34:00.000","Title":"Sublime Text 3 doesn't kill processes when finished","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have looked all over the internet including here. I have tried all the suggested solutions, but none of them worked.\nHere is the error message I get _mysql.c:44:10: fatal error: 'my_config.h' file not found \nCould the problem come from the fact that my MySQL and Python versions are not compatible?\nMySql version:Ver 8.0.21 for osx10.15 on x86_64 (Homebrew)\nPython version:Python 2.7.16\nCould that simply be it?\nThanks!","AnswerCount":3,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":721,"Q_Id":63990900,"Users Score":1,"Answer":"This solution is for macOS users\nAfter struggling a while on this issue I finally solved it, and it was pretty simple in the end.\nMaybe this will help others too, because I have seen this issue around a lot during my research. There are plenty of solutions out there, that worked for some and not for others. None of them worked for me.\nBefore you go into a copy\/paste of commands rampage in your terminal, you might want to check your MySQL version. Because apparently MySQL only supports the MySQL-python dependency until version 5.7. 
So I just had to downgrade my MySQL 8 to 5.7, and then I could finally run the pip install MySQL-python command, and it worked!\nSo if you have a MySQL version higher than 5.7, you might want to downgrade.\nTo do that, type these commands in your terminal:\nbrew unlink mysql\nbrew install mysql@5.7\nThen try mysql --version. If the mysql command is still not recognized, run this command:\nexport PATH=${PATH}:\/usr\/local\/mysql\/bin \nAnd you should finally be able to run:\npip install MySQL-python\nThis worked for me, hope it will for you!","Q_Score":0,"Tags":"python,mysql-python,mysql-connector","A_Id":63994390,"CreationDate":"2020-09-21T10:58:00.000","Title":"MacOS: \"pip install MySQL-python\" returning an error: \"_mysql.c:44:10: fatal error: 'my_config.h' file not found\"","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am trying to invoke a docker instance from python subprocess (windows \/ wsl).\nLet's just assume that the docker I need to run is a simple docker run -it busybox (it's not going to be that, but it's a shell for experimenting) but once loaded I need to insert programmatically (asynchronous or blocked, either way is fine) some commands for git to pull some sources and then compile them and deploy them (before invoking docker, I am asking the user to choose a tag from a set of repos).\nSo far using the normal subprocess.Popen I was able to tap in to docker, but I need to have this persistent until I leave docker interactive shell (from busybox).\nIs this possible to be done, or once I get the subprocess done it stops at the next command (as it happens now)?\n(PS I can post some of my code, but I need to clean up some bits first)","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":24,"Q_Id":63997490,"Users Score":0,"Answer":"Can you possibly do a simple while 
loop? My other thought would be: if all the commands are the same each time they are called, put them in a batch file and call the batch from Python. That's all I can come up with without seeing code.","Q_Score":0,"Tags":"python,docker,subprocess","A_Id":63997645,"CreationDate":"2020-09-21T17:46:00.000","Title":"Invoking multiple docker commands via python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"Environment\nI am running code I found online and it uses the logging library to create logs. I am using python3.6.9 on Ubuntu 18.04. The code is a neural network Tensorflow code, in case that is somehow relevant.\nThe Problem + More Info\nWhen I use vim to open the log files produced, it looks like they are binary.\nUsing the file command in Ubuntu I see that the type of the file is \"data\".\nIn the code the logger is initiated using logging.basicConfig(stream=sys.stdout, level=logging.DEBUG)\nThe files are being saved with .0 and .1 extensions, i.e. events.out.tfevents.1600700600.mycomputername.21941.1.\nPlease let me know any other information you need me to provide.\nThank you in advance for any help available.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1534,"Q_Id":64004709,"Users Score":1,"Answer":"Most probably the file you open is not one generated by the python logging module, because the configuration shown: logging.basicConfig(stream=sys.stdout, level=logging.DEBUG) is just logging to stdout, not to a file","Q_Score":0,"Tags":"python-3.x,ubuntu-18.04,python-logging","A_Id":64004794,"CreationDate":"2020-09-22T07:10:00.000","Title":"Read Python Logging Log File in Ubuntu","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and 
DevOps":1,"Web Development":0},{"Question":"When writing a Python package, I know how to specify other required Python packages in the setup.py file thanks to the field install_requires from setuptools.setup.\nHowever, I do not know how to specify external system dependencies that are NOT Python packages, i.e. a commands such as git or cmake (examples) that my package could call via subprocess.call or subprocess.Popen?\nDo I have to manually check the availability of the commands in my setup.py file, or is there a fancy way to specify system requirements?\nEdit: I just want to be able to check if the external tools are available, and if not invite the user to install them (by themself). I do not want to manage the installation of external tools when installing the package.\nSummary of contributions: it seems that setuptools has no support for this, and it would be safer to do the check at runtime (c.f. comments and answers).","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":549,"Q_Id":64005822,"Users Score":4,"Answer":"My recommendation would be to check for the presence of those external dependencies not at install-time but at run-time. Either at the start of each run, or maybe at the first run.\nIt's true that you could add this to your setup.py, but the setup.py is not always executed at install-time: for example if your project is packaged as a wheel then it doesn't even contain the setup.py file at all. 
And even if you do not distribute your project as a wheel, if I am not mistaken pip tends to build a wheel locally anyway and reuse it for the subsequent installations.\nSo although it would be possible to do such checks as part of the setup script at install time (provided that you can guarantee the presence and execution of setup.py), I would say run-time is a safer bet.","Q_Score":6,"Tags":"python,python-3.x,setuptools,setup.py,python-packaging","A_Id":64028598,"CreationDate":"2020-09-22T08:25:00.000","Title":"How to specify external system dependencies to a Python package?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have code written in Python3 with dot version of 4 (3.4).\nI want to install latest Python3 onto Windows 10. The latest version is 3.8.\n\nWhen it installs IDLE will also install.\nIs it possible to force Idle to \"use\" specific version? Similar to Java compile for specific\nversion.\n\nI do not want functions to be shown that are not available in 3.4.\nAs an aside, it does not matter what machine is used for IDE and development. Python 3.x behaves the same on Windows as it does on CentOs as it does on Mac?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":29,"Q_Id":64014525,"Users Score":0,"Answer":"Short answer 'No'.\nReason 1a is that if you have, in this instance, 3.4, you also have or can get the 3.4 version of IDLE and can use that. Admittedly the old version is not as nice as the current version. But IDLE is currently the same in all 'current' versions, which is now 3.8 to 3.10. 
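The run-time check recommended in the setuptools answer above can be done entirely with the stdlib: shutil.which reports whether a command is resolvable on PATH. A minimal sketch (git and cmake are the example commands from that question):

```python
import shutil

# Check for external, non-Python tools at run time instead of in setup.py.
REQUIRED_TOOLS = ("git", "cmake")

def missing_tools(tools):
    """Return the subset of commands that cannot be found on PATH."""
    return [tool for tool in tools if shutil.which(tool) is None]

missing = missing_tools(REQUIRED_TOOLS)
if missing:
    # Invite the user to install the tools themselves, per the question's edit.
    print("Please install the missing tool(s): " + ", ".join(missing))
```

Running this once at startup (or on first run) keeps the check independent of how the package was installed, wheel or not.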
And IDLE for 3.7 is not much different.\nOr 1b, you can edit a file with current IDLE (leaving aside the issues you raised), save it, and run it with 3.4 from (on Windows) Command Prompt.\nThe above assume that 3.4 still exists and runs on a particular machine. So 1c, you can try running your 3.4 code with 3.8. Because we mostly try to not break backward compatibility, it might work as is, or with little change. This should be especially true of code written by beginners.\nReason 2 is that the IDLE that comes with Python 3.x must be written to run in Python 3.x. (Actually, the IDLE code is kept almost the same across current versions, so IDLE is currently limited to what runs on 3.8.) To run your code, Python 3.x is started in a subprocess. The first code it runs is the 3.x version of idlelib.run, which provides the connection to IDLE in the IDLE process. For IDLE Shell, it also simulates interactive mode.\nReason 3 is that any attempt to work around reason 2 would be expensive, as it would require paid work and separate test machines.","Q_Score":0,"Tags":"python,python-3.x,python-idle","A_Id":64051091,"CreationDate":"2020-09-22T17:08:00.000","Title":"If I have code in Python3.4 and I install latest Python3, am I able to force IDLE to use specific version","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"For example, ClamWin.exe is installed in the \"ClamWin\" folder, which contains a lib and bin folder; however, just using \"Path\".parent returns me the bin folder. I want to go all the way up to the ClamWin folder and ensure that it will work for the other applications\neg Minecraft.exe in Minecraft folder\navp.exe in the kaspersky folder\nUsing python","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":38,"Q_Id":64040836,"Users Score":0,"Answer":"If it is 
only used on Windows, I think you can consider using the registry to obtain the path: HKEY_LOCAL_MACHINE\\SOFTWARE\\Microsoft\\Windows\\CurrentVersion\\App Paths\\Bandizip.exe\nGetting the parent folder recursively until you reach 'Program Files' may not work universally","Q_Score":0,"Tags":"python,python-3.x","A_Id":64040926,"CreationDate":"2020-09-24T06:42:00.000","Title":"How do I get the base parent folder of an installed application?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I installed python via anaconda on an EC2 Ubuntu Instance.\nThe command which python returns *\/home\/ubuntu\/anaconda3\/bin\/python*\nJenkins is instead installed in *\/var\/lib\/jenkins*\nI am trying to run a simple \"Hello World\" script saved in a file named *test.py* and located within the *\/home\/ubuntu\/scripts\/* folder.\nWhile running *python \/home\/ubuntu\/scripts\/test.py* works in the terminal, it fails as an \"Execute shell\" build step in Jenkins.\nWhy, and how do I configure Jenkins to run python scripts step by step?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":249,"Q_Id":64046579,"Users Score":0,"Answer":"The issue was that the anaconda python installation was only available to the user \"ubuntu\". 
For Jenkins to be able to run python scripts, the \"jenkins\" user needs to use that installation.\nTo solve the problem, this is what I did:\n\nLogged in as jenkins with the command sudo su -s \/bin\/bash jenkins\nEdited the python install location as export PATH=\/home\/ubuntu\/anaconda3\/bin:$PATH\nChecked that the path is correct through which python\nLogged back in as the ubuntu user\nRestarted Jenkins through sudo service jenkins restart (not sure if necessary)\n\nNow I can run python scripts through Jenkins.","Q_Score":0,"Tags":"python,jenkins,amazon-ec2,anaconda","A_Id":64061420,"CreationDate":"2020-09-24T12:36:00.000","Title":"How to configure Jenkins to run Python scripts?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am trying to execute a shell command, e.g. \"ls\", in a different directory from my python script. I am having issues changing the directory directly from the python code with subprocess.","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":627,"Q_Id":64086194,"Users Score":1,"Answer":"To add to tripleee's excellent answer, you can solve this in 3 ways:\n\nUse subprocess' cwd argument\nChange dir beforehand, using os.chdir\nRun the shell command in the exact dir you want, e.g. ls \/path\/to\/dir OR cd \/path\/to\/dir; ls, but note that some shell directives (e.g. 
&&, ;) cannot be used without adding shell=True to the method call\n\nPS as tripleee commented, using shell=True is not encouraged and there are a lot of things that should be taken into consideration when using it","Q_Score":0,"Tags":"python,subprocess","A_Id":64086862,"CreationDate":"2020-09-27T08:00:00.000","Title":"How to run shell commands in different directory using python subprocess command?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I\u2019m trying to deploy my dash app on EBS. I currently reference a local directory (on my machine) as \u2018C:\\users\\me\\projects\\superstar\\assets\\database.db\u2019\nMy dash app.py is in the superstar folder. I\u2019ve tried changing this to \u2018\/assets\/database.db\u2019, but the code is unable to find the file.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":69,"Q_Id":64108819,"Users Score":0,"Answer":"My problem was that the app.py file was not in the root directory. 
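The three approaches from the subprocess answer above can be sketched against a throwaway directory (POSIX ls is used as the example command, as in the question):

```python
import os
import subprocess
import tempfile

# A scratch directory with one file, so `ls` has something to show.
target = tempfile.mkdtemp()
open(os.path.join(target, "hello.txt"), "w").close()

# 1. subprocess's cwd argument (usually the cleanest option).
out1 = subprocess.run(["ls"], cwd=target, capture_output=True, text=True)

# 2. os.chdir before the call (affects the whole process; restore afterwards).
previous = os.getcwd()
os.chdir(target)
out2 = subprocess.run(["ls"], capture_output=True, text=True)
os.chdir(previous)

# 3. One shell string; `cd ... && ...` needs shell=True.
out3 = subprocess.run("cd " + target + " && ls", shell=True,
                      capture_output=True, text=True)

print(out1.stdout, out2.stdout, out3.stdout)
```

All three listings show hello.txt; the cwd argument avoids both the global state change of os.chdir and the caveats of shell=True.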
Once I moved it there, accessing '\/assets\/database.db' worked fine.","Q_Score":0,"Tags":"python,plotly-dash,amazon-ebs","A_Id":64252301,"CreationDate":"2020-09-28T19:57:00.000","Title":"How do I reference a local file when moving to a remote server?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I have a sh file that runs: python -m grafana_backup.cli save --config $settings_file.\nI run this file from a crontab, running the .sh file, but I get this error: python: command not found.\nThe shell in the crontab is SHELL=\/bin\/bash and in the .sh file is #!\/bin\/bash","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":436,"Q_Id":64118478,"Users Score":1,"Answer":"Do:\n\n1. Run which python3 - a possible result is \/usr\/bin\/python3\n2. Add the result of step 1 to the crontab command\n\nGeneral advice: use the full path for every resource your sh script uses","Q_Score":1,"Tags":"python,shell,cron,grafana","A_Id":64118531,"CreationDate":"2020-09-29T11:15:00.000","Title":"Command not found in sh script running from crontab","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"Every time when I try to run a file in the JupiterLab console I get the following message:\nERROR:root:File 'thing.py' not found.\nIn this case, my file is called thing.py and I try to run it with the trivial run thing.py command in the console. The code is running and it gives me correct results when executed in the console, but I wanted to have it saved, so I put it in a JupiterLab text file and changed the extension to .py instead of .txt. 
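Following the crontab answer above, the absolute interpreter path can be resolved programmatically and substituted into the crontab line; the schedule and config path below are hypothetical examples, while grafana_backup.cli comes from the question:

```python
import shutil

# Resolve the absolute interpreter path, as `which python3` would.
python3 = shutil.which("python3") or "/usr/bin/python3"

# Build a crontab entry that uses full paths only (schedule and config
# path are made-up examples).
cron_line = ("0 * * * * {} -m grafana_backup.cli save "
             "--config /home/user/backup.json").format(python3)
print(cron_line)
```

Because cron runs with a minimal environment, spelling out the interpreter path this way avoids the "python: command not found" failure entirely.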
But I get the aforementioned message regardless of which file I try to run. I am new to JupiterLab and admit that I might have missed something important. Any help is much appreciated.","AnswerCount":2,"Available Count":1,"Score":-0.0996679946,"is_accepted":false,"ViewCount":748,"Q_Id":64118493,"Users Score":-1,"Answer":"You might want to understand exactly what a Jupyter Lab file is, and what a Jupyter Lab file is not. Jupyter Notebooks have the extension .ipynb.\nSo anyway, Jupyter Notebooks are not saved or formatted with python extensions. There are no Jupyter Notebooks or Jupyter Labs ending with the .py extension. That means Jupyter will not recognize files with extensions such as .py, .txt, or .R. Jupyter opens, reads, and saves files having the .ipynb extension.\nJupyter Notebooks are an open document format based on JSON.\nJupyter can export in a number of different formats. Under the File tab is the Export feature. The last time I looked there were about 20 different export formats. But there isn't a python or .py export format. A Jupyter file can also be Downloaded. Also under the File tab is the Download feature. This will download a standard text-formatted JSON file. JSON files are mostly unreadable unless you've spent years coding JSON.\nSo there's not much purpose in downloading the Jupyter file unless you are working on a remote server and cannot save your work at that site. And it makes much more sense to save and copy the Jupyter file in its native Jupyter format - that means having the extension .ipynb. 
Then just open and use that file on another PC.\nHopefully this should clarify why Jupyter won't open any .py or .txt files.","Q_Score":0,"Tags":"python,jupyter,jupyter-lab","A_Id":64131515,"CreationDate":"2020-09-29T11:16:00.000","Title":"File not found when running a file in JupiterLab console","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"How can I delete a python executable file after it finishes? I have tried the following:\n\nos.remove() - deleting the python exe file\nos.chmod() - removing the readOnly\nos.getcwd - combined with os.remove()\nshutil.rmtree - combined with os.getcwd()\nsys.argv[0]\n\nAll of these work when I still use the .py extension, but when I convert it to exe it gives me a permission error. How do I remove it?\nI want to delete the main.exe because I'm going to distribute it to my friends. I don't want the program to stay inside their system permanently; that's why I decided to create an auto-delete script.\nThe code I'm running revolves around pyqt5.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":686,"Q_Id":64121273,"Users Score":0,"Answer":"I would start with ensuring that what you are trying to do is what you actually need. If you provide more details about your real purpose, probably we could find a better solution without using strange workarounds and\/or crutches to solve your problem\nIn Windows 10 it is not possible to remove an executable while it is running (no matter whether you are trying to remove the file from the inside or the outside of the executable), which is why you are getting a permission error with the .exe file. 
Probably there is some workaround which would use some special Windows-only features (because as far as I know it's not possible to do what you are asking about with libraries such as threading\/multiprocessing), but if you really want to do this, the easiest way I see is using another executable intended specifically to delete the executable which calls it, after the original executable finishes.\n\nBut, again, this is not a good solution and to be honest, I don't think there is a perfect one, because your goal seems like an anti-pattern to me.","Q_Score":0,"Tags":"python,python-3.x","A_Id":64121802,"CreationDate":"2020-09-29T14:03:00.000","Title":"How can I delete the application upon closing it?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"We are using a combination of django, rabbitmq, celery and ffmpeg to read camera streams and break them into images to be stored in the filesystem. This setup works 24x7. Now, for each camera stream we are creating a separate task and each will theoretically run for an indefinite period.\nIf a stream goes down, we wait for n number of frames, create an exception and in the exception handler, after creating a delay of 1 min using time.sleep, we rerun the ffmpeg process.\nMy questions are:\nIs this the right approach?\nShould we use celery for reading streams?\nIs celery the right tool to use for this task?\nCan we create a delay in a celery task using time.sleep? Will it affect the other tasks?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":888,"Q_Id":64123114,"Users Score":1,"Answer":"We have a relatively big Celery cluster and many of our tasks run for hours, some even run over 24 hours, so I would say yes, Celery is a good choice for long-running tasks.
I know very little about audio\/video processing, so I do not think there should be any problems doing it inside a Celery task. The only thing I would perhaps change in the original idea is the following: I would not sleep (yes, you can call sleep in a Celery task) and continue processing, but run a new task instead. Other tasks should not be affected at all.","Q_Score":1,"Tags":"python-3.x,ffmpeg,rabbitmq,celery,django-celery","A_Id":64137486,"CreationDate":"2020-09-29T15:46:00.000","Title":"Is celery a good choice for long running tasks?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I have a DAG in a puckel\/docker-airflow container, and in the DAG script I imported a custom module from another script:\nfrom app_store_reviews.app_store_reviews.spiders.list_ids import ListIdsSpider \nWhere:\nListIdsSpider is a class from list_ids.py that's in this path = 'usr\/local\/airflow\/dags\/app_store_reviews\/app_store_reviews\/spiders'\nBut I'm getting this kind of error from airflow:\nBroken DAG: [\/usr\/local\/airflow\/dags\/reviews_analysis_dag.py] No module named 'app_store_reviews.app_store_reviews.spiders.list_ids'\nHow can I solve this? Maybe add this path to PYTHONPATH?
If so, how can I do this in a running container?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":499,"Q_Id":64123610,"Users Score":1,"Answer":"It looks like you are seeing an error in the Airflow web UI.\nThat error is thrown by the airflow-scheduler when it scans your DAG files.\nYou need to make sure your custom module is installed inside the airflow-scheduler environment.","Q_Score":0,"Tags":"python,docker,airflow","A_Id":64354409,"CreationDate":"2020-09-29T16:16:00.000","Title":"Cannot import custom module to a dag script in Airflow Docker","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I want to make my python cmd output colorful!\nI have color-codes like this:\n\\033[91m\nNow the output in cmd isn't colorful. I get a \"\u2190\". How can I change this?\nDid anybody have the same problem? :D\nEdit\nIs there an alternative to cmd? Is it hard to program a cmd window in e.g. C#?","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":277,"Q_Id":64159164,"Users Score":0,"Answer":"You could use third-party \"cmds\" instead, because they often come with more options. Microsoft also publishes many; check out the MS Store :) I use Fluent Terminal, which is also graphically very appealing.","Q_Score":0,"Tags":"python","A_Id":66477356,"CreationDate":"2020-10-01T16:05:00.000","Title":"CMD color problems","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a consumer polling from a subscribed topic.
It consumes each message and does some processing (within seconds), pushes it to a different topic and commits the offset.\nThere are 5000 messages in total:\nbefore restart - consumed 2900 messages and committed the offset\nafter restart - started consuming from offset 0.\nEven though the consumer is created with the same consumer group, it started processing messages from offset 0.\nkafka version (strimzi) > 2.0.0\nkafka-python == 2.0.1","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":1313,"Q_Id":64165265,"Users Score":1,"Answer":"We don't know how many partitions you have in your topic, but when consumers are created within the same consumer group, they will consume records from different partitions (we can't have two consumers in a consumer group that consume from the same partition, and if you add a consumer the group coordinator will execute the process of re-balancing to reassign each consumer to a specific partition).\nI think the offset 0 comes from the property auto.offset.reset, which can be:\n\nlatest: Start at the latest offset in the log\nearliest: Start with the earliest record.\nnone: Throw an exception when there is no existing offset data.\n\nBut this property kicks in only if your consumer group doesn't have a valid offset committed.\nN.B.: Records in a topic have a retention period (the log.retention.ms property), so your latest messages could be deleted while you are processing the first records in the log.\nQuestion: since you want to consume messages from one topic, process the data and write it to another topic, why didn't you use Kafka Streams?","Q_Score":1,"Tags":"apache-kafka,kafka-consumer-api,kafka-python,strimzi,confluent-kafka-python","A_Id":64167542,"CreationDate":"2020-10-02T01:48:00.000","Title":"Kafka Consumer not consuming from last committed offset after restart","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and
Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a python server application that handles a git repository. It creates commits and switches branches to apply changes locally and then pushes them to the remote repo.\nFor some reason, users that run the server on Mac see their repository ending up in a detached HEAD state. This has never happened for users running the server on Windows machines.\nThe tool uses GitPython, and there is no service that does a checkout to a specific commit SHA; it only switches to branch names. It does perform git pull --rebase and git push.\nIs there a way to end up in the detached HEAD state by performing pulls with rebase, fetches or pushes, or any other way that is not a checkout to a commit SHA?","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":99,"Q_Id":64165595,"Users Score":1,"Answer":"It looks like the error was caused by git-lfs not being installed by default on the macOS work machines. The plug-in failed somehow and ended in a detached HEAD state.","Q_Score":0,"Tags":"windows,git,macos,python-2.7,gitpython","A_Id":65662633,"CreationDate":"2020-10-02T02:46:00.000","Title":"Is it possible to end in git detached head from pull, push, fetch or rebase?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am working with Carla 0.9.9 in Unreal 4.24 (Windows10\/RTX2080) right now in order to get some basic autonomous driving functions going. So far it seems to be working fine, but I have a weird problem and I'm quite confident that it's not caused by my code. I've googled around and this problem seems to be quite unique, but maybe one of you guys can point me in the right direction:\nI am displaying a few steps of my lane detection algorithm in different windows (e.g.
ROI, detected lines...), and every few seconds, depending on the current framerate, the image will randomly flip to being upside down in some of the windows (only one at a time and only for a single\/few frames), except for the main window where I am controlling the car (manually for now). I've tried setting the Unreal Editor to different framerates, and there is definitely a connection between the output framerate (server side) and the amount of these \"flips\" happening, to the point where it's almost never happening if I run it at 15-20fps. There is also some \"tearing\" going on (e.g. only roughly the upper half of the image is flipped, like Vsynch is switched off) sometimes, which leads me to believe that the root cause is somewhere in the rendering part and not the python scripts. The point is: when the image is upside down, my lane detection is seeing the \"lane\" in the wrong place, which could lead to all sorts of trouble down the line.\nTo be honest I'm not that familiar with the whole Unreal Engine\/DirectX(?)-rendering pipeline, so I am a little lost what might be causing this issue. I'm grateful for any ideas on how to fix this, thanks in advance!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":52,"Q_Id":64169887,"Users Score":0,"Answer":"Okay, in case anybody ever has the same problem, here's what I found out:\nthe client was running too fast in relation to the server. I limited the client side to 30fps now and that fixed it. 
This issue will only occur if the tick rate of the client is so high that it has trouble keeping up while running the calculations in the background.\nI still can't figure out why the image is upside down in that case, but hey...","Q_Score":0,"Tags":"python,image-processing,unreal-engine4,carla","A_Id":64222796,"CreationDate":"2020-10-02T10:14:00.000","Title":"Random image flipping in CARLA\/UE4","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"While working with the Enterprise Architect API I have noticed that when you export an EA project to XMI, several different kinds of elements get an attribute called ea_localid. That means in the XMI you'll find a tag that has the ea_localid as an attribute. This attribute seems to be used to reference source and target of connecting elements (at least this is valid for 'transitions', as we are working with State Machine diagrams).\nSo far, so good. Now the problem for my intended usage is that these values seem to be newly distributed every time you do an import and an export. EDIT: it is not quite clear to me when exactly during this process. EDIT #2 It seems to happen on import.\nThat means after having exported your project, reimporting it, changing nothing and then exporting it again gives the generated XMI document a set of different ea_localid values. Moreover, it seems that some values that used to belong to one element can now be used for an entirely different one.\nDoes anybody know anything about the distribution mechanism? Or, even better, a way of emulating it? Or a way to reset all counters?\nAs far as I've seen, generally there seem to be different classes of elements and within these classes a new ea_localid for the next element is generated by counting +1. 
So the first one has the value 1, then the next one 2 and so on.\nMy goal here is doing 'roundtrips' (XMI --> project --> XMI ...) and always getting the same ea_localid values, possibly by editing the XMI document after export. Any help or suggestions would be highly appreciated. Cheers","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":83,"Q_Id":64170768,"Users Score":0,"Answer":"So, after some testing I have found out that regarding the aforementioned goal of doing a roundtrip (xmi --> import to EA --> xmi) and always getting the exact same document, the easiest solution is ...\nrunning a filter over the xmi that just deletes all nodes containing ea_localid, ea_sourceID (sic!) and ea_targetID values.\nOn reimport EA will just assign them new values. The information regarding source and target of 'transitions' and other connecting elements is also stored with the GUID, so there is no loss of information.","Q_Score":1,"Tags":"python,xml,enterprise-architect","A_Id":64242391,"CreationDate":"2020-10-02T11:29:00.000","Title":"How are 'ea_localid' values distributed in Enterprise Architect API?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm looking for recommendations on a python logging framework python within a microservice. There's the built in logging system provided by python, there's structlogger. Currently I use structlogger with an ELK stack with filebeat instead of logstash. Please let me know what you would recommend and why? 
My usual criterion is popularity on Stack Overflow (I'm not kidding), as it makes it a lot easier to get past technical issues or bugs.","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":112,"Q_Id":64180362,"Users Score":2,"Answer":"Use the builtin logging module.\nIt does pretty much anything you need. structlogger isn't really a different framework; it's more of a default configuration for the builtin logging module. Also, if you need something other than just logging to files or stdout, the builtin module has a lot of handlers, and there exist a lot of third-party handlers that work with the builtin module (e.g. graylog).","Q_Score":0,"Tags":"python,logging,microservices,filebeat,elk","A_Id":64182761,"CreationDate":"2020-10-03T03:16:00.000","Title":"Python logging framework","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I can run pianobar from the command line by typing pianobar and it loads the config file just fine. I want to run it from a python script or shell script; it runs but does not load the config file. Pianobar is on the PATH so you can run it from any directory, and I don't know where the app is installed.
I'm working on a GUI for the Raspberry Pi.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":67,"Q_Id":64186813,"Users Score":0,"Answer":"I found the answer: to run it from a shell, do sudo -u pi pianobar","Q_Score":0,"Tags":"python,shell","A_Id":64189988,"CreationDate":"2020-10-03T17:02:00.000","Title":"How do I run pianobar (pandora) from a shell script with config access","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am attempting to perform a sam deployment and upon running the command:\nsam build --template template.yaml --build-dir .\/build --use-container\nI see that the image \"amazon\/aws-sam-cli-build-image-python3.6\" is successfully pulled, but then I obtain the following error:\nBuild Failed Error: PythonPipBuilder:ResolveDependencies - pip executable not found in your python environment at \/var\/lang\/bin\/python3.6\nI really have no clue why this is happening, since I would expect that once the python image was pulled, pip and its dependencies would be installed.\nAppreciate any help, thanks in advance!","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":2785,"Q_Id":64189929,"Users Score":0,"Answer":"I encountered this issue after installing the sam-cli with pipx (I also use pyenv). My global python version from pyenv was 3.6 and the sam-cli was somehow finding python at \/usr\/bin\/python3.8 instead of the pyenv shim.
After setting the local python version to 3.8.6 with pyenv's .python-version file in the root of the project, the error message went away.","Q_Score":1,"Tags":"python,amazon-web-services,docker,aws-lambda","A_Id":65267643,"CreationDate":"2020-10-03T23:37:00.000","Title":"aws sam deployment failed due to pip executable not found - python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"The python version is displayed as Python 2.7.14 when checked in cmd (strange because I have always installed python 3.6 onward). But the python shell in IDLE shows Python 3.7.6. I noticed this when using f'' strings caused errors.\nAlso, the python path in environment variables is set to python37. I was wondering why this was happening and how to change it.\nP.S: I have not tried reinstalling python yet.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":30,"Q_Id":64221098,"Users Score":0,"Answer":"Check if your environment variable is set to the correct Python executable path.\nYou can find your active executable with where python on Windows and with which python on Linux. I guess you have an old executable active on your machine.\nhappy coding,\nbreadberry","Q_Score":0,"Tags":"python,python-3.x,version","A_Id":64221280,"CreationDate":"2020-10-06T07:24:00.000","Title":"Python version displayed differently than installed version [WINDOWS]","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have just installed 3.9 to replace a 3.6 installation on my laptop.\nBecause Windows file association only works with .exe files, not .bat files, there is no way to click on a .py file and get it to open with IDLE.
But it did with 3.8 on my desktop.\nThe 3.8 programs (including idle.exe pip etc) were installed into the default folder C:\\Users{..}\\AppData\\Local\\Microsoft\\WindowsApps\nThe 3.9 (without idle.exe pip etc) was installed on C:\\Program Files\\Python39\nWhat has changed, and why?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":677,"Q_Id":64244612,"Users Score":0,"Answer":"Anything installed in C:\\Users\\Someuser\\AppData is installed for Someuser only. Anything installed in ...\\Local\\Microsoft\\WindowsApps comes from Microsoft. So I presume your 3.8 comes from the Microsoft store. I believe it will be upgraded to 3.8.6 soon if not yet. 3.8 installed for one user with the python.org installer will, I believe, go by default in the location indicated by superb rain.\nYour 3.9 is installed for all users with, I presume, the 64-bit installer from Python.org. The Windows installer predates the Microsoft store version by maybe 2 decades and does not do exactly the same thing.\nEDIT: If you can use arguments in a file association, and if 3.9 is indeed from python.org and if you checked the box to install py.exe, then py -3.9 -m idlelib should work if Windows simply appends the file path to the string given.","Q_Score":1,"Tags":"python,default,python-idle","A_Id":64250165,"CreationDate":"2020-10-07T13:04:00.000","Title":"Python IDLE.bat, not IDLE.exe, installed on Win10 with Python 3.9","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I bumped into the following bug. 
When not all files are utf-8 encodable, tests fail on poetry run pytest -v.\n\n============================================================== ERRORS ===============================================================\n____________________________________________ ERROR collecting tests\/anonymized\/test.txt\n_____________________________________________ .venv\/lib\/python3.8\/site-packages\/py\/_path\/common.py:171: in read_text\nreturn f.read() ..\/..\/..\/.pyenv\/versions\/3.8.2\/lib\/python3.8\/codecs.py:322: in decode\n(result, consumed) = self._buffer_decode(data, self.errors, final) E UnicodeDecodeError: 'utf-8' codec can't decode byte 0xf8 in\nposition 380: invalid start byte\n====================================================== short test summary info ======================================================\nERROR tests\/anonymized\/test.txt - UnicodeDecodeError: 'utf-8' codec\ncan't decode byte 0xf8 in position 380: invalid start byte\n!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! Interrupted: 1 error\nduring collection !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!\n\nIn the meantime, exactly the same setup worked with Python 3.7, and if I run tests with poetry run pytest tests\/my_tests.py.\nI am using: Python 3.8.2, poetry 1.0.5\nHow can I fix it? This bug is annoying as it fails in CI.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":300,"Q_Id":64246724,"Users Score":1,"Answer":"As it came out, the issue was not related to any version of poetry or python. I named the file test.txt and put it under the tests\/ folder. By default, pytest finds all files prefixed with test and verifies their encoding. As the test.txt was in iso-8859-1, I got a UTF-8 related issue.\nNOTE:\nMake sure that you don't name files with the test prefix when using pytest. 
Beware that if you decide to name them with the prefix, they should be UTF-8 encoded.","Q_Score":2,"Tags":"python,utf-8,python-3.8","A_Id":64371399,"CreationDate":"2020-10-07T14:55:00.000","Title":"How to skip Python 3.8.2 test of files encoding?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I want to execute FastAPI in a shell.\nFor example, we can do it with the code below in Django:\npython manage.py shell\nHow can I do this in FastAPI?","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":1545,"Q_Id":64247596,"Users Score":1,"Answer":"Simple answer: you cannot.\nmanage.py does the same thing as django-admin but also sets the DJANGO_SETTINGS_MODULE environment variable so that it points to your project\u2019s settings.py file. In FastAPI we don't have an admin utility, because there is no out-of-the-box config, environment management, etc. That's the main difference between a microframework and a high-level framework.\nFastAPI does not have any administration utilities out of the box.","Q_Score":3,"Tags":"python,shell,fastapi","A_Id":64251509,"CreationDate":"2020-10-07T15:42:00.000","Title":"How to run FastAPI from terminal?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"How do I get logs from different workers in a single file in Celery?\nI have 3 workers running python tasks and a master node which runs the broker. I want to consolidate logs from these worker machines and store them on the master machine.
How can I do that?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":189,"Q_Id":64263702,"Users Score":0,"Answer":"We have all our Celery workers on AWS EC2 instances configured to upload Celery logs to CloudWatch. This is one way to achieve what you want. It is not difficult to implement this kind of system even if you are not on AWS - all you need is an agent running on each worker machine that periodically uploads Celery logs to central place. It can even be a cron-job running a script that does the job for you.","Q_Score":1,"Tags":"python,flask,celery,python-logging,celery-log","A_Id":64264246,"CreationDate":"2020-10-08T13:35:00.000","Title":"How to consolidate celery logs from different computers. CELERY","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"It is .sh for mac os\nIt is .bat for windows\nBut what is it for linux\nThanks\n:)\nLinux","AnswerCount":4,"Available Count":2,"Score":0.049958375,"is_accepted":false,"ViewCount":77,"Q_Id":64295322,"Users Score":1,"Answer":"It is .sh just like in macOS. 
These operating systems are similar; if you are familiar with one, it will be easy to learn the other.","Q_Score":0,"Tags":"python,python-3.x,linux","A_Id":64295413,"CreationDate":"2020-10-10T15:50:00.000","Title":"How to execute linux commands using python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"It is .sh for mac os\nIt is .bat for windows\nBut what is it for linux\nThanks\n:)\nLinux","AnswerCount":4,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":77,"Q_Id":64295322,"Users Score":0,"Answer":"For bash it is .sh, like on macOS. These operating systems are similar; if you are familiar with one, it will be easy to learn the other.","Q_Score":0,"Tags":"python,python-3.x,linux","A_Id":64295659,"CreationDate":"2020-10-10T15:50:00.000","Title":"How to execute linux commands using python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I would need a single Ansible regex for the three outputs below:\ninput1: aa::bb::cc\noutput1: aa::cc\ninput2: bb::aa::cc\noutput2: aa::cc\ninput3: aa::cc::bb\noutput3: aa::cc\nI have written the regexp below, but the extra double colons are still there.\nexample:\n{{ aa::bb::cc | regex_replace('bb') }} --> gives output as: aa::::cc","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":685,"Q_Id":64315094,"Users Score":0,"Answer":"This is your regexp: (::b+|b+::)\nIt will match :: followed by one or more bs, or one or more bs followed by ::, so it will take the double colons with it, no matter whether they are in front of the bs or after.
When it sees ::bb:: as in aa::bb::cc only one of them will match and the result will be aa::cc (so the second pair of colons persists).","Q_Score":1,"Tags":"python,regex,ansible","A_Id":64317152,"CreationDate":"2020-10-12T09:25:00.000","Title":"Ansible regular expression to remove an item from string","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I created a .exe out of .py by using the PyInstaller on Windows. Can this .exe run on Mac, Linux, or other platforms?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":145,"Q_Id":64321073,"Users Score":2,"Answer":"Nope. Executable formats for Windows are completely different from those used on other OSes. You might be able to run them in Linux under WINE, but they're not natively compatible with any other OS.","Q_Score":0,"Tags":"python,windows","A_Id":64321112,"CreationDate":"2020-10-12T15:47:00.000","Title":"Can the .exe run on Mac, Linux, or other platforms?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I downloaded and installed Python 3.9 using the Original Installer from python.org, and also tried it with Homebrew after that, which also installed 3.9. However, python3 --version still tells me I have 3.5.1?\nMy work computer does not have this issue, so something seems to be pointing the wrong way on my personal machine. 3.5 has reached the end of its life, as Python keeps telling me, so any suggestions are appreciated.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1564,"Q_Id":64327599,"Users Score":0,"Answer":"I tried to just restart the terminal but which python3 still pointed to 3.5. 
After changing the PATH with echo 'export PATH=\"\/usr\/local\/opt\/python@3.9\/bin:$PATH\"' >> ~\/.zshrc and then also restarting the terminal, it worked.","Q_Score":0,"Tags":"python,macos,installation,python-3.5,python-3.9","A_Id":64337315,"CreationDate":"2020-10-13T02:25:00.000","Title":"terminal still showing python version 3.5 even though 3.9 was installed","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I would like to be able to, from the command line or in a Python script\/program, execute the action that you can do manually in the iTunes application: File->Library->Export Library, so that I can have a copy of an up-to-date XML file of my iTunes library. Is this possible?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":43,"Q_Id":64364784,"Users Score":0,"Answer":"I have searched for a solution to this problem for quite a long time.
I still have not found anything.\nHowever, as I needed the XML only for the \"played count\" and \"name\" variables of each track, I instead created an AppleScript which is then called directly in the Python script where I used to import the XML file.\nNote that the AppleScript asks the Music app directly for up-to-date data, but it does need the Music app to be running.\nMaybe this can be adapted to work for your problem, too.\nHope this helps, Alex.","Q_Score":1,"Tags":"python,xml,bash,command-line,ituneslibrary","A_Id":64487115,"CreationDate":"2020-10-15T04:15:00.000","Title":"Mac Terminal Command Line command to perform the iTunes File->Library->Export Library action?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"When running mkvirtualenv --python=python3.8 test on my Mac terminal I get this permissions error:\n\ncreated virtual environment CPython3.8.2.final.0-64 in 545ms creator\nCPython3Posix(dest=\/Users\/blake\/.virtualenvs\/test, clear=False,\nglobal=False) seeder FromAppData(download=False, pip=latest,\nsetuptools=latest, wheel=latest, via=copy,\napp_data_dir=\/Users\/blake\/Library\/Application\nSupport\/virtualenv\/seed-app-data\/v1.0.1) activators\nBashActivator,CShellActivator,FishActivator,PowerShellActivator,PythonActivator,XonshActivator\nvirtualenvwrapper_run_hook:12: permission denied:\n\/Users\/blake\/Library\/Python\/3.8\/\n\nI'm able to import the virtualenvwrapper module directly in Python 3.8, so I know I have it installed correctly; it just doesn't allow me to create the virtualenv.","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":378,"Q_Id":64364932,"Users Score":2,"Answer":"Seems like reinstalling virtualenvwrapper and putting export VIRTUALENVWRAPPER_PYTHON=\/usr\/local\/bin\/python3 in .bash_profile resolved
the issue.","Q_Score":0,"Tags":"python,python-3.x,virtualenvwrapper","A_Id":64365189,"CreationDate":"2020-10-15T04:34:00.000","Title":"mkvirtualenv giving permission error for virtualenvwrapper_run_hook:12","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I currently have python versions 2.7, 3.8, and 3.9 on my Mac and it just causes problems in package installing, etc. I do not know how to remove all of them and reinstall python from the beginning in a clean way this time. What should I delete?","AnswerCount":1,"Available Count":1,"Score":-0.1973753202,"is_accepted":false,"ViewCount":407,"Q_Id":64389830,"Users Score":-1,"Answer":"First, delete the folder \/Library\/Frameworks\/Python.framework\/ using this command:\nsudo rm -r \/Library\/Frameworks\/Python.framework\/\n\nSecond, remove everything related to python (python2.7, python3.8, etc.) in this dir: \/usr\/local\/bin\/\n\nNOTE: You need admin permissions to do this on your computer.","Q_Score":1,"Tags":"python,macos,duplicates","A_Id":64390338,"CreationDate":"2020-10-16T13:09:00.000","Title":"How to remove multiple versions of python from MacOS Catalina 10.15.4","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am running a python script on macOS 10.15.7 (Catalina). In my Bash Profile, I have the following line:\nexport PIP_REQUIRE_VIRTUALENV=false\nHowever, every time I try to run any python script it shows the following error:\nERROR: Could not find an activated virtualenv (required).\nI would appreciate it if anyone could help me figure this problem out.
I tried multiple solutions discussed on Stack Overflow but none of them worked for me.","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":120,"Q_Id":64394916,"Users Score":2,"Answer":"In my Bash Profile, I have the following line: export PIP_REQUIRE_VIRTUALENV=false\n\nOK, so that is a little strange. Because setting PIP_REQUIRE_VIRTUALENV to false or an empty string, or unsetting it should turn off that warning. (I just checked.)\nSo I think you must be running in a context where that environment variable setting hasn't taken effect.\n\nDid you \"source\" the profile after adding that to the profile? Or restart the shell? Type echo $PIP_REQUIRE_VIRTUALENV to see what the variable is set to in the shell.\n\nAre you (perchance) using sudo? By default, environment variable settings are NOT passed through to the environment in which sudo runs the command.\n\n\nI also think it can't be running \"any python script\". I think it must be some script that entails running pip.","Q_Score":0,"Tags":"python,bash,virtualenv","A_Id":64401581,"CreationDate":"2020-10-16T18:56:00.000","Title":"Virtual Environment Requirement for PIP on Mac","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm trying to utilize the command line in Windows for Python as much as possible, and obviously have a fundamental misunderstanding of how to enter commands.\n\nWindows Key + R\nType \"CMD\"\nType \"python\" at C:\\Users\\Owner prompt\nType \"import matplotlib.pyplot as plt\"\n\nERROR MESSAGE:\nTraceback (most recent call last):\nFile \"<stdin>\", line 1, in <module>\nModuleNotFoundError: No module named 'matplotlib'","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":35,"Q_Id":64412618,"Users Score":0,"Answer":"You should first activate the environment that has
matplotlib. For example, if you installed it in the default environment of Anaconda, which is base, then do the following:\n\ntype python\ntype import matplotlib.pyplot as plt","Q_Score":0,"Tags":"python,import,command-line","A_Id":64412656,"CreationDate":"2020-10-18T11:13:00.000","Title":"Command Line Python versus Anaconda","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"Is it correct to use client.loop_start(), do some things, then client.loop_stop(), and finally client.loop_forever()?\nContext: I have a bootloader.py on my Raspberry device. The bootloader is to be launched automatically when the device boots. The bootloader connects to the application server (via a dedicated IP and topic), checks the expected software installed at the device, downloads the newer software if it is not available locally, starts the wanted application, and then the bootloader should become a subscriber that listens to the \"service messages\" from outside. (For example: \"send me your status\", \"check for the new software, and if there is something new, download it and reboot the device\".)\nCurrent implementation: I have already implemented the loading process and launching the wanted application. The implementation uses client.loop_start() (which processes the communication using a separate thread). After downloading the wanted files, the bootloader calls subprocess.Popen(cmd), and becomes the process of the launched application.\nWhat I want: Now I want to separate the process of the launched application. The bootloader should become a client that listens forever. My idea was to call client.loop_stop() after the communication with the application server was finished, launch the application, and then call client.loop_forever() as the last action of the script to make it listen forever.
Is such an approach correct?\nIs the described situation usual? Is it a well-known pattern? If yes, could you point me to the related documentation? If no, can you see any flaw in the approach?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":229,"Q_Id":64415819,"Users Score":1,"Answer":"It's not really a pattern I've seen elsewhere, but if it meets your needs.\nThe only problem would be if the time between calling loop_stop() and loop_forever() is longer than the Keep Alive period, which will result in the broker disconnecting the client.\nYou may also get a burst of messages when you restart the event loop.","Q_Score":0,"Tags":"python,mqtt,paho","A_Id":64415959,"CreationDate":"2020-10-18T16:45:00.000","Title":"Combining loop_start(), loop_stop(), and loop_forever(); launching the separate application","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"Until yesterday we worked with python 2.7.5, and wheel packages were installed for python 2.7.5 by pip (the pip that is related to python 2.7.5).\nNow we have installed the latest python version from redhat \u2013 3.8\nWe also installed pip3; when we installed python 3.8, the additional rpm was also pip3.\nSo until now everything is ok.\nWhat we want to understand is about the current wheel packages that were installed with pip (pip2).\nSince now we have python 3, I guess we need to install with pip3 the new wheels for python 3; I assume python 3 can't use the wheels for python 2.\nPlease let me know if I am correct, and I will be happy to get corrections","AnswerCount":2,"Available Count":1,"Score":0.2913126125,"is_accepted":false,"ViewCount":147,"Q_Id":64416314,"Users Score":3,"Answer":"Yes, you will need to install new wheels and new packages as well.
(Most of the time, if you install the packages with pip3, it will install the wheels automatically.)","Q_Score":3,"Tags":"python-3.x,python-2.7,pip,rhel7,python-wheel","A_Id":64416624,"CreationDate":"2020-10-18T17:33:00.000","Title":"Python3 and pip3 - can the wheels installed for python2 also be used for python3","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm currently using renpy and I want to access some features of the native NSWindow API (and eventually the win32 equivalent, but I'm starting with the machine I'm using). However, PyObjC doesn't seem to be compatible, presumably because renpy's python implementation is not CPython, but I don't truly know. I asked in the discord and basically got a shrug, so as a hail mary I'm gonna throw this question out to the people of Stack Overflow:\nIs there a way to access native APIs, like cocoa and win32, from renpy?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":94,"Q_Id":64430926,"Users Score":1,"Answer":"Ren'Py is only compatible with libraries written in pure Python, so it's not possible at this point. PyObjC uses more than just Python.","Q_Score":0,"Tags":"python,pyobjc,renpy","A_Id":64797229,"CreationDate":"2020-10-19T15:54:00.000","Title":"Is there a way to access native APIs from renpy?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am using macOS.\nWhen I open a new terminal, I am able to install python packages. However, as soon as I open a Jupyter notebook from the terminal, when I run pip install, brew install or other installation methods, nothing happens.
No error message; it just no longer runs.\nI don't know what other information would be helpful, but I would appreciate any suggestions!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":23,"Q_Id":64451676,"Users Score":0,"Answer":"The notebook server runs in the foreground (as in a persistent process), so you're probably typing commands to the server application (which doesn't take any command line input) instead of to the shell. You should try entering your commands into a new shell.","Q_Score":0,"Tags":"python,terminal,jupyter","A_Id":64451749,"CreationDate":"2020-10-20T18:57:00.000","Title":"Can't install python packages in terminal after opening Jupyter notebook","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am working through a Udemy course. I am a Python beginner.\nI am trying to change the directory in the Python terminal (in both Python 3.8 and PyCharm, same result). My current directory is C:\\Users\\*username*\\PycharmProjects\\Woops\\venv\\Python For Beginners and when I try to change directory using the following command:\ncd C:\\Users\\*username*\\PycharmProjects\\Woops\\venv\\Python For Beginners>python myparser.py\nI receive the message:\n\n\"Access is denied.\"\n\nI've gone into every folder and sub folder containing this item, and clicked on properties, general and security and ensured that I have full permission. I've done the same for Python and PyCharm.
I've gone into advanced security settings and the auditing and effective access headers and made sure my username has full access.\nFinally, I've even opened PyCharm as an Administrator, and no matter what, when I enter the command:\ncd C:\\Users\\*username*\\PycharmProjects\\Woops\\venv\\Python For Beginners>python myparser.py\nit returns \"Access is denied.\"\nAnyone have an idea what is happening here, and how to execute the cd command?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":360,"Q_Id":64452909,"Users Score":0,"Answer":"The problem is that you can't change directory into a python file; you can only change directories to a folder. For example:\ncd C:\\Users\\username\\PycharmProjects\\Woops\\venv\\Python For Beginners\nThen, to run the code, write python3 myparser.py or python myparser.py","Q_Score":0,"Tags":"python,directory,pycharm,permission-denied,access-denied","A_Id":64487256,"CreationDate":"2020-10-20T20:31:00.000","Title":"Change directory in terminal - Access Denied","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"How do I add a virtual environment to the python shell node in Node-RED on Windows? I already specified my virtual environment path in the node, but when I tried to run the python script it returned an exit code: -4058 error; when I remove the virtual environment path it runs perfectly.
I need help because I really need to run the python script within my virtual environment.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":383,"Q_Id":64455333,"Users Score":0,"Answer":"You will need to enable the virtual environment in the shell before starting node-red, so all the required environment variables are updated.","Q_Score":0,"Tags":"python,node.js,node-red","A_Id":64469229,"CreationDate":"2020-10-21T01:18:00.000","Title":"Add virtual environment path in node-red python shell node windows","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I tried to build a docker container with python and the tensorflow-gpu package for a ppc64le machine. I installed miniconda3 in the docker container and used the IBM repository to install all the necessary packages. To my surprise, the resulting docker container was twice as big (7GB) as its amd64 counterpart (3.8GB).\nI think the reason is that the packages from the IBM repository are bloating the installation. I did some research and found two files, libtensorflow.so and libtensorflow_cc.so, in the tensorflow_core directory. Both of these files are about 900MB in size and they are not installed in the amd64 container.\nIt seems these two files are the API files for programming with C and C++. So my question is: If I am planning on only using python in this container, can I just delete these two files or do they serve another purpose in the ppc64le installation of tensorflow?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":256,"Q_Id":64465852,"Users Score":2,"Answer":"Yes.
Those are added as there were many requests for it, and it's a pain to cobble together the libraries and headers yourself for an already built TF .whl.\nThey can be removed if you'd rather have the disk space.\nWhat is the content of your \"amd64 container\"? Just a pip install tensorflow?","Q_Score":1,"Tags":"python,docker,tensorflow,conda,powerpc","A_Id":64482752,"CreationDate":"2020-10-21T14:31:00.000","Title":"Tensorflow on IBM Power9 ppc64le - Can libtensorflow.so be deleted?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"My Anaconda Navigator (v1.9.12) has been prompting me to upgrade to 1.10.0. Only problem is, when I click \"yes\" on the update prompt (which should close the navigator and update it), nothing happens.\nNo problem, I thought. I ran\n\nconda update anaconda-navigator\n\nin the terminal. To no avail (and yes, I read the doc online and ran \"conda deactivate\" beforehand), same with\n\nconda install anaconda-navigator=1.10\n\nBoth ran for a while, but the desktop navigator is still on the old version. One thing to note: the Looking for incompatible packages line was taking way too long (hours with no notable progress), so I ctrl-c'ed out. But when I ran these commands again, they managed to finish running.\nNow I'm out of ideas; would anyone know what I can do to go through with the update?
Thanks a lot!","AnswerCount":4,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":14031,"Q_Id":64468858,"Users Score":2,"Answer":"I fixed this problem with the following steps:\n\nOpen the Anaconda Navigator in admin mode.\n\nTry to click the update notice again.\n\n\nThen I updated my Anaconda Navigator successfully.","Q_Score":23,"Tags":"python,anaconda","A_Id":67397539,"CreationDate":"2020-10-21T17:37:00.000","Title":"Trouble updating to Anaconda Navigator 1.10.0 (MacOS)","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"My Anaconda Navigator (v1.9.12) has been prompting me to upgrade to 1.10.0. Only problem is, when I click \"yes\" on the update prompt (which should close the navigator and update it), nothing happens.\nNo problem, I thought. I ran\n\nconda update anaconda-navigator\n\nin the terminal. To no avail (and yes, I read the doc online and ran \"conda deactivate\" beforehand), same with\n\nconda install anaconda-navigator=1.10\n\nBoth ran for a while, but the desktop navigator is still on the old version. One thing to note: the Looking for incompatible packages line was taking way too long (hours with no notable progress), so I ctrl-c'ed out. But when I ran these commands again, they managed to finish running.\nNow I'm out of ideas; would anyone know what I can do to go through with the update?
Thanks a lot!","AnswerCount":4,"Available Count":2,"Score":1.0,"is_accepted":false,"ViewCount":14031,"Q_Id":64468858,"Users Score":12,"Answer":"If you prefer, you may update Navigator manually.\nOpen the Anaconda prompt (terminal on Linux or macOS):\nRun this command to deactivate conda:\n\nconda deactivate\n\nThen run this command to update Navigator:\n\nconda update anaconda-navigator\n\nI had the same problem; this worked for me.","Q_Score":23,"Tags":"python,anaconda","A_Id":64469274,"CreationDate":"2020-10-21T17:37:00.000","Title":"Trouble updating to Anaconda Navigator 1.10.0 (MacOS)","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"Does anyone know how I can use tesseract on Windows without using the .exe?\nI want to use pytesseract for a proof of concept on my company's system where I don't have access to install the executable.\nPlease suggest any alternatives here.\nThanks.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":653,"Q_Id":64474678,"Users Score":0,"Answer":"You can use the tesseract-ocr-data python package,\nthough it is quite big.","Q_Score":0,"Tags":"ocr,tesseract,python-tesseract","A_Id":68657801,"CreationDate":"2020-10-22T03:08:00.000","Title":"How to use tesseract on Windows without using the executable?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I wish to write a producer that can retain messages in Kafka for 16 seconds and then flush all messages to the topic. Is it possible to do this?
I am using kafka-python.\nThe idea is:\nI have 100 messages; in normal behavior the producer will do:\nProducer:\nmessage 1\nmessage 2\nmessage 3\n...\nmessage 100\nConsumer:\nmessage 1\nmessage 2\nmessage 3\n...\nmessage 100\nI want to do this:\nProducer:\nmessage 1\nmessage 2\nmessage 3\n...\nmessage 100\nConsumer:\nmessage 1 message 2 message 3...\n[after 16s]\n...message N-1 message N\n[after 16s]\nmessage 100\nIs it possible to do this using only Kafka?\nThanks","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":297,"Q_Id":64484041,"Users Score":0,"Answer":"You can pause your consumer (partitions) at the required time and continue to keep polling (or the broker will kick your consumer out, as @OneCricketeer suggested), during which it will return no records on the paused partitions, and then resume the consumer (partitions) after the time interval.","Q_Score":2,"Tags":"python,apache-kafka","A_Id":64951987,"CreationDate":"2020-10-22T14:07:00.000","Title":"Retain kafka message in buffer","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I wish to write a producer that can retain messages in Kafka for 16 seconds and then flush all messages to the topic. Is it possible to do this?
I am using kafka-python.\nThe idea is:\nI have 100 messages; in normal behavior the producer will do:\nProducer:\nmessage 1\nmessage 2\nmessage 3\n...\nmessage 100\nConsumer:\nmessage 1\nmessage 2\nmessage 3\n...\nmessage 100\nI want to do this:\nProducer:\nmessage 1\nmessage 2\nmessage 3\n...\nmessage 100\nConsumer:\nmessage 1 message 2 message 3...\n[after 16s]\n...message N-1 message N\n[after 16s]\nmessage 100\nIs it possible to do this using only Kafka?\nThanks","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":297,"Q_Id":64484041,"Users Score":1,"Answer":"A Kafka topic already is somewhat of a buffer, and producers send messages in batches as well, not one by one.\nYou can increase the size of that batch, depending on the kafka library you're using, and manually flush it, but it appears you want the consumer to be halted for a period of time, which can be done, but it'll also cause the consumer group to rebalance as it expects a continuous heartbeat from active consumers (another timeout setting that can be modified).\nAnother solution would be to use a deque in your consumer code with a timer that is continuously added to, but only emptied on a schedule.\nAnd if you want messages N-1 and N, then to resume back to some earlier event, you need to seek the consumer group to (near) the end of the topic","Q_Score":2,"Tags":"python,apache-kafka","A_Id":64484277,"CreationDate":"2020-10-22T14:07:00.000","Title":"Retain kafka message in buffer","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a question about manifest.yml files, and the command argument.
I am trying to run multiple python scripts, and I was wondering if there was a better way that I can accomplish this?\ncommand: python3 CD_Subject_Area.py python3 CD_SA_URLS.py\nPlease let me know how I could call more than one script at a time. Thanks!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":830,"Q_Id":64488560,"Users Score":0,"Answer":"To run a couple of short-term (i.e. run and eventually exit) commands you would want to use Cloud Foundry tasks. The reason to use tasks over adding a custom command into manifest.yml or a Procfile is because the tasks will only run once.\nIf you add the commands above, as you have them, they may run many times. This is because an application on Cloud Foundry will run and should execute forever. If it exits, the platform considers it to have crashed and will restart it. Thus when your task ends, even if it is successful (i.e. exit 0), the platform still thinks it's a crash and will run it again, and again, and again. Until you stop your app.\nWith a task, you'd do the following instead:\n\ncf push your application. This will start and stage the application. You can simply leave the command\/-c argument as empty and do not include a Procfile[1][2]. The push will execute, the buildpack will run and stage your app, and then it will fail to start because there is no command. That is OK.\n\nRun cf stop to put your app into the stopped state. This will tell the platform to stop trying to restart it.\n\nRun cf run-task . For example, cf run-task my-cool-app \"python3 CD_Subject_Area.py\". This will execute the task in its own container. The task will run to completion. Looking at cf tasks will show you the result. Using cf logs --recent will show you the output.\n\n\nYou can then repeat this to run any number of other task commands. You don't need to wait for the original one to run.
They will all execute in separate containers so one task is totally separated from another task.\n\n[1] - An alternative is to set the command to sleep 99999 or something like that, don't map a route, and set the health check type to process. The app should then start successfully. You'll still stop it, but this just avoids an unseemly error.\n[2] - If you've already set a command and want to back that out, remove it from your manifest.yml and run cf push -c null, or simply cf delete your app and cf push it again. Otherwise, the command will be retained in Cloud Controller, which isn't what you'd want.","Q_Score":1,"Tags":"python,cloud-foundry","A_Id":64489284,"CreationDate":"2020-10-22T18:39:00.000","Title":"Can I have multiple commands run in a manifest.yml file?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"Bottom-line up front: Is there a way to support non-Java discovery client made instances in Zookeeper, while providing custom metadata.\nI'm building a Spring Cloud API Gateway. We're using Zookeeper discovery for most of our routes and Spring Cloud Load Balancer. It works well for Java micro-services. In addition to normal discovery and load balancing, I want to use metadata to specify customization attributes; like security and rate limiting parameters.\nI'm needing to introduce some non-Java services written in Python, so I'm wanting to use discovery for those since they're hosted in a dynamic cluster. 
In this case, continuing to use Zookeeper makes a lot of sense for us if I can get it to do what I need.\nI've inspected the contents of the Zookeeper \/services node to see what's going on, and I'm able to replicate most of it and actually get discovery and load balancing to work, but the metadata is a level deeper and I can't get that to work, as it's embedded in the Curator\/Zookeeper-specific object.\nI think I know enough to write my own implementation of DiscoveryClient and return my own ServiceInstances, but that seems like a lot of work if I can almost use what I already have. I'll also have to write the Python client for this as well.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":211,"Q_Id":64489391,"Users Score":0,"Answer":"I got around this by re-implementing the InstanceSerializer to manually build the ZookeeperInstance if the ObjectMapper doesn't recognize the ZookeeperInstance automatically.","Q_Score":0,"Tags":"python,service-discovery,spring-cloud-gateway,spring-cloud-zookeeper","A_Id":64611829,"CreationDate":"2020-10-22T19:39:00.000","Title":"Non-Java discovery registration for Spring Cloud Gateway\/Zookeeper","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I have a question concerning access to the shared memory within the Ray framework.\nImagine the following setup on 1 machine:\n\nStart Ray cluster\nStart a process\/worker python script w1.py, which puts an object O1 into the shared memory via ray.put(O1)\nStart a process\/worker python script w2.py, which tries to get O1 from the shared memory via ray.get(...)\n\nIs there a way to access the object O1 (put into shared memory from the w1.py process) from another worker process w2.py?\nWhen I execute ray.objects() from w2.py, I get the object reference string, but how could I retrieve
the object from the shared memory then? I cannot init an ObjectRef object in w2.py","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":577,"Q_Id":64499456,"Users Score":1,"Answer":"This is not natively supported. The reason is ray\u2019s objects have various metadata that is for various features (e.g., perf optimization or automatic memory management using reference counting).\nIf you\u2019d like to achieve this, I think there are 2 solutions.\n\nUse a detached actor api. Detached actors are actors whose lifetime is not fate-sharing with drivers. Once you create a detached actor, you can obtain the actor handle using the ray.get_actor API. This way, you can put an object inside a detached actor and access it from multiple drivers.\n\nThere\u2019s another way which uses cloudpickle, but I am not so familiar with this solution, so I won\u2019t write about it. Please go to Ray\u2019s discussion page in its Github repo to ask for more details about it.","Q_Score":1,"Tags":"python,redis,multiprocessing,shared-memory,ray","A_Id":64502561,"CreationDate":"2020-10-23T11:50:00.000","Title":"Python Ray shared memory access","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I've just installed Anaconda by using Package Control; and, after installing it, the terminal shows me this message:\n\nanacondaST3: ERROR - Anaconda worker could not start because:\nconnection to localhost:50462 timed out after 0.2s. tried to connect 7\ntimes during 2.0 seconds\ncheck that there is Python process executing the anaconda\njsonserver.py script running in your system.
If there is, check that\nyou can connect to your localhost writing the following script in your\nSublime Text 3 console:\nimport socket; socket.socket(socket.AF_INET,\nsocket.SOCK_STREAM).connect((\"localhost\", 50462))\nIf anaconda works just fine after you received this error and the\ncommand above worked you can make anaconda to do not show you this\nerror anymore setting the 'swallow_startup_errors' to 'true' in your\nconfiguration file.\n\nAnd, the Anaconda autocompletion just doesn't work, the only reason I installed Anaconda for.\nWhat should I do?\nI'm practically a beginner in all of this programming stuff, so, be nice:(","AnswerCount":1,"Available Count":1,"Score":0.537049567,"is_accepted":false,"ViewCount":3876,"Q_Id":64505028,"Users Score":3,"Answer":"I came across this problem when I was using Sublime Text 3, and I fixed it by changing the python_interpreter setting.\nI think you could change the default Anaconda Python interpreter from python to python3.\nIn Preferences > Package Settings > Anaconda > \"Settings - Default\"\nyou can see and change this, from python to python3:\n\"python_interpreter\": \"python3\", which is on line 100.","Q_Score":3,"Tags":"python,anaconda,sublimetext3","A_Id":71389887,"CreationDate":"2020-10-23T17:52:00.000","Title":"Anaconda worker could not start because: connection to localhost timed out after 0.2s. tried to connect 7 > times during 2.0 seconds","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I installed python3 and the required module with root access. But when I try to run the script as a non-root user, I get the following error:\n\nNo module found Error.\n\nWhat is the right way to run the python3 script as a non-root user? virtualenv works fine if I run it interactively. But I need to run it from NiFi.
So, I should be able to execute it without virtualenv.","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":474,"Q_Id":64509638,"Users Score":1,"Answer":"You would need to install the module as non-root, or more specifically, as the user account that runs NiFi.\nYou shouldn't be using sudo with pip anyway.","Q_Score":1,"Tags":"python,python-3.x,apache-nifi","A_Id":64509697,"CreationDate":"2020-10-24T03:19:00.000","Title":"Run python script as non-root user","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I installed python3 and the required module with root access. But when I try to run the script as a non-root user, I get the following error:\n\nNo module found Error.\n\nWhat is the right way to run the python3 script as a non-root user? virtualenv works fine if I run it interactively. But I need to run it from NiFi.
So, I should be able to execute it without virtualenv.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":474,"Q_Id":64509638,"Users Score":0,"Answer":"The easiest way to do this would be to install Anaconda (a big Python distribution with a nice installer) in a location accessible to NiFi and chown the Anaconda folder to the NiFi service account user.","Q_Score":1,"Tags":"python,python-3.x,apache-nifi","A_Id":64536932,"CreationDate":"2020-10-24T03:19:00.000","Title":"Run python script as non-root user","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have the wheel file called ssh2_python-0.23.0-cp38-cp38-win_amd64.whl downloaded, and from my understanding the cp38 part means that it has to be installed with Python 3.8 and the win_amd64 part means that it's for Windows with a 64-bit architecture (which I have).\nBut if I try to install it with python -m pip install ssh2_python-0.23.0-cp38-cp38-win_amd64.whl I get the following error message:\nERROR: ssh2_python-0.23.0-cp38-cp38-win_amd64.whl is not a supported wheel on this platform.\nOutput of python -V: Python 3.8.5\nI'm running the commands in a conda environment; does this make a difference?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":97,"Q_Id":64516333,"Users Score":2,"Answer":"ssh2_python-0.23.0-cp38-cp38-win_amd64.whl is a wheel for 64-bit Python. It seems you have 32-bit Python.
Either install 64-bit Python or use a 32-bit wheel.","Q_Score":0,"Tags":"python,python-wheel","A_Id":64516549,"CreationDate":"2020-10-24T17:45:00.000","Title":"Can't install ssh2-python wheel file with python 3.8 although it's for python3.8","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am new to working with APIs, and I recently started a Python project making use of the Google Calendar API that has been put on github. As such, to protect the API keys I created a .env file and stored the keys as environment variables.\nI followed guides that told me to make sure to .gitignore the .env file. However, I don't understand how a user who downloads my app and uses it would be able to access the API key values in the .env file, if the .env file is not in the git repo to begin with.\nThe values in my .env file are essential to authorizing the user's Google account (via OAuth) for use with the app.\nWhat steps would I take to make sure a user of the app is able to retrieve the variables in the ignored .env file?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":39,"Q_Id":64521833,"Users Score":1,"Answer":"I see, so I would need to provide the values to them somehow, and have them configure it manually?\n\nYes, but if those values are sensitive, there should not be stored in the Git repository in the first place.\nWhich means your README (in that git repository) should include instructions in order for a user to:\n\nfetch those values\nbuild the env file","Q_Score":1,"Tags":"python-3.x,git,google-api,environment-variables,google-oauth","A_Id":64533059,"CreationDate":"2020-10-25T08:37:00.000","Title":"How would a user who installs my app access variables from the .env file, if it is ignored by git?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI 
and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I refer to frameworks such as aiohttp, tornado, gevent, quart, fastapi, among others.\nIf you look for tutorials on how to use celery with django and flask to do things like background and periodic tasks, for example, to send an email when a user registers to confirm their account, you'll find a lot of content about it. But not with the ones above, or they are very few and talk about other topics than perform background or periodic tasks. Does this mean that with these frameworks I don't need celery because since they are asynchronous I can do the same?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":252,"Q_Id":64527328,"Users Score":2,"Answer":"Celery (and similar like Huey or RQ) has different purpose than the frameworks you have listed. No matter which framework you pick, in order to distribute execution of tasks among (potentially hundreds) nodes you would need to implement the whole system yourself. 
Things would get even more complicated if you want to implement something like Celery workflows...\nSo the answer can be both YES and NO.\nNO: you do not need Celery if you are willing to implement by yourself all the functionality that Celery provides out of the box.\nYES: more pragmatically - you use your async framework to implement your (web) service(s), but whenever you need to distribute execution of CPU-intensive or long-running tasks, you use Celery, as that is what it is made for.","Q_Score":2,"Tags":"python,asynchronous,celery,web-frameworks","A_Id":64539115,"CreationDate":"2020-10-25T18:30:00.000","Title":"Do I need celery with asynchronous web frameworks?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I created an exe file using PyInstaller and it works on my PC with Windows 8.1 and laptop with Windows 10, but on computers with Windows 7 it gives the error\n\"error loading python37 dll\"\nand something about dynamic linked libraries.\nEDIT:\nError loading Python DLL 'C:\\Users\\Dell\\Appdata\\Local|Temp|_MEI16442\\python37.dll'. LoadLibrary: Procedure of initialize dynamic linked library (DLL) failed.\nIt is translated from Polish.\nDo you know how I can fix it?\nI was reading about statically linked DLLs but I don't know how to do it. I am working on Windows only; I don't know Linux\/Mac.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":2389,"Q_Id":64580820,"Users Score":0,"Answer":"I had this same issue while compiling the executable with a PyInstaller command.
To fix it, I added the --noupx option and everything worked fine.","Q_Score":2,"Tags":"python,dll,static,pyinstaller","A_Id":67610907,"CreationDate":"2020-10-28T20:28:00.000","Title":"Error loading python37 dll on Windows 7 after Pyinstaller created exe","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I created an exe file using PyInstaller and it works on my PC with Windows 8.1 and laptop with Windows 10, but on computers with Windows 7 it gives the error\n\"error loading python37 dll\"\nand something about dynamic linked libraries.\nEDIT:\nError loading Python DLL 'C:\\Users\\Dell\\Appdata\\Local|Temp|_MEI16442\\python37.dll'. LoadLibrary: Procedure of initialize dynamic linked library (DLL) failed.\nIt is translated from Polish.\nDo you know how I can fix it?\nI was reading about statically linked DLLs but I don't know how to do it. I am working on Windows only; I don't know Linux\/Mac.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":2389,"Q_Id":64580820,"Users Score":0,"Answer":"This used to happen to me all the time, and it was always because I tried to run the executable file from the build folder, while the one that works is in the dist folder.","Q_Score":2,"Tags":"python,dll,static,pyinstaller","A_Id":64581007,"CreationDate":"2020-10-28T20:28:00.000","Title":"Error loading python37 dll on Windows 7 after Pyinstaller created exe","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am trying to figure out the best way to do logging in a multi-service environment in python.\nI am using the python logging package with the FileRotateHandler.
I am holding a folder that rotated every day at midnight (log -> log_date). The problem here is race conditions, many processes are trying to rotate the folder at the same time so I must use a lock that hurts the performance.\nI was thinking about using MongoDB for logging.\ncan you suggest a better way for the logging?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":31,"Q_Id":64594003,"Users Score":0,"Answer":"Rotate files, not directories. In Unix-derived operating systems log rotation using files is guaranteed to not lose data and not block.\nOnce the files are rotated, the old file can be moved to a different place including a directory structure you like.","Q_Score":0,"Tags":"python,mongodb,logging,multiprocessing","A_Id":64601068,"CreationDate":"2020-10-29T15:14:00.000","Title":"What is the best practice for using logs in a multi-services system in python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"So my issue is that I build ETL pipelines in Airflow, but really develop and test the Extract, Transform and Load functions in Jupyter notebooks first. So I end up copy-pasting back and forth all the time, between my Airflow Python operator code and Jupyter notebooks, pretty inefficient! My gut tells me that all of this can be automated.\nBasically, I would like to write my Extract, Transform and Load functions in Jupyter and have them stay there, while still running the pipeline in Airflow and having the extract, transform and load tasks show up, with retries and all the good stuff that Airflow provides out of the box.\nPapermill is able to parameterize notebooks, but I really can't think of how that would help in my case. 
Can someone please help me connect the dots?","AnswerCount":5,"Available Count":1,"Score":0.0798297691,"is_accepted":false,"ViewCount":2804,"Q_Id":64595804,"Users Score":2,"Answer":"Why do you want the ETL jobs as Jupyter notebooks? What advantage do you see? Notebooks are generally meant for building a nice document with live data. ETL jobs are supposed to be scripts running in the background, automated.\nWhy can't these jobs be plain python code instead of notebooks?\nAlso, when you run the notebook using the PapermillOperator, the output of the run will be another notebook saved somewhere. It is not that friendly to keep checking these output files.\nI would recommend writing the ETL job in plain python and running it with the PythonOperator. This is much simpler and easier to maintain.\nIf you want to use the notebook for its fancy features, that is a different thing.","Q_Score":3,"Tags":"python,jupyter-notebook,airflow,papermill","A_Id":67671965,"CreationDate":"2020-10-29T16:58:00.000","Title":"ETL in Airflow aided by Jupyter Notebooks and Papermill","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I know that I can run a background process in python using subprocess.
But the problem is that when I make a gui and then use subprocess with close_fds=True parameter, the window changes to not responding.\nSo, what I want is that I need to create a background process but it should run separately along with the main process and when that process is done, it should again combine with the main process.\nBTW, I am using PySide2 as the gui framework\nAny help would be appreciated","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":60,"Q_Id":64603990,"Users Score":2,"Answer":"I think what would be more beneficial to you would be threading, you are able to start a process in another thread without blocking the main thread which runs your gui. Once the other thread has completed its task it will join the main thread","Q_Score":1,"Tags":"python,python-3.6,pyside2","A_Id":64604025,"CreationDate":"2020-10-30T07:07:00.000","Title":"Run Background Process in Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a app written in python\/kivy that i have signed and installed on my android phone. The filemanager , an instance of the MDFileManager opens the root dir (\/) and displays the file system. But i am not able to open any directory in internal or external storage, so that i can select a file. Most of the directories have a small lock icon on them and those dirs wont open. The directories without lock icon do not have any files or the type of files i want. I browsed the directory using a terminal app and found that the ls command on a locked directory returns a permission denied error. This happens only with this app. I am able to open directories and chose files with other 3rd party apps as well as my own app, written in react native. So not sure if this is an issue with MDFIleManager or something else. 
Any advice\/workaround is highly appreciated","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":90,"Q_Id":64618642,"Users Score":1,"Answer":"Viewing folders without being able to access them is most likely due to not setting permissions. Add the following permissions in your manifest according to the level of access you need:\n\nREAD_EXTERNAL_STORAGE\nWRITE_EXTERNAL_STORAGE\nMANAGE_EXTERNAL_STORAGE.","Q_Score":0,"Tags":"python,android,kivy,kivymd","A_Id":64625019,"CreationDate":"2020-10-31T05:12:00.000","Title":"My android app not able to open directories to read files","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"Unfortunately Anaconda has been corrupted and I need to uninstall and reinstall it to fix the issue (the Anaconda Navigator application was not opening, so I had to uninstall it). I uninstalled Anaconda, but when I reinstalled it, it didn't install properly. When I try to run conda, the command prompt shows the error message 'conda is not recognized as internal or external command'.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":697,"Q_Id":64633440,"Users Score":3,"Answer":"Short answer: Anaconda did not install properly because files from the previous installation were still present.\nDetailed answer: I have found the solution to this. If your Anaconda Navigator application or other Anaconda features have stopped working and you want to fix it by uninstalling and reinstalling, please don't do that; you are going to have a lot of difficulty that way. Try to fix the existing installation instead. But if you have already uninstalled it and want to install it again, make sure that you manually delete the leftover conda files, which you will find in C:\\Users\\admin\\AppData.
Try to delete all the Anaconda files by searching for them, and then install it again. If you still get errors after installing, you haven't deleted everything properly. (There are pip files (approx. 700 MB) and conda files which you have to delete manually.)","Q_Score":2,"Tags":"python,anaconda,data-science,anaconda3","A_Id":64640533,"CreationDate":"2020-11-01T15:13:00.000","Title":"How to uninstall and reinstall anaconda naviator","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"For example, if I was to have it search and find a file named ex.exe, it should then prompt whether you'd like to open the file. Thank you.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":51,"Q_Id":64638512,"Users Score":0,"Answer":"You can import os,\nthen with the help of\nos.system(\"pass any windows cmd here\")\nmake it happen like a regular file search.","Q_Score":0,"Tags":"python,search","A_Id":64638559,"CreationDate":"2020-11-02T00:53:00.000","Title":"Locate & Open File - Using Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a python script running from an SSIS package.\nI am running the python script in a \".bat\" file.\nIf I execute the SSIS package, it runs fine, but if I deploy the same package and run it (or run it on a schedule), it fails with the error below:\nerror: In Executing \"D:\/SSIS\/PYTHON_SCRIPT\/task.bat \"D:\/SSIS\/PYTHON_SCRIPT\/\". The process exit code was \"1\" while the expected was \"0\"\nHas anyone had a similar issue?
Help me to solve this.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":193,"Q_Id":64642203,"Users Score":0,"Answer":"Make sure your Python Libraries are correctly configured and check Environment Variables as well","Q_Score":0,"Tags":"python,ssis","A_Id":67786301,"CreationDate":"2020-11-02T08:37:00.000","Title":"Python script not running in SSIS package","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"In python 3 I am getting errors when I use input() and I want to take user input and do something with it.\nI am executing the python script in BBEdit, Sublime or IDLE.\nCode results in errors unless I remove the Input() Syntax:\n\n\ninput(\"Hi I'm new to python\")\n\n\nuserInput = input(\"Enter a string\")\nprint(f\"You entered {userInput}\")\n\nIn the console it displays the string from input(\"String\") but any user keystrokes are instead typed in the code editor, not interactively.\nI read that Python could not be interactive via those apps but I don't understand how to execute a syntactically correct \"input(\"enter your favorite sushi roll\") and interact with it (on pc and Mac)\nFrom Console:\n\nEnter a string \nTraceback (most recent call last):\nFile \"\/Users\/michaelking\/Desktop\/BBEditRunTemp-hellowWorld.py\", line 1, in \nuserInput = input(\"Enter a string\") \nEOFError: EOF when reading a line \n================================================================================\nNov 2, 2020 at 9:53:07 PM\n~\/Desktop\/hellowWorld.py\nTraceback (most recent call last):\nFile \"\/Users\/michaelking\/Desktop\/BBEditRunTemp-hellowWorld.py\", line 1, in \nuserInput = input()\nEOFError: EOF when reading a line","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":173,"Q_Id":64657598,"Users Score":0,"Answer":"When you're 
running a script that requires interactive input, you'll have to run it in Terminal (or an equivalent such as iTerm).\nI can't speak to other products :-) but when using BBEdit, the \"Run in Terminal\" command on the #! menu will do this for you.","Q_Score":0,"Tags":"python,input,printing,traceback","A_Id":64810517,"CreationDate":"2020-11-03T06:10:00.000","Title":"How to get User Input in Python 3 when input() returns errors","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"Since many years I have been using a Python script, to add a \"virtual printer\" to my macOS printing dialogue and print on a PDF stationery from any application.\nThe script is placed within ~\/Library\/PDF Services, took sys.argv[3] as an input file, merged the input file with a given PDF stationery and saved it in ~\/Downloads.\nI was happy until I updated to macOS Catalina and always got this message from Console:\nSandbox: Python(30225) System Policy: deny(1) file-read-data \/Users\/me\/Documents\/stationery.pdf\nIt seems that, due to the new permissions in macOS Catalina, the script cannot access the stationery file anymore. Python however has full hard disk access.\nIf I run the same script from the Terminal, everything works fine.\nHow can I grant the script access to the required document (\/Users\/me\/Documents\/stationery.pdf) when executed from withing the printing dialogue?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":62,"Q_Id":64662338,"Users Score":1,"Answer":"EDIT:\nThis problem seems to have been fixed in Monterey: python scripts will now correctly run as PDF Services.\n(Of course, they're removing python, so you'll have to install your own. 
Swings and Roundabouts.)","Q_Score":0,"Tags":"python,macos,pdf","A_Id":65738829,"CreationDate":"2020-11-03T11:52:00.000","Title":"macOS python PDF service denied access in macOS Catalina","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I know this is a common error and it's been asked many times here on SO. I've been through all the solutions and none of them are working for me.\nI'm using crontab on my iMac (running Catalina) to set up a cron job:\n42 11,20 * * * cd path\/to\/directory && echo | sudo -S \/Library\/Frameworks\/Python.framework\/Versions\/3.8\/bin\/python3 filename.py >> log.txt\nThe full error I'm getting:\nPassword:\/Library\/Frameworks\/Python.framework\/Versions\/3.8\/bin\/python3: can't open file 'filename.py': [Errno 1] Operation not permitted\nI've tried:\n\nAllowing Terminal to have Full Disk Access\nSetting permissions on the files in the directory with sudo chown my-username:my-groupname filename\nAdding password in the command\n\nbut this error never changes.\nAny help gratefully accepted.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":1125,"Q_Id":64666365,"Users Score":1,"Answer":"I was able to resolve this. 
These are steps you can try to follow:\n\nOpen up System Preferences -> Security & Privacy (Mac OSX Catalina)\nOpen Privacy tab\nClick Full Disk Access\nEnsure your version of python has been added to the list of application with access.\n\nTo do this, just find out where in your system Python is located, navigate there by pasting the path into Finder->Go->Go To Folder, finding the exe file, and dragging it into the Full Disk Access section of Privacy.","Q_Score":0,"Tags":"python,macos,cron","A_Id":64685153,"CreationDate":"2020-11-03T16:03:00.000","Title":"Errno 1: Operation not permitted while running cron job in crontab","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am using prefect workflow for business applications.\nI have a question about logs stored in postgresql.\nIf daily logs are kept stored on the postgresql server, the amount of data will be enormous.\nIs there a mechanism to rotate this log and write it to a text file?\nAlso, is it okay to delete the corresponding record after writing the data from the postgresql table to a text file, etc.?","AnswerCount":1,"Available Count":1,"Score":0.537049567,"is_accepted":false,"ViewCount":488,"Q_Id":64673822,"Users Score":3,"Answer":"Since you're running an instance of Prefect Server, it would make sense to write a Flow that connects to the postgres container and archives the logs to some sort of cloud storage (GCS, S3, etc) for you. 
I'd probably think about it like this:\n\nUse the Prefect Client to gather all the IDs of Flow Runs older than a certain date.\nConnect to postgres and select the logs from the logs table for logs with those Flow Run IDs\nWrite those logs to CSV\/SQL\/Text as preferred\nDelete those logs from postgres","Q_Score":2,"Tags":"python,postgresql,airflow,workflow,prefect","A_Id":64806809,"CreationDate":"2020-11-04T03:18:00.000","Title":"About managing logs of prefect workflow","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have been successfully using a virtual environment created by pipenv for months, located at ~\/.virtualenvs. However, today when I tried to activate it using \"pipenv shell\" (while in the proper directory) pipenv created a new .venv file in the current directory instead of loading the environment from ~\/.virtualenvs. My main concern is: how do I redirect pipenv to the existing virtual environment? Out of curiosity, any ideas about what could suddenly cause this behavior?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":360,"Q_Id":64703208,"Users Score":0,"Answer":"Just go to that project's venv bin folder and do source activate; that will activate it, as pipenv is just a wrapper around virtualenv.","Q_Score":0,"Tags":"python,pipenv","A_Id":64703321,"CreationDate":"2020-11-05T18:24:00.000","Title":"Pipenv shell command creates new venv instead of loading existing","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I've been running GDAL through python on anaconda for years with no problem.
Today when I tried to import gdal I got this error:\n\nOn Windows, with Python >= 3.8, DLLs are no longer imported from the\nPATH. If gdalXXX.dll is in the PATH, then set the\nUSE_PATH_FOR_GDAL_PYTHON=YES environment variable to feed the PATH\ninto os.add_dll_directory().\n\nI've been looking for a solution but can't seem to figure out how to fix this. Does anybody have a solution?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1321,"Q_Id":64707915,"Users Score":0,"Answer":"use:\nfrom osgeo import gdal\ninstead of:\nimport gdal","Q_Score":4,"Tags":"python,gdal","A_Id":72133539,"CreationDate":"2020-11-06T02:10:00.000","Title":"how to set the USE_PATH_FOR_GDAL_PYTHON=YES environment variable to feed the PATH into os.add_dll_directory()","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am using kafka-python-2.0.1 to consume data from kafka brokers. As of now I am running a single consumer instance. We receive 2M records every 5 minutes. I noticed that kafka-python is not able to read data fast enough to consume all the messages in a timely manner. I am new to kafka-python and not sure how to structure the implementation to read data faster. Should I run more than one consumer?\nconsumer = KafkaConsumer(bootstrap_servers='',security_protocol='SASL_SSL', sasl_mechanism = 'GSSAPI', auto_offset_reset = 'latest', sasl_kerberos_service_name = 'kafka',ssl_cafile='', ssl_check_hostname=False,api_version=(0,10))\nThanks,","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":369,"Q_Id":64708194,"Users Score":0,"Answer":"Kafka scales with the number of partitions within a topic.\nIf you want to increase your throughput, you can increase the number of partitions of the topic.
Then note, each partition can be consumed by at most one consumer within the same consumer group. So you should have the number of consumers within a consumer group match the numbers of partitions of the topic.\nThe consumer group can be set in the KafkaConsumer through the configuration group_id.","Q_Score":1,"Tags":"python-3.x,apache-kafka,kafka-consumer-api,kafka-python","A_Id":64725179,"CreationDate":"2020-11-06T02:53:00.000","Title":"kafka-python-2.0.1 performance with large data set","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am having issues installing anaconda silently on some machines. I am running Anaconda3-2020_07-Windows-x86_64.exe \/InstallationType=AllUsers \/RegisterPython=1 \/AddToPath=1 \/S. It runs for a bit and creates the Anaconda3 directory in program data but the only files that show up are _conda.exe and uninstall_Anaconda3.exe and the pkgs,Lib,Conda-meta directories, with sub folders and files in them but no python and don't see the rest of the files in the main directory I normally would see on complete install. It also never adds the path environment variables but it registers as installed with windows. My biggest issue here is there is nothing to go on as to what it is doing. I have looked everywhere but this thing does not create a logfile. Am I missing it or is that just an oversight of whoever made this? 
If I had a logfile I could troubleshoot this better.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":794,"Q_Id":64722377,"Users Score":0,"Answer":"I'm having the same problem with 2021.05 via SCCM silent install, I had no problem with 2019.03.\nEarly testing suggests that the SCCM option 'Run installation and uninstall program as 32-bit process on 64-bit clients' helps.","Q_Score":0,"Tags":"python,anaconda,logfile,silent","A_Id":69092170,"CreationDate":"2020-11-06T22:24:00.000","Title":"Cannot find Anaconda install logging","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am trying to install OpenCV in a docker container (CentOS).\nI tried installing python first and then tried yum install opencv-contrib but it doesn't work.\nCan someone help me out as to how to install OpenCV in Docker (CentOS)?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":225,"Q_Id":64739928,"Users Score":0,"Answer":"To install OpenCV use the command: sudo yum install opencv opencv-devel opencv-python\nAnd when the installation is completed use the command to verify: pkg-config --modversion opencv","Q_Score":2,"Tags":"python,linux,docker,opencv,anaconda","A_Id":64748740,"CreationDate":"2020-11-08T15:39:00.000","Title":"How to install OpenCV in Docker (CentOs)?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am going to use Kafka as a message broker in my application. This application is written entirely using Python. For a part of this application (Login and Authentication), I need to implement a request-reply messaging system. 
In other words, the producer needs to get the response to the produced message from the consumer, synchronously.\nIs it feasible using Kafka and its Python libraries (kafka-python, ...)?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":1322,"Q_Id":64748185,"Users Score":1,"Answer":"I'm facing the same issue (request-reply for an HTTP hit in my case).\nMy first bet was (100% python):\n\nstart a consumer thread,\npublish the request message (including a request_id)\njoin the consumer thread\nget the answer from the consumer thread\nThe consumer thread subscribes to the reply topic (seeked to the end) and deals with received messages until it finds the request_id (modulo a timeout)\n\nWhile it works for basic testing, unfortunately, creating a KafkaConsumer object is a slow process (~300ms), so it's not an option for a system with massive traffic.\nIn addition, if your system deals with parallel request-reply (for example, multi-threaded like a web server is) you'll need a KafkaConsumer dedicated to each request_id (basically by using the request_id as the consumer_group) to avoid having the reply to a request published by thread-A consumed (and ignored) by thread-B.\nSo you can't recycle your KafkaConsumer here, and you have to pay the creation time for each request (in addition to the processing time on the backend).\nIf your request-reply processing is not parallelizable you can try to keep the KafkaConsumer object available for the threads started to get the answer.\nThe only solution I can see at this point is to use a DB (relational\/noSQL):\n\nthe requestor stores the request_id in the DB (as local as possible) and publishes the request to Kafka\nthe requestor polls the DB until it finds the answer to the request_id\nin parallel, a consumer process receives messages from the reply topic and stores the results in the DB\n\nBut I don't like polling.....
It will generate a heavy load on the DB in a massive-traffic system.\nMy 2CTS","Q_Score":7,"Tags":"python,apache-kafka,synchronous,kafka-python","A_Id":66068833,"CreationDate":"2020-11-09T08:35:00.000","Title":"How to implement request-reply (synchronous) messaging paradigm in Kafka?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a linux console application which behaves in a similar way to bash in that it receives commands while it's running, and I want to use it via Python (use Python to send inputs and receive outputs to\/from the application while it\u2019s running).\nThe application in question is custom Minecraft server software; it does not appear to have a usable API\/SDK for my needs.\nHow would I be able to run it, capture its output and interact with it (enter commands into the program) using Python code?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":204,"Q_Id":64754327,"Users Score":0,"Answer":"You can use os.system(command) to just run a command, or if you want the output then try subprocess.check_output(commands), which returns whatever is returned by bash. 
Pass a list of commands to this; for example, to run node index.js, you'd pass [\"node\", \"index.js\"]","Q_Score":0,"Tags":"python-3.x,linux,console,console-application","A_Id":64755783,"CreationDate":"2020-11-09T15:17:00.000","Title":"How to interact with console application using python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm trying to run a .exe file with the required dll's in the same folder with os.system or subprocess.call(), which is working perfectly fine on my local machine, but after deployment even os.system('date') results in 0. Is there anything I can do to solve this issue?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":154,"Q_Id":64763280,"Users Score":0,"Answer":"Use the check param of subprocess.call to check the error code.\nCheck if your application needs vcredist to be installed.\nUse Dependency Walker to see what the dependencies of your application are and make sure all the dependencies are available in your target environment.","Q_Score":0,"Tags":"python-3.x,azure-function-app","A_Id":64763750,"CreationDate":"2020-11-10T05:17:00.000","Title":"os.system or subprocess.call() not working in azure function","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have python files (named in the same way) in different folders. I want to run all the python files at the same time with one script; none of the terminals (of the executed files) should be closed when a new terminal is opened. Which means, I want them to run simultaneously, each in a new\/different terminal. 
What kind of script could do this?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":129,"Q_Id":64766611,"Users Score":0,"Answer":"SOLVED\nI just had to use:\n\nsubprocess.Popen(\"python pathtoeachfile\/\/file.py\", creationflags=subprocess.CREATE_NEW_CONSOLE)\n\nand loop over my files.","Q_Score":1,"Tags":"python,terminal","A_Id":64775873,"CreationDate":"2020-11-10T09:56:00.000","Title":"How to run multiple python files simultaneously?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am trying to find a way to cancel an Airflow dag run while it is being executed (whichever task it is at that moment). I wonder if I can set the status to \"failed\" while the dag is running?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":110,"Q_Id":64770232,"Users Score":0,"Answer":"Yes, you can just click on the task that is running and mark it as failed. This will fail all downstream tasks and then eventually the DAG.","Q_Score":0,"Tags":"python,airflow","A_Id":64777700,"CreationDate":"2020-11-10T13:52:00.000","Title":"Is there a way to cancel Airflow dag run while it is being executed?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"How do you get Ruby, Python and Node.js development environments running on the Apple Silicon architecture? What about virtualization software, e.g. 
Docker?","AnswerCount":4,"Available Count":1,"Score":-1.0,"is_accepted":false,"ViewCount":18717,"Q_Id":64774787,"Users Score":-15,"Answer":"Seems everything will work as is...\nFrom the event presentation they said \"Existing Mac apps that have not been updated to Universal will run seamlessly with Apple\u2019s Rosetta 2 technology.\"","Q_Score":32,"Tags":"python,node.js,ruby,docker,apple-silicon","A_Id":64775397,"CreationDate":"2020-11-10T18:37:00.000","Title":"Running Ruby, Node, Python and Docker on the new Apple Silicon architecture?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":1},{"Question":"I have been forced to develop python scripts on Windows 10, which I have never done before.\nI have installed python 3.9 using the windows installer package into the C:\\Program Files\\Python directory.\nThis directory is write-protected against regular users and I don't want to elevate to admin, so when using pip globally I use the --user switch and python installs modules to C:\\Users\\AppData\\Roaming\\Python\\Python39\\site-packages and scripts to the C:\\Users\\AppData\\Roaming\\Python\\Python39\\Scripts directory.\nI don't know how it sets this weird path, but at least it is working. 
I have added this path to the %Path% variable for my user.\nProblems start when I'm trying to use a virtual environment and upgrade pip:\n\nI have created a new project on the local machine in C:\\Users\\Projects and entered the path in the terminal.\npython -m venv venv\nsource venv\\Scrips\\activate\npip install --upgrade pip\n\nBut then I get this error:\nERROR: Could not install packages due to an EnvironmentError: [WinError 5] Access denied: 'C:\\Users\\\\AppData\\Local\\Temp\\pip-uninstall-7jcd65xy\\pip.exe'\nConsider using the --user option or check the permissions.\nSo when I try to use the --user flag I get:\nERROR: Can not perform a '--user' install. User site-packages are not visible in this virtualenv.\nSo my questions are:\n\nwhy is it not trying to install everything inside the virtual environment (venv\\Scripts\\pip.exe)?\nhow do I get access denied, when this folder is supposed to be owned by my user?\n\nWhen using the deprecated easy_install --upgrade pip everything works fine.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":81,"Q_Id":64810229,"Users Score":1,"Answer":"I recently had the same issue with some other modules. My solution was simply to downgrade from python 3.9 to 3.7. 
Or make a virtual environment for 3.7, use that, and see how it works.","Q_Score":1,"Tags":"python,windows,pip,virtualenv","A_Id":64811453,"CreationDate":"2020-11-12T19:06:00.000","Title":"python on windows 10 cannot upgrade modules in virtual environment","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"When I try to install psycopg2 this error appears:\nfatal error: Python.h: No such file or directory\n#include \nbecause there is a search for this file along the path \/usr\/include\/python3.8\nbut this file is located in the path \/usr\/local\/include\/python3.8\/Python.x\nHow do I solve this problem? Is Python installed in the wrong directory?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":2180,"Q_Id":64830200,"Users Score":1,"Answer":"It's not clear what version of Ubuntu you are using. Assuming it is a fresh install and you don't have these installed, I would suggest you install the following:\nsudo apt-get install python3 python-dev python3-dev build-essential\nOnce these are installed, try installing psycopg2 again.","Q_Score":0,"Tags":"python,ubuntu","A_Id":64830233,"CreationDate":"2020-11-14T01:56:00.000","Title":"Ubuntu, fatal error: Python.h: No such file or directory #include","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have an issue executing pytest from GitBash.\nIn GitBash I'm located in the directory of my pytest and .py file. Writing pytest in GitBash gives me bash: pytest: command not found. 
I know that I can execute pytest from PyCharm's terminal, but it's not as comfortable to use as executing it from Bash.\nI looked on the internet and found that pytest is installed in a venv, which may cause some issues.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":453,"Q_Id":64838404,"Users Score":0,"Answer":"You need to either add the directory where pytest is located to your PATH environment variable or you need to specify the full path in the command. For GitBash, you can do this by editing the .bashrc file in your home directory.","Q_Score":0,"Tags":"python,pytest","A_Id":64838419,"CreationDate":"2020-11-14T20:39:00.000","Title":"Executing pytest from GitBash","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have just installed the latest Termux on my Android device and Python 3.9 is the default Python installation. I need to run Python 3.8.x due to some package incompatibilities.\nMy searching tells me there is no way to downgrade Python within Termux - is this correct?\nIf I install a previous version of Termux, will this in turn install an earlier version of Python or will it just collect the same default version?\nIs there another way for me to make this change?","AnswerCount":3,"Available Count":1,"Score":0.1325487884,"is_accepted":false,"ViewCount":11692,"Q_Id":64853742,"Users Score":2,"Answer":"It doesn't depend upon the version of Termux. It depends upon the repository, and it always updates its packages. 
So I think there is no way.","Q_Score":2,"Tags":"python,android,downgrade,termux","A_Id":64853866,"CreationDate":"2020-11-16T06:50:00.000","Title":"Need to run Python 3.8.x on Termux on Android, currently installed with Python 3.9","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"We are running a Cloud Composer setup on GCP and want to run a remote ETL job in a secured environment on premises.\nCloud Composer uses Redis which is running on the K8S cluster.\nWe can not connect to it via VPN.\nHow can we use our master node on GCP to allow the remote worker to interact with it?\nCan we safely expose our Redis?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":504,"Q_Id":64861286,"Users Score":0,"Answer":"You can configure a password that Airflow will provide Redis to access it. You can also set cloud permissions so that only your Airflow machines will have access to the Redis machine or access to the Redis ports on the machine that it's hosted on.\nYou will add significant latency by having it connect to an on-premises machine though.","Q_Score":0,"Tags":"python,celery,airflow,google-cloud-composer","A_Id":64867928,"CreationDate":"2020-11-16T15:56:00.000","Title":"Use Cloud Composer with Celery Executor to run job on remote worker in secured network","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have one Kafka topic with one group and one partition.\nI have 15 Kafka consumers subscribed to the topic.\nI want all of the consumers to receive a message when I publish it.\nI am using the kafka-python library.\nIs it possible in Kafka?","AnswerCount":1,"Available 
Count":1,"Score":0.0,"is_accepted":false,"ViewCount":89,"Q_Id":64883830,"Users Score":0,"Answer":"Give all consumers a unique consumer group id, then they each have individually tracked offsets and receive the same messages","Q_Score":0,"Tags":"python,apache-kafka","A_Id":64885934,"CreationDate":"2020-11-17T21:47:00.000","Title":"How do I send the same message to all consumers of a kafka topic?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I want to make a python script that I can turn into an .exe file and give to others in my company. How can I do this without requiring them to download the Anaconda Distribution, conda installing the correct libraries, etc? Does turning a file into an .exe file take care of this?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":35,"Q_Id":64885227,"Users Score":0,"Answer":"The existing tools I'm aware of (PyInstaller, py2exe, py2app and cx_Freeze) are all designed to encapsulate all dependencies into a single executable, with no external dependencies, so you should be able to distribute the (possibly quite large) executable without worrying about dependencies.\nTo be clear, there is nothing intrinsic to the .exe conversion that causes it to avoid dependencies. .exe files can depend on .dll library files, external data files, etc. 
It's just that most people who want to make such an executable are trying to remove dependencies on the installed Python entirely, not just third-party library dependencies, so the tooling tends to support this use case by avoiding any dependencies at all.","Q_Score":0,"Tags":"python","A_Id":64885305,"CreationDate":"2020-11-18T00:13:00.000","Title":"Will python .exe files work for others in my company?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm trying to use Python on my Mac Catalina to communicate with my Arduino and keep getting \"no such file or directory\" when I input the Mac serial port \/dev\/cu.usbmodem1433301 (Arduino Uno) as indicated in the Arduino Tools\/Port list and I run the script. If I can't get a workable port for Python, I might as well not use it.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":171,"Q_Id":64898725,"Users Score":1,"Answer":"I just discovered that the usb port on the opposite end of my mac's keyboard is the one that python can communicate with arduino through. The port I was using was 143301 and the one that works (on the caps lock end) is 143101. Made all the difference in the world. Problem solved. Life is good again.","Q_Score":1,"Tags":"python,macos,arduino,port","A_Id":64921597,"CreationDate":"2020-11-18T18:00:00.000","Title":"What is workable serial port name for MacOS of \/dev\/cu.usbmodem143301 that can be accepted by Python's Pyserial?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm trying to cross-compile Python 2.7.18 for an x86,uclibc machine using a crosstool-ng example toolchain. 
The commands used are the following:\nCONFIG_SITE=config.site CC=\/home\/msainz\/x-tools\/x86_64-unknown-linux-uclibc\/bin\/x86_64-unknown-linux-uclibc-gcc CXX=\/home\/msainz\/x-tools\/x86_64-unknown-linux-uclibc\/bin\/x86_64-unknown-linux-uclibc-g++ AR=\/home\/msainz\/x-tools\/x86_64-unknown-linux-uclibc\/bin\/x86_64-unknown-linux-uclibc-ar RANLIB=\/home\/msainz\/x-tools\/x86_64-unknown-linux-uclibc\/bin\/x86_64-unknown-linux-uclibc-ranlib READELF=\/home\/msainz\/x-tools\/x86_64-unknown-linux-uclibc\/bin\/x86_64-unknown-linux-uclibc-readelf LDFLAGS=\"-L\/home\/msainz\/Projects\/Selene\/WP3\/local\/uclibc\/base_rootfs\/lib -L\/home\/msainz\/Projects\/Selene\/WP3\/local\/uclibc\/base_rootfs\/usr\/lib\" CFLAGS=\"-I\/home\/msainz\/Projects\/Selene\/WP3\/local\/uclibc\/base_rootfs\/usr\/include -I\/home\/msainz\/Projects\/Selene\/WP3\/local\/uclibc\/base_rootfs\/include\" CPPFLAGS=\"-I\/home\/msainz\/Projects\/Selene\/WP3\/local\/uclibc\/base_rootfs\/usr\/include -I\/home\/msainz\/Projects\/Selene\/WP3\/local\/uclibc\/base_rootfs\/include\" .\/configure --enable-shared --host=x86_64-unknown-linux-uclibc --build=x86_64 --disable-ipv6 --prefix=\/home\/msainz\/Projects\/python2_top_uclibc\/\nfollowed by\nPATH=$PATH:\/home\/msainz\/Projects\/python2_top_glibc\/bin\/ make\nand\nPATH=$PATH:\/home\/msainz\/Projects\/python2_top_glibc\/bin\/ make install\nExecution ends with the following error:\nfi \/home\/msainz\/x-tools\/x86_64-unknown-linux-uclibc\/bin\/x86_64-unknown-linux-uclibc-gcc -L\/home\/msainz\/Projects\/Selene\/WP3\/local\/uclibc\/base_rootfs\/lib -L\/home\/msainz\/Projects\/Selene\/WP3\/local\/uclibc\/base_rootfs\/usr\/lib -Xlinker -export-dynamic -o python \\ Modules\/python.o \\ -L. -lpython2.7 -ldl -lpthread -lm _PYTHON_PROJECT_BASE=\/home\/msainz\/Projects\/Python-2.7.18 _PYTHON_HOST_PLATFORM=linux2-x86_64 PYTHONPATH=.\/Lib:.\/Lib\/plat-linux2 python -S -m sysconfig --generate-posix-vars ;\\ if test $? 
-ne 0 ; then \\ echo \"generate-posix-vars failed\" ; \\ rm -f .\/pybuilddir.txt ; \\ exit 1 ; \\ fi python: error while loading shared libraries: libc.so.0: cannot open shared object file: No such file or directory generate-posix-vars failed make: *** [Makefile:523: pybuilddir.txt] Error 1 \npython2_top_glibc dir contains a previous Python-2.7.18 installation but for native glibc which was compiled perfectly. libc.so.0 is in fact in the base_rootfs of target system, which is being linked in .\/configure stage. I'm stuck at this at the moment. Any clue will be appreciated. Any additional info will be supplied on demand.\nThanks in advance.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":191,"Q_Id":64909671,"Users Score":0,"Answer":"python: cannot open shared object file: No such file or directory\n\nThis is a run-time loader error. You are trying to run a python executable that is linked against that libc.so.0.\nIf this executable can actually run in your host environment, you can enable it by adding your base_rootfs library to LD_LIBRARY_PATH. 
Otherwise, you need to use your host python executable in this step of the build process, or disable it altogether.","Q_Score":0,"Tags":"python,linux,cross-compiling,libc,uclibc","A_Id":64909996,"CreationDate":"2020-11-19T10:21:00.000","Title":"Unable to cross-compile Python-2.7.18 for x86,uclibc","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"Process: Python [1593]\nPath: \/Library\/Frameworks\/Python.framework\/Versions\/3.5\/Resources\/Python.app\/Contents\/MacOS\/Python\nIdentifier: Python\nVersion: 3.5.1 (3.5.1)\nCode Type: X86-64 (Native)\nParent Process: zsh [1569]\nResponsible: iTerm2 [1562]\nUser ID: 501\nDate\/Time: 2020-11-21 08:15:58.865 +0800\nOS Version: macOS 11.0.1 (20B29)\nReport Version: 12\nBridge OS Version: 5.0.1 (18P2561)\nAnonymous UUID: E76F7C18-1C08-D433-A979-D43ED08102AF\nSleep\/Wake UUID: E8807548-2D08-4BC7-840E-21E0138FEC36\nTime Awake Since Boot: 1400 seconds\nTime Since Wake: 210 seconds\nSystem Integrity Protection: enabled\nCrashed Thread: 0\nException Type: EXC_CRASH (SIGABRT)\nException Codes: 0x0000000000000000, 0x0000000000000000\nException Note: EXC_CORPSE_NOTIFY\nTermination Reason: DYLD, [0x1] Library missing\nApplication Specific Information:\ndyld: launch, loading dependent libraries\nDyld Error Message:\ndyld: No shared cache present\nLibrary not loaded: \/System\/Library\/Frameworks\/CoreFoundation.framework\/Versions\/A\/CoreFoundation\nReferenced from: \/Library\/Frameworks\/Python.framework\/Versions\/3.5\/Resources\/Python.app\/Contents\/MacOS\/Python\nReason: image not found\nBinary Images:\n0x100000000 - 0x100000fff +org.python.python (3.5.1 - 3.5.1) <16087962-95EF-B9B7-A634-47CA97FED0B7> \/Library\/Frameworks\/Python.framework\/Versions\/3.5\/Resources\/Python.app\/Contents\/MacOS\/Python\n0x7fff624d8000 - 0x7fff62573fff dyld (832.7.1) 
<2705F0D8-C104-3DE9-BEB5-B1EF6E28656D> \/usr\/lib\/dyld\nModel: MacBookPro15,2, BootROM 1554.50.3.0.0 (iBridge: 18.16.12561.0.0,0), 4 processors, Quad-Core Intel Core i5, 2.4 GHz, 16 GB, SMC\nGraphics: kHW_IntelIrisGraphics655Item, Intel Iris Plus Graphics 655, spdisplays_builtin\nMemory Module: BANK 0\/ChannelA-DIMM0, 8 GB, LPDDR3, 2133 MHz, SK Hynix, -\nMemory Module: BANK 2\/ChannelB-DIMM0, 8 GB, LPDDR3, 2133 MHz, SK Hynix, -\nAirPort: spairport_wireless_card_type_airport_extreme (0x14E4, 0x7BF), wl0: Sep 11 2020 16:57:49 version 9.30.440.2.32.5.61 FWID 01-129bddb\nBluetooth: Version 8.0.1f5, 3 services, 18 devices, 1 incoming serial ports\nNetwork Service: Wi-Fi, AirPort, en0\nUSB Device: USB 3.1 Bus\nUSB Device: Apple T2 Bus\nUSB Device: Touch Bar Backlight\nUSB Device: Touch Bar Display\nUSB Device: Apple Internal Keyboard \/ Trackpad\nUSB Device: Headset\nUSB Device: Ambient Light Sensor\nUSB Device: FaceTime HD Camera (Built-in)\nUSB Device: Apple T2 Controller\nThunderbolt Bus: MacBook Pro, Apple Inc., 47.4\nThunderbolt Bus: MacBook Pro, Apple Inc., 47.4","AnswerCount":4,"Available Count":1,"Score":-1.0,"is_accepted":false,"ViewCount":12935,"Q_Id":64938572,"Users Score":-6,"Answer":"This also happened to me when I was trying to run python3 from iTerm (replacement of terminal). This problem was not occurring with the MacBook's default terminal. After I updated iTerm, this error was not occurring anymore. 
Please try to update the application on which you are trying to run python3 (in my case it was iTerm) instead of updating the python version.","Q_Score":10,"Tags":"python,macos","A_Id":65018254,"CreationDate":"2020-11-21T00:18:00.000","Title":"python3.5 error 'dyld library not loaded: CoreFoundation' after macOS Big Sur update","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"When I type in which python I get ~\/opt\/anaconda3\/bin\/python, and that's where all packages are going. I clearly didn't know what I was doing when I installed it.\nShould I try to uninstall everything and start over? I'm kind of a beginner and I feel like I've made life more difficult for myself.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":75,"Q_Id":64938702,"Users Score":0,"Answer":"Short answer is no: ~\/opt\/anaconda3\/bin\/python is the default location for python 3.6 or higher when you install anaconda. Python 2.7 comes with the Mac, which you will most probably not use for development, and when you download anaconda it comes with python 3.9.","Q_Score":0,"Tags":"python,macos,installation","A_Id":64939028,"CreationDate":"2020-11-21T00:44:00.000","Title":"I think I've messed up my Python environment on my Mac","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"Unfortunately, when I make an exe file in windows 10, it never runs in windows 8. 
Pyinstaller, after running in cmd using \"pyinstaller.exe --onefile myfile.py\", gives me a successful exe file in the dist folder which runs on my windows 10 successfully but never runs in earlier versions like windows 8.","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":1514,"Q_Id":64941111,"Users Score":1,"Answer":"The problem is most likely that you're missing DLLs on the machine that won't run the executable.\nThis is due to the fact that somewhere, at some time, you've installed either a .NET environment, a Visual Runtime environment or a runtime containing a particular set of DLLs needed for the application to function.\nYou can use the --add-data argument to add DLLs.","Q_Score":0,"Tags":"python,windows,exe,pyinstaller","A_Id":64941130,"CreationDate":"2020-11-21T08:25:00.000","Title":"exe file created using pyinstaller in windows 10 does not run in windows 8","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a Flask-based app that is used as a front (HTTP API only) for an image processing task (face detection along with some clustering). The app is deployed to Kubernetes clusters and, unfortunately, during load testing it dies.\nThe problem is that all Flask threads are reserved for request processing and the application can't reply to the Kubernetes liveness probe (\/health endpoint) via HTTP - so the whole pod gets restarted.\nHow can I resolve it? I thought about a grep-based liveness probe, however it doesn't solve the problem. 
Another idea is to use celery; however, if Flask doesn't support async processing, I'll need to call wait() on a celery task, which puts me in exactly the same place.\nFor now I'm not considering returning a 202 response along with a URL for process monitoring.\nAny other ideas?","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":242,"Q_Id":64967504,"Users Score":2,"Answer":"How did you deploy Gunicorn etc.?\nFastAPI might be better suited for your use case, but migration might be prohibitive. It has built-in async support which should help you to scale better. I like tiangolo's docker containers for this.\nHow long does your image recognition take (seconds, milliseconds)?\nIf you must stick to your current design:\n\nincrease the timeout, but be aware that your customers have the same problem - they might time out.\nIncrease resources: more pods, so that no pod is left without resources.\n\nIf you're using Flask, be aware that the dev server is not meant for production deployment. Although it is multithreaded, it's not particularly performant, stable, or secure. Use a production WSGI server to serve the Flask application, such as Gunicorn, mod_wsgi, or something else.","Q_Score":0,"Tags":"python,flask,kubernetes","A_Id":64967781,"CreationDate":"2020-11-23T11:24:00.000","Title":"Flask thread pool exhausted","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I have a .exe file containing a skeletonization function.\nWhat I have to do is run the command, giving the inputs (the input image + some parameters), and get the skeleton like this:\n\nsystem Skel_current.exe input-image 'outputimage.png' 'param1' 'param2'\n\nThe problem is that my jupyter notebook can't point to the .exe file. 
After several trials, I got a first solution by installing wine and was able to execute the command in the shell, but couldn't execute it in the jupyter notebook. When running this command in the jupyter notebook:\n\nwine Skel_current.exe class1 'image.png' '4.00000' '0.01000'\n\nI got this error:\n\n=002b:fixme:msvcrt:type_info_name_internal_method type_info_node parameter ignored\nwine: Unhandled page fault on read access to 0000000000000000 at address 00000001400293D3 (thread 002b), starting debugger...\n002d:fixme:dbghelp:elf_search_auxv can't find symbol in module\n...002d:fixme:dbghelp:interpret_function_table_entry PUSH_MACHFRAME 6","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":215,"Q_Id":65051528,"Users Score":0,"Answer":"I finally found the correct command!\n\nsubprocess.run([\"wine\",\"Skel_current.exe\", 'image.png','skel.png' ,'param1' ,'param2'])","Q_Score":0,"Tags":"python,linux,windows","A_Id":65052658,"CreationDate":"2020-11-28T15:55:00.000","Title":"How to run .exe file (created in Windows) in python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have successfully installed pyobjc-core, but when I try to install pyobjc, I get an error:\n(the first part)\nERROR: Command errored out with exit status 1:\ncommand: \/Library\/Frameworks\/Python.framework\/Versions\/3.9\/bin\/python3.9 -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '\"'\"'\/private\/var\/folders\/b6\/22gqf0jd6252c93x8pbxr5nw0000gn\/T\/pip-install-nnn7ftk2\/pyobjc-framework-cocoa\/setup.py'\"'\"'; file='\"'\"'\/private\/var\/folders\/b6\/22gqf0jd6252c93x8pbxr5nw0000gn\/T\/pip-install-nnn7ftk2\/pyobjc-framework-cocoa\/setup.py'\"'\"';f=getattr(tokenize, '\"'\"'open'\"'\"', open)(file);code=f.read().replace('\"'\"'\\r\\n'\"'\"', 
'\"'\"'\\n'\"'\"');f.close();exec(compile(code, file, '\"'\"'exec'\"'\"'))' install --record \/private\/var\/folders\/b6\/22gqf0jd6252c93x8pbxr5nw0000gn\/T\/pip-record-2nxnxwsn\/install-record.txt --single-version-externally-managed --compile --install-headers \/Library\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/pyobjc-framework-Cocoa\ncwd: \/private\/var\/folders\/b6\/22gqf0jd6252c93x8pbxr5nw0000gn\/T\/pip-install-nnn7ftk2\/pyobjc-framework-cocoa\/\n(the last part)\nERROR: Command errored out with exit status 1: \/Library\/Frameworks\/Python.framework\/Versions\/3.9\/bin\/python3.9 -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '\"'\"'\/private\/var\/folders\/b6\/22gqf0jd6252c93x8pbxr5nw0000gn\/T\/pip-install-nnn7ftk2\/pyobjc-framework-cocoa\/setup.py'\"'\"'; file='\"'\"'\/private\/var\/folders\/b6\/22gqf0jd6252c93x8pbxr5nw0000gn\/T\/pip-install-nnn7ftk2\/pyobjc-framework-cocoa\/setup.py'\"'\"';f=getattr(tokenize, '\"'\"'open'\"'\"', open)(file);code=f.read().replace('\"'\"'\\r\\n'\"'\"', '\"'\"'\\n'\"'\"');f.close();exec(compile(code, file, '\"'\"'exec'\"'\"'))' install --record \/private\/var\/folders\/b6\/22gqf0jd6252c93x8pbxr5nw0000gn\/T\/pip-record-2nxnxwsn\/install-record.txt --single-version-externally-managed --compile --install-headers \/Library\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/pyobjc-framework-Cocoa Check the logs for full command output.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":273,"Q_Id":65064442,"Users Score":1,"Answer":"Try to use different Python version, right now not all libraries have wheels for 3.9 .","Q_Score":0,"Tags":"python,pip,pyobjc","A_Id":65064611,"CreationDate":"2020-11-29T19:48:00.000","Title":"I get an error when I install pyobjc on my mac","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and 
DevOps":1,"Web Development":0},{"Question":"I am trying to generate dynamic workflow in airflow based on user input. I know there is an option to have it based on data from a file or database, but in all these cases the workflow will not directly be dependent on user input, and when multiple users are using the same dag, issues may also arise. To avoid all this, I am thinking of passing user input to a sub dag and generating the workflow. But subdag does not have an option of passing user input from the ui.","AnswerCount":4,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":893,"Q_Id":65076689,"Users Score":0,"Answer":"There are many tricks for getting the same done, but the actual solution should come from airflow by way of dynamic tasks, which are missing at present. Hopefully we will see that in a future version of airflow.","Q_Score":3,"Tags":"python,airflow","A_Id":68923547,"CreationDate":"2020-11-30T15:46:00.000","Title":"Is it possible to pass user input from dag to sub dag in airflow?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I use Pyinstaller in pop os for a python script, but it creates an x-sharedlib file that I can only open through the terminal. I tried to rename it to exe and run it but nothing happens. How can I make it open by double click? Thank you!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":106,"Q_Id":65123294,"Users Score":0,"Answer":"Found the solution. 
I renamed it to .sh, changed nautilus preferences to run executable text files, and it runs normally now.","Q_Score":0,"Tags":"python-3.x,linux","A_Id":65144243,"CreationDate":"2020-12-03T09:37:00.000","Title":"Pyinstaller creates x-sharedlib file in pop os","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I run a process distributed over the cores through concurrent.futures. Each of the processes has a function which ultimately calls os.getpid(). Might the IDs from os.getpid() coincide in spite of being in different concurrent.futures' branches?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":391,"Q_Id":65137493,"Users Score":3,"Answer":"I don't know that the meaning of the value returned by os.getpid() is well defined. I'm pretty sure that you can depend on no two running processes having the same ID, but it's very likely that after some process is terminated, its ID eventually will be re-used.\nThat's what happens in most operating systems, and the implementation of os.getpid() quite likely just calls the operating system and returns the same value.","Q_Score":1,"Tags":"python,concurrency,multiprocessing,pid","A_Id":65137573,"CreationDate":"2020-12-04T03:22:00.000","Title":"uniqueness of os.getpid in multiprocessing","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm running a Mac with Catalina 10.15.6 on an Intel MBP. I'm trying to debug a C++ library that has a Python 3.7.7 binding, Python being installed in a venv. 
I used to be able to debug it via lldb by going\nlldb `which python` -- -m pytest myCrashingTest.py\nthen calling 'run', having it segfault, and then doing the normal debug fandango.\nNow when I call 'run' it tells me...\n\nerror: process exited with status -1 (Error 1)\n\nIf I try to debug python on its own, that gives me the same error.\nlldb `which python` \nI can't figure this one out and can't find anything useful via Google searches. If I try to debug system python, I get a System Integrity error, which I can get round if need be, but I'm not running system python. I'm being forced to put in debug prints in the C++ lib like it's the 1980s all over again.\nAny help appreciated.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":105,"Q_Id":65147028,"Users Score":0,"Answer":"When SIP is on, lldb is not allowed to debug system binaries, and more generally any binaries that are codesigned and not marked as willing to be debugged. The system Python does not opt into being debugged, so you will either have to turn off SIP (not sure how you do that in a venv) or build a debug version of python yourself. I generally do the latter, Python isn't that hard to build.","Q_Score":0,"Tags":"python,macos,lldb","A_Id":65151409,"CreationDate":"2020-12-04T16:18:00.000","Title":"Debugging python 3.7 in LLDB on MacOS 10.15.6","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have installed Anaconda for Windows 10. I later installed Ubuntu under the WSL. I would like to run the python from Anaconda in the Ubuntu shell. Is this possible? How do I activate the environment?\nAlternatively, if I install Anaconda under ubuntu, will I be able to use that environment in Visual Studio 2019? 
(My end goal is to do my python dev in VS2019, be able to run in debug mode there, and also use the bash shell to run python scripts.)","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":2828,"Q_Id":65151949,"Users Score":1,"Answer":"You can use VS on Ubuntu's anaconda python, however you will need to install it there as well, and for now there's no official support for a wsl GUI from Microsoft, though you can still install third-party programs that will do that for you.\nIn order to use Anaconda under the wsl shell you will first need to install it there.\nYou can do that by going to the anaconda webpage and copying the link to download the latest linux version, and under the bash you type wget [link].\nAfter the download is done you can install it by running sudo bash [name of the archive]\nYou can find its name by typing ls, and it should match the version that you just downloaded.\nAfter that you should reload the bash with source ~\/.bashrc and you should now be able to use anaconda under the linux bash, though it's not possible yet to have it on the GUI.\nThere are multiple alternatives if you still want it displaying in your browser: you could either change the path to output in your windows browser, install a complete linux GUI, or just use a windows program to display the anaconda GUI alone.\nAssuming you want the latter:\nGo to the MobaXterm website and download it. 
This is a lightweight software that comes with various inbuilt tools to access SSH server, VNC, SFTP, Command Terminal, and more.\n\nOpen the MobaXterm, once you have that on your system.\n\nClick on the Session option given in the Menu of its.\n\nSelect WSL available at the end of the tools menu.\n\nFrom Basic WSL Setting, click on the Dropdown box and select Ubuntu and press the OK button.\n\nNow you will see your Ubuntu WSL app on MobaXterm, great.\n\nThere, simply type: anaconda-navigator\n\nThat\u2019s it, this will open the graphical user interface of the Anaconda running on Ubuntu WSL app of Windows 10.\n\nStart creating environments and Installation of different packages directly from the GUI of Navigator.","Q_Score":1,"Tags":"python,visual-studio,anaconda,windows-subsystem-for-linux","A_Id":66953111,"CreationDate":"2020-12-04T23:02:00.000","Title":"How do I launch Windows Anaconda python from WSL Ubuntu?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"\"python --version\nPython was not found; run without arguments to install from the Microsoft Store, or disable this shortcut from Settings > Manage App Execution Aliases.\"\nThis is what I get trying to make sure it works (clearly it doesn't). I'm quite a rookie with all this. I started cause I wanted to run some script on bluestacks, so I needed Python and ADB added both PATH. The problem comes here.... 
It is indeed added to Path:\nC:\\windows\\system32;C:\\windows;C:\\windows\\System32\\Wbem;C:\\windows\\System32\\WindowsPowerShell\\v1.0\\;C:\\windows\\System32\\OpenSSH\\;C:\\Users\\Sierra\\AppData\\Local\\Microsoft\\WindowsApps;C:\\platform-tools;C:\\Users\\Sierra\\AppData\\Local\\Programs\\Python\\Python39;C:\\Users\\Sierra\\AppData\\Local\\Programs\\Python\\Python39\\Lib;\nThis is PATHEXT: .COM;.EXE;.BAT;.CMD;.VBS;.VBE;.JS;.JSE;.WSF;.WSH;.MSC\nThis is PYTHONPATH (I made it since I saw someone saying it would fix it):\nC:\\Users\\Sierra\\AppData\\Local\\Programs\\Python\\Python39;C:\\Users\\Sierra\\AppData\\Local\\Programs\\Python\\Python39\\Lib;C:\\Users\\Sierra\\AppData\\Local\\Programs\\Python\\Python39\\include;C:\\Users\\Sierra\\AppData\\Local\\Programs\\Python\\Python39\\DLLS;C:\\Users\\Sierra\\AppData\\Local\\Programs\\Python\\Python39\\Scripts;C:\\Users\\Sierra\\AppData\\Local\\Programs\\Python\\Python39\\Lib\\site-packages\nWeirdly enough, ADB works fine:\nC:\\Users\\Sierra>adb --version Android Debug Bridge version 1.0.41 Version 30.0.5-6877874 Installed as C:\\platform-tools\\adb.exe \nSince I did the same in both cases, I can't see why it's not working. Maybe I did something wrong with the Python version I downloaded? Weird thing too, since I also still have version 3.8 installed\nAlso, the script I need says \"Python 3.7.X installed and added to PATH.\" I guessed 3.9 would work, since it's the newest\nI apologize for my English. I'm not a native speaker, so I could have messed up somewhere. Many thanks!!\nForgot to tell, I use Windows 10","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":3120,"Q_Id":65164477,"Users Score":0,"Answer":"When you run the setup exe for Python, a list of tiny checkboxes should pop up and one will say \"Add Python to PATH\". If you click yes it will add Python to PATH, which seems like your issue, I think. 
If you have multiple versions of python installed, that could cause an issue. Type python in cmd; if it gives an error then you didn't install python properly. Double check you have all the required modules installed; if that doesn't work then I'm lost. Anyway, here's the code to open cmd.\n\nimport os\n\n\nos.system(\"cmd\")","Q_Score":0,"Tags":"python,python-3.x,windows","A_Id":65164897,"CreationDate":"2020-12-06T03:46:00.000","Title":"How to make Python 3.9 run command prompt windows 10?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"Good day!\nInstalled Python 3.9.1, checked \"Add to path\", the cmd did not work though.\nAdded Environment Variable Path, both the folder\n\nC:\\Users\\XXXXX\\AppData\\Local\\Programs\\Python\\Python39\n\n(file manager opens the path to python.exe just fine)\nand script lines:\n\nC:\\Users\\XXXXX\\AppData\\Local\\Programs\\Python\\Python39\n\nStill the commands python -version and pip --version do not work from the command line.\nPy --version works just fine though.\nMight anyone share an idea what the reason might be?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":6178,"Q_Id":65166813,"Users Score":0,"Answer":"If you had Python installed in the system before, the new path is added at the end of the PATH system variable, and when the system looks for python.exe it first finds the old version, which is available under a different folder.\nIf you used a command window opened before the new version got installed, it is also possible that system variables did not reload. 
Close it and use a new one to check.","Q_Score":4,"Tags":"python,python-3.x,python-3.9,system-paths","A_Id":65173353,"CreationDate":"2020-12-06T10:06:00.000","Title":"Python 3.9.1 path variable","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am relatively new to the python's subprocess and os modules. So, I was able to do the process execution like running bc, cat commands with python and putting the data in stdin and taking the result from stdout.\nNow I want to first know that a process like cat accepts what flags through python code (If it is possible).\nThen I want to execute a particular command with some flags set.\nI googled it for both things and it seems that I got the solution for second one but with multiple ways. So, if anyone know how to do these things and do it in some standard kind of way, it would be much appreciated.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":55,"Q_Id":65166931,"Users Score":0,"Answer":"In the context of processes, those flags are called arguments, hence also the argument vector called argv. Their interpretation is 100% up to the program called. In other words, you have to read the manpages or other documentation for the programs you want to call.\nThere is one caveat though: If you don't invoke a program directly but via a shell, that shell is the actual process being started. It then also interprets wildcards. For example, if you run cat with the argument vector ['*'], it will output the content of the file named * if it exists or an error if it doesn't. 
If you run \/bin\/sh with ['-c', 'cat *'], the shell will first resolve * into all entries in the current directory and then pass these as separate arguments to cat.","Q_Score":1,"Tags":"python,linux,subprocess","A_Id":65169064,"CreationDate":"2020-12-06T10:22:00.000","Title":"Is there any way to know the command-line options available for a separate program from Python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I installed python 3.9 selenium and behave\nI want to run first feature file but had \"behave is not recognized as an internal or external command\"\nI added C:\\ProgramFiles\\Python39\\Scripts\\ and C:\\ProgramFiles\\Python39\\ to environemt var and to system path variables. In cmd when typing python --version I got proper answser.\nI dont have any code yet just Scenario in Feature file\nAlso I dont see Behave configuration template when try to ADD Configuration to run Behave trough Pycharm, so Behave is not installed\nScenario: some scenario\nGiven ...\nWhen ...\nThen ...\nwhen typing behave login.feature got this error","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1586,"Q_Id":65168644,"Users Score":0,"Answer":"I deleted python39 and installed 38 now all is working fine","Q_Score":0,"Tags":"python,selenium,cmd,environment-variables,python-behave","A_Id":65169175,"CreationDate":"2020-12-06T13:31:00.000","Title":"why Im getting \"behave is not recognized as an internal or external command\" on windows","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"When I executed \"conda update --all\", I got the following debug messages. 
I don't see any misbehavior in my Python or Spyder installation. Does anyone know why we get these debug messages sometimes, and what are they warning us about?\nPreparing transaction: done\nVerifying transaction: done\nExecuting transaction: \/ DEBUG menuinst_win32:init(198): Menu: name: 'Anaconda${PY_VER} ${PLATFORM}', prefix: 'C:\\Users\\usuario\\Miniconda3', env_name: 'None', mode: 'user', used_mode: 'user'\nDEBUG menuinst_win32:create(323): Shortcut cmd is C:\\Users\\usuario\\Miniconda3\\pythonw.exe, args are ['C:\\Users\\usuario\\Miniconda3\\cwp.py', 'C:\\Users\\usuario\\Miniconda3', 'C:\\Users\\usuario\\Miniconda3\\pythonw.exe', 'C:\\Users\\usuario\\Miniconda3\\Scripts\\spyder-script.py']\nDEBUG menuinst_win32:create(323): Shortcut cmd is C:\\Users\\usuario\\Miniconda3\\python.exe, args are ['C:\\Users\\usuario\\Miniconda3\\cwp.py', 'C:\\Users\\usuario\\Miniconda3', 'C:\\Users\\usuario\\Miniconda3\\python.exe', 'C:\\Users\\usuario\\Miniconda3\\Scripts\\spyder-script.py', '--reset']\nDEBUG menuinst_win32:init(198): Menu: name: 'Anaconda${PY_VER} ${PLATFORM}', prefix: 'C:\\Users\\usuario\\Miniconda3', env_name: 'None', mode: 'user', used_mode: 'user'\nDEBUG menuinst_win32:create(323): Shortcut cmd is C:\\Users\\usuario\\Miniconda3\\pythonw.exe, args are ['C:\\Users\\usuario\\Miniconda3\\cwp.py', 'C:\\Users\\usuario\\Miniconda3', 'C:\\Users\\usuario\\Miniconda3\\pythonw.exe', 'C:\\Users\\usuario\\Miniconda3\\Scripts\\spyder-script.py']\nDEBUG menuinst_win32:create(323): Shortcut cmd is C:\\Users\\usuario\\Miniconda3\\python.exe, args are ['C:\\Users\\usuario\\Miniconda3\\cwp.py', 'C:\\Users\\usuario\\Miniconda3', 'C:\\Users\\usuario\\Miniconda3\\python.exe', 'C:\\Users\\usuario\\Miniconda3\\Scripts\\spyder-script.py', '--reset']\n| DEBUG menuinst_win32:init(198): Menu: name: 'Anaconda${PY_VER} ${PLATFORM}', prefix: 'C:\\Users\\usuario\\Miniconda3', env_name: 'None', mode: 'user', used_mode: 'user'\nDEBUG menuinst_win32:create(323): Shortcut cmd is 
C:\\Users\\usuario\\Miniconda3\\python.exe, args are ['C:\\Users\\usuario\\Miniconda3\\cwp.py', 'C:\\Users\\usuario\\Miniconda3', 'C:\\Users\\usuario\\Miniconda3\\python.exe', 'C:\\Users\\usuario\\Miniconda3\\Scripts\\jupyter-notebook-script.py', '\"%USERPROFILE%\/\"']\n\/ DEBUG menuinst_win32:init(198): Menu: name: 'Anaconda${PY_VER} ${PLATFORM}', prefix: 'C:\\Users\\usuario\\Miniconda3', env_name: 'None', mode: 'user', used_mode: 'user'\nDEBUG menuinst_win32:create(323): Shortcut cmd is C:\\Users\\usuario\\Miniconda3\\pythonw.exe, args are ['C:\\Users\\usuario\\Miniconda3\\cwp.py', 'C:\\Users\\usuario\\Miniconda3', 'C:\\Users\\usuario\\Miniconda3\\pythonw.exe', 'C:\\Users\\usuario\\Miniconda3\\Scripts\\spyder-script.py']\nDEBUG menuinst_win32:create(323): Shortcut cmd is C:\\Users\\usuario\\Miniconda3\\python.exe, args are ['C:\\Users\\usuario\\Miniconda3\\cwp.py', 'C:\\Users\\usuario\\Miniconda3', 'C:\\Users\\usuario\\Miniconda3\\python.exe', 'C:\\Users\\usuario\\Miniconda3\\Scripts\\spyder-script.py', '--reset']","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":2843,"Q_Id":65207104,"Users Score":0,"Answer":"The following command solved this issue for me:\n\nconda clean --yes --all","Q_Score":5,"Tags":"python,debugging,installation,anaconda","A_Id":70377013,"CreationDate":"2020-12-08T21:22:00.000","Title":"DEBUG menuinst_win32 when running conda update --all","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have machine A that just cranks out .png files. 
It gets synced to machine B and I view it on machine B.\nSometimes machine A crashes for whatever reason and stops doing the scheduled jobs, which means then files on machine B will be old.\nI want machine B to run a script to see if the file is older than 1 day, and if it is, then reset the power switch on machine A, so that it can be cold booted. The switch is connected to Google Home but understand I have to use the Assistant API.\nI have installed the google-assistant-sdk[samples] package. Can someone show me some code on how to query and return all devices then flip the switch on and off on that device?","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":1610,"Q_Id":65222829,"Users Score":1,"Answer":"The google-assistant-sdk is intended for processing audio requests.\nFrom the doc:\n\nYour project captures an utterance (a spoken audio request, such as What's on my calendar?), sends it to the Google Assistant, and receives a spoken audio response in addition to the raw text of the utterance.\n\nWhile you could use that with some recorded phrases it makes more sense to connect to the switch directly or use a service like IFTTT. What kind of switch is it?","Q_Score":4,"Tags":"python,google-assistant-sdk","A_Id":65335478,"CreationDate":"2020-12-09T18:33:00.000","Title":"Google Assistant API, controlling a light switch connected to Google Home","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have machine A that just cranks out .png files. 
It gets synced to machine B and I view it on machine B.\nSometimes machine A crashes for whatever reason and stops doing the scheduled jobs, which means the files on machine B will then be old.\nI want machine B to run a script to see if the file is older than 1 day, and if it is, then reset the power switch on machine A, so that it can be cold booted. The switch is connected to Google Home but I understand I have to use the Assistant API.\nI have installed the google-assistant-sdk[samples] package. Can someone show me some code on how to query and return all devices, then flip the switch on and off on that device?","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":1610,"Q_Id":65222829,"Users Score":1,"Answer":"Unfortunately, many smart home companies are building products for consumers, not developers. Google's SDK is letting developers stream consumer voice requests to their servers and turning that into actions. Gosund, similarly, is only interested in API access for Amazon and Google. Their API is probably not documented for public use.\nTo answer your specific question, if you want to use the Google Assistant SDK, you would name your switch something like \"Server A Switch\", record a short clip of you saying \"Turn off Server A Switch\" and \"Turn on Server A Switch\", and send those two to Google. The way Google matches the requests with your particular account is through OAuth2 tokens, which Google will give you in exchange for valid sign-in credentials.\nIf Gosund works with Google Assistant, it has a standard OAuth2 server endpoint as well as a Google Assistant compliant API endpoint. I only recommend this if you want to have some fun reverse engineering it.\nIn your Google Assistant app, if you try adding the Gosund integration, the first screen popup is the url endpoint where you can exchange valid Gosund account credentials for a one-time code which you can then exchange for OAuth2 access and refresh tokens. 
With the access token you can theoretically control your switch. The commands you'll want to send are standardized by Google. However, you'll have to figure out where to send them. The best bet here is probably to email their developers.\nAre you familiar with OAuth2? If not, I don't recommend doing any of the above.\nYour other option is to prevent Server A from hardware crashes. This is what I recommend as the least amount of work. You should start with a server that never crashes, keep it that way and add stuff on top of it. If you only have two servers, they should be able to maintain many months of uptime. Run your scheduled jobs using cron or systemctl and have a watchdog that restarts the job when it detects an error. If your job is crashing the server maybe put it in a VM like docker or something, which gives you neat auto-restart capabilities off the bat.\nAnother hacky thing you can do is schedule your gosund plug to turn off and on once a day through their consumer UI or app, or at whatever frequency you feel like is most optimal.","Q_Score":4,"Tags":"python,google-assistant-sdk","A_Id":65350670,"CreationDate":"2020-12-09T18:33:00.000","Title":"Google Assistant API, controlling a light switch connected to Google Home","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"Problem statement:\nI have a python 3.8.5 script running on Windows 10 that processes large files from multiple locations on a network drive and creates .png files containing graphs of the analyzed results. The graphs are all stored in a single destination folder on the same network drive. 
It looks something like this\nSource files:\n\\\\drive\\src1\\src1.txt\n\\\\drive\\src2\\src2.txt\n\\\\drive\\src3\\src3.txt\nOutput folder:\n\\\\drive\\dest\\out1.png\n\\\\drive\\dest\\out2.png\n\\\\drive\\dest\\out3.png\nOccasionally we need to replot the original source file and examine a portion of the data trace in detail. This involves hunting for the source file in the right folder. The source file names are longish alphanumerical strings so this process is tedious. In order to make it less tedious I would like to create symlinks to the original source files and save them side by side with the .png files. The output folder would then look like this\nOutput files:\n\\\\drive\\dest\\out1.png\n\\\\drive\\dest\\out1_src.txt\n\\\\drive\\dest\\out2.png\n\\\\drive\\dest\\out2_src.txt\n\\\\drive\\dest\\out3.png\n\\\\drive\\dest\\out3_src.txt\nwhere \\\\drive\\dest\\out1_src.txt is a symlink to \\\\drive\\src1\\src1.txt, etc.\nI am attempting to accomplish this via\nos.symlink('\/\/drive\/dest\/out1_src.txt', '\/\/drive\/src1\/src1.txt')\nHowever no matter what I try I get\n\nPermissionError: [WinError 5] Access is denied\n\nI have tried running the script from an elevated shell, enabling Developer Mode, and running\nfsutil behavior set SymlinkEvaluation R2R:1\nfsutil behavior set SymlinkEvaluation R2L:1\nbut nothing seems to work. There is absolutely no problem creating the symlinks on a local drive, e.g.,\nos.symlink('C:\/dest\/out1_src.txt', '\/\/drive\/src1\/src1.txt')\nbut that does not accomplish my goals. I have also tried creating links on the local drive per above and then copying them to the network location with\nshutil.copy(src, dest, follow_symlinks = False)\nand it fails with the same error message. 
Attempts to accomplish the same thing directly in the shell from an elevated shell also fail with the same \"Access is denied\" error message\nmklink \\\\drive\\dest\\out1_src.txt \\\\drive\\src1\\src1.txt\nIt seems to be some type of a windows permission error. However when I run fsutil behavior query SymlinkEvaluation in the shell I get\n\nLocal to local symbolic links are enabled.\nLocal to remote symbolic links are enabled.\nRemote to local symbolic links are enabled.\nRemote to remote symbolic links are enabled.\n\nAny idea how to resolve this? I have been googling for hours and according to everything I am reading it should work, except that it does not.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":379,"Q_Id":65230280,"Users Score":0,"Answer":"Open secpol.msc on PC where the newtork share is hosted, navigate to Local Policies - User Rights Assignment - Create symbolic links and add account you use to connect to the network share. You need to logoff from shared folder (Control Panel - All Control Panel Items - Credential Manager or maybe you have to reboot both computers) and try again.","Q_Score":2,"Tags":"python,windows,network-drive","A_Id":65233061,"CreationDate":"2020-12-10T07:37:00.000","Title":"Create symlink on a network drive to a file on same network drive (Win10)","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm trying to install PyAudio but it needs a Python 3.6 installation and I only have Python 3.9 installed. 
I tried to switch using brew and pyenv but it doesn't work.\nDoes anyone know how to solve this problem?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":3401,"Q_Id":65250951,"Users Score":1,"Answer":"You may install multiple versions of the same major python 3.x line, as long as the minor version is different (the x here refers to the minor version), and you can delete the no-longer-needed version at any time, since they are kept separate from each other.\nSo go ahead and install python 3.6, since it's a different minor version from 3.9. You can then delete 3.9 if you would like, since it would be used over 3.6 by the system unless you specify the version you want to run.","Q_Score":0,"Tags":"python,python-3.x,pyaudio","A_Id":65251064,"CreationDate":"2020-12-11T11:57:00.000","Title":"How to downgrade python from 3.9.0 to 3.6","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm running CentOS 8 that came with native Python 3.6.8. I needed Python 3.7 so I installed Python 3.7.0 from sources. Now, the python command is unknown to the system, while the commands python3 and python3.7 both use Python 3.7.\nAll good until now, but I can't seem to get pip working.\nCommand pip returns command not found, while python3 -m pip, python3.7 -m pip, python3 -m pip3, and python3.7 -m pip3 return No module named pip. The only pip command that works is pip3.\nNow whatever package I install via pip3 does not seem to install properly. For example, pip3 install tornado returns Requirement already satisfied, but when I try to import tornado in Python 3.7 I get ModuleNotFoundError: No module named 'tornado'. 
From this, I understand that my pip only works with Python 3.6, and not with 3.7.\nPlease tell me how I can use pip with Python 3.7, thank you.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":38,"Q_Id":65258596,"Users Score":0,"Answer":"I think the packages you install will be installed for the previous version of Python. I think you should update the native OS Python like this:\n\nInstall the python3.7 package using apt-get\nsudo apt-get install python3.7\nAdd python3.6 & python3.7 to update-alternatives:\nsudo update-alternatives --install \/usr\/bin\/python3 python3 \/usr\/bin\/python3.6 1\nsudo update-alternatives --install \/usr\/bin\/python3 python3 \/usr\/bin\/python3.7 2\nUpdate python3 to point to Python 3.7:\nsudo update-alternatives --config python3\nTest the version:\npython3 -V","Q_Score":0,"Tags":"python,python-3.x,unix,pip,centos","A_Id":65259268,"CreationDate":"2020-12-11T21:06:00.000","Title":"Python not using proper pip","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm running CentOS 8 that came with native Python 3.6.8. I needed Python 3.7 so I installed Python 3.7.0 from sources. Now, the python command is unknown to the system, while the commands python3 and python3.7 both use Python 3.7.\nAll good until now, but I can't seem to get pip working.\nCommand pip returns command not found, while python3 -m pip, python3.7 -m pip, python3 -m pip3, and python3.7 -m pip3 return No module named pip. The only pip command that works is pip3.\nNow whatever package I install via pip3 does not seem to install properly. For example, pip3 install tornado returns Requirement already satisfied, but when I try to import tornado in Python 3.7 I get ModuleNotFoundError: No module named 'tornado'. 
The same cannot be said when I try to import it in Python 3.6, which works flawlessly. From this, I understand that my pip only works with Python 3.6, and not with 3.7.\nPlease tell me how I can use pip with Python 3.7, thank you.","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":38,"Q_Id":65258596,"Users Score":1,"Answer":"It looks like your python3.7 does not have pip.\nInstall pip for your specific python by running python3.7 -m easy_install pip.\nThen, install packages by python3.7 -m pip install <package>\nAnother option is to create a virtual environment from your python3.7. The venv brings pip into it by default.\nYou create the venv by python3.7 -m venv <env-dir>","Q_Score":0,"Tags":"python,python-3.x,unix,pip,centos","A_Id":65259124,"CreationDate":"2020-12-11T21:06:00.000","Title":"Python not using proper pip","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I try to communicate with a Cylon device (UC32) over the BACnet protocol (BAC0) but I cannot discover any device. I also tried with Yabe and it does not return any result.\nIs there any document describing how to create my communication driver?\nOr any technique which can be used to connect with this device?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":66,"Q_Id":65276725,"Users Score":1,"Answer":"(Assuming you've set the default gateway address - for it to know where to return its responses, but only if necessary.)\nIf we start with the assumption that maybe the device is not (by default) listening for broadcasts or having some issue sending it - a bug maybe (although probably unlikely), then you could send a unicast\/directed message, e.g. 
use the Read-Property service to read back the (already known) BOIN (BACnet Object Instance Number), but you would need a (BACnet) client (application\/software) that provides that option, like possibly one of the 'BACnet stack' cmd-line tools or maybe via the (for the most part) awesome (but advanced-level) 'VTS (Visual Test Shell)' tool.\nAs much as it might be possible to discover what the device's BOIN is, it's better if you know it already - as a few devices might not make it easy to discover - i.e. you might have to resort to a round-robin bruteforce approach, firing lots of requests one after the other with only the BOIN incremented by 1, until you receive\/see a successful response.","Q_Score":0,"Tags":"python,iot,bacnet","A_Id":67459367,"CreationDate":"2020-12-13T14:28:00.000","Title":"How to communicate with Cylon BMS controller","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm running a few programs (NodeJS and Python) on my server (Ubuntu 20.04). I use the PM2 CLI to create and manage processes. Now I want to manage all processes through an ecosystem file. But when I run pm2 ecosystem, it just creates a sample file. I want to save my CURRENT PROCESSES to the ecosystem file and modify it. 
Does anyone know how to save pm2's current processes as an ecosystem file?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":388,"Q_Id":65277107,"Users Score":1,"Answer":"If you run pm2 save, pm2 creates a file named ~\/.pm2\/dump.pm2 with all running processes (with too many parameters, as it saves the whole environment in the file).\nEdit:\nThis file is similar to the output of the command pm2 prettylist","Q_Score":0,"Tags":"python,node.js,ubuntu,pm2","A_Id":66128586,"CreationDate":"2020-12-13T15:07:00.000","Title":"Create PM2 Ecosystem File from current processes","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I have a Flask application on an Ubuntu EC2 instance. Locally I can pass the parameters, e.g.:\n'''http:\/\/0.0.0.0:8888\/createcm?summary=VVV&change=Feauure '''\nwhere summary and change are parameters. How can I pass the same values from outside EC2, i.e. using the public DNS or IP address? 
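For reference, the query-string portion of such a URL decomposes the same way regardless of whether the request arrives via localhost or a public DNS name. A stdlib sketch of how those parameters break down:

```python
from urllib.parse import urlparse, parse_qs

def extract_params(url):
    # Split a URL like http://host:8888/createcm?summary=VVV&change=Feauure
    # into a dict of its query parameters (first value per key).
    query = urlparse(url).query
    return {k: v[0] for k, v in parse_qs(query).items()}

params = extract_params("http://0.0.0.0:8888/createcm?summary=VVV&change=Feauure")
# params == {"summary": "VVV", "change": "Feauure"}
```

Inside a Flask view the same values would typically come from request.args; only the host part of the URL changes when calling from outside the instance.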
Is there any other way to pass the parameters from outside the EC2 instance?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":60,"Q_Id":65288652,"Users Score":1,"Answer":"I don't know Flask in depth, but I'll try to help.\nYou should be able to call the same kind of URL ('http:\/\/0.0.0.0:8888\/createcm?summary=VVV&change=Feauure ') from outside, with the host replaced by the instance's public DNS or IP, but you need to open the port on the machine - on an EC2 instance that means the security group and any local firewall (I would do it with the UFW firewall).\nAnother way is to communicate through something like RabbitMQ or AWS S3.","Q_Score":0,"Tags":"python-3.x,amazon-web-services,flask,amazon-ec2,amazon-api-gateway","A_Id":65288811,"CreationDate":"2020-12-14T11:57:00.000","Title":"How to pass values to a Flask app in EC2 instance using Public DNS or IP address?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"How can I extract the filename after the '>' sign from a command line in Python? For example:\npython func.py -f file1.txt -c > output.txt\nI want to get the output file name 'output.txt' from the above command line.\nThanks","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":160,"Q_Id":65296341,"Users Score":0,"Answer":"You can't.\nWhen you write something like command > file in shell, the shell:\n\ncreates the file if it doesn't exist,\nopens it, and\nassigns the descriptor to the stdout of command.\n\nThe called process (and it doesn't matter if that's Python or something else) doesn't know what happens to the data after it's written because it only sees the descriptor provided by the shell. 
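A minimal sketch of the "pass it as an argument" alternative to shell redirection (the -o flag name is an assumption, not part of the original program):

```python
import argparse
import sys

def output_name(argv):
    # Accept the output path as a normal argument so the program
    # itself knows the filename, instead of relying on shell `>`.
    parser = argparse.ArgumentParser()
    parser.add_argument("-o", "--output", default=None)
    return parser.parse_args(argv).output

def open_output(argv):
    # Fall back to stdout when no output file was requested.
    name = output_name(argv)
    return open(name, "w") if name else sys.stdout
```

Invoked as python func.py -o output.txt, the program can both write the data and know the destination's name.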
It's a bit like having one end of a really long pipe: you know you can put stuff in, but you can't see what happens to it on the other end.\nIf you want to retrieve the file name in that call, you need to either:\n\npass it to your Python program as an argument and handle redirection from Python, e.g. python func.py -f file1.txt -c output.txt, or\noutput it from the shell, e.g. echo 'output.txt'; python func.py -f file1.txt -c > output.txt","Q_Score":0,"Tags":"python,command-line","A_Id":65297292,"CreationDate":"2020-12-14T20:53:00.000","Title":"Extract file name after '>' from command line in python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"We have a Datastage job that runs on multiple instances on different job cycles. The job could run concurrently or at different times. When one job cycle fails due to the failed Datastage job in that cycle, the other cycles fail as well. Is there a way to prevent this from happening, i.e., if the Datastage job failed in one cycle, can the other cycles continue to run using the same Datastage job that failed in the other cycle? Is there a way we can do an automatic reset of the failed job? If so, how? Thanks for your info and help.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":234,"Q_Id":65313568,"Users Score":1,"Answer":"You could set up automatic reset only by wrapping each cycle variant in its own sequence. Only sequence jobs support automatic reset after failure, as a property of the Job activity.\nI'm not sure what the case is if another cycle is running when you try to reset. You could test this. 
It may be that, if you need this reset functionality, you need clones rather than instances of the job.","Q_Score":0,"Tags":"python,db2,datastage","A_Id":65314282,"CreationDate":"2020-12-15T20:55:00.000","Title":"Automatic restart of multi-instance Datastage job with DSJE_BADSTATE fail status","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a script that uses the requests library. It is a web scraper that runs for at least 2 days and I don't want to leave my laptop on for that long. So, I wanted to run it on the cloud but after a lot of trying and reading the documentation, I could not figure out a way to do so.\nI just want this: when I run python my_program.py it shows the output on my command line but runs it using Google Cloud services. I already have an account and a project with billing enabled. I already installed the GCP CLI tool and can run it successfully.\nMy free trial has not ended. I have read quickstart guides but as I am a complete beginner regarding the cloud, I don't understand some of the terms.\nPlease, help me","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":48,"Q_Id":65317593,"Users Score":1,"Answer":"I think you'll need to set up a Google Cloud Compute Engine instance for that. It's basically a reserved computer\/machine where you can run your code. Here are some steps to get your program running on the cloud:\n\nSpin up a Compute Engine instance.\nGain access to it (through ssh).\nUpload your code there.\nInstall any dependencies that you may have for your script.\nInstall tmux and start a tmux session.\nRun the script inside that tmux session. 
Depending on your program, you should be able to see some output inside the session.\nDetach it.\n\nYour code is now executing inside that session.\nFeel free to disconnect from the Compute Engine instance now and check back later by attaching to the session after connecting back into your instance.","Q_Score":0,"Tags":"python,google-cloud-platform","A_Id":65317772,"CreationDate":"2020-12-16T05:00:00.000","Title":"How do I control my Python program on the local command line using Google Cloud","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I use Remote-SSH to connect to the server (Ubuntu 20.04) and I find that if I click the button to install a Python module, it is only installed for one user.\nFor example:\n\nxxx is not installed, install?\n\nThen I find that the command in the terminal is:\npip install -U module_name --user\nSo I tried to add the configuration in settings.json and install again.\n\"python.globalModuleInstallation\": true\nThe terminal has no response, however. Is this a bug?\nThough I can type the install command in the terminal myself, I still want to know if VS Code can install the module globally by itself.\nThanks.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":243,"Q_Id":65326646,"Users Score":0,"Answer":"To install it at the global level (for all users) you need to install it as the root user or as an administrator.\nIn short, you must grant it admin privileges.\nUse sudo on Linux or run your VS Code as admin.\nRunning VS Code as admin solved my issue. 
Give it a try.","Q_Score":0,"Tags":"python,visual-studio-code,vscode-settings,vscode-remote","A_Id":65326768,"CreationDate":"2020-12-16T15:50:00.000","Title":"python global module installation when using remote SSH extension","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"To exit my pipenv in Windows cmd, I need to type exit. deactivate works, but it does not really exit from pipenv. Instead, it just deactivates the virtualenv created by pipenv. However, in PyCharm's terminal, the terminal tab just closes without exiting pipenv when I type exit, making it impossible to exit from pipenv properly. Is there a workaround for this? Thank you.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":118,"Q_Id":65357917,"Users Score":1,"Answer":"I don't really know if you still need the answer, but there may be others that do, so I'll still share what worked for me.\nNOTE: I used BackBox Linux (an Ubuntu-based hacking distro) for this and not Windows; however, since PyCharm is an IDE, it should still have the same settings.\nTyping deactivate only works temporarily - when you open a new terminal, the virtualenv will still persist - so I decided to remove the Python interpreter altogether.\nFor this, you'll need to go to the top left corner of your PyCharm IDE where it says 'File', select Settings (or just press Ctrl+Alt+S),\ngo to 'Project', and you'll see 2 options:\n\nPython Interpreter\nProject Structure\n\nClick on Python Interpreter, go to the drop-down menu and select No interpreter.\nAlternatively, you could just look at the bottom right corner of PyCharm, just below Event Log; you should see something like Pipenv (your_project_name) [Python 3.x.x].\nClick it and select Interpreter settings; it will still take you to the same settings page. Click the drop-down 
and select No Interpreter.\nThat's it! Good luck!","Q_Score":1,"Tags":"python,pycharm,pipenv","A_Id":68845375,"CreationDate":"2020-12-18T13:29:00.000","Title":"How do I exit from pipenv in pycharm?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"Python launcher crashes since upgrading to macOS 11.1 - I have checked that I am on the latest version of Python 3.\nI get the error macOS 11 or later required ! Abort trap: 6","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":223,"Q_Id":65358897,"Users Score":0,"Answer":"I had to install the macOS Python package directly from python.org - it wouldn't work otherwise.","Q_Score":0,"Tags":"python,macos,crash,launcher","A_Id":65403356,"CreationDate":"2020-12-18T14:36:00.000","Title":"macOS 11.1 - Python launcher crashes","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a large CSV file (>100 GB) that I want to read into memory and process in chunks. There are two constraints I have:\n\nObviously I cannot read the entire file into memory. I only have about 8GB of RAM on my machine.\nThe data is tabular and unordered. I need to read the data in groups.\n\n| Ticker | Date | Field1 | Field2 | Field3 |\n| AAPL | 20201201 | 0 | 0 | 0 |\n| AAPL | 20201202 | 0 | 0 | 0 |\n| AAPL | 20201203 | 0 | 0 | 0 |\n| AAPL | 20201204 | 0 | 0 | 0 |\n| NFLX | 20201201 | 0 | 0 | 0 |\n| NFLX | 20201202 | 0 | 0 | 0 |\n| NFLX | 20201203 | 0 | 0 | 0 |\n| NFLX | 20201204 | 0 | 0 | 0 |\n\nThe concern here is that the data has to be read in groups, grouped by Ticker and date. Say I want to read 10,000 records in each batch. 
The boundary of that batch should not split groups, i.e. all the AAPL data for December 2020 should end up in the same batch. That data should not appear in two batches.\nWhen my co-workers face a situation like this, they usually create a bash script where they use awk, cut, sort, and uniq to divide the data into groups and write out multiple intermediate files to disk. Then they use Python to process these files. I was wondering if there is a homogenous Python\/Pandas\/Numpy solution to this.","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":134,"Q_Id":65384208,"Users Score":0,"Answer":"I would look into two options: Vaex and Dask.\nVaex seems to be focused exactly on what you need: lazy processing and very large datasets. Check their GitHub. However, it seems that you need to convert the files to HDF5, which may be a little time-consuming.\nAs far as Dask is concerned, I wouldn't count on success. Dask is primarily focused on distributed computation and I am not really sure if it can process large files lazily. But you can try and see.","Q_Score":3,"Tags":"python,pandas,numpy","A_Id":65384605,"CreationDate":"2020-12-20T19:56:00.000","Title":"How do you read a large file with unsorted tabular data in chunks in Python?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a large CSV file (>100 GB) that I want to read into memory and process in chunks. There are two constraints I have:\n\nObviously I cannot read the entire file into memory. I only have about 8GB of RAM on my machine.\nThe data is tabular and unordered. 
I need to read the data in groups.\n\n| Ticker | Date | Field1 | Field2 | Field3 |\n| AAPL | 20201201 | 0 | 0 | 0 |\n| AAPL | 20201202 | 0 | 0 | 0 |\n| AAPL | 20201203 | 0 | 0 | 0 |\n| AAPL | 20201204 | 0 | 0 | 0 |\n| NFLX | 20201201 | 0 | 0 | 0 |\n| NFLX | 20201202 | 0 | 0 | 0 |\n| NFLX | 20201203 | 0 | 0 | 0 |\n| NFLX | 20201204 | 0 | 0 | 0 |\n\nThe concern here is that the data has to be read in groups, grouped by Ticker and date. Say I want to read 10,000 records in each batch. The boundary of that batch should not split groups, i.e. all the AAPL data for December 2020 should end up in the same batch. That data should not appear in two batches.\nWhen my co-workers face a situation like this, they usually create a bash script where they use awk, cut, sort, and uniq to divide the data into groups and write out multiple intermediate files to disk. Then they use Python to process these files. I was wondering if there is a homogenous Python\/Pandas\/Numpy solution to this.","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":134,"Q_Id":65384208,"Users Score":0,"Answer":"How about this:\n\nOpen the file.\nLoop over the lines; for each line read:\n\n\nparse the ticker;\nif not done already:\n\ncreate and open a file for that ticker (\"ticker file\");\nadd it to a dict where key=ticker and value=file handle;\n\n\nwrite the line to the ticker file.\n\n\nClose the ticker files and the original file.\nProcess each single ticker file.","Q_Score":3,"Tags":"python,pandas,numpy","A_Id":65384348,"CreationDate":"2020-12-20T19:56:00.000","Title":"How do you read a large file with unsorted tabular data in chunks in Python?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm trying to manipulate a txt file that uses en dashes, but cmd reads it as \u00e2\u20ac\u201c. 
Em dashes also have broken formatting and are displayed as \u00e2\u20ac\u201d\nThe funny thing is that if I use both symbols inside the script (.py file) and pass them to a print command, everything is displayed correctly. In the interpreter there is also no problem at all.\nIs there any way I can make it recognize those characters before importing the file? Thank you!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":121,"Q_Id":65387109,"Users Score":0,"Answer":"I no longer need help with this since I was able to figure it out on my own, but I'm keeping it here since it might help others in the future.\nThe problem was that Python was opening the file as ANSI, while due to the special characters the file had to be opened as UTF-8. Adding encoding='utf-8' to the open() call solved the problem.","Q_Score":2,"Tags":"python,formatting,windows-10","A_Id":65401833,"CreationDate":"2020-12-21T03:07:00.000","Title":"En dash\/ Em dash breaking txt file formatting when trying to read in cmd","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"When I run the .exe from my project folder in a terminal with dist\\app\\app.exe it runs fine; I can see my output in the terminal etc.\nHowever, when double-clicking the .exe I just get a flashing terminal window.\nDoes anyone have an idea or a clue?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":899,"Q_Id":65393282,"Users Score":0,"Answer":"When double-clicking you are going to run your application and it will close immediately after completion. The only exception is if the application is going to ask for input. 
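One common way to confirm this and keep the window open long enough to read the output is to pause at the end of the script when it is attached to an interactive console. A sketch, not PyInstaller-specific:

```python
import sys

def pause_before_exit(prompt="Press Enter to exit..."):
    # When double-clicked, the console window closes as soon as the
    # program ends; waiting for input keeps it open. When stdin is
    # not interactive (piped input, CI, etc.) the pause is skipped.
    if sys.stdin is not None and sys.stdin.isatty():
        input(prompt)

# Call pause_before_exit() as the last statement of your program.
```

If the window now stays open and shows a traceback, the "flashing window" was simply the program crashing or finishing before you could read anything.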
This means that your application most likely runs fine.\nIt is also possible that you opened the command line as Administrator and therefore the application runs fine there, but when you double-click it is not executed because it lacks access. It is not possible to tell without a closer investigation though.","Q_Score":0,"Tags":"python,python-3.x,windows,pyinstaller,executable","A_Id":65393406,"CreationDate":"2020-12-21T13:03:00.000","Title":"Pyinstaller .exe works from terminal but not by double-clicking -> flashing console window","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"Every time I install Python on my Windows 10 operating system, I get an error importing OpenCV.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":31,"Q_Id":65395138,"Users Score":0,"Answer":"Try installing the OpenCV package and reinstalling Python. Also you might need to downgrade your Python version.","Q_Score":0,"Tags":"python,opencv-python","A_Id":65395221,"CreationDate":"2020-12-21T15:05:00.000","Title":"How do I install Python in my operating system properly","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am on an Ubuntu machine and I want to use Python in my C code, but when I include the Python.h header file, it shows an error:\nPython.h: No such file or directory\nIs there any method for this? 
I have already tried:\nsudo apt-get install python3-dev and\nsudo apt-get install python-dev\nbut it keeps showing the error.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":32,"Q_Id":65428162,"Users Score":1,"Answer":"The Python.h file is not in the default compiler include path.\nAdd the output of pkg-config --cflags python3 to your compiler command line.\nNow the compiler will know where to find Python.h (and any dependencies it may have).","Q_Score":0,"Tags":"c,python-3.8,ubuntu-20.04","A_Id":65431056,"CreationDate":"2020-12-23T17:08:00.000","Title":"How to make Python.h file work in Ubuntu?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"Trying apt(-get) doesn't work, pip doesn't work, and downloading the .deb package itself doesn't work either, so here we are. 
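Returning to the Python.h answer just above: the same include directory that pkg-config reports can be queried from Python itself, which is handy for checking whether the dev headers are actually installed. A sketch:

```python
import os
import sysconfig

def python_include_dir():
    # Directory that should contain Python.h for this interpreter;
    # roughly what `pkg-config --cflags python3` points the compiler at.
    return sysconfig.get_paths()["include"]

def has_dev_headers():
    # True only when the python3-dev (or equivalent) package is present.
    return os.path.exists(os.path.join(python_include_dir(), "Python.h"))

print(python_include_dir(), has_dev_headers())
```

If has_dev_headers() prints False even after installing python3-dev, the compiler and the installed headers likely belong to different Python builds.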
I'll post any error messages deemed necessary, thanks in advance.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":187,"Q_Id":65431822,"Users Score":0,"Answer":"If anyone runs into trouble with this: I managed to solve it by adding deb http:\/\/ftp.nl.debian.org\/debian stretch main to \/etc\/apt\/sources.list, then running sudo apt update and finally sudo apt install python-wxgtk3.0","Q_Score":0,"Tags":"python,installation,debian-based","A_Id":65432642,"CreationDate":"2020-12-23T22:43:00.000","Title":"How do I install python-wxgtk3.0 on Parrot-sec (pretty much debian)?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a while loop in a shell command:\nsp = subprocess.Popen(\"while [ 1 ]; do say \\\"hello world\\\"; done;\").\nHowever, when I send sp.kill(), the loop does not terminate immediately. Rather, it finishes the iteration it is on and then exits. I notice that the version without the loop, \"say \\\"hello world\\\", will terminate immediately. I've tried sending the various C codes using signal, but nothing seems to work. How can I immediately exit from the loop?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":121,"Q_Id":65432191,"Users Score":0,"Answer":"I think the best way is to kill the process using the kill command with the -9 option, which sends the SIGKILL signal. This doesn't let the process handle the signal and terminates it immediately. There are other ways of sending this signal. 
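One caveat when doing this from Python: sp.kill() signals only the shell, not the command the shell is currently running. Putting the job in its own process group lets you kill both at once. A POSIX sketch, with sleep standing in for say:

```python
import os
import signal
import subprocess

# Start the loop in a new session so the shell and its children
# share a process group we can signal together.
sp = subprocess.Popen(
    "while true; do sleep 10; done",
    shell=True,
    start_new_session=True,
)
# SIGKILL the whole group: the loop *and* the command it is running.
os.killpg(os.getpgid(sp.pid), signal.SIGKILL)
sp.wait()
```

Without start_new_session, os.killpg would target the parent Python process's own group, so the two pieces go together.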
Like os.kill().\nYou just need to figure out the PID of the process and then kill it.","Q_Score":0,"Tags":"python-3.x,bash","A_Id":65454279,"CreationDate":"2020-12-23T23:31:00.000","Title":"Exit a shell script immediately with subprocess","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am trying to install OpenCV on WSL + Ubuntu 20.04 with Python 3.8. I am trying to install using miniconda without any success.\nAfter searching over the internet, it seems that OpenCV may not be supported on Python 3.8. If anyone has done this successfully, I would appreciate some help.\nUpdate: Solved. Please check my answer.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1687,"Q_Id":65434139,"Users Score":1,"Answer":"Thanks to Christoph's suggestion, I decided to install using pip in a conda virtual environment. I did the following:\n\nRun conda create -n env_name and source activate env_name, where env_name is the name of your virtual environment.\nTo install pip into my venv directory, I ran:\nconda install pip\nI went to the actual venv folder in the anaconda directory. 
It should be somewhere like:\n\/home\/$USER\/miniconda3\/envs\/env_name\/\nI then installed new packages by doing\n\/home\/$USER\/miniconda3\/envs\/env_name\/bin\/pip install opencv-python","Q_Score":2,"Tags":"python-3.x,opencv,windows-subsystem-for-linux,ubuntu-20.04","A_Id":65450314,"CreationDate":"2020-12-24T04:55:00.000","Title":"How to install openCV on WSL + UBUNTU20.04 + python3.8 using Anaconda?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"atbswp is a piece of software that helps you automate all your mouse clicks, movements and keyboard keys, so you can automate everything you do and repeat or replay it.\nBy using crontab you can schedule it, so you can run an automated sequence at a specific time.\nThe app generates a Python file, and you can run it inside the app or in a terminal without needing the app.\nThe problem is: when I run it in a terminal it runs OK, but when I put it in crontab it doesn't run and I get errors in the crontab log file.\nI really need help; I think this is something amazing for everyone.\nThis is the cron log error:\nTraceback (most recent call last):\nFile \"\/home\/zultan\/bot1\", line 4, in <module>\nimport pyautogui\nFile \"\/home\/zultan\/.local\/lib\/python3.8\/site-packages\/pyautogui\/__init__.py\", line 241, in <module>\nimport mouseinfo\nFile \"\/home\/zultan\/.local\/lib\/python3.8\/site-packages\/mouseinfo\/__init__.py\", line 223, in <module>\n_display = Display(os.environ['DISPLAY'])\nFile \"\/usr\/lib\/python3.8\/os.py\", line 675, in __getitem__\nraise KeyError(key) from None\nKeyError: 'DISPLAY'","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":95,"Q_Id":65436004,"Users Score":0,"Answer":"I found the solution, for anyone who needs it:\nput this in the crontab 
file (opened with crontab -e):\nDISPLAY=:0\nXAUTHORITY=\/run\/user\/1000\/gdm\/Xauthority","Q_Score":0,"Tags":"python,linux,automation,cron,click","A_Id":65651053,"CreationDate":"2020-12-24T08:49:00.000","Title":"atbswp python file is not running on crontabs","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I want to compare the different commits between two repos, for example Android_10.0_r2 and Android_11.0_r3.\nThe changes are frequent, and since Google merges internal code into AOSP, some commits even older than Android_10.0_r2 get merged into Android_11.0_r3; I don't want to miss those when checking git log.\nSo I record all commit logs in both repos and select the different change_ids\/commit_ids.\nBut since the git log in AOSP is huge and it has 400+ repos, it runs for over an hour on my PC.\nIs there a git command that can directly get the differing commit_ids between two repos?\ngit diff between the two repo directories shows diffs of the files; since the changelist is long, a commit-message diff is more effective.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":348,"Q_Id":65438922,"Users Score":1,"Answer":"Every version of AOSP has its own manifest. Use repo sync -n -m \/path\/to\/manifest1.xml and repo sync -n -m \/path\/to\/manifest2.xml to fetch the repositories' data for both. -n instructs repo to fetch data only and not checkout\/update the worktrees; it can be omitted if you want to see the real files.\nThen use repo diffmanifests \/path\/to\/manifest1.xml \/path\/to\/manifest2.xml to display the differing commits between the 2 code bases. It has an option --pretty-format= which works like --pretty= in git log.\nHowever, the output is still a bit rough. 
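One way to refine it is a small script that parses the two manifests directly and reports which projects moved. A sketch that assumes plain <project name/path/revision> entries, which is the common manifest shape:

```python
import xml.etree.ElementTree as ET

def project_revisions(manifest_xml):
    # Map each project to its pinned revision in a repo manifest.
    root = ET.fromstring(manifest_xml)
    return {
        p.get("path") or p.get("name"): p.get("revision")
        for p in root.iter("project")
    }

def changed_projects(manifest_a, manifest_b):
    # Projects present in both manifests whose revisions differ;
    # each of these can then be fed into `git log revA..revB`
    # to collect the detailed commit/change ids.
    a = project_revisions(manifest_a)
    b = project_revisions(manifest_b)
    return sorted(p for p in a.keys() & b.keys() if a[p] != b[p])
```

Restricting the per-project git log to only the projects this returns avoids walking history in all 400+ repositories.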
Another solution is making a script, in Python for example, to parse the 2 manifests and run git log or git diff to get the detailed information. It's much more flexible. In my experience, it won't take that long. Our code base has about 1500 repositories.","Q_Score":0,"Tags":"python,git,shell","A_Id":65444644,"CreationDate":"2020-12-24T13:24:00.000","Title":"git diff in two repos and got commit id list","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am not able to run my local server through the Atom terminal, even though all the requirements are met. This is the error I get when I run python manage.py runserver:\nFile \"manage.py\", line 17\n) from exc\n^\nSyntaxError: invalid syntax\nI tried python3 manage.py runserver as suggested by some people online as a solution for Mac users, but it gave a different error:\nImportError: Couldn't import Django. Are you sure it's installed and available on your PYTHONPATH environment variable? Did you forget to activate a virtual environment?\nSharing the screenshot of my Atom terminal.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":241,"Q_Id":65492797,"Users Score":0,"Answer":"Make sure you also installed Django for Python 3.\nBy doing pip -V you can verify that your pip belongs to the Python installation you expected.\nYou might need to use pip3 if you're running Python 2 and 3 in parallel. 
Alternatively, you can use python3 -m pip install Django to make sure it's for python 3","Q_Score":0,"Tags":"python,django","A_Id":65492865,"CreationDate":"2020-12-29T13:20:00.000","Title":"Unable to run python manage.py runserver command even though django is installed","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I want to install pyaudio on python 3.8 but after reading a lot, I found out that it is best to use python 3.6. Now to install pyaudio, I want to install it on python 3.6 on my terminal but whenever I type python --version, it shows this Python 2.7.16 version.\nHow can I do the change?\nP.S. - I use pycharm to write the code.","AnswerCount":2,"Available Count":1,"Score":-0.0996679946,"is_accepted":false,"ViewCount":317,"Q_Id":65517546,"Users Score":-1,"Answer":"You can use python3.6 --version if you have it installed, it should give you Python 3.6.x. If you're trying to install pyaudio with Python 3.6 (I assume with pip), you can use pip3.6 ....","Q_Score":0,"Tags":"python,python-3.x,python-2.7,python-module,python-venv","A_Id":65517651,"CreationDate":"2020-12-31T06:46:00.000","Title":"How to switch Python versions in mac terminal?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I used sublime text till now for python, but today I installed wing personal for python.\nI installed the module \"sympy\" both manually and by pip. I worked fine in sublime text, but when I wrote import sympy in the wing ide, it showed this error:\nbuiltins.ImportError: No module named sympy. 
What is happening?\nI use wing personal, os: windows 10","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":168,"Q_Id":65520090,"Users Score":0,"Answer":"Take a look at Show Python Environment in the Source menu and compare the interpreter there to the value of sys.executable (after 'import sys') in Python outside of Wing or in Sublime. You are probably not using the same interpreter.\nThis can be changed from Project Properties in the Project menu by setting Python Executable under the Environment tab:\nIf you're using a base install of Python then select Command Line and select the interpreter from the drop down or enter the full path to python or python.exe.\nIf you're using a virtualenv you can also do that and enter the virtualenv's Python or select Activated Env and enter the path to the virtualenv's activate.","Q_Score":1,"Tags":"python,python-3.x,python-import,sympy,wing-ide","A_Id":65522570,"CreationDate":"2020-12-31T11:10:00.000","Title":"Error in wing IDE: no module named \"sympy\"","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"So I am currently trying to make something that uses mkvmerge to merge audio, video and an unknown amount of subtitle files. For now, those subtitle filepaths are in a list.\nHere is my issue. I cannot for the life of me think of a way of putting this into subprocess run, without throwing an error. 
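The interpreter check recommended in the Wing IDE answer above takes only two lines; running them inside Wing and again in Sublime (or a terminal) shows immediately whether the two environments share an interpreter:

```python
import sys

# Run these two lines inside Wing and again in Sublime / a terminal;
# differing paths mean the editors use different interpreters (and thus
# different site-packages, which would explain the missing sympy).
print(sys.executable)  # full path of the interpreter currently running
print(sys.version)     # its version string
```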
If I combine the subtitle names into a single string, mkvmerge throws an error, so each file needs to be passed as its own quoted argument.\nSo, the command without subtitles looks like this:\nsubprocess.run(['C:\\MKVToolNix\\mkvmerge.exe', '-o', f'E:\\Videos\\{output_filename}.mkv', 'E:\\Videos\\viddec.mp4', 'E:\\Videos\\auddec.mp4'])\nThis will produce a working video.\nAFAIK, a properly formatted subprocess call including two subtitles would need to look like this:\nsubprocess.run(['C:\\MKVToolNix\\mkvmerge.exe', '-o', f'E:\\Videos\\{output_filename}.mkv', 'E:\\Videos\\viddec.mp4', 'E:\\Videos\\auddec.mp4', 'E:\\Videos\\eng.srt', 'E:\\Videos\\nor.srt'])\nIs it possible to add variables like that, as individual strings, into a subprocess.run call so that it will function properly? Or is there perhaps a different method\/call I cannot think of?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":35,"Q_Id":65533234,"Users Score":1,"Answer":"You can build the list of arguments before the subprocess.run call, as long as you need it to be, and pass that list in the call.","Q_Score":0,"Tags":"python,python-3.x,subprocess","A_Id":65533259,"CreationDate":"2021-01-01T20:01:00.000","Title":"Subprocess.run and an unknown amount of files, how?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"Is there a way to add an expiry date to a Huey dynamic periodic task?\nJust like the option in a celery task - \"some_celery_task.apply_async(args=('foo',), expires=expiry_date)\" - to add an expiry date while creating the task.\nI want to add the expiry date while creating the Huey dynamic periodic task. I used \"revoke\"; it worked as it was supposed to, but I want to stop the task completely after the expiry date, not just revoke it.
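The accepted answer's point - build the argument list first, then pass it to subprocess.run - can be sketched like this; the mkvmerge and media paths are the hypothetical ones from the question:

```python
import subprocess  # only needed for the real call, shown commented out below

def build_mkvmerge_cmd(output_filename, subtitle_files):
    """Base mkvmerge arguments plus any number of subtitle paths."""
    cmd = ['C:\\MKVToolNix\\mkvmerge.exe', '-o',
           f'E:\\Videos\\{output_filename}.mkv',
           'E:\\Videos\\viddec.mp4', 'E:\\Videos\\auddec.mp4']
    cmd.extend(subtitle_files)   # each path remains a separate argument
    return cmd

cmd = build_mkvmerge_cmd('merged', ['E:\\Videos\\eng.srt', 'E:\\Videos\\nor.srt'])
print(len(cmd))  # 7 -> exe, -o, output, video, audio, two subtitles
# subprocess.run(cmd)  # would invoke mkvmerge with every file as its own argv entry
```

Because the list can be extended in a loop, it works for zero subtitles or twenty; no quoting gymnastics are needed since subprocess passes each list element as one argv entry.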
When the Huey dynamic periodic task is revoked, a message is displayed on the Huey terminal saying the huey function is revoked (whenever the crontab condition becomes true).\n(I am using Huey in django)\n(Extra)\nWhat I did to meet the need for this expiry date:\nI created a function which returns Day-Month pairs for crontab.\nFor example,\nstart date = 2021-1-20, end date = 2021-6-14\nthe function will return Days_Month: [['20-31',1], ['*','2-5'], ['1-14','6']]\nThen I call the Huey dynamic periodic task (three times in this case).\n(The Days_Month function will return Day-Month pairs as per the requirement - daily, weekly, monthly or repeating after n days.)\nIs there a better way to do this?\nThank you for the help.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":247,"Q_Id":65546519,"Users Score":0,"Answer":"The best solution will depend on how often you need this functionality of having periodic tasks with a specific end date, but the ideal approach probably involves your database.\nI would create a database model (let's call it Job) with fields for your end_date, a next_execution_date and a field that indicates the interval between repetitions (like x days).\nYou would then create a periodic task with huey that runs every day (or even every hour\/minute if you need a finer grain of control). Every time this periodic task runs you would then go over all your Job instances and check whether their next_execution_date is in the past. If so, launch a new huey task that actually executes the functionality you need to have periodically executed per Job instance. On success, you calculate the new next_execution_date using the interval.\nSo whenever you want a new Job with a new end_date, you can just create this in the django admin (or make an interface for it) and you would set the next_execution_date as the first date where you want it to execute.\nYour final solution would thus have the Job model and two huey decorated functions.
One for the periodic task that merely checks whether Job instances need to be executed and updates their next_execution_date, and another one that actually executes the periodic functionality per Job instance. This way you don't have to do any manual cancelling and you only need 1 periodic task that just runs indefinitely but doesn't actually execute anything if there are no Job instances that need to be run.\nNote: this will only be a reasonable approach if you have multiple of these tasks and you potentially want to control the end_dates in your interface.","Q_Score":0,"Tags":"django,cron,django-celery,periodic-task,python-huey","A_Id":65550829,"CreationDate":"2021-01-03T04:00:00.000","Title":"How to add expiry date in HUEY dynamic periodic task just like in celery tasks?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I am creating a webapp using django and react in which multiple users can control playback of a spotify player (play\/pause, skip). This is useful in the case of a house party or something where people are listening to a common device. I am wondering if I can integrate the spotify web player sdk so that all users can listen to synced playback and control it at the same time remotely. I understand a single spotify account needs to register that webapp to be used as a device.
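The Job-model scheduling loop described in the Huey answer above boils down to plain date arithmetic. Here is a minimal sketch with dicts standing in for Django model instances and a list append standing in for launching the real huey task; all field names are assumptions taken from the answer's wording:

```python
from datetime import date, timedelta

def run_due_jobs(jobs, today):
    """Run each job whose next_execution_date has arrived and whose end_date
    has not passed, then advance its next_execution_date by the interval."""
    executed = []
    for job in jobs:
        if job["next_execution_date"] <= today <= job["end_date"]:
            executed.append(job["name"])  # stand-in for launching the huey task
            job["next_execution_date"] = today + timedelta(days=job["interval_days"])
    return executed

jobs = [
    {"name": "report", "end_date": date(2021, 6, 14),
     "next_execution_date": date(2021, 1, 20), "interval_days": 7},
    {"name": "expired", "end_date": date(2021, 1, 1),
     "next_execution_date": date(2021, 1, 1), "interval_days": 1},
]
executed = run_due_jobs(jobs, date(2021, 1, 20))
print(executed)  # ['report']
```

The single periodic huey task would call something like run_due_jobs once per day; jobs past their end_date simply stop matching, so no revoke call is ever needed.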
My question is whether I can control the playback state when the page is opened by multiple users, so that they listen to a song synchronously.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":314,"Q_Id":65549146,"Users Score":0,"Answer":"IDEA:\nYou can make a \"Room\" in your webapp: every room generates a unique ID and a password, and any person who wants to enter the room uses this password.\nHow to sync every song?\nThis is the hard part, but you can do it using sockets (my recommendation; you can also use something else). Sockets will enable you to send information (if anyone changed, played, paused or stopped the song) to every user who is accessing that room, and you then change, play, pause or stop the song accordingly.","Q_Score":0,"Tags":"javascript,python,reactjs,django,spotify","A_Id":65549258,"CreationDate":"2021-01-03T11:12:00.000","Title":"Playback state control of spotify web player sdk using spotify api","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I am trying to understand (from a python noob perspective) how I could set up GNU Radio for use across multiple OSs\/machines. Ideally I'd use GRC only on my Ubuntu machine, but run the .py files from both my windows machine and my raspberry pi. I've seen this thread that implies that using venv is the best alternative (which I'd love), but when I used GNU Radio back in 2018 it seemed that pybombs was the best alternative and usage on MacOS or windows was rather bad.\nIs there a good way to handle usage across multiple OSs?
I want to be sure before installing the required packages and before asking the other guys who'll help me with the project to do so.\nThanks in advance","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":73,"Q_Id":65553306,"Users Score":0,"Answer":"Install it on your Ubuntu machine. Open a remote shell into that machine from the other 2 machines to run the command line python programs. It's hard enough to get everything working on 1 machine, never mind 3, never mind moving all the hardware around risking damage from static and accidents.\nA VM is also an option, but you still have to move hardware around, and depending on what you're doing a VM might be too slow, and that's not an option on a RPi.","Q_Score":0,"Tags":"python,gnuradio","A_Id":65556111,"CreationDate":"2021-01-03T18:17:00.000","Title":"Running GNU Radio in multiple OS's","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I want to make a small .exe file that will work on any PC. Is it possible to make, and if yes, what is the procedure?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":74,"Q_Id":65564592,"Users Score":0,"Answer":"Since you wrote a terrible question, I will write a terrible answer:\nYes, you can make .exe files from python. You need a package called \"pyinstaller\" which can bundle your .py files into .exe files. Note that it builds executables for the operating system it runs on, so it cannot cross-compile for other operating systems.
A .exe file only works on Windows, so a PC running a Linux distro won't be able to run it.\nEdit: pyinstaller is also useful when you want to just make a .exe","Q_Score":0,"Tags":"python,machine-learning,software-design","A_Id":65564631,"CreationDate":"2021-01-04T14:43:00.000","Title":"is it possible to make software using python only?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm using an sql server and rabbitmq as a result backend\/broker for celery workers. Everything works fine, but for future purposes we plan to use several remote workers on different machines that need to monitor this broker\/backend. The problem is that you need to provide direct access to your broker and database url, which opens many security risks. Is there a way to provide a remote celery worker the remote broker\/database via ssh?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":219,"Q_Id":65565698,"Users Score":0,"Answer":"It seems like ssh port forwarding is working, but still I have some reservations.\nMy plan works as follows:\n\nport forward both the remote database and the broker to local ports (autossh) on the remote celery worker's machine.\nnow the celery workers consume the tasks and write to the remote database via the forwarded local ports.\n\nIs this implementation bad, as no one seems to use remote celery workers like this?\nAny different answer will be appreciated.","Q_Score":0,"Tags":"python,ssh,rabbitmq,celery","A_Id":65580702,"CreationDate":"2021-01-04T15:57:00.000","Title":"Python celery backend\/broker access via ssh","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I realize similar
questions have been asked, however they have all been about a specific problem, whereas I don't even know how I would go about doing what I need to.\nThat is: from my Django webapp I need to scrape a website periodically while my webapp runs on a server. The first options that I found were \"django-background-tasks\" (which doesn't seem to work the way I want it to) and 'celery-beat', which recommends getting another server if I understood correctly.\nI figured just running a separate thread would work, but I can't seem to make that work without it interrupting the server and vice-versa, and it's not the \"correct\" way of doing it.\nIs there a way to run a task periodically without the need for a separate server and a request to be made to an app in Django?","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":87,"Q_Id":65567300,"Users Score":1,"Answer":"'celery-beat' which recommends getting another server if i understood correctly.\n\nYou can host celery (and any other needed components) on the same server as your Django app. They would be separate processes entirely.\nIt's not an uncommon setup to have a Django app + celery worker(s) + message queue all bundled into the same server deployment.
Deploying on separate servers may be ideal, just as it would be ideal to distribute your Django app across many servers, but is by no means necessary.","Q_Score":1,"Tags":"python,django","A_Id":65568135,"CreationDate":"2021-01-04T17:44:00.000","Title":"Running \"tasks\" periodically with Django without seperate server","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I realize similar questions have been asked, however they have all been about a specific problem, whereas I don't even know how I would go about doing what I need to.\nThat is: from my Django webapp I need to scrape a website periodically while my webapp runs on a server. The first options that I found were \"django-background-tasks\" (which doesn't seem to work the way I want it to) and 'celery-beat', which recommends getting another server if I understood correctly.\nI figured just running a separate thread would work, but I can't seem to make that work without it interrupting the server and vice-versa, and it's not the \"correct\" way of doing it.\nIs there a way to run a task periodically without the need for a separate server and a request to be made to an app in Django?","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":87,"Q_Id":65567300,"Users Score":1,"Answer":"I'm not sure if this is the \"correct\" way but it was a cheap and easy way for me to do it.
I just created custom Django Management Commands and have them run via a scheduler such as CRON or in my case I just utilized Heroku Scheduler for my app.","Q_Score":1,"Tags":"python,django","A_Id":65568327,"CreationDate":"2021-01-04T17:44:00.000","Title":"Running \"tasks\" periodically with Django without seperate server","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I use Python Anaconda and Visual Studio Code for Data Science and Machine Learning projects.\nI want to learn how to use Windows Subsystem for Linux, and I have seen that tools such as Conda or Git can be installed directly there, but I don't quite understand the difference between a common Python Anaconda installation and a Conda installation in WSL.\nIs one better than the other? Or should I have both? How should I integrate WSL into my work with Anaconda, Git, and VS Code? What advantages does it have or what disadvantages?\nHelp please, I hate not installing my tools properly and then having a mess of folders, environment variables, etc.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":96,"Q_Id":65569685,"Users Score":0,"Answer":"If you use conda it's better to install it directly on Windows rather than in WSL. Think of WSL as a virtual machine in your current PC, but much faster than you think.\nIt's most useful use would be as an alternate base for docker. You can run a whole lot of stuff with Windows integration from WSL, which includes VS Code. 
You can launch VS code as if it is run from within that OS, with all native extension and app support.\nYou can also access the entire Windows filesystem from WSL and vice versa, so integrating Git with it won't be a bad idea","Q_Score":0,"Tags":"python,anaconda,windows-subsystem-for-linux,wsl-2","A_Id":65582721,"CreationDate":"2021-01-04T20:54:00.000","Title":"Installations on WSL?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"Is there a way to store the state of the python interpreter embedded in a C program (not the terminal interpreter or a notebook) and restore it later, resuming execution where it left off?\nOther questions and answers I found about this topic revolved around saving the state of the interactive shell or a jupyter notebook, or were for debugging. However my goal is to freeze execution and restore it after a complete restart of the program.\nA library which achieves a similar goal for the Lua language is called Pluto; however I don't know of any similar libraries or built-in ways to achieve the same in an embedded python interpreter.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":205,"Q_Id":65573489,"Users Score":1,"Answer":"No, there is absolutely no way of storing the entire state of the CPython interpreter as it is C code, other than dumping the entire memory of the C program to a file and resuming from that. It would however mean that you couldn't restart the C program independently of the Python program running in the embedded interpreter. Of course it is not what you would want.\nIt could be possible in a more limited case to pickle\/marshal some objects but not all objects are picklable - like open files etc.
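The pickling limitation mentioned in the answer above is easy to demonstrate: plain data structures round-trip through pickle, while objects tied to OS-level state, such as open file handles, raise TypeError:

```python
import pickle
import tempfile

# Plain data survives a pickle round-trip:
state = {"counter": 42, "items": [1, 2, 3]}
restored = pickle.loads(pickle.dumps(state))

# An open file handle does not - it wraps OS-level state that cannot
# meaningfully exist after a process restart:
with tempfile.TemporaryFile() as fh:
    try:
        pickle.dumps(fh)
        pickled = True
    except TypeError:
        pickled = False

print(restored == state, pickled)  # True False
```

This is why the program has to cooperate: it must separate the picklable application state from handles that need to be reopened after a restart.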
In general case the Python program must actively cooperate with freezing and restoring.","Q_Score":1,"Tags":"python,c,python-embedding","A_Id":65573874,"CreationDate":"2021-01-05T04:59:00.000","Title":"Save and restore python interpreter state","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"Updating the question.\nI am developing a command-line tool for a framework.\nand I am struggling with how to detect if the current directory is a project of my framework.\nI have two solutions in mind.\n\nsome hidden file in the directory\ndetect the project structure by files and folders.\n\nWhat do you think is the best approach?\nThank you,\nShay\nThank you very much","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":117,"Q_Id":65582283,"Users Score":0,"Answer":"In my opinion, a good idea would be to either have a project directory structure that you can use a signature for the project\/framework, that you can use within the tool as a list of signature-like structures, for example\nPROJECT_STRUCTURE_SIGNATURES = [ \"custom_project\", \"custom_project\/tests\", \"custom_project\/build\", \"custom_project\/config\", \"config\/environments\" ] and then just check if any(signature in os.getcwd() for signature in PROJECT_STRUCTURE_SIGNATURES).\nif the project structure is not too complex, I suppose that would be a start in order to identify the requirements that you're looking for.\nHowever, if this is not the case, then I suppose a dictionary-like structure that you could use to traverse the key-value pairs similar to the project's file structure and check the current directory against those would be a better idea, where if none of the elements from the nested dictionary traversal matches, then the directory is not within the project 
structure.","Q_Score":0,"Tags":"python,architecture","A_Id":67074897,"CreationDate":"2021-01-05T15:58:00.000","Title":"Detect if folder content match file and folders pattern","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am currently using Airflow to run a DAG (say dag.py) which has a few tasks, and then, it has a python script to execute (done via bash_operator). The python script (say report.py) basically takes data from a cloud (s3) location as a dataframe, does a few transformations, and then sends them out as a report over email.\nBut the issue I'm having is that airflow is basically running this python script, report.py, everytime Airflow scans the repository for changes (i.e. every 2 mins). So, the script is being run every 2 mins (and hence the email is being sent out every two minutes!).\nIs there any work around to this? Can we use something apart from a bash operator (bear in mind that we need to do a few dataframe transformations before sending out the report)?\nThanks!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":170,"Q_Id":65585318,"Users Score":0,"Answer":"Just make sure you do everything serious in the tasks, not in the python script itself. The script will be executed often by the scheduler, but it should simply create tasks and build dependencies between them.
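The signature-based detection from the project-structure answer above can be made a bit stricter by testing each signature path against the directory on disk instead of substring-matching os.getcwd(). A small sketch, where the signature names are hypothetical and all() is used so that every marker directory must exist:

```python
import os
import tempfile

# Hypothetical signature paths that mark a project of the framework:
SIGNATURES = ["custom_project", os.path.join("custom_project", "config")]

def looks_like_project(path, signatures=SIGNATURES):
    """True when every signature directory exists below `path`."""
    return all(os.path.isdir(os.path.join(path, s)) for s in signatures)

with tempfile.TemporaryDirectory() as root:
    before = looks_like_project(root)                      # empty dir
    os.makedirs(os.path.join(root, "custom_project", "config"))
    after = looks_like_project(root)                       # structure present

print(before, after)  # False True
```

Swapping all() for any() reproduces the looser check from the answer; which is appropriate depends on how distinctive each individual marker directory is.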
The actual work is done in the 'execute' methods of the tasks.\nFor example rather than sending email in the script you should add the 'EmailOperator' as a task and the right dependencies, so the execute method of the operator will be executed not when the file is parsed by scheduler, but when all dependencies (other tasks ) will complete","Q_Score":0,"Tags":"python,airflow","A_Id":65586460,"CreationDate":"2021-01-05T19:26:00.000","Title":"How do I stop Airflow from triggering my python scripts?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"In the same way that you would deploy a storage account or a upload a blob file from Python.\nI am essentially looking for the Python equivalent of the following bash commands\n\naz functionapp create --resource-group $RESOURCE_GROUP_NAME --os-type Linux --consumption-plan-location $AZ_LOCATION --runtime python --runtime-version 3.6 --functions-version 2 --name $APP_NAME --storage-account $STORAGE_ACCOUNT_NAME\n\nfunc new --name $FUNC_NAME --template \"Azure Queue Storage trigger\"\n\nfunc azure functionapp publish $APP_NAME --build-native-deps\n\n\nA cop-out would be to just have the Python script run the shell commands, but I am looking for a more elegant solution if one exists.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":389,"Q_Id":65590773,"Users Score":1,"Answer":"I am essentially looking for the Python equivalent of the following\nbash commands\n\nIf you check the python sdk of azure or check the REST API document of azure, you will find there is no Ready-made method.Basically, there are two situations to discuss:\nIf you want to deploy azure function to windows OS, then just use any python code to upload local function structure to related storage file share(The premise is that the function app has been 
created on azure.).\nBut if you want to deploy an azure function to a linux OS, it will not be as simple as deploying to a windows OS. It needs additional packaging operations, and for the specific logic you may need to check the underlying implementation of the azure cli.\n\nA cop-out would be to just have the Python script run the shell commands, but I am looking for a more elegant solution if one exists.\n\nFor a Linux-based azure function, I think you don\u2019t have to consider so much. Deploying a function app involves many additional operations, so using the command line (or running the commands from python) is the more recommended approach (there is no ready-made python code or REST API that can do what you want).","Q_Score":1,"Tags":"python,azure,azure-devops,azure-functions,azure-function-app","A_Id":65592524,"CreationDate":"2021-01-06T05:43:00.000","Title":"How can you deploy an Azure Function from within a Python script","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I downloaded a python script, but I have a problem with it: it stops working. When I stop the program and rerun it, it has a good feature to resume the process which was terminated last time, and it continues the process for some time but then stops working again. So,\nI want to create another script which terminates the real python script and reruns it every 5 mins...\nbut when the real python script starts it asks if we want to continue the old terminated process and we have to enter 'y'...\nCan anyone help me with this? You can use any language to create the rerunning script.
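A rerunning wrapper of the kind the question asks for might be sketched as follows. The 'y' fed to stdin and the restart window are assumptions about the downloaded script's resume prompt, and in the real use case the timeout would be 300 seconds rather than the fraction of a second used in this demo:

```python
import subprocess
import sys

def run_with_restart(cmd, timeout, max_runs):
    """Run `cmd`, feeding 'y' to its resume prompt; kill and restart it
    whenever it is still alive after `timeout` seconds."""
    restarts = 0
    for _ in range(max_runs):
        try:
            subprocess.run(cmd, input=b"y\n", timeout=timeout)
        except subprocess.TimeoutExpired:
            restarts += 1  # subprocess.run kills the child on timeout
    return restarts

# Demo: a stand-in child that would sleep far longer than the window allows.
restarts = run_with_restart([sys.executable, "-c", "import time; time.sleep(60)"],
                            timeout=0.5, max_runs=2)
print(restarts)  # 2
```

The accepted fix in the answer below this question (removing the prompt from the script itself) is simpler; a wrapper like this is only needed when the target script cannot be modified.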
Thank you","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":83,"Q_Id":65591563,"Users Score":0,"Answer":"Thank you everybody for your contributions here; after reading all your answers I finally resolved the issue. Here's what I did:\n\nI first changed the real python script: I deleted the code which asked if I wanted to continue the terminated process, so now it simply checks if any session exists, and if it does, it directly resumes the process.\nThen I created another Python program which simply reruns the real python file.\n\nOnce again, thank you everybody!","Q_Score":1,"Tags":"python,automation","A_Id":65592516,"CreationDate":"2021-01-06T07:09:00.000","Title":"How to create a script to automate another script?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"Let's say you have some proprietary python + selenium script that needs to run daily. If you host it on AWS, Google cloud, Azure, etc., are they allowed to see your script? What is the best practice to \"hide\" such a script when hosted online?\nAny way to \"obfuscate\" the logic, such as converting the python script to binary?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":60,"Q_Id":65607541,"Users Score":4,"Answer":"Can the cloud vendors access your script\/source code\/program\/data?\n\nI am not including government\/legal subpoenas in this answer.\nThey own the infrastructure. They govern access. They control security.\nHowever, in the real world there are numerous firewalls in place with auditing, logging and governance. A cloud vendor employee would risk termination and\/or prison time for bypassing these controls.\nSecrets (or rumors) are never secret for long, and the valuation of AWS, Google, etc.
would vaporize if they violated customer trust.\nTherefore the answer is yes, it is possible but extremely unlikely. Professionally, I trust the cloud vendors with the same respect I give my bank.","Q_Score":0,"Tags":"python,amazon-web-services,selenium,google-cloud-platform,hosting","A_Id":65607689,"CreationDate":"2021-01-07T06:20:00.000","Title":"How do cloud services have access to your hosted script?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am building a Python Daemon app to download files which are accessible to an individual O365 user via Graph API. I am trying to use ConfidentialClientApplication class in MSAL for authorization.\nIn my understanding - this expects \u201cApplication Permissions\u201d (the API permission in Azure AD) and not \u201cDelegated permissions\u201d for which, admin has to consent Files.Read.All.\nSo the questions I have are:\n\nDoes this mean, my app will have access to all the files in the organization after the admin consent?\nHow do I limit access to a Daemon app to the files which only an individual user (my O365 user\/UPN) has access to?\nShould I be rather be using a different auth flow where a user consent be also part of the flow: such as on-behalf-of (or) interactive (or) username password?\n\nThanks!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":317,"Q_Id":65608215,"Users Score":4,"Answer":"Does this mean, my app will have access to all the files in the organization after the admin consent?\nYes, it is the downside of application permissions usually.\nHow do I limit access to a Daemon app to the files which only an individual user (my O365 user\/UPN) has access to?\nI'm pretty sure you can't limit a daemon app's OneDrive access. 
You can for example limit Exchange access for a daemon app.\nShould I be rather be using a different auth flow where a user consent be also part of the flow: such as on-behalf-of (or) interactive (or) username password?\nIt would certainly allow you to limit the access to a specific user. In general I recommend that you do not use username+password (ROPC); it won't work any way if your account has e.g. MFA. The more secure approach would be that you need to initialize the daemon app once with Authorization Code flow. This gives your app a refresh token that it can then use to get an access token for the user when needed (and a new refresh token). Note it is possible for refresh tokens to expire, in which case the user needs to initialize the app again.","Q_Score":3,"Tags":"python,azure,microsoft-graph-api,msal","A_Id":65608304,"CreationDate":"2021-01-07T07:30:00.000","Title":"Microsoft Graph API: Limiting MSAL Python Daemon app to individual user access","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"The following script in crontab is not executed, but it can be executed in the terminal\nCommand: * * * * * \/usr\/bin\/php artisan edit > \/dev\/null 2>&1\nError: [Errno 2] No such file or directory: 'ffprobe\\","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":25,"Q_Id":65608236,"Users Score":0,"Answer":"I think this is not a crontab issue, it says 'ffprobe' as No such file or directory.\nIn your PHP code if you are using 'ffprobe' directory, try giving this as absolute path and not a relative one. I mean the full path and not partial one. 
Say for example something like \/home\/myuser\/phpcodes\/ffprobe\/ and not just ffprobe.\nPlease try and let me know if this helps.","Q_Score":0,"Tags":"python,macos,cron","A_Id":65608818,"CreationDate":"2021-01-07T07:32:00.000","Title":"The following crontab does not run, but it can be run in the terminal","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I installed a bazelisk exe file and included that file in my environmental Path variable. I can now run bazelisk commands but no bazel commands and I think I was told that that was normal. Is it? If it is, if I cd into my tensorflow folder and run python .\/configure.py because I think that that is a step I need to do to build tensorflow from source I get the message Cannot find bazel. Please install bazel. What am I supposed to do? I am using python 3.6.2 and windows 10 and bazelisk is on v1.7.4","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":106,"Q_Id":65660427,"Users Score":0,"Answer":"Try to rename bazelisk to bazel because it is just a wrapper for bazel","Q_Score":1,"Tags":"python,python-3.x,windows,tensorflow,bazel","A_Id":65661694,"CreationDate":"2021-01-11T01:49:00.000","Title":"I can not use bazel to build tensorflow because I installed bazel with bazelisk?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I wanted to know whether it is possible to enable IAP OAuth for App Engine but for a subdomain or a subfolder. I have already enabled it for the domain, but I don't want it to show up for the entire website. 
For example: I want to use IAP secured login on admin.website.com but users to website.com should be able to access it without any issues. It is also okay if this can be done for website.com\/admin (I suppose enabling on website.com\/admin is a lot easier too)\n(Website name changed for privacy)","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":146,"Q_Id":65661512,"Users Score":0,"Answer":"You can activate or deactivate IAP (deactivate means grant allUsers with IAP-Secured Web App User role) per service. It's the finest granularity, you can't deactivate it by URL path.","Q_Score":2,"Tags":"google-app-engine,google-cloud-platform,google-app-engine-python,identity-aware-proxy","A_Id":65669282,"CreationDate":"2021-01-11T04:56:00.000","Title":"How to enable IAP on a subdomain in App Engine?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I am trying to run a python script inside php file, the problem is i can't pass the file as an argument\nsomething like this\n$a =\"python \/umana\/frontend\/upload\/main.py 'filename' \";\n$output =shell_exec($a);\nThe real problem is, the file is not opening in python script.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":129,"Q_Id":65680051,"Users Score":1,"Answer":"It's solved\n$a =\"python-path C:\/python program path '$file_uploaded_path_as_param' \";\n$output =shell_exec($a);\nwe want to add the python path before python script.","Q_Score":0,"Tags":"python,php,yii","A_Id":65697404,"CreationDate":"2021-01-12T08:06:00.000","Title":"Run python script inside php and pass file as a parameter","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web 
Development":0},{"Question":"I want to auto run the python script when I open it from the terminal so that I won't have to press the run button\nFrom the terminal I want to open the file as :\npycharm-community main.py\nHow do I auto run it while it opens?","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":222,"Q_Id":65681422,"Users Score":0,"Answer":"Preferences > Tools > Startup Tasks, <+ Add> and select your main.py\nThen anytime you open the project the script will run and display results.\nI wanted main.py to run daily at a certain time, so used Keyboard Maestro. Keyboard Maestro is also able to control the running of main.py, eliminating the step above. That way my script runs only at the desired time of day, not every time I open the project.","Q_Score":2,"Tags":"python,pycharm","A_Id":71053503,"CreationDate":"2021-01-12T09:47:00.000","Title":"Auto run python scripts on pycharm while opening","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I want to auto run the python script when I open it from the terminal so that I won't have to press the run button\nFrom the terminal I want to open the file as :\npycharm-community main.py\nHow do I auto run it while it opens?","AnswerCount":2,"Available Count":2,"Score":0.1973753202,"is_accepted":false,"ViewCount":222,"Q_Id":65681422,"Users Score":2,"Answer":"Do File | Settings | Tools | Startup Tasks (or Ctrl-Alt-S).\nThen: Build, Execution, Deployment > Console > Python Console\nThis gives you a dialogue with an edit box Starting script. Put your import code there. 
That will run every time you open a new console.","Q_Score":2,"Tags":"python,pycharm","A_Id":65681531,"CreationDate":"2021-01-12T09:47:00.000","Title":"Auto run python scripts on pycharm while opening","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm using pip regularly\nright now i'm getting errors when trying to run\npip install numpy\nWARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ReadTimeoutError(\"HTTPSConnectionPool(host='pypi.org', port=443): Read timed out. (read timeout=15)\")': \/simple\/numpy\/ WARNING: Retrying (Retry(total=3, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ReadTimeoutError(\"HTTPSConnectionPool(host='pypi.org', port=443): Read timed out. (read timeout=15)\")': \/simple\/numpy\/ WARNING: Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ReadTimeoutError(\"HTTPSConnectionPool(host='pypi.org', port=443): Read timed out. (read timeout=15)\")': \/simple\/numpy\/ WARNING: Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ReadTimeoutError(\"HTTPSConnectionPool(host='pypi.org', port=443): Read timed out. (read timeout=15)\")': \/simple\/numpy\/ WARNING: Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ReadTimeoutError(\"HTTPSConnectionPool(host='pypi.org', port=443): Read timed out. 
(read timeout=15)\")': \/simple\/numpy\/ ERROR: Could not find a version that satisfies the requirement numpy (from versions: none) ERROR: No matching distribution found for numpy\nI get the same error when running the command from my pc and also where running it from my laptop.\nI had some internet connectivity issues the other day, also the problem seemed to occur after I installed\npip install -U databricks-connect==7.1.* ran some commands(databricks-connect configure and databricks-connect test) and then uninstalled it.\nAgain, the problem occurs on both computers connected to the same network.\nThanks\nroy","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":234,"Q_Id":65688006,"Users Score":0,"Answer":"might be a network provider related problem","Q_Score":0,"Tags":"python,networking,pip","A_Id":65747876,"CreationDate":"2021-01-12T16:31:00.000","Title":"pip timing out on multiple computers on the same network","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am new to anaconda and the concept of environments and have a few questions I want to clarify!\n\nDoes the Anaconda graphical installer installs a new \"copy\" of python into my Mac?\n\nSo, in the future, am I correct to say that when I update packages\/python through Conda, it will not affect my native python version? (and therefore will not affect my macOS \"dependencies\"?)\n\nShould I be creating a new environment for my learning instead of using the base environment? (b\/c Conda documentation states that\n\n\n\nWhen you begin using conda, you already have a default environment named base. You don't want to put programs into your base environment, though. 
Create separate environments to keep your programs isolated from each other.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":37,"Q_Id":65697188,"Users Score":0,"Answer":"Yes, you get a new copy of python that can also have a different version than the one shipped with your OS. conda will set up your PATH environment variable so that the new python will take precedence whenever you call python\n\nYes\n\nThat is somewhat of an opinion-based answer, but I would highly encourage it. It helps you get used to the environment concept, and in case you mess something up, you can just delete an environment and create a new one\n\nWhen you do pip list it will also show you packages that are in your currently active conda environment. This is again because conda has by default also installed pip and has modified the PATH so that conda's pip is found when you do pip commands\n\n\nNote: You can always check with the which command where commands are called from. When doing which pip or which python you should see that both point to your anaconda or miniconda installation directory","Q_Score":0,"Tags":"python,anaconda","A_Id":65697290,"CreationDate":"2021-01-13T07:12:00.000","Title":"Regarding Anaconda's python and native macOS python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a special use case where I need to run a task on all workers to check if a specific process is running on the celery worker.
The problem is that I need to run this on all my workers as each worker represents a replica of this specific process.\nIn the end I want to display 8\/20 workers are ready to process further tasks.\nBut currently I'm only able to process a task on either a random selected worker or just on one specific worker which does not solve my problem at all ...\nThanks in advance","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":188,"Q_Id":65701898,"Users Score":0,"Answer":"I can't think of a good way to do this on Celery. However, a nice workaround perhaps could be to implement your own command, and then you can broadcast that command to every worker (just like you can broadcast shutdown or status commands for an example). When I think of it, this does indeed sound like some sort of monitoring\/maintenance operation, right?","Q_Score":1,"Tags":"python,celery","A_Id":65702188,"CreationDate":"2021-01-13T12:22:00.000","Title":"Celery - execute task on all nodes","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am researching on Celery as background worker for my flask application. The application is hosted on a shared linux server (I am not very sure what this means) on Linode platform. The description says that the server has 1 CPU and 2GB RAM. I read that a Celery worker starts worker processes under it and their number is equal to number of cores on the machine - which is 1 in my case.\nI would have situations where I have users asking for multiple background jobs to be run. They would all be placed in a redis\/rabbitmq queue (not decided yet). So if I start Celery with concurrency greater than 1 (say --concurrency 4), then would it be of any use? 
Or will the other workers be useless in this case as I have a single CPU?\nThe tasks would mostly be about reading information to and from google sheets and application database. These interactions can get heavy at times taking about 5-15 minutes. Based on this, will the answer to the above question change as there might be times when cpu is not being utilized?\nAny help on this will be great as I don't want one job to keep on waiting for the previous one to finish before it can start or will the only solution be to pay money for a better machine?\nThanks","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":179,"Q_Id":65709154,"Users Score":1,"Answer":"This is a common scenario, so do not worry. If your tasks are not CPU heavy, you can always overutilise like you plan to do. If all they do is I\/O, then you can pick even a higher number than 4 and it will all work just fine.","Q_Score":2,"Tags":"python,celery","A_Id":65718359,"CreationDate":"2021-01-13T20:16:00.000","Title":"Run multiple processes in single celery worker on a machine with single CPU","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I have made an alexa like program on python. Now, I want it to auto run when I start my computer and take inputs and give outputs as well. 
How do I do it?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":328,"Q_Id":65712858,"Users Score":0,"Answer":"- For linux\nFirst make sure you add this line to the top of your python program.\n#!\/usr\/bin\/python3\n\nCopy the python file to \/bin folder with the command.\n\nsudo cp -i \/path\/to\/your_script.py \/bin\n\nNow Add a new Cron Job.\n\nsudo crontab -e\nThis command will open the cron file.\n\nNow paste the following line at the bottom of the file.\n\n@reboot python \/bin\/your_script.py &\n\nDone, now test it by rebooting the system\n\nYou can add any command to run in the startup in the cron file.\nCron can be used to perform any kind of scheduling other than startup also.\n- For Windows\nNavigate to C:\\Users\\username\\Appdata\\Roaming\\Microsoft\\Windows\\Start Menu\\Programs\\Startup\nPlace your complied exe file there, it will be executed on startup.\nTo make exe from py, first install the pyinstaller module pip install pyinstaller.\nNow run the command in the folder where the python file is pyinstaller --onefile your_script.py","Q_Score":0,"Tags":"python,autorun","A_Id":65712899,"CreationDate":"2021-01-14T03:08:00.000","Title":"Python Auto run files on startup","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"When running psutil.boot_time() for the first time (yesterday) on my windows computer it shows the correct boot time. But when I am running it today it shows yesterday's boot time!\nwhat to do? 
Am I Doing Something Wrong?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":59,"Q_Id":65713779,"Users Score":0,"Answer":"If you haven't rebooted it since then, the boot time will be the same.","Q_Score":0,"Tags":"python,psutil","A_Id":70358266,"CreationDate":"2021-01-14T05:16:00.000","Title":"Incorrect Boot Time In `psutil.boot_time()` on windows with Python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"Using debian, seems like installed all dependencies\nsudo apt update\nsudo apt install -y git zip unzip openjdk-8-jdk python3-pip autoconf libtool pkg-config zlib1g-dev libncurses5-dev libncursesw5-dev libtinfo5 cmake libffi-dev libssl-dev\npip3 install --user --upgrade Cython==0.29.19 virtualenv # the --user should be removed if you do this in a venv\nadded the following line at the end of your ~\/.bashrc file\nexport PATH=$PATH:~\/.local\/bin\/\ndid clone git, installed","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":59,"Q_Id":65723283,"Users Score":0,"Answer":"I installed it from pip instead of from git, and it worked.","Q_Score":0,"Tags":"python,linux,debian,buildozer","A_Id":65723477,"CreationDate":"2021-01-14T16:52:00.000","Title":"pkg_resources: The 'sh' distribution was not found and is requred by buildozer","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have the current file structure in a folder with nothing else in it:\n\n(folder) crypt\n(file) run.bat\n\nI'm on Windows and I'm trying to execute a python.exe with run.bat that is in the crypt folder.
How do I do this?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":87,"Q_Id":65724516,"Users Score":0,"Answer":"I figured it out! I just had to add a \".\/crypt\/python.exe\" argument as the thing to run.","Q_Score":0,"Tags":"python,windows,executable","A_Id":65724544,"CreationDate":"2021-01-14T18:05:00.000","Title":"Running an executable in a child directory from parent directory windows","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am working on my personal project for an oscilloscope, where I send bulk data from MCU(STM32) to PC through USB Full-Speed (12 Mbits). I would like to communicate with the device(STM32) by using PySerial. I found out that USB is half-duplex, where the host (PC) sets talking privileges with the device - \"speak when you're spoken to\". What I don't understand is how does the host set the talking privileges - Does my computer or pyserial automatically handle this, or do I have to do some handshaking protocol that needs to be implemented in code in both the host and device? I'm wondering since in the event both device and host are sending data, what happens to the data? Thank you!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":103,"Q_Id":65732165,"Users Score":0,"Answer":"On your PC you do not have to worry about the USB protocol. 
That is the responsibility of the USB stack in your OS and the associated USB device drivers.\nSo you just use PySerial to send and receive your data.","Q_Score":0,"Tags":"python,serial-port,usb,pyserial","A_Id":65733185,"CreationDate":"2021-01-15T07:37:00.000","Title":"Confused about PySerial with USB Full-speed half-duplex talking priority","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I have tried every other solution to create a text file in C:\\windows but unfortunately after performing the user uac permission and giving administrative access, windows still doesn't allow to create this file, pycharm console output PermossionError: [Errno 13] Permission denied: 'C:\\textfile.log'.\nIs there a way to create a file in C windows root by entering the admin user and pass in windows uac?\nThank you!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":194,"Q_Id":65741250,"Users Score":0,"Answer":"Try the following:\n\nTry from out of PyCharm, sometimes that could be a problem.\nRun the python file itself as administrator.","Q_Score":0,"Tags":"python,admin,root,uac,privileges","A_Id":65741441,"CreationDate":"2021-01-15T17:59:00.000","Title":"Elevate permission to create file in C:\\windows with Python 3.x","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"On a local machine there are two files in the same directory:\n\ntaskrunner.py\nnamefile.txt\n\ntaskrunner.py runs continuously (Python 3.x) and reads a single name from namefile.txt once per minute. 
I want to be able to have someone at a remote location SSH into the local machine and replace the old namefile.txt with a new namefile.txt without causing any collisions. It is entirely acceptable for taskrunner.py to work with the old namefile.txt information until the new namefile.txt is in place. What I do not want to have occurred is:\n\nHave the taskrunner.py throw an exception because namefile.txt is present in the process of being replaced\n\nand\/or\n\nBe unable to insert the new namefile.txt because of taskrunner.py locks out the remote access.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":37,"Q_Id":65756733,"Users Score":1,"Answer":"This is a typical situation where a lock is useful.\nYou need to create two copies of namefile.txt: let's call them namefile.txt and namefileOld.txt. You also need a locking mechanism that will allow fast updates. For complex operations, you can use Redis. For simple operations like yours, you can probably get away with an environment variable. Let's call it LOCK, which can take values True and False.\nWhen a person wants to write to namefile.txt, set LOCK to True. Subsequently, set LOCK to False and overwrite nameFileOld.txt with data from nameFile.txt.\nHow taskrunner.py should read the data:\n\nRead the LOCK value.\nIf LOCK == True, read from nameFileOld.txt\nelse, read form nameFile.txt","Q_Score":1,"Tags":"python,python-3.x","A_Id":65757068,"CreationDate":"2021-01-17T02:24:00.000","Title":"Remotely Access A Text File That Python Is Presently Accessing Locally","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am new to using terminal in Mac. 
When I type any python3 command it only checks the users folder on my PC, HOW can I change the directory to open a folder in the users section and check for the .py file there?","AnswerCount":4,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":41,"Q_Id":65767823,"Users Score":0,"Answer":"You have to use the cd command to change into your folder first.","Q_Score":0,"Tags":"python,python-3.x,macos,terminal","A_Id":65767862,"CreationDate":"2021-01-18T01:44:00.000","Title":"How to select a folder in users directory","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I recently started working with Yaskawa's OPC UA Server provided on its robot's controller.\nI'm connecting to the server via Python's OPCUA library. Everything works well, but when my code crashes or when I close the terminal without disconnecting from the server I cannot connect to it again.\nI receive an error from the library, saying:\nThe server has reached its maximum number of sessions.\nAnd the only way to solve this is to restart the controller by turning it off and on again.\nDocumentation of the server says that the max number of sessions is 2.\nIs there a way to clear the connection to the server without restarting the machine?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":221,"Q_Id":65773379,"Users Score":1,"Answer":"The server keeps track of the client session and doesn't know that your client crashed.\nBut the client can define a short enough SessionTimeout, after which the server can remove the crashed session.\nThe server may have some custom configuration where you can define the maximum number of sessions that it supports. 2 sessions is very limited, but if the hardware is very limited maybe that is the best you can get.
See the product documentation about that.","Q_Score":1,"Tags":"python,opc-ua","A_Id":65776760,"CreationDate":"2021-01-18T11:09:00.000","Title":"OPC UA zombie connection","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"So, I am really confused on how nginx works with docker.\nSuppose I have a python flask container website hosted on two servers and I am using nginx to load balance from a third server, say bastion.\nSo, every time I visit the website, will a new docker flask instance\/image be created to serve the client? Or are all requests served from the one flask image?\nIf yes, where can I find the names of the new instances which are created?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":22,"Q_Id":65789586,"Users Score":1,"Answer":"First of all, you seem to be confused about the concept of images in docker. For your flask application there should only be 1 image, and there can be any number of containers which are running instances of this image.\nYou can see all running instances (containers) with docker ps.\nAnd no, generally speaking, there will not be a new container for every request.","Q_Score":0,"Tags":"python,docker,nginx,flask,nginx-reverse-proxy","A_Id":65789804,"CreationDate":"2021-01-19T10:24:00.000","Title":"Is a new docker instance\/image created everytime a web request arrives?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I have a cypher projection that used algo.unionFind in Neo4j. However, that algorithm has been deprecated.
My query was:\nCALL algo.unionFind('MATCH (n) WHERE n.dtype=\\\"VALUE\\\" RETURN id(n) AS id','MATCH p=(n)-[]-(m) WHERE n.dtype=\\\"VALUE\\\" AND m.dtype=\\\"VALUE\\\" RETURN id(n) AS source, id(m) AS target', {write:true, partitionProperty:\\\"partition\\\", graph:'cypher'}) YIELD nodes, setCount, loadMillis, computeMillis, writeMillis\nI was hoping to find an equivalent approach with the Graph Data Science Library that runs the query and writes a new property partition in my nodes.\nAny help would be greatly appreciated!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":130,"Q_Id":65799737,"Users Score":0,"Answer":"The algorithm has been renamed to gds.wcc.write in the new GDS library.","Q_Score":0,"Tags":"python,neo4j,cypher,py2neo,graph-data-science","A_Id":65807968,"CreationDate":"2021-01-19T21:24:00.000","Title":"Neo4j algo.unionFind equivalent with new Graph Data Science Library","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I execute pip install cassandra-driver successfully\nand when I type\npython -c 'import cassandra; print (cassandra.__version__)' I got 3.24.0\nBut when I import cassandra from a Jupyter notebook I got:\nModuleNotFoundError Traceback (most recent call last)\n in \n----> 1 import cassandra\nModuleNotFoundError: No module named 'cassandra'\nI'm using python 3, os: windows 10\nSo, why is it not able to access cassandra as on cmd?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":72,"Q_Id":65806208,"Users Score":0,"Answer":"Execute !pip install cassandra-driver in the Jupyter notebook;\nafter that, import cassandra\nworks fine!","Q_Score":0,"Tags":"python,cassandra,jupyter-notebook","A_Id":65808483,"CreationDate":"2021-01-20T09:00:00.000","Title":"import cassandra module from Jupyter notebook","Data Science and 
Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I want to build a Python script which should go to the web every day at 1 pm, do some job (some web-scraping) and save a result to a file.\nIt will be deployed on a Linux server.\nI am not sure what technology to use to run it on schedule.\nWhat comes to mind:\n\nRun it with a cron job scheduler. Quick and dirty. Why bother with any other methods?\n\nRun it as a service with systemd \/ systemctl (I never did this but I just know there is such a possibility and I have to google for a specific implementation). Is this something to be considered best practice?\n\nOther methods?\n\n\nSince I never did this, I don't know the pros and cons of every method. Maybe there is just one way of doing this properly? Please share your experience.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":94,"Q_Id":65816103,"Users Score":0,"Answer":"I use a cron job to run scheduled tasks; it works great for me.","Q_Score":0,"Tags":"python,automation,systemd,remote-server,cron-task","A_Id":65817101,"CreationDate":"2021-01-20T19:12:00.000","Title":"Running Python on remote server on schedule","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I read that blender comes with its own python version. However I have trouble actually locating it on ubuntu. I had hoped to add packages there.
What is the current way of adding packages, like pandas to blender's python version?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":445,"Q_Id":65818610,"Users Score":0,"Answer":"Just copy the respective packages to\n\/usr\/share\/blender\/scripts\/modules\nand restart blender.","Q_Score":0,"Tags":"python,pandas,pip,blender","A_Id":65827045,"CreationDate":"2021-01-20T22:27:00.000","Title":"How to install pandas in blender 2.80 python on ubuntu?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"When I want to run python into gdb I using\nsource \/tmp\/gdb\/tmp\/parser.py\n\nCan I set an alias so in the next time I want to call this script I use only parser.py or parser (without setting the script into working directory\nHow can I pass args to script ? source \/tmp\/gdb\/tmp\/parser.py doesn't work","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":90,"Q_Id":65840478,"Users Score":0,"Answer":"These should have been asked as two separate questions, really. But:\n\nExecute command dir \/tmp\/gdb\/tmp\/, after that you should be able to run script as source parser.py\nYou can't when you are sourcing a script. Rewrite script so that it attaches itself as GDB command via class inheriting from gdb.Command. The command can accept arguments. And you will save on typing source ... too.","Q_Score":0,"Tags":"gdb,gdb-python","A_Id":66403052,"CreationDate":"2021-01-22T06:44:00.000","Title":"Run python script with gdb with alias","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"this should be dead simple but ive googled for ages and can't find anything on this. 
maybe too generic of a search.\nI have several vms. centos and ubuntu.\nthey both always come with python3.6 which has always been fine with me. but i gotta do some devwork on an app written in 3.7. So i installed that in ubuntu using the apt-get install python3.7 which went fine but it seems the modules I install with pip3 work only in python3.6...\npip3 install future\nimport future\nworks in 3.6 but not 3.7.\nWhat should I do?\n-thx","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":49,"Q_Id":65841125,"Users Score":0,"Answer":"which pip3 points to \/usr\/bin\/pip3 hence pip3 install only installs it for python3.6.\nFor python3.7 you can use \/path\/to\/python3.7 -m pip install future to install it.","Q_Score":0,"Tags":"pip,python-3.7","A_Id":65841824,"CreationDate":"2021-01-22T07:44:00.000","Title":"using pip3 with 3.7?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"(gdb) source script.py loaded script file to GDB\nHow do I unload that script? How do I unload all loaded scripts, or view all scripts that were loaded?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":136,"Q_Id":65841492,"Users Score":0,"Answer":"The script is \"sourced\", not \"loaded\". The script executed and exited. Hence you can't unload it. It may have left something after itself (pretty-printers, commands, breakpoints, changes in configuration etc).
You can't unload them all as a group, you have to find them and undo one-by-one.","Q_Score":0,"Tags":"gdb,gdb-python","A_Id":66403116,"CreationDate":"2021-01-22T08:16:00.000","Title":"Unload source file with GDB","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I had saved a python file after working on it for sometime, but now when I open it, Python 3.9.1 opens a window then immediately closes. I had done lots of work on this and don't want it to go to waste. I'm on Windows 10.","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":754,"Q_Id":65847043,"Users Score":0,"Answer":"If you\u2019re using the Open option when you right-click or you are simply double-clicking the script, the program will run and close so fast you won\u2019t even see it.\nHere are two options I use:\n\nOpen up the Command Prompt. You can easily do this by going to the address bar of your File Explorer and enter \u2018cmd\u2019. If you\u2019re in the directory where your script is, the current working directory of the Command Prompt will be set to that. From there, run python my_script.py.\n\nEdit your script with IDLE. If you\u2019re using an IDE, it should be nearly the same process, but I don\u2019t use one so I wouldn\u2019t know. From the editor, there should be a method for running the program. 
In IDLE, you can just use Ctrl + F5.","Q_Score":0,"Tags":"python,windows,crash","A_Id":65847133,"CreationDate":"2021-01-22T14:29:00.000","Title":"Python IDLE 3.9.1 file not opening in windows","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I had saved a python file after working on it for sometime, but now when I open it, Python 3.9.1 opens a window then immediately closes. I had done lots of work on this and don't want it to go to waste. I'm on Windows 10.","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":754,"Q_Id":65847043,"Users Score":0,"Answer":"Right click on it and click \"Open with\". Then choose Python IDLE.","Q_Score":0,"Tags":"python,windows,crash","A_Id":65847138,"CreationDate":"2021-01-22T14:29:00.000","Title":"Python IDLE 3.9.1 file not opening in windows","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a NetCore3.1 server app. On my local setup I can use Process to execute python to do some dataframe crunching that I have installed in a venv.\nOn Azure, I can use site extensions to install a local copy of python and all my needed libs. (It's located in D:\/home\/python364x86\/).\nNow on my published Azure app, I want my process to execute python as on my local setup. I have configured the proper path, but I get this error: \"Unexpected character encountered while parsing value: D. Path '', line 0, position 0.\"\nWould anyone know why this is failing?
Many thanks for any help.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":70,"Q_Id":65847542,"Users Score":0,"Answer":"Please notice that the python extension is in the website extension, so it should be impossible to access the python extension in code.","Q_Score":0,"Tags":"python,azure,asp.net-core,process","A_Id":65932178,"CreationDate":"2021-01-22T15:00:00.000","Title":"NetCore 3.1: How to execute python.exe using Process on Azure?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I encountered the Error calling sync triggers (TooManyRequests) error when running func azure functionapp publish... for a Python Function App in Azure. Encountered this error consistently after trying to publish.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1661,"Q_Id":65851673,"Users Score":3,"Answer":"Solution: This problem was caused by my stopping the Function App in Portal. After re-starting the Function App, the problem disappeared!","Q_Score":1,"Tags":"python,azure,azure-functions,azure-function-app,azure-linux","A_Id":65851674,"CreationDate":"2021-01-22T19:36:00.000","Title":"Encountered the \"Error calling sync triggers (TooManyRequests)\" error when running \"func azure functionapp publish\" for a Python Function App in Azure","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I was trying to figure out without success how I could exclude the dev\/test dependencies of 3rd party Python modules from the BOM generated by CycloneDX. There seems to be no straightforward way to do this. 
Any recommendation on how to best approach this would be highly appreciated!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":61,"Q_Id":65859032,"Users Score":0,"Answer":"This is unfortunately not supported currently. But would make a great issue :)","Q_Score":0,"Tags":"python,external-dependencies","A_Id":67225494,"CreationDate":"2021-01-23T12:10:00.000","Title":"CycloneDX Exclude Python Dev\/Test Dependencies","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have two bytestreams; these data should be dumped into a compound OLE file together in a container. I just checked that there are some third party libraries to do it, but I would like to do it with pywin32, I have this library in my project and I would not like to add more third party libraries which maybe I could not maintain in the future. If for some reason I can not use Com objects from Windows, which is the best option or the best library?\nThanks.","AnswerCount":1,"Available Count":1,"Score":-0.1973753202,"is_accepted":false,"ViewCount":91,"Q_Id":65864425,"Users Score":-1,"Answer":"I found several libraries, but they are useless right now for creating an ole object from scratch and then adding some streams into the container. The only way to do it is through pywin32 and then use a Com object. The problem is, as always with pywin32, no examples and no good documentation. Nice.\nWould anyone know how to do it?
It would help just to know how to open a com object for this purpose.\nThanks.","Q_Score":0,"Tags":"python-3.x,pywin32,ole,bytestream","A_Id":65872773,"CreationDate":"2021-01-23T21:10:00.000","Title":"Writing OLE compound files with python 3","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am using python 3.9.1 on my computer and when I try this command on cmommand windows : python -- version , I come up with 2.7.12 !!! And it does not show the version correct.\nI uninstalled python and removed all the related files on C drive as well as environmental Variables...\nNow I don't have python but it still shows the version 2.7.12 when I ask for Command windows!!!\nDoes anyone know what is the problem ????","AnswerCount":3,"Available Count":3,"Score":0.1325487884,"is_accepted":false,"ViewCount":81,"Q_Id":65868650,"Users Score":2,"Answer":"Go to my computer, right click and then properties. Here go to Advanced System setting\nand at the bottom of the window open Environment Variables and check any variable having python on it. if there are two variable maybes this is the problem.\nAlso go to the app data on your windows and check files if there is a file related to the older version of python.\nGood Luck.","Q_Score":1,"Tags":"python-3.x,python-2.7,pip","A_Id":65868714,"CreationDate":"2021-01-24T08:53:00.000","Title":"Showing python 2.7.12 for python 3.9.1 on Command window?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am using python 3.9.1 on my computer and when I try this command on cmommand windows : python -- version , I come up with 2.7.12 !!! 
And it does not show the version correct.\nI uninstalled python and removed all the related files on C drive as well as environmental Variables...\nNow I don't have python but it still shows the version 2.7.12 when I ask for Command windows!!!\nDoes anyone know what is the problem ????","AnswerCount":3,"Available Count":3,"Score":0.0665680765,"is_accepted":false,"ViewCount":81,"Q_Id":65868650,"Users Score":1,"Answer":"You can use PowerShell instead of cmd as well try this one after checking the variables.","Q_Score":1,"Tags":"python-3.x,python-2.7,pip","A_Id":65868854,"CreationDate":"2021-01-24T08:53:00.000","Title":"Showing python 2.7.12 for python 3.9.1 on Command window?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am using python 3.9.1 on my computer and when I try this command on cmommand windows : python -- version , I come up with 2.7.12 !!! 
And it does not show the version correct.\nI uninstalled python and removed all the related files on C drive as well as environmental Variables...\nNow I don't have python but it still shows the version 2.7.12 when I ask for Command windows!!!\nDoes anyone know what is the problem ????","AnswerCount":3,"Available Count":3,"Score":1.2,"is_accepted":true,"ViewCount":81,"Q_Id":65868650,"Users Score":2,"Answer":"If you have both versions installed, then you should write python2 --version or python3 --version.","Q_Score":1,"Tags":"python-3.x,python-2.7,pip","A_Id":66407138,"CreationDate":"2021-01-24T08:53:00.000","Title":"Showing python 2.7.12 for python 3.9.1 on Command window?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"Note this is a MacOS question not a Linux Question - They are different operating systems\nI'd like to get a meaningful mount point out of python's os.stat(\"foo\").st_dev. At the moment this is just a number and I can't find anywhere to cross reference it.\nAll my searches so far have come up with answers that work on Linux by interrogating \/proc\/... but \/proc doesn't exist in MacOS so any such answer will not work.","AnswerCount":2,"Available Count":1,"Score":0.2913126125,"is_accepted":false,"ViewCount":179,"Q_Id":65873319,"Users Score":3,"Answer":"I'm a Linux guy, but if I were not allowed to use \/proc, I would search the \/dev directory for an entry (i.e.
filename) which has following stat data:\n\nst_mode indicates that it is a block device (helper: stat.S_ISBLK)\nst_rdev matches the given st_dev value","Q_Score":3,"Tags":"python,macos,stat","A_Id":65873732,"CreationDate":"2021-01-24T16:48:00.000","Title":"Is there a way to get a meaningful mount point for st_dev on MacOS?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"It says Fatal error in launcher: Unable to create process using '\"c:\\python38\\python.exe\" \"C:\\Python38\\Scripts\\pip.exe\" ': The system cannot find the file specified.\", when I use pip alone.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":92,"Q_Id":65878031,"Users Score":0,"Answer":"You seem to have an issue with an old Python installation that wasn't fully removed.\nAn easy way to resolve it is by overwriting the system PATH variable.\nPress Winkey+Break (or Winkey+Pause depending on keyboard), go to \"advanced system settings\" then \"environment variables\".\nOn user variables you have \"path\". Edit it and add this new path:\nC:\\Users\\\\AppData\\Local\\Programs\\Python\\Python39\\Scripts\nMove this all the way to the top and press OK.\nReopen your cmd. 
Should work.","Q_Score":0,"Tags":"python","A_Id":65878202,"CreationDate":"2021-01-25T01:59:00.000","Title":"Can't use pip without saying py -m pip","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"It says Fatal error in launcher: Unable to create process using '\"c:\\python38\\python.exe\" \"C:\\Python38\\Scripts\\pip.exe\" ': The system cannot find the file specified.\", when I use pip alone.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":92,"Q_Id":65878031,"Users Score":0,"Answer":"You can uninstall Python and then reinstall it; during installation, make sure to select the pip option.","Q_Score":0,"Tags":"python","A_Id":65878450,"CreationDate":"2021-01-25T01:59:00.000","Title":"Can't use pip without saying py -m pip","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"vincens@VMAC: python3\ndyld: Library not\nloaded:\/System\/Library\/Frameworks\/CoreFoundation.framework\/Versions\/A\/CoreFoundation\nReferenced from:\n\/Library\/Frameworks\/Python.framework\/Versions\/3.6\/Resources\/Python.app\/Contents\/MacOS\/Python\nReason: image not found\n[1] 25278 abort python3\n\npython3 env is not used when I update my Mac to the latest version. How can I solve it?","AnswerCount":5,"Available Count":2,"Score":0.1586485043,"is_accepted":false,"ViewCount":26865,"Q_Id":65878141,"Users Score":4,"Answer":"That's because you have installed both python 3.6 from the system library & python3.9 from another source like brew, and there is something wrong with the lower-version python. Please manually delete the python within \/Library\/Frameworks.
sudo rm -rf \/Library\/Frameworks\/Python.framework\/Versions\/3.6 this command works for me.","Q_Score":17,"Tags":"python-3.x,macos,pycharm","A_Id":70687123,"CreationDate":"2021-01-25T02:14:00.000","Title":"dyld: Library not loaded: \/System\/Library\/Frameworks\/CoreFoundation.framework\/Versions\/A\/CoreFoundation","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"vincens@VMAC: python3\ndyld: Library not\nloaded:\/System\/Library\/Frameworks\/CoreFoundation.framework\/Versions\/A\/CoreFoundation\nReferenced from:\n\/Library\/Frameworks\/Python.framework\/Versions\/3.6\/Resources\/Python.app\/Contents\/MacOS\/Python\nReason: image not found\n[1] 25278 abort python3\n\npython3 env is not used when I update my Mac to the latest version. How can I solve it?","AnswerCount":5,"Available Count":2,"Score":1.0,"is_accepted":false,"ViewCount":26865,"Q_Id":65878141,"Users Score":23,"Answer":"This worked for me with the same issue.\nCheck if you have multiple Python3.x versions installed. In my case I had Python3.6 and Python3.9 installed. 
brew uninstall python3 did not remove Python3.6 completely.\nI was able to call Python3.9 from Terminal by explicitly running python3.9 instead of python3, which led me to believe the issue was caused by ambiguity in which Python3.x resource was to be used.\nManually deleted \/Library\/Frameworks\/Python.framework\/Versions\/3.6 resulted in Python3 running as expected.\nhint:\nIt may be sufficient to remove \/Library\/Frameworks\/Python.framework\/Versions\/3.6 from your PATH environment variable.","Q_Score":17,"Tags":"python-3.x,macos,pycharm","A_Id":65895716,"CreationDate":"2021-01-25T02:14:00.000","Title":"dyld: Library not loaded: \/System\/Library\/Frameworks\/CoreFoundation.framework\/Versions\/A\/CoreFoundation","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a Flask app running in Google Cloud App Engine. I want the user to be able to call MATLAB functions on their local instance - if they have MATLAB installed locally and the correct license, of course.\nRunning locally the app works well using matlab.engine, however, when deployed to google cloud platform it fails during build. Looking in the logs:\n\nModuleNotFoundError: No module named 'matlabengineforpython3_7\n\nSo I suspect it is because the server cannot import the required dlls etc. for the python matlab engine package to work.\nIs there a way to pass the required files to google app engine? Is this approach even possible?\nMy users will always have a local copy of MATLAB, so I am trying to find a solution that avoids needing to pay for the MATLAB server license.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":95,"Q_Id":65882168,"Users Score":1,"Answer":"I don't believe that any solution is possible that would avoid a Matlab server license. 
Your server cannot access installed Matlab on the computers of your users.\nTo install non-Python software with App Engine you need to use a custom runtime with App Engine Flexible. Check the GAE docs for more details.","Q_Score":0,"Tags":"python,google-app-engine,flask,matlab-engine","A_Id":65886318,"CreationDate":"2021-01-25T09:37:00.000","Title":"Is it possible to call matlab.engine from Flask webapp on Google Cloud?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I installed django cookiecutter in Ubuntu 20.4\nwith postgresql when I try to make migrate to the database I get this error:\n\npython manage.py migrate\nTraceback (most recent call last): File \"manage.py\", line 10, in\n\nexecute_from_command_line(sys.argv) File \"\/home\/mais\/PycharmProjects\/django_cookiecutter_task\/venv\/lib\/python3.8\/site-packages\/django\/core\/management\/init.py\",\nline 381, in execute_from_command_line\nutility.execute() File \"\/home\/mais\/PycharmProjects\/django_cookiecutter_task\/venv\/lib\/python3.8\/site-packages\/django\/core\/management\/init.py\",\nline 375, in execute\nself.fetch_command(subcommand).run_from_argv(self.argv) File \"\/home\/mais\/PycharmProjects\/django_cookiecutter_task\/venv\/lib\/python3.8\/site-packages\/django\/core\/management\/base.py\",\nline 323, in run_from_argv\nself.execute(*args, **cmd_options) File \"\/home\/mais\/PycharmProjects\/django_cookiecutter_task\/venv\/lib\/python3.8\/site-packages\/django\/core\/management\/base.py\",\nline 361, in execute\nself.check() File \"\/home\/mais\/PycharmProjects\/django_cookiecutter_task\/venv\/lib\/python3.8\/site-packages\/django\/core\/management\/base.py\",\nline 387, in check\nall_issues = self._run_checks( File 
\"\/home\/mais\/PycharmProjects\/django_cookiecutter_task\/venv\/lib\/python3.8\/site-packages\/django\/core\/management\/commands\/migrate.py\",\nline 64, in _run_checks\nissues = run_checks(tags=[Tags.database]) File \"\/home\/mais\/PycharmProjects\/django_cookiecutter_task\/venv\/lib\/python3.8\/site-packages\/django\/core\/checks\/registry.py\",\nline 72, in run_checks\nnew_errors = check(app_configs=app_configs) File \"\/home\/mais\/PycharmProjects\/django_cookiecutter_task\/venv\/lib\/python3.8\/site-packages\/django\/core\/checks\/database.py\",\nline 9, in check_database_backends\nfor conn in connections.all(): File \"\/home\/mais\/PycharmProjects\/django_cookiecutter_task\/venv\/lib\/python3.8\/site-packages\/django\/db\/utils.py\",\nline 216, in all\nreturn [self[alias] for alias in self] File \"\/home\/mais\/PycharmProjects\/django_cookiecutter_task\/venv\/lib\/python3.8\/site-packages\/django\/db\/utils.py\",\nline 213, in iter\nreturn iter(self.databases) File \"\/home\/mais\/PycharmProjects\/django_cookiecutter_task\/venv\/lib\/python3.8\/site-packages\/django\/utils\/functional.py\",\nline 80, in get\nres = instance.dict[self.name] = self.func(instance) File \"\/home\/mais\/PycharmProjects\/django_cookiecutter_task\/venv\/lib\/python3.8\/site-packages\/django\/db\/utils.py\",\nline 147, in databases\nself._databases = settings.DATABASES File \"\/home\/mais\/PycharmProjects\/django_cookiecutter_task\/venv\/lib\/python3.8\/site-packages\/django\/conf\/init.py\",\nline 79, in getattr\nself._setup(name) File \"\/home\/mais\/PycharmProjects\/django_cookiecutter_task\/venv\/lib\/python3.8\/site-packages\/django\/conf\/init.py\",\nline 66, in _setup\nself._wrapped = Settings(settings_module) File \"\/home\/mais\/PycharmProjects\/django_cookiecutter_task\/venv\/lib\/python3.8\/site-packages\/django\/conf\/init.py\",\nline 176, in init\nraise ImproperlyConfigured(\"The SECRET_KEY setting must not be empty.\") django.core.exceptions.ImproperlyConfigured: The 
SECRET_KEY\nsetting must not be empty.\n\nI followed the whole instructions in the cookiecutter docs and ran createdb; what is wrong?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":167,"Q_Id":65897801,"Users Score":0,"Answer":"Your main problem is very clear in the logs.\nYou need to set your environment variable SECRET_KEY and give it a value, and it should skip this error message; it might throw another error if there are some other configurations that are not set properly.","Q_Score":0,"Tags":"python-3.x,django,django-rest-framework","A_Id":65898014,"CreationDate":"2021-01-26T08:12:00.000","Title":"Django cookiecutter with postgresql setup on Ubuntu 20.4 can't migrate","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I am trying to install jupyterlab via the command terminal and it gave me the following warning:\nWARNING: The script jupyter-server.exe is installed in 'C:\\Users\\Benedict\\AppData\\Roaming\\Python\\Python39\\Scripts' which is not on PATH.\nConsider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.\nPlease how do I add the directory to PATH? Someone help me please.
Thank you","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":623,"Q_Id":65899561,"Users Score":0,"Answer":"As I can see, you haven't put that directory in PATH. To do that, follow these steps:\n\nOpen the advanced system settings\nSelect Environment Variables\nThen click on Path and press Edit.\nClick on New and enter your path to the Python Scripts directory.\nPress OK and reopen Jupyter.\nThat's it","Q_Score":0,"Tags":"python-3.x,pandas,numpy,pip,jupyter-lab","A_Id":65899754,"CreationDate":"2021-01-26T10:24:00.000","Title":"How to add a directory to a path?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm trying to deploy a timer trigger function that extracts data from the web. I'm using playwright to access it. My code runs as expected on my local machine. However when I tried to deploy it on the cloud it says:\n Result: Failure Exception: Exception: ================================================================================ \"chromium\" browser was not found.
Please complete Playwright installation via running \"python -m playwright install\" ================================================================================ Stack: File \"\/azure-functions-host\/workers\/python\/3.8\/LINUX\/X64\/azure_functions_worker\/dispatcher.py\", line 353, in _handle__invocation_request call_result = await fi.func(**args) File \"\/home\/site\/wwwroot\/AsyncFlight\/__init__.py\", line 21, in main browser = await p.chromium.launch() File \"\/home\/site\/wwwroot\/.python_packages\/lib\/site-packages\/playwright\/async_api\/_generated.py\", line 9943, in launch raise e File \"\/home\/site\/wwwroot\/.python_packages\/lib\/site-packages\/playwright\/async_api\/_generated.py\", line 9921, in launch await self._impl_obj.launch( File \"\/home\/site\/wwwroot\/.python_packages\/lib\/site-packages\/playwright\/_impl\/_browser_type.py\", line 73, in launch raise not_installed_error(f'\"{self.name}\" browser was not found.')\nI have checked my consumption plan and my os on cloud is Linux and\"azureFunctions.scmDoBuildDuringDeployment\" is set to true.\nI have included playwright in my requirements.txt. Don't know what I'm missing. Please help!!\nThankyou","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":244,"Q_Id":65929770,"Users Score":0,"Answer":"Have you tried doing what the error instructs you to do:\nPlease complete Playwright installation via running \"python -m playwright install\"","Q_Score":0,"Tags":"python,azure,azure-cloud-services,playwright,playwright-python","A_Id":65929785,"CreationDate":"2021-01-28T01:29:00.000","Title":"How to run python playwright on Azure Cloud using Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"Really weird problem here. 
I have a Python Application running inside a Docker Container which makes requests in different threads to a http restapi. When I run the Container, I get the error:\nERROR - host not reachable abc on thread abc. Stopping thread because of HTTPConnectionPool(host='corporate.proxy.com', port=111111): Max retries exceeded with url: http:\/\/abc:8080\/xyz (Caused by ProxyError('Cannot connect to proxy.', RemoteDisconnected('Remote end closed connection without response')))\nWhen I log in onto the docker host and make the request with curl, then it works.\nWhen I execute the request inside the docker container (docker exec ....), then it works.\nWhen I start the python interpreter inside the container and make the request with the requests module (like application does it), then it works.\nThe Container is attached to the host network of the docker host machine\nDid anyone had also an issue like this?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":312,"Q_Id":65933402,"Users Score":1,"Answer":"Thanks to @Tarique and others I've found the solution:\nI've added a startup delay of 30 seconds to the container to connect to the docker host network correctly. Then startet the requests.session. 
Additionally I removed the http_proxy and https_proxy env var from the container.","Q_Score":1,"Tags":"python,docker,proxy,python-requests","A_Id":65934707,"CreationDate":"2021-01-28T08:22:00.000","Title":"Python Docker Container gets ProxyError, despite I can connect to Server manually","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I want to get Jenkins host detail,\nIt seems for windows its present as COMPUTERNAME env variable in \"http:\/\/\/systemInfo\" URL.\nBut for Linux hosts, I don't see this variable present.\nIs there any way that I can fetch the Jenkins host (where the Jenkins is running) using python?\ndon't want to use a groovy script as I want to do it w\/o running any job.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":86,"Q_Id":65944648,"Users Score":1,"Answer":"You can use HOSTNAME from Environment Variables from http:\/\/\/systemInfo.","Q_Score":0,"Tags":"python,jenkins","A_Id":65945568,"CreationDate":"2021-01-28T20:21:00.000","Title":"How to get jenkins host detail?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"Long story short, I need to call a python script from a Celery worker using subprocess. This script interacts with a REST API. 
I would like to avoid hard-coding the URLs and django reverse seems like a nice way to do that.\nIs there a way to use reverse outside of Django while avoiding the following error?\n\ndjango.core.exceptions.AppRegistryNotReady: Apps aren't loaded yet.\n\nI would prefer something with low-overhead.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":101,"Q_Id":65959394,"Users Score":0,"Answer":"I am using a custom manager command to boot my django app from external scripts. It is like smashing a screw with a hammer, but the setup is fast and it takes care for pretty much everything.","Q_Score":1,"Tags":"python,django","A_Id":65959971,"CreationDate":"2021-01-29T17:49:00.000","Title":"Is there a way to use reverse outside of a Django App?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I want to implement a simple source code that DROPs all RST packets that come into the computer using Python. What should I do?\nLinux servers can be easily set up using the iptables command, but I want to make it Python for use on Mac, Linux, and Windows systems.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":31,"Q_Id":65970822,"Users Score":0,"Answer":"Dropping RST packets is a function of the networking firewall built into your operating system.\nThere is only one way to do it on Linux: with iptables. You could use Python to instruct iptables.\nWindows has its own way to add firewall rules. MacOS also has its own way, and each of them is different from the other.\nThere is no single common way to do this. 
Therefore, there is no single common way to do this with Python.","Q_Score":0,"Tags":"python,packet","A_Id":65971659,"CreationDate":"2021-01-30T17:10:00.000","Title":"How to drop RST packets using Python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"Is there a way to perform an exact mirror of local vs S3, i.e. if I rename a file locally, is there a way to apply that to S3 as well?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":57,"Q_Id":65974119,"Users Score":0,"Answer":"If two locations are synchronized and then a local file is renamed, then the next time that aws s3 sync is run, the file will be treated as a new file and it will be copied to the destination.\nThe original file in the destination will remain untouched. However, if the --delete option is used, then the original file in the destination will be deleted.\nThe sync command does not rename remote files. It either copies them or deletes them.\nThere are some utilities that can mount Amazon S3 as a virtual disk, such that changes are updated on Amazon S3. This is great for copying data, but is not recommended for production usage at high volumes.","Q_Score":0,"Tags":"python,amazon-web-services,amazon-s3,boto3","A_Id":65975700,"CreationDate":"2021-01-30T23:18:00.000","Title":"AWS S3 Sync renamed files","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm trying to run an automation job in Python that restarts a deployment in a Kubernetes cluster. I cannot install kubectl on the box due to limited permissions.
Does anyone have a suggestion or solution for this?\nThank you.","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":1450,"Q_Id":65996468,"Users Score":1,"Answer":"There is no atomic corresponding operation to kubectl rollout restart in the Kubernetes clients. This is an operation that is composed of multiple API calls.\nWhat to do depends on what you want. To just get a new Pod of the same Deployment you can delete a Pod; alternatively, you could add or change an annotation on the Deployment to trigger a new rolling deployment.","Q_Score":0,"Tags":"python,kubernetes","A_Id":65996667,"CreationDate":"2021-02-01T16:32:00.000","Title":"Python client equivalent of `kubectl rollout restart`","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have had a huge issue with VS Code for many weeks. One day VS Code stopped being able to run any Python file.
I get the message:\n\nbash: C:\/Users\/rapha\/AppData\/Local\/Programs\/Python\/Python38\/python.exe: No such file or directory\n\nI have uninstalled Python and VS Code many times to properly add Python 3.8 to my Windows path, but I still get the error.\nHave you got any idea?\nThank you very much","AnswerCount":5,"Available Count":3,"Score":0.0399786803,"is_accepted":false,"ViewCount":14834,"Q_Id":65999975,"Users Score":1,"Answer":"Go to the VS Code preferences, and under interpreter, you'll find Interpreter Path, so set that to the path of your python installation, restart VS Code, and you should be good.","Q_Score":4,"Tags":"python,python-3.x,bash,visual-studio-code","A_Id":65999997,"CreationDate":"2021-02-01T20:53:00.000","Title":"VS Code can't find Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have had a huge issue with VS Code for many weeks. One day VS Code stopped being able to run any Python file.
I get the message:\n\nbash: C:\/Users\/rapha\/AppData\/Local\/Programs\/Python\/Python38\/python.exe: No such file or directory\n\nI have uninstalled Python and VS Code many times to properly add Python 3.8 to my Windows path, but I still get the error.\nHave you got any idea?\nThank you very much","AnswerCount":5,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":14834,"Q_Id":65999975,"Users Score":0,"Answer":"I had this same problem, but I found a different solution;\nin settings.json I had\n\"python.defaultInterpreterPath\": \"D:\\Program Files\\Python310\\python.exe\"\nbut even this was getting ignored for some reason!\nSo, I looked at $ENV:path in the powershell loaded in vscode, and the $ENV:path in the standard commandline powershell in windows, and they were different!\nIt seems that if you have a terminal open in VSCode, it remembers the $ENV from that terminal, even if you completely restart vscode or even if you reboot your computer.\nWhat worked for me (by accident) is, close all terminal windows (and possibly anything else terminal\/powershell related that's open) and give it another try!\nIf it still doesn't work, compare the $ENV:Path values again, and see if they're still different!","Q_Score":4,"Tags":"python,python-3.x,bash,visual-studio-code","A_Id":70574397,"CreationDate":"2021-02-01T20:53:00.000","Title":"VS Code can't find Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have had a huge issue with VS Code for many weeks. One day VS Code stopped being able to run any Python file.
I get the message:\n\nbash: C:\/Users\/rapha\/AppData\/Local\/Programs\/Python\/Python38\/python.exe: No such file or directory\n\nI have uninstalled Python and VS Code many times to properly add Python 3.8 to my Windows path, but I still get the error.\nHave you got any idea?\nThank you very much","AnswerCount":5,"Available Count":3,"Score":0.0399786803,"is_accepted":false,"ViewCount":14834,"Q_Id":65999975,"Users Score":1,"Answer":"I have installed VS Code Insiders and it works perfectly. I'm happy. It doesn't fix the issue but it's a great alternative.\nEdit: The issue came back","Q_Score":4,"Tags":"python,python-3.x,bash,visual-studio-code","A_Id":66000514,"CreationDate":"2021-02-01T20:53:00.000","Title":"VS Code can't find Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"Sorry for my bad English.\nI use a PLC programmed with the MERVIS software, and I use a BACnet server to get my variables into my HMI (a Weintek panel PC with EasyBuilder Pro).\nEverything I have made works, but I'm not happy with EasyBuilder Pro and I want to make my own HMI. I decided to build my application with Qt in C++.\nI'm a physicist originally, so I'm learning little by little (I have a basic knowledge of Python, C++ and Structured Text). I know nothing about how to build a BACnet client. Do you have any idea where I can find some simple examples of communicating with my PLC? I have found nothing, and I need to learn this for my project.\nSo I have my PLC, linked over Ethernet to the PC where I build my HMI.
In the future I want to put this application on a touch panel PC running Windows and link it to my PLC with the MERVIS software.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":55,"Q_Id":66029193,"Users Score":0,"Answer":"If I'm clear on the question, you could check out the 'BACnet Stack' project source code or even the 'VTS' source code too - for a C\/C++ (language) reference.\nOtherwise YABE is a good project in C# (language), but there's also a BACnet NuGet package available for C# too - along with the mechanics that underpin the YABE tool.","Q_Score":0,"Tags":"python,c++,client,bacnet,human-interface","A_Id":67459182,"CreationDate":"2021-02-03T14:09:00.000","Title":"Create Bacnet client variable automate","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have this path on my Mac:\n\/usr\/local\/lib\/python3.8\/site-packages but I don't have \/usr\/local\/bin\/python3.8, which should be the interpreter.\nCurrently, my pip3 install command installs packages into \/usr\/local\/lib\/python3.8\/site-packages, but I can't use python3.8 since I don't have the interpreter. I don't care which version of python I use.
I just want to install packages into a directory that I can use.\nSo please help me with one of these questions:\n\nInstall the Python 3.8 interpreter so I can use packages installed by pip3.\n\nOR\n\nChange the default pip3 installation path to another directory such as \/usr\/local\/lib\/python3.7\/site-packages, which I already have.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":880,"Q_Id":66038066,"Users Score":0,"Answer":"You could do either of them.\nInstalling Python 3.8 should solve your problem,\nor\nhave you tried renaming the python3.8 directory to python3.7?\n(Make sure you have Python 3.7 by running \"python3 --version\" in the terminal)\n\n\/usr\/local\/lib\/python3.8\/site-packages ->\n\/usr\/local\/lib\/python3.7\/site-packages","Q_Score":1,"Tags":"python,python-3.x,pip,interpreter,site-packages","A_Id":66038244,"CreationDate":"2021-02-04T01:18:00.000","Title":"I have the site-packages for Python 3.8 but not the interpreter","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm using Task Scheduler to execute python run.py. It works, but the Python interpreter pops up. I want to run run.py in the background without any interpreter popping up.\nHow can I do this?
In Linux I'd just do python run.py & to get it to run in the background silently, but I'm not sure how to achieve the same in Windows with Task Scheduler.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":655,"Q_Id":66055158,"Users Score":0,"Answer":"You can just change the .py extension to .pyw\nand the python file will run in the background.\nAnd if you want to terminate it or check if it is actually running in the background,\nsimply open Task Manager and go to Processes; you will see pythonw running\nthere.\nEDIT\nAs you mentioned, this doesn't seem to work from the command line, because changing the file's extension simply tells your system to open the file with the pythonw application instead of python.\nSo when you are running this via the command line as python .\\run.pyw, even with the .pyw this will run with python.exe instead of pythonw.exe.\nSolution:\nAs you mentioned in the comments, run the file as pythonw .\\run.pyw or .\\run.py\nor just double-click the run.pyw file.","Q_Score":0,"Tags":"python,windows,scheduled-tasks","A_Id":66055185,"CreationDate":"2021-02-04T23:02:00.000","Title":"How to hide window when running a Task Scheduler task","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm trying to build a path in Python (windows) and frustratingly enough it gives me the wrong path every time. The path I'm trying to build is C:\\Users\\abc\\Downloads\\directory\\[log file name].\nSo when I use print(os.getcwd()) it returns C:\\Users\\abc\\Downloads\\directory which is fine. But when I try to use the os join in python, (os.path.join(os.path.abspath(os.getcwd()),GetServiceConfigData.getConfigData('logfilepath')))\nit returns only C:\\Logs\\LogMain.log and not the desired output.
(Path.cwd().joinpath(GetServiceConfigData.getConfigData('logfilepath'))) also returns the same result.\nlogfilepath is an XML string ","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":192,"Q_Id":66063345,"Users Score":1,"Answer":"Thanks for all the help; in the end it was solved by removing one backslash.\n\nto\n","Q_Score":0,"Tags":"python","A_Id":66078582,"CreationDate":"2021-02-05T12:31:00.000","Title":"Python windows path","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I just uninstalled and reinstalled python on my Windows machine. Before I uninstalled my previous version I was able to just double-click on a python script and it would open the command prompt, run the script, and close automatically. After re-installing with the newest version (3.9), I am no longer able to execute the script like that with a double-click.\nClearly I had done something special last time to set that up for myself, but I don't remember what it was. Any idea how I can get that double-click deal going again?","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1269,"Q_Id":66073004,"Users Score":0,"Answer":"There will be an \"Open With\" option after you right-click on the file; go and choose CMD. I hope it helps; if not, then sorry. Because I use Parrot OS","Q_Score":2,"Tags":"python,python-3.x,windows","A_Id":66073027,"CreationDate":"2021-02-06T02:26:00.000","Title":"How to execute .py file with double-click","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm not a Python programmer and I rarely work with Linux, but I'm forced to use it for a project.
The project is fairly straightforward: one task constantly gathers information as a single, often-updating numpy float32 value in a class; the other task, which is also running constantly, needs to occasionally grab the data in that variable asynchronously, but not in a very time-critical way, and entirely on Linux. My default for such tasks is the creation of a thread, but after doing some research it appears as if Python threading might not be the best solution for this, from what I'm reading.\nSo my question is this: do I use multithreading, multiprocessing, concurrent.futures, asyncio, or (just thinking out loud here) some stdin triggering \/ stdout reading trickery, or something similar to DDS on Linux that I don't know about, on two separate running scripts?\nJust to append, both tasks do a lot of IO: task 1 does a lot of USB IO, the other task does a bit of serial and file IO. I'm not sure if this is useful. I also think the importance of resetting the data once pulled and having as little downtime in task 1 as possible should be stated. Having two programs talk via a file probably won't satisfy this.\nAny help would be appreciated; this has proven to be a difficult use case to google for.","AnswerCount":2,"Available Count":1,"Score":0.2913126125,"is_accepted":false,"ViewCount":32,"Q_Id":66115598,"Users Score":3,"Answer":"Threading will probably work fine; a lot of the problems with the GIL (global interpreter lock) are overhyped. You just need to make sure both threads provide active opportunities for the scheduler to switch contexts on a regular basis, typically by calling sleep(0). Most threads run in a loop, and if the loop body is fairly short then calling sleep(0) at the top or bottom of it on every iteration is usually enough. If the loop body is long you might want to put a few more in along the way.
It\u2019s just a hint to the scheduler that this would be a good time to switch if other threads want to run.","Q_Score":0,"Tags":"python,python-3.x,multithreading,multiprocessing","A_Id":66115896,"CreationDate":"2021-02-09T08:45:00.000","Title":"Python3 help, 2 concurrently running tasks, one needs data from the other","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"We have an Azure HTTP-triggered function app (f1) which talks to another HTTP-triggered function app (f2) that has a prediction algorithm.\nDepending upon the input request size from function (f1), the response time of function (f2) increases a lot.\nWhen the response time of function (f2) is high, the functions get timed out at 320 seconds.\n\nOur requirement is to provide the prediction algorithm as a\nservice (f2)\n\nAn orchestration API (f1) which will be called by the client;\nbased on the client's input request, (f1) will collect the\ndata from the database, do data validation and pass the data to\n(f2) for prediction\n\nAfter prediction, (f2) would respond with the predicted result to\n(f1)\n\nOnce (f1) receives the response from (f2), (f1) would respond\nback to the client.\n\n\n\nWe are searching for an alternative Azure approach or solution which will\nreduce the latency of the API; the condition is to have f2\nas a service.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":270,"Q_Id":66150680,"Users Score":0,"Answer":"If it takes more than 5 minutes in total to validate user input, retrieve additional data, feed it to the model and run the model itself, you might want to look at something different than APIs that return a response synchronously.\nWith these kinds of running times, I would recommend an asynchronous pattern, such as: F1 stores all data on an Azure Queue, F2 (Queue-triggered) runs the model and
stores the data in a database. The requestor can monitor the database for updates. If F1 takes the most time, then create an F0 that stores the request on a Queue and make F1 queue-triggered as well.","Q_Score":1,"Tags":"python,azure,rest,azure-functions,azure-api-management","A_Id":66150785,"CreationDate":"2021-02-11T07:49:00.000","Title":"Need better approach for azure api to process large amount of data","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have been working on a project which requires me to get the icon from the icon theme used on Linux, so that I can use it with the Gtk Pixbuf like how gnome-system-monitor displays the icon for all the processes; that is what I want to achieve. Any ideas about how to do this?\nI am using Python with Gtk on PopOS 20.10.","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":200,"Q_Id":66184610,"Users Score":1,"Answer":"Gio.AppInfo in the Gtk library stack is a good point to start.\nIf you are looking for the approach that is used by gnome-system-monitor, then the prettytable.c file will be the one you need to check.\nThere is one more approach: scanning the \/usr\/share\/applications\/ directory and creating a file monitor for this directory.
All the icons of the applications that are in the menu can be found here.","Q_Score":0,"Tags":"python,linux,user-interface,process,gtk","A_Id":69208718,"CreationDate":"2021-02-13T11:32:00.000","Title":"Get icon of a process (like in gnome-system-monitor) to be used with Gtk and python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm watching a video on YouTube that is telling me to type cd documents and then within that folder type cd python to connect python, but every time I try this it says \"the system cannot find the path specified\". I don't understand why it is saying this because I can see that python is in my documents folder.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":27,"Q_Id":66188745,"Users Score":0,"Answer":"If you are using Windows CMD or Powershell, use the command py instead of python.\nExample: py myscript.py (or py.exe .\\myscript.py also works)","Q_Score":0,"Tags":"python-3.x","A_Id":66188811,"CreationDate":"2021-02-13T18:54:00.000","Title":"connecting python to the command prompt","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm watching a video on YouTube that is telling me to type cd documents and then within that folder type cd python to connect python, but every time I try this it says \"the system cannot find the path specified\". I don't understand why it is saying this because I can see that python is in my documents folder.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":27,"Q_Id":66188745,"Users Score":0,"Answer":"if you want to run the python code which is located in your documents folder, then
open cmd and type \"cd documents\", and if your python code is in the same folder then type \"python 'filename'.py\" and press Enter to run it","Q_Score":0,"Tags":"python-3.x","A_Id":66188829,"CreationDate":"2021-02-13T18:54:00.000","Title":"connecting python to the command prompt","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"N.B.: Please do not lock\/delete this question as I did not find any relevant answer.\nI need to convert a .py file to .exe. I have tried pyinstaller, py2exe, and auto_py_to_exe. Unfortunately, the output files are very big. For example, if I simply convert a python file that just contains print('Hello world!'), the output folder becomes 22 MB. The --exclude-module option does not reduce the size much.\nIf I write the same code in C and compile it with Dev-C++, the file size will be below 1 MB.\nSo, is there any way to convert the .py file to a .exe file with a smaller file size?","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":614,"Q_Id":66191542,"Users Score":2,"Answer":"As mentioned by Jan in the comments, pyinstaller bundles everything together that is needed to run the program. So realistically the only way to make the file smaller is to make sure the target computer has a python environment.
Long story short, if you must use a .exe then they are not going to get any smaller unless you re-write it so it needs fewer external libraries, etc.","Q_Score":0,"Tags":"python,python-3.x,pyinstaller,py2exe","A_Id":66193322,"CreationDate":"2021-02-14T01:11:00.000","Title":"How to convert .py file to .exe file with a smaller file size?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"C:\\Users\\Dell>pip install git-review\nFatal error in launcher: Unable to create process using '\"c:\\python39\\python.exe\" \"C:\\Python39\\Scripts\\pip.exe\" install git-review': The system cannot find the file specified\nI am getting this error. I have tried many ways to resolve it:\nby installing pip and Python again,\nand by trying old questions about this error, but I was unable to solve it.","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":381,"Q_Id":66206292,"Users Score":0,"Answer":"As @programandoconro mentioned in a comment,\npython -m pip install --upgrade --force-reinstall pip and then python -m pip install git-review\nworked for me","Q_Score":1,"Tags":"python,python-3.x,pip,git-review","A_Id":66206611,"CreationDate":"2021-02-15T10:25:00.000","Title":"Git review is not installing using PIP","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"When using Azure Blobs the path separator is a slash (\/). As Blobs are flat there are no actual directories. However, often there are prefixes that should be treated as directories.\nWhat are the right methods to deal with such paths? os.path is OS dependent and will assume backslashes e.g.
on Windows machines.\nSimply using str.split('\/') and similar does not feel right, as I would like to have the features from os.path to combine paths, and I don't want to care about trailing and leading slashes and so on.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":94,"Q_Id":66222401,"Users Score":0,"Answer":"I normally do use str.split('\/'). I don't know if it's the \"right\" method but it works fine for me. Later on I can use os.path.join() to combine the resulting strings again (when needed).","Q_Score":0,"Tags":"python,azure,path,blob","A_Id":66222454,"CreationDate":"2021-02-16T10:10:00.000","Title":"What are correct python path splitting methods for use with Azure Blob","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":1},{"Question":"I'd like to use the Google Error Reporting Client library (from google.cloud import error_reporting).\nBasically, you instantiate a client:\nclient = error_reporting.Client(service=\"my_script\", version=\"my_version\")\nand then you can report errors using:\n\nclient.report(\"my message\") or\nclient.report_exception() when an exception is caught\n\nI have 3 environments (prod, staging and dev). They are each set up on their own Kubernetes cluster (with their own namespace). When I look at the Google Cloud Error Reporting dashboard, I would like to quickly locate on which environment and in which class\/script the error was raised.\nUsing service is a natural choice to describe the class\/script, but what about the environment?\nWhat is the best practice? Should I use the version to store that, e.g.
version=\"staging_0.0.2\"?\nMany thanks in advance\nCheers,\nLamp'","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":123,"Q_Id":66238791,"Users Score":0,"Answer":"I think the Error Reporting service is deficient (see comment above).\nSince you're using Kubernetes, how about naming your Error Reporting services to reflect Kubernetes Service names: ${service}.${namespace}.svc.cluster.local?\nYou could|should replace the internal cluster.local domain part with some unique external specifier (FQDN) for your cluster: ${service}.${namespace}.${cluster}\n\nNOTE These needn't be actual Kubernetes Services but some way for you to uniquely identify the thing within a Kubernetes cluster: my_script.errorreporting.${namespace}.${cluster}","Q_Score":0,"Tags":"python,kubernetes,error-reporting,google-cloud-logging,google-cloud-error-reporting","A_Id":66374446,"CreationDate":"2021-02-17T09:06:00.000","Title":"Google Cloud - Error Reporting Client Libraries","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I'm using Ubuntu 16.04 and trying to use a vim plugin that requires Python 3.6 (YouCompleteMe). I used update-alternatives to set python3.6 as the default python and python3 interpreter, but vim is still using python3.5.\nIs there a way to tell vim to use the python3.6 interpreter?\nThanks in advance","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":103,"Q_Id":66241307,"Users Score":3,"Answer":"Vim uses the Python interpreters it was compiled with. No setting will affect it. If you can't find a Vim binary with the desired Python support, the only way to make Vim use Python3.6 is to compile it with Python3.6 yourself.
See the --enable-python3interp, --with-python3-command and --with-python3-config-dir options to Vim's configure.","Q_Score":1,"Tags":"python,vim,ubuntu-16.04","A_Id":66241359,"CreationDate":"2021-02-17T11:44:00.000","Title":"force VIM to use python3.6 in ubuntu 16.04","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"When I run\nsudo apt install mysql-workbench-community\nI get the error\nThe following packages have unmet dependencies:\nmysql-workbench-community : Depends: libpython3.7 (>= 3.7.0) but it is not installable\nE: Unable to correct problems, you have held broken packages.\nI then ran\nsudo dpkg --get-selections | grep hold\nwhich did not return anything.\nTyping\npython3 -v\nproduces an error.\nIf I type\npython3 --version\nI get\nPython 3.8.5\nIf I try to run\nsudo apt install libpython3.7\nI get the error\nE: Package 'libpython3.7' has no installation candidate\nI cannot come up with a way to fix this. I have recently upgraded from 19.\nHelp much appreciated","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":507,"Q_Id":66260846,"Users Score":0,"Answer":"This was caused by running an older version of MySQL.\nThe fix was to remove the MySQL repository for tools and install the workbench via snap.","Q_Score":0,"Tags":"python-3.x,mysql-workbench","A_Id":66271946,"CreationDate":"2021-02-18T13:22:00.000","Title":"cannot install mysql-workbench-community on ubuntu 20.04","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"The command is:\ndocker run -v \"$PWD\":\/var\/task \"lambci\/lambda:build-python3.6\" \/bin\/sh -c \"pip install -r \/var\/task\/requirements.txt -t
python\/lib\/python3.6\/site-packages\/; exit\"\nAnd I am running it from the same folder as the requirements.txt file.\nI get the following error: ERROR: Could not open requirements file: [Errno 2] No such file or directory: '\/var\/task\/requirements.txt'","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":90,"Q_Id":66283683,"Users Score":0,"Answer":"This seems to be a \"Docker on WSL2\" issue, not a Docker issue.","Q_Score":0,"Tags":"python,docker,pip,windows-subsystem-for-linux","A_Id":66284304,"CreationDate":"2021-02-19T19:12:00.000","Title":"Why is Docker saying that the requirements.txt file doesn't exist?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm running a Ubuntu server on Azure's cloud. I run the command nohup python3 scanner.py to allow me to run my script and close the putty terminal and let it keep running. The problem is now I have no way to give input to the process, and if I want to terminate it I have to use the kill command.\nWhat's the best way to disconnect\/connect to a running process on ubuntu server command line","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":28,"Q_Id":66285212,"Users Score":0,"Answer":"There are a couple of ways, but none are particularly fantastic. First, you could rework the script such that it has some form of socket connection, and expects input on that, and then write yet another script to send information to that socket. This is very work-heavy, but I've done it before. 
Second, you could use something like screen or tmux instead of nohup so that you can reconnect to that terminal session later, and have direct access to stdout\/stdin.","Q_Score":1,"Tags":"python,azure,ubuntu,server,remote-server","A_Id":66285508,"CreationDate":"2021-02-19T21:27:00.000","Title":"Communicate to a running process ubuntu python server","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I can't get my Python program (which runs in Terminal with no problem) to run through cron.\nHere's the crontab command I use:\n38 11 * * * \/usr\/bin\/python3 \/home\/pi\/Cascades2\/03_face_recognition.py >> \/home\/pi\/Cascades2\/cron.log 2>&1\nThe error message that appears in the cron.log file is:\n\n: cannot connect to X server\n\nWhat's the problem?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":160,"Q_Id":66290650,"Users Score":0,"Answer":"Like tocode suggested, adding export DISPLAY=:0.0 && to the script works perfectly.\n38 11 * * * export DISPLAY=:0.0 && \/usr\/bin\/python3 \/home\/pi\/Cascades2\/03_face_recognition.py >> \/home\/pi\/Cascades2\/cron.log 2>&1","Q_Score":0,"Tags":"python,cron","A_Id":66292030,"CreationDate":"2021-02-20T10:47:00.000","Title":"Unable to run a Python program with cron: cannot connect to X server","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm trying to run some python code from my c program by using the python.h header but I keep getting\nfatal error: Python.h: No such file or directory\nAfter checking my anaconda environment, I have found a Python.h file in ...Anaconda\/envs\/myenv\/include\nI tried adding this path to my system 
variable's PATH but it didn't work. How can I include the Python.h file from my anaconda env to my program?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":89,"Q_Id":66349253,"Users Score":1,"Answer":"PATH is only used for executables, not C headers. Tell your compiler or build system to add this path. On gcc and clang, for example, it's the -I flag.","Q_Score":0,"Tags":"python,c","A_Id":66349443,"CreationDate":"2021-02-24T10:44:00.000","Title":"Missing Python.h in Windows","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"With the Google Cloud command line CLI running you can specify a local jar with the --jars flag. However I want to submit a job using the Python API. I have that working but when I specify the jar, if I use the file: prefix, it looks on the Dataproc master cluster rather than on my local workstation.\nThere is an easy workaround which is to just upload the jar using the GCS library first but I wanted to check if the Dataproc client libraries already supported this convenience feature.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":72,"Q_Id":66376451,"Users Score":3,"Answer":"Not at the moment. 
As you mentioned, the most convenient way to do this strictly using the Python client libraries would be to use the GCS client first and then point to your job file in GCS.","Q_Score":2,"Tags":"python,google-cloud-dataproc","A_Id":66378101,"CreationDate":"2021-02-25T21:08:00.000","Title":"Can I Upload a Jar from My Local System using Cloud Dataproc Python API?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I am trying to use aquatone from python. It works fine when I run it from VS Code or the terminal using either os or subprocess. But when it is started from a parent program which runs on startup as a service, it doesn't work anymore. My guess is that it is due to the parent program being run as root.\nThe parent program requires root privileges.\nSo is there any way I can start aquatone as a non-root user from within Python?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":45,"Q_Id":66395449,"Users Score":1,"Answer":"It depends where you've installed aquatone. By default, if you're using pip, aquatone will be installed to python\/site-packages, so in order to access the package and Python interpreter any app that runs Python will need to be granted root privileges. This is the simplest way to solve the problem.","Q_Score":1,"Tags":"python,service","A_Id":66395523,"CreationDate":"2021-02-27T03:55:00.000","Title":"How to execute bash commands as non root user from Python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm new to digdag. 
Below is an example workflow to illustrate my question:\n_export:\nsh:\nshell: [\"powershell.exe\"]\n_parallel: false\n+step1:\nsh>: py \"C:\/a.py\"\n+step2:\nsh>: py \"C:\/b.py\"\nThe second task runs right after the first task starts. However, I want the second task to wait for the first task to complete successfully.\nI modified the first task a.py to just raise ValueError, but the second task still runs right after the first task starts.\nThis is not consistent with my understanding of the digdag documentation, but I don't know what is going wrong with my workflow. Could someone please advise?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":98,"Q_Id":66415245,"Users Score":0,"Answer":"No. This can not be solved by redownloading.","Q_Score":0,"Tags":"python,workflow,etl,pipeline,directed-acyclic-graphs","A_Id":67528412,"CreationDate":"2021-03-01T00:26:00.000","Title":"digdag shell script tasks complete instantaneously","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"In Settings > Advanced Settings Editor > Keyboard Shortcuts, I am trying to re-map some of the keys to use with the Command key on a MacBook. I tried \"Cmd\" and \"Command\", both of which didn't work. Is there a key defined for the Command key in JupyterLab?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":179,"Q_Id":66432651,"Users Score":0,"Answer":"Yes, I had the same problem. 
The key \"Command\" is called \"Accel\" in JupyterLab.","Q_Score":0,"Tags":"python,jupyter,jupyter-lab","A_Id":67377528,"CreationDate":"2021-03-02T02:54:00.000","Title":"How to use Mac command key in Jupyter lab","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I want to deploy a flask app with http.server (nginx not installed by admin). I want any user who logs into the cluster to access it. Is it possible?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":26,"Q_Id":66436982,"Users Score":1,"Answer":"HTTP server interfaces are visible to all users that are connected to a machine that has direct network access to the machine your server is running on.\nIf you need them to access the interface, just provide the IP address and port where the server is running and they will be able to access it as users of the Flask app you are running. Just make sure you allow the users to access the needed resources.","Q_Score":0,"Tags":"python,linux,http,server","A_Id":66437057,"CreationDate":"2021-03-02T09:49:00.000","Title":"If I run an http server in my user account (linux cluster) how to enable other users to access it?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I have a Windows AD and would like to let my users run Python. With administrative credentials, the user installs Python, installs the needed libraries, but when they attempt to run the code, the libraries aren't being found. 
You run a cmd with elevated permissions, pip install the package, and you get a message that the package has already been installed.\nWhat would be the correct way to install Python for Windows domain users where they can run code, preferably by not forcing them to be administrators :) ?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":36,"Q_Id":66442263,"Users Score":0,"Answer":"Installing Python from the MS Store can be done by restricted users. Installing packages, as @Raspy said, can be done without running an elevated command prompt.","Q_Score":0,"Tags":"python-3.x,active-directory","A_Id":66535701,"CreationDate":"2021-03-02T15:28:00.000","Title":"Installing Python for AD users","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"Hey everyone, I installed Python 3.9 on my Mac using the Homebrew package manager, but now I do not know how to install packages to it for use.\nCan anyone please tell me, thanks!","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":642,"Q_Id":66444227,"Users Score":0,"Answer":"You should first do some research on the Python virtual environment, but the final answer to your question is to use pip install for installing Python packages. 
Be aware that there are other options out there, but pip is the most prevalent.","Q_Score":1,"Tags":"python,macos,homebrew","A_Id":66444366,"CreationDate":"2021-03-02T17:28:00.000","Title":"Installing packages in python installed using home-brew","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm new to python coding and I'm using OSX.\nI installed Python 3.9 and openpyxl using Brew (if I understood correctly Brew puts everything in \/usr\/local).\nWith Brew Cask I also installed Spyder 4.\nIn Spyder 4 I didn't find openpyxl so, following a guide, I tried to change the interpreter, selecting the Python 3.9 installed under \/usr\/local (here the path \"\/usr\/local\/Cellar\/python@3.9\/3.9.1_8\/Frameworks\/Python.framework\/Versions\/3.9\/bin\/python3.9\").\nI get this error\n\n\"Your Python environment or installation doesn't have the spyder\u2011kernels module or the right version of it installed (>= 1.10.0 and < 1.11.0). Without this module is not possible for Spyder to create a console for you.\"\n\nI'm stuck and I need help. Thanks.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":129,"Q_Id":66446558,"Users Score":0,"Answer":"Solved.\nI installed spyder-kernels using pip3.","Q_Score":0,"Tags":"python,macos,spyder,openpyxl","A_Id":66458912,"CreationDate":"2021-03-02T20:20:00.000","Title":"How to set Spyder 4 for my existing python environment (OSX)","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am using Windows subsystem for Linux WSL with the Ubuntu App (Ubuntu 20.04 LTS). I have installed Anaconda (Anaconda3-2020.11-Linux-x86_64) on my Windows 10 Education 1909. 
I have Jupyter notebook, and can run this in Firefox on my computer and it seems to be working properly. However, when I try to install packages such as:\nUbuntu console: pip install scrapy\nThen the Jupyter notebook cannot find it.\nJupyter notebook: import scrapy\nI am currently working in the base environment, but I believe that Jupyter is actually running python from a different source (I also have Anaconda on my Windows).\nI confirmed this by running:\nimport sys and sys.version both in the WSL and in the Jupyter notebook.\nJupyter notebook returns: '3.6.6 |Anaconda, Inc.| (default, Oct 9 2018, 12:34:16) \\n[GCC 7.3.0]'\nWSL returns: '3.8.5 (default, Sep 4 2020, 07:30:14) \\n[GCC 7.3.0]'\nconfirming that the \"wrong python is used\".\nI am hesitant to delete my Windows Anaconda since I have my precious environments all set up there and am using them constantly.\nThe specific package that forces me to Linux can be found at \"http:\/\/www.nupack.org\/downloads\" but requires registration for downloads.\nI do not have Anaconda or python in my Windows environment variables.\nI would be happy if I either knew where to install my packages (as long as they are in Linux), or if someone knows how to force Jupyter to use the Anaconda from WSL.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":558,"Q_Id":66474736,"Users Score":0,"Answer":"Thanks to Panagiotis Kanavos I found out that I had both Anaconda3 and Miniconda3 installed and that the WSL command line used the Miniconda3 version while Jupyter Notebook used Anaconda3.\nThere is probably a way of specifying which version to use, but for me I simply deleted Miniconda and it now works.","Q_Score":1,"Tags":"python,jupyter-notebook,windows-subsystem-for-linux,anaconda3","A_Id":66489815,"CreationDate":"2021-03-04T12:09:00.000","Title":"Anaconda on Windows subsystem for Linux (WSL) uses the \"wrong\" anaconda when creating a Jupyter Notebook","Data Science and Machine 
Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":1},{"Question":"I have created python desktop software. Now I want to market it as a product. But my problem is, anyone can decompile my exe file and they will get the actual code.\nSo is there any way to encrypt my code and convert it to exe before deployment? I have tried different ways.\nBut nothing is working. Is there any way to do that? Thanks in advance","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":528,"Q_Id":66491254,"Users Score":0,"Answer":"You can install pyinstaller per pip install pyinstaller (make sure to also add it to your environment variables), then open a shell in the folder where your file is (shift+right-click somewhere where no file is and \"open PowerShell here\") and then do \"pyinstaller --onefile YOUR_FILE\".\nIf a dist folder is created, take out the exe file and delete the build folder and the .spec file.\nAnd there you go with your standalone exe file.","Q_Score":2,"Tags":"python,exe","A_Id":66491565,"CreationDate":"2021-03-05T10:55:00.000","Title":"How to encrypt and convert my python project to exe","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a python script whose dependencies are in a python environment. I want to convert the python script to an exe, but before that I want to run the python script without activating the environment, or automatically activate the environment when the script starts. 
Is there any way I can achieve this so it will be easier for other people to use the python script?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":55,"Q_Id":66514222,"Users Score":0,"Answer":"Well, I have found the answer. This can be done by adding just two lines at the start of your script.\nFor Python 3:\nactivate_this = '\/path\/to\/env\/bin\/activate_this.py'\nexec(compile(open(activate_this, \"rb\").read(), activate_this, 'exec'), dict(__file__=activate_this))\nFor Python 2:\nactivate_this = '\/path\/to\/env\/bin\/activate_this.py'\nexecfile(activate_this, dict(__file__=activate_this))","Q_Score":1,"Tags":"python","A_Id":66519795,"CreationDate":"2021-03-07T07:38:00.000","Title":"How can I run python script without activating python environment everytime?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I had a working setup where I'd type pip install some-library and then I could import it into my projects. Then I decided to install miniconda, which installed another version of python (3.8) that my system started defaulting to.\nBy running this command in terminal (I'm on a mac): alias python=\/usr\/local\/bin\/python3 I managed to revert so that when I type python [something], my system uses the python located there (not the newly created one).\nIt seems that it's not as straightforward to get pip to do the same though. pip install some-library just installs stuff for the wrong python version.\nHow can one make pip install some-library install some-library to the python version located in \/usr\/local\/bin\/python3?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":133,"Q_Id":66518181,"Users Score":0,"Answer":"You can try pip3 install some-library for python 3. 
I hope that works fine!","Q_Score":0,"Tags":"python,python-3.x,macos,pip,version","A_Id":66526254,"CreationDate":"2021-03-07T15:31:00.000","Title":"How to make pip install stuff for another version of python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm developing an app which uses Pytesseract and I'm hosting it on PA. Tesseract is preinstalled,\nbut apparently the version is old (3.04); when I run my code I get the error:\n\"TSV output not supported. Tesseract >= 3.05 required\"\nHow can I upgrade it since I can't use sudo apt?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":65,"Q_Id":66525998,"Users Score":0,"Answer":"The latest version of Tesseract is not available on PythonAnywhere yet. It should be available with the next system image later this spring.","Q_Score":0,"Tags":"tesseract,python-tesseract,pythonanywhere","A_Id":66528060,"CreationDate":"2021-03-08T07:48:00.000","Title":"How do I update Tesseract on PA?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am using Androguard to do the analysis of all malware APKs (ransomware, adware, trojans etc.) but I am curious to know something, which is:\nWhenever I analyze the apk using the androguard library in python, I keep getting malware alert notifications from Windows Defender. So my question is if it's safe to turn off the antivirus when I am doing analysis of an infected APK. I am comparing its values and saving them inside a csv file. 
Will my computer still be safe?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":37,"Q_Id":66530342,"Users Score":1,"Answer":"It depends what exactly you are doing with the malware. I would suggest you use some kind of emulator to hide behind an additional layer of protection, like Windows Sandbox or something similar.","Q_Score":0,"Tags":"python,android,apk,malware,androguard","A_Id":66530664,"CreationDate":"2021-03-08T13:01:00.000","Title":"Should I turn off Antivirus when analyzing Malware APK?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"Hello, I have designed some algorithms that we would like to implement in our company's software (start-up), but some of them take too long (10-15 min) as they handle big datasets.\nI am wondering if using, for example, Google Cloud to run my scripts, as it would use more nodes, would make my algorithm run faster.\nIs it the same to run a script locally in Jupyter, for instance, as running it within the cloud?\nThinking of using Spark too.\nThank you","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":35,"Q_Id":66535075,"Users Score":1,"Answer":"I think the only applicable answer is \"it depends\". The cloud is just \"someone else's computer\", so whether it runs faster or not depends on the cloud server it's running on. For example, if it is a data-intensive task with a lot of I\/O it might run faster on a server with an SSD than on your local machine with an HDD. If it's a processor-intensive task, it might run faster if the server has a faster CPU than your local machine has. 
You get the point.","Q_Score":0,"Tags":"python,cloud","A_Id":66535186,"CreationDate":"2021-03-08T18:17:00.000","Title":"Does running scripts from Cloud (AWS\/Google\/Azure) make my algorithms faster?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"When I run the test example: .\/bin\/run-example SparkPi 10 I get the error below.\nEDIT:\nThe problem is due to the fact that I switched to WiFi instead of Ethernet and this changed localhost. @mck's direction to a previous solution helped.\nSolution:\nAdd SPARK_LOCAL_IP in the load-spark-env.sh file located in the spark\/bin directory\nexport SPARK_LOCAL_IP=\"127.0.0.1\"\nI get the error:\n\n WARNING: An illegal reflective access operation has occurred\n WARNING: Illegal reflective access by org.apache.spark.unsafe.Platform (file:\/home\/d\/spark\/jars\/spark-unsafe_2.12-3.1.1.jar) to constructor java.nio.DirectByteBuffer(long,int)\n WARNING: Please consider reporting this to the maintainers of org.apache.spark.unsafe.Platform\n WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations\n WARNING: All illegal access operations will be denied in a future release\n 2021-03-09 15:37:39,164 INFO spark.SparkContext: Running Spark version 3.1.1\n 2021-03-09 15:37:39,214 INFO resource.ResourceUtils: ==============================================================\n 2021-03-09 15:37:39,215 INFO resource.ResourceUtils: No custom resources configured for spark.driver.\n 2021-03-09 15:37:39,215 INFO resource.ResourceUtils: ==============================================================\n 2021-03-09 15:37:39,216 INFO spark.SparkContext: Submitted application: Spark Pi\n 2021-03-09 15:37:39,240 INFO resource.ResourceProfile: Default ResourceProfile created, executor resources: Map(cores -> name: cores, amount: 1, script: , vendor: , memory 
-> name: memory, amount: 1024, script: , vendor: , offHeap -> name: offHeap, amount: 0, script: , vendor: ), task resources: Map(cpus -> name: cpus, amount: 1.0)\n 2021-03-09 15:37:39,257 INFO resource.ResourceProfile: Limiting resource is cpus at 1 tasks per executor\n 2021-03-09 15:37:39,259 INFO resource.ResourceProfileManager: Added ResourceProfile id: 0\n 2021-03-09 15:37:39,335 INFO spark.SecurityManager: Changing view acls to: d\n 2021-03-09 15:37:39,335 INFO spark.SecurityManager: Changing modify acls to: d\n 2021-03-09 15:37:39,335 INFO spark.SecurityManager: Changing view acls groups to: \n 2021-03-09 15:37:39,335 INFO spark.SecurityManager: Changing modify acls groups to: \n 2021-03-09 15:37:39,335 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(d); groups with view permissions: Set(); users with modify permissions: Set(d); groups with modify permissions: Set()\n 2021-03-09 15:37:39,545 WARN util.Utils: Service 'sparkDriver' could not bind on a random free port. You may check whether configuring an appropriate binding address.\n 2021-03-09 15:37:39,557 WARN util.Utils: Service 'sparkDriver' could not bind on a random free port. You may check whether configuring an appropriate binding address.\n 2021-03-09 15:37:39,572 WARN util.Utils: Service 'sparkDriver' could not bind on a random free port. You may check whether configuring an appropriate binding address.\n 2021-03-09 15:37:39,585 WARN util.Utils: Service 'sparkDriver' could not bind on a random free port. You may check whether configuring an appropriate binding address.\n 2021-03-09 15:37:39,597 WARN util.Utils: Service 'sparkDriver' could not bind on a random free port. You may check whether configuring an appropriate binding address.\n 2021-03-09 15:37:39,608 WARN util.Utils: Service 'sparkDriver' could not bind on a random free port. 
You may check whether configuring an appropriate binding address.\n 2021-03-09 15:37:39,612 WARN util.Utils: Service 'sparkDriver' could not bind on a random free port. You may check whether configuring an appropriate binding address.\n 2021-03-09 15:37:39,641 WARN util.Utils: Service 'sparkDriver' could not bind on a random free port. You may check whether configuring an appropriate binding address.\n 2021-03-09 15:37:39,646 WARN util.Utils: Service 'sparkDriver' could not bind on a random free port. You may check whether configuring an appropriate binding address.\n 2021-03-09 15:37:39,650 WARN util.Utils: Service 'sparkDriver' could not bind on a random free port. You may check whether configuring an appropriate binding address.\n 2021-03-09 15:37:39,654 WARN util.Utils: Service 'sparkDriver' could not bind on a random free port. You may check whether configuring an appropriate binding address.\n 2021-03-09 15:37:39,658 WARN util.Utils: Service 'sparkDriver' could not bind on a random free port. You may check whether configuring an appropriate binding address.\n 2021-03-09 15:37:39,663 WARN util.Utils: Service 'sparkDriver' could not bind on a random free port. You may check whether configuring an appropriate binding address.\n 2021-03-09 15:37:39,673 WARN util.Utils: Service 'sparkDriver' could not bind on a random free port. You may check whether configuring an appropriate binding address.\n 2021-03-09 15:37:39,676 WARN util.Utils: Service 'sparkDriver' could not bind on a random free port. You may check whether configuring an appropriate binding address.\n 2021-03-09 15:37:39,682 WARN util.Utils: Service 'sparkDriver' could not bind on a random free port. You may check whether configuring an appropriate binding address.\n 2021-03-09 15:37:39,705 ERROR spark.SparkContext: Error initializing SparkContext.\n java.net.BindException: Cannot assign requested address: Service 'sparkDriver' failed after 16 retries (on a random free port)! 
Consider explicitly setting the appropriate binding address for the service 'sparkDriver' (for example spark.driver.bindAddress for SparkDriver) to the correct binding address.\n at java.base\/sun.nio.ch.Net.bind0(Native Method)\n at java.base\/sun.nio.ch.Net.bind(Net.java:455)\n at java.base\/sun.nio.ch.Net.bind(Net.java:447)\n at java.base\/sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:227)\n at io.netty.channel.socket.nio.NioServerSocketChannel.doBind(NioServerSocketChannel.java:134)\n at io.netty.channel.AbstractChannel$AbstractUnsafe.bind(AbstractChannel.java:550)\n at io.netty.channel.DefaultChannelPipeline$HeadContext.bind(DefaultChannelPipeline.java:1334)\n at io.netty.channel.AbstractChannelHandlerContext.invokeBind(AbstractChannelHandlerContext.java:506)\n at io.netty.channel.AbstractChannelHandlerContext.bind(AbstractChannelHandlerContext.java:491)\n at io.netty.channel.DefaultChannelPipeline.bind(DefaultChannelPipeline.java:973)\n at io.netty.channel.AbstractChannel.bind(AbstractChannel.java:248)\n at io.netty.bootstrap.AbstractBootstrap$2.run(AbstractBootstrap.java:356)\n at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164)\n at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:472)\n at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:500)\n at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)\n at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)\n at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)\n at java.base\/java.lang.Thread.run(Thread.java:834)\n 2021-03-09 15:37:39,723 INFO spark.SparkContext: Successfully stopped SparkContext\n Exception in thread \"main\" java.net.BindException: Cannot assign requested address: Service 'sparkDriver' failed after 16 retries (on a random free port)! 
Consider explicitly setting the appropriate binding address for the service 'sparkDriver' (for example spark.driver.bindAddress for SparkDriver) to the correct binding address.\n at java.base\/sun.nio.ch.Net.bind0(Native Method)\n at java.base\/sun.nio.ch.Net.bind(Net.java:455)\n at java.base\/sun.nio.ch.Net.bind(Net.java:447)\n at java.base\/sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:227)\n at io.netty.channel.socket.nio.NioServerSocketChannel.doBind(NioServerSocketChannel.java:134)\n at io.netty.channel.AbstractChannel$AbstractUnsafe.bind(AbstractChannel.java:550)\n at io.netty.channel.DefaultChannelPipeline$HeadContext.bind(DefaultChannelPipeline.java:1334)\n at io.netty.channel.AbstractChannelHandlerContext.invokeBind(AbstractChannelHandlerContext.java:506)\n at io.netty.channel.AbstractChannelHandlerContext.bind(AbstractChannelHandlerContext.java:491)\n at io.netty.channel.DefaultChannelPipeline.bind(DefaultChannelPipeline.java:973)\n at io.netty.channel.AbstractChannel.bind(AbstractChannel.java:248)\n at io.netty.bootstrap.AbstractBootstrap$2.run(AbstractBootstrap.java:356)\n at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164)\n at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:472)\n at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:500)\n at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)\n at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)\n at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)\n at java.base\/java.lang.Thread.run(Thread.java:834)\n 2021-03-09 15:37:39,730 INFO util.ShutdownHookManager: Shutdown hook called\n 2021-03-09 15:37:39,731 INFO util.ShutdownHookManager: Deleting directory \/tmp\/spark-b53dc8d9-adc8-454b-83f5-bd2826004dee","AnswerCount":1,"Available 
Count":1,"Score":0.0,"is_accepted":false,"ViewCount":284,"Q_Id":66548256,"Users Score":0,"Answer":"Solution: Add SPARK_LOCAL_IP in load-spark-env.sh file located at spark\/bin directory export SPARK_LOCAL_IP=\"127.0.0.1\"","Q_Score":0,"Tags":"python,apache-spark,pyspark,apache-spark-sql","A_Id":66551389,"CreationDate":"2021-03-09T13:56:00.000","Title":"spark : Cannot assign requested address","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"First I run the command pip install virtualenv, then after I run python -m virtualenv venv, I get the following error msg\n\"\/System\/Library\/Frameworks\/Python.framework\/Versions\/2.7\/Resources\/Python.app\/Contents\/MacOS\/Python: No module named virtualenv\"\nCurrently, I'm using python v2.7.16 and when I run pip freeze | grep virtualenv, I get virtualenv==20.4.2 so virtualenv is there. When I run which python I get \/usr\/bin\/python and I don't have .bash_profile when I run ls -a. I am using a Mac. 
What could be the reason Python is not recognizing virtualenv when it's there?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":60,"Q_Id":66556604,"Users Score":0,"Answer":"You may create .bash_profile and it is auto-recognised by the Mac.\n\nPlease also run which pip and make sure the pip is in the same bin as your python (\/usr\/bin\/python)\n\n\nThe bottom line is that pip, used to install a package, will by default install the packages in the bin directory that also stores your python executable.","Q_Score":2,"Tags":"python,python-2.7,virtualenv","A_Id":66556829,"CreationDate":"2021-03-10T00:17:00.000","Title":"Python not recognizing virtualenv","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a problem parsing a DAG with the error:\nBroken DAG: [\/usr\/local\/airflow\/dags\/test.py] No module named 'airflow.providers'\nI added apache-airflow-providers-databricks to requirements.txt, and see from the log that:\nSuccessfully installed apache-airflow-2.0.1 apache-airflow-providers-databricks-1.0.1 apache-airflow-providers-ftp-1.0.1 apache-airflow-providers-http-1.1.1 apache-airflow-providers-imap-1.0.1 apache-airflow-providers-sqlite-1.0.2 apispec-3.3.2 attrs-20.3.0 cattrs-1.3.0 clickclick-20.10.2 commonmark-0.9.1 connexion-2.7.0 flask-appbuilder-3.1.1 flask-caching-1.10.0 gunicorn-19.10.0 importlib-resources-1.5.0 inflection-0.5.1 isodate-0.6.0 marshmallow-3.10.0 marshmallow-oneofschema-2.1.0 openapi-schema-validator-0.1.4 openapi-spec-validator-0.3.0 pendulum-2.1.2 python-daemon-2.3.0 rich-9.2.0 sqlalchemy-jsonfield-1.0.0 swagger-ui-bundle-0.0.8 tenacity-6.2.0 termcolor-1.1.0 werkzeug-1.0.1\nBut the scheduler seems to be stuck:\nThe scheduler does not appear to be running. 
Last heartbeat was received 19 hours ago.\nHow can I restart it?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":2000,"Q_Id":66561217,"Users Score":0,"Answer":"Well, after removing all the dependencies from requirements.txt, the MWAA worker runs normally; now I can add them back to test for the bad dependency.","Q_Score":3,"Tags":"python,airflow,scheduler,directed-acyclic-graphs,mwaa","A_Id":66655747,"CreationDate":"2021-03-10T08:45:00.000","Title":"AWS Managed Airflow - how to restart scheduler?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I cannot open any .py file: when I run in the command prompt either \"python test.py\" or \"python3 test.py\" or \"py test.py\", it just says can't open file 'C:\\Users\\Ciela\\Desktop\\test.py': [Errno 2] No such file or directory.\n\nPython is installed, latest version\nAll other versions are uninstalled\nPython was automatically added to PATH during installation, I can see it in both User and System paths and the version is correct\nthe files can be opened in Python just by double-clicking them, although they shut off immediately (I know they work because the \"turtle module\" screen persists on the screen)\nThe OS is Windows 10 and I am a total noob trying to learn\n\nWhat could it be??","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":745,"Q_Id":66575046,"Users Score":0,"Answer":"I sorted it out. For anyone struggling with the same issue, the problem might be OneDrive. Windows 10 automatically creates 2 desktops: the one in User, and the one in User\/OneDrive, where files are stored by default. 
Essentially I was looking for the files in the wrong desktop folder.","Q_Score":1,"Tags":"python,installation,windows-10","A_Id":66580075,"CreationDate":"2021-03-11T00:58:00.000","Title":"Cannot open .py files: \"[Errno 2] No such file or directory\"","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am building a Python 3.6 application which distributes specific jobs between available nodes over network. There is one server which builds jobs. Clients connect to the server and are assigned a job, which they return as finished after computation completes.\nA job consists of a dict object with instructions, which can get kind of large (> 65536 bytes, probably < 30 MB).\nIn my first attempt I used the Twisted library to exchange messages via a basic Protocol derived from twisted.internet.protocol. When sending a serialized object using self.transport.write() and receiving it on the other hand over the callback function dataReceived() only 65536 bytes are received. Probably that's the buffer size.\nIs there a \"simple\" protocol which allows me to exchange larger messages between a server and multiple clients in Python 3.6, without adding too much coding overhead?\nThanks a lot in advance!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":68,"Q_Id":66602702,"Users Score":0,"Answer":"Finally I used websockets. 
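For readers hitting the same 65536-byte ceiling: TCP is a byte stream, not a message stream, so large writes arrive split into chunks, and WebSockets solve this by framing each message. The same framing idea can be sketched with only the standard library using a length prefix; the function names below are illustrative, not from the original post:

```python
import json
import socket
import struct

def send_msg(sock: socket.socket, obj: dict) -> None:
    """Serialize obj and send it with a 4-byte big-endian length prefix."""
    payload = json.dumps(obj).encode("utf-8")
    sock.sendall(struct.pack(">I", len(payload)) + payload)

def recv_exact(sock: socket.socket, n: int) -> bytes:
    """Read exactly n bytes, looping over partial recv() results."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("socket closed mid-message")
        buf += chunk
    return buf

def recv_msg(sock: socket.socket) -> dict:
    """Read one length-prefixed message and deserialize it."""
    (length,) = struct.unpack(">I", recv_exact(sock, 4))
    return json.loads(recv_exact(sock, length).decode("utf-8"))

if __name__ == "__main__":
    # Demonstrate with a connected socket pair; a real server/client would
    # use listen()/connect() instead.
    a, b = socket.socketpair()
    send_msg(a, {"job": "x" * 80_000})   # well past the 64 KiB chunk size
    print(len(recv_msg(b)["job"]))       # 80000
```

The prefix tells the receiver exactly how many bytes belong to the message, so arbitrarily large dicts survive being split across many recv() calls.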
It works like a charm, even for large messages.","Q_Score":0,"Tags":"python,python-3.x,networking,tcp,twisted","A_Id":67690254,"CreationDate":"2021-03-12T15:25:00.000","Title":"Simple network protocol to send large dict\/JSON messages between python nodes","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I can start python from windows cmd just typing \"python\", but it doesn't work from pycharm terminal - it writes, that \"python\" is not internal or external command, executable program, or batch file.\nSo, os.system('python file.py') or os.popen('python file.py') also doesn't work, but I have to start another python program in my project. How can I fix it?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":34,"Q_Id":66611446,"Users Score":0,"Answer":"I found that there is the sys module, that has the exexutable variable, that is the path to python.exe, so I'll use the full path. This solved the problem on my computer, so, I think, this will work on other computers.","Q_Score":1,"Tags":"python,cmd,path,environment-variables,python-os","A_Id":66612602,"CreationDate":"2021-03-13T08:20:00.000","Title":"python can't be started from pycharm windows terminal without full path","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"What is the difference between - and -- in Python and in Linux, I know about operators but in cmd line we will use python -m pip install --upgrade pip so that's the doubt.\nhope some can clear my doubt as soon as possible.\nthanks in advance! 
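The sys.executable approach from the PyCharm answer above can be sketched as follows; this is a minimal illustrative example, not the poster's actual code:

```python
import subprocess
import sys

# sys.executable is the absolute path of the running interpreter, so this
# works even when plain "python" is not on PATH (as in the PyCharm terminal
# case described above).
result = subprocess.run(
    [sys.executable, "-c", "print('hello from a child interpreter')"],
    capture_output=True, text=True, check=True,
)
print(result.stdout.strip())   # hello from a child interpreter
```

Using subprocess.run with a list also avoids shell quoting problems that os.system('python file.py') runs into.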
:-)","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":359,"Q_Id":66614767,"Users Score":0,"Answer":"Using -- instead of - are just conventions used in naming\/invoking options on the command line, the single dash (-) is usually used on short options, the double dash (--) is used on long options. Many commands will accept either form but you must check the man page and\/or manual first.\nIn your example 'python -m pip install --upgrade pip' means:\n\npython -m pip means to run a module, in this case pip\ninstall --upgrade pip means you are telling pip to install an update for the package called pip, which will bring it to the latest version available.\n\nIf you are on Linux, you can see a summary of options for most commands by typing -h or --help after the command, for example python -h.\n-- is not a valid operator in Python, but - is.\nI hope this clears things up for you.","Q_Score":0,"Tags":"python,linux,windows,cmd","A_Id":66614946,"CreationDate":"2021-03-13T14:44:00.000","Title":"What is the difference between - and -- in python, linux and in windows cmd?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"What is the difference between - and -- in Python and in Linux, I know about operators but in cmd line we will use python -m pip install --upgrade pip so that's the doubt.\nhope some can clear my doubt as soon as possible.\nthanks in advance! :-)","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":359,"Q_Id":66614767,"Users Score":0,"Answer":"Python is a programming language. Linux is an operating system kernel.\nMy guess is that by \"in Linux\" you mean using a command shell like bash. 
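The single-dash/double-dash convention described above maps directly onto Python's own argparse module, where one option can carry both a short and a long spelling; a small sketch (option names are illustrative):

```python
import argparse

# -a/--all are two spellings of the same boolean option, just like the
# short/long convention on Unix command lines.
parser = argparse.ArgumentParser(prog="demo")
parser.add_argument("-a", "--all", action="store_true", help="show all entries")
parser.add_argument("-n", "--number", type=int, default=1, help="a count")

print(parser.parse_args(["-a"]))                 # Namespace(all=True, number=1)
print(parser.parse_args(["--all", "-n", "3"]))   # Namespace(all=True, number=3)
```

Either spelling sets the same attribute on the parsed namespace, which is exactly why tools advertise both forms.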
Yes, the language that bash processes might reasonably be called a \"language.\" The command shell in Microsoft Windows is cmd.\nIf bash is what you mean, then Python and bash are two different languages; in the same way that Python, bash, Java, PHP, C++, and others are all different languages. Each may have its own meaning for the use of - and --.\nIt is always important to read the documentation. It is common practice at this time for executable programs to have command line options using - for single letter options and -- for long name options. When using bash, see the output of ls --help to see the short (single letter) and long options. -a and --all are equivalent.\nMost programs from Microsoft to be run in the cmd shell, and those designed for it, typically use \/ to specify options. See DIR \/? for a list of options that can be used with the DIR command.\nPowerShell uses - like bash to indicate options. However, the options can be long names. In a PowerShell console, use the command help Get-ChildItem -ShowWindow to see the options (called parameters) that can be used with the Get-ChildItem command.\nWhen in doubt, read the doc.","Q_Score":0,"Tags":"python,linux,windows,cmd","A_Id":66616893,"CreationDate":"2021-03-13T14:44:00.000","Title":"What is the difference between - and -- in python, linux and in windows cmd?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have created an instance on GCP to run some machine learning model for an app I am working on for a little project. I want to be able to call one of the methods in one of the files from my app and I thought a Cloud Function would be suitable.\nTo make this question simpler, let's just imagine I have a file in my instance called hello.py and a method in this file called foo(sentence). 
And foo(sentence) simply returns the sentence parameter.\nSo how do I call this method foo in python.py and capture the output?\nThanks","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":330,"Q_Id":66639983,"Users Score":2,"Answer":"At Google Cloud (and at Google generally), \"everything is an API\". Thus, if you need to reach one product from another, you need to do it through API calls.\nIf your Cloud Function needs to invoke a script hosted on a Compute Engine VM instance, you need to expose an API on that Compute Engine instance. A simple Flask server is enough, and exposing it only on the private IP is also enough. But you can't directly call code on the Compute Engine instance from your Cloud Functions code.\nYou can also deploy a Cloud Function (or a Cloud Run service if you need more system packages\/libraries) with the model loaded in it, and in that way perform all the computation in the same product.","Q_Score":0,"Tags":"javascript,python,node.js,google-cloud-platform,google-cloud-functions","A_Id":66657007,"CreationDate":"2021-03-15T14:34:00.000","Title":"How to call function in GCP Instance from Cloud Function","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I currently have a bucket in Google Cloud Storage with .pdf files, and I want to split each .pdf file into multiple one-page .pdf files.\nI can only load the files as BLOB's (), and I can't find a good answer on how to read one as a PdfFileReader object.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":314,"Q_Id":66642586,"Users Score":1,"Answer":"Upon \"fetching\" the object\/file from the bucket, you can \"keep\" it in the cloud function memory as a string (of bytes) or save it into a temp \"directory\" (\/tmp) local to your cloud function (the memory fo
r that temp directory is allocated from the total memory available for the cloud function). After that, you may be able to process the data either as a string or as a file. When you finish processing, you will probably want to upload those files into some other storage bucket.","Q_Score":0,"Tags":"python,pdf,google-cloud-functions,google-cloud-storage","A_Id":66643268,"CreationDate":"2021-03-15T17:15:00.000","Title":"Reading PDF from Google Cloud Storage","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I have a Kafka topic with 4 partitions, and I am creating an application written in Python that consumes data from the topic.\nMy ultimate goal is to have 4 Kafka consumers within the application. So, I have used the class KafkaClient to get the number of partitions right after the application starts; then, I have created 4 threads, each one with the responsibility to create a consumer and to process messages.\nAs I am new to Kafka (and Python as well), I don't know if my approach is right, or if it needs enhancements (e.g. 
what if a consumer fails).","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":127,"Q_Id":66651832,"Users Score":1,"Answer":"If a consumer thread dies, then you'll need logic to handle that.\nThreads would work (aiokafka or Faust might be a better library for this), or you could use supervisor or Docker orchestration to run multiple consumer processes","Q_Score":0,"Tags":"python,apache-kafka,kafka-consumer-api","A_Id":66656387,"CreationDate":"2021-03-16T08:51:00.000","Title":"Having same number of consumers as the number of partitions","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"My question is how to connect two BBB (Beagle Bone Black) into one PC using USB and communication with them at the same time.\nI am trying to write a python script that two BBB boards to communicate with each other. The idea is that these boards should communicate with each other using iperf and an external cape (OpenVLC).\nThe problem is that I need to use the iperf on the server-side and then read the data on the client-side.\nFor this purpose, I need to connect them to one PC to be able to read commands and write the results.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":151,"Q_Id":66652533,"Users Score":0,"Answer":"That's going to be a struggle. Both BBB grab the same IPs (192.168.6.2 and\/or 192.168.7.2 depending on your PC's operating system) on their virtual Ethernet adapters and assign the same address to the PC side (192.168.6.1 and\/or 192.168.7.1). You can change that in the startup scripts (google for details). Then you'd need to set your PC up to route traffic between the two which depends on your operating system. 
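The thread-per-partition layout with restart logic for a dying consumer (the concern raised in the Kafka answer above) can be sketched with only the standard library; the consume function here is a stand-in for a real Kafka poll loop, and all names are illustrative:

```python
import threading
import time

def run_supervised(worker, partition, max_restarts=3):
    """Run worker(partition); restart it if it raises, up to max_restarts."""
    restarts = 0
    while True:
        try:
            worker(partition)
            return  # worker finished cleanly
        except Exception as exc:
            restarts += 1
            if restarts > max_restarts:
                print(f"partition {partition}: giving up after {exc!r}")
                return
            print(f"partition {partition}: restarting after {exc!r}")

def start_consumers(worker, num_partitions):
    """Start one supervised thread per partition (4 in the question)."""
    threads = [
        threading.Thread(target=run_supervised, args=(worker, p), daemon=True)
        for p in range(num_partitions)
    ]
    for t in threads:
        t.start()
    return threads

if __name__ == "__main__":
    def fake_consume(partition):
        # Stand-in for a real poll loop, e.g. iterating a Kafka consumer
        # pinned to this partition.
        time.sleep(0.01)

    for t in start_consumers(fake_consume, 4):
        t.join()
    print("all consumers exited")
```

The same supervision shape works whether the worker wraps kafka-python, aiokafka, or Faust; only the body of the worker changes.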
If you haven't done networking before, it's going to be hard.\nInstead of USB, I'd strongly recommend connecting all devices to a simple router box using Ethernet. It just works.","Q_Score":0,"Tags":"python,beagleboneblack,serial-communication","A_Id":66656103,"CreationDate":"2021-03-16T09:37:00.000","Title":"Connect two BBB to one PC using USB","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have my first ever hello_world.py file saved in documents\\python_work. It runs fine in the Sublime text editor but when I try to navigate to it through a command prompt window, it can't seem to find it. I open a command prompt and type cd documents and it works. I type dir and it shows me the files, but when I type cd python_work (which is the folder my file is in) I get:\n\nThe system cannot find the path specified.\n\nThe textbook had me add C:\\Python and C:\\Python\\Scripts to the PATH environment variables (not too sure why, just following directions), so perhaps I made a mistake during this process?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":57,"Q_Id":66664949,"Users Score":2,"Answer":"If you are in the same folder as the file, type python file.py (or py file.py, depending on your Python version) to run it.\nA quick way to get to the folder is to find it in the file explorer, and where it shows the file path, click there, type 'cmd' and hit enter. 
It will open up the Command Prompt from that folder so you don't have to manually navigate to it.","Q_Score":1,"Tags":"python","A_Id":66664978,"CreationDate":"2021-03-16T23:31:00.000","Title":"How do I run my newly created hello_world.py file from a command prompt window?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"Let's say I have a python file named myfile.py in a certain directory.\nHow do I just call the python file directly without invoking python.\n\nmyfile.py\n\nand not\n\npython myfile.py","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1003,"Q_Id":66670581,"Users Score":0,"Answer":"Edit:\nTo be more precise.\njust typing the filename in the command line will not work.\nTyping start program.py however should work.\nWindows normally has information telling it which program is the default program for a given suffix.\nConcerning @go2nirvan's comment: Though windows is not an oracle, it can have information for each file suffix to know what's the related default application.\nEven many linux desktop application associate a default application to certain mime types.\nIf you click on .xls (depending on what is installed) either Excel, or OpenOfficeCalc or LibreOffice will be opened)\nWindows associates file suffixes to file types and file types to applications, that are supposed to start it.\nIf you open a CMD window and you type\nassoc .py\nYou should get an output similar to: (I don't have a windows machine nearby, so can't tell you the exact output)\n.py=Python.File\nThen type\nftype Python.File or whatever the previous command gave you and you should see which executable shall be used.\nThis should be something like\nc:\\System32\\py.exe\nwhich is a wrapper program, calling the real python executable according to some rules\nIf this doesn't 
work, then please tell which version of python you installed and how you installed it (for all users, for current user, ...)\nFrom command line you have to call (If I recall correctly)\nstart test.py and it will execute the file with the associated executable","Q_Score":1,"Tags":"python,windows,command-prompt","A_Id":66670849,"CreationDate":"2021-03-17T09:47:00.000","Title":"How to run python scripts without typing python in windows","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"When I run \"WinPython Command Prompt.exe\", the working directory defaults to the scripts directory of the installation.\nCreating a shortcut to run the exe with a specific working directory does not seem to have an effect.\nIs it possible to have the directory after running \"WinPython Command Prompt.exe\" be something other than the scripts directory?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":358,"Q_Id":66693316,"Users Score":0,"Answer":"on latest version, drag&drop your directory over the icon, it should make your day.","Q_Score":0,"Tags":"python,windows","A_Id":66723382,"CreationDate":"2021-03-18T14:38:00.000","Title":"Is it possible to change default directory for WinPython Command Prompt.exe?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"As the title suggests i want to shutdown pc without using modules such as os ,popen or subprocess to invoke system commands.i have searched alot but all the answers were using os module to invoke system commands.i want a pure python way of doing this.and also OS independent.Any help would be greatly appreciated!","AnswerCount":2,"Available 
Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":63,"Q_Id":66697404,"Users Score":1,"Answer":"This operation will always involve operating system calls, because you are asking for an operating system action. A pure Python module that did what you ask would itself use the things you want to avoid. So yes, there is a way to do it in 'pure Python', but you need to write the code for your case, as I don't think any library for this exists by now (due to the complexity of covering all cases for all actions).\nThe solution is pretty straightforward:\n\nDetermine which OS you are working on with the platform module (platform.system(), platform.release(), platform.version())\nWrite the OS system calls for each platform.","Q_Score":0,"Tags":"python,python-3.x","A_Id":66697612,"CreationDate":"2021-03-18T18:56:00.000","Title":"How to shutdown pc in python without using system commands","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"As the title suggests, I want to shut down the PC without using modules such as os, popen or subprocess to invoke system commands. I have searched a lot, but all the answers were using the os module to invoke system commands. I want a pure Python way of doing this, and also OS-independent. Any help would be greatly appreciated!","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":63,"Q_Id":66697404,"Users Score":1,"Answer":"Much of the code that you write is actually being managed by the Operating System, and doesn't run independently from it. 
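The two steps named in the first answer (detect the platform, then issue that platform's own call) can be sketched as follows. The command strings are common defaults and may vary by system; the function deliberately only builds the command rather than executing it:

```python
import platform

def shutdown_command(system=None):
    """Return a conventional immediate-shutdown command for a platform.

    This only builds the string; actually running it still requires an OS
    call (os.system/subprocess), which is unavoidable for this action.
    """
    system = system or platform.system()
    commands = {
        "Windows": "shutdown /s /t 0",
        "Linux": "shutdown -h now",   # flags vary slightly by distribution
        "Darwin": "shutdown -h now",  # macOS
    }
    try:
        return commands[system]
    except KeyError:
        raise NotImplementedError(f"unsupported platform: {system}")

print(shutdown_command("Linux"))   # shutdown -h now
```

Dispatching on platform.system() gives the OS-independent surface the question asks for, even though each branch still ends in an OS call.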
What you are trying to accomplish is a protected action, and needs to be invoked by the OS API.\nIf I understand correctly, programming languages like Python can't usually directly work with a computer's hardware, and even much more low level programming languages like C require use of the Operating System's APIs to take such action.\nThat's why most of the solutions you've found depend on the os package, Python doesn't have the ability to do it natively, it needs to make use of the aforementioned OS API.\nThis is a feature, not a bug, and helps keep programs from gaining access or visibility into other processes and protected operations.","Q_Score":0,"Tags":"python,python-3.x","A_Id":66697581,"CreationDate":"2021-03-18T18:56:00.000","Title":"How to shutdown pc in python without using system commands","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm trying to get Google Cloud SDK working on my Windows 10 desktop, but when I use the SDK shell (which, as I understand it, is just command line but with the directory changed to where Cloud SDK is installed), running 'gcloud init' returns the following:\n'\"\"C:\\Program' is not recognized as an internal or external command,\noperable program or batch file.\nPython was not found; run without arguments to install from the Microsoft Store, or disable this shortcut from Settings > Manage App Execution Aliases.\n'\"\"C:\\Program' is not recognized as an internal or external command,\noperable program or batch file.\nPython was not found; run without arguments to install from the Microsoft Store, or disable this shortcut from Settings > Manage App Execution Aliases.\nPython was not found; run without arguments to install from the Microsoft Store, or disable this shortcut from Settings > Manage App Execution Aliases.\nIt then finishes the 
configuration and tells me 'Your Google Cloud SDK is configured and ready to use!' However, whenever I run any other commands, I get the same error popup again before it continues doing whatever the command does. I believe Python is installed correctly and added to Path, and when I call python from the same command line, same directory as my 'gcloud init' call, it functions as expected and opens a python console. Any ideas at what the problem might be? (or if it will even affect anything?)","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":304,"Q_Id":66700328,"Users Score":0,"Answer":"Go to -> \"start\" and type \"Manage App Execution Aliases\". Go to it and turn off \"Python\"","Q_Score":0,"Tags":"python,google-cloud-platform,gcloud","A_Id":71111826,"CreationDate":"2021-03-18T23:03:00.000","Title":"Windows 'gcloud init' returns C:\\Program is not recognized, Python was not found (but Python works on cmd)","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I want to do a connection between a computer simulating being a server and another computer being a user, both with Linux.\nIn the first computer I've created a directory called \"server\" and in that directory I've done the following command:\npython3 -m http.server 8080\nThen I can see that directory going to the localhost. 
But what I want is to see that localhost from the other computer, I tried with wget, and the gnome system of sharing files but none of them worked, and I'm not seeing any solution online.","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":968,"Q_Id":66708739,"Users Score":0,"Answer":"I'm not sure I fully understand your question but if you want to reach your folder from an other computer with your command python3, you can use the option -b followed by an IP address used on your linux.","Q_Score":0,"Tags":"python,html,linux,networking,server","A_Id":66708913,"CreationDate":"2021-03-19T13:03:00.000","Title":"How to connect to a Localhost in linux","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I want to do a connection between a computer simulating being a server and another computer being a user, both with Linux.\nIn the first computer I've created a directory called \"server\" and in that directory I've done the following command:\npython3 -m http.server 8080\nThen I can see that directory going to the localhost. 
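Putting the -b/--bind answer together with the second machine's view, a minimal sketch looks like this (addresses and the wget line are placeholders to fill in with your own LAN details):

```shell
# Serve the current directory on the LAN. http.server binds to all
# interfaces by default; --bind (the -b option mentioned above) can
# restrict it to one specific address instead.
python3 -m http.server 8080 &
SERVER_PID=$!

# Print this machine's LAN address(es); pick the one on the shared network.
# (hostname -I is Linux-specific; use "ip addr" elsewhere.)
hostname -I || true

# From the other computer, replace localhost with that address, e.g.:
#   wget http://<server-ip>:8080/
kill "$SERVER_PID" 2>/dev/null || true
```

If the second machine still cannot connect, check the server's firewall before suspecting the Python side.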
But what I want is to see that localhost from the other computer. I tried wget, and the GNOME file-sharing system, but none of them worked, and I'm not finding any solution online.","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":968,"Q_Id":66708739,"Users Score":0,"Answer":"If I'm understanding your question correctly, you want to connect to the server from another machine on the same network.\nIf so, you can run hostname -I on the server to output the local IP address of the server; you can then use this on the other machine to connect to it (provided they are on the same network)","Q_Score":0,"Tags":"python,html,linux,networking,server","A_Id":66709470,"CreationDate":"2021-03-19T13:03:00.000","Title":"How to connect to a Localhost in linux","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I accidentally stopped the Anaconda uninstall. Now the prompt is gone and Uninstall.exe is also gone, and I can't seem to find a way to uninstall Anaconda now. If someone knows how I can uninstall Anaconda so I can reinstall it clean, please help me.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":64,"Q_Id":66716516,"Users Score":0,"Answer":"I figured it out: I reinstalled Anaconda in a different folder and ran Anaconda-clean in the conda terminal. 
Then I deleted the old folder and it was all good (until now at least).","Q_Score":0,"Tags":"python,anaconda,conda,uninstallation,anaconda3","A_Id":66727706,"CreationDate":"2021-03-19T22:49:00.000","Title":"Accidentally Anaconda uninstall cut short","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am trying to deploy\/publish an Azure Function through VS-Code. Programming language used is PYTHON 3.7\nI am obviously not able to publish this because the resource group I am using allows Operation system: Linux. Looks like VS-Code tries publishing it as a Windows OS be default.\nHence, while publishing, I do not get an option to choose the OS I want to publish on.\nHowever, If I use Visual Studio, I have the option to choose the OS while publishing, but does not support Python.\nWhat am I missing?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":152,"Q_Id":66717239,"Users Score":0,"Answer":"First, I think python 3.7 is not supported by windows OS on azure. 
When you try to deploy a Python 3.7 function app in VS Code, it will be deployed to a Linux function app.\nSecond, if your VS Code has a problem, you can first create a Linux Python function app on Azure, then use the command below to deploy your function app:\nfunc azure functionapp publish ","Q_Score":0,"Tags":"visual-studio,visual-studio-code,operating-system,azure-functions,python-3.7","A_Id":66739670,"CreationDate":"2021-03-20T00:44:00.000","Title":"Azure Function with Visual Studio Code - No option to choose OS while publishing (Python 3.7)","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"Does anyone have any good suggestions for where I can store my custom Python modules on Google Cloud Platform?\nI have a bunch of modules that I would like to access from the different GCP services I am using (App Engine, Compute Engine, Cloud Functions etc), without having to copy the Python files and upload them to the service's Python environment each time.\nI was thinking GCS could be an option, but then I am not sure how I would get the module into, say, Cloud Functions or App Engine?\nAny ideas?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":50,"Q_Id":66726060,"Users Score":1,"Answer":"The code will eventually need to be written to your service's local storage. Python does not access code remotely during execution unless you write your code to do so (download the module and then execute it). Package your code as modules, publish them to PyPI, and then add them as dependencies. 
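The publish-and-depend route just described can be sketched as a minimal package config; every name and version below is a placeholder, not something from the original answer:

```toml
# pyproject.toml for the shared-module package (placeholder names throughout)
[build-system]
requires = ["setuptools>=61"]
build-backend = "setuptools.build_meta"

[project]
name = "mycompany-shared-utils"   # hypothetical package name
version = "0.1.0"
description = "Modules shared across App Engine, Cloud Functions, etc."
```

Each service then lists mycompany-shared-utils==0.1.0 in its requirements.txt and the platform installs it at deploy time; a private Artifact Registry Python repository can stand in for public PyPI if the code must stay internal.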
When you deploy a service, your modules will be downloaded.","Q_Score":1,"Tags":"python,google-cloud-platform","A_Id":66726192,"CreationDate":"2021-03-20T20:14:00.000","Title":"Where to store my custom Python modules on GCP so they can be accessed by different GCP services?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":1},{"Question":"is it possible to call a web service from cooja? May be I can read from border-router then call web service (via python script for example). I can ping border-router but I dont know how to read from node or write to node in cooja.I am new to contiki-ng and cooja. thanks in advance","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":59,"Q_Id":66732962,"Users Score":0,"Answer":"You can try the websense example at the folder ~\/contiki\/examples\/ipv6\/sky-websense","Q_Score":0,"Tags":"python,web-services,serial-communication,cooja,contiki-ng","A_Id":67021154,"CreationDate":"2021-03-21T13:35:00.000","Title":"is it possible to call a web service from cooja?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm trying to upgrade pip on my mac using this command\npip install --upgrade pip\nAfter I enter that command I get this error\nzsh: command not found: pip\nI also can't even seem to get the version of pip when I enter this\npip --version\nI get the same error.\nI am new to this and I'm not sure what I'm doing wrong. 
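For anyone hitting the zsh "command not found: pip" problem described above, running pip as a module of a named interpreter sidesteps PATH entirely; a minimal sketch:

```shell
# "zsh: command not found: pip" only means the pip launcher is not on PATH.
# Running pip as a module of a specific interpreter always uses that
# interpreter's own pip, so PATH no longer matters:
python3 -m pip --version

# The same trick upgrades pip itself (commented out here because it
# touches the network and your site-packages):
# python3 -m pip install --upgrade pip
```

This also explains why pip3 worked for the poster: pip3 happened to point at the Python 3 installation, while bare pip pointed nowhere.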
It is possibly due to the fact that I'm using zsh, but honestly, I'm pretty sure I changed to zsh when I installed homebrew, and I forget what zsh even means for me.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":379,"Q_Id":66735270,"Users Score":0,"Answer":"Solution:\nI solved my issue by using pip3 install --upgrade pip to upgrade to the latest version of pip. Now installing with pip install works fine.\nI'm not sure why I had to use pip3 but it works now.","Q_Score":0,"Tags":"python,pip","A_Id":66735353,"CreationDate":"2021-03-21T17:13:00.000","Title":"Upgrading pip on mac","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am installing airflow on CentOS 7. I have configured airflow db init and checked the status of the nginx server as well; it's working fine. But when I run the airflow webserver command I am getting the below mentioned error: [2021-03-22 14:59:30 +0000] [9019] [INFO] Booting worker with pid: 9019 [2021-03-22 14:59:32,548] {filesystemcache.py:224} ERROR - set key '\\x1b[01m__wz_cache_count\\x1b[22m' -> [Errno 1] Operation not permitted: '\/tmp\/tmpdwzf56wm.__wz_cache' -> '\/tmp\/2029240f6d1128be89ddc32729463129'","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":2109,"Q_Id":66748803,"Users Score":0,"Answer":"You should execute the airflow command as sudo.","Q_Score":6,"Tags":"python,nginx,webserver,airflow","A_Id":67261120,"CreationDate":"2021-03-22T15:10:00.000","Title":"airflow webserver command fails with {filesystemcache.py:224} ERROR - Operation not permitted","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I have a python script 
(priceChange.py) which I'm trying to run using CRON from the path below.\n35 10 * * 1-5 \/home\/pi\/Desktop\/priceChange.py\nWhen I check grep CRON \/var\/log\/syslog it shows this:\nMar 23 10:35:01 AlexM CRON[16200]: (pi) CMD (\/home\/pi\/Desktop\/priceChange.py) with no error.\nWhen I run the script manually it works, the end result being an email being sent to myself.\nClearly something is missing in the crontab line, but I'm lost. Any suggestions are appreciated.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":109,"Q_Id":66761644,"Users Score":0,"Answer":"The way you run your script causes the problem. Add the python command in front:\n35 10 * * 1-5 python \/home\/pi\/Desktop\/priceChange.py","Q_Score":0,"Tags":"python,linux,debian,raspberry-pi4","A_Id":66761697,"CreationDate":"2021-03-23T10:48:00.000","Title":"CRON not running python script - Debian\/RPi","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm using docker-compose to run a python API and a Localstack instance in 2 separate containers, for local development.\nThe API has an endpoint which generates a presigned AWS S3 URL and redirects the user, in order to load images directly from S3.\nIn local development, the API instantiates a boto3 client using the address of the localstack container as a custom endpoint url (i.e. boto3.client(\"s3\", endpoint_url=\"http:\/\/localstack:4566\")), which allows the API to access resources within the localstack container.\nThe problem is that the presigned URL returned by the boto3 client uses the localstack address, and the browser cannot load it, since the localstack resources are exposed to the host machine at http:\/\/localhost:4566.\nIf I try to set the aws resources endpoint url to localhost in the boto3 client instantiation, then the API, which is running 
inside of a container, will look for AWS resources within its OWN CONTAINER's localhost, and not the host machine, where the localstack resources are exposed.\nIs there any way to access localstack resources, running in a docker container, from both the host machine's browser AND a different container, using the same address?\n[Edit] I'm using docker on mac, in case that changes anything [\/Edit]","AnswerCount":3,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":4441,"Q_Id":66767975,"Users Score":1,"Answer":"Perhaps you can give host.docker.internal:4566 a try, since most likely the localstack service will be the only one listening to it.","Q_Score":4,"Tags":"python,amazon-web-services,docker,localstack","A_Id":67388365,"CreationDate":"2021-03-23T17:11:00.000","Title":"Docker-compose: How to access Localstack resources both from container and host, using same network address","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I designed my java spring application to run a couple of python programs on the same server and communicate with them. I run them with ProcessBuilder and communicate via InputStream\/OutputStream.\nNow I want to ensure that when I restart or shut down my java application, the python apps don't close. I can't get a Process object by PID. With a ProcessHandler object I can't get input\/output streams. It seems that I should use some other mechanism of IPC. 
So the questions are:\n\nHow can I run external applications from java so they won't close when the java app restarts?\nHow can I achieve communication between java and python applications without having a Process object?\n\nThanks in advance, sorry for the poor language :)","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":99,"Q_Id":66769808,"Users Score":0,"Answer":"Some suggestions from our project experience:\n\nOn a single server, use docker-compose to run\/stop the apps, no matter whether they use java or python. Across multiple servers, they would be deployed in Istio.\n\nUse a RESTful protocol to communicate between them.","Q_Score":0,"Tags":"java,python,spring","A_Id":66773533,"CreationDate":"2021-03-23T19:12:00.000","Title":"Communication between java spring server and python applications","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I'm lost with Python. After troubleshooting with a friend through Discord for literally months I am about to give up. This is my last attempt at trying to get Python fixed on my Windows 10 laptop.\nI'm using a Lenovo Legion 5; I bought the laptop in November 2020. I've not been able to get anything related to Python to run in the CMD window. I can run Python no problem, but nothing I have installed through pip has ever worked. I can use virtualenvs, but only through PyCharm, for example. Python has never really worked through the command line.\nYes, I tried reopening the CMD window, rebooted the system many times, ran the CMD as administrator, and installed the path variables for both Python and esptool.py, but nothing seems to help.\nI honestly don't know where to start because none of the fixes suggested by the 250+ websites I've visited so far for the issues I've been experiencing with Python have worked. 
I can run Python fine by the way, just none of the things installed through pip will work.\nLet's start with a use-case:\nI'm trying to run esptool.py so installed it with pip install esptool. This install worked fine, and I can confirm it is installed with pip show -f esptool.\nHowever, when running esptool.py version it told me:\n'esptool.py' is not recognized as an internal or external command, operable program or batch file.\nSo I added the local folder from the previous step to the %PATH% variables, after running esptool.py version it gave me a popup asking me with what kind of program I should open this, I didn't select to open with this kind of program from now on. This makes it so that I do not get an error, what now happens is that another window quickly opens and then exits without an error code. So I have no clue what's happening.\nWhat should happen is that it should tell me which version is installed in the CMD window.\nThere have been a few other things going on with my Windows 10 install, for one, the username that I used during the installation wasn't used to create the user directory. Windows 10 somehow instead chose a name that was related to the first 5 characters of my email address, which is totally strange as I haven't used that string in the installation of Windows 10 at all. This was a fresh install on a new laptop.\nNow, after an update of Win10 my user icon doesn't display anymore and I had to change ownership of the 'Windows Apps' folder in order to be able to access it. Changing the ownership also changed the name I now see on the login screen when I boot up the laptop. 
It changed from the 5 first chars of my email address to my full name in the login screen, only because I took ownership of this folder so I could access it.\nThere have been a lot of things going on that I think should not be changing all the time, things to do with administrator rights, ownership, etc.\nNow, since opening esptool.py doesn't open it, but also doesn't show me an error, I'm clueless and the only thing I can think of is doing a fresh system install, but I have a bunch of projects going on for which I need this laptop in working order and I don't have the mental health (due to corona) left to do a fresh system install. I'm worn down. Not in a dramatic way, I just don't have the spare energy to go through the whole process. So I'm hoping someone can point me in the right way to troubleshoot why my Python doesn't want to work natively.\nWhat happens when running esptool.py version is that I can see it opens a Python window, but without showing any content it closes within a few milliseconds.\nWhat is going on, how do I continue? I hope someone knows how to troubleshoot my system, to find the core of the problem.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":377,"Q_Id":66772433,"Users Score":0,"Answer":"It apparently was rather simple. First of all, thanks for the replies! And second of all, thanks for pointing me to superuser.com I wasn't aware of the site and will continue there.\nThe fix was to use python -m pip install esptool as suggested by Valentin Kuhn.\nTo answer AKD, I have a lot of experience with programming on my MacBook, but I'm not experienced with actually maintaining the system side, I'm a creative user. It's just that ever since I got a Windows laptop it's been nothing but trouble and after months of chatting about each individual issue with people on Discord nobody has been able to find a solution. 
I'm not expecting a GUI, just a simple \"esptool.py v3.0\" was the answer I was expecting from the command line.\nNow what I don't understand is that I've never found any hint to anyone suggesting python -m. I will get on superuser to find out more about why the standard instructions that work for most people, don't work for me.\nSorry for using the wrong site for my question, I came on here through another related question and it was past my bedtime and I wasn't thinking clear.","Q_Score":0,"Tags":"python,python-3.x,windows,pip,windows-10","A_Id":66777723,"CreationDate":"2021-03-23T22:55:00.000","Title":"PIP installed items don't want to work in Windows 10","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"At first I entered the command sudo pip3 install pygame in order to install pygame but then when I entered sudo apt install python3-pygame, It did not prompt me saying that it was already installed. What is the difference?","AnswerCount":3,"Available Count":3,"Score":0.1325487884,"is_accepted":false,"ViewCount":60,"Q_Id":66796396,"Users Score":2,"Answer":"apt is for Debian packages. pip is for Python packages.\npython3-pygame is Python's pygame repackaged as a Debian package. 
So, technically, not the same as the Python package.\nSo the difference is how apt and pip report an already installed package.","Q_Score":2,"Tags":"python,pygame","A_Id":67875921,"CreationDate":"2021-03-25T09:20:00.000","Title":"What is the difference between sudo pip3 install pygame and sudo apt install python3-pygame","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"At first I entered the command sudo pip3 install pygame in order to install pygame but then when I entered sudo apt install python3-pygame, It did not prompt me saying that it was already installed. What is the difference?","AnswerCount":3,"Available Count":3,"Score":0.0665680765,"is_accepted":false,"ViewCount":60,"Q_Id":66796396,"Users Score":1,"Answer":"They may not get you the same version.\npip will get the latest from the pypi package index.\napt will bring you the version that was included for your ubuntu\/debian release.\npip can also be used in a virtualenv so as not to pollute your system packages.\nIn general the pip version will be the newer one.","Q_Score":2,"Tags":"python,pygame","A_Id":67875971,"CreationDate":"2021-03-25T09:20:00.000","Title":"What is the difference between sudo pip3 install pygame and sudo apt install python3-pygame","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"At first I entered the command sudo pip3 install pygame in order to install pygame but then when I entered sudo apt install python3-pygame, It did not prompt me saying that it was already installed. 
What is the difference?","AnswerCount":3,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":60,"Q_Id":66796396,"Users Score":0,"Answer":"apt is for Linux. pip is for Python.","Q_Score":2,"Tags":"python,pygame","A_Id":71048119,"CreationDate":"2021-03-25T09:20:00.000","Title":"What is the difference between sudo pip3 install pygame and sudo apt install python3-pygame","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"How can I properly kill celery tasks running on containers inside a kubernetes environment? The structure of the whole application (all written in Python) is as follows:\n\nAn SDK that makes requests to our API;\n\nA Kubernetes structure with one pod running the API and other pods running celery containers to deal with some long-running tasks that can be triggered by the API. These celery containers autoscale.\n\n\nSuppose we call an SDK method that in turn makes a request to the API that triggers a task to be run on a celery container. What would be the correct\/graceful way to kill this task if need be? I am aware that celery tasks have a revoke() method, but I tried using this approach and it did not work, even using terminate=True and signal=signal.SIGKILL (maybe this has something to do with the fact that I am using Azure Service Bus as a broker?)\nPerhaps a mapping between a celery task and its corresponding container name would help, but I could not find a way to get this information either.\nAny help and\/or ideas would be deeply appreciated.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":495,"Q_Id":66799974,"Users Score":1,"Answer":"The solution I found was to write to a file shared by both the API and Celery containers. In this file, whenever an interruption is captured, a flag is set to true. 
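Such a flag file can be sketched roughly like this (the /shared path is hypothetical; in practice it would be a volume mounted into both pods):

```python
from pathlib import Path

# Hypothetical location on a volume shared by the API and worker containers.
DEFAULT_FLAG = Path("/shared/stop_flag")

def request_stop(flag=DEFAULT_FLAG):
    """Called on the API side when the task should be interrupted."""
    flag.write_text("true")

def stop_requested(flag=DEFAULT_FLAG):
    """Polled periodically inside the long-running celery task."""
    return flag.exists() and flag.read_text().strip() == "true"
```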
Inside the celery containers I keep periodically checking the contents of that file. If the flag is set to true, then I gracefully clean things up and raise an error.","Q_Score":0,"Tags":"python,kubernetes,containers,celery","A_Id":66871940,"CreationDate":"2021-03-25T12:56:00.000","Title":"How can I properly kill a celery task in a kubernetes environment?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am using Python to automate some installations in my everyday workflow.\nThe installation requires mounting a .dmg file to the system in order to start and complete the installation. Everything works fine until I try to eject\/unmount the attached volume; it gives an error that the volume is used by Python and cannot be ejected\/unmounted. The installation process is already completed by the time the unmount is executed, so in theory, files should no longer be in use.\nForce unmount helps with the unmount process, but for some reason, it interferes with the subsequent subprocess.Popen command that starts the installed application, and the app crashes at startup. The crash doesn't occur if the volume is not unmounted, which is a sign the issue is caused by the unmount process.\nI would like to try to unmount the volume without forcing the process, but I don't know how to unlock the files being used by Python for the installation. Is there a way to force python to unlock those files?\nThanks in advance.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":80,"Q_Id":66804331,"Users Score":0,"Answer":"The issue was related to the fact that the current working directory was set to a folder inside the mounted volume. 
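In Python terms the fix looks roughly like this (the mount point is hypothetical; hdiutil detach is the macOS unmount command):

```python
import os

def leave_mounted_volume():
    """Move the CWD out of the mounted volume so Python stops holding it open."""
    os.chdir(os.path.expanduser("~"))
    return os.getcwd()

def detach_command(volume="/Volumes/Installer"):
    """The unmount command to run with subprocess afterwards (path hypothetical)."""
    return ["hdiutil", "detach", volume]
```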
Switching the CWD to HOME before the unmount and the subprocess execution fixes the issue.","Q_Score":2,"Tags":"python,macos,file,locking","A_Id":66805352,"CreationDate":"2021-03-25T17:07:00.000","Title":"How to force python to unlock files on macOS","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a Python project that has a bunch of dependencies (in my VirtualEnv). I need to run my project on my school computer for a demonstration. My school computer does not have python installed, and let's assume it also won't have an Internet connection to install it. I have written the program in Windows 10, and the school computer runs Windows 7.\nI have looked at these solutions so far, and also here is why I think they may not work.\n\nCopying and pasting my virtual env - doesn't work because venvs have their own structure and have my username in their paths, which they will look for on the other system.\nUsing Py2Exe. I have an Exe file that I can now run on other systems running Windows 10 without them having python or any of my packages. But I am not sure the VC++ dependencies will be present in Windows 7. It may also have some other weird issue that I can't risk.\nDocker. I am not familiar with Docker, but can do it if this happens to be the only way.\n\nHow can I run the python file on that computer?\nAlso note that I will not have the time to mess around on the other system. Ideally I must plug in my USB and open the file to run it. If you think there isn't a solution to this, please let me know.\nThanks!","AnswerCount":6,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":2069,"Q_Id":66821573,"Users Score":0,"Answer":"Convert that python file to a .exe file using auto-py-to-exe. 
This would convert your .py to a .exe file which you can run anywhere.\nTo use auto-py-to-exe, just execute the following command in the terminal: pip install auto-py-to-exe.\nNow in the terminal write auto-py-to-exe and press enter. Select the python file you want to execute anywhere and click on Convert .py to .exe, and you will get the folder containing the .exe file. Transfer this folder to any computer, and just by clicking the .exe file inside the folder, the program will start executing normally, even if the computer does not have python or pycharm installed.","Q_Score":10,"Tags":"python,python-3.x,windows,pip,virtualenv","A_Id":68011569,"CreationDate":"2021-03-26T17:22:00.000","Title":"How can I run a Python project on another computer without installing anything on it?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm running Ubuntu 16.04, on which I tried to install pip3 using the command sudo apt install python3-pip. It's showing python3-pip is already the newest version (8.1.1-2ubuntu0.6).\nBut when I try to get the pip3 version with pip3 --version, it's showing The program 'pip3' is currently not installed.\nPlease help me to solve this issue, thanks in advance.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":63,"Q_Id":66869039,"Users Score":0,"Answer":"You are probably missing pip from your PATH.\nTry running 'which python3' or 'which python' and see if you can find pip in the same folder.\nYou can then add it to your PATH if you want by modifying your bash profile or adding an alias.","Q_Score":0,"Tags":"python-3.x,pip","A_Id":66870120,"CreationDate":"2021-03-30T10:17:00.000","Title":"Pip3 not installing (Already exist)","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and 
APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I want to change the size of an emoji (unicode) in the console using Python, rather than changing the font size of the terminal.\nCurrently using VSCode for python.\nprint(\"\\U0001F609\")\nThe above line will print this emoji in a specific size or the pre-defined console font size.\nHelp me with all possible alternatives to adjust the size of this emoji (unicode) specifically.\nAny useful resource link will also work. :)\nThanks!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":476,"Q_Id":66889980,"Users Score":0,"Answer":"The size of the emoji is determined by the font size.\n\n\n\nThey all have the same unicode string, just at different sizes.\nTo change the size of anything in the console, you need to change the font size of the terminal you are using. Note that this will scale everything.","Q_Score":1,"Tags":"python,resize,size,emoji","A_Id":66890576,"CreationDate":"2021-03-31T14:39:00.000","Title":"Change Uni-Code (Emoji) size in console, using Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"My aim is to run a script daily that updates a google sheet with data from Google Analytics.\nWhen I run the code on my VM instance it returns an error:\nModuleNotFoundError: No module named 'googleapiclient'\nThe module is installed, and if I run:\n$ source [project-name]\/bin\/activate\non the VM and then call my script, it works and the update is done.\nI am trying to set the script to run daily; I tried to set it up using crontab:\n(for testing I used every 5 mins)\nI tried:\n*\/5 * * * * python3 myscript.py\nand\n*\/5 * * * * $ python3 myscript.py\nand\n*\/5 * * * * source [project-name]\/bin\/activate\n*\/5 * * * * python3 myscript.py\nThis 
is the first time I am trying to set up a crontab job, so any debugging suggestions are appreciated as well.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":32,"Q_Id":66890825,"Users Score":0,"Answer":"So, I finally solved this.\nYou have to use pip install when installing the libraries. Once you install google-api-python-client, you don't have to do the activation to be able to call the library.\nUse:\n$ pip3 install google-api-python-client\nThen you can schedule crontab jobs such as:\n*\/5 * * * * python3 myscript.py","Q_Score":0,"Tags":"python,cron,scheduled-tasks","A_Id":66902978,"CreationDate":"2021-03-31T15:32:00.000","Title":"Requested library doesn't load in google cloud compute VM","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am using Python 3.7 and PIP 21.0 on linux ppc64le.\nWhen I try to install pyarrow with pip it errors out with the below error message. 
Can someone please help?\n-- Could NOT find Arrow (missing: Arrow_DIR)\n-- Checking for module 'arrow'\n-- No package 'arrow' found\nCMake Error at \/home\/***\/miniconda3\/envs\/myenv\/share\/cmake-3.19\/Modules\/FindPackageHandleStandardArgs.cmake:218 (message):\nCould NOT find Arrow (missing: ARROW_INCLUDE_DIR ARROW_LIB_DIR\nARROW_FULL_SO_VERSION ARROW_SO_VERSION)\n-- Configuring incomplete, errors occurred!\nSee also \"\/tmp\/pip-install-eupzn03_\/pyarrow_3ebdc9313f8c40db9a823ba34e4a40e0\/build\/temp.linux-ppc64le-3.7\/CMakeFiles\/CMakeOutput.log\".\nerror: command 'cmake' failed with exit status 1\nERROR: Failed building wheel for pyarrow\nFailed to build pyarrow\nERROR: Could not build wheels for pyarrow which use PEP 517 and cannot be installed directly","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":513,"Q_Id":66902732,"Users Score":4,"Answer":"The issue you are seeing here is because you haven't yet installed the Arrow C++ libraries. You first need to install them and then install\/build pyarrow itself afterwards.\nFor ppc64le, there are no pyarrow wheels available. 
If you can use conda instead, we are building pyarrow conda packages on conda-forge.","Q_Score":2,"Tags":"python,cmake,pip,pyarrow","A_Id":66902928,"CreationDate":"2021-04-01T10:20:00.000","Title":"pip install pyarrow failed on Linux ppc64le","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"when I pip install tensorflow it was killed\nroot@cryptofeed:~# pip3 install tensorflow\nCollecting tensorflow\nDownloading tensorflow-2.4.0-cp38-cp38-manylinux2010x8664.whl (394.8 MB)\n|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 394.7 MB 24.4 MB\/s eta 0:00:01Killed\nroot@cryptofeed:~#\nI have tried with all version of Tensorflow","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":131,"Q_Id":66914565,"Users Score":0,"Answer":"\"Killed\" usually means that a process was killed because it was using more memory than your system would provide.\nCheck if you have enabled swap.\nWhat is your output of dmesg?\nHow much memory do you have?","Q_Score":0,"Tags":"python,tensorflow,ubuntu","A_Id":66914638,"CreationDate":"2021-04-02T04:46:00.000","Title":"Can not install tensorflow on ubuntu server 18.04 aws","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm new with dramatiq and I do not find a way how to start dramatiq in detach mode like Celery?\nI try to start with flags --d --detach, but nothing works.\nPlease inform me how to start dramatic in detach mode when I start dramatiq app:broker","AnswerCount":1,"Available 
Count":1,"Score":0.0,"is_accepted":false,"ViewCount":64,"Q_Id":66915309,"Users Score":0,"Answer":"I found it already works in detach mode, but if you wish to read the logs you need to keep your terminal open.","Q_Score":0,"Tags":"python,dramatiq","A_Id":66949143,"CreationDate":"2021-04-02T06:31:00.000","Title":"How to run dramatiq tasks in detach mode?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"So, on a Raspberry Pi I'm using a camera app with a web interface, and I wanted to add LED lighting by adding a neopixel. I have successfully done this and can now turn it on and off by running two python scripts.\nExplanation and question:\nI have a python script in \/usr\/local\/bin that is executable.\nIt is owned by 'root root'.\nI have a shell script in \/var\/www\/html\/macros that is executable and has to run the python script in \/usr\/local\/bin.\nThe shell script is owned by 'www-data'.\nWhen I manually run the python file, it executes the script.\nWhen I manually run the shell script, it executes the python script.\nWhen I run the shell script by clicking on a button on my webpage, it seems to execute the shell script correctly; however, it looks like it doesn't execute the python script.\nWhat can I do to fix this?\nI'm not that experienced with permissions, but I want to emphasize that this is a closed system that does not contain any sensitive information. So safety\/best practice is not a concern. 
I just want to make this work.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":177,"Q_Id":66921554,"Users Score":0,"Answer":"Now, after 11 hours and a group of people thinking along we found a solution to the problem.\nThe problem turned out to be that the Web interface can only execute as 'www-data', and the NeoPixel library that the python script depends on needs to be executed as sudo\/root.\nThese two factors make it so that there will never be a direct way of getting the scripts to work together.\nHowever, the idea emerged to use some sort of pipe.\nA brilliant user suggested to me to use sshpass. This would allow to pass data to ssh and have it essentially be executed as a root user.\nThe data from the web interface would be relayed to the sshpass and this would successfully run the needed scripts with the needed privileges.\nSpecial thanks to Minty Trebor and Falcounet from the RRF for LPC\/STM Discord!","Q_Score":0,"Tags":"python,apache,shell,raspberry-pi,raspbian","A_Id":66925377,"CreationDate":"2021-04-02T15:23:00.000","Title":"How to run a Python script from Apache on Raspberry Pi?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am trying to deploy my Django application to gcloud and I keep getting this error, any ideas what this means?\nFile upload done.\nUpdating service [default]...failed.\nERROR: (gcloud.app.deploy) Error Response: [9] Cloud build c90ad64e-2c2f-4ad0-a250-160de6f315df status: FAILURE\nError ID: c84b3231\nError type: UNKNOWN","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":388,"Q_Id":66923622,"Users Score":1,"Answer":"never mind, I found out the error was due to a build error, I had an unused pip install in my requirements.txt file. 
I deleted it and everything is working now.","Q_Score":0,"Tags":"python,django,ubuntu,gcloud","A_Id":66923685,"CreationDate":"2021-04-02T18:23:00.000","Title":"ERROR: (gcloud.app.deploy) Error Response: [9] Cloud build c90ad64e-2c2f-4ad0-a250-160de6f315df status: FAILURE","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I am trying to get a simple script to run on start-up of a debian Raspberry Pi, where\nstartupTest.py writes the time of the run to a .txt file.\n$ python3 \/home\/pi\/startupTest.py\nruns successfully in the command line.\n$ python \/home\/pi\/startupTest.py\nruns successfully in the command line.\nHowever, using\n$ sudo crontab -e to write\n@reboot python3 \/home\/pi\/startupTest.py &\nyields\nbash: alias: python: not found\nbash: alias: \/usr\/local\/bin\/python3.8: not found\nbash: alias: python: not found\nbash: alias: \/usr\/local\/bin\/python3.8: not found\nbash: alias: python: not found\nbash: alias: \/usr\/local\/bin\/python3.8: not found\nOkay, am I missing python3.8 in that directory? No:\n$ ls \/usr\/local\/bin shows python3.8\nI am a Mechanical Engineering student trying to get a testing system working for my senior design project.","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":300,"Q_Id":66935705,"Users Score":2,"Answer":"The problem is that when running sudo crontab -e, it is actually the root user that runs the script. 
If your startupTest.py does not have any special code that might need sudo privileges, just remove sudo and add your command to the crontab -e startup list.","Q_Score":1,"Tags":"python,bash,cron,alias","A_Id":66935786,"CreationDate":"2021-04-03T20:42:00.000","Title":"Bash alias not found during start-up on raspberry pi","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"When I run any youtube-dl command I get this error:\nFatal error in launcher: Unable to create process using '\"c:\\python38\\python.exe\" \"C:\\Python38\\Scripts\\youtube-dl.exe\" '\nHow do I fix this?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":137,"Q_Id":66947301,"Users Score":1,"Answer":"This happens when you have the pip.exe of a previously uninstalled Python. You need to use software like \"Revo Uninstaller\" or \"BCUninstaller\" to uninstall all Python versions, remove all files related to Python, and install a fresh copy of Python.","Q_Score":0,"Tags":"python,youtube-dl","A_Id":66956363,"CreationDate":"2021-04-05T01:30:00.000","Title":"\"Fatal error in launcher\" when using youtube-dl","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"Apologies if this is a dumb question, but after having some problems with python packaging I ran platform.machine() on my M1 Mac, expecting the output to be arm64 as I had seen online, but instead got x86_64, which is the Intel architecture.
I just don't understand how this could be the case on this machine, so any explanation would be super helpful.","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":2441,"Q_Id":66955938,"Users Score":2,"Answer":"What Python are you using? If you are on a version below Python 3.9, which is most likely, then the Python interpreter was built for an x86 Intel processor and is being translated by Rosetta 2. Now, there is no problem with having an x86 Python interpreter; actually, it's probably best not to use the newest versions of Python, as there might be some errors.\nIf you go to the official python.org website, you can see there are two different downloads: one for the ARM MacBooks and another for the Intel MacBooks. You may have installed the Intel download of Python 3.9.2, which is why you are getting this output.","Q_Score":4,"Tags":"python,python-3.x,macos,pip,apple-m1","A_Id":66956005,"CreationDate":"2021-04-05T16:04:00.000","Title":"Python platform on M1 confusion","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"Overall Error: gdal_polygonize.py fails with \"Cannot guess driver for\". I am at a loss here; I have been playing with dependencies and libraries but I am getting nowhere. I believe the error is that the compile is somehow not building in shapefile support, but as that is the default I do not understand.\nDetails\nI am compiling GCC 10.2 (from source) and all the deps for GDAL using the GCC 10.2 compiler. At runtime, when reading a TIFF file to convert to an SHP file using gdal_polygonize.py, I get a failure about not being able to determine the type. Normal GDAL commands such as gdalinfo seem to complete normally.\nI messed with the python gdal_polygonize and see that it is finding drivers.
I do not see one for shape files though.\nI am also compiling Python3.8.8 from source, which is working fine. I have also tried it with the amazon version 3.8.5 using a yum install, there is no difference on that.\nError as Reported at Runtime\n\nTraceback (most recent call last): File\n\"\/usr\/bin\/gdal_polygonize.py\", line 163, in \nfrmt = GetOutputDriverFor(dst_filename) File \"\/usr\/bin\/gdal_polygonize.py\", line 85, in GetOutputDriverFor\nraise Exception(\"Cannot guess driver for %s\" % filename) Exception: Cannot guess driver for .\/test.shp\n\nInput is : test.TIF which exists and reports fine from gdalinfo\nGDAL Compile (Completes without error)\n\n.\/configure \\ --prefix=\/usr \\ --with-proj=\/usr \\\nLDFLAGS=\"-L\/usr\/lib -lz -lopenjp2 -ltiff\" \\ CXXFLAGS=\"-Wall\n-std=c++14\" \\ --with-threads \\ --with-hide-internal-symbols \\ --with-libtiff \\ --with-geotiff=internal --with-rename-internal-libgeotiff-symbols \\ --with-rename-internal-shapelib-symbols=yes \\ --with-geos \\ --with-curl \\ --with-zstd \\ --with-openjpeg \\ --with-xerces-c \\ --with-libdeflate=yes \\ --with-liblzma=yes \\ --with-cpp14 \n--with-python=\/usr\/bin\/python3.8 \\ OPENJPEG_CFLAGS=\"-lopenjp2\" && make -j 32 && make install\n\nGDAL Reports at Build Time:\nYou can see it does not show shape files in the supported files.\n\nmisc. gdal formats: aaigrid adrg aigrid airsar arg blx bmp bsb cals ceos ceos2 coasp cosar ctg dimap dted e00grid elas envisat ers fit gff gsg gxf hf2 idrisi ignfheightasciigrid ilwis ingr iris iso8211 jaxapalsar jdem kmlsuperoverlay l1b leveller map mrf msgn ngsgeoid nitf northwood pds prf r raw rmf rs2 safe saga sdts sentinel2 sgi sigdem srtmhgt terragen til tsx usgsdem xpm xyz zmap rik ozi grib eeda plmosaic rda wcs wms wmts daas rasterlite mbtiles pdf\ndisabled gdal formats:\nmisc. 
ogr formats: aeronavfaa arcgen avc bna cad csv dgn dxf edigeo flatgeobuf geoconcept georss gml gmt gpsbabel gpx gtm htf jml mapml mvt ntf openair openfilegdb pgdump rec s57 segukooa segy selafin shape sua svg sxf tiger vdv wasp xplane idrisi pds sdts nas ili gmlas ods xlsx amigocloud carto cloudant couchdb csw elastic ngw plscenes wfs gpkg vfk osm","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":151,"Q_Id":66959241,"Users Score":0,"Answer":"Okay, I finally tracked this down by accident.\nWhen I compiled gdal I used the following command.\n\nmake -j 32 CXXFLAGS=\"-O3\"\n\nIt does not like this for whatever reason.\n\nmake -j 32\n\nWorks without issue. I am guessing the C++ opt is causing a problem.","Q_Score":0,"Tags":"python-3.x,gcc,gdal","A_Id":66969492,"CreationDate":"2021-04-05T20:15:00.000","Title":"GDAL 3.1.2 \/ PROJ 6.2.1 \/ GCC 10.2 Unable to handle Shape File","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"According to Prefect's Hybrid Execution model, Agents \"watch for any scheduled flow runs and execute them accordingly on your infrastructure,\" while Executors \"are responsible for actually running tasks [...] users can submit functions and wait for their results.\"\nWhile this makes some sense from a high-level design perspective, in practice how are these parts actually composed? For instance, if I specify that a Flow Run should make use of Docker Agent and a Dask Executor, what interactions are concretely happening between the Agent and the Executor? What if I use a Docker Agent and a Local Executor? 
Or a Local Agent and a Dask Executor?\nIn short, what exactly is happening at each step of the process within each component \u2014 that is, on the Server, the Agent, and the Executor?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1744,"Q_Id":66959695,"Users Score":10,"Answer":"Agents represent the local infrastructure that a Flow can and should execute on, as specified by that Flow's RunConfig. If a Flow should only run on Docker (or Kubernetes, or ECS, or whatever else) then the Flow Run is served by that Agent only. Agents can serve multiple Flows, so long as those Flows are all supported by that particular infrastructure. If a Flow Run is not tied to any particular infrastructure, then a UniversalRun is appropriate, and can be handled by any Agent. Most importantly, the Agent guarantees that the code and data associated with the Flows are never seen by the Prefect Server, by submitting requests to the server for Flows to run, along with updates on Flows in progress.\nExecutors, on the other hand, are responsible for the actual computation: that is, actually running the individual Tasks that make up a Flow. The Agent manages execution at a high level by calling submit on Tasks in the appropriate order, and by handling the results that the Executor returns. Because of this, an Executor has no knowledge of the Flow as a whole, rather only the Tasks that it received from the Agent. All Tasks in a single Flow are required to use the same Executor, but an Agent may communicate with different Executors between separate flows. Similarly, Executors can serve multiple Flow Runs, but at the Task level only.\nIn specific terms:\n\nFor a Docker Agent and a Dask Executor, there would be a Docker container that would manage resolution of the DAG and status reports back to the server. 
The actual computation of each Task's results would take place outside of that container though, on a Dask Distributed cluster.\nFor a Docker Agent and a Local Executor, the container would perform the same roles as above. However, the computation of the Tasks' results would also occur within that container (\"local\" to that Agent).\nFor a Local Agent and a Dask Executor, the machine that registered as the agent would manage DAG resolution and communication to the Server as a standalone process on that machine, instead of within a container. The computation for each Task though would still take place externally, on a Dask Distributed cluster.\n\nIn short, the Agent sits between the Server and the Executor, acting as a custodian for the lifetime of a Flow Run and delineating the separation of concerns for each of the other components.","Q_Score":10,"Tags":"python,prefect","A_Id":66959696,"CreationDate":"2021-04-05T20:57:00.000","Title":"Prefect: relationship between agent and executor?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"If an application developed to support only HTTP. What configuration we should do in google app engine, that it force developer to have HTTPS support. We can add an entry(for handler) in \"app.yaml\", but in order to redirection. Just want to know anything else we can do to prevent such thing(in short should work with HTTPS only). Probably we can do something from ingress\/loadbalancer\/ssl etc but that's looks paid and don't want that currently.","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":45,"Q_Id":66964748,"Users Score":0,"Answer":"You just have to set secure: always in app.yaml for your route handlers. 
Any call to your app over http will automatically get redirected to https.","Q_Score":0,"Tags":"https,google-app-engine-python","A_Id":66970984,"CreationDate":"2021-04-06T07:56:00.000","Title":"Force HTTPS for app deployed in google app engine","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"macOS AppleScript has the ability to 'tell' running apps like Safari to do something, like open a tab.\nHow do I add an AppleScript-compatible API to my Python application, such that AppleScripts can interact with my running script?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":59,"Q_Id":66966533,"Users Score":0,"Answer":"Native AppleScript can send basic messages to applications: i.e., processes the system can identify with a bundle id. It can't send messages to other running processes (aside from the standard signals that can be sent to processes in unix). Since you've apparently bundled this script as an application, you ought to be able to send it basic commands like open or quit.\nFrom a quick glance at the wiki you linked in comments, it seems that you can install Apple Event handlers using an implementation of primitive Carbon methods, using the low-level Carbon.AE extension, and that (in theory) should allow AppleScript to talk to your process in more complex ways: AppleScript uses Apple Events to communicate. Further, if you've installed the PyObjC packages you can (probably) use higher-level Objective-C commands to install AE handlers, or even to load a scripting definition file; that would allow you to define scripting objects and scripting commands that others could access through AppleScript.
It's reasonably easy to do this in ObjC, but I wouldn't call it trivial, and transcribing it into a PyObjC implementation would add a number of warts and wrinkles.","Q_Score":0,"Tags":"python-3.x,macos,applescript","A_Id":66973364,"CreationDate":"2021-04-06T09:59:00.000","Title":"How do I make apple script tell running python script to run a function?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I do a lot of Python development and I need my files to open directly in Command Prompt when I run them. Is there a way I can set up VSCode to run the current file in Command Prompt instead of the integrated terminal?\nThanks.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":41,"Q_Id":66971405,"Users Score":0,"Answer":"Find where your program is saved, either by checking the save location in Visual Studio Code or by trying the default location C:\\users\\{username}\\AppData\\Local\\Programs\\Microsoft VS Code (you may have changed this). In this directory, find the program you want to run. If you made a virtual environment, make sure to activate it with venv\\Scripts\\activate.bat (Windows) or . venv\/bin\/activate (Linux); these will activate your venv. Then, to actually run your program, use python <filename>.py\nYour installation may be slightly different from the standard, so please ask if you have any difficulty.","Q_Score":0,"Tags":"python,visual-studio-code,vscode-settings","A_Id":66976042,"CreationDate":"2021-04-06T15:12:00.000","Title":"How can I run a file in VSCode in the Command Prompt","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"Description:\nI am trying to use sam build with the following requirements, but it's throwing the error: Pythonpipbuilder: resolve dependencies - list index out of range\n\npyotp\nulid-py\naws_encryption_sdk\nboto3\nrequests\nattrs\ncryptography\n\nSteps to reproduce the issue:\n\nCreate a virtual env.\nActivate the virtual env in a terminal\npip install -r requirements.txt\nsam build\n\nObserved result:\nBuild Failed\nError: PythonPipBuilder:ResolveDependencies - list index out of range\nExpected result:\nBuild Succeeded\nAdditional environment details\nAmazon Linux 2 Workspace\nPython3.8","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":2129,"Q_Id":66971745,"Users Score":0,"Answer":"I had the same failure when my serverless application specified Runtime as python3.6 while the environment was using Python 3.7.","Q_Score":5,"Tags":"python,amazon-web-services,aws-lambda,aws-sam","A_Id":67374818,"CreationDate":"2021-04-06T15:33:00.000","Title":"Pythonpipbuilder: resolve dependencies - list index out of range","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"While using Robot Framework on Windows 10, python -m robot is launched with admin privileges.
Does this mean admin privileges will be passed down to all sub-processes that are started?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":51,"Q_Id":66974254,"Users Score":0,"Answer":"Yes, admin privileges will be passed into sub-processes. I would not recommend running tests with admin privileges; the safest and recommended way is to use a virtualenv without admin privileges.","Q_Score":0,"Tags":"python,windows-10,robotframework,uac","A_Id":66996759,"CreationDate":"2021-04-06T18:14:00.000","Title":"Robot Framework uac windows 10","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I'm just picking up some async Python and trying to write a simple TCP port scanner using the Asyncio module.\nI can open a full-fledged TCP connection with 3-way handshakes via asyncio.open_connection. However, I want to create a SYN-ACK half-open connection\u2014similar to what nmap uses\u2014using asyncio. I was rummaging through the streams API but couldn't find anything. Is there a high-level method to do this?
If not, how do I do this?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":203,"Q_Id":66976054,"Users Score":1,"Answer":"asyncio doesn't give you that kind of control over the TCP\/IP stack layers, and it even hides some complex tasks such as callbacks, low-level protocols, and transports.\nYou can do it using a raw socket.\nModules that can be useful:\n\npython-nmap\nscapy","Q_Score":1,"Tags":"python,asynchronous,tcp,async-await,python-asyncio","A_Id":67002079,"CreationDate":"2021-04-06T20:30:00.000","Title":"Is there an easy way to do half open TCP connection via Python Asyncio?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I recently started using PyCharm Community, so I'm pretty new to this. I added the anaconda3 and python paths to my variables. \"conda\" works in my cmd, but python just opens a Windows Store page where I can download Python. I checked the path with \"where python\" and added these. Does anyone have suggestions? I wanted to use pyinstaller.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":31,"Q_Id":66984084,"Users Score":0,"Answer":"Type py instead of python in your command prompt.\nThe default command for running the Python command-line interface has changed.","Q_Score":0,"Tags":"python,python-3.x,cmd","A_Id":66984145,"CreationDate":"2021-04-07T10:22:00.000","Title":"Why does my python not show in cmd despite me adding the path to my system variables?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a time-triggered Azure Function written in Python which gets a list of URLs (the list is not static).
For every URL I want to trigger an Azure Function and pass the URL to it for further processing.\nHow can I do this transition from one Azure Function to another? What's the best way to trigger the second function and pass the data to it?","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":355,"Q_Id":66985322,"Users Score":0,"Answer":"How can I do this transition from one Azure Function to another? What's\nthe best way to trigger the second function and pass the data to it?\n\nIn your situation, you can loop over the list of URLs, create a new HTTP-triggered function, put each URL in the body of a request, and process the URL in the HTTP-triggered function. You can call the HTTP-triggered function by sending a request to its URL.","Q_Score":0,"Tags":"python,azure,azure-functions","A_Id":66996206,"CreationDate":"2021-04-07T11:39:00.000","Title":"Calling Azure Function from another Azure Function","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a time-triggered Azure Function written in Python which gets a list of URLs (list is not static). For every URL I want to trigger an Azure Function and pass the URL to it for further processing.\nHow can I do this transition from one Azure Function to another?
What's the best way to trigger the second function and pass the data to it?","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":355,"Q_Id":66985322,"Users Score":0,"Answer":"You can do this one of 3 ways:\n\nOnce your Function ends, call the http triggered function that you want with a post request and a body filled with data that you want to send.\nWrite the function output to a blob or cosmosdb or postgresdb and create a blob\/cosmos\/postgres triggered function that triggers off of that input.\nCreate a durable function and chain a few functions together!\n\nGood luck :)","Q_Score":0,"Tags":"python,azure,azure-functions","A_Id":68928831,"CreationDate":"2021-04-07T11:39:00.000","Title":"Calling Azure Function from another Azure Function","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am trying to find a way to add a new task to celery after celery worker has been executed and after celery has been instantiated. I basically want to add a new task with a dynamic name based on user input. I will also want to set a rate limit for that new task.\nI have not been able to find any documentation on this and no examples on my google searches. All I have been able to find is dynamically adding periodic tasks with celery beat.\nIs there any way to do what I am looking to do?","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":172,"Q_Id":66995852,"Users Score":1,"Answer":"What you want to achieve is not trivial. 
I am not aware of any distributed system similar to Celery that allows such a thing.\nThe only way to do it, perhaps, is to dynamically create and run a new Celery worker with the new task added and configured the way you prefer...","Q_Score":2,"Tags":"python,celery","A_Id":67000136,"CreationDate":"2021-04-08T00:33:00.000","Title":"Add dynamic task to celery","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I am trying to install mysqlclient on an offline CentOS7 server, so that I can connect my Django site to a MariaDB.\nWhat I did was download the wheel package \"mysqlclient-2.0.3-cp37-cp37m-win_amd64.whl\" from PyPI.\nThen I ran\n pip install mysqlclient-2.0.3-cp37-cp37m-win_amd64.whl\nbut received the following message:\nmysqlclient-2.0.3-cp37-cp37m-win_amd64.whl is not a supported wheel on this platform.\n[error message][1]\n[1]: https:\/\/i.stack.imgur.com\/bhqUD.png\nI looked through all available answers and internet questions but was not able to find a similar problem. Could someone give me some help?\nThank you very much","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":47,"Q_Id":66996950,"Users Score":0,"Answer":"After following @Brain's comment, I have solved the problem.\nI went to PyPI and downloaded the .tar.gz file;\nuploaded the file to the offline server, unzipped it, and followed the INSTALL.rst.\nBuilding from the .tar.gz source code required some more effort, though.
Thanks","Q_Score":0,"Tags":"python,mysql,django,centos7,python-wheel","A_Id":67017824,"CreationDate":"2021-04-08T03:28:00.000","Title":"Can not install mysqlclient in a OFFLINE CentOS7 server with ERROR message \"not a supported wheel on this platform\"","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"Imagine this business logic. I'm buying a tshirt from a store that runs microservices and uses Kafka. When I submit my order the web back end saves my order and kicks out two async requests:\n\nPayment Service that handles authorizing my payment (credit card stuff).\nUser Service that handles creating my user account (email confirmation step).\n\nBoth of those services write to their own kafka topics when they've finished. The web back end subscribes to these topics and when an event comes in it updates the order. Once the order back end has confirmed my user and payment information, it will send that order off to be fulfilled.\nHow do you prevent race conditions when both events come back at the same time?\n\nIf two processes both update the order at the same time, it never\nsees the complete order and you risk having complete orders that\nnever get fulfilled.\n\nIf two processes both update and then immediately requery, they\ncould both see a complete order and then we fulfill the order twice.\n\n\nThanks!","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":941,"Q_Id":67005492,"Users Score":1,"Answer":"Whatever method you choose you will need some sort of synchronization point or total ordering. A few approaches are below.\nYou could serialize your writes via transactions to some DB under the hood and only execute the fulfillment after both events have arrived (check for both events on either event write). 
Up to you to use pessimistic\/optimistic locking here and if it is feasible for your DB selection and expected loads.\nOr you could create 1 topic and partition these writes by some unique identifier so that both events go to the same partition thus to the same Kafka consumer. This avoids race conditions assuming your consumers are single threaded. To do that just add that identifier (user-id, tx id etc) to the Kafka key to make any event sent with that key go to the same partition and avoid multi-partition race conditions. This approach has the downside of having multiple event types in 1 topic but I have seen this firehose pattern used effectively before to provide ordering.","Q_Score":0,"Tags":"python,apache-kafka","A_Id":67014217,"CreationDate":"2021-04-08T13:54:00.000","Title":"How to avoid race conditions in Kafka","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I have a server. Client can send a path and server should cd to that path. But here is the thing. Imagine I have a test2 directory in test1 directory and the path to test1 directory is C:\\test1. The client can access test2 by cd test2 and \\test1\\test2 and if he wants to go back he can use \\test1 (I searched and found os.chdir but it needs the full path and I don't have it) and he shouldn't be free to send E:\\something or anything like that. Just the directories that are in test1. what do you suggest? 
what can I use to achieve this?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":82,"Q_Id":67012663,"Users Score":0,"Answer":"You can store the default path as a kind of root path and use path.join(root, client_path); this way you have a complete path that has to start with C:\\test1.\nThe issue you have to overcome is deciding whether to join the client's command with the current path or the root path. I would first check if the directory exists in the current working directory; if not, I would try finding it in the \"root\" path.","Q_Score":0,"Tags":"python,sockets,directory,cd","A_Id":67012716,"CreationDate":"2021-04-08T22:25:00.000","Title":"Change directory in a server without leaving the working directory","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm writing a python program that performs a redundant task. After the program loops a certain number of times I need to be able to switch VPN servers and start over. So I'm basically trying to use NordVPN as a proxy in python. Can anyone point me in the right direction?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1355,"Q_Id":67035764,"Users Score":0,"Answer":"with the command \"scutil --nc stop \"NordVPN NordLynx\"\"","Q_Score":0,"Tags":"python,proxy,vpn","A_Id":71685557,"CreationDate":"2021-04-10T14:55:00.000","Title":"Is there a way to control Nord vpn via macOS command line or python library?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"Is there a way to configure VS Code to automatically clear the terminal right before I run a Python file, for example?
I searched a lot for that but without success. Whenever I try to run a file in the terminal, previous runs are still there in the terminal, and it gets kind of confusing. Notice that I want to clear the terminal when I run the code in the terminal (i.e. when I click the play button), not when I run with debugging. My idea is to always have the terminal empty when I run a file. Any thoughts?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":354,"Q_Id":67039627,"Users Score":0,"Answer":"Currently, VS Code does not have a setting to automatically clear the terminal, but you could use the command \"clear\" in the VS Code terminal to clear the previous output, or click the 'kill' icon to delete the current terminal and use the shortcut key Ctrl+Shift+` to open a new VS Code terminal.","Q_Score":0,"Tags":"python,visual-studio-code,terminal","A_Id":67051975,"CreationDate":"2021-04-10T21:56:00.000","Title":"How to automatically clear terminal in Visual Studio Code when running the file on terminal?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I know there are a lot of answers to this question, but so far nothing has worked for me.\nOn macOS Big Sur (M1 Mac), when running my script (with or without sudo) it says: Access denied (insufficient permissions)\nI am using the ev3-dc library. Any help would be amazing.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":72,"Q_Id":67045470,"Users Score":0,"Answer":"The problem was that pyusb was unable to access the ev3 mindstorms.
So I made a pull request to change it to hidapi, and it works great.","Q_Score":0,"Tags":"python,macos,lego-mindstorms-ev3","A_Id":69962686,"CreationDate":"2021-04-11T13:20:00.000","Title":"MacOS python pyusb\/libusb access denied to ev3 mindstroms","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am creating a program which allows a user to scan a barcode, resulting in the correct file in the explorer that they need to open on the computer. A BIG problem I have recently run into: the files only open properly when they are opened through the application they run on.\nMy question would then be: is it possible to create a script which mimics opening a file through an application? Similar to being in Excel and clicking File -> Open...\nIf this is possible, what would be the first steps to make it happen?","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":57,"Q_Id":67100523,"Users Score":0,"Answer":"If you are on Windows, you might be able to use COM automation via win32com to have the program open the file, if the program supports it. I know this works with the Office programs.","Q_Score":1,"Tags":"python","A_Id":67100671,"CreationDate":"2021-04-15T00:00:00.000","Title":"Opening a file through a desktop application in python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I want to write a program that keeps running in the background and performs a certain task at each hour of the day.
How do I achieve this?","AnswerCount":4,"Available Count":2,"Score":0.049958375,"is_accepted":false,"ViewCount":124,"Q_Id":67101867,"Users Score":1,"Answer":"You can write an if condition in an infinite while loop to check whether the current time equals your target time, say (12:00:00pm, 04:00:00am). Alternatively, you can make use of the sleep method, which stops the execution of your code for a specified amount of time; you find that amount by calculating the difference between your target time and the current time. This method does not consume as much memory and as many CPU cycles as the previous one.","Q_Score":0,"Tags":"python,datetime,time","A_Id":67101922,"CreationDate":"2021-04-15T03:28:00.000","Title":"How do I activate a python program at exact whole hours? (12:00:00pm, 04:00:00am)","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I want to write a program that keeps running in the background and performs a certain task at each hour of the day. How do I achieve this?","AnswerCount":4,"Available Count":2,"Score":0.049958375,"is_accepted":false,"ViewCount":124,"Q_Id":67101867,"Users Score":1,"Answer":"I'd advise setting up a cron job to run your python program at a specific time","Q_Score":0,"Tags":"python,datetime,time","A_Id":67101959,"CreationDate":"2021-04-15T03:28:00.000","Title":"How do I activate a python program at exact whole hours? (12:00:00pm, 04:00:00am)","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a device with python3.7 preinstalled, on which I have also installed python3.9.
I managed to change the version of python3 I am using, and now the command \"python3\" followed by the .py file runs with python3.9.\nThe problem is I tried installing pandas with pip3 but it does not work (it didn't work even in the preinstalled python3.7), so I found that in Debian you can install a package, for example in this case pandas, using \"sudo apt-get install python3-pandas\", but this command keeps installing pandas in python3.7 and not in python3.9, even though \"python3\" now refers to python3.9.\nHas anyone ever encountered this problem and has a solution?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":405,"Q_Id":67103394,"Users Score":0,"Answer":"python3.9 -m pip install pandas","Q_Score":0,"Tags":"python,python-3.x,pandas,debian","A_Id":67103483,"CreationDate":"2021-04-15T06:36:00.000","Title":"Install pandas in debian10 on a different python version","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I\u2019m working on a python agent which I want to run natively on various environments (basically different docker images, but not necessarily). But I really have no idea how things work in this low-level area.\nI compiled the project using pyinstaller and it runs successfully on my machine.\nIn the docs, it is written that I should compile the project on the machine I want to run it on, but I do want to prepare various versions of the executable in advance so it\u2019ll be able to run on them, but I don\u2019t know what criteria I need to take into consideration.\nIf I want to run the agent on various docker images, what are the specs I need to pay attention to? Architecture? OS? GCC version?
Base image?\nWhat is the best way to compile as many binaries as possible to support various docker images?\nIf I compile the project on an alpine image, for example, does it mean it will run on all docker images based on alpine?\nAny advice is welcome.\nThanks!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":154,"Q_Id":67108411,"Users Score":1,"Answer":"If you just have a Python script, that script on its own is (probably) portable, and can run on any system on any OS on any architecture that has a Python interpreter. This isn't a difficult requirement -- both Linux and MacOS systems generally come with Python preinstalled -- and if portability is important to you, I'd try to stay as close to a pure-Python environment as you can.\nOnce you've brought Pyinstaller into the picture, you've created a non-portable binary artifact. You'll need to re-run Pyinstaller on each different OS and each different architecture you want to run it on (Linux\/x86, Linux\/ARM, MacOS\/x86, MacOS\/ARM, Windows\/x86, and probably separate musl [Alpine] vs. glibc [Debian] builds on Linux). There aren't really any shortcuts around this.\nIn principle Docker looks like it can simplify this by running everything under Linux. You still need to create separate x86 and ARM artifacts. Once it's embedded in a Docker image, you wouldn't need to create separate builds for different Linux distributions, you would just need to run the precompiled image.
But, the host would need to have Docker installed (which it almost certainly won't by default), and depending on what exactly your agent is reporting on, it could be hampered by running in an isolated environment (in a Linux VM on non-Linux hosts).","Q_Score":0,"Tags":"python,docker,pyinstaller,executable","A_Id":67110507,"CreationDate":"2021-04-15T12:24:00.000","Title":"Question about running executables on various architectures and OSs","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"Has anyone run into this issue before? I've done pip install --user functions-framework and been told a bunch of requirements are already satisfied.\nWhen I then run functions-framework --target=function I get the error 'functions-framework' is not recognized as an internal or external command, operable program or batch file.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":787,"Q_Id":67132430,"Users Score":0,"Answer":"Run command prompt as Administrator.\nThen uninstall functions-framework using the command pip uninstall functions-framework\nReinstall it with: pip install functions-framework\nGo to your main.py directory and run functions-framework --target=","Q_Score":1,"Tags":"python,functions-framework","A_Id":67154854,"CreationDate":"2021-04-16T21:31:00.000","Title":"Functions-framework won't run in windows command line after installing","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a cpanel host and I want to run my Python script even after I close or log out of cPanel.\nMy Python script has to get different inputs at the beginning of the run.\nSo far I've tried nohup and screen, but
neither fixed this.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":119,"Q_Id":67139738,"Users Score":0,"Answer":"I've been using screen for a long time, and I'm able to run my script even if I close my terminal, without terminating the process; unless you power off your machine, your screen session will keep running in the background, and you can see the status of your script's execution using \"screen -r\"","Q_Score":0,"Tags":"python-3.x,linux,terminal,cpanel,host","A_Id":67142008,"CreationDate":"2021-04-17T15:17:00.000","Title":"Run python file in cpanel terminal after closing terminal","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have installed Python38 on AWS EC2 Linux, and set it as default by running sudo alternatives --set python \/usr\/bin\/python3.8, but it is still not working; Python 2.7 is still the default.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":86,"Q_Id":67166945,"Users Score":0,"Answer":"Python is preinstalled on Linux as python 2.7. To run python 3 use the command python3 script.py.
This calls upon the python 3 interpreter, while python script.py calls upon the python 2.7 interpreter.","Q_Score":0,"Tags":"python-3.x","A_Id":67167990,"CreationDate":"2021-04-19T17:51:00.000","Title":"Python3 not set as default on AWS EC2 linux","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"On Mac OS.\nDownloaded Python from their website.\nPython -V returns Python 2.7.16, and\npython3 -V returns Python 3.9.4\nInstalled pip with: python3 get-pip.py, got Successfully installed pip-21.0.1\nBut when I run pip -V\nI get File \"\/usr\/local\/bin\/pip\", line 1.... SyntaxError: invalid syntax\nAfter reading here a lot, I could not understand (in simple words):\n\nHow could you \"alias\" or update python to show\/run as version 3+?\nWhy can't I get the pip version if it's installed?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":568,"Q_Id":67167944,"Users Score":0,"Answer":"For anyone who's interested, what I had to do was:\n\ncd into the project in the terminal\nrun python3 -m venv env (this creates a python3 virtual environment inside the project)\nrun source env\/bin\/activate to activate it.\nNow pip --version or python --version will return the right results.\n\nFrom my basic knowledge, I understand that the Mac is using python2, which we don't want to interrupt, so we install python3 and create a 'virtual environment' for it inside our project folder; once it's in, we can install all other things.\nTo deactivate (stop the virtual env) we run deactivate","Q_Score":1,"Tags":"python,python-3.x,pip","A_Id":67174434,"CreationDate":"2021-04-19T19:04:00.000","Title":"Python and pip installation on Mac don't show version?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics
and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I had a short question related to deploying a plotly Dash app via uwsgi. The app is intended to be used by many concurrent users, and we were wondering whether the current approach is suitable for many concurrent users, mainly as we're using States and a dcc.Store() component. I was unaware that using State-parameters could potentially lead to issues for many concurrent users, so any additional insight would be appreciated! If necessary, I can provide a minimal working Dash example as well.\n\nWe designed a multi-page app via Tabs, where the user gives Inputs in three tabs. The output is subsequently displayed in a fourth tab, triggered by a callback.\nThe app callback is triggered via a button (n-clicks), with many of the input parameters of the user being called via States.\nStorage type was defined as follows: dcc.Store(id='session', storage_type='session')\nAll fields have the following persistence settings: persistence=True, persistence_type = 'memory'\n\nThe app itself works already for individual use, so the uwsgi and Apache server are configured correctly as the program works. The main concern at the moment is the concurrent users, and whether the input saved in States influence each other.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":838,"Q_Id":67196888,"Users Score":1,"Answer":"From the info given it seems that you are storing all state on the clientside, and the server thus stays stateless as intended in Dash. 
Hence I would not expect that you run into any issues with concurrent users.","Q_Score":1,"Tags":"python-3.x,deployment,plotly-dash","A_Id":67202817,"CreationDate":"2021-04-21T13:27:00.000","Title":"Deploying a Dash app for many concurrent users - dcc.Store() & States","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I just downloaded python3 (added to PATH) and sublime editor. In sublime editor, a book I'm using tells me to put in \"cmd\": [\"python3\", \"-u\", \"$file\"], but when I enter it (control B on Windows), I get the following error message -\n[WinError 2] The system cannot find the file specified\n[cmd: ['python3', '-u', ........","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":22,"Q_Id":67207437,"Users Score":0,"Answer":"I think that you should put just python \"filename\" on the command line to run your file.","Q_Score":0,"Tags":"python","A_Id":67207747,"CreationDate":"2021-04-22T05:49:00.000","Title":"Don't understand how to solve error message","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I want to add environment variables before launching my python script from IDA to debug a library.\nI set up Process Options in the Debugger menu and set Application to x86 python. But it seems there is no environment variable option.\nIs there any way to do so?\nIDA Pro 7.5 is an x64 application and it uses x64 python. But I want to debug x86 python, so I need to change PYTHONHOME before launching the python process.\nI tried to use a bat file to do it, but it launched another process.
So it didn't work.\n[Environment]\n\nWindows 10","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":165,"Q_Id":67209703,"Users Score":0,"Answer":"If it is remote debugging, add the required environment variables when the mac_server or linux_server starts","Q_Score":2,"Tags":"python,x86,64-bit,ida","A_Id":72400192,"CreationDate":"2021-04-22T08:39:00.000","Title":"Set environment variable before launching debug process by IDA","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a python script script.py designed to run at the command line. To make it easier to run, I have a windows batch file script.bat which sets up some defaults and other things. They exist in the same directory.\nIf I run >script at the command prompt, then the python script is run preferentially to the batch file. >script.bat works as expected.\n>where script lists the batch file first, so to my understanding, it should be run preferentially to the python script.\nCan I ensure that the batch file is run preferentially without renaming or using the file extension?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":115,"Q_Id":67223859,"Users Score":0,"Answer":"Solved by making my script into a zip file containing a main.py.
script.pyz runs after the batch file.","Q_Score":0,"Tags":"python,windows,batch-file","A_Id":67230205,"CreationDate":"2021-04-23T04:03:00.000","Title":"Running python script from windows batch file - filename conflicts","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am already using robot framework to automate my tests, using cygwin installed on Windows (I am not the admin).\nWindows 7\nPython 3.8\nI have successfully installed the Eclipse RED editor; however, when I tried to add the path of my cygwin python interpreter in the Eclipse RED editor preferences, I failed:\nit shows only the entry Path and an \"unknown type\", while it is supposed to recognize the python and robot version.\nI double-checked the path; I have tested the python robot command in the cygdrive bin and it is working.\nThe only thing that works in RED is when I have added an external tool config that points to the python command in the cygwin bin.\nBut this solution is not optimal because I could not run an individual test, while in RED it is possible to select and run it in the GUI.\nDid anyone manage to make RED work with the python interpreter of cygwin?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":67,"Q_Id":67239922,"Users Score":0,"Answer":"Finally I managed to run robot from the RED GUI by changing the name of the executable to python.exe","Q_Score":0,"Tags":"python,eclipse,cygwin,robotframework","A_Id":67269004,"CreationDate":"2021-04-24T06:28:00.000","Title":"red editor eclipse with python interpreter in cygwin","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm running a jupyter notebook remotely on a server
by\n\nconnecting to the server: ssh server:address\ninitializing the jupyter notebook: ipython notebook --no-browser --port=7000\nusing another terminal window, establishing a remote connection to the notebook: ssh -N -f -L localhost:6001:localhost:7000 server:address\nfinally, I access it through localhost:6001 in my browser.\n\nThe thing is: I'd like to keep the notebook running when I turn my computer off. Any ideas on how I can do it?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":200,"Q_Id":67253338,"Users Score":0,"Answer":"You could create a crontab on your remote server with your command ipython notebook --no-browser --port=7000 to manage the execution of the Jupyter notebook; that could be the way to go.","Q_Score":1,"Tags":"python,ssh,jupyter-notebook,remote-access","A_Id":67253456,"CreationDate":"2021-04-25T12:20:00.000","Title":"Jupyter Notebook remote browser without disconnecting","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I'm not sure what to search for this.\nI am writing a piece of code using python fire to create a command line interface.\npython test.py function argument\nis there a way to make the shell interpret the following like the command above:\ntest function argument\nSimilar to how I can just call jupyter lab and it will open a notebook etc.\nI have a feeling this is more to do with setting up my bashrc or similar instead of something I can do in Python.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":52,"Q_Id":67258005,"Users Score":0,"Answer":"Add the shebang (at the start of the file, in case you don't know):\n#!\/usr\/bin\/env python\nor\n#!\/usr\/bin\/env python3\nReplace the 3 with whatever version you have installed and want that file to run with\nSave the file to a directory already in your PATH, or add the
file's location to PATH\nNow you can hopefully run it by typing test.py function argument\nRename test.py to test\nNow you should be able to run it as test function argument\nAlso make sure your file is set as executable","Q_Score":1,"Tags":"python,linux,python-fire","A_Id":67258132,"CreationDate":"2021-04-25T20:32:00.000","Title":"Command line interface for python module","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm not sure what to search for this.\nI am writing a piece of code using python fire to create a command line interface.\npython test.py function argument\nis there a way to make the shell interpret the following like the command above:\ntest function argument\nSimilar to how I can just call jupyter lab and it will open a notebook etc.\nI have a feeling this is more to do with setting up my bashrc or similar instead of something I can do in Python.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":52,"Q_Id":67258005,"Users Score":0,"Answer":"You're correct that it has to do with adding to your .bashrc. You want to set an alias.\n\nMake sure your code has an appropriate shebang line at the top, ex. #!\/usr\/bin\/python3\nAdd the following to .bashrc, ex. alias test='python3 \/path\/to\/test.py' (the quotes are needed so the whole command is aliased)\n\nFrom there, you can use sys.argv in your code to handle arguments within the program.","Q_Score":1,"Tags":"python,linux,python-fire","A_Id":67258041,"CreationDate":"2021-04-25T20:32:00.000","Title":"Command line interface for python module","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"Can you advise me on the analogs of the socket library on Python?
The task is this: I need to write a very simple script with which I could execute remote commands in the Windows cmd. I know how this can be implemented using the socket library, but I would like to know if there are any other libraries for such a case.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":33,"Q_Id":67272337,"Users Score":0,"Answer":"Sockets are a low-level mechanism by which two systems can communicate with each other. Your OS provides this mechanism; there are no analogs.\nThe next examples come from the application layer, and they work with sockets in their lower communication layers: a socket opened by your http server, usually 80 or 443, or a websocket opened by your browser to communicate with your server. Likewise, the DNS query that your browser executes when it tries to resolve a domain name also works with sockets between your PC and the DNS server.","Q_Score":0,"Tags":"python,sockets","A_Id":67279572,"CreationDate":"2021-04-26T19:00:00.000","Title":"analogs of socket in python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am trying to link JENKINS and tortoise SVN but I am unable to. I have tried the below methods:\n\nCreate a batch file for an SVN checkout - Works fine\nCall the same batch file from Jenkins as a Batch command step in Jenkins - Does not work\nTried writing a python script and tried a subprocess.call() with svn checkout - Does not work.\n\nI get the below error when I try with Jenkins -\nsvn: E170013: Unable to connect to a repository at URL\nsvn: E215004: No more credentials or we tried too many times\nAuthentication failed.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":254,"Q_Id":67273897,"Users Score":1,"Answer":"Finally, I was able to solve it.
Clear the cached SVN credentials and use --username and --password with the SVN command.","Q_Score":0,"Tags":"python,jenkins,svn,tortoisesvn","A_Id":67360384,"CreationDate":"2021-04-26T21:13:00.000","Title":"JENKINS Tortoise SVN link - Unable to connect to repository","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I want to find a free cloud storage service with a free API that could help me back up some files automatically.\nI want to write some script (for example python) to upload files automatically.\nI investigated OneDrive and GoogleDrive. The OneDrive API is not free; the GoogleDrive API is free, but it needs interactive human authorization before using the API.\nFor now I'm simply using the email SMTP protocol to send files as email attachments, but there's a max file size limitation, which will fail me in the future, as my file size is growing.\nAre there any other recommendations?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":500,"Q_Id":67275889,"Users Score":0,"Answer":"gdownload.py using Python3\n\n\n\n from apiclient.http import MediaIoBaseDownload\n from apiclient.discovery import build\n from httplib2 import Http\n from oauth2client import file, client, tools\n import io,os\n \n CLIENT_SECRET = 'client_secrets.json'\n SCOPES = ['https:\/\/www.googleapis.com\/auth\/admin.datatransfer','https:\/\/www.googleapis.com\/auth\/drive.appfolder','https:\/\/www.googleapis.com\/auth\/drive']\n \n store = file.Storage('tokenWrite.json')\n creds = store.get()\n if not creds or creds.invalid:\n flow = client.flow_from_clientsecrets(CLIENT_SECRET, SCOPES)\n flags = tools.argparser.parse_args(args=[])\n creds = tools.run_flow(flow, store, flags)\n DRIVE = build('drive', 'v2', http=creds.authorize(Http()))\n \n files =
DRIVE.files().list().execute().get('items', [])\n \n def download_file(filename,file_id):\n #request = DRIVE.files().get(fileId=file_id)\n request = DRIVE.files().get_media(fileId=file_id)\n fh = io.BytesIO()\n downloader = MediaIoBaseDownload(fh, request,chunksize=-1)\n done = False\n while done is False:\n status, done = downloader.next_chunk()\n print(\"Download %d%%.\" % int(status.progress() * 100))\n fh.seek(0)\n f=open(filename,'wb')\n f.write(fh.read())\n f.close()\n \n rinput = vars(__builtins__).get('raw_input',input)\n fname=rinput('enter file name: ')\n for f in files:\n if f['title'].encode('utf-8')==fname:\n print('downloading...',f['title'])\n download_file(f['title'],f['id'])\n os._exit(0)","Q_Score":2,"Tags":"python,google-drive-api,backup,onedrive,cloud-storage","A_Id":67333345,"CreationDate":"2021-04-27T01:43:00.000","Title":"What cloud storage service allow developer upload\/download files with free API?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"We are running an API server where users submit jobs for calculation, which take between 1 second and 1 hour. They then make requests to check the status and get their results, which could be (much) later, or even never.\nCurrently jobs are added to a pub\/sub queue, and processed by various worker processes. These workers then send pub\/sub messages back to a listener, which stores the status\/results in a postgres database.\nI am looking into using Celery to simplify things and allow for easier scaling.\nSubmitting jobs and getting results isn't a problem in Celery, using celery_app.send_task. 
However, I am not sure how best to ensure the results are stored, particularly for long-running or possibly abandoned jobs.\nSome solutions I considered include:\n\nGive all workers access to the database and let them handle updates. The main limitation to this seems to be the db connection pool limit, as worker processes can scale to 50 replicas in some cases.\n\nListen to celery events in a separate pod, and write changes based on this to the jobs db. Only 1 connection needed, but as far as I understand, this would miss out on events while this pod is redeploying.\n\nOnly check job results when the user asks for them. It seems this could lead to lost results when the user takes too long, or slowly clog the results cache.\n\nAs in (3), but periodically check on all jobs not marked completed in the db. A tad complicated, but doable?\n\n\nIs there a standard pattern for this, or am I trying to do something unusual with Celery? Any advice on how to tackle this is appreciated.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":121,"Q_Id":67303047,"Users Score":2,"Answer":"In the past I solved a similar problem by modifying tasks to not only return the result of the computation, but also store it in a cache server (Redis) right before returning. I had a task that periodically (every 5 min) collected these results and wrote the data (in bulk, so quite efficient) to a relational database.
It was quite effective until we started filling the cache with hundreds of thousands of results, so we implemented a tiny service that does this instead of a task that runs periodically.","Q_Score":1,"Tags":"python,celery","A_Id":67314568,"CreationDate":"2021-04-28T15:18:00.000","Title":"Persisting all job results in a separate db in Celery","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I'm wondering how python package repositories work for CentOS (and also other distributions), as I can't find any article about that. Where do python packages\/versions come from?\nMy question comes from the fact that I want to install the python package Quart, and it offers only the two-year-old version 0.6.15 on both CentOS 7 and 8, while on Ubuntu it offers the latest, 0.14.1.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":44,"Q_Id":67335869,"Users Score":1,"Answer":"The Quart 0.6.* releases are the last ones to support Python 3.6. If you install Python 3.7+ you can then install the latest Quart versions.","Q_Score":0,"Tags":"python,centos,package,repository,quart","A_Id":67367680,"CreationDate":"2021-04-30T14:43:00.000","Title":"Python package repositories on CentOS\/Ubuntu","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have started a private project with Django and Channels to build a web-based UI to control the music player daemon (mpd) on a raspberry pi. I know that there are other projects like Volumio or moode audio etc.
out of the box that do the same, but my intention is to learn something new!\nUp to now I have managed to set up an nginx server on the pi that communicates with my devices, like my mobile phone or pc. In the background nginx communicates with a uWSGI server for http requests to Django and a daphne server as asgi for the ws connection to Django Channels. There is also a redis server installed as backend, because the Channels Layer needs this. So, on a client request a simple html page as UI is served and a websocket connection is established.\nIn parallel I have a separate script as an mpd handler, which is wrapped in a while loop to keep it alive, and which does all the stuff with mpd using the python module python-mpd2.\nThe mpd handler shall get its commands via websocket from the clients\/consumers, like play, stop etc., and react on that. At the same time, it shall send the timeline of the song when a song is playing, let\u2019s say every second, also via websocket. I managed to send data frequently to all connected clients\/consumers with async_to_sync(channel_layer.group_send) from outside, but I couldn\u2019t find a solution for how to pass data\/commands coming from the clients via websocket to my separate running mpd handler script.\nI read in the docs for Django Channels that it is not recommended to use while loops in the consumers because this will block all the communication \u2013 that\u2019s right, I have tried this already. Then I tried to receive messages with the command async_to_sync(channel_layer.receive)('channel_name') in the mpd handler with a direct connection to a consumer. But this command blocks my mpd handler because it works asynchronously, although I use async_to_sync.\nSo, my question:\nIs it possible to pass messages to the outside of Django Channels, to other scripts, with channels' own methods? Do you have any suggestions for how to solve this, maybe with other methods or workarounds?
I am looking for a reliable solution.\nI gave some thought to this issue and have some ideas, but I don\u2019t know if they will lead to any solution:\n\nPolling:\nThe clients frequently send messages and requests via websocket to control the mpd and update the UI. In this case no handler would be needed. (I don\u2019t know if this method will generate too much traffic on the websocket and make it slow. As well, the connection to mpd has to be established frequently and closed again. Don\u2019t know if this works robustly.)\nDatabase:\nCreate a database that the consumers and the mpd handler both have access to. The consumers write the incoming messages to the database and the mpd handler reads them out and does the job. (Here I don\u2019t know if there will be problems when the consumers and mpd handler try to access the db at the same time.)\nUsing Queues with the multiprocessing module:\nConsumers pass the messages via a queue to the mpd handler. (Don\u2019t know if this is possible.)\nCatching up the messages in redis:\nThe mpd handler listens frequently on redis to catch the messages. I read that when the Layers are used in the common way, only the groups and channel names are listed on redis. Messages are passed via redis when the consumers are started as workers. (That would mean that all my consumers must start as background workers, but how?)\n\nI hope you may have a solution to my question. You may realise from my ideas and the question marks involved in solving this problem that I am not an IT expert. As I wrote at the beginning, I have a different engineering background and am a newbie in this, but I am very interested in learning something new! 
So please be patient with me when I don\u2019t understand everything immediately.\nI hope to read your answers soon and thank you in advance.\nBest regards.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":373,"Q_Id":67347331,"Users Score":0,"Answer":"While nobody gave an answer to my question, I tried out some possible options.\nI changed the binding of mpd from a fixed IP to a socket connection and created an mpd_Handler class with some functions\/methods like connect to mpd, disconnect, play, pause etc.\nThis class is imported in Django's consumers.py and views.py. Whenever a web client connects to Django or has a new command (like play, skip etc.), the mpd_Handler will perform the command and respond with the actual state of mpd, like the current song metadata.\nA second mpd handler, which runs outside of Django as a separate script, frequently monitors the mpd state to detect any changes. In case of a change in mpd (e.g., the song of a web radio stream has changed, or the duration time of the song), this handler informs all clients that are connected to the Django consumer group with the command async_to_sync(channel_layer.group_send) so that the clients can update their UI.\nAt the moment it works, and I hope this is a good solution and helps others who have the same problem. Other suggestions are still welcome!\nBest regards.","Q_Score":1,"Tags":"python,django,websocket,redis,django-channels","A_Id":68302357,"CreationDate":"2021-05-01T14:27:00.000","Title":"Django Channels: How to pass incoming messages to external script which is running outside of django?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"So, here is the problem I ran into: I am trying to build a very small-scale MVP app that I will be releasing soon. 
I have been able to figure out everything from deploying the flask application with Dokku (I'll upgrade to something better later) and have been able to get most things working on the app, including S3 uploading, stripe integration, etc. Here is the one thing I am stuck on: how do I generate SSL certs on the fly for customers and then link everything back to the Python app? Here are my thoughts:\nI can use a simple script and connect to the Letsencrypt API to generate and request certs once domains are pointed to my server(s). The problem I am running into is that once the domain is pointed, how do I know? Dokku doesn't connect all incoming requests to my container, and therefore Flask wouldn't be able to detect it unless I manually connect it with the dokku domains:add command.\nIs there a better way to go about this? I know of SSL for SaaS by Cloudflare but it seems to only be for their Enterprise customers, and I need a robust solution like this that I don't mind building out but just need a few pointers on (unless there is already a solution out there for free - no need to reinvent the wheel, eh?). Another thing: in the future I do plan to have my database running separately and load balancers pointing to multiple different instances of my app (won't be a major issue as the DB is still central, but I am just worried about the IP portion of it). To recap though:\nClient Domain (example.io) -> dns1.example.com -> Lets Encrypt SSL Cert -> Dokku Container -> My App\nPlease let me know if I need to re-explain anything, thank you!","AnswerCount":2,"Available Count":1,"Score":-0.0996679946,"is_accepted":false,"ViewCount":271,"Q_Id":67352618,"Users Score":-1,"Answer":"Your solution is a wildcard certificate, or app prefixing.\nSo I'm not sure why you need a cert per customer, but let's say you are going to do\ncustomer1.myapp.com -> routes to customer1 backend. 
For whatever reason.\nLet's Encrypt lets you register *.myapp.com and therefore you can use subdomains for each customer.\nThe alternative is a customer prefix.\nSay your app URL looks like www.myapp.com\/api\/v1\/somecommand\nyou could use www.myapp.com\/api\/v1\/customerID\/somecommand and then allow your load balancer to route based on the prefix and use a rewrite rule to remove the customerID, getting back to the original URL.\nThis is more complicated, and it is load balancer dependent, but so is the first solution.\nAll this being said, both solutions would most likely require a separate instance of your application per customer, which is a heavy solution, but fine if that's what you want and you are using lightweight containers or deploying multiple instances per server.\nAnyway, a lot more information would be needed to give a solid solution.","Q_Score":5,"Tags":"python,nginx,ssl,flask,lets-encrypt","A_Id":67421642,"CreationDate":"2021-05-02T03:01:00.000","Title":"Generating SSL Certs for Customer Domains and integrating with Python Flask","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I can't find a way to copy an image or a file to the clipboard. 
I tried using pyperclip but it isn't able to do that.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":192,"Q_Id":67356520,"Users Score":1,"Answer":"I found a way to do this using a shell command:\nos.system(f\"xclip -selection clipboard -t image\/png -i {path + '\/image.png'}\")\nIt's less than ideal but it does the job.","Q_Score":0,"Tags":"python,linux,clipboard","A_Id":67364677,"CreationDate":"2021-05-02T12:41:00.000","Title":"How to copy a file\/image to the clipboard in Linux using Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have started a dramatiq worker to do some task and after a point, it is just stuck and throws this below-mentioned error after some time.\n[MainThread] [dramatiq.MainProcess] [CRITICAL] Worker with PID 53 exited unexpectedly (code -9). Shutting down...\nWhat can be the potential reason for this to occur? 
Are system resources a constraint?\nThis queuing task is run inside a Kubernetes pod.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":226,"Q_Id":67362116,"Users Score":0,"Answer":"Please check the kernel logs (\/var\/log\/kern.log and \/var\/log\/kern.log.1).\nThe worker might be getting killed by the OOMKiller (OutOfMemory).\nTo resolve this, try to increase the memory if you are running in docker or a pod.","Q_Score":1,"Tags":"kubernetes,rabbitmq,python-3.6,dramatiq","A_Id":71956631,"CreationDate":"2021-05-02T23:58:00.000","Title":"Dramatiq worker getting killed every often","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"So I was on pycharm and I made a new file to put some code in, but when I ran the code it said \"No Python at 'file_name'\". Underneath in the terminal it also said \"Process finished with exit code 103\". Anyone know what that means?","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":3557,"Q_Id":67378824,"Users Score":1,"Answer":"Go to the directory where you installed python, usually C:\\\\pythonX\nand copy everything to C:\\Users\\ACCOUNT-NAME\\AppData\\Local\\Programs\\Python\\PythonX","Q_Score":3,"Tags":"python","A_Id":69607906,"CreationDate":"2021-05-04T04:30:00.000","Title":"Process finished with exit code 103","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am new to couch DB and cloudant. A django application is converting a pdf into an image using a celery task and storing both in couch DB. But the local cloudant cache is not updating, since the couch DB update is done by the celery task. 
When checking the local cache, it's storing a previous doc object with an old revision number. The remote couch DB is updating fine but not syncing with the local one.\nWhy are couch updates from celery not affecting the local cache?\nIs there anything I am doing wrong?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":59,"Q_Id":67407217,"Users Score":0,"Answer":"The local cache is meant to cache content to reduce unnecessary calls to the API. It doesn't sync with the remote DB. In this case \"unnecessary\" is defined by you. If the local cache contains a stale document then you can refresh it by re-fetching the document like my_doc.fetch(). Where my_doc is a reference to your stale cached document.","Q_Score":0,"Tags":"python,django,celery,couchdb,cloudant","A_Id":68653149,"CreationDate":"2021-05-05T18:55:00.000","Title":"Cloudant local cache not syncing with local couch cache. Using python-cloudant","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I followed the official mediapipe page but without any result, so can someone help me install mediapipe on a raspberry pi 4? In windows it is easy to install and use it, but for an arm device like the raspberry pi I did not find any resources.","AnswerCount":4,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":7421,"Q_Id":67410495,"Users Score":0,"Answer":"I ran the command sudo pip3 install mediapipe-rpi4. This worked. 
When I try to import the module in python I get ModuleNotFoundError: No module named \u2018mediapipe.python._framework_bindings\u2019","Q_Score":5,"Tags":"python-3.x,deep-learning,raspberry-pi4,mediapipe","A_Id":70965111,"CreationDate":"2021-05-06T00:44:00.000","Title":"how to install and use mediapipe on Raspberry Pi 4?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I followed the official mediapipe page but without any result, so can someone help me install mediapipe on a raspberry pi 4? In windows it is easy to install and use it, but for an arm device like the raspberry pi I did not find any resources.","AnswerCount":4,"Available Count":2,"Score":0.049958375,"is_accepted":false,"ViewCount":7421,"Q_Id":67410495,"Users Score":1,"Answer":"if you use python3 you can try sudo pip3 install mediapipe-rpi4","Q_Score":5,"Tags":"python-3.x,deep-learning,raspberry-pi4,mediapipe","A_Id":70729637,"CreationDate":"2021-05-06T00:44:00.000","Title":"how to install and use mediapipe on Raspberry Pi 4?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"Here's what appears to be an odd question, at least from what I've been able to turn up in Google. I'm not trying to determine IF there's a UAC prompt (I've got a couple of reliable ways to do that: win32gui.GetForegroundWindow() returns a 0, or win32gui.screenshot raises an OSError exception, at least in my case).\nI'm also not looking to BYPASS the UAC, at least from python. I have an update process that's kicking off automatically that I need to get through the UAC. I don't have control of the update process so I don't think it's a good candidate for disabling the UAC with Python. 
I could just disable the UAC in Win10, but I'd prefer not to if possible. I do have a couple of methods for bypassing the UAC: in one instance, where I'm running this in VirtualBox, I believe I can use VBoxManage guestcontrol to send keystrokes to the guest system; for a standalone system I have a microcontroller connected as a USB HID Keyboard, with a basic deadman switch (using the scroll lock to pass data between the python and the microcontroller acting as the HID keyboard) - if it doesn't get the signal, it sends left arrow and enter to bypass the UAC.\nWhat I'm trying to do, and getting stymied with, is verifying that the UAC popup is actually from the update process that I want to accept the UAC prompt for, and not some other random, possibly nefarious application trying to elevate privileges. I can use the tasklist to verify the UAC is up, but I'm not seeing any way to see WHAT caused the UAC prompt. The update process is kicked off from an application that's always running, so I can't check to see if the process itself is running, because it's running under normal operation; I just want to accept the UAC when it's attempting to elevate privileges to update. I've been using a combination of win32gui.GetWindowText and win32gui.EnumWindows to look for specific window titles, and for differentiating between windows with the same title, taking a screenshot and using OpenCV to match different objects that appear in the windows. 
Both of those methods fail though when UAC is up, which is why I can use them to detect UAC as I mentioned before.\nI suppose I could use a USB camera to take a screenshot of the system, but I'd like to be able to run this headless.\nAnybody have an idea of a way to accomplish this? As the tree said to the lumberjack, I'm stumped.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":91,"Q_Id":67415047,"Users Score":0,"Answer":"If you run a process as administrator, no user account control prompt will appear.\nYou could manually run your process as administrator.\nYou need system privileges to interact with a user account control prompt.","Q_Score":0,"Tags":"python,windows,uac","A_Id":67415235,"CreationDate":"2021-05-06T09:07:00.000","Title":"How can I determine what what is in the Win10 UAC prompt in Python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"When I updated the virtualenv on my computer (OS\/X Big Sur), somehow Python version 3.9.0 was installed. But my host environment continues to use 3.6.0 and I'd like to revert my development env to that. How is this done, please?\n(To clarify: the python3 command on my machine is 3.9.)\n--- I've decided to self-close this question as being probably-irrelevant to my actual concern, which is in another simultaneously-active SE thread concerning the \"mysqlclient=1.4.3\" package. This is probably a red herring.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":621,"Q_Id":67474085,"Users Score":0,"Answer":"I think that the reason for the version-change is that some upgrade to OS\/X probably changed the python3 command executable, from 3.6 to 3.9. 
The venv command apparently simply grabs whatever's there at the time.\nMy actual reason for posting this question has to do with a mysqlclient=1.4.3 install which is now mysteriously failing. And so, this issue might actually be a \"red herring\" to that fundamental question. So, I think I'll just \"table\" this question now and focus on why that install is no longer working. Possibly it doesn't actually have anything to do with 3.6 vs. 3.9.\nThanks for the quick responses, nonetheless.\nP.S.: And, hey, I even found my own answer to that question!","Q_Score":0,"Tags":"python,virtualenv","A_Id":67474464,"CreationDate":"2021-05-10T16:18:00.000","Title":"How to install Python 3.6.0 in a VirtualEnv? instead of 3.9 (CLOSE ME)","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"When I try to refactor variables using the Rename Symbol action, the variable is not refactored, and a tooltip pops up that says \"No result.\" There are no error messages, or any other indication that something is wrong.\nVS Code was recently updated to ver 1.56.1, and along with this update came a switch to Pylance. Before this update, Rename Symbol worked, but now it doesn't work on Remote-SSH, Remote-WSL, or local workspaces. On Remote-WSL in particular, pressing F2 will not even display the refactor dialogue box.\nI have tried restarting the Python Language Server, restarting VS Code, and restarting my PC, but nothing has worked. 
I would like to continue using Pylance if possible.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":540,"Q_Id":67495056,"Users Score":0,"Answer":"With the new update (1.56.2), the problem has been fixed.","Q_Score":2,"Tags":"python,visual-studio-code,pylance","A_Id":67538544,"CreationDate":"2021-05-11T22:21:00.000","Title":"Rename Symbol (F2) returns \"No result.\"","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm following the widely used approach of naming log files by timestamp. The log file is created at the beginning of the run. My tool generates a run_id during each run which is also logged in the log file.\nThis worked well, but now as the log files are increasing, it becomes hard whenever a run fails and I need to investigate the log files. I'll be notified which run_id failed, but finding the corresponding log file is hard as I need to do grep -inr over all log files to find the relevant one, which takes some time.\nIf I could name the log file by run_id, it would have been super simple to just do vim whenever a run fails. But the run_id is not known at the time of log file creation and is rather generated during the run by a sequence generator in the backend database.\nWhat would be the ideal solution in this case?\nShould I rename the file at the end of each run? Or is there any other approach I am missing?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":24,"Q_Id":67497957,"Users Score":0,"Answer":"As you said, renaming the file at the end of the run seems like the simplest solution. 
You'll just have to watch out for the possibility that the run fails before the run_id is created and the log retains its initial name.\nIf you want, you could append the run_id to the timestamp if you still want it in the filename.","Q_Score":0,"Tags":"python,logging","A_Id":67498199,"CreationDate":"2021-05-12T05:49:00.000","Title":"using run_id as name of the log file instead of timestamp but run_id is generated during the run","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"Even if I risk some negative votes, I want to ask you about a strategy for doing automatic installations of libraries and packages. I tested the strategy of finding all libraries with pip freeze and writing them into a file.txt, and it worked great. After that, I used pip install -r file.txt. So far, so good. But what can you do when you want to gradually add libraries and don't want to write them manually in the file.txt, but simply have code that reads the new library, eventually uses subprocess, and installs it automatically? The purpose behind this question is to make the code work at its fullest with only one single human action: when you run the code, it reads the new libraries and installs them automatically, without writing them in the file.txt. Any ideas are appreciated, thank you! :)","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":63,"Q_Id":67504809,"Users Score":0,"Answer":"It seems like you already have all of the parts in place. 
Why not just write a bash script that runs every day to call pip freeze, put it in the txt file, and update everything?\nThen run the script as often as you want with crontab.","Q_Score":0,"Tags":"python,linux,installation,automation,libraries","A_Id":67504895,"CreationDate":"2021-05-12T13:39:00.000","Title":"Automatically install libraries","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm trying to automate a python script using crontab and have read many tutorials, but I'm unable to get it working. I'm on an AWS linux instance and want to run my py file every 45 minutes. My crontab has the following line\n*\/45 * * * * \/usr\/bin\/python3 \/home\/ec2-user\/Project-GTF\/main.py\nand listing the jobs with crontab -l also shows the above line.\nLet's assume my main.py file contains a single print statement print(\"Hello World\")\nI also use tmux to keep my terminal alive all the time.\nI expected my terminal to print Hello World every 45 minutes, but it doesn't :( Can anyone suggest what I'm doing wrong? I don't know much about cron and have never automated a single cron job in my life :[","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":170,"Q_Id":67506627,"Users Score":2,"Answer":"Traditionally, stdout and stderr from cron jobs have been emailed to their owner, though on present-day systems where email accounts are dissociated from unix accounts, this has become a bit fuzzy. Your best bet is probably to explicitly redirect output to a file.\n(It's possible that there is some AWS specific answer to this, in which case, this being the Internet, someone is sure to tell us. 
:-) )","Q_Score":0,"Tags":"python-3.x,amazon-ec2,cron,cron-task","A_Id":67506750,"CreationDate":"2021-05-12T15:27:00.000","Title":"How to use cron with tmux session?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a legacy script that uses Python2 (which I can't modify). It imports the module yaml. On older machines, this is satisfied using pip install pyyaml.\nI'm using a newly built Ubuntu laptop which has had apt install python2 done. However, there is only a python3 version of pip available. The command python2 -m pip install pyyaml says there is no module named pip. I cannot do apt install python2-pip or anything like apt install python2-pyyaml or python2-yaml or similar as there are no such packages available any more. Is there an easy way I can install the module yaml for python2 now that python2 is unsupported?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":819,"Q_Id":67537167,"Users Score":1,"Answer":"Just do a curl for the following to install pip for python2:\ncurl https:\/\/bootstrap.pypa.io\/pip\/2.7\/get-pip.py --output get-pip.py\nThen run\nsudo python2 get-pip.py\nYou can then use the command pip2 --version to check if pip is installed and for the correct version of python.","Q_Score":0,"Tags":"python,python-2.7,yaml,pyyaml","A_Id":67537253,"CreationDate":"2021-05-14T15:48:00.000","Title":"Installing python2 pyyaml","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am running a python app in a different network namespace and it opens a TCP connection to a websocket. The problem is that this connection has microfreezes. 
It would run fine for approximately a minute and then it would hang for a second. I think it's a network namespace problem, because if I run it outside the namespace there is no problem.\nI monitored the TCP buffers with ss -tm and what I notice is that when the freeze starts, the buffers also start to fill up. They seem to be empty the rest of the time. Any help would be appreciated.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":50,"Q_Id":67539287,"Users Score":0,"Answer":"I found the problem: the app tried to connect to a localhost socket. It failed because that socket is open with the main ip and not in the network namespace. Because the app is written in python, it would hang while trying to connect.","Q_Score":0,"Tags":"python,linux,networking","A_Id":67540910,"CreationDate":"2021-05-14T18:25:00.000","Title":"network namespace TCP microfreeze","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"When I run \"sudo apt install spyder\" or another command using sudo apt, it throws an error on ubuntu 20.04:\nE: dpkg was interrupted, you must manually run 'sudo dpkg --configure -a' to correct the problem.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":415,"Q_Id":67545066,"Users Score":1,"Answer":"Well, just follow what the command says. 
Try to run 'sudo dpkg --configure -a'","Q_Score":0,"Tags":"python-3.x,linux,spyder,ubuntu-20.04,failed-installation","A_Id":67545731,"CreationDate":"2021-05-15T09:09:00.000","Title":"ubuntu error when install app E: dpkg was interrupted, you must manually run 'sudo dpkg --configure -a' to correct the problem","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a basic background in DS with Python and am now trying for the first time to build an application, and I need a bit of advice on which infrastructure to choose on AWS and how to structure the application. The code I can develop\/google on my own :)\nThe main question is: where\/on which platform of AWS should Step 2 happen? I guess I am missing some basic knowledge of applications there and therefore have trouble googling the problem myself.\nWhat should happen in the application:\n\nOn a website a user types values into a form and these values are sent somewhere to be processed. (Already coded)\nNow, these values (so far an email with the values) have to be sent somewhere to be processed. Here I do not know in which AWS infrastructure I can write an application that can receive these values (\/email) directly and process them automatically.\n3.\/4. Automated processing of values, pdf creation and sending etc.\n\nThe goal is that whenever a user uses the website and sends the email, the automated process is triggered.\nThank you for your help! 
:)","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":51,"Q_Id":67551103,"Users Score":1,"Answer":"I would suggest you use a \"fanning out\" architecture with something like Eventbridge or an SNS topic.\nWhen your user submits the form, you publish a message to an SNS topic.\nThat topic can send an email, and also send the data to a backend service like lambda to save to something like DynamoDB or RDS MySQL.","Q_Score":0,"Tags":"python,amazon-web-services,automation,architecture,infrastructure","A_Id":67551504,"CreationDate":"2021-05-15T20:49:00.000","Title":"Which AWS infrastructure to choose to write an automated application in python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I coded my own virtual assistant in python. I know how to start apps through my voice via os.system or os.startfile, but I don't know how to close the current window?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":83,"Q_Id":67558580,"Users Score":0,"Answer":"os.system('taskkill \/F \/IM processname')\ne.g.: os.system('taskkill \/F \/IM chrome.exe')","Q_Score":0,"Tags":"python","A_Id":67558742,"CreationDate":"2021-05-16T15:39:00.000","Title":"How to close any window using OS library in python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I want to run a Docker container with some Python bots inside. 
These scripts need libraries like pandas, numpy and matplotlib.\nEverything works great in an x86 environment, but building the image in an Arm environment takes ages.\nFrom what I understood, it seems like it needs to compile the libraries for Arm instead of just downloading the compiled files. I am trying to find a way to avoid all of this.\n\nMatplotlib, pandas, numpy: they're pretty heavy packages, but I need just a few functionalities. Is there a slim version of these libraries?\nIs there any way to store the compiled stuff in a permanent cache somewhere in the pipeline? (I am using both GitHub and GitLab to build this.)\n\nAny help is appreciated.\nRegards","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":463,"Q_Id":67591231,"Users Score":1,"Answer":"As @Itamar Turner-Trauring mentioned, matplotlib, numpy, pandas and many more libraries provide wheels for aarch64 as well as for x86.\nIn my case, pip was downloading the source distribution by default, and then it had to compile it during the build, which takes ages.\nAfter changing the pandas version I was using from 1.1.5 to 1.3.5, the pip logs changed\nfrom:\nDownloading https:\/\/****\/pypi\/download\/pandas\/1.1.5\/pandas-1.1.5.tar.gz (5.2 MB)\nto:\nDownloading https:\/\/****\/pypi\/download\/pandas\/1.3.5\/pandas-1.3.5-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl (10.9 MB)\nAnd the entire build time changed from 58 to 8 minutes.","Q_Score":1,"Tags":"python,docker,arm,gitlab-ci,github-actions","A_Id":70644911,"CreationDate":"2021-05-18T17:29:00.000","Title":"Python ARM in Docker - Installing requirements takes ages","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"In jupyter, when an error occurs, we can then debug the error with the command %debug. 
I want to know if there is a similar way when running a python script (.py) in the shell.\nI know that I can use pdb to set some breakpoints, but I just want to know a way without such pre-processing (because re-running the code until the error costs a lot of time).","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":19,"Q_Id":67595247,"Users Score":0,"Answer":"In general, no: it depends on \"the shell\" that you are running. Jupyter launches with a lot of instrumentation in support of its debugger, assuming that you're using Jupyter because you want those capabilities at the ready.\nI presume that you're using some UNIX shell (since you mention pdb); implicitly loading superfluous software is antithetical to the UNIX philosophy.\nI think that what you'll need is one of the \"after\" debugger modes, although that will still leave you without information from just before the error point: those packages cannot do much to trace the history of problem variables.","Q_Score":0,"Tags":"python","A_Id":67595282,"CreationDate":"2021-05-18T23:46:00.000","Title":"Any command like '%debug' (in jupyter) when running python script?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"For example: you want to start the file \"hello.txt\" in python, you will use this code:\n os.startfile(\"hello.txt\")\nBut the system will open \"hello.txt\" with the default program.\nIf I want to start \"hello.txt\" with Sublime Text or Notepad++ or any program which is not the default program, what do I have to do? :(\nThanks (and sorry for my bad English)","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":36,"Q_Id":67609041,"Users Score":0,"Answer":"Two possible solutions: Distribute the editor within your application or search for it on the local file system.
Most editors have CLI commands; usually you can just append the filename right after the editor's executable on the command line.","Q_Score":1,"Tags":"python,python-3.x","A_Id":67609060,"CreationDate":"2021-05-19T18:36:00.000","Title":"How to starfile with program which I want?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am trying to deploy a function to a tabpy server by executing:\nclient.deploy('add',add,'adding x and y')\nI am getting the following error:\n\nOverwriting existing file \"\/usr\/local\/lib\/python3.9\/site-packages\/tabpy\/tabpy_server\/staging\/endpoints\/name\/1\" when saving query object\nError with server response. code=500; text={\"message\": \"error adding endpoint\", \"info\": \"FileNotFoundError : [Errno 2] No such file or directory: '\/usr\/local\/lib\/python3.9\/site-packages\/tabpy\/tabpy_server\/staging\/endpoints\/name\/1'\"}\nI am able to deploy if I run tabpy on my local machine, but running in Docker is not working.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":56,"Q_Id":67609679,"Users Score":0,"Answer":"You can only deploy functions directly from the Tabpy server.
Remote deployment is not possible.","Q_Score":0,"Tags":"python,docker,tableau-api,tabpy","A_Id":71685183,"CreationDate":"2021-05-19T19:25:00.000","Title":"Tabpy: function Deployment fails on Docker running Tabpy","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have to connect to a server where my user has access to one small partition at \/home\/users\/user_name where I have a quota of limited space and a bigger partition at \/big_partition\/users\/user\nAfter logging into that server I arrive at \/home\/users\/user_name at the beginning. After that, I am doing the following steps.\n\ncd \/big_partition\/users\/user\nconda create --prefix=envs python=3.6\n\non the 4th line, it says Package plan for installation in environment \/big_partition\/users\/user\/envs: which is ok.\n\npress y, and now I am getting the following message.\nOSError: [Errno 122] Disk quota exceeded: '\/home\/users\/user_name\/.conda\/envs\/.pkgs\/python-3.6.2-0\/lib\/python3.6\/unittest\/result.py'\n\n\nCan anyone help me to understand how I can move the .conda folder from \/home\/users\/user_name to \/big_partition\/users\/user at the moment when I am creating this environment?","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":2230,"Q_Id":67610133,"Users Score":1,"Answer":"I found the solution.
All I need to do is export CONDA_ENVS_PATH with the path where I want the .conda to be:\nexport CONDA_ENVS_PATH=.","Q_Score":3,"Tags":"python,linux,anaconda,conda,anaconda3","A_Id":67611958,"CreationDate":"2021-05-19T20:03:00.000","Title":"How to move .conda from one folder to another at the moment of creating the environment","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I need to get a complete list of files in a folder and all its subfolders regularly (daily or weekly) to check for changes. The folder is located on a server that I access as a network share.\nThis folder currently contains about 250,000 subfolders and will continue to grow in the future.\nI do not have any access to the server other than the ability to mount the filesystem R\/W.\nThe way I currently retrieve the list of files is by using python's os.walk() function recursively on the folder.
This is limited by the latency of the internet connection and currently takes about 4.5h to complete.\nA faster way to do this would be to create a file server-side containing the whole list of files, then transferring this file to my computer.\nIs there a way to request such a recursive listing of the files from the client side?\nA python solution would be perfect, but I am open to other solutions as well.\nMy script is currently run on Windows, but will probably move to a Linux server in the future; an OS-agnostic solution would be best.","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":80,"Q_Id":67632192,"Users Score":1,"Answer":"You have provided the answer to your question:\n\nI do not have any access to the server other than the ability to mount the filesystem R\/W.\n\nNothing has to be added after that, since any server-side processing requires the ability to (directly or indirectly) launch a process on the server.\nIf you can collaborate with the server admins, you could ask them to periodically start a server-side script that would build a compressed archive (for example a zip file) containing the files you need, and move it to a specific location when done. Then you would only download that compressed archive, saving a lot of network bandwidth.","Q_Score":0,"Tags":"python,smb,fileserver","A_Id":67632560,"CreationDate":"2021-05-21T07:01:00.000","Title":"Request recursive list of server files from client","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm trying to understand Linux OS library dependencies to effectively run python 3.9 and make imported pip packages work. Is there a requirement for GCC to be installed for pip modules with C extension modules to run?
What system libraries does Python's interpreter (CPython) depend on?","AnswerCount":2,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":610,"Q_Id":67655350,"Users Score":2,"Answer":"I'm trying to understand Linux OS library dependencies to effectively run python 3.9 and make imported pip packages work.\n\nYour questions may have pretty broad answers and depend on a bunch of input factors you haven't mentioned.\n\nIs there a requirement for GCC to be installed for pip modules with C extension modules to run?\n\nIt depends how the package is built and shipped. If it is available only as a source distribution (sdist), then yes. Obviously a compiler is needed to take the .c files and produce a loadable binary extension (ELF or DLL). Some packages ship binary distributions, where the publisher does the compilation for you. Of course this is more of a burden on the publisher, as they must support many possible target machines.\n\nWhat system libraries does Python's interpreter depend on?\n\nIt depends on a number of things, including which interpreter (there are multiple!) and how it was built and packaged. Even constraining the discussion to CPython (the canonical interpreter), this may vary widely.\nThe simplest thing to do is whatever your Linux distro has decided for you; just apt install python3 or whatever, and don't think too hard about it. Most distros ship dynamically-linked packages; these will depend on a small number of \"common\" libraries (e.g. libc, libz, etc). Some distros will statically link the Python library into the interpreter -- IOW the python3 executable will not depend on libpython3.so. Other distros will dynamically link against libpython.\nWhat dependencies will external modules (e.g. from PyPI) have? Well that completely depends on the package in question!\nHopefully this helps you understand the limitations of your question.
If you need more specific answers, you'll need to either do your own research, or provide a more specific question.","Q_Score":0,"Tags":"python,c,linux,gcc","A_Id":67655446,"CreationDate":"2021-05-23T01:09:00.000","Title":"Does Python have a dependency on C compilers like GCC in Linux? What OS libraries does Python depend on?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"After trying to run \"service apache2 reload\" in the Raspberry Pi terminal, I get an error.\nThe error log reads:\n\"apache2.service is not active, cannot reload.\"\nHow do I activate \"apache2.service\" on the Raspberry Pi?\nI've already installed apache2 on my Raspberry Pi, but I can't reload it.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":74,"Q_Id":67658607,"Users Score":0,"Answer":"Please try this one: sudo service apache2 start. Make sure Apache is properly installed on your system.","Q_Score":0,"Tags":"python,flask,web,server,raspberry-pi","A_Id":67658658,"CreationDate":"2021-05-23T10:19:00.000","Title":"How to activate apache2 service on Raspberry Pi","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I have a python script, which I have successfully converted to a .exe file using pyinstaller. Somehow, the icon doesn't show up; I could use some help\/tips with that. It was stored as a .icns file as I work on a Mac.\nThe main issue here is that my python script writes a .csv file. When I run the .exe file, the csv saves to the main user folder, and not the same folder as the exe file. How do I solve this?
I have tried googling and every other method, but it doesn't work.\nOne solution is to change to the folder using the command line and then type '.\/file.exe'.\nBut I am really looking for an option which gets this done by default.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":46,"Q_Id":67667621,"Users Score":2,"Answer":"While saving the .csv file, you can use os.getcwd()+'\/file.csv'. Hope this might help.","Q_Score":0,"Tags":"python,pyinstaller,exe","A_Id":67667879,"CreationDate":"2021-05-24T06:37:00.000","Title":"How to store file from .exe in desired folder?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am building a Python application bundled with PyInstaller which is available for Windows, Linux (as AppImage) and MacOS.\nI need shortcut icons for those. I have a logo, but I am struggling to bring it to the right format.
It seems to me that all three platforms use different icon sizes and formats.\nIs there any online\/offline generator that allows me to create icons for all three platforms from a jpg file?","AnswerCount":4,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":634,"Q_Id":67689014,"Users Score":0,"Answer":"It's easier if you just use auto-py-to-exe, it uses pyinstaller, but it auto-generates the command, and so you can just upload the image to a GUI","Q_Score":4,"Tags":"python,icons,pyinstaller","A_Id":67825779,"CreationDate":"2021-05-25T13:29:00.000","Title":"How to create shortcut icons for Windows, MacOS and Linux applications bundled with PyInstaller","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I see that this has been asked before but the solution did not work for me. When entering PIP into the command prompt I get 'pip' is not recognized as an internal or external command,\noperable program or batch file.\nI located pip.exe at C:\\Users\\ZachPC\\Anaconda3\\Scripts and tried adding that to the path environment variable and it still gives me the same error in the command prompt.\nI am not sure why I am still not able to use PIP after adding the path to environment variables path in the control panel.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":50,"Q_Id":67697229,"Users Score":0,"Answer":"if you are using jupyter notes, try \"!pip ....\" as the beginning of the command.","Q_Score":0,"Tags":"python","A_Id":67697248,"CreationDate":"2021-05-26T00:47:00.000","Title":"Trying to use PIP for python in CMD and getting \"'pip' is not recognized as an internal or external command, operable program or batch file.\"","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and 
APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have an ETL process that runs constantly and with multiple subprocesses using ProcessPoolExecutor. In that process I need to run a few bash commands, and up until now I've used os.system because I was afraid of opening a new subprocess in my subprocess; it caused me a lot of problems in the past (memory error on my machine).\nAny idea of a replacement for Popen, or maybe reassurance that its way of opening a subprocess can't hurt?\nThanks","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":104,"Q_Id":67700645,"Users Score":1,"Answer":"Not sure what you need exactly, but maybe the following info will help you:\nSo basically you have these variants:\nos.system(\"ls -l\") executes the command passed as a string in a sub-shell (os.system.__doc__: 'Execute the command in a subshell.')\nsubprocess.call([\"ls\",\"-l\"]) or subprocess.call(\"ls -l\", shell=True), the difference being that when shell=False is set, no system shell is started up, so the first argument must be a path to an executable file, while shell=True means a system shell, e.g. \/bin\/sh, will first spin up.\nsubprocess.Popen is a direct call to the process constructor, where the command is passed as a list and you can set the stdout, stderr value to PIPE and call communicate() to get the output.\nIn Python 3.5 subprocess.run was added as a simplification for subprocess.Popen.
The main difference between them is that run executes the command and waits for it to finish, while with Popen you can continue doing stuff while the process finishes and then just make calls to communicate().","Q_Score":0,"Tags":"python,subprocess","A_Id":67700784,"CreationDate":"2021-05-26T07:47:00.000","Title":"replacement for Popen that doesn't start new subprocess","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am cloning a project's venv directory to my Windows environment. The virtual env was created on a Linux box. As a result, I guess, it doesn't include the Scripts\\Activate script. The Scripts\\ folder is completely empty.\nWhat do I need to do to Activate the environment (so I can use it in Visual Studio Code)?\nEdit: Apparently I got the venv paradigm wrong. You don't share the venv, but only the requirements.txt and allow each dev to create their venv (or not, if they choose). Thanks @bck and @Santiago","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":120,"Q_Id":67707638,"Users Score":0,"Answer":"Apparently I got the venv paradigm wrong. You don't share the venv, but only the requirements.txt and allow each dev to create their venv (or not, if they choose).
Thanks @bck and @Santiago","Q_Score":0,"Tags":"python,python-venv","A_Id":67707932,"CreationDate":"2021-05-26T15:03:00.000","Title":"Python venv from Linux is missing Windows Scripts","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I need to make a python3.7 installation for CentOS8, so I can install that via rpm\/yum rather than building from source on the target machine (need to avoid installing gcc and other build deps there).\nIs that a reasonable possibility? I'm comfortable building python from source, but I don't know how to package up the resulting install in a portable way (portable to other machines running the same OS). RPM would be ideal but I'd be happy with a tgz and a known set of yum runtime dependencies.\nNote that there is no official CentOS8 python 3.7, only 3.6 and 3.8. I specifically need 3.7.\nGoogling for \"build python distribution\" or \"build python RPM\" just shows how to build python modules for distribution, not python itself.\n(I know miniconda\/miniforge is an alternative way to get this done, but I'd prefer to do the build myself.)","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":47,"Q_Id":67723558,"Users Score":0,"Answer":"I didn't figure out how to build an RPM, but it turns out it's good enough just to build with --prefix=\/opt\/python37 and then tar that up. 
On the target machine, untar and add the shared lib to ld.so.conf and it works fine.","Q_Score":0,"Tags":"python,rpm,centos8","A_Id":67727182,"CreationDate":"2021-05-27T13:49:00.000","Title":"How can I build an rpm or tgz of a particular python version, specifically python3.7 for CentOS8","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am trying to introduce dynamic workflows into my landscape that involve multiple steps of different model inference, where the output from one model gets fed into another model. Currently we have a few Celery workers spread across hosts to manage the inference chain. As the complexity increases, we are attempting to build workflows on the fly. For that purpose, I got a dynamic DAG setup with Celeryexecutor working. Now, is there a way I can retain the current Celery setup and route airflow driven tasks to the same workers? I do understand that the setup in these workers should have access to the DAG folders and the same environment as the airflow server. I want to know how the celery workers need to be started on these servers so that airflow can route the same tasks that used to be done by the manual workflow from a python application. If I start the workers using the command \"airflow celery worker\", I cannot access my application tasks. If I start celery the way it is currently, i.e. \"celery -A proj\", airflow has nothing to do with it. Looking for ideas to make it work.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":495,"Q_Id":67725304,"Users Score":0,"Answer":"Thanks @DejanLekic. I got it working (though the DAG task scheduling latency was so high that I dropped the approach).
If someone is looking to see how this was accomplished, here are a few things I did to get it working.\n\nChange the airflow.cfg to change the executor, queue and result back-end settings (Obvious)\nIf we have to use a Celery worker spawned outside the airflow umbrella, change the celery_app_name setting to celery.execute instead of airflow.executors.celery_execute and change the Executor to \"LocalExecutor\". I have not tested this, but it may even be possible to avoid switching to the celery executor by registering airflow's Task in the project's celery App.\nEach task will now call send_task(); the AsyncResult object returned is then stored in either Xcom (implicitly or explicitly) or in Redis (implicitly pushed to the queue), and the child task will then gather the AsyncResult (it will be an implicit call to get the value from Xcom or Redis) and then call .get() to obtain the result from the previous step.\n\nNote: It is not necessary to split the send_task() and .get() between two tasks of the DAG. By splitting them between parent and child, I was trying to take advantage of the lag between tasks. But in my case, the celery execution of tasks completed faster than airflow's inherent latency in scheduling dependent tasks.","Q_Score":1,"Tags":"python,celery,airflow,celery-task","A_Id":67856388,"CreationDate":"2021-05-27T15:26:00.000","Title":"Use existing celery workers for Airflow's Celeryexecutor workers","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have recently downloaded Bowtie2-2.3.4 via the precompiled binaries, and added the file location to my path.
I have also installed Strawberry Perl, and added that location to my path as well.\nWhen I try to determine whether bowtie2 is correctly installed by typing bowtie2, bowtie2 --version, or bowtie2-inspect, I get the message: \"Can't open perl script \"C:\\Users\\lberd\\Documents\\Python\": No such file or directory\". If I create an empty directory called Python in my documents folder, it gives me \"Can't open perl script \"C:\\Users\\lberd\\Documents\\Python\": Permission denied\".\nI have python 3.9.5 installed, and the location is in my path as well.\nWhat is bowtie2 looking for in the python folder?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":33,"Q_Id":67744165,"Users Score":0,"Answer":"Found the problem: the bowtie binaries were saved in a folder called \"python scripts\". Bowtie did not know how to handle the space in the folder name, which is why it stopped at \"python\".","Q_Score":0,"Tags":"python,bowtie","A_Id":67808770,"CreationDate":"2021-05-28T18:52:00.000","Title":"Bowtie2 is expecting a Documents\/Python folder. why?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I would like to write this code on Windows:\nos.path.join(folder1 + \"\/\" + folder2)\nIt works fine on Mac, but on Windows it gives me an error: OSError: [WinError 123] The filename, directory name, or volume label syntax is incorrect: 'C:\\Users\\Khalaf\\Desktop\\test\\dataset-images\\x.jpg' -> 'C:\\Users\\Khalaf\\Desktop\\test\\dataset-images\\C:\\Users\\Khalaf\\Desktop\\test\\dataset-images-1.jpg'","AnswerCount":4,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1080,"Q_Id":67772480,"Users Score":0,"Answer":"I am trying to learn python and am making a program that will output a script.
I want to use os.path.join, but am pretty confused.\nos.path.join('c:', 'sourcedir')\nwhen I use the copytree command, Python will output it the desired way, for example:\nimport shutil\nsrc = os.path.join('c:', 'src')\ndst = os.path.join('c:', 'dst')\nshutil.copytree(src, dst)\nWindows has a concept of current directory for each drive. Because of that, \"c:sourcedir\" means \"sourcedir\" inside the current C: directory, and you'll need to specify an absolute directory.\nAny of these should work and give the same result, but I don't have a Windows VM fired up at the moment to double check:\n\"c:\/sourcedir\"\nos.path.join(\"\/\", \"c:\", \"sourcedir\")\nos.path.join(\"c:\/\", \"sourcedir\")","Q_Score":1,"Tags":"python,path","A_Id":67772699,"CreationDate":"2021-05-31T11:00:00.000","Title":"os.path.join() on windows?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a piece of software deployed to Kubernetes, and in that I have a scenario when I want to have one function called at a later point in time, but I am not sure my current pod will be the one executing it (it may be scaled down, for example).\nSo I need help with a mechanism that will enable me to schedule a function for later, on a pod of my software that may or may not be the one that scheduled it, and also a way to decide not to do the execution if some condition was met ahead of time.\nAlso - I need this to be enabled for thousands of such calls at any given point in time, this is a very fast execution software using Twisted python working on millions of tasks a day. 
But given the scaling up and down, I cannot just put it on the reactor for later.\nAlmost any use of a known module, external redis\/db is fine.\nSo - I need this community's help...\nThanks in advance!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":45,"Q_Id":67790863,"Users Score":0,"Answer":"You are roughly speaking describing any worker queue system, with Celery as the most common one in Python. With RabbitMQ as the broker it can easily scale to whatever traffic you throw at it. Also check out Dask but I think Dask is baaaaaad so I mention it only for completeness.","Q_Score":1,"Tags":"python,kubernetes,distributed","A_Id":67793836,"CreationDate":"2021-06-01T14:36:00.000","Title":"Delay a python function call on a different pod","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm trying to start a python script on my Raspberry Pi through an SSH connection from my main machine. But obviously, as soon as I turn off my main machine the SSH terminal will close and with that my python script will be cancelled.\nI'm looking for a method where I can use SSH to open a terminal that runs on the Pi internally and does not close as soon as I turn off my main machine. 
This should be possible without connecting a keyboard and screen to the Raspberry Pi, right?\nThank you so much for any tips","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":86,"Q_Id":67802986,"Users Score":0,"Answer":"You can use screen, tmux or nohup for this:\nscreen:\nscreen manager with VT100\/ANSI terminal emulation\ntmux:\nterminal multiplexer\nnohup:\nrun a command immune to hangups, with output to a non-tty\nscreen and tmux will start a virtual session that you can connect to later; for hotkey settings see the manual pages\nnohup will just prevent the process from being killed, for example:\nnohup yes &\nThis will write the output from the program \"yes\" to the nohup.out file and it will not be terminated by the disconnection of your ssh session.","Q_Score":1,"Tags":"python,ssh,terminal,raspberry-pi","A_Id":67803066,"CreationDate":"2021-06-02T10:04:00.000","Title":"Opening and running local terminal through SSH","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I want my RPi to run a python webserver and open Chromium to access it.\nIt should do it automatically on startup.\nBUT\nWhen I run the script via the command\n$ sudo python3 \/home\/sps-training\/python\/webserver.py\nand open localhost in chromium, it says that it can't access the HTML file in another folder (\/deploy)\nbut when I open the directory first\n$ cd \/home\/sps-training\/python\/\nand then open the script\n$ python3 webserver.py\nit suddenly works!\nSo there are 2 possible solutions\nThe first one is to make it work by using:\n$ sudo python3 \/home\/sps-training\/python\/webserver.py\nThe second one is to automatically access the directory and then start the script\nRight now I'm using \/etc\/profile to run it on startup (I just wrote it at the last line with & on the end of the line)\nThanks a bunch for any advice!\nbtw 
sorry for grammar","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":117,"Q_Id":67809791,"Users Score":0,"Answer":"Like Barmar said. It sounds like the issue you're running into is a working directory vs. current directory problem. When you run sudo python3 \/home\/sps-training\/python\/webserver.py you are launching webserver.py in the directory \/ and if it looks for \"deploy\/index.html\" it's going to look in \/deploy\/index.html. However, when you cd into \/home\/sps-training\/python\/ and then run python3 webserver.py the working directory is now \/home\/sps-training\/python\/ and it will look for deploy\/index.html in \/home\/sps-training\/python\/deploy\/index.html\nThe best solution is to edit the python script to use absolute file paths.\nBarring that, you need to cd to the correct working directory every time.","Q_Score":0,"Tags":"python,linux,raspberry-pi,raspbian","A_Id":67810189,"CreationDate":"2021-06-02T17:23:00.000","Title":"RPi - accessing a folder and then running a python script on startup","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am rather new to java and tomcat and wanted to see if I could get yalls help in an understanding of what's happening.\nI am noticing in our tomcats catalina.sh file, the following env vars are set:\nexport JAVA_HOME=\"\/home\/aim111prod\/jdk-11.0.7+10\"\nexport JAVA_OPTS=\"-Xms8192m -Xmx8192m -Drhino.opt.level=0 -Djava.awt.headless=true -Dcom.aw.aim.bootstrap.file=\/home\/aim111prod\/conf\/Catalina\/bootstrap.xml -Dfile.encoding=UTF8 -Djava.locale.providers=COMPAT -Duser.language=en -Duser.country=US\"\nTomcat is running, but when I run 'printenv' these 2 env vars do not appear in 'printenv' results.\nI am hoping to eventually write a simple start\/stop\/restart script using python3.\nI am assuming 
because 'printenv' is not showing the 2 set java options in catalina.sh that os.environ.get() wouldn't work either.\nWhat I am noticing is that when 'printenv' is used,\nJAVA_HOME=\/usr\/lib\/jvm\/jre\nI suppose I am needing some help in understanding the way the 2 environment variables are being used?\nHow would I reference the 2 env vars set in catalina.sh with python if 'printenv' isn't picking them up?\nPlease take it easy on me lol I am not as smart as some of you all, but I am trying!\nThank you for your help \/ time\/ suggestions.\n--\nbash-brain","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":292,"Q_Id":67810985,"Users Score":0,"Answer":"It sounds like you're expecting catalina.sh to set those environment variables, and have the changes take effect system-wide. But environment variables don't work like that; export changes the environment of the current process and any child processes.\nIt doesn't affect any parent processes, or unrelated processes on the system.\nOne solution is to write a wrapper shell script that sources catalina.sh (to set up the environment variables in the process that's running your wrapper script), then calls python. The python program would then be able to see the correct set of environment variables. Note: you would have to use a command like source catalina.sh or . catalina.sh to run the commands in catalina.sh in the current process. If you simply execute catalina.sh as a standalone command, it will run in a new process, set the variables there, and exit the process without propagating the variables back to the parent process (i.e. 
your wrapper script).","Q_Score":1,"Tags":"java,python-3.x,bash,tomcat,environment-variables","A_Id":67811235,"CreationDate":"2021-06-02T18:59:00.000","Title":"JAVA_HOME & JAVA_OPTS set in catalina.sh but 'printenv' in bash\/python doesn't show those environment variables","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I'm having different python programs doing long polling at different machines, and am thinking of a queuing based mechanism to manage the load and provide an async job functionality.\nThese programs are standalone, and aren't part of any framework.\nI'm primarily thinking about Celery due to its ability for multi-processing and sharing tasks across multiple celery workers. Is celery a good choice here, or am I better off simply using an event based system with RabbitMQ directly?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":132,"Q_Id":67816611,"Users Score":0,"Answer":"I would say yes - Celery is definitely a good choice! We do have tasks that run sometimes for over 20 hours, and Celery works just fine. 
Furthermore, it is extremely simple to set up and use (Celery + Redis is super simple).","Q_Score":1,"Tags":"python,asynchronous,celery","A_Id":67818794,"CreationDate":"2021-06-03T06:44:00.000","Title":"Using Celery for long running async jobs","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"When I try to run a line of code:\nfrom transformers import pipeline\nI am getting the PanicException as follows:\nPanicException: env_logger::init_from_env should not be called after logger initialized: SetLoggerError(()) in Python","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":79,"Q_Id":67821130,"Users Score":0,"Answer":"Looks like this is a Rust module that's packaged with Python. It uses env_logger, which needs to be set before the logger is created in Python. That's why this happened. It looks like a bug in the library itself.","Q_Score":1,"Tags":"python,exception,panic","A_Id":71737813,"CreationDate":"2021-06-03T12:03:00.000","Title":"PanicException: env_logger::init_from_env should not be called after logger initialized: SetLoggerError(())","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am currently using the basic version of a cluster on Confluent Cloud and I only have one topic with 9 partitions. I have a REST API that\u2019s set up using the AWS Lambda service, which publishes messages to Kafka.\nCurrently I am stress testing the pipeline with 5k-10k requests per second, and I found that latency is shooting up to 20-30 seconds to publish a record of size 1kb, whereas it is generally 300 ms for a single request.\nI added producer configurations like linger.ms - 500 ms and batch.size to 100kb.
I see some improvement (15-20 seconds per request) but I feel it\u2019s still too high.\nIs there anything that I am missing, or is it something with the basic cluster on Confluent Cloud? All of the configurations on the cluster were default.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":144,"Q_Id":67829974,"Users Score":1,"Answer":"Identified that the issue is with the API request, which is getting throttled. As mentioned by Chris Chen, the average time is shooting up due to the exponential back-off strategy of the AWS SDK. I requested AWS to increase the concurrent executions; I am sure that should solve the issue.","Q_Score":0,"Tags":"amazon-web-services,apache-kafka,confluent-platform,confluent-kafka-python,confluent-cloud","A_Id":67898337,"CreationDate":"2021-06-04T00:01:00.000","Title":"AWS lambda to Confluent Cloud latency issues","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm trying to figure out how to implement a workflow so that a sensor task waits for an external DAG to complete, but only waits for a certain number of days. It's a daily job, so I'd like a sensor job to wait for example 3 days, and on the fourth day send out an email, and then either keep waiting or do some other task.\nCould someone please help to shed some light on how to achieve this? Also, how do we communicate the \"days counter\" from one day to another? Many thanks for your help.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":715,"Q_Id":67837592,"Users Score":4,"Answer":"You can use an ExternalTaskSensor with the following configurations:\n\ntimeout = 3 * 24 * 60 * 60 - 3 days in seconds; after that time the sensor will fail\npoke_interval = 12 * 60 * 60 - 12h between sensor checks; you can adjust it to, let's say, a check every hour.
It will reduce the number of times you check the external DAG state\nmode = \"reschedule\" - in this way the sensor will not occupy a worker slot for 3 days; it will be scheduled, executed and, if the condition is not met, rescheduled to be executed in the next poke_interval seconds. It's good practice to use this mode for long-running tasks.\n\nAdditionally, you can build your waiting DAG as wait_task >> [success_task , fail_task] where\n\nwait_task is your sensor\nsuccess_task has trigger rule all_success and is followed when the sensor succeeds\nfail_task has trigger rule all_failed and handles the scenario when the sensor finally returns false or times out","Q_Score":2,"Tags":"python,airflow-scheduler,airflow","A_Id":67838242,"CreationDate":"2021-06-04T12:53:00.000","Title":"Airflow sensor task only wait for a certain period","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I git cloned some files on my Mac; now I want to delete them, but I can't find them. Where is the default storage location?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":1305,"Q_Id":67847692,"Users Score":1,"Answer":"The clone happens in the directory from where you issued the command, if not specified explicitly. If you have lost the terminal, then search for all the folders with a .git directory and you will locate it.","Q_Score":1,"Tags":"python,github","A_Id":67847703,"CreationDate":"2021-06-05T08:14:00.000","Title":"Where to find my git clone repos on my mac?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a Jenkins job that runs a Python script.
The Python file is located in a stage called \"run\".\nCan I add more Jenkins job stages from Python code - at runtime?\nI mean, the final result would be 3 stages in Jenkins following the Python code run:\n\nrun stage\nerrors stage\nsummary stage","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":27,"Q_Id":67856978,"Users Score":0,"Answer":"I don't think adding stages at runtime is possible. But what is the intention of adding runtime stages? Knowing that would help in providing better suggestions.\nOne option could be: you can continue executing scripts in the run stage itself and take the return status; based on conditions, you could execute the remaining stages, which would be static but executed conditionally based on the return status of the stage.","Q_Score":0,"Tags":"python,jenkins","A_Id":67858525,"CreationDate":"2021-06-06T08:04:00.000","Title":"Can I add Jenkins job stages to the pipeline at runtime?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"So, I have access to a server by SSH with some GPUs where I can run some Python code. I need to do that using a Docker container; however, if I try to do anything with Docker on the server I get permission denied, as I don't have root access (and I am not in the list of sudoers). What am I missing here?\nBtw, I am totally new to Docker (and quite new to Linux itself), so it might be that I am missing something fundamental.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":70,"Q_Id":67868374,"Users Score":0,"Answer":"I solved my problem.
Turns out I simply had to ask the server administrator to add me to a group, and then everything worked.","Q_Score":0,"Tags":"python,docker,ssh,server","A_Id":67871954,"CreationDate":"2021-06-07T08:35:00.000","Title":"Running python code in a docker image in a server where I dont have root access","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I\u2019m a bit confused as to what you mean.\nI built a Python package and it's working fine. After that, I converted the Python package from a .py to a .exe file, and the .exe file is also working fine.\nBut after I add the .exe file to the path C:\\ProgramData\\Microsoft\\Windows\\Start Menu\\Programs\\Startup, when the .exe file runs after the Windows system is restarted it raises a file-not-found error.\nIf I run the .exe file manually from the startup folder it works fine and takes the path C:\\ProgramData\\Microsoft\\Windows\\Start Menu\\Programs\\Startup, but after restarting the Windows system it takes the path C:\\Windows\\System32\\.\nIs there any other way to run the file automatically when the Windows system is restarted or switched on?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":45,"Q_Id":67869998,"Users Score":0,"Answer":"You can use the registry to run files on startup; HKEY_LOCAL_MACHINE\\Software\\Microsoft\\Windows\\CurrentVersion\\Run can be used. MAKE SURE YOU ARE CAREFUL when adding and removing values within the registry, however.\nI'm not sure why the startup folder is not working for you, though; I'm pretty sure buran is correct in his possible answer.","Q_Score":0,"Tags":"python,python-3.x,windows","A_Id":67870099,"CreationDate":"2021-06-07T10:29:00.000","Title":"Is there a way to make an .exe file automatically start when a Windows 10 PC is booted\/started, only in the CMD\/Terminal?","Data Science and Machine
Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am new to databricks and sql and want to add some data there.\nI am using python notebook in azure databricks. I have created a very big empty delta table. Columns here:\nId| A| B| C| D| E| F| G| H| I| J| K (A,B,C.... are column names)\nI will parse log files as they will appear them in blob and create dataframes. The dataframes could like this.\nDF1\nA| B| C| D| (A,B,C.... are column names)\nDF2\nA| B| D| E| (A,B,C.... are column names)\nDF3\nA| B| D| F| (A,B,C.... are column names)\nI want to insert all of these data frames in the delta table. In addition I will also need to add Id(log_file_id). Is there a way to insert data in this manner to the tables?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":121,"Q_Id":67871278,"Users Score":0,"Answer":"Create an empty dataframe having all the columns let say X. Then you can concatenate all the other dataframes into X. Then save X directly.","Q_Score":0,"Tags":"python,pandas,azure,databricks","A_Id":67879315,"CreationDate":"2021-06-07T12:02:00.000","Title":"databricks: Add a column and insert rest of the data in a table","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am trying to install the hdf5 library from Homebrew. 
Running brew install hdf5 in the terminal returns the following error message:\n\n==> Searching for similarly named formulae...\nError: No similarly named formulae found.\nError: No available formula or cask with the name \"hdf5\".\n==> Searching for a previously deleted formula (in the last month)...\nError: No previously deleted formula found.\n==> Searching taps on GitHub...\nError: No formulae found in taps.\n\nI am running this on a mac with Mojave version 10.14.6. What next steps should I try to download the hdf5 library?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":803,"Q_Id":67876124,"Users Score":1,"Answer":"It seems that at least part of my Homebrew download was in the wrong location. Running brew doctor, I got Warning: Homebrew\/homebrew-core was not tapped properly!. This was fixed by running rm -rf \"\/usr\/local\/Homebrew\/Library\/Taps\/homebrew\/homebrew-core\" and then brew tap homebrew\/core.","Q_Score":0,"Tags":"python,homebrew,hdf5","A_Id":67879103,"CreationDate":"2021-06-07T17:22:00.000","Title":"Unable to install hdf5 library from homebrew","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"Can dolphindb write and query data at the same time by invoking python api?\nBy invoking dolphindb's python api, Can it write and query data at the same time?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":24,"Q_Id":67885313,"Users Score":0,"Answer":"The python api of dolphindb supports multiple threads to write data and check data at the same time, but you must avoid writing data to the same partition at the same time, otherwise an error will be reported.","Q_Score":0,"Tags":"python,sql,dolphindb","A_Id":69732093,"CreationDate":"2021-06-08T09:49:00.000","Title":"Can dolphindb write and query data at the same 
time by invoking python api?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a requirement that I need to clean up GitLab repo, and need to remove all empty subdirectories present in a specific directory inside a specific project of a specific group.\nSince, there are more than 10000 such directories that I need to remove, I was planning to do it programmatically using Gitlab python API.\nHowever, I can't seem to find any way to list subdirectories or to remove them in GitLab Python API documentation. Is it possible to implement?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":139,"Q_Id":67888612,"Users Score":1,"Answer":"So, git doesn't actually store directories themselves as objects that it manages. It stores paths in tree objects, but once that part of the tree is empty they don't remove the sub-paths.\nThe simplest way to do the cleanup you want is simply remove all the files from the sub-directories and then reclone the repository. It will not recreate the sub-folders that are empty.\n[Newer versions of git may actually clean up sub-directories by the way, but older ones won't. My git 2.31 will, eg]","Q_Score":2,"Tags":"python,python-3.x,gitlab,gitlab-api","A_Id":67888736,"CreationDate":"2021-06-08T14:03:00.000","Title":"Deleting a subdirectory present in a directory inside a project Gitlab","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have two parameters that I need either both or neither of. I thought of two solutions: one optional parameter with nargs=2, or two optional parameters that are contingent on each other. 
I found answers for nargs=2, but the help output seems messy and opaque to users. I'd like to investigate making two optional parameters that are contingent on each other.\nIf I provide the environment, I also need to provide the service, and vice-versa. If I provide neither, that's fine.\nNormal optional parameters get added with this type of help output:\nusage: script.py [-h] [-e ENVIRONMENT] [-s SERVICE] [-u] [-d]\nI want this type of help output (and the underlying requirements):\nusage: script.py [-h] [-e ENVIRONMENT -s SERVICE] [-u] [-d]\nIs there a flag or easy way to do this, so it shows up clearly in the help? Writing an additional check to enforce this is trivial, but making my help clear to users seems out of my reach.\nIn the mean time, I've added help to the argparse like this: parser.add_argument('-s', '--service', metavar='SERVICE', help='Service to use, if -e also used')","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":65,"Q_Id":67896243,"Users Score":0,"Answer":"Though this option is not currently supported in argparse, you can use subparser as a workaround solution.\nEach dependent parameter will define a new subparser which will contain the other parameter as a required parameter.\nIt's a bit ugly and will create some code duplication, but it will solve your dependency problem and also will be reflected properly in the help and error.","Q_Score":1,"Tags":"python,argparse,optional-parameters","A_Id":67899287,"CreationDate":"2021-06-09T01:10:00.000","Title":"How do I make a pair of optional parameters contingent on each other in Python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"After creating a model with Azure designer and creating a Real-time inference pipeline I would like to use the trained artifacts in a local script. 
I'm trying to look for the model.py in the Azure Storage Explorer but cannot find it, or the python where the trained model is called using pytorch.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":197,"Q_Id":67913682,"Users Score":0,"Answer":"The model artifacts can be downloaded using az ml model download - that will get all of the files.","Q_Score":2,"Tags":"azure,azure-machine-learning-studio,azure-machine-learning-service,azureml-python-sdk","A_Id":67970119,"CreationDate":"2021-06-10T02:04:00.000","Title":"How to use trained artifacts from AzureML","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a windows server 2012, where I need to move some .hash file from one directory to another one.\nI build a simple python script that runs well when I launch it manually, the files are all moved.\nNow I need to schedule windows for run it every 5 minutes. 
Task scheduler schedule it, and every 5 minutes execute it, but simply the files are no more moved, even if Task Schedule says it runned successfully.\nI googled a lot of similar question, trying to fill the start in field, build a .cmd script and other things like that, none of these worked.\nHow can I fix it?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":76,"Q_Id":67967898,"Users Score":0,"Answer":"I found out later that there was something wrong with admin's permissions.\nWith the correct user now all works fine.","Q_Score":0,"Tags":"python,task,scheduled-tasks,windows-server,windows-task-scheduler","A_Id":68032271,"CreationDate":"2021-06-14T09:34:00.000","Title":"Task scheduler run my python script but it does nothing","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have created a docker container with my flask app. 
and it works.\nNow, I have seen that a particular app functionality has a bug, and I have modified the particular .py script to solve the problem.\nHow can I rebuild the Docker image taking into account only the changes, and push it with the changes to Docker Hub?\nI'm looking for a solution that avoids recreating the image from the beginning and reinstalling the whole set of dependencies from requirements.txt, but just updates a file.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":219,"Q_Id":67975453,"Users Score":0,"Answer":"docker build will automatically build just as many layers as are necessary, and docker push will only push the required deltas.\nThis means you don't have to do anything special.","Q_Score":0,"Tags":"python,docker,flask","A_Id":67975576,"CreationDate":"2021-06-14T18:30:00.000","Title":"How to change a Python file within a container","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a P4 switch that is connected to 2 hosts over Mininet.\nI have created the basic.p4 controller and, within the ingress processing, created a meter function, along with the topology file in Python.\nEverything within the SDN environment runs well; however, I am facing issues with the best practice for applying the metering function, which is currently:\n\n0.0001 > allow 100 packets\/sec; if each packet size is equal to 1000 bytes, then the obtained throughput can be 100 packets\/sec * 1000 (bytes) * 8 = 800 kbps.\n\nI am setting the meter for the traffic that is destined to H2 (00:00:00:00:00:02).\nAny advice on the approach?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":40,"Q_Id":67990382,"Users Score":0,"Answer":"Interesting. I will assume that your P4 SDN runs without any issues, so it's a matter of setting the
meter; make sure you have defined your tables and set the meters afterwards. Here is an example:\n\ntable_add phy_forward forward 1 => 2\n\n\ntable_add phy_forward forward 2 => 1\n\n\ntable_add m_table m_action 00:00:00:00:00:02 => 2\n\n(the line below, if you have it within your meter function)\n\ntable_set_default m_filter drop\n\n\ntable_add m_filter NoAction 0 =>\nFinally, add this:\nmeter_set_rates meter_functionName 2 0.0001:1 0.0005:1\n\nYou can have all of this in a text file and call it in your Python run file.","Q_Score":0,"Tags":"python,sdn,metering","A_Id":67990458,"CreationDate":"2021-06-15T16:47:00.000","Title":"P4 Language Based Metering Monitoring Error","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm attempting to install camelot, but for some reason Ghostscript won't install properly, so I keep getting the error RuntimeError: Please make sure that Ghostscript is installed whenever I try to use read_pdf.\nWhen I went to check whether Ghostscript was installed using ctypes.util.find_library, it could not find it. I have installed Ghostscript using Homebrew at the terminal (Warning: ghostscript 9.54.0 is already installed and up-to-date.
when I tried to do it a second time, making me pretty certain that it's installed).\nThe camelot documentation tells me that something is wrong, but doesn't specify what.\nIs anyone able to shed some light on where my errors are?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":367,"Q_Id":68028411,"Users Score":1,"Answer":"You need to install Ghostscript for Python and add its bin folder to the PATH environment variable.","Q_Score":0,"Tags":"python,macos,installation,ghostscript,python-camelot","A_Id":68376143,"CreationDate":"2021-06-18T01:57:00.000","Title":"Ghostscript not installing properly - find_library('gs') returns None","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I would like to be able to unit test my Airflow operators without having to run airflow db init before the tests.\nIs there a way to do this?\nThank you very much for your help!","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":729,"Q_Id":68049095,"Users Score":0,"Answer":"Not a direct answer, but a different approach to this.\nWe have the same issue and are still experimenting with this. For now we have decided to split the functionality of the custom Airflow operator into two parts: the operator itself, handling anything that has to do with the DB and orchestrating the process, and another module containing the logic of the operator - which has nothing to do with the database.\nFor example, assume that you have an operator that gets a CSV file from a bucket, performs transformations and then inserts the file into the database. Let's call it CsvToDBOperator. For this we would have two files\/classes:\n\nCsvToDBOperator\nCsvToDBUtils\n\nThe second one will keep all the methods\/functions that are responsible for the transformations.
The first one will be responsible for getting the file from the bucket, passing it to the second for the transformation, getting the result back, and finally inserting it into the DB.\nThis way you can write unit tests for the CsvToDBUtils class and only need to create unit tests for those methods.","Q_Score":2,"Tags":"python,unit-testing,airflow","A_Id":68053246,"CreationDate":"2021-06-19T17:08:00.000","Title":"Airflow unit testing","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm trying to install streamlit, which requires the pyarrow module (the Python lib for Apache Arrow).\nThere's no error message; the installation just hangs indefinitely.\nI did some research and found out that the pyarrow developers are probably not supporting Python 3.8 (not sure).\nHow can I use streamlit on macOS Big Sur 11.1?","AnswerCount":2,"Available Count":1,"Score":-0.0996679946,"is_accepted":false,"ViewCount":767,"Q_Id":68051068,"Users Score":-1,"Answer":"I think that what you perceive as \"installation just hangs\" is probably the installer compiling Arrow and all its dependencies.
Which takes a lot of time.\nIf no wheel is provided for the platform you are targeting, pip will download the source code and try to compile everything from scratch.\nNotice by the way that you probably haven't reached the point where it actually tries to install arrow (it might still be installing numpy or cython) because unless you have libarrow (The C++ libraries) already installed system-wide, then installing pyarrow from source should fail with a \"Could NOT find Arrow\" error.","Q_Score":0,"Tags":"python,macos,pyarrow,streamlit","A_Id":68069691,"CreationDate":"2021-06-19T22:00:00.000","Title":"Can't install pyarrow on macOS Big Sur","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a Python script, part of a test system that calls many third party tools\/processes on multiple [Windows] machines, and hence has been designed to clean up comprehensively\/carefully when aborted with CTRL-C; the clean-up can take many seconds, depending on what's going on. This clean-up process works fine from a [Windows] command prompt.\nI run that Python script from [a scripted pipeline] Jenkinsfile, using return_value = bat(\"python my_script.py params\", returnStatus: true), which also works fine.\nHowever I need to be able to perform the abort\/clean-up during a Jenkins [v2.263.4] run, i.e. when someone presses the little red X, and that bit I can't fathom. I understand that Jenkins sends SIGTERM when the abort button is pressed so I am trapping that in my_script.py:\nSAVED_SIGTERM_HANDLER = signal(SIGTERM, sigterm_handler)\n...and running the processes I would normally call from a KeyboardInterrupt in sigterm_handler() as well, but they aren't being called. 
I understand that the IO stream to the Jenkins console stops the moment the abort button is pressed; I can see that the clean-up functions aren't being called by looking at the behaviour of my script(s) from the \"other side\": it appears as though my_script.py is simply stopping dead, all connections from it drop the moment the abort button is pressed, there is no clean-up.\nCan anyone suggest a way of making the abort button in Jenkins give my bat()ed Python script time to clean-up? Or am I just doing something wrong? Or is there some other approach to this within Jenkins that I'm missing?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":191,"Q_Id":68051496,"Users Score":0,"Answer":"After much figuring out, and kudos to our tools people who found the critical \"cookie\" implementation detail in Jenkins, the workaround to take control of the abort process [on Windows] is as follows:\n\nhave Jenkins call a wrapper, let's call it (a), and open a socket or a named-pipe (socket would work on both Linux and Windows),\n(a) then launches (b), via \"start\" so that (b) runs as a separate process but, CRITICALLY, the environment that (a) passes to (b) MUST have JENKINS_SERVER_COOKIE=\"ignore\" added to it; Jenkins uses this flag to find the processes it has launched in order to kill them, so you must set this \"cookie\" to \"ignore\" to stop Jenkins killing (b),\n(b) connects back to (a) via the socket or pipe,\n(a) remains running for as long as (b) is connected to the socket or pipe but also lets itself be killed by CTRL-C\/SIGTERM,\n(b) then launches the thing you actually want to run,\nwhen (a) is terminated by a Jenkins abort (b) notices (because the socket or pipe will close) and performs a controlled shut-down of the thing you wanted to run before (b) exits,\nseparately, make a thing, let's call it (c), which checks whether the socket\/named-pipe is present: if it is then (b) hasn't terminated yet,\nin Jenkinsfile, wrap the calling of 
(a) in a try()\/catch()\/finally() and call (c) from the finally(), hence ensuring that the Jenkins pipeline only finishes when (b) has terminated (you might want to add a guard timer for safety).\n\nQuite a thing, and all for the lack of what would be a relatively simple API in Jenkins.","Q_Score":0,"Tags":"python,shell,jenkins-pipeline,jenkins-groovy,sigterm","A_Id":68213142,"CreationDate":"2021-06-19T23:29:00.000","Title":"Jenkinsfile and allowing an aborted `bat()` command time to clean up","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I have some Python scripts which I am running in Windows using Command Prompt by entering the following command: python -m scriptname.\nI have noticed that quite often, these scripts randomly pause and stop running. I can resume them by typing any key in Command Prompt, but I have no idea why this is happening or how to prevent it all together. Has anybody encountered a similar problem, and does anybody have any suggestions they may be able to offer?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":398,"Q_Id":68051819,"Users Score":1,"Answer":"Right click on the cmd window, and click on properties. Check if you have QuickEdit Mode on. 
If you do, uncheck that box.\nThis is a known issue with Command Prompt: if you click inside the window, it enters select mode, which pauses the program.","Q_Score":1,"Tags":"python,windows,terminal,command-line-interface","A_Id":68051962,"CreationDate":"2021-06-20T00:51:00.000","Title":"Why Do My Python Scripts Pause in Terminal?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"Can I use the same key pair generated on my windows environment in Linux Environment to decrypt?\nSuppose I generate a key pair using python-gnupg in my windows environment and encrypt a file. Can I use the private key of this generated key in my Linux environment to decrypt the message?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":71,"Q_Id":68058266,"Users Score":1,"Answer":"The real question here is: how can you securely transfer your private key from one system to another? If you have a secure means of file transfer, you could probably use it for the main file instead of just for the key, and an extra layer of encryption is probably unnecessary.\nIf you cannot securely transfer any files, then you shouldn't send any plaintext or secret keys between systems. Fortunately, this is a situation where public key encryption shows its strengths. You can create two separate key pairs, one for each system. Each system only needs to send the other the public key of its key pair; the private keys are never taken off the system they were created on. You don't care if an attacker is able to get a copy of those public keys; indeed, some public keys are published on the internet!\nWhen you have a file you want to securely send from one system to another, you use the public key of the recipient to do the encryption.
The sender may also want to sign the file with their own private key (so the integrity of the file can be verified at the other end). The encrypted (and signed) file can then be transported by less secure means from one system to the other, without too much fear of an attacker getting a copy, since it will be very hard for that attacker to crack the encryption. The recipient can decrypt the file using their secret key (and verify the signature using the public key of the sender).","Q_Score":0,"Tags":"python,encryption,public-key-encryption,pgp,python-gnupgp","A_Id":68058456,"CreationDate":"2021-06-20T17:10:00.000","Title":"PGP Encryption Key Usage","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"So I scripted an executable program with tkinter in Python, and it serves as a launcher for my apps. It has buttons that, when pressed, execute commands via subprocess.call. But when I use it to start an app, the whole script hangs and Windows 10 sees it as not responding. Everything only goes back to normal after I close the process that the script started. I used try\/except but it did not do anything. My expectation is that the script starts the app and goes on to continue waiting for the next command from the user.
How do I achieve that?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":88,"Q_Id":68067345,"Users Score":0,"Answer":"Ok problem solved I replaced my subprocess.call with subprocess.Popen and everything works now :D","Q_Score":0,"Tags":"python,freeze","A_Id":68068948,"CreationDate":"2021-06-21T11:43:00.000","Title":"How to stop python script from hanging when used \"subprocess\" module to start applications","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have some python airflow dag code that I have inherited and I am not sure where certain module operators are now.\nThe line I am getting an error on is:\nfrom AirflowHelpers.base_utils import get_airflow_version, get_airflow_home_directory, AIRFLOW_TYPE\nThat AirflowHelpers is an unresolved reference and I cannot find a module for this to install. I'd thought it was in the airflow.utils.helpers but I do not see anything related to get_airflow_version, get_airflow_home_directory or AIRFLOW_TYPE\nWere these moved to a different module within airflow and if so which submodule are these buried in? 
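The accepted fix above for the hanging tkinter launcher — replacing the blocking subprocess.call with subprocess.Popen — can be sketched as follows. The command used here is a placeholder assumption; a real launcher would pass the path of the app to start.

```python
import subprocess
import sys

def launch_app(cmd):
    """Start an application without blocking the tkinter mainloop.

    subprocess.call(cmd) would wait for the app to exit, freezing the GUI;
    subprocess.Popen(cmd) returns a process handle immediately instead.
    """
    return subprocess.Popen(cmd)

if __name__ == "__main__":
    # Placeholder command; a real launcher might use e.g. ["notepad.exe"].
    proc = launch_app([sys.executable, "-c", "print('app running')"])
    print("launcher is free to handle the next button press")
    proc.wait()  # a GUI would NOT wait; shown here only to reap the child
```

Wired into a button callback, `launch_app` returns at once, so the mainloop keeps processing events while the launched app runs.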
Even a pointer to searchable documentation would help.\nThanks","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":27,"Q_Id":68068472,"Users Score":0,"Answer":"This was an internally created module.","Q_Score":0,"Tags":"python-3.x,airflow","A_Id":68071678,"CreationDate":"2021-06-21T13:04:00.000","Title":"What module do I use for AirflowHelpers.base_utils what has get_airflow_version, get_airflow_home_directory changed to?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am looking to call an excel file located within a docker container into python. How would i go about doing this? I can't seem to find the correct file path.\nWhat I have done is copied the excel files from a local directory into a existing docker container. I have done this because airflow cannot find files in my local directory. I now need a means for python to find these files.\nAny help would be greatly appreciated.\nSteven.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":113,"Q_Id":68072903,"Users Score":0,"Answer":"Try using volumes in docker so that you should be able to access the file","Q_Score":1,"Tags":"python,excel,docker,airflow","A_Id":68072954,"CreationDate":"2021-06-21T18:19:00.000","Title":"Is there a way to call an excel file located within a docker container into python?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am using Docker on Ubuntu 20.04. A Docker container has already Python 3.4.2 installed on it.\nNow, I'm gonna upgrade it to Python 3.5 or later. I didn't find anything useful on the web to do that. 
Would be thankful if anyone lends me a hand.\nI need that to install numpy on the Docker container. I've already upgraded pip and setuptools for Python 3.4.2, but when I run:\npip3 install numpy\nit returns that Python 3.5 or later is required.\nAny help would be appreciated!","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":1470,"Q_Id":68072977,"Users Score":1,"Answer":"Change the base image in the Dockerfile to use the new Python version, then rebuild the image.","Q_Score":1,"Tags":"docker,numpy,pip,python-3.5,ubuntu-20.04","A_Id":68073005,"CreationDate":"2021-06-21T18:25:00.000","Title":"How to upgrade to Python 3.5 in a Docker container that has already installed Python 3.4? (I'm running Docker containers on Ubuntu 20.04)","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I want to set an Env variable in airflow which I can later use in my pipeline. I need it to identify the metadata.\nLike if $ENV == 'dev' use s3-dev-bucket\nif $ENV == 'prod' use s3-prod-bucket which will be identified by s3-$ENV-bucket.\nI have tried putting it in variables from the airflow UI, but the variable's value turns invalid after some time. It would be great if someone could help with a reliable method for this.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":134,"Q_Id":68111079,"Users Score":0,"Answer":"You have multiple options here:\n\nAs Simon D suggested, the best solution would be to make that part of the connection. Then per airflow environment, you would have the same connection id but different credentials and end-points.\nYou could use Airflow variables. However, you mentioned they turn invalid. I am unaware of any bugs that would make the variable value turn invalid.
It might be good to have a look at that.\nYou could use environment variables. Not sure how you are running airflow, but this is also an option.","Q_Score":0,"Tags":"python,amazon-s3,environment-variables,airflow","A_Id":68115268,"CreationDate":"2021-06-24T07:09:00.000","Title":"How to set an ENV variable in airflow like dev\/qa\/prod?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am creating a pipeline with a python script on Azure Web Service.\nMy script uses psycopg2 to connect to the postgres database,\nbut I am getting an error trying to import psycopg2 saying\nfrom psycopg2._psycopg import ( # noqa\nImportError: \/home\/site\/wwwroot\/antenv\/lib\/python3.7\/site-packages\/psycopg2\/_psycopg.cpython-37m-x86_64-linux-gnu.so: undefined symbol: PQencryptPasswordConn\nAny help would be appreciated","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":97,"Q_Id":68117531,"Users Score":0,"Answer":"PQencryptPasswordConn was introduced in PostgreSQL v10. So you must be trying to use psycopg2 with a libpq that is older than that.\nThe solution is to upgrade your PostgreSQL client.","Q_Score":0,"Tags":"python,postgresql,azure,psycopg2","A_Id":68117655,"CreationDate":"2021-06-24T14:18:00.000","Title":"Can't Install psycopg2 on Azure","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"First time creating a DAG. Each time I run within Pycharm, I get a FileExistsError [WinError 183], in addition to an AirflowConfigException.
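The environment-variable option from the airflow answer above can be sketched like this: read $ENV once and derive the bucket name as s3-$ENV-bucket, exactly as the question describes. The default value and the validation set are assumptions added here for illustration.

```python
import os

def bucket_for_env() -> str:
    """Resolve the S3 bucket name from the ENV environment variable."""
    env = os.environ.get("ENV", "dev")  # assumed default when ENV is unset
    if env not in ("dev", "qa", "prod"):
        raise ValueError(f"unexpected ENV value: {env!r}")
    return f"s3-{env}-bucket"

if __name__ == "__main__":
    os.environ["ENV"] = "prod"
    print(bucket_for_env())  # s3-prod-bucket
```

Set ENV per deployment (e.g. in the scheduler/worker environment or the docker-compose file) and every DAG run in that environment resolves the matching bucket.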
Both errors describe how their attempts to create airflow.cfg failed because this file already exists within ~\/airflow\/airflow.cfg\nHow do I let the code know to use the existing cfg file? I read elsewhere this may be related to setting an AIRFLOW_HOME environment variable?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":241,"Q_Id":68135138,"Users Score":0,"Answer":"Yes there are multiple options here.\n\nYou can start airflow in the directory you have your airflow.cfg in. However this is strongly discouraged.\nBefore you start airflow, you run export AIRFLOW_HOME=\/path\/to\/your\/airflow.cfg followed by the command.\nYou can set the AIRFLOW_HOME environment variable in your bash or zsh profile. It depends on which terminal you use. This is the recommended way.","Q_Score":0,"Tags":"python,error-handling,runtime-error,airflow","A_Id":68136371,"CreationDate":"2021-06-25T17:38:00.000","Title":"DAG runtime error with creating Airflow.cfg file","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am using a python script from the git repo. I want to build the script in jenkins and obtain the console output values as a parameter so that I can pass it to a downstream job. I could not find a proper answer through out google. Is there any possible way?\nIs there any way to obtain the console output values of a job in jenkins? I want to obtain the console output values as parameter and pass it to another job! Please help me out! 
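The AIRFLOW_HOME advice in the answer above can be illustrated with a small sketch: Airflow falls back to ~/airflow when the variable is unset, which is why a stray airflow.cfg appears there. The helper below mimics that resolution; the function name is illustrative, not Airflow's API.

```python
import os

def resolve_airflow_home() -> str:
    """Mimic how Airflow picks its home: AIRFLOW_HOME if set, else ~/airflow."""
    return os.environ.get("AIRFLOW_HOME", os.path.expanduser("~/airflow"))

if __name__ == "__main__":
    os.environ["AIRFLOW_HOME"] = "/path/to/my/airflow"
    print(resolve_airflow_home())  # /path/to/my/airflow
```

Exporting AIRFLOW_HOME in the shell profile (the recommended option in the answer) makes every `airflow` invocation resolve to the same directory, so no second airflow.cfg is created.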
Thank you :)","AnswerCount":1,"Available Count":1,"Score":-0.1973753202,"is_accepted":false,"ViewCount":501,"Q_Id":68149485,"Users Score":-1,"Answer":"You can use output = sh(script: '', returnStdout: true).trim() to get the output of any command, then assign it to a Groovy variable.\nThen you can call the downstream job in the current Jenkinsfile via build job: '', parameters: []","Q_Score":0,"Tags":"python,jenkins,jenkins-pipeline,jenkins-plugins,jenkins-groovy","A_Id":68150261,"CreationDate":"2021-06-27T08:53:00.000","Title":"obtaining console output as a parameter or a variable in jenkins","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am using ffmpeg to read and write raw audio to\/from my python script. Both the save and load commands I use produce the warning \"Guessed Channel Layout for Input Stream #0.0 : mono\". This is despite the fact that I am telling ffmpeg, using -ac 1 before both the input and output, that there is only one channel. I saw some other answers where I should set -guess_layout_max 0, but this seems like a hack since I don't want ffmpeg to guess; I am telling it exactly how many channels there are with -ac 1. It should not need to make any guess.\nMy save command is formatted as follows, with r being the sample rate and f being the file I want to save the raw audio to.
I am sending raw audio via stdin from python over a pipe.\nffmpeg_cmd = 'ffmpeg -hide_banner -loglevel warning -y -ar %d -ac 1 -f u16le -i pipe: -ac 1 %s' % (r, shlex.quote(f))\nLikewise my load command is the following, with ffmpeg reading from f and writing raw audio to stdout.\nffmpeg_cmd = 'ffmpeg -hide_banner -loglevel warning -i %s -ar %d -ac 1 -f u16le -c:a pcm_u16le -ac 1 pipe:' % (shlex.quote(f), r)","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1631,"Q_Id":68161835,"Users Score":4,"Answer":"-ac sets the number of channels, not their layout, of which there can be multiple for each channel count.\nUse the option -channel_layout.\nffmpeg -hide_banner -loglevel warning -y -ar %d -ac 1 -channel_layout mono -f u16le -i pipe: ...","Q_Score":3,"Tags":"python,ffmpeg","A_Id":68172509,"CreationDate":"2021-06-28T10:34:00.000","Title":"Why is ffmpeg warning \"Guessed Channel Layout for Input Stream #0.0 : mono\"?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have an ExecuteScript processor which calls a Python script to transform the data flow. This works well, but I need to call a jar file and get the results on one piece of data. I've found the following code, but this doesn't work as I can't import subprocess in Jython. Is there another library that can be called, or alternate code which will work? Trying to find a Jython-for-NiFi scripting guide appears to be a fruitless query.\n...command = \"java -jar \" result = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE).communicate()\nThanks","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":48,"Q_Id":68169706,"Users Score":0,"Answer":"This does work; my error was further below when trying to read the result.
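Building on the accepted ffmpeg answer, the save command with -channel_layout added can be assembled as an argument list, which avoids string formatting and quoting entirely. The function name, the helper structure, and the sample values are illustrative assumptions; the flags come from the question and the answer.

```python
import shlex

def ffmpeg_save_cmd(rate: int, filename: str) -> list:
    """Build the ffmpeg command that reads u16le mono audio from stdin.

    -ac 1 sets the channel *count*; -channel_layout mono sets the *layout*,
    which is what silences the "Guessed Channel Layout" warning.
    """
    return [
        "ffmpeg", "-hide_banner", "-loglevel", "warning", "-y",
        "-ar", str(rate), "-ac", "1", "-channel_layout", "mono",
        "-f", "u16le", "-i", "pipe:",
        "-ac", "1", filename,
    ]

if __name__ == "__main__":
    # shlex.join only renders the list for display; pass the list itself
    # to subprocess.Popen so no shell quoting is needed at all.
    print(shlex.join(ffmpeg_save_cmd(44100, "out.wav")))
```

Passing the list directly to subprocess.Popen makes shlex.quote unnecessary, since no shell ever parses the command.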
NiFi errors are not always easy to decipher and rarely fall on the line of code where the error actually exists.\nWhen reading the results, if you wish to convert to a string, use this code.\n...value = result[0].decode(\"utf-8\")\nOf course, substitute the appropriate index as needed.","Q_Score":0,"Tags":"python,jar,apache-nifi","A_Id":68215062,"CreationDate":"2021-06-28T20:32:00.000","Title":"Apache-NiFi call jar file from python script","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a flask server which consists of multiple methods. I am aiming to automate the execution of these methods by using Airflow.\nI am thinking of using the following steps:\n\nSetting up Airflow by defining multiple DAGS to call the relevant flask methods in a pipeline.\nDeploying Flask Server.\nDeploying Airflow (using docker-compose).\n\nMainly, I am thinking of keeping the Airflow and flask servers separate. Do you think this is a good plan? Any other suggestions would be highly appreciated.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":577,"Q_Id":68171820,"Users Score":1,"Answer":"It depends on a couple of things.\n\nCan you run the methods from inside Airflow? For security reasons it is often required to keep some functionality in a different environment\/cluster. A reason for that could be the database access that you want to give to the Airflow environment.\nAre these methods also invoked from other locations, or are they solely for Airflow?\nWhat other functionality does the flask server have that you can't live without?\nAre there python dependency conflicts?
Even in that case you could use Airflow's PythonVirtualenvOperator.\n\nIf there is no answer here that is completely blocking you from invoking these methods from inside Airflow, I would vote to do them completely inside Airflow. This will reduce coupling and also reduce the maintenance burden for you in the long term. Besides, Airflow will prevent you from needing to worry about a lot of things, like connectivity, exception codes and callbacks for when something went wrong.","Q_Score":1,"Tags":"python,docker,flask,airflow","A_Id":68179600,"CreationDate":"2021-06-29T02:14:00.000","Title":"Automating flask server with Apache. Airflow","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I am trying to schedule the execution of a Python file recurrently. However, the time when this process has to rerun changes every time.\nI had thought of creating a task in Windows Task Scheduler, and then creating a variable that updates when the task has to be triggered again","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":143,"Q_Id":68172009,"Users Score":0,"Answer":"The Task Scheduler is controlled by COM interfaces (CLSID_TaskScheduler). Note that you can create a one-time task that deletes itself when it expires, then just have your app create a new task with the new time.
If you don't want to play with COM, you can use os.system to fire the at application, which enters a one-time task.","Q_Score":1,"Tags":"python,scheduled-tasks","A_Id":68172061,"CreationDate":"2021-06-29T02:48:00.000","Title":"Is there any way to modify a task from Task Scheduler programmatically with Python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I would like to ask if it's possible to run a set of tests written in python (pytest)\non a running NodeJS application running in Docker?\nWhat I want to achieve:\n1.setup github action to run and build the 'test Docker container' on pull_request (done)\n2.run pytest as soon as the node container starts (pending)\n3.run another github workflow based on the test results of pytest (there is also a question how to achieve it,I saw somewhere that maybe cypress can help)\nPlease let me know if I should provide Dockerfile if it's necessary\nthanks in advance","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":744,"Q_Id":68176685,"Users Score":0,"Answer":"Solved by using ENTRYPOINT in Dockerfile, where I put my bash script which run npm start & pytest -c xxx","Q_Score":0,"Tags":"python,node.js,docker,github,pytest","A_Id":68224989,"CreationDate":"2021-06-29T10:11:00.000","Title":"How to run pytest tests after docker container starts","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"bash: \/usr\/lib\/command-not-found: \/usr\/bin\/python3: bad interpreter: Permission denied\nthis error occured and I am very new to linux so I don't understand the meaning of it.\nand system said that my python3-pip version is already newest.\nwhat can i 
do to solve this error?\nMy Ubuntu version is 18.04.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":159,"Q_Id":68188550,"Users Score":0,"Answer":"Enter:\nsudo python3 -m pip install ipykernel\nand enter your password when prompted.","Q_Score":0,"Tags":"python-3.x,jupyter-notebook,ubuntu-18.04","A_Id":68188589,"CreationDate":"2021-06-30T04:42:00.000","Title":"python3 -m pip install ipykernel returns an error permission denied","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have made a Flask API for a spaCy NER model and deployed it on Docker. In the code I have used python's logging to write the outputs to a file, info.log.\nThe question is how to access the log file in the container after running it.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":239,"Q_Id":68194502,"Users Score":0,"Answer":"Since I had to look for a long time, I picked up bits of answers from different places and am compiling them here for anyone who is stuck.\nAfter running the container, go to the terminal and run the following commands.\n(I used PyCharm, and the terminal started inside the directory where my code and Dockerfile were stored)\ndocker ps\n(this shows the containers currently running)\ndocker exec -it 'container-name' bash\n(now you have entered the container)\nls -lsa\n(this will show all the files in the container, including the log file)\ncat info.log\nNow you can see the log file contents on the terminal.","Q_Score":0,"Tags":"python,docker,logging","A_Id":68194646,"CreationDate":"2021-06-30T12:09:00.000","Title":"Python logging in Docker.
How to access log file formed inside the Docker container","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"Created Pythonshell with simple script, like just requests.get(). Elasticsearch cluster is in VPC.\nI tried using self-referencing groups, endpoints but nothing worked. Also custom connection with JDBC fails Could not find S3 endpoint or NAT gateway for subnetId (but it exists).\nI see that for Spark jobs ESConnector is available but can not find any working way to make it with Pythonshell jobs. Is there any way to allow such connection?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":141,"Q_Id":68210056,"Users Score":0,"Answer":"Solved, I was missing route to NAT gateway in private subnet.","Q_Score":0,"Tags":"python,amazon-web-services,elasticsearch,aws-glue,amazon-vpc","A_Id":68223839,"CreationDate":"2021-07-01T12:19:00.000","Title":"AWS Glue pythonshell job - how to connect to elasticsearch in VPC?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I deleted a conda environment by mistake, and I do not have a yaml\/text file containing the list of libraries. 
However, I have the following data:\n\n\n\n\nName\nVersion\nBuild\nChannel\n\n\n\n\n_tflow_select\n2.1.0\ngpu\nanaconda\n\n\nabsl-py\n0.13.0\npypi_0\npypi\n\n\naiohttp\n3.6.3\npy37he774522_0\nanaconda\n\n\nalabaster\n0.7.12\npypi_0\npypi\n\n\nanyio\n2.0.2\npypi_0\npypi\n\n\nappdirs\n1.4.4\npypi_0\npypi\n\n\nargon2-cffi\n20.1.0\npy37he774522_1\nanaconda\n\n\nase\n3.21.0\npypi_0\npypi\n\n\nasgiref\n3.3.1\npypi_0\npypi\n\n\nastor\n0.8.1\npy37_0\nanaconda\n\n\nastroid\n2.5.2\npypi_0\npypi\n\n\nasync-timeout\n3.0.1\npy37_0\nanaconda\n\n\nasync_generator\n1.10\npy37h28b3542_0\nanaconda\n\n\nattrs\n20.2.0\npy_0\nanaconda\n\n\nazure-core\n1.10.0\npypi_0\npypi\n\n\nazure-eventhub\n5.1.0\npypi_0\npypi\n\n\nazure-storage-blob\n12.6.0\npypi_0\npypi\n\n\n\n\nPlease note that above table is a just the first few libraries present in the environment.\nCan anyone please suggest me a way to create an environment using the following information?\nPlease do not ask me to install the libraries one by one as there are many libraries to be installed. Also, do not suggest me to create a .yml\/.txt file and then use conda\/pip to install all of them at a go, as putting everything in the correct format would take a lot of time.\nPlease let me know if those two are the only solutions to this problem.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":95,"Q_Id":68211331,"Users Score":0,"Answer":"Merv's comment to my question:\n\nThis looks like the output from the conda list. 
Is that what you have?\nI made a script for converting to YAML - see\nstackoverflow.com\/a\/65912328\/570918\n\nI referred to that link, and it helped me resolve my problem.\nThanks, Merv.\nThere's another workaround for this problem which is quite tedious, so I recommend that people facing similar issues follow Merv's answer.","Q_Score":0,"Tags":"python-3.x,pip,conda","A_Id":68219611,"CreationDate":"2021-07-01T13:42:00.000","Title":"conda create an environment without yaml or text file","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I started using Dronekit, Dronekit-SITL and MAVLink to simulate my python scripts. After some days of using them without problems, I started to receive the error: WARNING:dronekit:Link timeout, no heartbeat in last 5 seconds.\nI have tried to reinstall everything but nothing works.\nI installed the pip packages on Linux Ubuntu 18.
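The conda-list-to-YAML conversion mentioned in the answer above can be sketched as follows: split the pasted table into conda and pip dependencies based on the channel column. The parsing assumes the four-column `conda list` layout shown in the question; the function and environment names are illustrative.

```python
def conda_list_to_yaml(rows, env_name="recovered"):
    """rows: (name, version, build, channel) tuples, as in `conda list` output.

    Packages whose channel is 'pypi' go under a pip: subsection; the rest
    become regular conda dependencies pinned to their version.
    """
    conda_deps, pip_deps = [], []
    for name, version, build, channel in rows:
        if channel == "pypi":
            pip_deps.append(f"{name}=={version}")
        else:
            conda_deps.append(f"{name}={version}")
    lines = [f"name: {env_name}", "dependencies:"]
    lines += [f"  - {dep}" for dep in conda_deps]
    if pip_deps:
        lines += ["  - pip", "  - pip:"]
        lines += [f"    - {dep}" for dep in pip_deps]
    return "\n".join(lines)

if __name__ == "__main__":
    rows = [
        ("aiohttp", "3.6.3", "py37he774522_0", "anaconda"),
        ("absl-py", "0.13.0", "pypi_0", "pypi"),
    ]
    print(conda_list_to_yaml(rows))
```

The resulting text can be saved as environment.yml and fed to `conda env create -f environment.yml`.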
I had try the same packages on Ubutun 20 but I receive the same error.\nI had install this packages:\npymavlink>=2.3.3\nMAVProxy-1.8.39\ndronekit-2.9.2\ndronekit-sitl-3.3.0\nPython 2.7.17\nFollow my steps to receive the error:\n1 - dronekit-sitl copter --home=-25.56731,-42.61554,0,180\nos: linux, apm: copter, release: stable\nSITL already Downloaded and Extracted.\nReady to boot.\nExecute: \/home\/cesar\/.dronekit\/sitl\/copter-3.3\/apm --home=-23.56731,-46.61554,0,180 --model=quad -I 0\nStarted model quad at -23.56731,-46.61554,0,180 at speed 1.0\nbind port 5760 for 0\nStarting sketch 'ArduCopter'\nSerial port 0 on TCP port 5760\nStarting SITL input\nWaiting for connection ....\nbind port 5762 for 2\nSerial port 2 on TCP port 5762\nbind port 5763 for 3\nSerial port 3 on TCP port 5763\n2 - mavproxy.py --master tcp:127.0.0.1:5760 --out udp:127.0.0.1:14551 --out udp:10.0.2.15:14550\nConnect tcp:127.0.0.1:5760 source_system=255\nLog Directory:\nTelemetry log: mav.tlog\nMAV> Waiting for heartbeat from tcp:127.0.0.1:5760\nonline system 1\nSTABILIZE> Mode STABILIZE\nAP: Calibrating barometer\nAP: Initialising APM...\nAP: barometer calibration complete\nAP: GROUND START\nInit Gyro**\nINS\nG_off: 0.00, 0.00, 0.00\nA_off: 0.00, 0.00, 0.00\nA_scale: 1.00, 1.00, 1.00\n3 - python hello.py\nStart simulator (SITL)\nStarting copter simulator (SITL)\nSITL already Downloaded and Extracted.\nReady to boot.\nConnecting to vehicle on: tcp:127.0.0.1:5760\nWARNING:dronekit:Link timeout, no heartbeat in last 5 seconds\nafter 30s\nERROR:dronekit.mavlink:Exception in MAVLink input loop\nTraceback (most recent call last):\nFile \"\/usr\/local\/lib\/python2.7\/dist-packages\/dronekit\/mavlink.py\", line 211, in mavlink_thread_in\nfn(self)\nFile \"\/usr\/local\/lib\/python2.7\/dist-packages\/dronekit\/init.py\", line 1371, in listener\nself._heartbeat_error)\nAPIException: No heartbeat in 30 seconds, aborting.\nTraceback (most recent call last):\nFile \"hello.py\", line 11, in\nvehicle = 
connect(connection_string, wait_ready=True)\nFile \"\/usr\/local\/lib\/python2.7\/dist-packages\/dronekit\/init.py\", line 3166, in connect\nvehicle.initialize(rate=rate, heartbeat_timeout=heartbeat_timeout)\nFile \"\/usr\/local\/lib\/python2.7\/dist-packages\/dronekit\/init.py\", line 2275, in initialize\nraise APIException('Timeout in initializing connection.')\ndronekit.APIException: Timeout in initializing connection.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":825,"Q_Id":68213754,"Users Score":0,"Answer":"Hard to say without knowing the content of hello.py.\nTry connecting through udp:127.0.0.1:14551 in the hello.py script rather than tcp:127.0.0.1:5760.\nAlso, it looks like you're starting another SITL instance from your script, but again, hard to know without seeing the code.","Q_Score":0,"Tags":"linux,dronekit-python,dronekit","A_Id":68954285,"CreationDate":"2021-07-01T16:23:00.000","Title":"DRONEKIT - WARNING:dronekit:Link timeout, no heartbeat in last 5 seconds","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"there.I am moving forward to use google cloud services to store Django media files. But one thing that stops me is about the Google and Amazon free tier. I had read the google cloud docs but I am confuse about many things. For free tiers, New customers also get $300 in free credits to run, test, and deploy workloads. What I want to know is if they are gonna automatically charge me for using the cloud-storage after 3 months of trial is over because I am gonna put my bank account. This case is same on Aws bucket which allows to store mediafiles for 1 year after then what's gonna happen. 
Are they going to charge me automatically?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":21,"Q_Id":68214391,"Users Score":0,"Answer":"I have never used google cloud before. For the AWS free tier, you can use the storage with the limited features they allow to the free tier. Regarding charges, you can definitely set up a CloudWatch alert in AWS which will alert you if your usage is beyond the free tier limit or you are about to get charged. So you can set that up and be assured you won't get a surprise before you get alerted.\nHope this helps. Good luck with your free tier experience.","Q_Score":0,"Tags":"python,django,amazon-web-services","A_Id":68215549,"CreationDate":"2021-07-01T17:14:00.000","Title":"Queries related to Google cloud storage and Aws bucket for file storage","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I ssh'd into a linux server to run Airflow. I have started the scheduler (airflow scheduler -D) and initialized the database (airflow db init). However, even when trying to create the simplest of DAGs using python (I also tried using Airflow's predefined example py scripts), Airflow does not list the DAG when running the airflow dags list command.\nI'm sure the syntax of my py code is correct because the DAG showed up on a Windows instance, but my setup for airflow within Linux is somehow not correct? Also used python3 script.py to execute.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":71,"Q_Id":68215454,"Users Score":1,"Answer":"Basically the dags folder's permissions weren't allowing anything to be written into it. I just sudo'd every command or chmod'd the folder.
Also, to ensure Airflow runs correctly, I suggest using a YAML file with Docker Compose to streamline the Airflow setup.","Q_Score":1,"Tags":"python,airflow,directed-acyclic-graphs","A_Id":68230557,"CreationDate":"2021-07-01T18:44:00.000","Title":"Running python scripts does not create Airflow DAG?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a Python script that needs to be executed from within the folder where it is located. I am distributing it as a PyInstaller-compiled executable (wrapped into an AppImage for Linux). I will probably migrate to Platypus for OSX, so that I get a .app file.\nThe problem is that the executable isn't executed from within the correct directory when being double-clicked (because the AppImage \/ .app bundle adds some folders to the path).\nI want to add an os.chdir() command so that it goes to the correct path on all platforms, no matter if run as a .py file, bundled as .exe, bundled as .app, or bundled as an AppImage. What is the best way to do that?\nNote: The reason why I need it to be executed from the correct directory is that some log \/ data \/ config files are located there.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":88,"Q_Id":68218321,"Users Score":0,"Answer":"I ended up using full paths, like \/opt\/foobar on Linux or c:\\opt\\foobar on Windows.
A simple and stable solution; the only drawback is that the user might require admin rights.","Q_Score":0,"Tags":"python,pyinstaller,bundle,chdir,platform-independent","A_Id":69020141,"CreationDate":"2021-07-02T00:34:00.000","Title":"How to always execute PyInstaller \/ Platypus compiled Python executable in the directory where it is located","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I built a Docker container with Python and an Anaconda env; everything works outside the container.\nThen I moved that code into the container and ran into a problem starting it. My steps:\n\nThe Docker container cannot start via CMD [\"python\",\"server.py\"]; it doesn't work because some module is not found, even though I am sure that my conda env is fine (see step 3)\nI moved my code to server1.py and made server.py just while True: pass\nI got into the container with \"docker exec -it bash\" and saw the conda env activated\nI ran \"python server1.py\" and the code worked perfectly\n\nWhat is wrong with my code?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":79,"Q_Id":68234630,"Users Score":0,"Answer":"I'm not familiar enough with Docker, but it seems likely that it's not activating your conda env. You could try asking for the required Python interpreter explicitly.\nTo find it, go inside your container and activate your conda env.
which python should then give you the full path to the interpreter.\nUse that path in your Docker CMD instead of just python.","Q_Score":1,"Tags":"python,docker,anaconda","A_Id":68234802,"CreationDate":"2021-07-03T08:44:00.000","Title":"Why i can not run python script by CMD [\"python\",\"server.py\"] but can run manually","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I need to run some Python code on the AWS platform periodically (probably once a day). The program's job is to connect to S3, download some files from a bucket, do some calculations, and upload the results back to S3. This program runs for about 1 hour, so I cannot make use of a Lambda function as it has a maximum execution time of 900s (15 mins).\nI am considering EC2 for this task. I am planning to set up the Python code to run at startup and execute as soon as the EC2 instance is powered on; it also shuts down the instance once the task is complete. The periodic restart of this EC2 instance will be handled by a Lambda function.\nThough this is not the best approach, I want to know of any alternatives within the AWS platform (services other than EC2) that would be better suited to this job.\nSince","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":276,"Q_Id":68234790,"Users Score":2,"Answer":"If you are looking for solutions other than Lambda and EC2 (which, depending on the scenario, can fit) you could use ECS (Fargate).\nIt's a great choice for microservices or small tasks. You build a Docker image with your code (Python, Node, etc.), tag it, and then you push the image to AWS ECR.
Then you build a cluster for that and schedule the task with CloudWatch, or you can call a task directly using either the CLI or another AWS resource.\n\nYou don't have time limitations like Lambda\nYou also don't have to set up the instance, because your dependencies are managed by the Dockerfile\nAnd, if needed, you can take advantage of the EBS volume attached to ECS (20-30GB root) and increase from that, with the possibility of working with EFS for tasks as well.\n\nI could point to other solutions, but they are way too complex for the task that you are planning, and the goal is always to use the right service for the job.\nHopefully this helps!","Q_Score":1,"Tags":"python,amazon-web-services,amazon-ec2,aws-lambda,job-scheduling","A_Id":68308309,"CreationDate":"2021-07-03T09:08:00.000","Title":"Run python code on AWS service periodically","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I want to have a PHP program call a Python program and have it run in the background. When I call this from the shell with nohup python3 automation.py args >\/dev\/null 2>&1 &, everything runs fine. I run top and jobs and I can see that it is executing. The script finishes successfully.\nNow I would like this script to be called from a PHP program and then run in the background, so I am using this command, which is the same as above. So that the program does not hang, I redirect its output to null.\nexec('nohup python3 automation.py args >\/dev\/null 2>&1 &')\nEverything runs fine for a while when I check top, but then it dies after a few seconds and I am left scratching my head trying to figure out why. How do I troubleshoot this?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":50,"Q_Id":68248674,"Users Score":0,"Answer":"Problem solved.
The Linux user that PHP uses when commands are executed is www-data. That user didn't have write permissions for the file. To log in as www-data for testing, run su -s \/bin\/bash www-data. Any command you execute in that shell should be equivalent to calling it from PHP.","Q_Score":0,"Tags":"python,php","A_Id":68277024,"CreationDate":"2021-07-04T20:48:00.000","Title":"Why is my python script dying in the background when called from PHP?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am trying to see if I can protect my Python source code in the following scenario:\n\nI am managing a Linux server, where I am the only one with root privileges\nMy users have separate user accounts on this server\nI want to find a way to install a private pure-Python module on this server so that my users may import that module, without them being able to access the code\n\nIs there a way to do such a thing?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":97,"Q_Id":68263063,"Users Score":1,"Answer":"It sounds like you want a code obfuscation tool, which makes code unreadable without some very dedicated reverse engineering by renaming variables, functions, modules, etc., replacing their names with gibberish.\nIf a computer can execute code, then someone with admin access to that computer can also read the compiled code, no exceptions. If you don't want someone to steal your logic, you obfuscate.
If you don't want people to pirate your software (use it without paying), you can add some software protections (research how other subscription software is protected) and obfuscate those as well, so that bypassing the restrictions is hard and clearly in breach of IP laws.\nAlternatively (if suitable, which it usually is), you might want to run the code on your own server and publish an API for your customers to use. For their convenience, you might also develop an abstraction of the public API for clients to use. This won't let clients access the code at all; clients indirectly ask the server to do something, and the server does it if everything is in order (the client has a valid subscription, for example).","Q_Score":0,"Tags":"python,linux,copy-protection","A_Id":68280040,"CreationDate":"2021-07-05T23:49:00.000","Title":"Keeping Python Code Private On Controlled Linux Server","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm using a Python + web3 script that sends a transaction from addr1 to addr2.\nI want to replace this transaction from another script (home computer vs server), so it's difficult for me to get the transaction hash and use eth.replace_transaction() with the pending transaction's gasPrice * 1.125.\nHow can I replace the transaction? I know its nonce and its source and destination addresses but not the hash (because I'm missing the exact gasPrice used).\nCan I get the transaction from the blockchain by nonce and block, or is there some other way of doing this?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":527,"Q_Id":68264484,"Users Score":0,"Answer":"Transactions are processed by the nonce, not by hash.\nBroadcast a valid transaction with the same nonce, but higher gas, to replace a transaction.
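As a rough sketch of the gas-price bump (the 1.125 factor comes from the question; the exact minimum bump a node requires is client-dependent, and the function name here is illustrative):

```python
import math

def replacement_gas_price(pending_gas_price_wei: int, bump: float = 1.125) -> int:
    """Minimum gas price (in wei) for a same-nonce replacement transaction.

    Nodes commonly require a replacement to pay noticeably more than the
    pending transaction (often at least ~12.5% more). Round up so the
    result always clears the threshold.
    """
    return math.ceil(pending_gas_price_wei * bump)

# Example: a pending tx paying 40 gwei (40_000_000_000 wei).
print(replacement_gas_price(40_000_000_000))  # 45000000000
```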
In the case miners see two valid competing transactions with the same nonce, they usually pick the one with the higher gas price (not guaranteed though).","Q_Score":0,"Tags":"python-3.x,solidity,web3,web3py","A_Id":68266618,"CreationDate":"2021-07-06T04:18:00.000","Title":"How to replace transaction by something other than transaction_hash?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"Is there a way to send multiple RediSearch commands in a single pipeline using the redisearch-py client? Also, is it possible to mix classic Redis commands with RediSearch commands in the same pipeline?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":44,"Q_Id":68306445,"Users Score":0,"Answer":"Yes, it is possible, for both your questions.\nIIRC some commands will not work in transactions or Lua scripts, but pipelines weren't a problem.","Q_Score":0,"Tags":"python,redis,redisearch","A_Id":68777184,"CreationDate":"2021-07-08T17:46:00.000","Title":"Pipelining redisearch-py commands","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"This is mainly about the Open edX dashboard course page not showing the related program.\n\nFailed to get program UUIDs from the cache for site","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":30,"Q_Id":68316215,"Users Score":0,"Answer":"As the edxapp user, activate the env (source ..\/edxapp_env) and then run this command\n\npython manage.py lms --settings production\/devstack cache_programs","Q_Score":0,"Tags":"python,openedx,edx,juniper","A_Id":68316216,"CreationDate":"2021-07-09T11:45:00.000","Title":"Failed to get program UUIDs from the cache for site in Openedx
Applications","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I'm sending a simple 'post' request through the 'requests' module. It works fine when I execute it directly through the Linux terminal. However, when I set it up through the crontab, the log indicates an error.\n\nIf I execute the below through the terminal, it works fine.\n\n'\/usr\/bin\/python3.6 \/location\/sa\/tb\/uc\/md\/se\/sea.py'\n\nIf I set up the crontab as follows, I get an error.\n\n\/usr\/bin\/python3.6 \/location\/sa\/tb\/uc\/md\/se\/sea.py >> ~\/Test_log.log 2>&1\n\nBelow is the error message:\n\nFile \"\/usr\/local\/lib\/python3.6\/site-packages\/urllib3\/connectionpool.py\", line 600, in urlopen\nchunked=chunked) File \"\/usr\/local\/lib\/python3.6\/site-packages\/urllib3\/connectionpool.py\", line 343, in _make_request\nself._validate_conn(conn) File \"\/usr\/local\/lib\/python3.6\/site-packages\/urllib3\/connectionpool.py\", line 839, in _validate_conn\nconn.connect() File \"\/usr\/local\/lib\/python3.6\/site-packages\/urllib3\/connection.py\", line 344, in connect\nssl_context=context) File \"\/usr\/local\/lib\/python3.6\/site-packages\/urllib3\/util\/ssl_.py\", line 345, in ssl_wrap_socket\nreturn context.wrap_socket(sock, server_hostname=server_hostname) File \"\/usr\/lib64\/python3.6\/ssl.py\", line 365, in wrap_socket\n_context=self, _session=session) File \"\/usr\/lib64\/python3.6\/ssl.py\", line 776, in __init__\nself.do_handshake() File \"\/usr\/lib64\/python3.6\/ssl.py\", line 1036, in do_handshake\nself._sslobj.do_handshake() File \"\/usr\/lib64\/python3.6\/ssl.py\", line 648, in do_handshake\nself._sslobj.do_handshake() ConnectionResetError: [Errno 104] Connection reset by peer\n\nWhat did I try?\n\nTried adding an absolute path inside the
script.\n\nAdded a proxy to the headers, but no go.\n\nAny help would be appreciated.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":185,"Q_Id":68316997,"Users Score":0,"Answer":"Some servers don't start re-listening immediately (check_mk flag) when handling multiple requests from a single connection. One of the reasons is to avoid DoS attacks and preserve service availability for all users.\nSince your crontab makes your script call the same API multiple times using a single connection, I'd suggest adding a short delay before making a request, e.g. add time.sleep(0.01) just before calling the API.","Q_Score":0,"Tags":"python,python-3.x,cron","A_Id":68354804,"CreationDate":"2021-07-09T12:42:00.000","Title":"ConnectionResetError while running through cron","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm trying to install pip and have begun by installing Python. I've installed Python via an exe. However, when I do basic things such as checking the version, it says not found. I can run Python in the command prompt by typing py.\nHowever, when I type python --version it says not found. I've also tried python3 --version and using a capital P, to no avail. As such, running py get-pip.py is not working, stating not found. Please can someone assist?","AnswerCount":4,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":187,"Q_Id":68326651,"Users Score":0,"Answer":"Maybe you should reinstall it.
It is probably because of the setup wizard.","Q_Score":0,"Tags":"python,windows,installation,pip","A_Id":68326684,"CreationDate":"2021-07-10T09:55:00.000","Title":"python --version not working on Windows command prompt but installed","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm trying to install pip and have begun by installing Python. I've installed Python via an exe. However, when I do basic things such as checking the version, it says not found. I can run Python in the command prompt by typing py.\nHowever, when I type python --version it says not found. I've also tried python3 --version and using a capital P, to no avail. As such, running py get-pip.py is not working, stating not found. Please can someone assist?","AnswerCount":4,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":187,"Q_Id":68326651,"Users Score":0,"Answer":"Check if your environment variables are correctly set. Under \"Path\" there should be a line leading to your Python folder somewhere. For me, the path is \"C:\\Users\\myUsername\\AppData\\Local\\Programs\\Python\\Python39\\\". You may need to restart your PC after changing something in the environment variables for it to take effect.","Q_Score":0,"Tags":"python,windows,installation,pip","A_Id":68326693,"CreationDate":"2021-07-10T09:55:00.000","Title":"python --version not working on Windows command prompt but installed","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"The title is my question.
I can't think of any case where it is useful to store jobs in an external database.\nCan you guys provide some use cases?\nThank you.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":87,"Q_Id":68384355,"Users Score":0,"Answer":"If you schedule jobs dynamically (at run time instead of adding them all at application startup), you don't want to lose them when you restart the application.\nOne such example would be scheduling notification emails to be sent in response to user actions.","Q_Score":0,"Tags":"python,mongodb,redis,jobs,apscheduler","A_Id":68391755,"CreationDate":"2021-07-14T19:57:00.000","Title":"Why would i want to store apscheduler jobs to JobStore (Redis, Mongo, etc.)?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"The program is a standard Flask program, and it does some cleanup as part of its initialization. In the cleanup() procedure, using os.remove(\"abc.txt\"), I noticed that the file is removed, but its space is not reclaimed by the OS.\nI use both \"python website.py\" and \"gunicorn website:app\" to run the application and both have the same problem, in a Linux environment. On macOS, I can't reproduce it.\nAfter the file is removed with os.remove, it is no longer listed by the \"ls\" command, but when I run\nlsof | grep deleted\nI can still see this file listed as deleted but opened by the Python application.\nBecause this file has already been removed with os.remove, it is not listed by the ls command, and du will not count this file.\nBut if this file is big enough, the df command will show that the space of this file is still occupied, not reclaimed.
This is because the file is still \"open by the flask application\", as the lsof program claims.\nAs soon as I stop the flask application, lsof no longer lists this file, and the space is reclaimed.\nUsually, when the file is small, or when the application stops or restarts frequently, you won't notice the space being occupied. But keeping the space like this is not very reasonable; I would expect the website to run for years.\nWhen searching the internet for \"open but deleted files\", most suggestions are \"find the application and kill it\". Is there a way to keep the flask application running without restarting it? My application doesn't actually \"open\" this file; it simply calls os.remove on it.\nAny suggestion on how to delete a file and reclaim the space immediately?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":100,"Q_Id":68401311,"Users Score":0,"Answer":"The Flask application either needs the large file to continue running, or does not release unneeded resources.\nIf the app needs the large file, that's it. Otherwise, the app is buggy and needs to be corrected.\nIn both cases, the \"open\" status of the large file (which, at least on Linux, leads to the file still being present on the storage system) cannot be controlled by your script.","Q_Score":1,"Tags":"python,linux,lsof","A_Id":68406918,"CreationDate":"2021-07-15T22:12:00.000","Title":"A flask website, when it deletes a file (os.remove(\"abc.txt\")), the file is removed but the space is not reclaimed","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I had to download the Microsoft Store version of Python for BS Windows compatibility reasons - I'd been working off of the direct Python download before this. Recently, I was messing with PATH stuff and messed up in a weird way.
I deleted my C:\\ProgramFiles(x86) PATH entry by accident and launched a system restore to get it back.\nThings got worse from there.\nWhen my computer relaunched, my web browsers were deleted. Unrelated, but weird.\nI went back to working on PATH stuff, and solved my original WIN + R problems.\nHowever, as soon as I launched a program with pyperclip, the module was no longer present. I've installed pip and pyperclip using cmd, but I can't import the module in IDLE or run it in my programs.\nI think it has to do with the multiple versions of Python and some directory issue.\nIs there any way to install pyperclip in all versions of Python present? I don't want to deal with more issues by trying to delete just one instance of Python.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":29,"Q_Id":68405975,"Users Score":0,"Answer":"Some problems solve themselves. I just kept messing with file locations and running pip install pyperclip in cmd. Buffed out.","Q_Score":0,"Tags":"python,pyperclip","A_Id":68406221,"CreationDate":"2021-07-16T08:35:00.000","Title":"Can't install pyperclip, multiple instances of Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have an Airflow DAG comprising 2-3 steps:\n\nPythonOperator --> runs a query on AWS Athena and stores the generated file at a specific S3 path\nBashOperator --> increments the Airflow variable used for tracking\nBashOperator --> takes the output (response) of task 1 and runs some code on top of it\n\nWhat happens here is that the DAG completes within seconds even while the Athena query step is still running.\nI want to make sure that the further steps run only after the file is generated.
Basically, I want this to be synchronous.","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":198,"Q_Id":68409560,"Users Score":1,"Answer":"Athena's query API is asynchronous. You start a query, get an ID back, and then you need to poll until the query has completed using the GetQueryExecution API call.\nIf you only start the query in the first task then there is no guarantee that the query has completed when the next task runs. Only when GetQueryExecution has returned a status of SUCCEEDED (or FAILED\/CANCELLED) can you expect the output file to exist.\nAs @Elad points out, AWSAthenaOperator does this for you, handles error cases, and more.","Q_Score":1,"Tags":"python,asynchronous,airflow,amazon-athena","A_Id":68413346,"CreationDate":"2021-07-16T13:08:00.000","Title":"How to run Airflow tasks synchronously","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"Is there a way of getting the name of the Glue Job that produced a log stream given only the jrid?\nThe only parameters that I can work with are the jrid and the log group name.\nI know I can pull all the Glue jobs and then go through them individually until I find the Glue job that has that specific jrid, but I feel like there has to be a more efficient way of doing this.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":426,"Q_Id":68435533,"Users Score":0,"Answer":"There doesn't seem to be an obvious way to get from Job Run ID to Job Name.\nYou'll probably need to:\n\nCall list_jobs() to get all Job Names\nFor each Job, call get_job_run()\nLook for a matching Id in the response","Q_Score":0,"Tags":"python-3.x,aws-lambda,boto3,aws-glue,amazon-cloudwatchlogs","A_Id":68435781,"CreationDate":"2021-07-19T05:37:00.000","Title":"Finding Glue Job Name with only a job
run id","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am using JupyterLab on macOS Big Sur with the Chrome browser. Before today I could upload files to JupyterLab and simply \"copy path\" into e.g. pd.read_csv('') and it would be recognized. It will recognize the path to a file on my desktop but not to a file uploaded to JupyterLab. I would appreciate any help.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1687,"Q_Id":68447535,"Users Score":0,"Answer":"I use the exact same setup. JupyterLab opens in Chrome, but it uses files from your local directories. Therefore files aren't uploaded anywhere. If you copy a path directly from Jupyter it will give you the relative path. If you need the absolute path, you have to find the file in your local directory.","Q_Score":1,"Tags":"python,jupyter-lab","A_Id":68447652,"CreationDate":"2021-07-19T22:11:00.000","Title":"JupyterLab Error FileNotFoundError: [Errno 2] No such file or directory","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I need to simply monitor whether my Kafka cluster is up. Occasionally the machines running Kafka have been shut down. I want to send an email alert if the cluster is not available.\nI can create a producer and consumer to send and receive dummy messages periodically.
Is there a simpler way to do it?","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":89,"Q_Id":68447845,"Users Score":0,"Answer":"Actually, knowing whether a cluster is up is not so easy at all. There is discussion in the community about the best practice for deciding whether a Kafka cluster is up and active, but there is currently no good way to get this information. Since Kafka's architecture is a distributed system, you might have big clusters where, even while one or more brokers are down, the cluster still provides a highly available service without affecting the integrity of the data. You might also have problems with one topic while other topics work fine.\nOne suggestion I read, which might give you the most certainty, is to produce \"dummy\" msgs to your applicative topics and \"skip\" these msgs on consumption; that guarantees your application would work. I don't like this approach very much, as it requires you to \"send junk to your main topics\".\nOther approaches are, as you say, to produce\/consume to\/from a test\/healthcheck topic, but that might not fully guarantee that your application would work; this is a lot like \"select from dummy\" in database approaches... if that is good enough for them....\nAnother suggestion is to use AdminClient to read the metrics of the cluster; if metrics are provided, that usually means the cluster is healthy - also not a very strong guarantee...","Q_Score":1,"Tags":"apache-kafka,kafka-python","A_Id":68451322,"CreationDate":"2021-07-19T23:00:00.000","Title":"Monitor if Kafka is up?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"Is it possible for Airflow to recognize compiled Python code? I'm in the beginning stage of this research. If this is not possible, I'm thinking of having a non-compiled Python DAG execute compiled executables.
Thoughts or recommendations? Thank you in advance for sharing!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":36,"Q_Id":68448967,"Users Score":1,"Answer":"Compiled Python code (.pyc) files are not inspected by the Airflow scheduler.\nYou will need Python files (.py) which contain the DAG\/task instantiation for Airflow to recognize them as a pipeline.\nOnce you have a pipeline declared, you can leverage any operator to perform tasks.\nI believe you would be using BashOperators, since that will allow you to execute bash commands for running compiled executables.","Q_Score":0,"Tags":"python,airflow,directed-acyclic-graphs","A_Id":68460428,"CreationDate":"2021-07-20T02:39:00.000","Title":"Can airflow work with compiled python dag?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am trying to deploy a model which uses the Pillow (PIL) library on Ubuntu (Ubuntu 20.04.2 LTS). The model was built on Windows, and I have since discovered that PIL returns different arrays when reading the same image on Ubuntu and Windows.\nThis seems to be because of the jpeglib version (9 rather than 8). Does anyone know how to change this version so I can replicate the same results on a Linux machine?\nThanks!","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":43,"Q_Id":68474269,"Users Score":1,"Answer":"Solved by building Pillow from source with jpeglib-8 installed.
This means jpegs loaded with the same decoder match.","Q_Score":0,"Tags":"linux,windows,python-imaging-library","A_Id":68486971,"CreationDate":"2021-07-21T17:47:00.000","Title":"Change Image.core.jpeglib version in PIL on ubuntu","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I intend to run a .cmd file, which will have\npython my_program.py \"Short string of information to pass to Python\"\nin it. The Python program will determine the path that the .cmd file needs, so I want to pass it as a string back to the .cmd when it finishes running, and then I want the .cmd to assign it to a variable, like so:\nSET MY_VARIABLE \nIs there a way I can do this? This Python file may also need to do some configuration first. We are running Python 2.7.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":65,"Q_Id":68477196,"Users Score":0,"Answer":"You can save a text file as a batch file (.bat) containing\npython.exe \"path\\my_program.py\"\nAfter that you can use the Windows Task Scheduler to schedule your script.\nFor the configuration, you can save it in a file and read it from Python itself.","Q_Score":0,"Tags":"python,windows,batch-file,cmd","A_Id":68477262,"CreationDate":"2021-07-21T22:33:00.000","Title":"How can I run Python from a .cmd file, and have it pass a string back to the batch file, then SET a variable to this string?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am getting this error:\nModuleNotFoundError: No module named 'frontend'\nwhen I try to import fitz in my Docker application.\nTried using:\n\nfitz==0.0.1.dev2\nPyMuPDF==1.16.14\nBut still the same problem is
there.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":457,"Q_Id":68483853,"Users Score":1,"Answer":"I had a problem with these 2 packages.\nI used these 2 lines and it solved my problem:\n\npip install fitz\npip install PyMuPDF==1.19.0\n\nThe problem is that the latest version of PyMuPDF is not compatible with fitz.","Q_Score":1,"Tags":"python,pdf","A_Id":70973480,"CreationDate":"2021-07-22T11:15:00.000","Title":"Fitz (ModuleNotFoundError: No module named 'frontend') error in docker","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"We are using Apache Airflow through AWS.\nWe have a requirements.txt with all of our python packages and we ran into a problem.\n\nAt one point, we inserted the following packages, updated the environment, and it worked for a few weeks:\nkubernetes\napache-airflow[postgres,aws]==1.10.12\napache-airflow-backport-providers-postgres==2020.6.24\napache-airflow-backport-providers-amazon==2021.3.3\npandas==1.2.3\npython-dateutil==2.8.1\nsmart-open==5.1.0\nfsspec==2021.6.1\ns3fs==2021.6.1\nxlrd==2.0.1\nopenpyxl==3.0.7\nboto3\naiobotocore\nbotocore\n\nThe problem:\nWe must use apache-airflow-backport-providers-amazon: it depends on botocore being >=1.18.0,<1.19.0\nboto3 depends on botocore >=1.18.18,<1.19.0\naiobotocore depends on a botocore version that doesn't match the versions I listed above.\nAnd that is exactly our problem. 
Now the environment doesn't work because it can't install requirements.txt, as this dependency fails it.\nI believe that if I manage to remove aiobotocore, it would work.\nIt's good to note that I removed aiobotocore from requirements.txt and it still shows that aiobotocore depends on botocore and it fails the requirements.txt (when updating the environment).\nI am sort of new to Python so excuse me if something was written poorly. If anyone has any suggestions, it would be a life saver!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":141,"Q_Id":68490098,"Users Score":0,"Answer":"Didn't manage to find a way to uninstall\/remove a package.\nEnded up rolling back the version of \"requirements.txt\" when editing the airflow environment and it seems that it uninstalled the packages.","Q_Score":0,"Tags":"python,amazon-web-services,package,airflow","A_Id":68495603,"CreationDate":"2021-07-22T18:48:00.000","Title":"How to uninstall python package in Apache Airflow","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I would like to manually push updates\/files to devices located on different networks. I know I can run a periodic repository pull command but that is not efficient and based on fixed time intervals. I would like to set up an instant file push instead of a time-based one. For example, when I upload a new file all the devices detect this OR are notified by the master that an update is ready. How can I make a new file upload detected by my devices as close to instant as possible?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":44,"Q_Id":68491807,"Users Score":1,"Answer":"I'm not sure of the scale of the project you're trying to accomplish, but I've set up a sort of auto-update checker in Python before. 
The basic idea is that you'd have a location to host the file and some way to determine if the file has been changed (I used a version number). This can be a basic web server located on the Pi. Then, each client checks the web server whenever you think it's appropriate to do an update, for example at the start of the program. If there is a version number mismatch, you can automatically pull the file from the web server, or direct the user to download it manually.","Q_Score":0,"Tags":"python,shell,updates","A_Id":68491883,"CreationDate":"2021-07-22T21:42:00.000","Title":"Push a file to multiple devices at once","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I receive this error when trying to use IBM Watson. Maybe someone has the same problem - or even better - a solution?\n\n\n Traceback (most recent call last):\n File \"MY FILE.py\", line 27, in \n service.recognize(\n File \"C:\\Users\\...\\lib\\site-packages\\ibm_watson\\speech_to_text_v1.py\", line 566, in recognize\n response = self.send(request)\n File \"C:\\Users\\....\\lib\\site-packages\\ibm_cloud_sdk_core\\base_service.py\", line 308, in send\n raise ApiException(response.status_code, http_response=response)\n ibm_cloud_sdk_core.api_exception.ApiException: Error: \n Internal Server Error\n Internal Server Error - Write\n The server encountered an internal error or misconfiguration and was unable to\n complete your request.\n Reference #4.a54a0760.1627114991.5298a594\n \n , Code: 503\n\n\nHere is the code I am using\n\n\n # Accepts only .mp3 Format of Audio\n # File \n \n \n import json\n from os.path import join, dirname\n from ibm_watson import SpeechToTextV1\n from ibm_watson.websocket import RecognizeCallback, AudioSource\n from ibm_cloud_sdk_core.authenticators import IAMAuthenticator\n \n \n # Insert API Key in place of 
\n # YOUR UNIQUE API KEY - I replaced this text with my API KEY in actual code\n authenticator = IAMAuthenticator('YOUR UNIQUE API KEY') \n service = SpeechToTextV1(authenticator = authenticator)\n \n #Insert URL in place of API_URL - below is actual I am using\n service.set_service_url('https:\/\/api.us-east.speech-to-text.watson.cloud.ibm.com')\n \n # Insert local mp3 file path in\n # place of LOCAL FILE PATH - I am using my C drive and replaced actual file path ending \n with open(join(dirname('__file__'), r'C:\/Users\/LOCAL FILE PATH.mp3'), \n 'rb') as audio_file:\n \n dic = json.loads(\n json.dumps(\n service.recognize(\n audio=audio_file,\n content_type='audio\/flac', \n model='en-US_NarrowbandModel',\n continuous=True).get_result(), indent=2))\n \n # Stores the transcribed text\n str = \"\"\n \n while bool(dic.get('results')):\n str = dic.get('results').pop().get('alternatives').pop().get('transcript')+str[:]\n \n print(str)","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":180,"Q_Id":68508385,"Users Score":0,"Answer":"Solution provided by IBM engineer worked;\n\"Your code is using the URL of our US-East datacenter and it's recommended to include your instance ID in the URL.\"\nAdding URL ...\/instance\/MY ID\nsolved problem!","Q_Score":0,"Tags":"python,server,websphere,ibm-watson,internals","A_Id":68523536,"CreationDate":"2021-07-24T08:45:00.000","Title":"IBM Watson \"Internal Server Error - Write\" Error - ibm_cloud_sdk_core.api_exception.ApiException: Error","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I have a python application that runs as a service (it's a tornado web server). 
I want the application to be able to self-update as follows.\n\nuser uploads a package file that contains a new version of the application files\nthe web application launches a separate python (or script) application that does the following:\n\nTurn off the main application (systemctl stop myapplication)\nupdate the files from the uploaded package.\nRestart the application with the updates installed (new version)\n\n\n\nI've tried a nohup and double-fork approach to launch the \"updater\" program, but it appears as soon as I shut down the application from the spawned child program, the updater dies and the process fails. I'm not sure if this is because I'm not detaching the update process correctly (which I think I am) or if systemd's process monitoring of the service causes issues with this approach.\nAny suggestions? I'm considering using a separate application (tornado) running in parallel, to which I send an HTTP request to trigger it to control the parent application and do the install.\nThoughts?\n-Jeff\nAny suggestions?","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":297,"Q_Id":68539209,"Users Score":0,"Answer":"I solved the problem; not the most elegant, but functional.\nI created a one-shot systemd service that is dormant (non-enabled, not running). 
All that this service does is execute my python update program.\nProcess:\n\nMain service (application) starts the one-shot service\nOne-shot service starts up, stops the main service\nOne-shot service installs updated application files\nOne-shot service restarts the main service and exits\nMain service is now live and updated.\n\nIt's a bit more than that, but the implementation works.\n-Jeff","Q_Score":0,"Tags":"python,service,tornado,systemd","A_Id":68899561,"CreationDate":"2021-07-27T05:01:00.000","Title":"Python3 - How to self-update application running as a service","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a python application that runs as a service (it's a tornado web server). I want the application to be able to self-update as follows.\n\nuser uploads a package file that contains a new version of the application files\nthe web application launches a separate python (or script) application that does the following:\n\nTurn off the main application (systemctl stop myapplication)\nupdate the files from the uploaded package.\nRestart the application with the updates installed (new version)\n\n\n\nI've tried a nohup and double-fork approach to launch the \"updater\" program, but it appears as soon as I shut down the application from the spawned child program, the updater dies and the process fails. I'm not sure if this is because I'm not detaching the update process correctly (which I think I am) or if systemd's process monitoring of the service causes issues with this approach.\nAny suggestions? 
I'm considering using a separate application (tornado) running in parallel, to which I send an HTTP request to trigger it to control the parent application and do the install.\nThoughts?\n-Jeff\nAny suggestions?","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":297,"Q_Id":68539209,"Users Score":0,"Answer":"I would suggest using either a separate application or even a separate script that runs in cron - unless you really need this to be \"real-time\".\nDo consider what happens if the new application doesn't work and you need to start the old one...","Q_Score":0,"Tags":"python,service,tornado,systemd","A_Id":68883502,"CreationDate":"2021-07-27T05:01:00.000","Title":"Python3 - How to self-update application running as a service","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"There is a scenario: the python script needs to be placed under \/home\/my_user_name\/bin for other users to use.\nBut sometimes it appears that a user changes this file by mistake. 
Is there any best practice to solve this problem?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":65,"Q_Id":68545741,"Users Score":0,"Answer":"The whole point of \/home\/username is that the user can install\/change stuff in there.\nIf it should not be changeable, install it somewhere else, such as \/usr\/local\/bin, with the executable bit set on it (chmod +x).","Q_Score":0,"Tags":"python,python-3.x,linux","A_Id":68546040,"CreationDate":"2021-07-27T13:26:00.000","Title":"Protect python files from being changed","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I want to put all my logic and backend in python and the frontend in React\/Typescript, but I don't know if it's possible to have a single Dockerfile containing a python image and a node image.\nIs it the best approach to do it, or should I have multiple Dockerfiles and communicate between the backend and frontend with Kafka or any other message streaming framework? I'm kinda lost here!","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":85,"Q_Id":68552800,"Users Score":0,"Answer":"For the most part, you should structure applications the same way whether you package them in Docker or not. 
It might help to think about how you'd structure this application without Docker:\n\nYou'd probably develop the front end (with React, npm, a package.json file) and back end (with Python, pip, a setup.py file) separately.\nIf you wanted to run the React dev server, you'd separately run the Python application and the front end, in separate processes; or you could compile the React application to static files that the Python app served directly.\nThe React application would make HTTP requests to the back-end application.\n\nThese translate reasonably into Docker:\n\nYou'd build separate Docker images, one FROM python and one FROM node.\nEither you can use something like Docker Compose to run the two images in two containers, including the Webpack dev server, or you can use a multi-stage Dockerfile to build the React application into static files and then COPY its built content into the Python image.\nThe React application still makes HTTP requests to the back-end application.\n\nAlso remember that there's no particular requirement that all of your workflow must be in Docker all the time. If the back end depends on a database, you could run the database (only) in Docker while using a normal Python virtual environment to develop the backend. Or, you can run the back end in a container while using webpack-dev-server to develop the front end, in a normal host Node environment, not in a container. 
So long as you do a good job of declaring your library dependencies in the normal way (package.json; setup.py\/requirements.txt\/Pipfile) your application should run reasonably consistently in any environment, Docker or not.","Q_Score":0,"Tags":"python,node.js,docker","A_Id":68553373,"CreationDate":"2021-07-27T23:36:00.000","Title":"Is there a way to run a image of node and another of Python on the same dockerfile?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"From a python program on host A, I want to be able to call another python program (package)\/function on remote machine B (via TCP\/ip, ssh tunnels or similar). A and B have different OS (windows\/linux).\nI want to be able to pass (and get back as returned value) any python (and -even better- numpy) types. The call should look -as much as possible- like a normal python call.\nRunning via command line args, writing\/getting stdin\/out (Popen...) is not really what I want (even if I am aware that json can serialize anything... and this could eventually work). Also the function argument\/returned values will be large (MBytes or GBytes)\nI am aware this can also be done via client\/server and TCP IP sockets, but here again, this requires defining my own protocol and maintaining both sides... This is actually the current implementation which I am trying to replace!\nI would like something simpler, a bit like being able to load and run a python package on a remote host... RPC-like...\nIf a new function is created on the remote, the host should just be able to call it...\nI have googled a bit, and I found something called knockout, which was aimed to do just that (importing python modules from a remote host), but it looked discontinued. If not, I did not find the correct URL... 
I was also worried about python version mismatch between remote and host...\nAny other suggestion or hints?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":62,"Q_Id":68587967,"Users Score":0,"Answer":"Just wanted to update that post.\nPyRPC is what I am now using. It was a bit overkill given the simplicity of my issue, and error reporting is not as transparent as I'd like it to be, but it works across different versions of Python, which turned out to be quite nice in my case.","Q_Score":0,"Tags":"python,rpc","A_Id":69582121,"CreationDate":"2021-07-30T08:41:00.000","Title":"python modules on remote machine","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"So I added some credentials to my env variables through Windows System options to access them in a more secure way, however whenever I try to get their values in Python (using os.environ) the keys aren't found. I've tried to reboot my computer, but this didn't help.\nI would appreciate any help.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":122,"Q_Id":68611132,"Users Score":0,"Answer":"I could access them. Could you please check if the environment variables were created successfully? Restart the Python console. 
I don't think you need to restart the computer.","Q_Score":0,"Tags":"python,operating-system,environment-variables","A_Id":68612327,"CreationDate":"2021-08-01T13:59:00.000","Title":"Can't access new environment variables through Python os","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm not able to run a Python script from ExecuteStreamCommand.\nThese are the processor properties:\nCommand arguments: \/home\/directory\/test.py\nCommand path: \/bin\/python3\nWorking directory: \/home\/directory\nError message: Traceback (most recent call last): File \"\/home\/test.py\", line 1, in import nipyapi ModuleNotFoundError: No module named 'nipyapi'","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":194,"Q_Id":68616133,"Users Score":0,"Answer":"The way that I am able to get the ExecuteStreamCommand processor to run python scripts with imported modules is to install the imported modules on the actual machine itself. Therefore, this means running pip or pip3 install {insert module name} on the command prompt for each system that this processor is running on.","Q_Score":0,"Tags":"python,apache-nifi","A_Id":68621758,"CreationDate":"2021-08-02T03:45:00.000","Title":"How to add additional libraries in ExecuteStreamCommand in Nifi?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am trying to locate my dbt_project.yml file which is not in the root directory of my project. Previously, I was using an env var called DBT_PROJECT_DIR in order to define where the dbt_project.yml file is located and it was working fine. In a similar way, I am using DBT_PROFILE_DIR and it still works correctly. 
But I cannot make DBT_PROJECT_DIR work. Any help is appreciated.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1325,"Q_Id":68624812,"Users Score":0,"Answer":"I'm fairly certain this is not supported. Are you not able to change the directory to the directory of the dbt_project.yml before running dbt commands?\nAs a workaround, could you just add --project-dir $PROJECT_DIR to every command you plan to run?","Q_Score":1,"Tags":"python,ubuntu,environment-variables,dbt","A_Id":68628513,"CreationDate":"2021-08-02T16:11:00.000","Title":"How to set project-dir in dbt with environment variable?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm using Pycharm Pro for a Flask project on Windows 10.\nI'm using Git Bash inside Pycharm and globally as well. My global Git Bash works totally fine, no weird \"command not found\".\nThe Git Bash terminal internal to Pycharm initially works fine when I open a new instance of the terminal, however after I run my Flask script on localhost, it starts throwing \"command not found\" for everything.\nEven if I type python or python3, it says \"command not found\". Same for ls, cd, etc...nothing works anymore.\nDuring that time, my global Git Bash still works totally fine. In Pycharm, if I close the faulty terminal and open a new instance, everything works fine again.\nEDIT: It seems to happen after I source activate my venv environment.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":112,"Q_Id":68626515,"Users Score":1,"Answer":"Try installing python or python3 in the proper way in your virtual environment and resolve all the dependencies accordingly. 
For installation, make sure to activate your venv and run the shell as an administrator.","Q_Score":0,"Tags":"python,bash,pycharm","A_Id":68626790,"CreationDate":"2021-08-02T18:38:00.000","Title":"Git Bash commands stop working after a while (\"command not found\") in Pycharm","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"Within a GAE application, we want to process Pub Sub messages by batches, for example: every 10 seconds read all pending messages on a subscription and process them in a batch (because that will be more efficient). A synchronous subscriber.pull() would nicely allow us to read a batch of pending messages. The question is what would I do next? Sleep for 10 seconds then read again? But that would require a permanent background task, which is sort of difficult to set up in App Engine. An endpoint called by a cron every minute (or every hour), that runs [ read and process messages, sleep for 10 seconds ] cycles for an hour, then exits? Any better idea?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":176,"Q_Id":68642529,"Users Score":1,"Answer":"You can use Cloud Scheduler to call your App Engine endpoint every minute. This endpoint reads the pubsub subscription for a while (let's say 45 seconds), processes the messages and then returns a 200 HTTP code.\nIf you want to read in 10-second windows, you need to build the process on your side. 
Continue to call the endpoint every minute (it's the serverless pattern: the processing is performed only inside request handling), but have the endpoint listen to the subscription for 10s, process the messages, sleep 10s, and repeat that 5 times, and then return a 200 HTTP code.","Q_Score":0,"Tags":"python,google-app-engine,google-cloud-pubsub","A_Id":68648911,"CreationDate":"2021-08-03T20:39:00.000","Title":"GCP Pub Sub: process messages by batches","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I built a DAG that begins with the download of a file over the SFTPOperator; I save it and proceed with accessing and processing it with a PythonOperator.\nI had no issues with this approach at all, till I started to scale up my celery workers from 1 to 2.\nNow I run into the problem of a file that isn't available on both workers.\nHow do I solve it? Do I download the file over the SFTPHook and combine these Tasks?\nCan I constrain the spread onto different workers?\nkind regards,\nCreedsCode","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":41,"Q_Id":68677411,"Users Score":0,"Answer":"You should create a shared location between the workers. In case of multiple workers, you have no guarantee that two tasks will be run by the same worker.\nAn AWS EFS or something similar, where you will download that file and later read it, will be enough. 
I'm not good with infrastructure, so I can't help you with implementation details, but this is the solution I used for a similar problem.","Q_Score":0,"Tags":"python,airflow","A_Id":68677940,"CreationDate":"2021-08-06T06:57:00.000","Title":"File dependend DAG execution","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"Getting this error while trying to run an eth-brownie script on MacOS\n\nModuleNotFoundError: No module named 'Users.xyz'\n\nRun command:\n\nbrownie run scripts\/mainnet\/poolUpdaterMainNet.py --network bsc-main\n\nWould be great if someone can help.","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":714,"Q_Id":68721661,"Users Score":0,"Answer":"Move your project folder to C:\/\/ drive (or wherever pip installs packages).\nIt's not a problem in Linux because there's no disk partition in Linux.","Q_Score":3,"Tags":"python,macos,blockchain,solidity,cryptocurrency","A_Id":72063674,"CreationDate":"2021-08-10T05:53:00.000","Title":"eth-brownie - No module named","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm using a Raspberry Pi 3 and I can communicate with a port via the terminal using the following commands:\nto open the port\n\nstty -F \/dev\/ttyUSB2 -echo\n\n\ncat \/dev\/ttyUSB2&\n\nTo send messages I use:\n\necho 'AT' > \/dev\/ttyUSB2\n\nThe response of the port is 'OK'\nI'm writing Python code to save the answers from the terminal in a variable. I tried to use the pySerial library but it doesn't work. Is there any other method that I can use?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":98,"Q_Id":68730388,"Users Score":0,"Answer":"Just use 
\"serial\" library. I use it and it works well.","Q_Score":1,"Tags":"python,terminal,raspberry-pi3,raspbian","A_Id":68730480,"CreationDate":"2021-08-10T16:23:00.000","Title":"Python reading and writing to ttyUSB","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm really quite new to coding and brand new to Python, so apologies if this is a dumb question.\nI'm writing basic scripts in VS Code and when I run them the result in the terminal is just..... ugly. Instead of just printing the result of my code, it prints details about my version of Windows, a little copyright notice, the full file path to my code... then eventually gets round to executing my actual code.\nIs there any way for me to configure the terminal so that it just shows my code and not all the other bits? I've already seen an extension called Code Runner, but this prints to the \"Output\" tab and doesn't allow any user input","AnswerCount":5,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":270,"Q_Id":68734356,"Users Score":0,"Answer":"Go to Settings\nSearch for terminal\nFind Code-runner: Run in Terminal and turn that off.","Q_Score":1,"Tags":"python,visual-studio-code","A_Id":68752222,"CreationDate":"2021-08-10T23:12:00.000","Title":"Running Python in VS Code Terminal","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm trying out the Windows Terminal app and I want to start using it instead of CMD. The problem is, I cannot set the File Explorer to open Python files with it by default. 
I tried using the Open button leading to the file path, but I get the following error\n\nwtThe file cannot be accessed by the system.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":278,"Q_Id":68734462,"Users Score":0,"Answer":"With Windows 10, there is no way to change the default terminal from Windows Console Host to Windows Terminal.\nHowever, with Windows 11, it is now possible to change the default terminal application. This currently needs a Windows Terminal Preview version.\nYou can activate this from either in Windows 11 Settings (Choose the default terminal app to host the user interface for command-line applications) or in Windows Terminal Preview (Settings->Startup->Default Terminal Application)","Q_Score":1,"Tags":"python,windows,windows-terminal","A_Id":69609738,"CreationDate":"2021-08-10T23:32:00.000","Title":"How to open Python files in the new Windows Terminal?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm trying out the Windows Terminal app and I want to start using it instead of CMD. The problem is, I cannot set the File Explorer to open Python files with it by default. 
I tried using the Open button leading to the file path, but I get the following error\n\nwtThe file cannot be accessed by the system.","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":278,"Q_Id":68734462,"Users Score":0,"Answer":"You can have this behaviour on double click by changing\nHKEY_CLASSES_ROOT\\Python.File\\Shell\\open\\command\ndefault value from:\n\"C:\\Windows\\py.exe\" \"%L\" %*\nto\n\"C:\\Users\\USER\\AppData\\Local\\Microsoft\\WindowsApps\\wt.exe\" \"python\" \"%L\" %*\n(you have to replace \"USER\" with the username on your windows machine)","Q_Score":1,"Tags":"python,windows,windows-terminal","A_Id":71923603,"CreationDate":"2021-08-10T23:32:00.000","Title":"How to open Python files in the new Windows Terminal?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"When I am executing \"os.system('ls \/usr\/bin')\" from PyCharm with venv (and without) I miss some binaries that would show up when executing previous command from my \"normal\" terminal.\nSeems to be a problem with my PyCharm-Environment...?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":49,"Q_Id":68735165,"Users Score":0,"Answer":"PyCharm was installed as Flatpak, changing to snap package fixed the issue","Q_Score":0,"Tags":"python,path,pycharm","A_Id":68770424,"CreationDate":"2021-08-11T01:55:00.000","Title":"Missing usr\/bin binaries in venv?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"By default, MetaFlow retries failed steps multiple times before the pipeline errors out. 
However, this is undesired when I am CI testing my flows using pytest -- I just want the flows to fail fast. How do I temporarily disable retries (without hard-coding @retry(times=0) on all steps)?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":35,"Q_Id":68776921,"Users Score":1,"Answer":"You can disable it by setting the METAFLOW_DECOSPECS environment variable: METAFLOW_DECOSPECS=retry:times=0.\nThis temporarily decorates all steps with @retry(times=0) -- unless they are already decorated, in which case this won't override the hard-coded retry settings.\nSource: @Ville in the MetaFlow Slack.","Q_Score":0,"Tags":"python,continuous-integration,pytest,netflix-metaflow","A_Id":68776922,"CreationDate":"2021-08-13T18:29:00.000","Title":"How to disable automatic retries of failed Metaflow tasks?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am trying to write a scalable app using python and docker and mongodb\nThe app just runs a web search and returns a list of web pages\nI then store this list in a mongodb collection using a dictionary\nI also have created a simple docker file that exposes the python entry point\nI now would like to understand how to make a docker compose file so that I can scale the app, e.g. \u201cmy 10 million users\u201d want to make more searches and return more values (at the moment I am limiting the search to 10 results).\nPlease can you advise?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":14,"Q_Id":68782625,"Users Score":0,"Answer":"I don't think providing a Compose file will necessarily help here. If you paginate the Mongo response, that'll help page load times.\nOtherwise, to distribute database load, you simply need to add more Mongo and Python containers. 
To handle web server load balancing, add a reverse proxy. You can refer to the replicas option in the Compose spec if you want to do that easily; otherwise, simply copy-paste the same services.\nWorth mentioning that Elasticsearch is the more common tool for search results than Mongo","Q_Score":0,"Tags":"python,mongodb,docker","A_Id":68784208,"CreationDate":"2021-08-14T10:58:00.000","Title":"Scalable dockerized Python web search with storage","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I recently installed python3.9.6 on ubuntu\nand it all seemed to work\nbut when I enter python3 on the terminal it shows python3.8.5, not python3.9.6\nI want to type in python, python3, or python3.9 to open python3.9.6\nCan someone help me?","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":576,"Q_Id":68792446,"Users Score":1,"Answer":"Check the python version on the terminal - python --version\nGet root user privileges. On the terminal type - sudo su\nEnter the root user password.\nExecute this command to make python3 the default python.\nupdate-alternatives --install \/usr\/bin\/python python \/usr\/bin\/python3 1\nCheck the python version - python --version","Q_Score":0,"Tags":"python,python-3.x,linux,ubuntu,python-3.9","A_Id":68792483,"CreationDate":"2021-08-15T14:27:00.000","Title":"How to set python3 as default","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm using the direct runner of Apache Beam Python SDK to execute a simple pipeline similar to the word count example. Since I'm processing a large file, I want to display metrics during the execution. 
I know how to report the metrics, but I can't find any way to access the metrics during the run.\nI found the metrics() function in the PipelineResult, but it seems I only get a PipelineResult object from the Pipeline.run() function, which is a blocking call. In the Java SDK I found a MetricsSink, which can be configured on PipelineOptions, but I did not find an equivalent in the Python SDK.\nHow can I access live metrics during pipeline execution?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":240,"Q_Id":68803591,"Users Score":2,"Answer":"The direct runner is generally used for testing, development, and small jobs, and Pipeline.run() was made blocking for simplicity. On other runners Pipeline.run() is asynchronous and the result can be used to monitor the pipeline progress during execution.\nYou could try running a local version of an OSS runner like Flink to get this behavior.","Q_Score":3,"Tags":"python,apache-beam","A_Id":68807089,"CreationDate":"2021-08-16T13:24:00.000","Title":"Access Apache Beam metrics values during pipeline run in python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm new to programming and was looking to install python 3.9.6 using Homebrew. To do that, would I just have to type brew install python@3.9 into the terminal, or is there some other way? Thanks!","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":2211,"Q_Id":68806411,"Users Score":1,"Answer":"You can't ensure the specific version 3.9.6 with the brew package python@3.9. Homebrew is a package manager designed to get the latest packages. 
python@3.9 will keep updating to the latest patch version 3.9.x.\nIf you REALLY want to stick with a specific python version, choose conda (miniconda is preferred) or pyenv.","Q_Score":1,"Tags":"python-3.x,installation,terminal,homebrew,python-3.9","A_Id":68806638,"CreationDate":"2021-08-16T16:40:00.000","Title":"how to install python 3.9.6 using homebrew on Mac m1","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"Is there a way to spin up a Docker container and then activate a given conda environment within the container using a Python script? I don't have access to the Dockerfile of the image I'm using.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":122,"Q_Id":68840687,"Users Score":0,"Answer":"I think if you load the right Docker image there is no need to activate conda in your Dockerfile. Try this image below:\nFROM continuumio\/miniconda3\nRUN conda info\nThis worked for me.","Q_Score":0,"Tags":"python,docker,anaconda,containers,environment","A_Id":70638635,"CreationDate":"2021-08-19T00:28:00.000","Title":"Activate conda env within running Docker container?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm using Windows 10 64 bit. When I open .py files using the command prompt it opens the file in PyCharm. I would like to open the files in python. Is there an alternative command I can use to open the file with python? Or is there some setting I can alter to make python the default app when I open .py files using the command prompt? 
Cheers","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":585,"Q_Id":68878731,"Users Score":1,"Answer":"It's because you may have made PyCharm your default program to open .py files. You just need to change the default app for the file type. There's an option in Windows for doing this:\nJust search \"Choose default apps by file type\" in the Windows search bar and choose python (or any suitable program) as the default app for .py files.","Q_Score":0,"Tags":"python,windows,operating-system","A_Id":68878753,"CreationDate":"2021-08-22T05:25:00.000","Title":"How do I change default application for a file in Windows 10?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have installed Anaconda on Windows 10. I am able to run IPython in cmd with some Linux commands like 'ls' and 'pwd' but when I try to run the 'mv' command\n\"\n\nmv some_file.txt ~\/myproject\/\n\nit gives the error:\n\nFile \"\", line 1.\n\nCan someone please tell me what would be the correct format to run this shell code in\ncommand prompt IPython.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":38,"Q_Id":68883939,"Users Score":0,"Answer":"You get an error because mv is not a supported magic command on Windows. 
Going through the docs I couldn't find an equivalent magic command, but this might help you:\n!move some_file.txt C:\\Users\\Some_User\\myproject\\","Q_Score":0,"Tags":"shell,ipython","A_Id":68884129,"CreationDate":"2021-08-22T18:08:00.000","Title":"cannot run \"mv some_file.txt ~\/myproject\/ \" command in the cmd ipython","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a program which needs to have two processes doing mutually exclusive reads and writes to a mongodb document.\nOne part (let's call it \"process_a\") reads and updates (adds) to the document. The other (let's call it \"process_b\") reads and updates (deletes) all the values in the document.\nSo in an ideal scenario, process_a's reads and writes never overlap with process_b's reads and writes.\nOtherwise, if process_b reads right before process_a updates the document with new values, process_b would delete (set it to zero) the document without realizing the update from process_a ever happened, thus failing to record the transaction.\nIs there any way to lock the document\/collection while one process performs its read and update task?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":28,"Q_Id":68885391,"Users Score":0,"Answer":"I found no way of doing it with 100% certainty. 
So I ended up modifying code somewhere else to prevent it from happening.","Q_Score":0,"Tags":"python,mongodb","A_Id":68885826,"CreationDate":"2021-08-22T21:43:00.000","Title":"Perform mutually exclusive Read & Write operations on a mongoDB?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"Basically I need to access the execution date on a PythonVirtualenvOperator and as far as I know you can't pass the execution date as op_kwargs or provide_context=True. I read that by using pendulum one can achieve this but I haven't seen any useful docs about it. Does anyone know how to achieve this or has an example that can illustrate this?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":88,"Q_Id":68886365,"Users Score":0,"Answer":"The provide_context is deprecated and no longer needed in Airflow 2. PythonVirtualenvOperator derives from PythonOperator and you can pass all the variables to it the same way as you would for PythonOperator.\nYou have the very same context dictionary passed to it (well, it's a serializable version of it, but datetime fields are serializable) and you can use it in the same way you would in PythonOperator.","Q_Score":0,"Tags":"python,airflow","A_Id":68897545,"CreationDate":"2021-08-23T01:24:00.000","Title":"Airflow PythonVirtualenvOperators access to context datetime","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a databricks notebook with some code (Python) to upload a file from dbfs to a SharePoint location. 
The notebook runs correctly when executed stand-alone, and the file is uploaded, but when I try to schedule it using ADF or a Databricks job, the command for the SharePoint upload gets skipped.\nOther commands are executed okay. I'm using the O365 REST Python client for the SharePoint upload. I'm not sure if my choice of library is causing this to happen.\nHas anyone faced something similar?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":828,"Q_Id":68891544,"Users Score":1,"Answer":"From the info, it's not clear if this is in your code, but maybe it will help you or others with the mysterious \"Command skipped\" problem when running in job mode, as titled:\nThis will happen when a notebook runs another notebook using a %run call, e.g.,\n%run .\/subordinate_notebook\nand that subordinate notebook ends with\ndbutils.notebook.exit(\"Some message\")\nIn this situation, after that subordinate notebook exits, the remaining cells in the primary notebook are skipped. The message \"Command skipped\" will show.\nNote, %run behaves differently from dbutils.notebook.run()\nUsing\nresult_message = dbutils.notebook.run(\".\/subordinate_notebook\")\nwill avoid this problem.\nRemoving the dbutils.notebook.exit(\"Some message\") will also eliminate the issue.\nI hope that helps.","Q_Score":1,"Tags":"python,sharepoint,azure-databricks,office365-rest-client","A_Id":69500974,"CreationDate":"2021-08-23T11:06:00.000","Title":"Databricks notebook command skipped only when scheduled as a job","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I'm trying to build a python ETL pipeline in google cloud, and google cloud dataflow seemed a good option. 
When I explored the documentation and the developer guides, I saw that Apache Beam is always attached to Dataflow, as Dataflow is based on it.\nI may run into issues processing my dataframes in Apache Beam.\nMy questions are:\n\nIf I want to build my ETL script in native python with Dataflow, is that possible? Or is it necessary to use Apache Beam for my ETL?\nWas Dataflow built just for the purpose of running Apache Beam? Is there any serverless google cloud tool for building python ETL? (Google Cloud Functions has a 9-minute execution limit, which may cause some issues for my pipeline; I want to avoid any execution limit.)\n\nMy pipeline aims to read data from BigQuery, process it, and re-save it in a BigQuery table. I may use some external APIs inside my script.","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":391,"Q_Id":68893891,"Users Score":1,"Answer":"Regarding your first question, Dataflow needs to use Apache Beam. In fact, before Apache Beam there was something called the Dataflow SDK, which was Google proprietary and was then open sourced as Apache Beam.\nThe Python Beam SDK is rather easy once you put a bit of effort into it, and the main process operations you'd need are very close to the native Python language.\nIf your end goal is to read, process and write to BQ, I'd say Beam + Dataflow is a good match.","Q_Score":1,"Tags":"python,google-cloud-dataflow,apache-beam,serverless","A_Id":68895029,"CreationDate":"2021-08-23T13:51:00.000","Title":"Can I use google DataFlow with native python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I pulled a python image from Docker Hub by doing docker pull python:3.9-alpine.\nThen I tried to launch a container from this image with docker run -p 8888:8888 --name test_container d4d6be1b90ec.\nThe container is 
never up; with docker ps I didn't find it.\nDo you know why, please?\nThank you,","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":116,"Q_Id":68910493,"Users Score":1,"Answer":"Your container is not launched because there is no server (nginx, apache, etc.) to keep it running; there is only python and the necessary dependencies.\nIn order to run that image you can try the following command:\n\ndocker run --name test_python -it [id_image]\n\nAnd if you open another terminal and use docker ps you will see that the container is up.","Q_Score":1,"Tags":"python,docker","A_Id":68911177,"CreationDate":"2021-08-24T15:56:00.000","Title":"how to run docker container from python image?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"For more clarifying information, I am using Ubuntu 20.04. I am receiving the following error when trying to read a text file from a different directory: FileNotFoundError: [Errno 2] No such file or directory\nSay that the python code \"code.py\" that I am trying to run resides in Folder1. The text file \"test.txt\" that I am trying to read off of is in Folder2, where the relative pathing from \"code.py\" is \"..\/Folder2\/test.txt\". However, when I am using: from pathlib import Path and\nPATH = Path(\"..\\Folder2\\test.txt\"), I receive the error. Is there a better way to go about this? 
Thank you in advance.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":417,"Q_Id":68933390,"Users Score":0,"Answer":"You should use os.path.join(), as it was built to work cross-platform (Linux\/Mac\/Windows).","Q_Score":0,"Tags":"python,ubuntu,path","A_Id":68933781,"CreationDate":"2021-08-26T06:16:00.000","Title":"Receiving FileNotFoundError: [Errno 2] No such file or directory","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am trying to run a batch job in GCP dataflow. The job itself is very memory intensive at times.\nAt the moment the job keeps crashing, as I believe each worker is trying to run multiple elements of the pcollection at the same time.\nIs there a way to prevent each worker from running more than one element at a time?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":96,"Q_Id":68940104,"Users Score":0,"Answer":"The principle of Beam is to write a processing description and to let the runtime environment (here Dataflow) run it and distribute it automatically. You can't control what it is doing under the hood.\nHowever, you can try different things:\n\nCreate a window and trigger it every 1 element in the pane. I don't know if it will help distribute the processing better in parallel, but you can give it a try.\nThe other solution is to outsource the processing (if possible). Create a Cloud Function, or a Cloud Run service (you can have up to 16GB of memory and 4 CPUs per Cloud Run instance) and set (for Cloud Run) the concurrency to 1 (to process only 1 request per instance and therefore have 16GB dedicated to only one processing task -> this behaviour (concurrency = 1) is the default with Cloud Functions). In your Dataflow job, perform an API call to this external service. 
However, you can have up to 1000 instances in parallel. If your workload requires more, you can get HTTP 429 error codes because of a lack of resources.\nThe last solution is to wait for the new serverless runtime of Dataflow, which scales CPU and memory automatically without the \"worker\" object. It will be totally abstracted, and the promise is to no longer have out-of-memory crashes! However, I don't know when it is planned.","Q_Score":0,"Tags":"python,google-cloud-platform,dataflow","A_Id":68944564,"CreationDate":"2021-08-26T14:07:00.000","Title":"GCP Dataflow Batch jobs - Preventing workers from running more than one element at a time in a batch job","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am using locust to run a load test. Specifically, I am trying to use docker-compose and following the documentation at https:\/\/docs.locust.io\/en\/stable\/running-locust-docker.html\nI want to retrieve test stats in CSV format per the directions in https:\/\/docs.locust.io\/en\/stable\/retrieving-stats.html\nNow, when running this setup headless, how can I get aggregated results in CSV format from all workers? The non-headless version allows me to download the aggregated results as a CSV, but I am not sure if the headless version would work here.\nThanks!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":294,"Q_Id":68943425,"Users Score":1,"Answer":"You should only have to worry about running --headless --csv=example (as noted from the docs page you linked to) and such on the master. The workers don't need those as headless only applies to the master and they don't aggregate their own results. The CSVs generated by the master should contain all the results from all the workers. 
If you've tried this and you're not seeing all the data you're wanting, you may want to try adding --csv-full-history.\nFrom the docs page:\n\nThe files will be named example_stats.csv, example_failures.csv and example_history.csv (when using --csv=example). The first two files will contain the stats and failures for the whole test run, with a row for every stats entry (URL endpoint) and an aggregated row. The example_history.csv will get new rows with the current (10 seconds sliding window) stats appended during the whole test run. By default only the Aggregate row is appended regularly to the history stats, but if Locust is started with the --csv-full-history flag, a row for each stats entry (and the Aggregate) is appended every time the stats are written (once every 2 seconds by default).","Q_Score":0,"Tags":"python,docker,locust","A_Id":68943924,"CreationDate":"2021-08-26T18:04:00.000","Title":"locust how to use docker-compose and get aggregated results","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I can ssh into a remote machine.\nI then try to connect to a jupyter notebook job that I started on one of the nodes of the remote machine:\nssh -L 8069:localhost:8069 me@remote.machine ssh -L 8069:localhost:8069 me@node14\nThis has always worked fine in the past.\nWhen I execute this lately, nothing happens until I eventually get a time out message. If I cancel it and then try to simply ssh into the remote machine again, it again does nothing until I get the error message:\nssh: connect to host remote.machine port 22: Connection timed out\nI am trying to figure out if this is a problem at my end or at the remote machine. 
If it's the latter, I can't understand why I am able to ssh to the remote machine fine until I try the\nssh -L 8069:localhost:8069 me@remote.machine ssh -L 8069:localhost:8069 me@node14\nconnection.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":105,"Q_Id":68958148,"Users Score":0,"Answer":"You are trying to do a double ssh connection: one to remote.machine and then another one to node14.\nThe problem seems to be the ssh process on the node14 machine. So, you can connect to the first machine but not to the second one. Ask your administrator to enable the sshd process on node14.\nYou can test this case by logging into remote.machine via:\nssh -L 8069:localhost:8069 me@remote.machine.\nOnce you get shell access you can try the connection to node14 via:\nssh -L 8069:localhost:8069 me@node14.\nAccording to the description, this last try should fail with a timeout.","Q_Score":0,"Tags":"python,ssh,jupyter-notebook,remote-server","A_Id":68959713,"CreationDate":"2021-08-27T18:52:00.000","Title":"Problem connecting to a Jupyter Notebook on a remote machine","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have thousands of files stored in MongoDB which I need to fetch and process.\nProcessing consists of a few steps which should be done sequentially. The whole process takes around ~2 mins per file from start to end.\nMy question is how to do that as fast as possible while being scalable in the future? Should I do it in pure python or should I maybe use Airflow + Celery (or even Celery by itself)? 
Are there any other ways\/suggestions I could give a try?\nAny suggestion is appreciated.\nThanks in advance!","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":75,"Q_Id":68976526,"Users Score":2,"Answer":"Celery alone is precisely made to do what you need - no need to reinvent the wheel.","Q_Score":0,"Tags":"python,multiprocessing,celery,airflow,etl","A_Id":68982629,"CreationDate":"2021-08-29T20:08:00.000","Title":"Processing thousands of files in parallel","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"On running the command which python3, the location says \/opt\/homebrew\/bin\/python3 on my Mac. Is it okay for python to be in a directory other than \/usr\/local\/?","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":31,"Q_Id":68978540,"Users Score":1,"Answer":"Yes, it will work. If you change the installation directory, macOS will recognize it and the python3 command will work.","Q_Score":0,"Tags":"python,directory","A_Id":68978579,"CreationDate":"2021-08-30T03:05:00.000","Title":"Python installation directory","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am trying to create a vm on gcp using an ansible playbook on my ansible master machine\nmy ansible master is on ubuntu-desktop(WSL)\nI have installed requests and google-auth\nbut while running the playbook, I am getting the error\n\"FAILED! => {\"ansible_facts\": {\"discovered_interpreter_python\": \"\/usr\/bin\/python3\"}, \"changed\": false, \"msg\": \"Please install the requests library\"}\"\nand while running \"pip3 install requests\". 
It says it's already present:\n\"Defaulting to user installation because normal site-packages is not writeable\nRequirement already satisfied: requests in \/usr\/lib\/python3\/dist-packages (2.22.0)\"","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":140,"Q_Id":68979752,"Users Score":0,"Answer":"I have solved the problem. I was getting \"please install the requests library\" because I had not installed requests on the host server;\nthe requests module needs to be installed on the slave machine as well.","Q_Score":0,"Tags":"ubuntu,ansible,python-requests","A_Id":69055151,"CreationDate":"2021-08-30T06:28:00.000","Title":"While running Ansible playbook getting error install request library","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm looking to set up python on a new machine.\nI've found many instructions on this; however, I'm concerned with keeping the main installation clean so that each future environment can be modified specifically while I become familiar with the ins and outs of the program and packages.\nI've installed python and git on my old machine and, having not really known anything, I did all the installs via the admin account and made all settings global.\nI later discovered this was likely not the best way to do it.\nI wonder if anyone here might be able to point this crayon eater in the right direction?\nWould I be best off making a user account on the computer specifically for my developing projects and installing python, git, etc. locally on this profile? Or are there parts of the install which one would want to have installed from the admin account?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":28,"Q_Id":68991330,"Users Score":0,"Answer":"It is OK to have git installed globally. 
Just create a new repository for each project, using git init.\nFor maintaining python dependencies per project, consider using virtualenv or pyenv. They create virtual environments which can be activated and deactivated and keep you from cluttering up your globally installed python packages.\nAn alternative is to create a Docker image for each project and run your projects inside Docker containers.\nIf you are a beginner, the latter might be overkill.","Q_Score":0,"Tags":"python,git,installation","A_Id":68991371,"CreationDate":"2021-08-30T23:36:00.000","Title":"Clean python and git install, admin or user? MacOS","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"Hello, I want to run a specific script by, for example, typing \"run\".\nI found some people talking about environment vars, but I don't know what to do. Can anyone help me?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":37,"Q_Id":69005332,"Users Score":0,"Answer":"I guess you could do alias run=\"python \/path\/to\/your_script.py\"","Q_Score":0,"Tags":"python,command","A_Id":69005723,"CreationDate":"2021-08-31T21:10:00.000","Title":"How to execute a python script with a custom command?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm trying to download the AWS CLI tools onto my mac. The error message is pretty clear Unsupported Python version detected: Python 2.7 To continue using this installer you must use Python 3.6 or later. The issue I'm having is that aliasing python to python3 isn't working. 
For some reason, after aliasing, the installer still references Python 2.7.\nAfter aliasing through the CLI didn't work for installing the AWS CLI, I added alias python=python3 to my .zshrc file. Running python --version returns Python 3.9.6. Running the AWS installer still references the older version of python.\nI'm hesitant to completely override the older version, because I've read from multiple sources that the default python on OS X should not be touched.\nCan someone explain how I can reference the newer version of python when installing the AWS CLI tools?","AnswerCount":2,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":6745,"Q_Id":69006266,"Users Score":2,"Answer":"My problem was that I was trying to download an old version of the awscli. After downloading the newest version I ran into some issues with the credentials file. Upon updating the credentials file and adding a config file in the .aws directory, everything worked as expected.","Q_Score":2,"Tags":"python-3.x,amazon-web-services,macos,command-line-interface,aws-cli","A_Id":69007725,"CreationDate":"2021-08-31T23:29:00.000","Title":"AWS CLI not working because of unsupported Python version","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"pip install packagename.whl\nis not working in linux\nCan you please suggest something for installing python packages in linux with no internet connection?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":26,"Q_Id":69024440,"Users Score":0,"Answer":"I am going to assume you have some device that has a network connection,\nso probably download the libraries onto some kind of medium, sd card, usb drive, and then build them on the board using the build instructions, usually in the readme or something.\nYou will have to be careful, 
since you won't be using pip, to check the version of python you have and download the corresponding\/compatible version of the library.","Q_Score":0,"Tags":"python,package,requirements.txt","A_Id":69024600,"CreationDate":"2021-09-02T05:08:00.000","Title":"Installing python packages in linux which doesn't have network connection using windows","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"Although my issue is related to building Python, it could be considered generic. I have built Python with a custom TclTk installation, using --with-tcltk-includes and --with-tcltk-libs. Say I passed the \/foo\/bar\/spam path to the latter. The problem is, when I use strace to check where the binary looks for its shared libraries, I see \/foo\/bar\/spam in the search path, although I do not want that because this application will be shipped and this path does not exist anywhere else beyond my own machine. So I want to use it just during the build, but not as a search path for the generated binary. Any ideas?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":16,"Q_Id":69034352,"Users Score":0,"Answer":"Static linking is probably your best bet, because you likely can't guarantee that your customers will have any of the other required libraries (like sqlite3, for example) either. Just don't pass the flag --enable-shared to .\/configure and it will build a statically-linked lib by default.\nUnfortunately, I don't see a way with the standard options to build a partial-static\/partial-dynamic lib. 
That doesn't mean it isn't possible with some creative tinkering with the Makefile, but conceptually I'm not sure how it would work anyway.","Q_Score":0,"Tags":"python,python-3.x,build,configuration","A_Id":69034556,"CreationDate":"2021-09-02T16:52:00.000","Title":"How to fine-tune where a binary looks for its shared libraries during configure\/build (linux)?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have two or three sets of Azure credentials for Work, Work Admin, and Personal. This morning, I clicked the wrong login credential during an interactive login while doing some local development. My local dev app now has an identity of me@company.com, when I need my identity to actually be me@admin.com. Because I clicked the wrong identity, my application immediately starts getting obvious authorization errors.\nMy implementation is pretty naive right now, and I'm relying on the Python Azure SDK to realize when it needs to be logged in, and to perform that login without any explicit code on my end. This has worked great so far, being able to do interactive login, while using the Azure-provided creds when deployed.\nHow can I get my local dev application to forget the identity that it has and prompt me to perform a new interactive login?\nThings I've tried:\n\nTurning the app off and back on again. The credentials are cached somewhere, I gather, and rebooting the app is ineffective.\nScouring Azure docs. I may not know the magic word, and as a consequence many search results have to do with authentication for users logging into my app, which isn't relevant.\naz logout did not appear to change whatever cache my app is using for it's credential token.\nSwitching python virtual environments. 
I thought perhaps the credential would be stored in a place specific to this instance of the azure-sdk library, but no dice.\nScouring the azure.identity python package. I gather this package may be involved, but don't see how I can find and destroy the credential cache, or any other way to log out.\nDeleting ~\/.azure. The python code continued to use the same credential it had prior. ~\/.azure must be for the az cli, not the SDK.","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":175,"Q_Id":69046118,"Users Score":1,"Answer":"Found it! The AzureML SDK appears to be storing auth credentials in ~\/.azureml\/auth\/.\nDeleting the ~\/.azureml directory (which didn't seem to have anything else in it anyway) did the trick.","Q_Score":1,"Tags":"identity,azure-sdk-python","A_Id":69094775,"CreationDate":"2021-09-03T13:57:00.000","Title":"How do I get my local python app to forget its Azure credentials so I can do a new interactive login?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am fairly new to GCP.\nI have some items in a cloud storage bucket.\nI have written some python code to access this bucket and perform update operations.\nI want to make sure that whenever the python code is triggered, it has exclusive access to the bucket so that I do not run into some sort of race condition.\nFor example, if I put the python code in a cloud function and trigger it, I want to make sure it completes before another trigger occurs. Is this automatically handled or do I have to do something to prevent this? 
If I have to add something like a semaphore, will subsequent triggers happen automatically after the semaphore is released?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":76,"Q_Id":69047815,"Users Score":0,"Answer":"All of the info supplied has been helpful. The best answer has been to use a max-concurrent-dispatches setting of 1 so that only one task is dispatched at a time.","Q_Score":0,"Tags":"python,google-cloud-platform","A_Id":69170122,"CreationDate":"2021-09-03T16:06:00.000","Title":"How do I limit access to a cloud storage bucket to one process at a time?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have created a Pipeline in Azure ML which makes calls to Azure Cognitive Services Text Analytics using its Python API. When I run the code I have written locally, it executes without error, but when I run it in the pipeline it fails to perform the Sentiment Analysis and Key Phrase Extraction calls with a strange error message:\n\nGot exception when invoking script at line 243 in function azureml_main: 'ServiceRequestError: : Failed to establish a new connection: [Errno -3] Temporary failure in name resolution'.\n\nUpon further testing, it appears that it is able to open the Text Analytics Client correctly (Or at least without throwing an error), but when it gets to the line that actually makes the call out using the Python API it throws the above error.\nI wondered if it was an Open SSL issue, but when I checked the version it had access to TLS 1.2: OpenSSL 1.1.1k 25 Mar 2021\nIt does not appear to be a temporary issue; I started seeing the issue last week, and I have seen it over a number of environments and with different input datasets.\nHas anyone seen a similar issue before? 
Any ideas on how it could be resolved?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":238,"Q_Id":69068520,"Users Score":2,"Answer":"After speaking with Microsoft Support, it turns out this error was a platform error introduced in a recent update of Azure ML. Their product team are currently investigating a solution.\nAs a temporary fix, if you see this issue, you can try switching between using your personal endpoint and the generic regional endpoint; In this case, the error was only introduced for using personal endpoints. The endpoints in question have the following formats:\n\nPersonal: https:\/\/.cognitiveservices.azure.com\/\nRegional: https:\/\/.api.cognitive.microsoft.com\/","Q_Score":2,"Tags":"python,azure,azure-cognitive-services,azure-machine-learning-service,azure-python-sdk","A_Id":69170605,"CreationDate":"2021-09-06T02:19:00.000","Title":"Azure Machine Learning Python Module failing to Execute Calls to Cognitive Services","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am facing this error when I start my flask python3 application on mac.\nOSError: cannot load library 'gobject-2.0-0': dlopen(gobject-2.0-0, 2): image not found. Additionally, ctypes.util.find_library() did not manage to locate a library called 'gobject-2.0-0'\nI am using weasyprint in my project which is causing this issue.\nI tried to install glib and it is installed in my system","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":3879,"Q_Id":69097224,"Users Score":0,"Answer":"I had the same issue after the homebrew update. Turned out the issue was because of the older pango lib version.\nI did brew install pango\nThis upgraded pango lib from 1.48.2 -> 1.50.4 which internally installed gobject's latest version as dep. 
And my issue got resolved.","Q_Score":8,"Tags":"python-3.x,flask,glib,weasyprint,macbookpro-touch-bar","A_Id":71291557,"CreationDate":"2021-09-08T04:58:00.000","Title":"gobject-2.0-0 not able to load on macbook","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I was trying to run python code with the \"python\" command. There was no error message but the program just wasn't executed.\nFor test.py, I merely wrote a print(123), hoping it may print out the number. However, there's no output.\nAlso, commands such as python --version didn't work either.\nI'm using anaconda and I've added it to my path. I've found that if I specify the full path of python.exe, it would work. I'm wondering why? I thought that when python is added to the path, it should be called with the python command.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":44,"Q_Id":69099233,"Users Score":0,"Answer":"Make sure Python is properly installed\nMake sure it is added in PATH properly\nTry running a .py outside Anaconda","Q_Score":0,"Tags":"python,windows","A_Id":69099499,"CreationDate":"2021-09-08T08:07:00.000","Title":"Why the python command doesn't work even if I've added it to my path?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"How can I specify a function to run when a DAG fails using the taskflow api? 
Using the old style I am able to specify a function to run on_failure but I cannot figure out or find documentation to do it using the taskflow api with the DAG and task operators.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":145,"Q_Id":69107940,"Users Score":0,"Answer":"The @dag decorator accepts all the arguments that you can pass when creating a DAG instance, especially on_failure_callback. The same goes for the @task decorator.\nAlso, the on_failure_callback should be a function accepting a context argument. In your example it's not; do you get no error when the on_failure_callback executes?","Q_Score":0,"Tags":"python,airflow,airflow-taskflow","A_Id":69114104,"CreationDate":"2021-09-08T18:20:00.000","Title":"How to specify function for on_failure with taskflow api","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm trying to convert the command line return of docker container ls -q into a python list using the os.system library and method.\nIs there a way to do this using the os library?\nEssentially I'm trying to check if there are any running containers before I proceed through my code.\nThanks in advance!","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":95,"Q_Id":69111185,"Users Score":0,"Answer":"I also found that os.popen('docker container ls -q').read().split('\\n') works as well. 
Thank you for all the suggestions.","Q_Score":0,"Tags":"python,docker,command-line","A_Id":69118717,"CreationDate":"2021-09-09T01:20:00.000","Title":"Convert docker container ls -q output into a python list","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I clicked \"python: select interpreter - enter interpreter path - find\"\nBut I can't find that path.\nHow do I find and select interpreter path?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":190,"Q_Id":69130525,"Users Score":0,"Answer":"You need to point to the python.exe which is under the installation location of the python.\nIf you don't know where the python.exe is, you can find it in the cmd with the command of where python. And of course, you need to add the python installation location to the system environment variable of path first.","Q_Score":0,"Tags":"python,visual-studio-code,wsl-2","A_Id":69582220,"CreationDate":"2021-09-10T09:58:00.000","Title":"How do I find the virtualenv path installed via wsl2 when choosing python interpreter in vscode?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have saved images in the sequential format- 0.jpg, 1.jpg, ---,99.jpg, 100.jpg. I am making video from these frames by using ffmpeg for conversion:\ncmd1 = 'ffmpeg -framerate 25 -pattern_type glob -i \"\/home\/ubuntu\/17\/1_1\/*.jpg\" \/home\/ubuntu\/17_a.avi'\nBut ffmpeg is not reading the images in sequential manner. 
How do I make a video that takes frames in a sequential order?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":200,"Q_Id":69140600,"Users Score":1,"Answer":"The default behaviour for ffmpeg is to read images in a continuous numeric sequence. But since you have specified a glob pattern, it uses that. Drop pattern_type and specify the correct pattern in the filename field.\nffmpeg -framerate 25 -i \"\/home\/ubuntu\/17\/1_1\/%d.jpg\" \/home\/ubuntu\/17_a.avi","Q_Score":2,"Tags":"python,ffmpeg","A_Id":69141446,"CreationDate":"2021-09-11T06:29:00.000","Title":"Make a video in sequential order from frames","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am using os.getcwd() to get user current directory. I want to check whether a file is in Desktop folder or not. But getcwd() function only returns\nusername directory, like this \/Users\/macbookair, but I expect this:\n\/Users\/macbookair\/Desktop\nI am using macOS Big Sur. Any help would be greatly appreciated. Thanks","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":46,"Q_Id":69148459,"Users Score":0,"Answer":"You can use os.chdir('\/Users\/macbookair\/Desktop') to change directory and\nos.listdir() - to list the file names in that directory.","Q_Score":0,"Tags":"python","A_Id":69148499,"CreationDate":"2021-09-12T04:33:00.000","Title":"check whether a file is in Desktop folder or not Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have two exactly the same airflow jobs with the same code base. 
The only difference is that they are writing data to different Mongo collections.\nOne of the jobs is ~ 30% slower than the other; how is this possible? Could Airflow allocate more resources to one job than to another?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":19,"Q_Id":69161073,"Users Score":0,"Answer":"If both use the same queues, have the same priority and are always run in the same environment there should be no visible difference. Unless one of the jobs is run at a different time and the load on the system at that moment is higher. Is this duration difference a visible trend?\nHave you tested the performance of those jobs outside Airflow? Size and complexity of the collections may also matter.","Q_Score":0,"Tags":"python,airflow","A_Id":69164012,"CreationDate":"2021-09-13T10:23:00.000","Title":"One airflow job is much slower than another one with same code base","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am a beginner in Python and PyCharm. I had installed both Python and PyCharm in my D drive, instead of C. But then, I was facing some issues, so had to uninstall and reinstall them in my C drive. Now, when I use the pip command in CMD, it says\nFatal error in launcher: Unable to create process using '\"D:\\Program Files\\Python\\python.exe\" \"C:\\Users\\USERNAME\\AppData\\Local\\Programs\\Python\\Python39\\Scripts\\pip.exe\" ': The system cannot find the file specified.\nI tried looking for solutions but have given up. 
Does anyone have any suggestions?","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":44,"Q_Id":69165133,"Users Score":0,"Answer":"I also had the same problem and I tried the following steps:\n\nOpen CMD as an administrator.\nWrite 'python -m pip install -U --force pip' in the CMD.\nRestart your PC\/laptop.\nTry the pip command now.","Q_Score":0,"Tags":"python,pip,pycharm,drive","A_Id":69165224,"CreationDate":"2021-09-13T15:12:00.000","Title":"Problem with installing Python in external drive","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am a beginner in Python and PyCharm. I had installed both Python and PyCharm in my D drive, instead of C. But then, I was facing some issues, so had to uninstall and reinstall them in my C drive. Now, when I use the pip command in CMD, it says\nFatal error in launcher: Unable to create process using '\"D:\\Program Files\\Python\\python.exe\" \"C:\\Users\\USERNAME\\AppData\\Local\\Programs\\Python\\Python39\\Scripts\\pip.exe\" ': The system cannot find the file specified.\nI tried looking for solutions but have given up. 
Does anyone have any suggestions?","AnswerCount":3,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":44,"Q_Id":69165133,"Users Score":0,"Answer":"This is because your path file is still leading to the pip in D: Go to path and change that to your C: directory for pip","Q_Score":0,"Tags":"python,pip,pycharm,drive","A_Id":69165241,"CreationDate":"2021-09-13T15:12:00.000","Title":"Problem with installing Python in external drive","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I've got a little Python script here for batch renaming files. Really simple.\nI'd like to be able to run it simply from the dock, by dragging a folder on to the icon, it will use that folder as the input for the script and run it.\nCan anyone point me in the direction of a solution to this?\nI've had a little look at Py2App, and will have a play in the coming days, but I'm not quite sure if it's overkill for me.\nThere is also the AppleScript route, but I'd like to avoid that if possible.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":72,"Q_Id":69174457,"Users Score":0,"Answer":"Py2app can definitely do this, make sure you enable the \u201cargv_emulation\u201d option to convert dragged files to command-line arguments.\nThat\u2019s assuming you don\u2019t use a GUI library, if you do look into how that library exposed file-open events as the argv_emulation code tends to be incompatible with GUI libraries.","Q_Score":1,"Tags":"python,python-3.x,macos,py2app,dock","A_Id":69175469,"CreationDate":"2021-09-14T08:30:00.000","Title":"How can I run my Python script as a \"drag and drop\" icon on the dock? 
(MacOS)","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have installed python 3.9 in Ubuntu, because it comes with python 3.8 which is an older version.\nI changed the command for terminal alias python 3 = python 3.9, but when I installed pip, it installed for python 3.8 and after that when I am using pip install to install python packages, it installs for python 3.8. How can I fix it?","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":41,"Q_Id":69176058,"Users Score":0,"Answer":"Try with pip3 install.\nThis kind of issue even happened in my case when I was working with python modules recently on my project. Try this out; it worked for me.","Q_Score":0,"Tags":"python,ubuntu","A_Id":69176142,"CreationDate":"2021-09-14T10:24:00.000","Title":"Python pip installing problem in ubuntu Linux","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have installed python 3.9 in Ubuntu, because it comes with python 3.8 which is an older version.\nI changed the command for terminal alias python 3 = python 3.9, but when I installed pip, it installed for python 3.8 and after that when I am using pip install to install python packages, it installs for python 3.8. How can I fix it?","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":41,"Q_Id":69176058,"Users Score":0,"Answer":"Due to variations in the installation process of python, pip often requires different ways of access for different people. A few examples that may help you include pip3 install, py -m pip install, py -3 -m pip install, or python3 -m pip install. 
Usually one of these works for me.","Q_Score":0,"Tags":"python,ubuntu","A_Id":69180527,"CreationDate":"2021-09-14T10:24:00.000","Title":"Python pip installing problem in ubuntu Linux","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"The python file in question collects data from an API, and the server is down once in a while so it stops running. To solve this, I've written this batch file that would keep running the file even if it encounters an error, so in a few minutes it would be running as usual.\n:1\npython3 datacollection.py\ngoto 1\n\n\nNow, how do I achieve the same with a bash file?\nNote: I want to run the python file on AWS EC2 Ubuntu.","AnswerCount":3,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":57,"Q_Id":69177215,"Users Score":1,"Answer":"Just to add to @Willian Pursell's answer. You could run the screen command on your AWS server, then run while sleep 1; do python3 datacollection.py; done in your newly created session. Finally, detach from the session using the Ctrl + a, d keys. With that, you will be able to exit your session. Exit the server; the command will keep running in the background. Cheers.","Q_Score":1,"Tags":"python,bash,amazon-web-services,ubuntu","A_Id":69177343,"CreationDate":"2021-09-14T11:48:00.000","Title":"bash code to run python file continuously","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"The python file in question collects data from an API, and the server is down once in a while so it stops running. 
to solve this, I've written this batch file that would keep running the file even if it encounters an error, so in few minutes it would be running as usual.\n:1\npython3 datacollection.py\ngoto 1\n\n\nNow, how do I achieve the same with bash file?\nNote: I want to run the python file on AWS EC2 Ubuntu.","AnswerCount":3,"Available Count":2,"Score":0.0665680765,"is_accepted":false,"ViewCount":57,"Q_Id":69177215,"Users Score":1,"Answer":"Use some kind of process manager e.g pm2 for this. It will be more stable than that.","Q_Score":1,"Tags":"python,bash,amazon-web-services,ubuntu","A_Id":69177243,"CreationDate":"2021-09-14T11:48:00.000","Title":"bash code to run python file continuously","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I can install, uninstall and upgrade prettytable using pip [un]install [-U] prettytable. But running the mbed-tools compile command keeps on reporting \"ModuleNotFoundError: No module named 'prettytable'\"\nI can see the package is added to my ...\\appdata\\local\\programs\\python\\python39\\Lib\\site-packages\\ folder. According to my environmental path the folder ...\\appdata\\local\\programs\\python\\python39\\Scripts\\ is added and many packages has en .exe file installed there, but prettytabe is missing. Could this be the problem? If so, how do I install it and ensure that it actually has en exe install too?\nI'm running python 3.9.7 on pretty-much-the-latest Windows 10.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":91,"Q_Id":69194199,"Users Score":0,"Answer":"So it turned out I had Python 3.8.7 installed together with Python 3.9.7. 
So uninstalling the older one solved the problem...","Q_Score":0,"Tags":"python,installation,pip,mbed,prettytable","A_Id":69203606,"CreationDate":"2021-09-15T13:33:00.000","Title":"How to fix prettytable installation","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I had to format my laptop, and python was installed, but there is something different now: before formatting, when I open CMD and type python, it runs without anything else, but now I have to change the directory (cd C:\\Users\\Khaled\\Desktop\\python) to run python. What can I do to run python without changing directory?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":73,"Q_Id":69198344,"Users Score":1,"Answer":"Install Python in the C: drive, then set the environment variable:\nright-click the My Computer icon and select Properties,\nthen go to Advanced System Settings (on a Windows PC),\nthen select Environment Variables and add the path of your Python installation,\nfor example, if it is in the C: drive:\nC:\\python39\\Scripts;C:\\python39\nHere python39 is the folder name of my Python installation directory, which is in the C: drive.\nSet the user and system variables by clicking the Edit option.","Q_Score":0,"Tags":"python,cmd","A_Id":69198457,"CreationDate":"2021-09-15T18:43:00.000","Title":"python not working in cmd until path given","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"In a real system, some sensor data will be dumped into a specific directory as csv files. Then some data pipeline will populate these data to some database. Another pipeline will send these data to the predict service.\nI only have training and validation csv files as of now. 
I'm planning to simulate the flow of sending data to the predict service in the following way:\nDAG1 - Every 2 min, select some files randomly from a specific path and update the timestamp of those files. Later, I may choose to add a random delay after the start node.\nDAG2 - FileSensor pokes every 3 min. If it finds a subset of files with modified timestamps, it should pass those to subsequent stages to eventually run the predict service.\nIt looks to me that if I use FileSensor as-is, I can't achieve it. I'll have to derive from the FileSensor class (say, MyDirSensor), check the timestamps of all the files - select the ones which are modified after the last successful poke and pass those forward.\nIs my understanding correct? If yes, can I store the last successful poke timestamp in some variable of MyDirSensor? Can I push\/pull this data to\/from xcom? What will be the task-id in that case? Also, how do I pass this list of files to the next task?\nIs there any better approach (like a different Sensor etc.) for this simulation? I'm running the whole flow on a standalone machine currently. My airflow version is 1.10.15.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":182,"Q_Id":69217245,"Users Score":1,"Answer":"I am not sure if the current Airflow approach is actually best for this use case. In the current incarnation, Airflow is really all about working on \"data intervals\" - so basically each \"dag run\" is connected to some \"data interval\" and it should be processing data for that data interval. Classic batch processing.\nIf I understand correctly, your case is more like streaming - not entirely, but close. You get some (subset of) data which arrived since the last time, and you process that data. 
This is not what the (again, current) version of Airflow - not even 2.1 - is supposed to handle, because there is a complex manipulation of \"state\" which is not \"data interval\" related (and Airflow currently excels in the \"data interval\" case).\nYou can indeed write some custom operators to handle that. I think there is no ready-to-reuse pattern in Airflow for what you want to achieve, but Airflow is flexible enough that if you write your own operators you can certainly work around it and implement what you want. And writing operators in Airflow is super easy - it's a simple Python class with \"execute\" which can reuse existing Hooks to reach out to external services\/storages and use XCom for communication between tasks. It's surprisingly easy to add a new operator doing even complex logic (and again - reusing hooks to make it easier to communicate with external services). For that reason, I think it's still worth using Airflow for what you want to do.\nHow I would approach it - rather than modifying the timestamps of the files, I'd create other files - markers - with the same names and different extensions, and base my processing logic on that (this way you can use the external storage as the \"state\"). I think there will be no ready \"operators\" or \"sensors\" to help with it, but again - writing a custom one is easy and should work.\nHowever, soon (several months) in Airflow 2.2 (and even more in 2.3) we are going to have some changes (mainly around flexible scheduling, decoupling dag runs from data intervals, and finally allowing dynamic DAGs with a flexible structure that can change per-run) that will provide some nice ways of handling cases similar to yours.\nStay tuned - and for now rely on your own logic, but look out for simplifying it in the future when Airflow will be better suited for your case.\nAnd in the meantime - do upgrade to Airflow 2. 
It's well worth it, and Airflow 1.10 reached end of life in June, so the sooner you do, the better - as there will not be any more fixes to Airflow 1.10 (even critical security fixes)","Q_Score":0,"Tags":"python-3.x,airflow","A_Id":69221007,"CreationDate":"2021-09-17T02:57:00.000","Title":"How to retrieve recently modified files using airflow FileSensor","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I was using AWS glue python shell. The program uses multiple python libraries which are not natively available for AWS. Glue can take .egg or .whl files for external library reference. All we need to do is put these .egg or .whl files in some S3 location and point to them using their full paths. I tried with one external library [for instance openpyxl] and it worked. Now the problem is since I have multiple external libraries like pandas, numpy, openpyxl and pytz to be referred, I can't give the full path of all these packages as only one path can be specified as the external python library reference. I tried giving the s3 folder name where I placed all these packages; it does not work.\nHow can I specify these multiple .egg or .whl files so that my glue job can use them?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":392,"Q_Id":69240958,"Users Score":0,"Answer":"This question is already answered by gbeaven, but for some reason I am unable to mark it as the answer. 
This was fixed by comma-separating the file paths in the additional python modules.","Q_Score":1,"Tags":"python,aws-glue,python-packaging,aws-glue-workflow,aws-glue-connection","A_Id":69254193,"CreationDate":"2021-09-19T06:45:00.000","Title":"AWS Glue python shell - Using multiple libraries","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have created and pushed a docker image to Docker Hub. I am pulling the image on the other side on the client machines. However, there are config files inside the image that are client site specific (they change from site to site) - for example the addresses of the RTSP cameras per site. How would I edit these files on each client site? Do I need to manually vim each image on each client site, or is there a simpler way?\nOr is the solution to extract these config files entirely from the image, copy them separately to the client site, and somehow change the code to reach these files outside the image?\nthanks","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":31,"Q_Id":69242107,"Users Score":0,"Answer":"You'd better keep your image in DockerHub as a baseimage w\/o any dynamic config in it (or simply ignore it).\nOn the client side, you need to create your local image from the baseimage from the DockerHub, replacing the config via COPY or by mounting it as a Volume.\nOR as @Klaus D. 
commented","Q_Score":0,"Tags":"python,docker","A_Id":69242156,"CreationDate":"2021-09-19T09:46:00.000","Title":"Editing config file inside Docker image on client site","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I downloaded a library and need to make changes to the source code\/source file. The library seems to be located at \/usr\/local\/lib\/python3.7\/dist-packages. Where can I find the files on Windows?","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":470,"Q_Id":69249457,"Users Score":0,"Answer":"First of all, find out where Python is installed on Windows. Incase if you installed Python by downloading the application from its website, the application should be installed by default in C:\\Users\\UserName\\AppData\\Local\\Programs\\Python\\Python39\nOnce you opened the installation folder, head over to the Lib\\site-packages directory and you can find the source code folders of installed packages. Hope that helps to solve your problem.","Q_Score":0,"Tags":"python,windows,google-colaboratory","A_Id":69249572,"CreationDate":"2021-09-20T05:29:00.000","Title":"find and edit source code of installed python libraries in Google Colab on Windows","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I downloaded a library and need to make changes to the source code\/source file. The library seems to be located at \/usr\/local\/lib\/python3.7\/dist-packages. 
Where can I find the files on Windows?","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":470,"Q_Id":69249457,"Users Score":0,"Answer":"By default the install location is C:\\Python39 - so it would be C:\\Python39\\Lib\\site-packages","Q_Score":0,"Tags":"python,windows,google-colaboratory","A_Id":69249543,"CreationDate":"2021-09-20T05:29:00.000","Title":"find and edit source code of installed python libraries in Google Colab on Windows","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"so I have a python program that invokes singularity exec on a .sif file via os.system. Then on the next line of my program, I use os.system again to attempt to run a python script. I assumed that this would start up the singularity, and then run my script from it, however currently it just runs the exec command, brings me into the container, and then hangs (it does not execute the python command).\nDoes anybody have any advice or experience with this issue?\nThank you.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":94,"Q_Id":69258322,"Users Score":0,"Answer":"Singularity exec runs a single, specified command, it does not change the execution environment to that of the singularity image.\nIf you need an interactive session, use: singularity shell my_image.sif\nIf you need to run multiple commands, write a shell script and use that: singularity exec my_image.sif my_script.sh\nAlternatively, use singularity to run your python script. 
Then everything will be done in the context of the image rather than the host machine.","Q_Score":0,"Tags":"python,os.system,singularity-container","A_Id":69268989,"CreationDate":"2021-09-20T17:03:00.000","Title":"executing os.system python commands after invoking a singularity exec command via os.system","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am attempting to install PIP into my Python so I can install other modules. But I am getting the errors below. Any ideas?\n[root@sandbox ~]# python3 get-pip.py\nCollecting pip<18\nRetrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLEOFError(8, 'EOF occurred in violation of protocol (_ssl.c:600)'),)': \/packages\/0f\/74\/ecd13431bcc456ed390b44c8a6e917c1820365cbebcb6a8974d1cd045ab4\/pip-10.0.1-py2.py3-none-any.whl\nRetrying (Retry(total=3, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLEOFError(8, 'EOF occurred in violation of protocol (_ssl.c:600)'),)': \/packages\/0f\/74\/ecd13431bcc456ed390b44c8a6e917c1820365cbebcb6a8974d1cd045ab4\/pip-10.0.1-py2.py3-none-any.whl\nRetrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLEOFError(8, 'EOF occurred in violation of protocol (_ssl.c:600)'),)': \/packages\/0f\/74\/ecd13431bcc456ed390b44c8a6e917c1820365cbebcb6a8974d1cd045ab4\/pip-10.0.1-py2.py3-none-any.whl\nRetrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLEOFError(8, 'EOF occurred in violation of protocol (_ssl.c:600)'),)': \/packages\/0f\/74\/ecd13431bcc456ed390b44c8a6e917c1820365cbebcb6a8974d1cd045ab4\/pip-10.0.1-py2.py3-none-any.whl\nRetrying (Retry(total=0, connect=None, read=None, 
redirect=None, status=None)) after connection broken by 'SSLError(SSLEOFError(8, 'EOF occurred in violation of protocol (_ssl.c:600)'),)': \/packages\/0f\/74\/ecd13431bcc456ed390b44c8a6e917c1820365cbebcb6a8974d1cd045ab4\/pip-10.0.1-py2.py3-none-any.whl\nCould not install packages due to an EnvironmentError: HTTPSConnectionPool(host='files.pythonhosted.org', port=443): Max retries exceeded with url: \/packages\/0f\/74\/ecd13431bcc456ed390b44c8a6e917c1820365cbebcb6a8974d1cd045ab4\/pip-10.0.1-py2.py3-none-any.whl (Caused by SSLError(SSLEOFError(8, 'EOF occurred in violation of protocol (_ssl.c:600)'),))","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":140,"Q_Id":69270178,"Users Score":0,"Answer":"Hi all, the issue turned out to be back-end HTTPS restrictions on our network.\nI had to get a bypass to install PIP itself. Once that was done I was able to install modules using the trusted host method.","Q_Score":0,"Tags":"python,python-3.x,pip","A_Id":69538687,"CreationDate":"2021-09-21T13:58:00.000","Title":"Python get-pip install fails","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I wrote a python 3.7 script under Windows 7 and compiled it using auto-py-to-exe. I can run the .exe with no problem on my computer, but when my co-worker tries to run it under Windows XP there is an error: \"The procedure entry point GetFinalPathNameByHandleW could not be located in the dynamic link library Kernel32.dll\"\nIs this because XP doesn't support python 3.7?
From what I know, XP supports only up to 3.4 but I can't rewrite the code with python 3.4 because then one of the libraries I used is unsupported.\nIs there any way I can make it work on XP or is the problem something else?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":93,"Q_Id":69274424,"Users Score":1,"Answer":"Those kinds of error messages mean that the program is looking \"inside\" the specified file (in this case kernel32.dll) trying to find a function\/procedure to run called 'GetFinalPathNameByHandleW' and not finding it.\nEither the program is calling the wrong nonexistent function or the library file doesn't have it in there. Things are not matching up somewhere.\nA DLL is a Dynamic Link Library and files like kernel32.dll are sometimes just a bunch of functions\/procedures\/subroutines all lumped into one portable file.\nIn a primitive way, you can use a text editor to open the kernel32.dll file (make a copy if it your desire) and search for a string 'GetFinalPathNameByHandleW' and you will not find it.\nSo your program is calling a function inside a DLL but that function does not exist in the Windows XP kernel32.dll.\nI think GetFinalPathNameByHandleW was introduced in Windows Vista, so going forward from there you would be fine.\nIf you want your program to work on XP, you need to stick to functions that are part of XP and GetFinalPathNameByHandleW ain't in there hence the error.","Q_Score":0,"Tags":"python,python-3.7,windows-xp,python-3.4","A_Id":69274486,"CreationDate":"2021-09-21T19:16:00.000","Title":"Python program I wrote in python3.7\/Windows 7 won't run in Windows XP","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"Invoke-Expression : At line:1 char: 522\n...ntel Wireless 
Common;C:\\WINDOWS\\System32\\OpenSSH;\"C:\\Mingw\\bin;c:\\MinG\ntoken 'C:\\Mingw\\bin' in expression or statement.\nC:\\Users\\admin anaconda3 shell\\condabin Conda.psm1:107 char:9\nInvoke-Expression -Command SactivateCommand;\n\nCategoryInfo\n: ParserError: (:) [Invoke-Expression), ParseExcep\ntion\nFullyQualifiedErrorid : Unexpected Token, Microsoft PowerShell.Commands. In\nvokeExpressionCommand","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":605,"Q_Id":69278454,"Users Score":0,"Answer":"The path variables are wrong around\n\n\"C:\\Mingw\\bin;c:\\MinG token 'C:\\Mingw\\bin'\n\nHave you noticed that there are double quotes and single quotes there? These signs caused the problem. You have to correct these variables in your path.","Q_Score":0,"Tags":"python","A_Id":70478710,"CreationDate":"2021-09-22T05:03:00.000","Title":"When I open Anaconda prompt it shows the error mentioned below. How do I resolve this issue?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"Here is a typical request:\nI built a DAG which updates daily from 2020-01-01. It runs an INSERT SQL query using {execution_date} as a parameter.
Now I need to update the query and rerun for the past 6 months.\nI found out that I have to pause Airflow process, DELETE historical data, INSERT manually and then re-activate Airflow process because Airflow catch-up does not remove historical data when I clear a run.\nI'm wondering if it's possible to script the clear part so that every time I click a run, clear it from UI, Airflow runs a clear script in the background?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":148,"Q_Id":69290067,"Users Score":0,"Answer":"After some thought, I think here is a viable solution:\nInstead of INSERT data in a DAG, use a DELETE query and then INSERT query.\nFor example, if I want to INSERT for {execution_date} - 1 (yesterday), instead of creating a DAG that just runs the INSERT query, I should first run a DELETE query that removes data of yesterday, and then INSERT the data.\nBy using this DELETE-INSERT method, both of my scenarios work automatically:\n\nIf it's just a normal run (i.e. no data of yesterday has been inserted yet and this is the first run of this DAG for {execution_date}), the DELETE part does nothing and INSERT inserts the data properly.\n\nIf it's a re-run, the DELETE part will purge the data already inserted, and INSERT will insert the data based on the updated script. 
No duplication is created.","Q_Score":1,"Tags":"python,airflow","A_Id":69290614,"CreationDate":"2021-09-22T19:16:00.000","Title":"How to purge historical data when clearing a run from Airflow dashboard?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"We have started one celery worker reading from Rabbitmq (one queue):\ncelery -A tasks worker -c 1 (one process)\nWe send to RabbitMq 2 chains (3 tasks in each chain):\nchain(*tasks_to_chain1).apply_async() (let's call it C1 and its tasks C1.1, C1.2, C1.3)\nchain(*tasks_to_chain2).apply_async() (let's call it C2 and its tasks C2.1, C2.2, C2.3)\nWe expected the tasks to be run in this order: C1.1, C1.2, C1.3, C2.1, C2.2, C2.3.\nHowever we are seeing this instead: C1.1, C2.1, C1.2, C2.2, C1.3, C2.3.\nWe don't get why. Can someone shed some light on what's happening?\nMany thanks,\nEdit: more generally speaking we observe that chain 2 starts before chain 1 ends.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":101,"Q_Id":69318302,"Users Score":0,"Answer":"Without access to full code it is not possible to test, but a plausible explanation is that your asynchronous sender just happens to do that. I would also assume the order is not deterministic. You will likely get somewhat different results if you keep repeating this.\nWhen you execute apply_async(), an asynchronous task is created that will (probably, not sure without seeing code) start submitting those tasks to the queue, but as the call is not blocking, your main program immediately proceeds to the second apply_async() that creates another background task to submit things to the queue.\nThese two background tasks will run in the background handled by a scheduler. 
What you now see in your output is that each task submits one item to the queue but then passes control to the other task, which again submits one and then hands back control.\nIf you do not want this to happen asynchronously, use apply instead of apply_async. This is a blocking call, and your main program execution does not proceed until the first three tasks have been submitted. With asynchronous execution you can never be sure of the exact order of execution between tasks. You will know C1.1 will happen before C1.2 but you cannot guarantee how C1 and C2 tasks are interleaved.","Q_Score":0,"Tags":"python,rabbitmq,celery","A_Id":69573782,"CreationDate":"2021-09-24T16:22:00.000","Title":"Running multiple celery chains are getting mixed","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have multiple pipelines with scripts defined on my azure devops.\nI would like to be able to run them from my local machine using python.\nIs this possible?\nIf yes, how to achieve it?\nRegards,\nMaciej","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":43,"Q_Id":69357387,"Users Score":0,"Answer":"You can't run them in such a way that you take the YAML code, put it in Python (or any other language), and run it locally. You need to build an agent to run your pipeline.
So you can create a pool, install an agent on your local machine, and change your pipeline to use this pool.","Q_Score":0,"Tags":"python,azure-devops,azure-pipelines","A_Id":69357756,"CreationDate":"2021-09-28T07:23:00.000","Title":"Is there a way to run azure-devops pipeline from python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have three docker containers. The first container is a Wordpress container which starts a wordpress instance. The second container is a selenium server container created from the selenium\/standalone-chrome image, which is supposed to do some initial setup of the wordpress via UI interactions. I have a python script that has all the commands to send to the selenium server. I am running this script in a python container as the third container. All the containers are spawned using docker-compose and are in the same network, so that communication can happen.\nOnce the python container has finished running the script it exits; however, the selenium server and the wordpress container keep running. Once I am done with the script, I want to stop the selenium server container as well but keep the wordpress container running.\nI had a thought to run a script inside the python container as the entrypoint, which first executes the script and then issues a command to stop the other container, but for that I guess the python container would also have to have docker available inside it. So, I think this will not work.
Is there a simple way to achieve this?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":94,"Q_Id":69357877,"Users Score":0,"Answer":"The command\ndocker ps --filter=name='my-container'\nwill show you if the interesting container is still there.\nFor example,\ndocker ps\nshows many containers, but you can filter:\ndocker ps --filter=name='cadvisor'\nCONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES\n984f35929991 google\/cadvisor \"\/usr\/bin\/cadvisor -\u2026\" 3 years ago Up 2 hours 0.0.0.0:1234->8080\/tcp cadvisor\nand so a script can test the presence of both containers (or only one) and do a\ndocker stop xxx\nwhen needed","Q_Score":0,"Tags":"python,docker,selenium,docker-compose","A_Id":69358516,"CreationDate":"2021-09-28T07:59:00.000","Title":"Is it possible to stop a docker container after another docker container inside the same network exits?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I need to run a GRPC server on a custom Linux distro, which has no access to the internet or python pip. Can anyone please provide some guidance how I can generate the cygrpc.cp37-win_amd64 file specific to the platform [32bit custom Linux with Python 3.7]? I have noted that if I copy the grpc folder, the only error that comes up is that GRPC fails to import cygrpc","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":91,"Q_Id":69358280,"Users Score":1,"Answer":"The solution I found for this problem is to use a Docker container with a 32-bit Linux (e.g. RHEL) image and install gRPC in this container. After installation the platform-specific cygrpc.cp.. file is generated inside the _cython folder in the grpc folder.
Then it can be used wherever you need it.","Q_Score":0,"Tags":"python,pip,grpc-python","A_Id":71843626,"CreationDate":"2021-09-28T08:29:00.000","Title":"Using GRPC python without internet access and pip install for custom Linux distro","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm trying to figure out how to send an entire address balance in a post EIP-1559 transaction (essentially emptying the wallet). Before the London fork, I could get the actual value as Total balance - (gasPrice * gas), but now it's impossible to know the exact remaining balance after the transaction fees because the base fee is not known beforehand.\nIs there an algorithm that would get me as close to the actual balance without going over? My end goal is to minimize the remaining Ether balance, which is essentially going to be wasted. Any suggestions would be highly appreciated!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":427,"Q_Id":69371882,"Users Score":2,"Answer":"This can be done by setting the 'Max Fee' and the 'Max Priority Fee' to the same value. This will then use a deterministic amount of gas. Just be sure to set it high enough - comfortably well over and above the estimated 'Base Fee' to ensure it does not get stuck.","Q_Score":2,"Tags":"python,ethereum,web3,web3py","A_Id":69986683,"CreationDate":"2021-09-29T07:10:00.000","Title":"Sending entire Ethereum address balance in post EIP-1559 world","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have to install GRPC python from source as the target machine does not have internet connection. 
The target machine has python 3.7 and pip3 installed. Can anyone share the process how to do it. Thanks in advance","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":46,"Q_Id":69379680,"Users Score":0,"Answer":"Here's how I solved the problem. The main issue with using the downloaded gRPC package is that the cython compiler is platform dependent. The cython compiler is found in the grpc\/_cython and it looks something like this \"cygrpc.cp37-win_amd64.pyd\". Here cp37 is the python version, win is the os or the platform name and amd64 is the architecture.\nWhat I did in order to solve this issue, is I had to download the corresponding cython file for 32bit Linux platform - cygrpc.cpython-37m-i386-linux-gnu.so . I then created 2 separate grpc packages - one for Linux and one for Windows. This can be extended to as many platforms and architectures you want.\nAfter this using pf = platform.system() to determine the os and architecture and called the respective grpc package.\nThis comprehensively solved the issue for me.","Q_Score":0,"Tags":"selinux,grpc-python","A_Id":69642177,"CreationDate":"2021-09-29T15:52:00.000","Title":"Install GRPC from source for SELINUX 32bit","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"API 1 in container 1 --> API 2 in container 2\nI have a simple FastAPI REST API (API 1) in one docker container invokes another simple FastAPI REST API (API 2) in another docker container. The call is via requests package.\nWhen I perform Apache Benchmark test on API 2 (ab -n 10000 -c 500 http:\/\/[url of API 2]) , the (RPS) request per second is around 4600. 
However, when I perform the same apache benchmark on API 1 (ab -n 10000 -c 500 http:\/\/[url of API 1]), the RPS drops to around 1350, even though API 1 is just a simple requests.post call to API2 without any processing logic. I do not understand why the nested call reduces the RPS so drastically.\nAPI 0 in container 0 --> API 1 in container 1 --> API 2 in container 2\nTo further confirm my observation, I created another FastAPI REST API 0 in another docker container which consists of a simple requests.post call to API 1. The apache benchmark test (ab -n 10000 -c 500 http:\/\/[url of API 0]) slowed down further to 530 RPS.\nMay I know the reason? I thought an http request call to a FastAPI REST API shouldn't add that much overhead in a series of nested calls.\nIn my microservice distributed application, I have multiple microservices hosted in different containers, whereby some processing of a request from the client browser might incur nested calls between microservices (for example, client browser calls A, A calls B, B calls C)","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":331,"Q_Id":69388318,"Users Score":0,"Answer":"In absolute terms, if your container stack is handling 530 requests per second, then each request takes 1.9 ms. That's pretty good on the whole. (The performance problems I worry about in my day job are individual HTTP requests that take minutes to complete.) I would not worry about tuning this.\nIt might be more interesting to break this down in terms of latency (time per request) rather than throughput (requests per unit time). Apachebench can probably output median and 99th-percentile latency. Just looking at averages, though:\n\n\n\n\nPath\nThroughput\nAverage Latency\n\n\n\n\napi2\n4600 req\/s\n217 \u00b5s\n\n\napi1 \u2192 api2\n1350 req\/s\n740 \u00b5s\n\n\napi0 \u2192 api1 \u2192 api2\n530 req\/s\n1886 \u00b5s\n\n\n\n\n217 \u00b5s is quite fast; adding 0.5-1.0 ms for an additional HTTP hop seems reasonable to me.
Remember that Python is an interpreted language, so this setup will probably be slower than a compiled language like Go, if you're counting microseconds; but also that your \"real\" application will probably involve a database, and in most cases an SQL request will immediately dwarf these numbers. If the database round-trip requires a minimum of 100 ms then adding 1 ms for an HTTP proxy hop is essentially free.\nIn the microservice environment you describe, having a sequence of cross-service calls is pretty typical. I'd suggest tuning each service individually: if service B calls service C twice, and service C requires 500 ms to process a request, then the entire stack will require > 1s to run, but profiling service C on its own will hopefully point at the problem. In my experience the most likely thing you'll need to focus on is the database setup. Having fewer network hops is \"better\" in the abstract, but the numbers you show here suggest that adding or removing a proxy isn't going to be a big difference in the final performance.","Q_Score":0,"Tags":"docker,python-requests,nested,benchmarking,fastapi","A_Id":69391034,"CreationDate":"2021-09-30T07:53:00.000","Title":"Will nested FastAPI REST API call via requests slow down the performance?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"ERROR: Could not install packages due to an EnvironmentError: [WinError 5] Access is denied: 'C:\\Users\\Sampath\\anaconda3\\Lib\\site-packages\\~5py\\defs.cp38-win_amd64.pyd'\nConsider using the --user option or check the permissions.\nI tried pip install mediapipe","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":7196,"Q_Id":69396320,"Users Score":3,"Answer":"EnvironmentError: Access is denied errors usually stem from one of two reasons:\n\nYou do not have the proper 
permissions to install these files, and you should try running the same commands in an Administrator Command Prompt. 90% of the time, this should solve the problem.\nIf the first doesn't work, then the problem is usually from an external program accessing a file, and you (or the installation script) are trying to delete that file (you cannot delete a file that is opened by another program). Try to restart your computer, so that whatever process is using that file will be shut down. Then, try the command again.","Q_Score":4,"Tags":"python,mediapipe","A_Id":69396372,"CreationDate":"2021-09-30T17:10:00.000","Title":"Could not install packages due to an EnvironmentError: [WinError 5] Access is denied","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"Hi dear ladies and guys,\nso i've been struggling today to find out how to make flower use the redis backend to get the historical tasks. I've read that Flower has the --persistent flag but this creates its own file.\nWhy does it need this file? Why doesn't it just pull the records from redis?\nI don't get it. ( I have RabbitMQ as broker and Redis as backend configured in the Celery() constructor)","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":88,"Q_Id":69409871,"Users Score":0,"Answer":"The short answer is that flower won't know which task results to look for in the backend. Since redis databases can be shared with other processes, flower can't guarantee that a key that looks a certain way will contain a celery result that it \"should\" be monitoring. 
The persistent flag lets flower keep track of the task results it \"should\" be monitoring by saving a copy of any tasks that it sees going through the broker queue and, thus, keep track of relevant results.","Q_Score":0,"Tags":"python,python-3.x,redis,rabbitmq,celery","A_Id":69442599,"CreationDate":"2021-10-01T17:27:00.000","Title":"Why to use redis as backend for celery if flower takes snapshots anyway?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a set of commands, each from Juniper & Aruba switches, that I would like to automatically convert. Is there a way to use a string of Juniper commands and have them output a string of Aruba commands? If so, how would I approach this using Python? Would I use \"if else\", python dictionary commands, or some other syntax?\nFor example:\nI have just come up with this script:\ndef Juniper(sets):\n    print ('host-name', set1)\n    print ('console idle-timeout 600\\n' 'console idle-timeout serial-usb 600\\n' 'aruba-central disable\\n' 'igmp lookup-mode ip\\n' 'console idle-timeout serial-usb 600\\n')\n    print (\"logging (system1) oobm\") #I AM TRYING TO ADD THE INPUT OF system1 IN BETWEEN LIKE THIS ^ AS SHOWN ABOVE\nset1 = input('Enter hostname with quotations:\\n')\nsystem1 = input('Enter system log IP address:')\nJuniper(set1)\nPlease let me know how to add an input in between two strings or words","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":43,"Q_Id":69421321,"Users Score":0,"Answer":"I would approach this by simply using a show command with display json on the Juniper side and then using a Jinja2 template to build your Aruba config with whichever dynamic data you want from the Juniper output. And yes, you can do it all in Python, from extracting the data from the device to the conversion.
To extract data from the device you can use Scrapli or Nornir, and then use the result as data to render with a Jinja template.","Q_Score":0,"Tags":"python,automation,juniper,aruba","A_Id":72133283,"CreationDate":"2021-10-03T01:01:00.000","Title":"Juniper to Aruba Commands using Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I wrote python code on windows at the development stage. During my project I used many modules, for example: pandas, numpy, etc.\nI am trying to deploy this code in production (Linux), and those modules are not installed on that machine, so I want to make my code portable with its drivers, utilities and the modules which I used.\nHow can I do that?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":30,"Q_Id":69464752,"Users Score":0,"Answer":"1. Generate a requirements.txt file from your local project\n2. Log in to your linux server via ssh\n3. Install the version of python you want on your linux server (it generally contains a 2.7 version)\n4. Create a specific folder for your project\n5. In that folder, create a virtual environment using the Python that you have installed\n6. Upload the requirements.txt to the folder and install it using pip install -r requirements.txt after activating the virtual environment.","Q_Score":0,"Tags":"python,linux","A_Id":69464855,"CreationDate":"2021-10-06T11:19:00.000","Title":"How to make my python code portable with its drivers","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I need to implement a Kafka producer in Python but I want to avoid possible future issues with new Kafka versions, so I would like to know if it is
possible to implement a Kafka producer without importing the Producer module from confluent_kafka?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":28,"Q_Id":69494173,"Users Score":0,"Answer":"Import the producer from kafka-python or aiokafka then...?\nSure, you can re-implement the TCP protocol yourself, but you'll get fixes faster if you use a well maintained library, and you'll have the exact same problem with anything else, if there ever is a breaking change\nHowever, the Kafka protocol has maintained backwards compatibility ever since version 0.10 (released 4 years before the current version, as of this post!)\nI'd say you're over thinking the problem","Q_Score":0,"Tags":"python,apache-kafka,kafka-producer-api","A_Id":69496138,"CreationDate":"2021-10-08T10:12:00.000","Title":"Is there a way to implement a Kafka producer without importing Producer from confluent_kafka?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm learning to code with python 3.9, but the resources I'm using are referred to a previous build. In one of the exercises I'm supposed to use pythonw.exe to run a script in the background, however I don't have such file in my installation folder.\nAfter searching online, I have found that the file must have been automatically deleted by the antivirus during the install. So I uninstalled and reinstalled again while disabling the antivirus during the install (malwarebytes + win defender), but pythonw is still missing.\nAt this point I don't know if there is another workaround or if there is an alternative to pythonw for version 3.9.. 
Can anyone help shed some light?","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":334,"Q_Id":69502251,"Users Score":1,"Answer":"pythonw.exe should be in the same directory as python.exe. For me (running Python 3.8) that is C:\\Program Files\\Python38. With a different install option, it probably would be in C:\\Python38. (Yours would presumably have \"Python39\" where I have \"Python38\".)\nDid you get your Python installation file from python.org? It should include pythonw.exe; if you got your distribution from somewhere else, it may be tailored differently.\nThe difference between python.exe and pythonw.exe is that pythonw.exe will avoid opening an extra window. If you don't care about that, go ahead and use python.exe.\nBy the way, I have both MalwareBytes and Windows Defender installed, and neither of those has ever deleted pythonw.exe for me, so I doubt that's your issue.","Q_Score":0,"Tags":"python,windows,python-3.9,pythonw","A_Id":69502328,"CreationDate":"2021-10-08T22:19:00.000","Title":"pythonw.exe in python 3.9","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm learning to code with python 3.9, but the resources I'm using refer to a previous build. In one of the exercises I'm supposed to use pythonw.exe to run a script in the background, however I don't have such a file in my installation folder.\nAfter searching online, I have found that the file must have been automatically deleted by the antivirus during the install. So I uninstalled and reinstalled again while disabling the antivirus during the install (malwarebytes + win defender), but pythonw is still missing.\nAt this point I don't know if there is another workaround or if there is an alternative to pythonw for version 3.9.
Can anyone help shed some light?","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":334,"Q_Id":69502251,"Users Score":0,"Answer":"Actually, 5 minutes after posting the question I found the solution.\nI reinstalled Python, enabling the option \"install for all users\". Now the file is there.","Q_Score":0,"Tags":"python,windows,python-3.9,pythonw","A_Id":69506398,"CreationDate":"2021-10-08T22:19:00.000","Title":"pythonw.exe in python 3.9","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am currently doing data processing in python, but the amount of data I handle is so large that it takes an enormous amount of time.\nI would like to run the data processing job every day between 6pm and 6am, and proceed gradually.\nIs it possible to do that with APScheduler?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":53,"Q_Id":69503767,"Users Score":0,"Answer":"APScheduler can schedule the job to start, but your code needs to check the time on each iteration and return from the call if the hour is between 7 am and 5 pm.","Q_Score":0,"Tags":"python,cron,apscheduler","A_Id":69505169,"CreationDate":"2021-10-09T04:34:00.000","Title":"How to add a job to be started at a fixed time every day and to be finished at a fixed time with APScheduler","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have both python 3.8 and 3.9 on my Mac.
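The hour-window check suggested in the APScheduler answer above can be sketched as follows. This is a minimal illustration, not APScheduler API: the function name and the exact 7am–5pm boundary are taken loosely from the answer's wording, and the scheduled job itself is only hinted at in comments.

```python
from datetime import datetime

def outside_work_window(now=None):
    """Return True when processing should pause (roughly 7am to 5pm)."""
    hour = (now or datetime.now()).hour
    return 7 <= hour <= 17

# Inside the scheduled job, check on each iteration and bail out early:
# for chunk in work_items:
#     if outside_work_window():
#         return            # the scheduler starts the job again at 6pm
#     process(chunk)

print(outside_work_window(datetime(2021, 10, 9, 12, 0)))  # True: midday, stop
print(outside_work_window(datetime(2021, 10, 9, 22, 0)))  # False: evening, keep going
```

The job then makes incremental progress each night and simply returns when the window closes, which matches the answer's "check the time on each iteration" advice.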
when I install a new package by pip3 install ..., the package will go to the python 3.9 folder, but apparently the executable commands to run in the terminal are in the python 3.8 folder, so even though I installed a package, I cannot run the command-line tools that come with it.\nI guess I have to somehow point the path of python to version 3.9?\nCan somebody help me to fix this?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":250,"Q_Id":69511914,"Users Score":0,"Answer":"Try which python3.9 to definitively determine if and where Python 3.9 lives on your path. Then run python3.9 -m pip install \u2026 to ensure that the command gets executed against the Python 3.9 interpreter.","Q_Score":0,"Tags":"python,pip,python-3.9","A_Id":69511958,"CreationDate":"2021-10-10T02:45:00.000","Title":"Pip installs packages into the python 3.9 folder, but the executable commands are in the python 3.8","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I've seen mention of sigkill occurring for others but I think my use case is slightly different.\nI'm using the managed airflow service through gcp cloud composer, running airflow 2. I have 3 worker nodes all set to the default instance creation settings.\nThe environment runs dags fairly smoothly for the most part (api calls, moving files from on prem) however it would seem as though it's having a terribly hard time executing a couple of slightly larger jobs.\nOne of these jobs uses a samba connector to incrementally backfill missing data and store on gcs. The other is a salesforce api connector.\nThese jobs run locally with absolutely no issue so I'm wondering why I'm encountering these issues.
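The `-m` trick from the pip answer above can also be checked from inside Python: `sys.executable` is the exact binary that a `python -m pip` invocation installs against, which helps diagnose this kind of 3.8/3.9 mix-up. A small stdlib-only sketch:

```python
import os
import sys

# The absolute path of the interpreter currently running. Installing with
# "<this path> -m pip install ..." ties packages and their console scripts
# to this exact Python, not whichever "pip3" happens to be first on PATH.
print(sys.executable)

# Console-script entry points are normally created in the bin/Scripts
# directory that matches the interpreter, so this shows where to look:
print(os.path.dirname(sys.executable))
```

Comparing this path between `python3.8` and `python3.9` makes it obvious which folder each interpreter's scripts land in.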
There should be plenty of memory to run these tasks as a cluster, although scaling up my cluster for just 2 jobs doesn't seem like it's particularly efficient.\nI have tried both dag and task timeouts. I've tried increasing the connection timeout on the samba client.\nSo could someone please share some insight into how I can get airflow to execute these tasks without killing the session - even if it does take longer.\nHappy to add more detail if required but I don't have the available data in front of me currently to share.","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":167,"Q_Id":69517796,"Users Score":0,"Answer":"Frustratingly, increasing resources meant the jobs could run. I don't know why the resources weren't enough as they really should've been. But optimisation for fully managed solutions isn't overly straightforward other than adding cost.","Q_Score":0,"Tags":"python,airflow,samba,google-cloud-composer","A_Id":69537131,"CreationDate":"2021-10-10T18:15:00.000","Title":"Airflow sigkill tasks on cloud composer","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"Unfortunately I can't use shell=True or cwd=whatever\/.., I don't want a subshell.\nNeither providing the full path of the command nor setting the PATH environment variable makes any difference. Calling shutil.which() on the command returns the expected path.
I've tried providing the command as a string and as a list.\nBut, as mentioned, what is really killing me is that the identical code works perfectly on a slightly newer R-Pi with an identical directory structure.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":49,"Q_Id":69520936,"Users Score":0,"Answer":"(Moved solution on behalf of the question author to the answer section.)\nThe second R-Pi had no \/usr\/bin\/bash, only \/bin\/bash.\nShell files being called by Popen had #!\/usr\/bin\/bash headers, so 'file not found' was the correct error, but it was the interpreter not found, rather than the script not found, as I'd assumed!","Q_Score":0,"Tags":"python-3.x,raspberry-pi,subprocess","A_Id":69605705,"CreationDate":"2021-10-11T04:11:00.000","Title":"python3.7 subprocess.Popen throws \"[Errno 2] No such file or directory\" on one R-Pi but not the other, both running buster","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have an issue with my (dockerized) application consuming an ever-increasing amount of memory until the Linux kernel kills it. In the container's docker logs I only got an ominous Killed message without further context - only after checking the kernel logs (cat \/var\/log\/kern.log) and docker stats did I realize what was happening - memory usage was going up by ~10MB per second.\nThe application is an asynchronous grpc-server with several concurrent tasks (using return ensure_future(self._continuous_function) to keep the function's task up and running, async so the server endpoints are not blocked).","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":462,"Q_Id":69521780,"Users Score":0,"Answer":"I figured out that ensure_future() caused my memory leak issue - or rather the return I used with it.
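A minimal sketch of one way to avoid this class of leak (all names here are hypothetical, not the poster's actual code): drive the continuous work from a single long-lived task that loops internally, instead of returning a fresh `ensure_future()` handle on every cycle.

```python
import asyncio

async def continuous(state, cycles=100):
    # One long-lived coroutine looping internally: no new Task object is
    # created per cycle, so nothing accumulates waiting to be collected.
    for _ in range(cycles):
        state["done"] += 1
        await asyncio.sleep(0)  # yield to the event loop, like a short wait

async def main():
    state = {"done": 0}
    task = asyncio.ensure_future(continuous(state))  # keep a single handle
    await task  # or task.cancel() at server shutdown
    return state["done"]

print(asyncio.run(main()))  # prints 100
```

The single `task` handle can be cancelled cleanly at shutdown, and there is no per-cycle chain of task references for the garbage collector to chase.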
Apparently the return meant that a reference to the original task was being kept in memory (instead of being garbage collected), and the function only had a very short wait time associated with it (1ms) - so memory kept building up fast.\nSo after removing the leak my app now holds steady at about 60MB of memory. I hope this helps somebody else out; I couldn't find a reference to this exact issue, which is why I'm sharing it here.","Q_Score":0,"Tags":"python-asyncio,python-3.9","A_Id":69521781,"CreationDate":"2021-10-11T06:17:00.000","Title":"Asyncio with memory leak (Python)","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"In my docker-compose.yaml I am booting up Apache Pulsar's standalone environment and then running some python scripts to write an output.txt and print some text.\nNormally if I run this locally, the print statements go to the terminal and the output.txt gets written and saved at the parent directory. But when I run this process with my docker-compose.yaml nothing gets printed because my terminal is now dedicated to the status updates since both images are meant to persist.\nSo where do I see my output files and print statements to determine I'm getting the expected behavior?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":218,"Q_Id":69548029,"Users Score":0,"Answer":"For the files, I would recommend you add a volume mount to the container, and write out the files in that location in the container.
Then you can access them as you would any other file locally.\nFor the print statements, docker-compose logs should let you view the output from the containers.","Q_Score":0,"Tags":"python,docker,docker-compose","A_Id":69548105,"CreationDate":"2021-10-12T23:27:00.000","Title":"Where to see print statements or find outputs generated in Docker container?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"Here\u2019s one I\u2019m hoping should be quite simple:\nHow can I get my UI window to execute a command when the user closes the window?\nI\u2019ve created a scriptJob that runs while the window is open and I\u2019d like to run a command to terminate it when the window is closed, and avoid having any scriptJobs running when the tool isn\u2019t in use.\nUsing Python in Maya2020\nAny pointers greatly appreciated","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":94,"Q_Id":69561722,"Users Score":0,"Answer":"You can add a parent parameter to the scriptJob command to attach the scriptJob to a UI element, have a look at the docs: parent - Attaches this job to a piece of maya UI.
When the UI is destroyed, the job will be killed along with it.","Q_Score":0,"Tags":"python,user-interface,maya,autodesk","A_Id":69567632,"CreationDate":"2021-10-13T20:06:00.000","Title":"Execute command when UI window is closed | Maya\/Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"Is it possible for someone who doesn't have python installed to double-click my installer and through that get an executable, the python runtime, and the needed libraries?\nFor that I would probably also want to know if I can type anything other than python path\/file.py to run the python script, because for that you would need to add the python installation to the path. Is there a way to bypass this, e.g. to type something else instead of python, like manually typing the location of the executable?\nMy boss asked me this today. I don\u2019t think that\u2019s a common task to do with python, but if I learn how to do it I will also understand more easily how one normally writes installers and programs with GUIs. I do like python though.\nI guess this is not an easy question. I really am interested though. If you can only answer the second question, I would also be very grateful because then I can figure out the rest, I think.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":60,"Q_Id":69562437,"Users Score":1,"Answer":"You can use pyinstaller to create an executable.
Run pip install pyinstaller in cmd.\nThen cd into the directory where your file is, and type pyinstaller --onefile filename.py","Q_Score":0,"Tags":"python,user-interface,windows-installer","A_Id":69562478,"CreationDate":"2021-10-13T21:17:00.000","Title":"How to make an installer for a python program","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have installed python using Visual Studio, but I can't use python from the command prompt. I also want to run python from the command prompt, but it is only accessible from Visual Studio. I have tried adding the path of the directory in which Visual Studio has installed python to the user environment variables, but typing python in the command prompt opens up Windows Store.\nPlease, somebody help me with this. Is there any way around this, or do I have to install python separately too?","AnswerCount":3,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":119,"Q_Id":69566639,"Users Score":0,"Answer":"Thanks for replying.
I figured out the answer myself, and thought to answer it for anyone in the future who comes across the same issue.\nI went into Settings->Apps,\nfound the python installation, clicked on it, and selected modify.\nThen click on \"modify installation\" when the python installation repair window opens, then again click on \"modify\", and make sure to select \"Add python to path\" and also the py launcher option.\nThat fixed the problem for me.\nThanks, again.","Q_Score":0,"Tags":"python","A_Id":69581941,"CreationDate":"2021-10-14T07:26:00.000","Title":"How to run python from the terminal if I installed it using Visual Studio?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"What I wish to understand is what is good\/bad practice, and why, when it comes to imports. What I want to understand is the agreed-upon view of the community on the matter, if there's one in a PEP document or similar.\nWhat I see normally is people have a python environment, use conda\/pip to install packages, and all that's needed in the code is to use \"import X\" (and variants).
In my current understanding this is the right way to do things.\nWhenever python interacts with C++ at my firm, though, it always ends up with the need to use sys.path and absolute imports (but we have some standardized paths to use as \"base\" and usually define relative paths based on those).\nThere are 2 major cases:\n\nPython wrapper for C++ library (pybind\/ctype\/etc) - in this case the user python code must use sys.path to specify where the C++ library to import is.\nA project that establishes communications between python and C++ (say C++ server, python clients, TCP connections and flatbuffer serialization between the two) - Here the python code lives together with the C++ code, and some python files end up using sys.path to import python modules from the same project that live in a different directory - Essentially we deploy the python together with the C++ through our C++ deployment procedure.\n\nI am not fully sure if we can do something better for case #1, but case #2 seems quite unnecessary, and basically forced just by the choice to not deploy the python code through a python package manager. That choice ends up forcing us to use sys.path on both the library and user code.\nThis seems very bad to me as basically this way of doing things doesn't allow us to fully manage our python environments (since we have some libraries that we import even though they are not technically installed in the environment), and that is probably why I have a negative view of using sys.path for imports.
But I need to find out if I'm right, and if so I need some official (or semi-official) documents to support my case if I'm to propose fixes to our procedures.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":48,"Q_Id":69567639,"Users Score":0,"Answer":"For your scenario 2, my understanding is you have some C++ and accompanying python in one place, and a separate python project wants to import that python.\nCould you structure the imported python as a package and install it to your environment with pip install path\/to\/package? If it's a package that you'll continue to edit, you can add the -e flag to pip install so that when the package changes your imports get the latest code.","Q_Score":0,"Tags":"python,import","A_Id":69567996,"CreationDate":"2021-10-14T08:42:00.000","Title":"Python sys.path vs import","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am working on a Python web app that uses Celery to schedule and execute user job requests.\nMost of the time the requests submitted by a user can't be resolved immediately and thus it makes sense to me to schedule them in a queue.\nHowever, now that I have the whole queuing architecture in place, I'm confused about whether I should delegate all the request processing logic to the queue\/workers or if I should leave some of the work to the webserver itself.\nFor example, apart from the job scheduling, there are times where a user only needs to perform a simple database query, or retrieve a static JSON file. Should I also delegate these \"synchronous\" requests to the queue\/workers?\nRight now, my webserver controllers don't do anything except validating incoming JSON request schemas and forwarding them to the queue.
What are the pros and cons of having a dumb webserver like this?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":31,"Q_Id":69575344,"Users Score":1,"Answer":"I believe the way you have it right now, plus giving the workers the small jobs, is good. That way the workers would be overloaded first in the event of an attack or huge request influx. :)","Q_Score":1,"Tags":"python,celery,task-queue","A_Id":69575452,"CreationDate":"2021-10-14T17:58:00.000","Title":"Which requests should be handled by the webserver and which by a task queue worker?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I built a docker container with anaconda and other packages. In the container, I used echo \"export PATH=\/root\/anaconda3\/bin:$PATH\" >> ~\/.bashrc and \/bin\/bash -c \"source ~\/.bashrc\"; the conda command worked fine, and the python version was correct.\nHowever, when I committed and pushed the container to the docker hub, and then pulled it elsewhere, it gave me \"bash: conda: command not found\" when I tried to use the conda command.\nCould anyone tell me how to solve this problem? Any advice or suggestion will be appreciated.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":106,"Q_Id":69581970,"Users Score":0,"Answer":"I found the cause of this problem myself.\nI built and committed this docker image on one machine with root permission, and Anaconda3 was installed in \/root\/anaconda3\/bin. I then pulled and ran this image on another machine without root permission, where the installation path can't be accessed. I thus got the \"conda command not found\" error.\nTo solve this problem, it seems that I have to turn to someone with root permission for help.
Thanks to @David Maze anyway.","Q_Score":0,"Tags":"python,docker,anaconda,environment-variables","A_Id":69615665,"CreationDate":"2021-10-15T08:36:00.000","Title":"conda command not found in docker container after commit and push","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"What is the difference between the .\/file.py & the python file.py commands?\nWhat I See\nI usually see people using .\/file.py when they are using a terminal-text editor like vim or nano or emacs OR when they are using linux based operating systems like Ubuntu or Arch Linux.\nAnd I usually see python file.py from the ones who are using some other operating system. I\u2019m probably not correct. But if it is so, what is the difference between the two?\nThank You!","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":56,"Q_Id":69604762,"Users Score":0,"Answer":"On linux-based operating systems, when you execute a text file that starts with #!\/bin\/python (shebang syntax), it will actually run \/bin\/python filename. It is faster to do this than having to type python all the time, and it is easier to make the file executable, but there are no major differences.","Q_Score":1,"Tags":"python,terminal,operating-system,editor","A_Id":69604799,"CreationDate":"2021-10-17T13:25:00.000","Title":"\u201c.\/file.py\u201d VS \u201cpython file.py\u201d","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"After installing python 3.10.0 (the latest version), the command prompt shows me version 2.7.2, but I don't have any other version installed on my Windows system","AnswerCount":2,"Available
Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":2098,"Q_Id":69612874,"Users Score":1,"Answer":"If you run python --version and it returns \"Python 2.7.2\", you have it installed and it is the first found python in your current path. That doesn't mean you don't also have 3.10.0 installed, it just means that it isn't the first one found in your path.","Q_Score":0,"Tags":"python","A_Id":69612898,"CreationDate":"2021-10-18T08:25:00.000","Title":"I installed python 3.10.0 and the command prompt shows me version 2.7.2","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"In Linux, the multiprocessing module uses fork as the default start method for a new process. Why is it then necessary to pickle the function passed to map? As far as I understand all the state of the process is cloned, including the functions. I can imagine why that's necessary if spawn is used but not for fork.","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":142,"Q_Id":69618993,"Users Score":2,"Answer":"Job-methods like .map() don't start new processes, so exploiting fork at this point would not be an option. Pool uses IPC to pass arguments to already running worker-processes and this always requires serialization (pickling). It seems there's some deeper misunderstanding with what pickling here involves, though.\nWhen you look at job-methods like .map(), the pickling for your function here just results in the qualified function-name getting sent as a string, and the receiving process during unpickling basically just looks up the function in its global scope for a reference to it again.\nNow between spawn and fork there is a difference, but it already materializes as soon as worker-processes boot up (starting with initializing the Pool).
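The claim that a job argument's function pickles essentially as its qualified name can be checked directly with the stdlib; `work` below is a hypothetical stand-in for a function passed to `Pool.map`:

```python
import pickle

def work(x):
    return x * 2

payload = pickle.dumps(work)
# The serialized form is tiny: module path plus qualified name as strings,
# not the function's bytecode.
assert b"work" in payload

# Unpickling looks the name up again in the receiving process's global
# scope; in the same process that yields the very same function object.
clone = pickle.loads(payload)
assert clone is work
print(clone(21))  # prints 42
```

This is also why lambdas and locally defined functions fail to pickle: there is no importable qualified name for the worker process to look up.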
With spawn-context, the new worker needs to build up all reachable global objects from scratch; with fork they're already there. So your function will be cloned once during boot-up when you use fork, and it will save a little time.\nWhen you start sending jobs later, unpickling your sent function in the worker, with any context, just means re-referencing the function from global scope again. That's why the function needs to exist before you instantiate the pool and workers are launched, even for usage with spawn-context.\nSo the inconveniences you might experience with not being able to pickle local or unnamed functions (lambdas) are rooted in the problem of regaining a reference to your (then) already existing function in the worker-processes. Whether spawn or fork was used to set up the worker-processes beforehand doesn't make a difference at this point.","Q_Score":3,"Tags":"python,fork,pickle,python-multiprocessing","A_Id":69623158,"CreationDate":"2021-10-18T15:47:00.000","Title":"Why is the function passed to Pool.map pickled when multiprocessing uses fork as a starting method?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"For the sake of experience, I'd prefer to try this in cmd and not use get-pip from pypi.org.\nI tried changing my Windows account from user to administrator, and following a YouTube instruction video.\nI was given the following error message:\nSuccessfully uninstalled pip-20.3.1\nERROR: Could not install packages due to an EnvironmentError: [WinError 5] Access is denied: 'C:\\Users\\User\\AppData\\Local\\Temp\\pip-uninstall-j78tgiwv\\pip.exe'\nConsider using the --user option or check the permissions.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":718,"Q_Id":69623079,"Users Score":0,"Answer":"Running pip install --upgrade --user pip (like pip is
suggesting) will install pip in the user site packages (not the global). This works fine in some cases, but the upgraded package will only be available to the user who installed it.\nFrom what I can read, your message seems harmless (it failed on a temporary file); if you run pip --version you should see the latest version. However, if that doesn't work, you may have to add the --user option to your command.","Q_Score":0,"Tags":"python,pip,upgrade","A_Id":69623271,"CreationDate":"2021-10-18T22:08:00.000","Title":"Attempting to upgrade pip from version 20.3.1 to version 21.3. How do I use the `--user` option?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm currently learning about Docker. I'm trying to use it in my python project (I'm using Django).\nIn my Dockerfile, I want my image to install the dependencies of my project into each new container.\nI just created a requirements.txt file using the command tool 'pipreqs'.\nAfter looking at the content of this file, I realize that I have 2 other files related to the dependencies:\n\nPipfile\nPipfile.lock\n\nI think they were created and updated when I was using the pipenv command.\nMy question is: which one of these files should I use in my Dockerfile? Pipfile, Pipfile.lock or requirements.txt?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":107,"Q_Id":69626224,"Users Score":0,"Answer":"The default choice is requirements.txt with pinned versions.\nVersions can be pinned by pip freeze > requirements.txt or pipenv lock -r > requirements.txt.\nYou need Pipfile and Pipfile.lock if you are going to use pipenv inside the container.
Then pipenv install will use your Pipfile.lock.","Q_Score":0,"Tags":"python,docker,dependencies,requirements,pipfile","A_Id":69628031,"CreationDate":"2021-10-19T06:50:00.000","Title":"Which Dependencies File Should I Use for my Dockerfile?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"In Airflow for a DAG, I'm writing a monitoring task which will run again and again until a certain condition is met. In this task, when some event happens, I need to store the timestamp and retrieve this value in the next task run (for the same task) and update it again if required.\nWhat's the best way to store this value?\nSo far I have tried the below approaches:\n\nstoring in xcoms, but this value couldn't be retrieved in the next task run as the xcom variable gets deleted for each new task run for the same DAG run.\nstoring in Airflow Variables - this solves the purpose, I could store, update, delete as needed, but it doesn't look clean for my use case as a lot of new Variables are getting generated per DAG and we have over 2k DAGs (pipelines).\nglobal variables in the python class, but the value gets overridden in the next task run.\n\nAny suggestion would be helpful.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":233,"Q_Id":69628179,"Users Score":0,"Answer":"If you have a task that is re-run with the same \"Execution Date\", using Airflow Variables is your best choice. XCom will be deleted by definition when you re-run the same task with the same execution date and it won't change.\nBasically what you want to do is to store the \"state\" of task execution, and it's kinda \"against\" Airflow's principle of idempotent tasks (where re-running the task should produce \"final\" results of running the task every time you run it).
You, on the other hand, want to store the state of the task between re-runs and have it behave differently on subsequent re-runs, based on the stored state.\nAnother option you could use is to store the state in external storage (for example an object in S3). This might be better for performance if you do not want to load your DB too much. You could come up with a naming \"convention\" for such a state object, pull it at the start and push it when you finish the task.","Q_Score":0,"Tags":"python,airflow","A_Id":69651519,"CreationDate":"2021-10-19T09:18:00.000","Title":"Airflow: Best way to store a value in Airflow task that could be retrieved in the recurring task runs","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"We're using Airflow for job scheduling, and calling Apache Beam for the ETL step. The data source is unstructured files (batch) which need to be parsed before they can be turned into PCollections.
It appears to me that the two best options available are:\n\nAdd a preprocessing node to the Airflow DAG to parse the files and write to a parquet file, which is then processed by Beam.\nWrite a custom IO connector in Beam to parse the unstructured file and create the PCollection.\n\nWhich option better fits Beam best practices?","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":146,"Q_Id":69638310,"Users Score":1,"Answer":"I'd vote for 1) if you need to re-use these files for other pipelines later and parsing these unstructured files takes a significant amount of time.\nOn the other hand, if parsing these files can be run in parallel and you don't need to wait for all files to be ready, then I'd choose 2).\nAnyway, I think it will be a trade-off that depends on your needs and input data.","Q_Score":1,"Tags":"python,architecture,airflow,etl,apache-beam","A_Id":69650248,"CreationDate":"2021-10-19T22:31:00.000","Title":"Best Practice for processing unstructured data with Apache Beam","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"We're using Airflow for job scheduling, and calling Apache Beam for the ETL step. The data source is unstructured files (batch) which need to be parsed before they can be turned into PCollections.
It appears to me that the two best options available are:\n\nAdd a preprocessing node to the Airflow DAG to parse the files and write to a parquet file, which is then processed by Beam.\nWrite a custom IO connector in Beam to parse the unstructured file and create the PCollection.\n\nWhich option better fits Beam best practices?","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":146,"Q_Id":69638310,"Users Score":1,"Answer":"In my opinion, the most important part of an ETL is not what it does when it works perfectly, but how you handle the rejects (errors, incomplete data, etc.).\nIf you can reuse the code, then #1 works, but my bet is #2 because all the code that handles the ETL lives together.\nIf you don't want to write a custom IO connector but want to execute some external application to parse the data, you can use a custom Docker container for a Dataflow job.","Q_Score":1,"Tags":"python,architecture,airflow,etl,apache-beam","A_Id":69654116,"CreationDate":"2021-10-19T22:31:00.000","Title":"Best Practice for processing unstructured data with Apache Beam","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a python project and I need to use its venv in the terminal offered by PyCharm. However, some dependencies show as not installed even though they are present in the venv folder. I think the project is using another venv in the Terminal. How can I check which venv is being used in the terminal, and how can I change it to be the one in the project's folder?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":229,"Q_Id":69658774,"Users Score":0,"Answer":"If you execute any program using the 'run' button in PyCharm, it will show you the python it is using; in your case, it should be the venv.
Now, from the Terminal, use that command instead of just python\/python3.\nFor example, if you are running a program using the run button, you would see something like\n\/Users\/soumyabratakole\/.local\/share\/virtualenvs\/pythonProject-8Ymrt2w6\/bin\/python \/Users\/soumyabratakole\/PycharmProjects\/pythonProject\/main.py\nNow you can use \/Users\/soumyabratakole\/.local\/share\/virtualenvs\/pythonProject-8Ymrt2w6\/bin\/python instead of just python from your terminal to use your venv.","Q_Score":1,"Tags":"python,terminal,pycharm,virtualenv,virtual","A_Id":69658926,"CreationDate":"2021-10-21T08:49:00.000","Title":"How to view which venv is used in pycharm in terminal?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"Having trouble with pip install web3 using python3.10 on macOS Big Sur 11.6.\nIt says Building wheel for cytoolz error.\nI have already tried the following:\n\nxcode-select --install\nrunning in a virtual environment\nbumping down to python3.7 (worked for others)\n\nAny more new things to try?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":454,"Q_Id":69680990,"Users Score":0,"Answer":"I had the same issue with Python 3.10.0.
I downgraded to Python 3.9.7 and was able to build the cytoolz wheel, and the eth-brownie installation worked!","Q_Score":0,"Tags":"python,web3","A_Id":69875995,"CreationDate":"2021-10-22T17:32:00.000","Title":"having problem installing web3 with pip installer (cytoolz error)","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":1},{"Question":"I have a script on my RasPi that, on startup, is in charge of moving and erasing a bunch of old files.\nThe script, named \"erasePi.py\", is called from rc.local.\nI have had some issues in the past few days with this script lacking the required permissions to erase and copy files.\nIn the end, I found I was calling erasePi.py from rc.local with the following line:\nsudo python \/home\/...\/erasePi.py\nand I changed it to:\npython \/home\/...\/erasePi.py\nsince all the scripts run from rc.local have root permissions.\nEverything is working now, but I would like to ask whether this solution is a coincidence of various factors or whether, with that sudo, I was simply triggering abnormal behavior in Raspbian.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":101,"Q_Id":69694670,"Users Score":0,"Answer":"It seems that the sudo command was the issue.\nAs you mentioned in your comment, the rc.local file has root permissions.
So it is not necessary to include the sudo command.\nI hope this helps.\nKind regards","Q_Score":0,"Tags":"python,file,raspberry-pi,sudo","A_Id":69729199,"CreationDate":"2021-10-24T07:26:00.000","Title":"Delete files on a Raspberry Pi with Python script - permission error","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am designing an HID USB device which should be able to auto-execute a little Python\/C++\/(...) login program when connected to a computer, to allow the user to enter a password to access the contents of the memory stick.\nAny idea how to start, or what I should consider for this program?\nImportant:\n\nThe OS will be Windows (probably 8+) or Linux (probably CentOS)\nComputers will not have any program installed to interact with this USB device.\nIt is all about just and only the USB device.\nThanks in advance","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":88,"Q_Id":69710154,"Users Score":0,"Answer":"The user inserts the USB device into the PC\nThe PC recognizes the USB device as a memory disk\n\nThe memory disk is empty (because the password has not been entered), except for\na single file named enter-password.txt\n\n\nThe user opens enter-password.txt, enters the password, and saves\nThe real contents of the memory disk appear on the fly\n\nIt's compatible with all OSes, and no program is required in advance on the PC.","Q_Score":0,"Tags":"python,c++,usb,hid","A_Id":70013644,"CreationDate":"2021-10-25T14:47:00.000","Title":"Autoexecutable program on HID memory stick","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have developed a personal site locally with Python 3.8.\nWhen I deployed the
site to an AWS Ubuntu EC2 server using the code files from my local environment, saving my blog contents gives the following error. By the way, saving works fine under the server's Python 3.6, which had been tested.\nFile \"\/home\/ubuntu\/.local\/lib\/python3.6\/site-packages\/whoosh\/index.py\", line 123, in open_dir\nreturn FileIndex(storage, schema=schema, indexname=indexname)\nFile \"\/home\/ubuntu\/.local\/lib\/python3.6\/site-packages\/whoosh\/index.py\", line 421, in init\nTOC.read(self.storage, self.indexname, schema=self._schema)\nFile \"\/home\/ubuntu\/.local\/lib\/python3.6\/site-packages\/whoosh\/index.py\", line 664, in read\nsegments = stream.read_pickle()\nFile \"\/home\/ubuntu\/.local\/lib\/python3.6\/site-packages\/whoosh\/filedb\/structfile.py\", line 245, in read_pickle\nreturn load_pickle(self.file)\nValueError: unsupported pickle protocol: 5\nI am wondering whether this could be caused by a file from the local environment.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1012,"Q_Id":69733400,"Users Score":0,"Answer":"I have solved it: I just deleted the protocol-5 pickle file that had been generated locally by Python 3.8. You can detect the file name at load_pickle(self.file) in the code, for example with print(self.file); that gives you the file's location and name.","Q_Score":0,"Tags":"amazon-web-services,amazon-ec2,flask-sqlalchemy,python-3.6,ubuntu-18.04","A_Id":69733949,"CreationDate":"2021-10-27T05:48:00.000","Title":"python3.6: ValueError: unsupported pickle protocol: 5","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I selected my python interpreter to be the one pipenv created with pipenv shell in vscode.
Then, if I open the terminal\/cmd manually or run a script using the play button up on the right, the new terminal\/cmd opened will run the activate script, which runs the terminal in the virtual environment. My question here is: is it using my pipenv environment or a venv environment? Because if I run pipenv shell or pipenv install, it will say \"Pipenv found itself running within a virtual environment, so it will automatically use that environment...\". And also, if I type exit, instead of terminating that environment, it closes the terminal.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":567,"Q_Id":69734988,"Users Score":0,"Answer":"This is the way I usually interact with pipenv:\n\nTo check whether you're on pipenv and not venv, run pipenv graph.\nIf the terminal prints Courtesy Notice: Pipenv found itself running within a virtual environment(...), then it means you're in a regular venv.\nYou can then deactivate and pipenv shell if you want to do it clean, or just go straight to pipenv shell (I don't know if there is any difference), and the terminal will load environment variables and activate the pipenv environment for the remainder of its duration.\nAfter this you can reload the interpreters and pick the Python(...):pipenv option.\nIf you exit here, you will return to your regular venv, after which you can exit to close the terminal or deactivate to return to your global environment.\n\nvenv uses the same folder as pipenv.
The installed packages are also the same (you can check by running pipenv graph and pip list), so it's just a matter of running pipenv shell manually.\nI'd love to know if there is some way to activate the environment in VS Code automatically from pipenv shell instead.","Q_Score":0,"Tags":"python,visual-studio-code,pipenv","A_Id":70113339,"CreationDate":"2021-10-27T08:09:00.000","Title":"Is python interpreter in vscode using pipenv or venv","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I selected my python interpreter to be the one pipenv created with pipenv shell in vscode. Then, if I open the terminal\/cmd manually or run a script using the play button up on the right, the new terminal\/cmd opened will run the activate script, which runs the terminal in the virtual environment. My question here is: is it using my pipenv environment or a venv environment? Because if I run pipenv shell or pipenv install, it will say \"Pipenv found itself running within a virtual environment, so it will automatically use that environment...\". And also, if I type exit, instead of terminating that environment, it closes the terminal.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":567,"Q_Id":69734988,"Users Score":0,"Answer":"You are using the python interpreter shown at the bottom-left of VS Code.\nEven if you activate a virtual environment created by pipenv in the terminal, it will have no effect on a new terminal used to execute the Python code.\nAnd if pipenv finds it is already in a virtual environment, it will not create a new virtual environment with the command pipenv install. And if you execute pipenv shell, it is still the virtual environment you activated before.
And you can check which python you are using to verify it.","Q_Score":0,"Tags":"python,visual-studio-code,pipenv","A_Id":69750429,"CreationDate":"2021-10-27T08:09:00.000","Title":"Is python interpreter in vscode using pipenv or venv","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I want to create an Azure function app using python.\nIn my app, I need to convert a pdf to an image.\nThis needs to use poppler-utils.\nWhen I run the app and try to convert the pdf to an image with convert_from_bytes(file_name,500), I get this error: Unable to get page count. Is poppler installed and in PATH?\nHow can I fix this problem?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":148,"Q_Id":69737043,"Users Score":0,"Answer":"I dockerized the Azure function and it worked completely.\nYou just need to add a Dockerfile in the root of the project like this:\nFROM mcr.microsoft.com\/azure-functions\/python:3.0-python3.7\nENV AzureWebJobsScriptRoot=\/home\/site\/wwwroot \\ \nAzureFunctionsJobHost__Logging__Console__IsEnabled=true \nCOPY requirements.txt \/ \nRUN apt-get update -y \nRUN apt-get install poppler-utils -y \nRUN pip install -r \/requirements.txt \nCOPY . 
\/home\/site\/wwwroot\nThen I built the Docker image and pushed it to the Azure Container Registry.\n\nNavigate to your Azure portal\nCreate an Azure function (Docker)\nNavigate to the Deployment Center\nPoint it at your Docker image\nand it will work OK","Q_Score":1,"Tags":"python,azure","A_Id":69737044,"CreationDate":"2021-10-27T10:33:00.000","Title":"azure function poppler utils","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I don't use Xcode and it takes a lot of space. I'm wondering if it's OK to delete it after downloading it for Homebrew to update to the latest Python in the Terminal.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":53,"Q_Id":69764305,"Users Score":0,"Answer":"You won't need Xcode to use Homebrew, but some of the software and components you'll want to install using brew will rely on Xcode's Command Line Tools package.\nYou can leave it installed, or you can delete it and download it again when you need it.\nI would recommend leaving it installed and maybe getting an expansion for your storage.\nLG Merlin","Q_Score":0,"Tags":"python-3.x,xcode,homebrew","A_Id":69764651,"CreationDate":"2021-10-29T06:04:00.000","Title":"Is it OK to uninstall XCode after using it to update Python version on terminal through Homebrew?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"Someone please help me with this error.\nAfter installing pipx successfully - $ python3 -m pip install --user pipx &&\n$ python3 -m pipx ensurepath\nSuccessfully installed pipx-0.16.4\nWhen I run this command - $ pipx install eth-brownie\nI'm getting this error - $ bash: pipx: command not found","AnswerCount":1,"Available
Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":286,"Q_Id":69767756,"Users Score":1,"Answer":"Try using pip3 install eth-brownie. Worked for me.","Q_Score":1,"Tags":"python,installation,solidity,brownie,pipx","A_Id":71096538,"CreationDate":"2021-10-29T11:01:00.000","Title":"Pipx installation Problem \/ eth-brownie installation error","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have an APScheduler application running in the background and want to be able to interrupt certain jobs after they have started. I want to do this to have more control over the processes and to prevent them from affecting other jobs.\nI looked for solutions using APScheduler itself, but what I want seems to be more like interrupting a function that is already running.\nIs there a way to do this without turning off my program? I don't want to delete the job or stop the application as a whole, just to stop the process in specific and unpredictable situations.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":74,"Q_Id":69772923,"Users Score":1,"Answer":"On APScheduler 3, the only way to do this is to set a globally accessible variable, or a property thereof, to some value that is then explicitly checked by the target function. So your code will have to support the notion of being interrupted. On APScheduler 4 (not released yet, as of this writing), there is going to be better cancellation support, so coroutine-based jobs can be externally cancelled without explicitly supporting this.
Regular functions still need explicit checks, but there will now be a built-in mechanism for that.","Q_Score":0,"Tags":"python,function,apscheduler","A_Id":69778537,"CreationDate":"2021-10-29T17:46:00.000","Title":"How to interrupt a function that is already running in Python (APScheduler)?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I created a script that runs when I press a combination of keys, meaning that I call my \"script.py\" from my keyboard's custom shortcuts.\nIt does not work properly because, when I run my script from the terminal, I have to type my sudo password.\nJust to be more clear, if I run my script without sudo, it shows this: ImportError: You must be root to use this library on linux.\nSo, it must be run like this: sudo python3 script.py, and then it asks for my password (which I'd rather it didn't).","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":111,"Q_Id":69786566,"Users Score":0,"Answer":"I fixed it.\nI created a file inside \"\/etc\/sudoers.d\" named \"script\", without an extension.\nInside the script file I put \"myuser ALL=(ALL) NOPASSWD: \/home\/user\/Documents\/Scripts\/script.py\".\nThen I gave the script the right permissions with \"chmod +x \/home\/user\/Documents\/Scripts\/script.py\".\nFinally, I put \"sudo \/home\/user\/Documents\/Scripts\/script.py\" on my custom shortcut.\nThanks!","Q_Score":1,"Tags":"python-3.x,linux-mint","A_Id":69849675,"CreationDate":"2021-10-31T11:46:00.000","Title":"How to run script on Linux without password?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I created a script that runs when I press a combination of keys, it 
means that I call my \"script.py\" from my keyboard's custom shortcuts.\nIt does not work properly because, when I run my script from the terminal, I have to type my sudo password.\nJust to be more clear, if I run my script without sudo, it shows this: ImportError: You must be root to use this library on linux.\nSo, it must be run like this: sudo python3 script.py, and then it asks for my password (which I'd rather it didn't).","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":111,"Q_Id":69786566,"Users Score":0,"Answer":"There could be a few things to blame for this:\n\nYou are logged in as root by default (not recommended), so when you installed the library with pip3 install theLibrary, you installed it on the sudo account. Try running sudo pip3 install theLibrary, or just create a new user account.\nYou are running this in IDLE, and you did not provide it with root privileges. You could fix this by running sudo idle first.\nThe quick fix: you could log in as sudo before running anything and stay as sudo for the duration of your session (so you wouldn't have to type your password every time) by running sudo -i and then typing your password once when prompted.","Q_Score":1,"Tags":"python-3.x,linux-mint","A_Id":69786894,"CreationDate":"2021-10-31T11:46:00.000","Title":"How to run script on Linux without password?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a few Celery workers that perform tasks that are not always that fast. The tasks are usually a bunch of HTTP requests and DB queries (using psycopg2 behind SQLAlchemy). I'm running in Kubernetes and the CPU usage is always fairly low (0.01 or so).
Celery automatically sets the concurrency to 2 (the number of cores of a single node), but I was wondering whether it would make sense to manually increase this number.\nI always read that the concurrency (processes?) should be the same as the number of cores, but if the worker does not use a whole core, couldn't it be more? Like concurrency=10? Or would that make no difference and I'm just missing the point of processes and concurrency?\nI couldn't find information on that. Thanks.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":68,"Q_Id":69802026,"Users Score":1,"Answer":"Everything is true. Celery automatically sets the number of cores as the concurrency, as it assumes that you will use the entire core (CPU-intensive tasks).\nIt sounds like you can increase the concurrency, as your tasks are mostly I\/O-bound (and the CPU is idle).\nTo be on the safe side, I would do it gradually: increase to 5 first, monitor, ensure that CPU usage is fine, and then go to 10.","Q_Score":0,"Tags":"python,celery","A_Id":69802278,"CreationDate":"2021-11-01T19:43:00.000","Title":"How to decide what concurrency to use for Celery in a two core machine (for API Requests & DB Queries bound)","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have installed Python 3.10 from deadsnakes on my Ubuntu 20.04 machine.\nHow do I use it? python3 --version returns Python 3.8.10, and python3.10 -m venv venv returns an error (I've installed python3-venv as well).","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":1452,"Q_Id":69830431,"Users Score":1,"Answer":"So I was having the exact same problem. I figured out that I actually had to run \"sudo apt-get install python3.10-full\" instead of just \"sudo apt-get install python3.10\".
I was then able to create a python3.10 virtual environment by executing \"python3.10 -m venv virt\".","Q_Score":1,"Tags":"python,python-3.x,ubuntu,python-venv","A_Id":71095440,"CreationDate":"2021-11-03T19:03:00.000","Title":"How to use Python3.10 on Ubuntu?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I created an executable on my Mac that runs Python code. It creates a csv file.\nWhen I run the file on its own, without freezing it using PyInstaller, it works fine on both Windows and Mac. Then, when I do freeze it, it works fine, but it won't output the csv file on Mac (it does on Windows).\nIt will only work if I run the exec file in the CLI with --user.\nI've already tried giving it permissions in the System Preferences and in the Sharing & Permissions section of the info window. To no avail.\nIs there anything I may have overlooked that others may know about?\nThanks in advance!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":29,"Q_Id":69846042,"Users Score":0,"Answer":"It turns out that the file was being output in the user folder and not in the same folder as the exec file.\nFor those of you working on Python scripts and freezing them using PyInstaller, remember that files you output are placed in your user folder on Mac.","Q_Score":0,"Tags":"python,command-line-interface,exec,pyinstaller","A_Id":69873851,"CreationDate":"2021-11-04T21:29:00.000","Title":"How can I run an exec file that ouputs a csv without using `--user` in Mac?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am new to VS Code and want to run Python in it for my college project.
I have seen that in VS Code all programs are executed in the Windows PowerShell terminal (by default). But the problem is that it also shows the address of the file being executed, which I don't want. So, could you please suggest which shell should be used in the terminal so that it only executes code and doesn't show any file address, and how can I change to it?","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":273,"Q_Id":69849163,"Users Score":0,"Answer":"Ctrl+Shift+` is the command used to open the terminal in VS Code. You can also try a Python shell or PowerShell extension in VS Code.","Q_Score":0,"Tags":"python,powershell,visual-studio-code,vscode-settings","A_Id":69849187,"CreationDate":"2021-11-05T05:59:00.000","Title":"How can I change a PowerShell terminal to a simple one in VScode for Python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"Hi, I got this problem:\ncx_Oracle.DatabaseError: DPI-1047: Cannot locate a 64-bit Oracle Client library: \"\/app\/oracle\/product\/10.2.0\/server\/lib\/libclntsh.so: cannot open shared object file: No such file or directory\". See https:\/\/cx-oracle.readthedocs.io\/en\/latest\/user_guide\/installation.html for help\nI searched and tried multiple fixes, but none seemed to work. I checked my Oracle installation multiple times and it's x64. I've set up the correct path with ldconfig and all, but it's still not working, and I can't figure out what the problem is.
(I'm a total beginner)","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":933,"Q_Id":69850014,"Users Score":1,"Answer":"Make sure that your environment variable ORACLE_HOME points to an Oracle installation; in your case:\nexport ORACLE_HOME=\/app\/oracle\/product\/10.2.0\/server\nand it is correct to unset LD_LIBRARY_PATH.\nBy default, Oracle searches in $ORACLE_HOME\/lib for the libraries.","Q_Score":0,"Tags":"python,oracle","A_Id":69855654,"CreationDate":"2021-11-05T07:52:00.000","Title":"cx_Oracle.DatabaseError: DPI-1047: Cannot locate a 64-bit Oracle Client library UBUNTU","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"Hi, is there a way to run a python script located\/saved in a certain gdrive folder?\nI need this because I want to run Python in a container but be able to modify the py script whenever I need.\nIs it possible? Or is there a better way?\nRegards","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":22,"Q_Id":69889306,"Users Score":0,"Answer":"Solved by downloading the script from GitHub when the container starts.","Q_Score":0,"Tags":"python,gdrive","A_Id":69945224,"CreationDate":"2021-11-08T20:04:00.000","Title":"Run python script saved on gdrive","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"So we are in the process of migrating from Azure SQL DB to Azure Synapse SQL Pools. I figured setting Airflow up to use the new database would be as simple as changing the server address and credentials, but when we try to connect to the database via Airflow it throws this error:\n40532, b'Cannot open server \"1433\" requested by the login. 
The login failed.\nWe use the generic mssqloperator and mssqlhook. I have verified the login info, pulled the server address directly from Synapse, and the Synapse connection string shows port 1433 is correct, so I am at a loss for what could be causing the issue. Any help would be appreciated.\nEdit: The Airflow Connection schema we use is the Microsoft Sql Server Connection, with host being {workspace}.sql.azuresynapse.net, login being the admin login, password being the admin password, and port being 1433","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":101,"Q_Id":69904134,"Users Score":0,"Answer":"The error is due to the port not being enabled.\nMake sure that port 1433 is open for outbound connections on all firewalls between the client and the internet.","Q_Score":0,"Tags":"python,airflow,azure-synapse","A_Id":69911363,"CreationDate":"2021-11-09T19:43:00.000","Title":"Migrating from Azure Sql to Azure Synapse, can't connect to Synapse in Airflow","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"The error the application throws is:\nERROR:saml2.sigver:check_sig: \nERROR:saml2.response:correctly_signed_response: Failed to verify signature\nERROR:saml2.entity:Signature Error: Failed to verify signature\nERROR:saml2.client_base:XML parse error: Failed to verify signature\nAnd it seems to be a Windows problem. Does anyone know how I should handle this? 
The command used to verify the XML is:\nC:\\Windows\\xmlsec1.exe --verify --enabled-reference-uris empty,same-doc --enabled-key-data raw-x509-cert --pubkey-cert-pem C:\\Users\\me\\AppData\\Local\\Temp\\tmp8wssc6_f.pem --id-attr:ID urn:oasis:names:tc:SAML:2.0:assertion:Assertion --node-id _579304c7-f1c4-5918-83ee-4b33c5df1e00 --output C:\\Users\\me\\AppData\\Local\\Temp\\tmpw9lbnowc.xml C:\\Users\\me\\AppData\\Local\\Temp\\tmpcg9l7jik.xml\nAnd it returns b\"\".\nThanks in advance.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":330,"Q_Id":69916491,"Users Score":0,"Answer":"For those who may face this problem in the future: Windows OS (I still don't know for certain whether the problem is caused by an OS particularity; I couldn't test it in other environments), pysaml2, and django-saml2-auth don't handle self-signed certificates very well. I was able to solve the problem by forking pysaml2\/django-saml2-auth and passing the cert files (.pem) downloaded from the IdP manually.","Q_Score":0,"Tags":"python,django,azure-active-directory,saml-2.0","A_Id":69991218,"CreationDate":"2021-11-10T16:03:00.000","Title":"Problems while implementing SSO using Django SAML2 Auth and AzureAD","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"The error the application throws is:\nERROR:saml2.sigver:check_sig: \nERROR:saml2.response:correctly_signed_response: Failed to verify signature\nERROR:saml2.entity:Signature Error: Failed to verify signature\nERROR:saml2.client_base:XML parse error: Failed to verify signature\nAnd it seems to be a Windows problem. Does anyone know how I should handle this? 
The command used to verify the XML is:\nC:\\Windows\\xmlsec1.exe --verify --enabled-reference-uris empty,same-doc --enabled-key-data raw-x509-cert --pubkey-cert-pem C:\\Users\\me\\AppData\\Local\\Temp\\tmp8wssc6_f.pem --id-attr:ID urn:oasis:names:tc:SAML:2.0:assertion:Assertion --node-id _579304c7-f1c4-5918-83ee-4b33c5df1e00 --output C:\\Users\\me\\AppData\\Local\\Temp\\tmpw9lbnowc.xml C:\\Users\\me\\AppData\\Local\\Temp\\tmpcg9l7jik.xml\nAnd it returns b\"\".\nThanks in advance.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":330,"Q_Id":69916491,"Users Score":0,"Answer":"Your question is a little vague. It seems that you have sent an AuthnRequest and got a response, and the application on your end is throwing the signature validation error. If that is correct, then you likely do not have the correct cacert from the identity provider defined in your application.\nQuestions about SAML and verifying XML signatures really need the original XML, ideally in base64, so it is possible to try to check the signature.","Q_Score":0,"Tags":"python,django,azure-active-directory,saml-2.0","A_Id":69929629,"CreationDate":"2021-11-10T16:03:00.000","Title":"Problems while implementing SSO using Django SAML2 Auth and AzureAD","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I am running a python shell program in AWS Glue but after running for around 10 minutes it's failing with the error Internal service error. The logs and error logs do not give any information. Most of the time it fails by just saying Internal service error, and rarely it runs for 2 days and gets timed out. The code uses pandas for transformations and it looks ok, it runs fine on a local machine, with the necessary changes done so that it works on AWS glue[where it read\/write files to s3 location instead of local folder]. 
What could be wrong here? Any input is appreciated.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":116,"Q_Id":69927162,"Users Score":0,"Answer":"This issue was figured out. The problem was the job was unable to download the dependent python libraries due to an access issue to the s3 bucket. Once the access issue was resolved the job started running fine.","Q_Score":0,"Tags":"python,amazon-web-services,dataframe,amazon-s3,aws-glue","A_Id":70111762,"CreationDate":"2021-11-11T11:00:00.000","Title":"AWS Glue python shell Job fails with Internal Service error","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"My pip command (On apple m1) is\nenv OPENBLAS=\/usr\/local\/opt\/appl\/contrib-appleclang\/OpenBLAS-0.3.18-ser CC='\/usr\/bin\/clang' CFLAGS='-Wno-error=implicit-function-declaration -Wno-error=strict-prototypes -fvisibility=default -mmacosx-version-min=10.12 -fPIC -pipe' CXX='\/usr\/bin\/clang++' CXXFLAGS='-stdlib=libc++ -Wno-error=implicit-function-declaration -Wno-error=strict-prototypes -fvisibility=default -mmacosx-version-min=10.12 -fPIC -pipe' F90='\/usr\/local\/gfortran\/bin\/gfortran' \/usr\/local\/opt\/appl\/contrib-appleclang\/Python-3.9.6-sersh\/bin\/python3 -vvv -m pip install --cache-dir=\/Users\/home\/research\/cary\/projects\/svnpkgs --no-binary :all: --no-use-pep517 scipy==1.7.2\nIn the output I see\ncwd: \/private\/var\/folders\/zz\/zyxvpxvq6csfxvn_n0000yyw0007qq\/T\/pip-install-xj7c16et\/scipy_675d2f67970e4d598db970286e95680a\/\nCan I set that directory?\nThen when I try going to that directory, it is gone\n$ ls \/private\/var\/folders\/zz\/zyxvpxvq6csfxvn_n0000yyw0007qq\/T\/p\nip-install-xj7c16et\/scipy_675d2f67970e4d598db970286e95680a\/\nls: 
\/private\/var\/folders\/zz\/zyxvpxvq6csfxvn_n0000yyw0007qq\/T\/pip-install-xj7c16et\/scipy_675d2f67970e4d598db970286e95680a\/: No such file or directory\nIs there a way to keep it around?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":22,"Q_Id":69932225,"Users Score":1,"Answer":"Yes, you can use --no-clean to keep the temporary directories.\nHowever, it sounds like you really want pip wheel, which will build a (re)installable .whl of your package. It accepts mostly the same arguments as pip install.","Q_Score":1,"Tags":"python,pip,scipy","A_Id":69932235,"CreationDate":"2021-11-11T17:07:00.000","Title":"Is there a way to set where python pip builds a module? And keep that build?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I want to import fortran files to my python script by using f2py.\nFor that I compile them via\nf2py -c -m my_lib *.f\nwhich produces the file \"my_lib.cpython-38-darwin.so\" which I import to my script by\nimport my_lib.\nOn my Intel-based Macbook that works well. 
However, running the script on an M1 machine yields the following error:\nImportError: dlopen(.\/my_lib.cpython-38-darwin.so, 0x0002): tried: '.\/my_lib.cpython-38-darwin.so' (mach-o file, but is an incompatible architecture (have 'arm64', need 'x86_64')), '\/usr\/local\/lib\/my_lib.cpython-38-darwin.so' (no such file), '\/usr\/lib\/my_lib.cpython-38-darwin.so' (no such file)\nSame happens when I start my terminal in Rosetta mode.\nAny idea how to solve this issue?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":97,"Q_Id":69942988,"Users Score":1,"Answer":"I don't know if you have solved the problem yet, but I found a solution on my Mac.\nRunning f2py on my Mac caused the same error.\nAfter some time searching for the cause, I tried using another Python environment (based on another base interpreter).\nBefore changing, I used an Anaconda environment.\nI think the error could be caused by an interpreter that's not running natively on the M1 chip. The compiled Fortran code must run natively on the Mac. 
So there could well be compatibility problems.\nNow I'm using the interpreter provided by the \"Developer Command Line Tools\" (in this case Python 3.8) to compile the source code and to build the .so file.\nI also use this base interpreter to run my script that has to include the Fortran package.\nBy doing so I was able to make it work on my Mac.","Q_Score":2,"Tags":"python,fortran,apple-m1,f2py","A_Id":71725708,"CreationDate":"2021-11-12T12:29:00.000","Title":"ImportError: How can I get F2PY working on Apple M1?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"When I run pipenv run pytest on my Windows machine, I get the following message:\n\nWarning: Your Pipfile requires python_version 3.7, but you are using unknown\n(C:\\Users\\d.\\v\\S\\python.exe).\n\n\n$ pipenv --rm and rebuilding the\nvirtual environment may resolve the issue.\n$ pipenv check will surely fail\n\nI have tried running pipenv --rm, pipenv install & re-running the tests but I get the same error message.\nUnder Programs and Features, I have Python 3.7.0 (64-bit) & Python Launcher so I'm not sure where it is getting the unknown version from.\nCan someone please point me in the right direction?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":631,"Q_Id":69943147,"Users Score":0,"Answer":"I had a similar issue; the following fixed it:\n\nUpdating pip using the 'c:\\users...\\python.exe -m pip install --upgrade pip' command\nadding PYTHONPATH (pointing to the directory with the Pipfile) to environment variables\nremoving the current virtualenv (pipenv --rm)\nand reinstalling (pipenv install)","Q_Score":1,"Tags":"python,pytest","A_Id":70285571,"CreationDate":"2021-11-12T12:43:00.000","Title":"Receiving Your Pipfile requires python_version 3.7, but you are using unknown, when 
running pytest even though I have Python 3.7.0 installed","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I deployed Apache Spark 3.2.0 using this script run from a distribution folder for Python:\n.\/bin\/docker-image-tool.sh -r -t my-tag -p .\/kubernetes\/dockerfiles\/spark\/bindings\/python\/Dockerfile build\nI can create a container under K8s using Spark-Submit just fine. My goal is to run spark-submit configured for client mode vs. local mode and expect additional containers will be created for the executors.\nDoes the image I created allow for this, or do I need to create a second image (without the -p option) using the docker-image tool and configure within a different container?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":57,"Q_Id":69980622,"Users Score":0,"Answer":"It turns out that only one image is needed if you're running PySpark. In client mode, the code spawns the executors and workers for you, and they run once you issue a spark-submit command. A big improvement from Spark version 2.4!","Q_Score":1,"Tags":"python,docker,apache-spark,kubernetes","A_Id":70038985,"CreationDate":"2021-11-15T20:41:00.000","Title":"Two separate images to run spark in client-mode using Kubernetes, Python with Apache-Spark 3.2.0?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"Is there any API or attributes that can be used or compared to determine if all messages in one topic partition are consumed? We are working on a test that will use another consumer in the same consumer group to check if the topic partition still has any message or not. 
One of our app services is also using Kafka to process internal events. So is there any way to sync the progress of message consumption?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":320,"Q_Id":70039321,"Users Score":0,"Answer":"Yes, you can use the admin API.\nFrom the admin API you can get the topic offsets for each partition, and a given consumer group's offsets. If all messages have been read, subtracting the latter from the former will evaluate to 0 for all partitions.","Q_Score":0,"Tags":"apache-kafka,kafka-consumer-api,confluent-kafka-python","A_Id":70087085,"CreationDate":"2021-11-19T18:24:00.000","Title":"Kafka consumer: how to check if all the messages in the topic partition are completely consumed?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"My python application uses concurrent.futures.ProcessPoolExecutor with 5 workers and each process makes multiple database queries.\nBetween the choice of giving each process its own db client, or alternatively , making all process to share a single client, which is considered more safe and conventional?","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":80,"Q_Id":70051159,"Users Score":0,"Answer":"It is better to use multithreading or an asynchronous approach instead of multiprocessing because it will consume fewer resources. 
That way you could use a single db connection, but I would recommend creating a separate session for each worker or coroutine to avoid some exceptions or problems with locking.","Q_Score":0,"Tags":"python,python-3.x,concurrent.futures","A_Id":70051218,"CreationDate":"2021-11-21T01:30:00.000","Title":"Sharing DB client among multiple processes in Python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"My python application uses concurrent.futures.ProcessPoolExecutor with 5 workers and each process makes multiple database queries.\nBetween the choice of giving each process its own db client, or alternatively , making all process to share a single client, which is considered more safe and conventional?","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":80,"Q_Id":70051159,"Users Score":1,"Answer":"Short answer: Give each process (that needs it) its own db client.\nLong answer: What problem are you trying to solve?\nSharing a DB client between processes basically doesn't happen; you'd have to have the one process which does have the DB client proxy the queries from the others, using more-or-less your own protocol. That can have benefits, if that protocol is specific to your application, but it will add complexity: you'll now have two different kinds of workers in your program, rather than just one kind, plus the protocol between them. You'd want to make sure that the benefits outweigh the additional complexity.\nSharing a DB client between threads is usually possible; you'd have to check the documentation to see which objects and operations are \"thread-safe\". 
However, since your application is otherwise CPU-heavy, threading is not suitable, due to Python limitations (the GIL).\nAt the same time, there's little cost to having a DB client in each process; you will in any case need some sort of client, it might as well be the direct one.\nThere isn't going to be much more IO, since that's mostly based on the total number of queries and amount of data, regardless of whether that comes from one process or gets spread among several. The only additional IO will be in the login, and that's not much.\nIf you're running out of connections at the database, you can either tune\/upgrade your database for more connections, or use a separate off-the-shelf \"connection pooler\" to share them; that's likely to be much better than trying to implement a connection pooler from scratch.\nMore generally, and this applies well beyond this particular question, it's often better to combine several off-the-shelf pieces in a straightforward way, than it is to try to put together a custom complex piece that does the whole thing all at once.\nSo, what problem are you trying to solve?","Q_Score":0,"Tags":"python,python-3.x,concurrent.futures","A_Id":70051503,"CreationDate":"2021-11-21T01:30:00.000","Title":"Sharing DB client among multiple processes in Python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"Our airflow retrieves reports from a website. We submit the parameters for the report and use a sensor to detect the status of it. Sometimes the system drops reports, and the sensor is setup to recognize when this happens. 
Instead of failing when this happens, is there a way to have the sensor task signal to the scheduler that itself and its parent task that submits the report need to be cleared so they can run again?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":29,"Q_Id":70088390,"Users Score":0,"Answer":"With Airflow 2.0+ this is possible. You can use the airflow API to clear the task and its parent. The API call simply needs a list of tasks to clear, so it's quite easy. Documentation can be found under Docs in the airflow interface.\nEdit:\nIn the interest of being clearer, here is a more detailed explanation. I used an HttpHook in my child task to make a call to my deployment's ClearDagTask endpoint to clear the parent submit task. The child task was a sensor, so I just had it reschedule itself while the parent task resubmitted the report request. The API swagger documentation can be found under Docs in your own deployment, at least with Astronomer, and explains everything you need to build the call.","Q_Score":0,"Tags":"python,airflow","A_Id":70521213,"CreationDate":"2021-11-23T22:02:00.000","Title":"Airflow: Can a child task signal to Airflow that the immediate parent needs to be rerun?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"so I have a server and made an API for it so I can update patch files on my server; however, now when I update some batch files on the server, I always have to stop the server and then run it again to see the changes. I was wondering what I can do so that my server restarts itself","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":62,"Q_Id":70091885,"Users Score":0,"Answer":"Yes you can.\nMake the request API send a JSON like {'do': 'refresh_server'}, then just call exit(), then run the file again using 
the os module.\nEdit: This solution is for Windows","Q_Score":0,"Tags":"python,api","A_Id":70091948,"CreationDate":"2021-11-24T07:05:00.000","Title":"How can I restart my server using a request API","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"My question is to do with calling python from a command prompt in windows 10 after installing a new version (3.9 to 3.10).\nWhen I type (into a cmd prompt): py, I get Python 3.10.0.\nWhen I type (into a cmd prompt): python, I get Python 3.9.6.\n\nSo two questions:\n\nWhy do I get two different versions when typing python compared to py?\nHow can I ensure that they point to the same version or how can I select different versions?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":64,"Q_Id":70096382,"Users Score":1,"Answer":"This is because there are two versions of python on your computer. 
When you want to refer to a particular version of python, do:\npy -<version>\n\nFor example, if you want to reference python 3.10 in cmd, do: py -3.10\nAnd when you want to reference 3.9, do: py -3.9\n\nNote that there is no space between the dash and the version number.","Q_Score":1,"Tags":"python,python-3.x","A_Id":70096690,"CreationDate":"2021-11-24T12:49:00.000","Title":"different python versions from the same command prompt in windows","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"The error message is as follows:\n\npython3.exe: No module named gdal2tiles\n\nI downloaded the Osgeo4w installer and through this I installed GDAL\\OGR 3.4 and python3-gdal libraries.\nThis is the command line I'm trying to launch:\npython3 -m gdal2tiles -p mercator -z 0-10 -w none --processes=4 -r near --exclude --xyz -a 0.0 -n C:\\myMap.geotiff C:\\xyzTiles\\ --config GDAL_PAM_ENABLED NO\nIf instead I give the explicit path for gdal2tiles I get another error:\nC:\\OSGeo4W\\apps\\Python37\\Scripts\\gdal2tiles -p mercator -z 0-1 -w none --processes=4 -r near --exclude --xyz -a 0.0 -n C:\\myMap.geotiff C:\\xyzTiles\\ --config GDAL_PAM_ENABLED NO\n\nAttributeError: module '__main__' has no attribute '__spec__'","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":73,"Q_Id":70109317,"Users Score":0,"Answer":"I was able to make the module visible to Python by changing to the Scripts folder in the cmd window. I don't know if there is a way to set this path globally on the operating system. 
For now, however, I have solved it like this.\nCD C:\\OSGeo4W\\apps\\Python37\\Scripts\npython3 -m gdal2tiles -p mercator -z 0-10 -w none --processes=4 -r near --exclude --xyz -a 0.0 -n C:\\myMap.geotiff C:\\xyzTiles\\ --config GDAL_PAM_ENABLED NO","Q_Score":0,"Tags":"python,gdal","A_Id":70125375,"CreationDate":"2021-11-25T10:16:00.000","Title":"Python fails to start a GDAL command","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am trying to run a Flask app inside a docker container on AWS ECS. I created a task and assigned it to a cluster, but now I cannot ping the public IP address of this task.\nI also cannot send the request via Postman as it was possible when I ran the docker container from the same image on my local machine. Any help is highly appreciated! Thanks","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":54,"Q_Id":70120466,"Users Score":0,"Answer":"After two weeks I finally found the answer to this problem! If you run your task inside the awsvpc, you need to create an inbound rule which allows the IP address from which you want to send the POST request with Postman. 
After setting up the rule, the AWS task inside the cluster can be accessed.","Q_Score":0,"Tags":"python,amazon-web-services,docker,networking","A_Id":70211673,"CreationDate":"2021-11-26T06:44:00.000","Title":"How to connect to task inside a cluster on AWS ECS via Postman?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I have a situation where my data lies in a different GCP project say \"data-pro\" and my compute project is set up as a different GCP project, which has access to \"data-pro\" 's tables.\nSo is there a way to specify the default project-id using which the queries must run?\nI can see that there is a default dataset parameter, but no default projectID.\nSo my queries are as follows:\n\nselect name ,id from employeedDB.employee .\/\/ this employeedDB is in data-proc\n\nand my BigQueryInsertJobOperator Configuration is as below:\n\nBigQueryInsertJobOperator(dag=dag, task_id=name,\ngcp_conn_id=connection_id,--\/\/connection_id over compute-proc\nconfiguration={\n\"query\": {\n\"query\": \"{% include '\"+sqlFile+\"' %}\",\n\"useLegacySql\": False\n},\n},\npool='bqJobPool')","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":143,"Q_Id":70153553,"Users Score":0,"Answer":"You should define a different connection id with a different project (you can set it either via a parameter in each task or via the \"default_args\" feature).","Q_Score":1,"Tags":"python,google-cloud-platform,google-bigquery,airflow,airflow-2.x","A_Id":70157291,"CreationDate":"2021-11-29T11:01:00.000","Title":"BigQueryInsertJobOperator Configuration for default project ID","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web 
Development":0},{"Question":"So, I'm currently trying to construct a C++ application that calls a python script. The main idea is that the python script runs a loop and prints decisions based on user input. I want the cpp program to be able to wait and read (if there's an output from python). I tried to make them \"talk\" via a file but it turned out badly. Any ideas?\nPS: I'm calling the script using system(\"start powershell.exe C:\\\\python.exe C:\\\\help.py\");\nIf there is any better way please let me know! Thanks","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":47,"Q_Id":70180960,"Users Score":0,"Answer":"You could write to a file from Python and have the C++ program check that file for changes at regular intervals.","Q_Score":0,"Tags":"python,c++,multithreading,operating-system,multiprocessing","A_Id":70180979,"CreationDate":"2021-12-01T08:19:00.000","Title":"Is there a way to continuously collect output from a Python script I'm running into a c++ program?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"We have a Glue job with Type: 'Python Shell', Py version: 'Python 3.6', DPUs: '1\/16', Glue version: '1.0'.\nHow can I change the Glue version from 1.0 to 2.0?\nDoes AWS Glue python shell job even support Glue version 2.0?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":461,"Q_Id":70201730,"Users Score":2,"Answer":"Unfortunately Python Shell jobs can only run on Glue version 1.0.","Q_Score":2,"Tags":"python,shell,version,aws-glue,jobs","A_Id":70201932,"CreationDate":"2021-12-02T14:59:00.000","Title":"Does AWS Glue python shell job support Glue version 2.0?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics 
and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am trying to connect to my BACnet device using BAC0. Yabe is able to detect the BACnet device. However when I try to connect to the device via BAC0.connect(network IP) followed by BAC0.device(device IP and other parameters) I get the error msg - IP address provided is invalid. Check if another software is using port 47808. When I run the command the Wireshark trace shows BACnet APDU protocol being used with appropriate Confirmed-REQ and Complex-ACK msg between the local-network-IP and device-IP, suggesting that the device was polled. However the Wireshark trace shows up after the command terminates with the error message. Could it be that the command terminates prematurely? If so, how to handle it?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":301,"Q_Id":70237611,"Users Score":1,"Answer":"Figured it out. The IP address was correct. But there was another software that starts automatically and runs in the background. It also uses BACNet. As a result the port 47808 was getting used by this software. Wireshark was capturing the communication to the device via this software, since the software has a discovery tool for BACnet devices. BAC0.connect works now.","Q_Score":0,"Tags":"python,bacnet","A_Id":70248781,"CreationDate":"2021-12-05T19:05:00.000","Title":"Yabe detects BACnet device but BAC0.connect does not","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am trying to run a python code to create feature store. When I am running I am getting Bigquery.jobs.create permission error. 
I checked the permissions for my account with gcloud iam roles describe roles\/viewer and Bigquery permissions are there.\nNow, what mistake I am making and how can I solve this error.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":71,"Q_Id":70250653,"Users Score":0,"Answer":"It seems that you need to create BigQuery job. At least the account you are using should have \"BigQuery Job User\" role.","Q_Score":0,"Tags":"python,google-cloud-platform,google-bigquery,feature-store,feast","A_Id":70315936,"CreationDate":"2021-12-06T19:09:00.000","Title":"BigQuery.jobs.create pemission","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have just updated my Macbook to Monterey and installed XCode 13. I'm now seeing errors when trying to link my code - for example one library needs to link to the system python2.7, but gives the error:\n\nKeiths-MacBook-Pro:libcdb keith$ make rm -f libcdb.1.0.0.dylib\nlibcdb.dylib libcdb.1.dylib libcdb.1.0.dylib\n\/Applications\/Xcode.app\/Contents\/Developer\/Toolchains\/XcodeDefault.xctoolchain\/usr\/bin\/clang++\n-stdlib=libc++ -headerpad_max_install_names -arch x86_64 -isysroot \/Applications\/Xcode.app\/Contents\/Developer\/Platforms\/MacOSX.platform\/Developer\/SDKs\/MacOSX12.0.sdk\n-mmacosx-version-min=10.13 -Wl,-rpath,@executable_path\/..\/Frameworks -Wl,-rpath,\/usr\/local\/Qt-5.15.7\/lib -single_module -dynamiclib -compatibility_version 1.0 -current_version 1.0.0 -install_name libcdb.1.dylib -o libcdb.1.0.0.dylib release\/db.o release\/KDTree.o release\/db_Wlist.o release\/db_VSeg.o\nrelease\/db_View.o release\/db_ViaInst.o release\/db_ViaGen.o\nrelease\/db_Via.o release\/db_Vertex.o release\/db_Vector.o\nrelease\/db_Utils.o release\/db_Trapezoid.o release\/db_Transform64.o\nrelease\/db_Transform.o release\/db_Techfile.o 
release\/db_Style.o\nrelease\/db_Signal.o release\/db_Shape.o release\/db_SegParam.o\nrelease\/db_Segment.o release\/db_Rectangle.o release\/db_Rect.o\nrelease\/db_QTree.o release\/db_Property.o release\/db_Polygon.o\nrelease\/db_PointList.o release\/db_Point.o release\/db_Pin.o\nrelease\/db_Path.o release\/db_ObjList.o release\/db_Obj.o\nrelease\/db_Net.o release\/db_Mst.o release\/db_Mpp.o release\/db_Lpp.o\nrelease\/db_Line.o release\/db_Library.o release\/db_Layer.o\nrelease\/db_Label.o release\/db_InstPin.o release\/db_Inst.o\nrelease\/db_HVTree.o release\/db_HSeg.o release\/db_HierObj.o\nrelease\/db_Group.o release\/db_Ellipse.o release\/db_Edge.o\nrelease\/db_CellView.o release\/db_Cell.o release\/db_Array.o\nrelease\/db_Arc.o -F\/usr\/local\/Qt-5.15.7\/lib -L..\/libcpp\/release -lcpp\n-L\/System\/Library\/Frameworks\/Python.framework\/Versions\/2.7\/lib\/python2.7\/config\n-lpython2.7 -framework QtWidgets -framework QtGui -framework AppKit -framework Metal -framework QtNetwork -framework QtCore -framework DiskArbitration -framework IOKit -framework OpenGL -framework AGL\nld: cannot link directly with dylib\/framework, your binary is not an\nallowed client of\n\/Applications\/Xcode.app\/Contents\/Developer\/Platforms\/MacOSX.platform\/Developer\/SDKs\/MacOSX12.0.sdk\/System\/Library\/Frameworks\/Python.framework\/Versions\/2.7\/lib\/python2.7\/config\/libpython2.7.tbd\nfor architecture x86_64 clang: error: linker command failed with exit\ncode 1 (use -v to see invocation) make: ***\n[release\/libcdb.1.0.0.dylib] Error 1\n\nGiven that I have recompiled (successfully) the Qt libs and the code for this library, why is it giving me this 'your binary is not an allowed client' error?\nAs far as I can see the python2.7 paths have not changed, so the error is baffling.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":491,"Q_Id":70278110,"Users Score":0,"Answer":"So the quick and diirty fix is to edit the Python.tdb file that is 
located at:\n\/Applications\/Xcode.app\/Contents\/Developer\/Platforms\/MacOSX.platform\/Developer\/SDKs\/MacOSX12.0.sdk\/System\/Library\/Frameworks\/Python.framework\/Versions\/Current\/Python.tdb\nAnd add your library\/executable targets to the clients list.\nOf course, there is a reason Apple are doing this - python2 is deprecated and sooner or later they will drop it. But until they do, this works.","Q_Score":1,"Tags":"python,xcode,macos-monterey","A_Id":70286243,"CreationDate":"2021-12-08T16:06:00.000","Title":"Linking to system python gives 'binary is not an allowed client' error","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have just updated my Macbook to Monterey and installed XCode 13. I'm now seeing errors when trying to link my code - for example one library needs to link to the system python2.7, but gives the error:\n\nKeiths-MacBook-Pro:libcdb keith$ make rm -f libcdb.1.0.0.dylib\nlibcdb.dylib libcdb.1.dylib libcdb.1.0.dylib\n\/Applications\/Xcode.app\/Contents\/Developer\/Toolchains\/XcodeDefault.xctoolchain\/usr\/bin\/clang++\n-stdlib=libc++ -headerpad_max_install_names -arch x86_64 -isysroot \/Applications\/Xcode.app\/Contents\/Developer\/Platforms\/MacOSX.platform\/Developer\/SDKs\/MacOSX12.0.sdk\n-mmacosx-version-min=10.13 -Wl,-rpath,@executable_path\/..\/Frameworks -Wl,-rpath,\/usr\/local\/Qt-5.15.7\/lib -single_module -dynamiclib -compatibility_version 1.0 -current_version 1.0.0 -install_name libcdb.1.dylib -o libcdb.1.0.0.dylib release\/db.o release\/KDTree.o release\/db_Wlist.o release\/db_VSeg.o\nrelease\/db_View.o release\/db_ViaInst.o release\/db_ViaGen.o\nrelease\/db_Via.o release\/db_Vertex.o release\/db_Vector.o\nrelease\/db_Utils.o release\/db_Trapezoid.o release\/db_Transform64.o\nrelease\/db_Transform.o release\/db_Techfile.o 
release\/db_Style.o\nrelease\/db_Signal.o release\/db_Shape.o release\/db_SegParam.o\nrelease\/db_Segment.o release\/db_Rectangle.o release\/db_Rect.o\nrelease\/db_QTree.o release\/db_Property.o release\/db_Polygon.o\nrelease\/db_PointList.o release\/db_Point.o release\/db_Pin.o\nrelease\/db_Path.o release\/db_ObjList.o release\/db_Obj.o\nrelease\/db_Net.o release\/db_Mst.o release\/db_Mpp.o release\/db_Lpp.o\nrelease\/db_Line.o release\/db_Library.o release\/db_Layer.o\nrelease\/db_Label.o release\/db_InstPin.o release\/db_Inst.o\nrelease\/db_HVTree.o release\/db_HSeg.o release\/db_HierObj.o\nrelease\/db_Group.o release\/db_Ellipse.o release\/db_Edge.o\nrelease\/db_CellView.o release\/db_Cell.o release\/db_Array.o\nrelease\/db_Arc.o -F\/usr\/local\/Qt-5.15.7\/lib -L..\/libcpp\/release -lcpp\n-L\/System\/Library\/Frameworks\/Python.framework\/Versions\/2.7\/lib\/python2.7\/config\n-lpython2.7 -framework QtWidgets -framework QtGui -framework AppKit -framework Metal -framework QtNetwork -framework QtCore -framework DiskArbitration -framework IOKit -framework OpenGL -framework AGL\nld: cannot link directly with dylib\/framework, your binary is not an\nallowed client of\n\/Applications\/Xcode.app\/Contents\/Developer\/Platforms\/MacOSX.platform\/Developer\/SDKs\/MacOSX12.0.sdk\/System\/Library\/Frameworks\/Python.framework\/Versions\/2.7\/lib\/python2.7\/config\/libpython2.7.tbd\nfor architecture x86_64 clang: error: linker command failed with exit\ncode 1 (use -v to see invocation) make: ***\n[release\/libcdb.1.0.0.dylib] Error 1\n\nGiven that I have recompiled (successfully) the Qt libs and the code for this library, why is it giving me this 'your binary is not an allowed client' error?\nAs far as I can see the python2.7 paths have not changed, so the error is baffling.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":491,"Q_Id":70278110,"Users Score":0,"Answer":"What fixed it for me was to drop the Python2 framework and link with the 
framework provided by my version of Python3 installed via Brew.\nLocate your installation of Python3 by using this command in Terminal: ls -la $(which python3).\nThe framework is in the \"Frameworks\" folder located one level above \"bin\", for example: \/usr\/local\/Cellar\/python@3.9\/3.9.7_1\/Frameworks\nOnce your have the location of the Python3 framework, add it as a framework in your XCode project.\nIn the Build Settings, don't forget to add:\n\nthe Frameworks folder location in Framework Search Path (e.g. \"\/usr\/local\/Cellar\/python@3.9\/3.9.7_1\/Frameworks\")\nthe framework's Headers folder in Header Search Path (e.g. \"\/usr\/local\/Cellar\/python@3.9\/3.9.7_1\/Frameworks\/Python.framework\/Headers\")\n\nSome functions changed in version 3 so you'll need to update some of your Python function calls.","Q_Score":1,"Tags":"python,xcode,macos-monterey","A_Id":70520378,"CreationDate":"2021-12-08T16:06:00.000","Title":"Linking to system python gives 'binary is not an allowed client' error","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"We are using symphony in our company and I am trying to send a message to the alert bot in symphony.\nSomeone sent me a small python script which does this already which uses the socket library.\nThey send the message as socket.send(msg) using import socket in their script.\nQuestion is : what is socket.send comparable with in kdb ? It's not a http post so it's not the .Q.hp .. Is this similar -> {h:hopen hsym`$\"host:port\";h\"someMessageCompatibleWithSymbphonyBot\";hclose h}\nUPDATE: I have been told that my kdb message is not pure tcp. 
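For context on the kdb question above, the raw-TCP behaviour of the Python script it describes (socket.send to the bot) can be reproduced with a self-contained sketch. This is not the Symphony bot protocol; the local echo server here merely stands in for the receiving end, and the message bytes are invented:

```python
import socket
import threading

def echo_once(server_sock):
    # Accept one connection and echo the received bytes back.
    conn, _ = server_sock.accept()
    with conn:
        conn.sendall(conn.recv(1024))

def send_message(host, port, msg):
    # The core of the Python script described in the question:
    # open a TCP connection, send raw bytes, read the reply.
    with socket.create_connection((host, port)) as s:
        s.sendall(msg)  # equivalent to the script's socket.send(msg)
        return s.recv(1024)

# Demo against a throwaway local echo server on an ephemeral port.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
t = threading.Thread(target=echo_once, args=(srv,))
t.start()
reply = send_message("127.0.0.1", srv.getsockname()[1], b"hello bot")
t.join()
srv.close()
print(reply)  # b'hello bot'
```

As the answer notes, hopen in kdb speaks kdb's own IPC protocol rather than plain bytes like this, which is why a bridge (embedPy, qPython, etc.) is the usual route.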
Can anyone point me in the right direction?","AnswerCount":1,"Available Count":1,"Score":0.761594156,"is_accepted":false,"ViewCount":98,"Q_Id":70297951,"Users Score":5,"Answer":"hopen works for kdb-to-kdb, not kdb-to-other so yes in that sense it's not pure tcp.\nNormally when kdb needs to communicate with another system by tcp you would use some sort of middleman library to handle the communication layer.\nIn theory you could use the python script\/package in your kdb instance if you use one of the numerous kdb<>python interfaces (pyq, embedpy, qpython etc)","Q_Score":3,"Tags":"python,python-3.x,kdb","A_Id":70298183,"CreationDate":"2021-12-09T23:20:00.000","Title":"the equivalent of python socket.send() in kdb","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm trying to set up a sort of basic screensaver on a Raspberry PI.\nI'm using python because I like it a lot more than bash.\nThe script uses popen to call feh and display a slideshow.\nThere's also NodeRed running on this Pi, and that manages the actions bound to a few GPIO buttons.\nI would like to intercept those button inputs to stop the slideshow.\nI thought of 3 ways:\n\nuse NodeRed to kill (-15 or -9) feh. But this leaves behind a \"defunct\" process, and python should rely on the absence of the spawned process before spawning a new one\nuse the RPi.GPIO library and bind a callback event to the button press. This doesn't work because it throws the error that the channel (i.e. the GPIO pin) is already in use, and the RPI crashes and has to be power cycled.\nuse a OS environment variable. This could work, but I don't want to continually be polling the variable. 
The feh process needs to stop immediately upon a button press.\n\nSo how else could this be achieved?\nIs there a way to bind a function to some external trigger?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":120,"Q_Id":70313951,"Users Score":0,"Answer":"I realized that since the script logic is very simple and only has one task, it would have been easy to use a socket (synchronous and blocking) and wait on that for NodeRed to send its command. And indeed it was","Q_Score":0,"Tags":"python-3.x,events,triggers,gpio","A_Id":70319910,"CreationDate":"2021-12-11T09:22:00.000","Title":"make python wait for external trigger","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have to upload a file into an FTP server from a specific IP. So I have to use ssh tunnels to connect to the FTP server. For that, I use:\n\n$ ssh -NL localhost:7777:ftpserver.com:21 root@myserver.com\n\nThis way, I can connect to the FTP server using:\n\n$ FTP localhost 7777\n\nAnd the command port will be 7777 to 21, and everything works great until I want to use the data port.\nI need a way to specify what port I want to use for data transfer in active mode to create another ssh tunnel and pass it through there. (I know it's 20 by default, but I can't assign my local port 20! I need to use 7778 or something.)\nI need a solution for both the terminal command line and pythons ftplib.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":42,"Q_Id":70324025,"Users Score":0,"Answer":"the solution was using sshuttle instead!\n\nsshuttle -r root@myserver 0\/0\n\nthis will tunnel my entire connection to go through myserver! 
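The accepted approach from the screensaver question above, a synchronous blocking socket that sleeps until NodeRed sends a command, can be sketched as follows. The "stop" command name and the stand-in client are invented for illustration:

```python
import socket
import threading

def wait_for_trigger(srv):
    # accept() blocks, so the script sleeps here (no polling) until
    # NodeRed -- or anything else -- connects and sends a command.
    conn, _ = srv.accept()
    with conn:
        return conn.recv(64).decode().strip()

srv = socket.socket()
srv.bind(("127.0.0.1", 0))  # real code would use a fixed, known port
srv.listen(1)
port = srv.getsockname()[1]

def fake_nodered():
    # Stand-in for the NodeRed side: connect and send the command.
    with socket.create_connection(("127.0.0.1", port)) as c:
        c.sendall(b"stop\n")

threading.Thread(target=fake_nodered).start()
command = wait_for_trigger(srv)
srv.close()
print(command)  # 'stop'
```

On receiving the command, the script would then terminate the feh subprocess it spawned via Popen.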
thus all ports will be usable\nnow i can just use the ftp command and by using 'passive', everything works great.","Q_Score":0,"Tags":"python,terminal,ftp","A_Id":70520899,"CreationDate":"2021-12-12T13:30:00.000","Title":"specify data port in ftp (terminal or python)","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"We can basically use databricks as intermediate but I'm stuck on the python script to replicate data from blob storage to azure my sql every 30 second we are using CSV file here.The script needs to store the csv's in current timestamps.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":39,"Q_Id":70345519,"Users Score":1,"Answer":"There is no ready stream option for mysql in spark\/databricks as it is not stream source\/sink technology.\nYou can use in databricks writeStream .forEach(df) or .forEachBatch(df) option. This way it create temporary dataframe which you can save in place of your choice (so write to mysql).\nPersonally I would go for simple solution. 
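The foreachBatch idea mentioned in the answer above boils down to "write each micro-batch with ordinary Python DB code". A rough sketch of that write step, with sqlite3 standing in for Azure MySQL (the table name and CSV columns are invented):

```python
import csv
import io
import sqlite3

def write_batch(conn, csv_text):
    # What a foreachBatch callback ultimately does: turn the batch's
    # CSV rows into INSERTs against the target database.
    rows = list(csv.reader(io.StringIO(csv_text)))
    conn.executemany("INSERT INTO readings (ts, value) VALUES (?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (ts TEXT, value REAL)")
write_batch(conn, "2021-12-14T08:00:00,1.5\n2021-12-14T08:00:30,2.5\n")
count = conn.execute("SELECT COUNT(*) FROM readings").fetchone()[0]
print(count)  # 2
```

In production the connection would of course point at MySQL (e.g. via a MySQL driver) rather than sqlite3.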
In Azure Data Factory it is enough to create two datasets (it can even be done without them) - one mysql, one blob - and use a pipeline with a Copy activity to transfer the data.","Q_Score":1,"Tags":"python,azure,apache-spark,google-cloud-platform,databricks","A_Id":70347715,"CreationDate":"2021-12-14T07:59:00.000","Title":"Is there any way to replicate realtime streaming from azure blob storage to azure my sql","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm looking for a way to \"simply\" access a Cach\u00e9 database using python (I need to make sql queries on this database).\nI've heard about a python package (Intersys) but I can't find it anymore (having this package would be the simplest way).\nI've tried using a pyodbc connection with the appropriate Cach\u00e9 driver: it works on my machine, however when I try to deploy the function in production (Linux OS), the driver's file is not found.\nThank you","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":71,"Q_Id":70360965,"Users Score":0,"Answer":"The only way to make it work with Python is using pyodbc with the InterSystems driver.","Q_Score":0,"Tags":"python,azure-functions,intersystems-cache,intersystems","A_Id":70363390,"CreationDate":"2021-12-15T09:09:00.000","Title":"Connecting Cach\u00e9 database in Azure function","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"When I have python 3.5 and python 3.6 on ubuntu. I entered some alternate commands to use python 3.5 only (when I type python -V and python3 -V the same output is 3.5.2)\nAnd then I installed virtualenv and virtualenvwrapper \u2014 these packages allow me to 
create and manage Python virtual environments:\n$ sudo pip install virtualenv virtualenvwrapper\n$ sudo rm -rf ~\/get-pip.py ~\/.cache\/pip\nTo finish the install of these tools, I updated our ~\/.bashrc file.I added the following lines to your ~\/.bashrc :\nexport WORKON_HOME=$HOME\/.virtualenvs\nexport VIRTUALENVWRAPPER_PYTHON=\/usr\/bin\/python3\nsource \/usr\/local\/bin\/virtualenvwrapper.sh\nNext, source the ~\/.bashrc file:\n$ source ~\/.bashrc\nAnd final I created your OpenCV 4 + Python 3 virtual environment:\n$ mkvirtualenv cv -p python3\ni have created the virtual environment but had some problems in the back end and i guess it was due to the presence of python3.6. In the end i decided to uninstall python 3.6 and rerun the steps above from scratch and had a problem at the last step that I mentioned above.When i enter command \"mkvirtualenv cv -p python3\" i get an ERROR:\nFileExistsError: [Errno 17] File exists: '\/usr\/bin\/python' -> '\/home\/had2000\/.virtualenvs\/cv\/bin\/python'\nAt the same time when i enter the command \"update-alternatives --config python\" python3.6 is no longer there,but i get a warning:\nupdate-alternatives: warning: alternative \/usr\/bin\/python3.6 (part of link group python) doesn't exist; removing from list of alternatives\nThere is 1 choice for the alternative python (providing \/usr\/bin\/python).\nLooking forward to your help, thank you","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":418,"Q_Id":70365689,"Users Score":0,"Answer":"From the commands you've shared, the error arises from the mkvirtualenv cv being run twice - i.e. the environment already exists. To remove the environment you created, do: rmvirtualenv env-name-here which in this case will become rmvirtualenv cv. This shouldn't be done with that environment active, BTW. An alternate route is to delete $WORKON_HOME\/env-name-here. 
By default, $WORKON_HOME is usually .virtualenvs.","Q_Score":1,"Tags":"python,opencv,virtualenv,virtualenvwrapper","A_Id":70370334,"CreationDate":"2021-12-15T14:47:00.000","Title":"FileExistsError: [Errno 17] File exists: '\/usr\/bin\/python' -> '\/home\/had2000\/.virtualenvs\/cv\/bin\/python'","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I created a python script to automate a task.\nI would like to run it every day on a hourly basis and for this I created a task in Windows Task Scheduler.\nIs there a way to write a script to log in Windows automatically when my account is logged out, because my script fails if the Windows user is not logged in.\nThank you.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":72,"Q_Id":70378478,"Users Score":0,"Answer":"This is the wrong way to approach this. It's normal and expected that your computer will not always be logged in. You should not try to change that, as it would be very insecure.\nThe right thing to do is figure out why your script will not run when the system is not logged in. When setting your task up, you can set user credentials it should use when starting. Make sure your program is not making any incorrect assumptions about accessibility or security when it runs.","Q_Score":0,"Tags":"python,windows,scheduled-tasks,taskscheduler,windows-users","A_Id":70378593,"CreationDate":"2021-12-16T11:47:00.000","Title":"Python to log in Windows automatically with task scheduler","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"On my windows PC, I created a python script. Now, i want to run this on LINUX machine. 
Using pyinstaller, I am able to create an exe file. But how can I create an executable file on a windows pc which should run on a linux machine?\nThanks for help.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":143,"Q_Id":70379205,"Users Score":0,"Answer":"Found the solution. If you are writing a python script on a windows machine, you can create only windows executable files. If you are writing a python script on linux, you can create an executable shell file to run on linux. From a linux machine, one cannot create an exe file to run on windows.","Q_Score":0,"Tags":"python,windows,binary","A_Id":71137159,"CreationDate":"2021-12-16T12:41:00.000","Title":"How to convert .py file to executable file to run in linux","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have two scripts a.py and b.py. Both work independently from the command line by providing relevant args. Both scripts run on a linux box. Both scripts have numerous methods and a main method. How can I call a.py into b.py as a module? Should I just 'import a' at the top of b.py and then call the relevant methods inside the main method of b.py? Or is there a way to directly call the main method of a.py inside b.py?\nNote: I don't want to create multiple supporting files like setup.py or __init__.py if that is possible. Thanks","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":72,"Q_Id":70387808,"Users Score":0,"Answer":"Not sure if it's the best way, but it worked for now. I imported the other script at the top (it's in the same directory on the server). Then I created a new method in the calling script that calls all the relevant methods as a single package, and then called this new method in the main method. It is working for me. I will close the question with that note. 
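The pattern the two-scripts question above is circling around is the `__main__` guard: put each script's entry point behind it and the file works both as a standalone command and as an import, with no setup.py needed. A self-contained demonstration (the file contents and function names are invented):

```python
import subprocess
import sys
import tempfile
from pathlib import Path

A_PY = '''\
def greet(name):
    return f"hello {name}"

def main():
    print(greet("from a"))

if __name__ == "__main__":  # only runs when a.py is executed directly
    main()
'''

B_PY = '''\
import a                      # importing does NOT trigger a's main()

if __name__ == "__main__":
    print(a.greet("from b"))  # reuse a's functions, or call a.main()
'''

# Write both scripts to a temp dir and run b.py as a command.
with tempfile.TemporaryDirectory() as d:
    Path(d, "a.py").write_text(A_PY)
    Path(d, "b.py").write_text(B_PY)
    out = subprocess.run([sys.executable, "b.py"], cwd=d,
                         capture_output=True, text=True).stdout.strip()
print(out)  # hello from b
```

Because the guard keeps a.py's main() from running on import, both files stay independently runnable from the command line.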
Thanks \u2013","Q_Score":0,"Tags":"python,module","A_Id":70407366,"CreationDate":"2021-12-17T02:34:00.000","Title":"calling one python script as a module within another python script and both scripts should work independently from command line as well","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"How can I connect two applications with the scenario below?\nApplication1:\nOur infrastructure was created on AWS with python-django and react. It's a private VPC that I can only access via SSH to the EC2 bastion instance (as far as being able to write code into it), and the way the backend was deployed to create the backend URL api.mywebsite.com (which has multiple endpoints) was through cloudfront and Route53. (www.mywebsite.com was built via s3 and can talk to the backend api.mywebsite.com).\nApplication2:\n(This is a client infrastructure)\nAt this time I haven't met the client to know what their system is made of, but regardless I need to find a way to write some code on this system so that when a specific event is triggered it sends data to an API endpoint of Application1.\nWhat would be the best way to implement such a logic or API to connect Application1 and Application2?\n(Especially considering that Application1's infrastructure is a private VPC)\nThis is pretty much the same way that someone would use an API like STRIPE... I guess, but I am not sure how to achieve such a result...\nthank you in advance","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":57,"Q_Id":70411814,"Users Score":0,"Answer":"If I understood correctly, App1 is owned by you and App2 is owned by a 3rd party.\nThe way I would solve this in a scalable way is to deploy a service on aws lambda that can get requests from App2 (or any other app that you decide on in the future). I would give this service a role in order to 
connect with App1 and would have all the logic there.\nWhat you gain is that your app is secure (you don't expose it to the outside 3rd-party app), and the solution is scalable (you don't need to do anything if you want to have other apps connecting).","Q_Score":0,"Tags":"python,reactjs,python-3.x,django,amazon-web-services","A_Id":70412022,"CreationDate":"2021-12-19T13:22:00.000","Title":"How to connect two applications and hit the API of the first application1 from application2","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I have different venvs on my machine in which I have python 3.10.\nNow for a specific project, I realised that python 3.10 is not suitable as some libraries are still not compatible. Therefore when creating a new venv for a new project, I would like to downgrade python, say to 3.8, only for this specific venv.\nHow can I do that?\nWhat should I type into the terminal to do this?\nPS: I use VS and its terminal to create the venv","AnswerCount":5,"Available Count":1,"Score":0.0399786803,"is_accepted":false,"ViewCount":6751,"Q_Id":70422866,"Users Score":1,"Answer":"You can do it by using the \"virtualenv\" library. 
It can be installed with command pip install virtualenv\nthen followed by command\nvirtualenv \"name_of_your_environment\" #no quotes\nand then use the following code to activate your venv\n\"name_of_your_environment\"\\Scripts\\activate #NOTE YOU MUST BE at your directory where you created your env.\nits for VS CODE but I prefer installing conda and then creating env on conda prompt using conda which later you can access to vs code to and its easy to activate that env from anywhere just type conda activate 'name_of_your_env' on vs terminal","Q_Score":4,"Tags":"python-3.x,python-venv","A_Id":70423045,"CreationDate":"2021-12-20T13:52:00.000","Title":"how to create a venv with a different python version","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am setting up Robot Framework on RHEL 8.4. I have python 3.6 installed on my machine. 
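For the venv-version question above: if the older interpreter is already installed, the stdlib venv module is enough; you simply invoke it *with* the interpreter you want the environment to use (e.g. python3.8 -m venv myenv). A runnable sketch, with the current interpreter standing in for python3.8 and a throwaway directory:

```python
import subprocess
import sys
import tempfile
from pathlib import Path

with tempfile.TemporaryDirectory() as d:
    env_dir = Path(d) / "demo-env"
    # Real usage: ["python3.8", "-m", "venv", str(env_dir)].
    # sys.executable stands in here; --without-pip keeps the demo offline.
    subprocess.run(
        [sys.executable, "-m", "venv", "--without-pip", str(env_dir)],
        check=True,
    )
    # pyvenv.cfg records which interpreter the env was created from.
    created = (env_dir / "pyvenv.cfg").exists()
print(created)  # True
```

The key point is that the env's Python version is whatever interpreter ran `-m venv`, which answers the question without any third-party tool.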
However, when I try to run robot from within python virtual environment it throws\n-ksh: robot: cannot execute [Permission Denied]\nI also ran whereis robot and gave all permissions to the robot file.\nThe error happens when I am trying to run robot as a user other than root from within the virtual environment however, it works fine when run within virtual environment as root.\nHowever, I am not keen to continue normal development as root and would like this to work via my normal user.","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":91,"Q_Id":70461139,"Users Score":0,"Answer":"It sounds quite strange but upgrading PIP resolved the issue.","Q_Score":0,"Tags":"python,linux,robotframework,rhel8","A_Id":70463794,"CreationDate":"2021-12-23T10:55:00.000","Title":"RHEL Permission Denied whilst trying to Run Robot Framework","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"while trying to do\n\npip install web3\n\nI am always getting the following error\n\nBuilding wheel for cytoolz (setup.py) ... 
error ERROR: Command\nerrored out with exit status 1: command:\n\/Library\/Frameworks\/Python.framework\/Versions\/3.10\/bin\/python3 -u -c\n'import io, os, sys, setuptools, tokenize; sys.argv[0] =\n'\"'\"'\/private\/var\/folders\/gb\/5fvn1z1s689bzt7vbll5845c0000gn\/T\/pip-install-cxwpjegv\/cytoolz_88244d2146254468892c582d0b9e33fa\/setup.py'\"'\"';\nfile='\"'\"'\/private\/var\/folders\/gb\/5fvn1z1s689bzt7vbll5845c0000gn\/T\/pip-install-cxwpjegv\/cytoolz_88244d2146254468892c582d0b9e33fa\/setup.py'\"'\"';f\n= getattr(tokenize, '\"'\"'open'\"'\"', open)(file) if os.path.exists(file) else io.StringIO('\"'\"'from setuptools import\nsetup; setup()'\"'\"');code = f.read().replace('\"'\"'\\r\\n'\"'\"',\n'\"'\"'\\n'\"'\"');f.close();exec(compile(code, file, '\"'\"'exec'\"'\"'))'\nbdist_wheel -d\n\/private\/var\/folders\/gb\/5fvn1z1s689bzt7vbll5845c0000gn\/T\/pip-wheel-ljv1jb3k\ncwd: \/private\/var\/folders\/gb\/5fvn1z1s689bzt7vbll5845c0000gn\/T\/pip-install-cxwpjegv\/cytoolz_88244d2146254468892c582d0b9e33fa\/\nComplete output (56 lines): [1\/5] Cythonizing cytoolz\/utils.pyx\n[2\/5] Cythonizing cytoolz\/dicttoolz.pyx [3\/5] Cythonizing\ncytoolz\/functoolz.pyx [4\/5] Cythonizing cytoolz\/itertoolz.pyx\n[5\/5] Cythonizing cytoolz\/recipes.pyx running bdist_wheel running\nbuild running build_py creating build creating\nbuild\/lib.macosx-10.9-universal2-3.10 creating\nbuild\/lib.macosx-10.9-universal2-3.10\/cytoolz copying\ncytoolz\/compatibility.py ->\nbuild\/lib.macosx-10.9-universal2-3.10\/cytoolz copying\ncytoolz\/_version.py -> build\/lib.macosx-10.9-universal2-3.10\/cytoolz\ncopying cytoolz\/init.py ->\nbuild\/lib.macosx-10.9-universal2-3.10\/cytoolz copying\ncytoolz\/_signatures.py ->\nbuild\/lib.macosx-10.9-universal2-3.10\/cytoolz creating\nbuild\/lib.macosx-10.9-universal2-3.10\/cytoolz\/curried copying\ncytoolz\/curried\/operator.py ->\nbuild\/lib.macosx-10.9-universal2-3.10\/cytoolz\/curried copying\ncytoolz\/curried\/init.py 
->\nbuild\/lib.macosx-10.9-universal2-3.10\/cytoolz\/curried copying\ncytoolz\/curried\/exceptions.py ->\nbuild\/lib.macosx-10.9-universal2-3.10\/cytoolz\/curried copying\ncytoolz\/itertoolz.pyx -> build\/lib.macosx-10.9-universal2-3.10\/cytoolz\ncopying cytoolz\/dicttoolz.pyx ->\nbuild\/lib.macosx-10.9-universal2-3.10\/cytoolz copying\ncytoolz\/functoolz.pyx -> build\/lib.macosx-10.9-universal2-3.10\/cytoolz\ncopying cytoolz\/recipes.pyx ->\nbuild\/lib.macosx-10.9-universal2-3.10\/cytoolz copying\ncytoolz\/utils.pyx -> build\/lib.macosx-10.9-universal2-3.10\/cytoolz\ncopying cytoolz\/utils.pxd ->\nbuild\/lib.macosx-10.9-universal2-3.10\/cytoolz copying\ncytoolz\/init.pxd -> build\/lib.macosx-10.9-universal2-3.10\/cytoolz\ncopying cytoolz\/recipes.pxd ->\nbuild\/lib.macosx-10.9-universal2-3.10\/cytoolz copying\ncytoolz\/functoolz.pxd -> build\/lib.macosx-10.9-universal2-3.10\/cytoolz\ncopying cytoolz\/dicttoolz.pxd ->\nbuild\/lib.macosx-10.9-universal2-3.10\/cytoolz copying\ncytoolz\/cpython.pxd -> build\/lib.macosx-10.9-universal2-3.10\/cytoolz\ncopying cytoolz\/itertoolz.pxd ->\nbuild\/lib.macosx-10.9-universal2-3.10\/cytoolz creating\nbuild\/lib.macosx-10.9-universal2-3.10\/cytoolz\/tests copying\ncytoolz\/tests\/test_none_safe.py ->\nbuild\/lib.macosx-10.9-universal2-3.10\/cytoolz\/tests copying\ncytoolz\/tests\/test_utils.py ->\nbuild\/lib.macosx-10.9-universal2-3.10\/cytoolz\/tests copying\ncytoolz\/tests\/test_curried.py ->\nbuild\/lib.macosx-10.9-universal2-3.10\/cytoolz\/tests copying\ncytoolz\/tests\/test_compatibility.py ->\nbuild\/lib.macosx-10.9-universal2-3.10\/cytoolz\/tests copying\ncytoolz\/tests\/test_embedded_sigs.py ->\nbuild\/lib.macosx-10.9-universal2-3.10\/cytoolz\/tests copying\ncytoolz\/tests\/test_functoolz.py ->\nbuild\/lib.macosx-10.9-universal2-3.10\/cytoolz\/tests copying\ncytoolz\/tests\/test_inspect_args.py ->\nbuild\/lib.macosx-10.9-universal2-3.10\/cytoolz\/tests copying\ncytoolz\/tests\/test_doctests.py 
->\nbuild\/lib.macosx-10.9-universal2-3.10\/cytoolz\/tests copying\ncytoolz\/tests\/test_tlz.py ->\nbuild\/lib.macosx-10.9-universal2-3.10\/cytoolz\/tests copying\ncytoolz\/tests\/test_signatures.py ->\nbuild\/lib.macosx-10.9-universal2-3.10\/cytoolz\/tests copying\ncytoolz\/tests\/dev_skip_test.py ->\nbuild\/lib.macosx-10.9-universal2-3.10\/cytoolz\/tests copying\ncytoolz\/tests\/test_recipes.py ->\nbuild\/lib.macosx-10.9-universal2-3.10\/cytoolz\/tests copying\ncytoolz\/tests\/test_docstrings.py ->\nbuild\/lib.macosx-10.9-universal2-3.10\/cytoolz\/tests copying\ncytoolz\/tests\/test_dev_skip_test.py ->\nbuild\/lib.macosx-10.9-universal2-3.10\/cytoolz\/tests copying\ncytoolz\/tests\/test_dicttoolz.py ->\nbuild\/lib.macosx-10.9-universal2-3.10\/cytoolz\/tests copying\ncytoolz\/tests\/test_serialization.py ->\nbuild\/lib.macosx-10.9-universal2-3.10\/cytoolz\/tests copying\ncytoolz\/tests\/test_curried_toolzlike.py ->\nbuild\/lib.macosx-10.9-universal2-3.10\/cytoolz\/tests copying\ncytoolz\/tests\/test_itertoolz.py ->\nbuild\/lib.macosx-10.9-universal2-3.10\/cytoolz\/tests running\nbuild_ext creating build\/temp.macosx-10.9-universal2-3.10 creating\nbuild\/temp.macosx-10.9-universal2-3.10\/cytoolz clang\n-Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -arch arm64 -arch x86_64 -g -I\/Library\/Frameworks\/Python.framework\/Versions\/3.10\/include\/python3.10\n-c cytoolz\/dicttoolz.c -o build\/temp.macosx-10.9-universal2-3.10\/cytoolz\/dicttoolz.o xcrun:\nerror: invalid active developer path\n(\/Library\/Developer\/CommandLineTools), missing xcrun at:\n\/Library\/Developer\/CommandLineTools\/usr\/bin\/xcrun error: command\n'\/usr\/bin\/clang' failed with exit code 1\n---------------------------------------- ERROR: Failed building wheel for cytoolz\n\nI have tried to update wheel, installing with sudo, and nothing worked so far,\nwould love for some help,\nthanks","AnswerCount":1,"Available 
Count":1,"Score":0.0,"is_accepted":false,"ViewCount":944,"Q_Id":70472326,"Users Score":0,"Answer":"The solution was to install the xcode command line tools using\n\nxcode-select --install","Q_Score":2,"Tags":"python,web3,python-wheel","A_Id":70472966,"CreationDate":"2021-12-24T11:17:00.000","Title":"ERROR: Failed building wheel for cytoolz when installing web3.py on macOS","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"How do I handle a subprocess.run() error in Python? For example, I want to run cd + UserInput with subprocess.run(). What if the user types in a directory name which does not exist? How do I handle this type of error?","AnswerCount":3,"Available Count":1,"Score":-0.1325487884,"is_accepted":false,"ViewCount":805,"Q_Id":70509342,"Users Score":-2,"Answer":"You might use a try and except block to handle exceptions.","Q_Score":1,"Tags":"python,subprocess,cd","A_Id":70509437,"CreationDate":"2021-12-28T15:47:00.000","Title":"How to handle subprocess.run() error\/exception","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"Will a python file run on a random user's computer just fine, or do I need to convert it into an .exe first and then send the .exe file to the user?\nThank you.","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":61,"Q_Id":70511371,"Users Score":2,"Answer":"If the random user has python installed on their computer then the file will run fine. 
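The try/except advice for subprocess.run in the earlier answer looks like this in practice. Rather than running "cd userinput", pass the directory as the cwd= argument: a missing directory then surfaces as an OS-level exception, while check=True turns non-zero exit codes into CalledProcessError. The directory names below are placeholders:

```python
import subprocess
import sys

def run_in_dir(user_input):
    # Run a trivial command inside the user-supplied directory.
    try:
        subprocess.run([sys.executable, "-c", "print('ok')"],
                       cwd=user_input, check=True)
        return "ok"
    except (FileNotFoundError, NotADirectoryError):
        # Raised when cwd= does not name an existing directory.
        return "no such directory"
    except subprocess.CalledProcessError as exc:
        # Raised (because of check=True) when the command exits non-zero.
        return f"command failed with {exc.returncode}"

print(run_in_dir("."))                    # ok
print(run_in_dir("/definitely/missing"))  # no such directory
```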
If they don't have it installed, then you will need to convert it into an executable.","Q_Score":0,"Tags":"python,executable","A_Id":70511395,"CreationDate":"2021-12-28T18:58:00.000","Title":"Do python files need to be converted into an exe to run on users' computers?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I want to copy build artifacts to Azure virtual Machine,\n\nI have tried to create key pair in virtual machine and used that for service connection in azure devops for copy file over ssh task but getting errors i.e. Error: Failed to connect to remote machine. Verify the SSH service connection details. Error: privateKey value does not contain a (valid) private key.\nWe also tried with Windows Machine File Copy task but getting error i.e. network path not found, we have gone through this error & done configuration required for this but same error is coming.\n\nWhat steps should I perform or check to solve these errors?\nIs there any other way to copy artifacts on virtual machine from DevOps?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":129,"Q_Id":70528922,"Users Score":0,"Answer":"You can set up self hosted agent in your Azure VM and then the build artifact will generate in the machine.\nAnd if you are based on Microsoft agent, please notice whether the pipeline is in the allowed address of the Azure VM.\nAlso, there is a FTP task it can also been used to transfer files","Q_Score":0,"Tags":"python,selenium,azure-devops,azure-virtual-machine,azure-pipelines-release-pipeline","A_Id":70532045,"CreationDate":"2021-12-30T08:04:00.000","Title":"How to copy build artifacts from Azure DevOps to Azure Virtual Machine","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python 
Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"My Python Notebooks log some data to stdout, and when I run these notebooks via the UI I can see the output inside the cell.\nNow I am running Python Notebooks on Databricks via the API (\/2.1\/jobs\/create and then \/2.1\/jobs\/run-now) and would like to get the output. I tried both \/2.1\/jobs\/runs\/get and \/2.1\/jobs\/runs\/get-output, however neither of them includes the stdout of the Notebook.\nIs there any way to access the stdout of the Notebook via the API?\nP.S. I am aware of dbutils.notebook.exit() and will use it if it is not possible to get stdout.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":209,"Q_Id":70535164,"Users Score":1,"Answer":"Yes, it is impossible to get the plain stdout of python code via the API. I have seen that on a different cloud provider you can also get output from the logs.\nThe reliable solution is using dbutils.notebook.exit()","Q_Score":1,"Tags":"python,databricks","A_Id":71657037,"CreationDate":"2021-12-30T18:01:00.000","Title":"Databricks how to get output of the Notebook Jobs via API?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have this issue where I installed python 3.9 and python 3.10. I want to run some .py file in python 3.9, so naturally I choose open with and browse to the python 3.9 .exe file. But for some reason, it opens up the python.exe from the python 3.10 folder. Here's the directory: C:\\Users\\user\\AppData\\Local\\Programs\\Python\\Python39. 
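Whichever interpreter ends up opening a file like the one in this question, a version guard at the top of the script makes the mismatch visible immediately instead of failing later with confusing errors. A minimal sketch (the required 3.9 version is just an example):

```python
import sys

def version_matches(required, actual=None):
    # Compare only major.minor, e.g. (3, 9) vs. the running interpreter.
    actual = actual or sys.version_info[:2]
    return tuple(actual) == tuple(required)

# Guard at the top of a script meant for Python 3.9:
if not version_matches((3, 9)):
    print(f"note: running under "
          f"{sys.version_info.major}.{sys.version_info.minor}, not 3.9")
```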
But it always opens up python.exe from this directory: C:\\Users\\user\\AppData\\Local\\Programs\\Python\\Python310.","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":44,"Q_Id":70542662,"Users Score":1,"Answer":"Use the absolute path of the python version you want to execute, or specify the version you want to run with the py launcher. It is done as follows:\n\npy -3.9 file.py","Q_Score":0,"Tags":"python","A_Id":70542733,"CreationDate":"2021-12-31T13:12:00.000","Title":"Can't run .py in a different python version","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"As the title asked: Is the RequestHandler.on_finish() method guaranteed to be called? Even if, say, the .post() method had an unhandled Exception?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":41,"Q_Id":70564622,"Users Score":2,"Answer":"Yes, on_finish is always called even when an unhandled exception occurs.\nTornado runs the handler method within a try...except block. So when there's an unhandled exception, Tornado generates a 500 error response and calls the finish() method to close the request, which in turn calls the on_finish() method.","Q_Score":0,"Tags":"python,tornado","A_Id":70603540,"CreationDate":"2022-01-03T10:48:00.000","Title":"Tornado: Is the RequestHandler.on_finish() method guaranteed to be called?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I'm using Python and curses to make a little text editor.\nHowever, I'm having some trouble when trying to compute the width of a given text.\nIn particular, I've noticed that Chinese characters (e.g. \u5927) and emojis (e.g. 
) on my terminal (the default Terminal app on macOS) actually take twice the width of a typical ASCII printable character.\nGiven a string I intend to display using curses, is there some way I can determine how many columns a string will use up on the screen without actually displaying it?","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":64,"Q_Id":70573954,"Users Score":1,"Answer":"You probably want wcwidth() and wcswidth(). They're not in the standard library (and not part of curses), but are available via pip install wcwidth.","Q_Score":0,"Tags":"python,terminal,curses","A_Id":70576785,"CreationDate":"2022-01-04T03:59:00.000","Title":"python curses: detect text's display width before printing it","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I develop a custom python library that I put in an S3 bucket, and now I want to use Zeppelin with the pyspark interpreter to interact with it. However, I can't find a way to do it. 
Does anybody know how to do so?\nThings that I have tried:\n\nIn Glue it is possible to include an external python library from S3 by specifying 'Python lib path', which makes me think that in Zeppelin it is possible too\nThere are methods such as using the %dep interpreter, but it is only for JAR libraries, while I want to load a python library\n\nAny suggestion is appreciated","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":42,"Q_Id":70576672,"Users Score":0,"Answer":"Never mind, I found the answer.\nWhen creating the dev endpoint you can actually specify 'Python lib path' the same way as you specify it for the Glue job","Q_Score":0,"Tags":"python,amazon-s3,pyspark,aws-glue,apache-zeppelin","A_Id":70587088,"CreationDate":"2022-01-04T09:33:00.000","Title":"Import python external library from S3 in Zeppelin","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"Say I have the program program.py in a non-root directory. Thus, in order to run the program, I have to type python3 program.py every time. Is there a shortcut that I can enable in my unix environment where I type a specific command for a specific program to run? I.e. for the above, I want to just be able to type program and have the program run.\nI am not greatly familiar with unix, but I believe this has something to do with adding something to your PATH; I am unfamiliar with this method. 
Any references and\/or support are greatly appreciated!!\nThanks!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":39,"Q_Id":70588031,"Users Score":0,"Answer":"Add this to the top of your file:\n#!\/usr\/bin\/python3 (or wherever your install is).\nThen chmod +x your_file.py.","Q_Score":0,"Tags":"python,unix","A_Id":70588057,"CreationDate":"2022-01-05T05:07:00.000","Title":"Shortcut for not having to type out `python3 program.py` when running python program","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"A bioinformatics protocol was developed and we would like to dockerize it to make its usage easier for others. It consists of two pieces of software and several python scripts to prepare and parse data. Locally, we run these modules on a cluster as several dependent jobs (some requiring high resources) with a wrapper script, like this:\n\nparsing input data (python)\nrunning a few 10-100 jobs on a cluster (each having a piece of the output of step 1). Every step's job depends on the previous one finishing, involving:\na) compiled C++ software on each piece from 1\nb) a parsing python script on each piece from 2a\nc) another, resource-intensive compiled C++ software, which uses mpirun to distribute all the output of 2b\nfinalizing results (python script) on all results from step 2\n\nThe dockerized version does not necessarily need to be organized in the same manner, but at least 2c needs to be distributed with mpirun because users will run it on a cluster.\nHow could I organize this? Have X different containers in a workflow? Any other possible solution that does not involve multiple containers?\nThank you!\nP.S. 
I hope I described it clearly enough but can further clarify if needed","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":28,"Q_Id":70610947,"Users Score":0,"Answer":"I think in your project it is important to differentiate between docker images and docker containers.\nIn a docker image, you package your code and the dependencies to make it work. The first question is: should all your code be in the same image? You have python scripts and c++ software, so it could make sense to have several images, each capable of running one job of your process.\nA docker container is a running instance of a docker image. So if you decide to have several images, you will have several docker containers running during your process. If you decide to have only one image, then you can decide to run everything in one container, by running your wrapper script in the container. Or you could have a new wrapper script instantiating docker containers for each step. This could be interesting as you seem to use different hardware depending on the step.\nI can't give specifics about mpirun as I'm not familiar with it.","Q_Score":0,"Tags":"python,docker,containers,workflow","A_Id":70712382,"CreationDate":"2022-01-06T17:14:00.000","Title":"How to containerize workflow which consists of several python scripts and softwares?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"It's my first post on this web site. I'm having a problem with running my python code in Windows PowerShell, although my Atom text editor can run it, even though I've already installed the latest python version. 
When I try to run code with PowerShell, it basically tells me that Python cannot be found.\nDo you have any idea how to solve this problem?\nThank you.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":42,"Q_Id":70615151,"Users Score":1,"Answer":"Find python.exe (where \/R \\ python.exe) and add its directory to your PATH (Win+R, systempropertiesadvanced, go to Environment Variables, find PATH in the top box, edit it, then add the python.exe directory to it). Then restart PowerShell.","Q_Score":0,"Tags":"python,powershell","A_Id":70615183,"CreationDate":"2022-01-07T00:15:00.000","Title":"I can't run my python codes in Windows Power Shell despite my text editor can run","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm somewhat new to Ubuntu (16.04, armhf) and am trying to use pytimeextractor, which requires both cython and pyjnius to enable the Java\/Python interaction, but am running into the following error with pyjnius:\nSystemError: Error calling dlopen(b'\/usr\/lib\/jvm\/jdk1.8.0-openjdk-armhf\/jre\/lib\/arm\/server\/libjvm.so': b'\/usr\/lib\/jvm\/jdk1.8.0-openjdk-armhf\/jre\/lib\/arm\/server\/libjvm.so: cannot open shared object file: No such file or directory'\nI was initially having issues with setting JAVA_HOME (getting a KeyError), which led me to purge existing Java installations such as the folder referenced in the SystemError above: \"*\/jdk1.8.0-openjdk-armhf\/...\"\nAfter reinstalling Java and setting JAVA_HOME in \/etc\/environment, then uninstalling and reinstalling pyjnius, it is still pointing to this old, now non-existent Java installation... 
rather than the JAVA_HOME now set (\/usr\/lib\/jvm\/java-1.8.0-openjdk-armhf), and I have not the slightest idea why.\nCould someone please help point me in the right direction of resolving this issue? I feel out of my depth with the extent of Ubuntu knowledge required to accurately diagnose the errors in front of me and swiftly resolve them. Thank you.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":68,"Q_Id":70646863,"Users Score":0,"Answer":"I have managed to resolve this issue. As far as I can tell, I had set JAVA_HOME to point at the correct installation of Java but needed to log out and back in for the changes to take effect.","Q_Score":0,"Tags":"python,java,pyjnius","A_Id":70647367,"CreationDate":"2022-01-10T01:16:00.000","Title":"Pyjnius looking for libjvm.so in wrong folder - Ubuntu 16.04","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"Generic explanation: My application consumes messages from a topic and then splits them into separate topics according to their id, so the topics are named like topic_name_id. My goal is to connect those new topics to a certain sink (s3 or snowflake, haven't decided) so that the messages published in those topics will end up there. However, I've only found ways to do this using a configuration file, where you connect the sink to a topic that already exists and whose name you know. But here the goal would be to connect the sink to a topic created during the process. Is there a way this can be achieved?\nIf the above is not possible, is there a way to connect to the common topic with all the messages, but create different tables (in snowflake) or s3 directories according to the message ID? Adding to that, in case of s3, the messages are added as individual json files, right? 
Is there no way to combine them into one file?\nThanks","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":46,"Q_Id":70656524,"Users Score":0,"Answer":"The outgoing IDs are known, right?\nKafka Connect exposes a REST API: you can generate a JSON HTTP body using those IDs and the finalized topic names, then use requests, for example, to create and start connectors for those topics. You can do that directly from the process before starting the producer, or you can send a request with the ID\/topic name to a lambda job instead, which communicates with the Connect API.\nWhen using different topics with the S3 Sink connector, there will be separate S3 paths and separate files, based on the number of partitions in the topic and the other partitioner settings defined in your connector properties. Most S3 processes are able to read full S3 prefixes, though, so I don't imagine that being an issue.\nI don't have experience with the Snowflake connector to know how it handles different topic names.","Q_Score":0,"Tags":"python,apache-kafka,apache-kafka-connect","A_Id":70657588,"CreationDate":"2022-01-10T17:26:00.000","Title":"Kafka connector sink for topic not known in advance","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am new to the Python \/ Azure VM world.\n\nI have created a Python\/Django website using the PyCharm IDE.\nCreated an Azure VM with Python\/Django installed.\nOn the Azure VM, I can run python manage.py runserver and I can access it externally using the Azure URL.\n\nQuestion:\n\nTo run\/deploy a python website on the VM, do we have to run the python manage.py command, or is there any other way?\nIn case I have to deploy multiple websites, what should I do?\nAlso, the python manage.py session expires pretty soon and the site is not accessible anymore; how to keep it running 
?\n\nRegards, Shakeel","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":65,"Q_Id":70663056,"Users Score":0,"Answer":"After some research I found the solution. The answer to my own question: for my requirements I can use NGINX and Gunicorn :)","Q_Score":0,"Tags":"python,django,azure,virtual-machine","A_Id":70668540,"CreationDate":"2022-01-11T07:31:00.000","Title":"Azure Virtual Machine (Ubuntu) how to deploy multiple Python websites","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":1},{"Question":"I have 2 python files, main.py and test.py. When I run main.py I want to run test.py at some point in time in a new terminal, because if I run it in the same terminal, main.py crashes and closes and the program fails.\nAny ideas how I can do this?","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":30,"Q_Id":70666575,"Users Score":0,"Answer":"I cannot say for sure without reading the code whether it will work, but here are the steps to run two pieces of code simultaneously:\n\nOpen File Explorer, and navigate to where you have saved the files.\nWrite cmd in the address bar of the file explorer.\nWhen the prompt opens up, write the python command that you use to run the first file: py first_file.py, python first_file.py, or python3 first_file.py (substituting the name of your file).\nRepeat steps 2 and 3, but this time, type the name of the second file.","Q_Score":0,"Tags":"python","A_Id":70666628,"CreationDate":"2022-01-11T12:09:00.000","Title":"how to run 2 python files in one go?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have 2 python files main.py and test.py when I run main.py I want to run test.py at some point of time in a new terminal because 
if I run it in the same terminal, main.py crashes and closes and the program fails.\nAny ideas how I can do this?","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":30,"Q_Id":70666575,"Users Score":0,"Answer":"Main.py must call the other files to run them","Q_Score":0,"Tags":"python","A_Id":70666614,"CreationDate":"2022-01-11T12:09:00.000","Title":"how to run 2 python files in one go?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have an application that is hosted through Google App Engine. It is intended to be a file hosting application, where files are uploaded directly to GCS. However, there is some processing that needs to happen with these files, so originally my plan was to download the files, do the modifications, then reupload. Unfortunately, GAE is a read-only file system. What would be the proper way to make file modifications to objects in GCS from GAE? I am unfamiliar with most google cloud services, but I see ones such as google-cloud-dataproc; would these be able to do it?\nOperations are removing lines from files, and combining files into a single .zip","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":70,"Q_Id":70686552,"Users Score":2,"Answer":"You can store the file in the tmpfs partition that you have on App Engine mounted in \/tmp. It's an in-memory file system, and you will use memory to store files. 
If the files are too large, increase the memory size of your App Engine instance, otherwise you will get an out-of-memory error.\nIf the file is too big, you have to use another product.\nRemember to clean up the files after use to free memory space.","Q_Score":1,"Tags":"python,google-app-engine,google-cloud-platform,google-cloud-storage","A_Id":70693303,"CreationDate":"2022-01-12T18:23:00.000","Title":"Modifying files in Google Cloud Storage from Google App Engine","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"Using Python 3.8.\nI have a module that imports pgpy for encryption\\decryption.\nWhen run manually, everything works as expected.\nHowever, when it is called by a Python scheduler running as a Windows service, it constantly throws the error:\nDLL load failed while importing _openssl: The specified module could not be found.\n\nI've looked at other solutions that talk about having the specific dlls in the DLL path, but that hasn't helped me.\nlibcrypto-1_1.dll, libcrypto-1_1-x64.dll, libssl-1_1.dll, and libssl-1_1-x64.dll are all located in the Python38\\DLLs folder (and the Scripts folder also for some reason).\nAgain, the script runs correctly with no issue when run manually. 
It's only when it's called by a scheduler run under a Windows service that it fails.\nLooking for any advice or clue as to what I might be able to do here.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":191,"Q_Id":70702974,"Users Score":0,"Answer":"pip install --upgrade pip\npip uninstall pyopenssl cryptography\npip install pyopenssl cryptography\nThen test the OpenSSL import:\npython -v -c 'from OpenSSL import SSL'","Q_Score":0,"Tags":"python-3.8","A_Id":71550539,"CreationDate":"2022-01-13T20:51:00.000","Title":"Python as a Windows service: DLL load failed while importing _openssl","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a Django application that a former colleague of mine developed. It was created in Windows, but I need to deploy it on a Linux server. The requirements.txt fails to install all requirements on the Linux server. Specifically, the mysql, mysqlconnector, and version of psutil fail to install.\nI cannot find any documentation on converting a Windows Django project to be compatible with Linux. Instead, most sources say that Django apps will work in either environment interchangeably, but this does not seem to be true, since I cannot install the requirements or migrate the project.","AnswerCount":1,"Available Count":1,"Score":-0.1973753202,"is_accepted":false,"ViewCount":95,"Q_Id":70703327,"Users Score":-1,"Answer":"The MySQL connector can be tricky. You can remove it from the requirements and install other packages. 
That connector only affects your database.\nYou can download a binary distribution of the connector that suits your Linux system and install it.","Q_Score":1,"Tags":"python,django,windows,amazon-linux-2","A_Id":70703381,"CreationDate":"2022-01-13T21:30:00.000","Title":"Django App developed in Windows will not run on Linux","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I'm learning fastapi from a YouTube class.\nI succeeded, except for the uvloop module.\nI realized that uvloop doesn't install on Windows, and my development environment is Windows + PyCharm.\nHow are others using this module? Are they only using Mac?\nWhat should I do?\nShould I view other videos, remove uvloop, or replace uvloop?\nHelp me.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":406,"Q_Id":70731019,"Users Score":0,"Answer":"FastAPI itself does not depend on uvloop. The transitive dependency uvicorn does, however, when installed with the so-called standard extras. But uvicorn[standard] is just an extra dependency and not a required one. 
So if you just install fastapi without any extras, and uvicorn without extras, you should be good to go.","Q_Score":0,"Tags":"python,uvloop","A_Id":70731420,"CreationDate":"2022-01-16T14:31:00.000","Title":"Is it impossible developing with fastApi, uvloop, windows?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm trying to customize the bash prompt with the shell variable PS1.\nHow can I access the python virtual environment name to apply color formatting?\nPS1 is set to: \\u@\\h in \\W \\$\nI would expect the output to be user@host in ~ $\nBut I get (base) user@host in ~ $ ((base) (venv) user@host in ~ $ when using a virtual environment named venv).\nIf possible, it would also be great to only display (venv) instead of (base) (venv), or is there any use case where the (base) addition makes sense?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":133,"Q_Id":70763301,"Users Score":1,"Answer":"The activate script makes a one-time change, hard-coding the name of the virtual environment into the current value of PS1. 
You can disable this by adding VIRTUAL_ENV_DISABLE_PROMPT to your environment (any non-empty value will do), and use the value of $(basename $VIRTUAL_ENV) to customize your prompt however you like.","Q_Score":1,"Tags":"python,bash,shell,python-venv,ps1","A_Id":70763349,"CreationDate":"2022-01-18T22:46:00.000","Title":"Format python virtual environment name with shell variable PS1 on bash prompt","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"So whenever I am trying to read from a source with a stream, I get the error \"A file referenced in the transaction log cannot be found\", and it points to a file that does not exist.\nI have tried:\n\nChanging the checkpoint location\nChanging the start location\nRunning \"spark._jvm.com.databricks.sql.transaction.tahoe.DeltaLog.clearCache()\"\n\nIs there anything else I could do?\nThanks in advance guys n girls!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":166,"Q_Id":70773415,"Users Score":0,"Answer":"So! I had another stream that was running, and it had the same parent directory as this stream.. 
this seems to have been an issue.\nThe first stream was looking in: .start(\"\/mnt\/dev_stream\/first_stream\")\nThe second stream was looking in: .start(\"\/mnt\/dev_stream\/second_stream\")\nEditing the second stream to look in .start(\"\/mnt\/new_dev_stream\/new_second_stream\") fixed this issue!","Q_Score":0,"Tags":"python,databricks,azure-databricks,databricks-connect","A_Id":70773744,"CreationDate":"2022-01-19T15:37:00.000","Title":"Databricks streaming \"A file referenced in the transaction log cannot be found\"","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"First off, I might have formulated the question inaccurately; feel free to modify it if necessary.\nAlthough I am quite new to docker and all its stuff, I somehow managed to create an image (v2) and a container (cont) on my Win 11 laptop. And I have a demo.py which requires an .mp4 file as an arg.\nNow, if I want to run the demo.py file, 1) I go to the project's folder (where demo.py lives), 2) open cmd and 3) run: docker start -i cont. This starts the container as:\nroot:834b2342e24c:\/project#\nThen, I should copy 4) my_video.mp4 from the local project folder to the container's project\/data folder (with another cmd) as follows:\ndocker cp data\/my_video.mp4 cont:project\/data\/.\nThen I run: 5) python demo.py data\/my_video.mp4. After a while, it makes two files: my_video_demo.mp4 and my_video.json in the data folder in the container. Similarly, I should copy them back to my local project folder: 6)\ndocker cp cont:project\/data\/my_video_demo.mp4 data\/, docker cp cont:project\/data\/my_video_demo.json data\/\nOnly then can I go to my local project\/data folder and inspect the files.\nI want to be able to just run a particular command that does 4) - 6) all in one.\nI have read about the -v option where, in my case, it would be(?) 
-v \/data:project\/data, but I don't know how to proceed.\nIs it possible to do what I want? If everything is clear, I hope to get your support. Thank you.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":38,"Q_Id":70784190,"Users Score":0,"Answer":"Well, I think I've come up with a way of dealing with it.\nI have learned that with -v one can create a place that is shared between the local host and the container. All you need to do is run docker and provide -v as follows:\ndocker run --gpus all -it -v C:\/Workspace\/project\/data:\/project\/data v2:latest python demo.py data\/my_video_demo.mp4\n--gpus - GPU devices to add to the container ('all' to pass all GPUs);\n-it - tells Docker to open an interactive container instance.\nNote that every time you run this, it will create a new container, because docker run always creates a new one.\nPartial credit to @bill27","Q_Score":0,"Tags":"python,docker,cmd,windows-11","A_Id":70811427,"CreationDate":"2022-01-20T10:01:00.000","Title":"how to execute a python program (with args) that uses docker container?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a simple python script that I would like to run thousands of instances of on GCP (at the same time). 
This script is triggered by the $Universe scheduler, something like \"python main.py --date '2022_01'\".\nWhat architecture and technology do I have to use to achieve this?\nPS: I cannot drop $Universe, but I'm not against suggestions to use other technologies.\nMy solution:\n\nI already have a $Universe server running all the time.\nCreate a Pub\/Sub topic\nCreate a permanent Compute Engine instance that listens to Pub\/Sub all the time\n$Universe sends thousands of events to Pub\/Sub\nThe Compute Engine instance triggers the creation of a Python Docker image on another Compute Engine instance\nScale the creation of the Docker images (I don't know how to do it)\n\nIs it a good architecture?\nHow to scale this kind of process?\nThank you :)","AnswerCount":2,"Available Count":2,"Score":0.1973753202,"is_accepted":false,"ViewCount":171,"Q_Id":70820906,"Users Score":2,"Answer":"As I understand, you will have a lot of simultaneous calls to custom Python code triggered by an orchestrator ($Universe), and you want it on the GCP platform.\nLike @al-dann, I would go with a serverless approach in order to reduce the cost.\nAs I also understand, Pub\/Sub seems to be unnecessary; you could easily trigger the function from any HTTP call and avoid Pub\/Sub.\nPub\/Sub is necessary only to have some guarantee (at-least-once processing), but you can have the same behaviour if $Universe validates the HTTP request for every call (look at the HTTP response code & body and retry if they don't match the expected result).\nIf you want to have exactly-once processing, you will need more tooling; you are close to event streaming (that could be a good use case as I also understand). In that case, staying fully on GCP, I would go with Pub\/Sub & Dataflow, which can guarantee exactly-once, or Kafka & Kafka Streams or Flink.\nIf at-least-once processing is fine for you, I would go with the HTTP version, which will be simple to maintain, I think. 
You will have 3 serverless options for that case:\n\nApp Engine standard: scales to 0, you pay for the CPU usage; it can be more affordable than the function below if the requests are constrained to a short period (a few hours per day, since the same hardware will process many requests)\nCloud Functions: you pay per request (+ CPU, memory, network, ...) and don't have to think about anything other than code, but the executed code is constrained to a proprietary solution.\nCloud Run: my preferred one, since it has the same pricing as Cloud Functions but you gain portability; the application is a simple Docker image that you can move easily (to Kubernetes, Compute Engine, ...) and you can change the execution engine depending on cost (if the load changes between the study and the real world).","Q_Score":0,"Tags":"python,python-3.x,google-cloud-platform,architecture,scalability","A_Id":70826005,"CreationDate":"2022-01-23T09:57:00.000","Title":"Run & scale simple python scripts on Google Cloud Platform","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a simple python script that I would like to run thousands of instances of on GCP (at the same time). 
This script is triggered by the $Universe scheduler, something like \"python main.py --date '2022_01'\".\nWhat architecture and technology do I have to use to achieve this?\nPS: I cannot drop $Universe, but I'm not against suggestions to use other technologies.\nMy solution:\n\nI already have a $Universe server running all the time.\nCreate a Pub\/Sub topic\nCreate a permanent Compute Engine instance that listens to Pub\/Sub all the time\n$Universe sends thousands of events to Pub\/Sub\nThe Compute Engine instance triggers the creation of a Python Docker image on another Compute Engine instance\nScale the creation of the Docker images (I don't know how to do it)\n\nIs it a good architecture?\nHow to scale this kind of process?\nThank you :)","AnswerCount":2,"Available Count":2,"Score":0.1973753202,"is_accepted":false,"ViewCount":171,"Q_Id":70820906,"Users Score":2,"Answer":"It might be very difficult to discuss architecture and design questions, as they usually are heavily dependent on the context, scope, functional and non-functional requirements, cost, available skills and knowledge, and so on...\nPersonally I would prefer to stay with an entirely serverless approach if possible.\nFor example, use Cloud Scheduler (serverless cron jobs), which sends messages to a Pub\/Sub topic, on the other side of which there is a Cloud Function (or something else), which is triggered by the message.\nWhether it should be a Cloud Function or something else, and what it should do and how - that depends on your case.","Q_Score":0,"Tags":"python,python-3.x,google-cloud-platform,architecture,scalability","A_Id":70822075,"CreationDate":"2022-01-23T09:57:00.000","Title":"Run & scale simple python scripts on Google Cloud Platform","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have written a Python script using Selenium to automate a workflow on a third-party 
website. It works fine on my machine.\nBut when I try to run the same script on a GCP instance, I get Cloudflare's 1020 Access Denied error. I am using Google Chrome headless as the Selenium webdriver.\nI am guessing the website owner has put a blanket firewall restriction on GCP instance external IPs.\nMy questions:\n\nDoes my assumption make sense? Is it even possible to put such a restriction?\nHow do I bypass the firewall? What if I set a static IP on the GCP instance? Or is there some way to use a VPN through headless Chrome?\nWould changing the cloud provider help? Any less well-known cloud provider which won't be blocked?\n\nAny other suggestions?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":204,"Q_Id":70835801,"Users Score":0,"Answer":"Yes, the Cloudflare firewall can block IP ranges amongst other options, so this is entirely possible.\nI'm not sure you should ask how to circumvent security. A static IP might work or it might not; it depends entirely on the unknown restrictions set by the website operator. Again, a VPN may or may not work depending on what restrictions the website operator set up.\nSince we can't know what restrictions are in place, another cloud provider might work or it might not.
It could also stop working if the website operator decides to block that IP range as well.\n\nThe only way to be sure is to ask the website operator.","Q_Score":0,"Tags":"python,selenium,google-cloud-platform,firewall,cloudflare","A_Id":70836559,"CreationDate":"2022-01-24T14:53:00.000","Title":"Cloudflare gives access denied when accessing a website from GCP instance","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a Python application (on a Windows machine) connecting to an on-prem SQL Server to fetch data and run some Python functions.\nI want this application to keep checking the data periodically.\nSo, I deployed this application on AWS ECS and set up the cron job using Lambda.\nThe problem I am facing: in the CloudWatch logs I can see the error \"timeout: invalid time interval 'm'\"","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":34,"Q_Id":70840767,"Users Score":0,"Answer":"I created and ran my application in a Windows environment on my machine. But when I deploy the code to AWS, in particular when the application is triggered by Lambda, it runs in a Linux environment, so the working-directory path differs between Windows and Linux. For example:\nin Windows: app\/foldername\/.py\nin Linux: src\/foldername\/.py\nSo because of this small working-directory issue, it fails to locate the code path when Lambda triggers.
That is the reason for the server connection timeout error.","Q_Score":0,"Tags":"python,aws-lambda,amazon-ecs","A_Id":70906479,"CreationDate":"2022-01-24T21:31:00.000","Title":"deployed application fails with timeout","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"This is what occurs when installing pyinstaller; please help me solve this.\n\npip install pyinstaller\nCollecting pyinstaller\nUsing cached pyinstaller-4.8-py3-none-win_amd64.whl (2.0 MB)\nInstalling collected packages: pyinstaller\nWARNING: Failed to write executable - trying to use .deleteme logic\nERROR: Could not install packages due to an OSError: [WinError 2] The system cannot find the file specified: 'C:\\Python310\\Scripts\\pyi-archive_viewer.exe' -> 'C:\\Python310\\Scripts\\pyi-archive_viewer.exe.deleteme'","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":221,"Q_Id":70844918,"Users Score":0,"Answer":"I had a similar problem (\"Requirement already satisfied\") with the Pillow module. I saw it in the IDE, but it was broken. So I deleted the package folder in C:\\Python310\\Lib\\site-packages and reinstalled it. You could try to do the same.","Q_Score":0,"Tags":"python,python-3.x,windows,pyinstaller","A_Id":70944107,"CreationDate":"2022-01-25T07:37:00.000","Title":"While installing pyinstaller, this occured : ERROR: Could not install packages due to an OSError: [WinError 2]","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm using CentOS 7 and the latest Anaconda release with Python 3.9 to build a web server, but pip install uwsgi returned an error: \"libpython3.9.a\" not found.
Only \"libpython3.9.so\" was provided by Anaconda3.\nIt seems that there are some solutions for macOS and Debian, but I found nothing for CentOS 7. Should I yum install something?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":270,"Q_Id":70858200,"Users Score":0,"Answer":"Use make to build and install the Python source code into a temp dir.\nFind lib\/libpython3.9.a in your Python install dir.\nCopy lib\/libpython3.9.a to your conda env path (eg: anaconda3\/envs\/\/lib\/python3.9\/config-3.9-x86_64-linux-gnu\/); this path comes from the uwsgi install error log (like gcc: error: xxx\/libpython3.9.a: No such file or directory).\nRe-run pip install uwsgi and the error should be fixed.","Q_Score":0,"Tags":"centos7,uwsgi,python-3.9,anaconda3","A_Id":71908555,"CreationDate":"2022-01-26T02:56:00.000","Title":"make error in conda env need libpython3.9.a but only have libpython3.9.so","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm trying to deploy my Django 4.0.1 application to Google App Engine. But I'm receiving an error:\n\nCould not find a version that satisfies the requirement Django==4.0.1.\n\nOn localhost the app works fine.
The same error I have with asgiref==3.5.0\nThe full text of this error:\n\nERROR: Could not find a version that satisfies the requirement\nDjango==4.0.1 (from -r requirements.txt (line 6)) (from versions:\n1.1.3, 1.1.4, 1.2, 1.2.1, 1.2.2, 1.2.3, 1.2.4,\n1.2.5, 1.2.6, 1.2.7, 1.3, 1.3.1, 1.3.2, 1.3.3, 1.3.4, 1.3.5, 1.3.6, 1.3.7, 1.4, 1.4.1, 1.4.2, 1.4.3, 1.4.4, 1.4.5, 1.4.6, 1.4.7, 1.4.8, 1.4.9, 1.4.10, 1.4.11, 1.4.12, 1.4.13, 1.4.14,\n1.4.15, 1.4.16, 1.4.17, 1.4.18, 1.4.19, 1.4.20, 1.4.21, 1.4.22, 1.5, 1.5.1, 1.5.2, 1.5.3, 1.5.4, 1.5.5, 1.5.6, 1.5.7, 1.5.8, 1.5.9, 1.5.10, 1.5.11, 1.5.12, 1.6, 1.6.1, 1.6.2, 1.6.3, 1.\n6.4, 1.6.5, 1.6.6, 1.6.7, 1.6.8, 1.6.9, 1.6.10, 1.6.11, 1.7, 1.7.1, 1.7.2, 1.7.3, 1.7.4, 1.7.5, 1.7.6, 1.7.7, 1.7.8, 1.7.9, 1.7.10, 1.7.11, 1.8a1, 1.8b1, 1.8b2, 1.8rc1, 1.8, 1.8.1, 1.8 .2, 1.8.3, 1.8.4, 1.8.5, 1.8.6, 1.8.7, 1.8.8, 1.8.9, 1.8.10, 1.8.11, 1.8.12, 1.8.13, 1.8.14, 1.8.15, 1.8.16, 1.8.17, 1.8.18, 1.8.19, 1.9a1, 1.9b1, 1.9rc1, 1.9rc2, 1.9, 1.9.1, 1.9.2, 1.\n9.3, 1.9.4, 1.9.5, 1.9.6, 1.9.7, 1.9.8, 1.9.9, 1.9.10, 1.9.11, 1.9.12, 1.9.13, 1.10a1, 1.10b1, 1.10rc1, 1.10, 1.10.1, 1.10.2, 1.10.3, 1.10.4, 1.10.5, 1.10.6, 1.10.7, 1.10.8, 1.11a1, 1. 
11b1, 1.11rc1, 1.11, 1.11.1, 1.11.2, 1.11.3, 1.11.4, 1.11.5, 1.11.6, 1.11.7, 1.11.8, 1.11.9, 1.11.10, 1.11.11, 1.11.12, 1.11.13, 1.11.14, 1.11.15, 1.11.16, 1.11.17, 1.11.18, 1.11.20, 1 .11.21, 1.11.22, 1.11.23, 1.11.24, 1.11.25, 1.11.26, 1.11.27, 1.11.28, 1.11.29, 2.0a1, 2.0b1, 2.0rc1, 2.0, 2.0.1, 2.0.2, 2.0.3, 2.0.4, 2.0.5, 2.0.6, 2.0.7, 2.0.8, 2.0.9, 2.0.10, 2.0.12 , 2.0.13, 2.1a1, 2.1b1, 2.1rc1, 2.1, 2.1.1, 2.1.2, 2.1.3, 2.1.4, 2.1.5, 2.1.7, 2.1.8, 2.1.9, 2.1.10, 2.1.11, 2.1.12, 2.1.13, 2.1.14, 2.1.15, 2.2a1, 2.2b1, 2.2rc1, 2.2, 2.2.1, 2.2.2, 2.\n2.3, 2.2.4, 2.2.5, 2.2.6, 2.2.7, 2.2.8, 2.2.9, 2.2.10, 2.2.11, 2.2.12, 2.2.13, 2.2.14, 2.2.15, 2.2.16, 2.2.17, 2.2.18, 2.2.19, 2.2.20, 2.2.21, 2.2.22, 2.2.23, 2.2.24, 2.2.25, 2.2.26, 3 .0a1, 3.0b1, 3.0rc1, 3.0, 3.0.1, 3.0.2, 3.0.3, 3.0.4, 3.0.5, 3.0.6, 3.0.7, 3.0.8, 3.0.9, 3.0.10, 3.0.11, 3.0.12, 3.0.13, 3.0.14, 3.1a1, 3.1b1, 3.1rc1, 3.1, 3.1.1, 3.1.2, 3.1.3, 3.1.4,\n3.1.5, 3.1.6, 3.1.7, 3.1.8, 3.1.9, 3.1.10, 3.1.11, 3.1.12, 3.1.13, 3.1.14, 3.2a1, 3.2b1, 3.2rc1, 3.2, 3.2.1, 3.2.2, 3.2.3, 3.2.4, 3.2.5, 3.2.6, 3.2.7, 3.2.8, 3.2.9, 3.2.10, 3.2.11) Step #1: ERROR: No matching distribution found for Django==4.0.1 (from -r\nrequirements.txt (line 6)) Step #1: WARNING: You are using pip version\n20.2.2; however, version 21.3.1 is available. Step #1: You should consider upgrading via the '\/env\/bin\/python -m pip install --upgrade\npip' command.\n\nI have Google Cloud SDK 369.0.0\nWhat is the reason and how to fix it?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":158,"Q_Id":70870462,"Users Score":2,"Answer":"The error is telling you that the maximum version of Django available on GCP for GAE is 3.2.11. 
Instead of specifying version 4.0.1 in your requirements.txt file, you can either use a lower version (any of the ones listed in your error) or not specify a version, in which case GAE will pick the latest that it has.\nNote: Google (cloud providers in general) don't always support the highest version of an app\/package immediately. It usually takes them a bit of time to add support for it, whereas you can download the latest version to your local environment (your computer) and work with it.","Q_Score":0,"Tags":"python-3.x,django,google-app-engine,google-cloud-platform","A_Id":70872771,"CreationDate":"2022-01-26T21:26:00.000","Title":"GCP: Google App Engine flexible + Django 4.0.1 problem on deployment","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I'm getting the OS error \"No space left on device\" when saving somewhere north of 17 million small files into a single directory on Amazon SageMaker local storage. I'm using the numpy.save function, Python 3.8.12. df -h shows that the drive is only about 80% full. cat \/proc\/sys\/fs\/file-max returns 6,269,329, which is a lot less than the number of saved files; df -i returns 28% IUse% for the parent folder.
What could be the problem here?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":85,"Q_Id":70900116,"Users Score":0,"Answer":"SageMaker Notebook instances are equipped with an \"ML Volume\" mounted at \/home\/ec2-user\/SageMaker, whose size you can control; it is 5GB by default and can be up to 16TB.\nIf you have the right permissions, you can increase the volume size (it requires switching the instance off and restarting it).\nThen, edit your code to make sure it writes to that folder.","Q_Score":1,"Tags":"python,amazon-sagemaker","A_Id":70945807,"CreationDate":"2022-01-28T20:46:00.000","Title":"No space left on device: Amazon Sagemaker can't save more files in directory even when HDD is not full","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"Does anyone know if I can install a python library's submodule with poetry?
I can install it with pip:\npip install library_a[sub_library]\nHow can I do this in poetry?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":57,"Q_Id":70928712,"Users Score":0,"Answer":"To do that in poetry you can use\npoetry add \"library_a[sub_library]\"\nMake sure to add the quotes (\") to make it work.","Q_Score":0,"Tags":"python-3.x,pip,python-poetry","A_Id":70928772,"CreationDate":"2022-01-31T15:41:00.000","Title":"Install part of a pypi library with poetry","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"So I've set up an OpenSSH server inside of a Windows virtual machine, and I've connected to the user I want to control, e.g.\n\nssh user@ip_address\n\nAnd I want to run, for example, a Python program, Notepad, or anything else as if it were executed by the user I have open on my virtual machine.\nHow would I achieve this? Currently it's running things in its own instance without a GUI, and modifying files etc. does seem to sync.\nThe use case I need is running a Python application that needs access to the screen & needs to send key\/mouse inputs etc.\nI've tried PsExec to run things, but I can't seem to get it working either.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":30,"Q_Id":70942467,"Users Score":0,"Answer":"Oops, it will not be that easy. By default ssh only opens a bi-directional stream, which is fine for command-line applications or at most for full-screen terminal applications like vi. It has built-in features to transport an X11 desktop, but Windows uses a different protocol, which is RDP.\nIf the client machine is also a Windows machine, it is still fairly easy, because ssh allows port forwarding: you map a local port on your machine to a remote port on the server.
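The port-forwarding step described above can be sketched as a single command; the user name, host, and local port 13389 here are placeholder values, not from the original post:

```shell
# Forward local port 13389 to port 3389 (the default RDP port) on the server.
# -N keeps the tunnel open without running a remote command.
ssh -N -L 13389:localhost:3389 user@server.example.com
```

A remote-desktop client pointed at localhost:13389 then reaches the server's RDP service through the tunnel.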
The configuration actually depends on the client software, but for the excellent PuTTY, you can configure it through its menus.\nOnce this is done, you connect your local remote-desktop client to that local port, and you will get a remote desktop of the server. Now you can use any GUI application on the server.\nIf the client machine is Unix-like (Linux), then you will have to find an RDP client. You could Google for KRDC, Remmina, rdesktop or xfreerdp...\n\nOf course, if no firewall exists between the client and the server, you could just bypass the ssh tunnel and directly connect to the RDP port on the server machine (by default it should be 3389)...","Q_Score":0,"Tags":"python,windows,virtual-machine,openssh","A_Id":70942761,"CreationDate":"2022-02-01T14:42:00.000","Title":"How do I run applications as if they were being run by an active windows user? OpenSSH Server","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have an APK built with buildozer that works fine with test ads on an Android device (my phone). In the buildozer.spec file I have this line:\nandroid.meta_data = com.google.android.gms.ads.APPLICATION_ID=ca-app-pub-3940256099942544~3347511713\nThe app is not published on the Play Store yet, and I have two types of ads, banners and interstitials. The question is: do I get two ad units from AdMob, one for each type, and if so, what do I do with this android.meta_data line?\nIn buildozer.spec it's said that this is a list:\n# (list) Android application meta-data to set (key=value format)\nandroid.meta_data = com.google.android.gms.ads.APPLICATION_ID=ca-app-pub-3940256099942544~3347511713\nAdditionally I have this line:\nandroid.gradle_dependencies = 'com.google.firebase:firebase-ads:10.2.0'\nSo I'm really confused about what to do exactly","AnswerCount":1,"Available
Count":1,"Score":0.0,"is_accepted":false,"ViewCount":26,"Q_Id":70964922,"Users Score":0,"Answer":"It's simple: go to the *android.meta_data = * line and instead of this:\nandroid.meta_data = com.google.android.gms.ads.APPLICATION_ID=ca-app-pub-3940256099942544~3347511713\nenter this:\nandroid.meta_data = [com.google.android.gms.ads.APPLICATION_ID=ca-app-pub-3940256099942544\/1033173712,com.google.android.gms.ads.APPLICATION_ID=ca-app-pub-3940256099942544\/6300978111]","Q_Score":0,"Tags":"python,android,kivy,admob,buildozer","A_Id":70965143,"CreationDate":"2022-02-03T01:27:00.000","Title":"Ad units for banners and interstitial with buildozer?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I use iTerm2 as my default terminal app in OSX. I would like to ask if it is possible to insert text automatically with a hotkey (or by selecting it from a menu) in iTerm, like the old macros of some word-processing packages.
My idea is, for example, that if I press cmd + ctrl + s, iTerm automatically inserts \"sftp -i\".\nI know that iTerm has support for scripting with AppleScript and Python, but I'm not sure how to do this","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":163,"Q_Id":70971049,"Users Score":0,"Answer":"You can also use keyboard shortcuts.\nFrom Preferences -> Keys, define a new shortcut and use the action \"Send Text\".\nThis works on iTerm2 build 3.4.15.","Q_Score":1,"Tags":"python,scripting,applescript,iterm","A_Id":72213801,"CreationDate":"2022-02-03T12:10:00.000","Title":"Is it possible to create a macro \/ script in iTerm2 that insert text automatically selected from a list?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I think the title pretty much sums up my requirement. I would appreciate it if anyone could post how many types of HDFS clusters there are (Kerberos, etc.) and also which library is best for connecting to each type of cluster using Python.\nThank you","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":16,"Q_Id":70972316,"Users Score":1,"Answer":"There's only one type of HDFS, distributed by the Apache Hadoop project.
There are several Hadoop-compatible file systems such as Amazon S3 or GlusterFS.\nKerberos is an authentication system, not a type of Hadoop filesystem.\nIf you want robust Hadoop communication from Python, PySpark would be ideal; otherwise you can interface with the WebHDFS APIs using several other Python libraries that you'd find with a simple search.","Q_Score":1,"Tags":"python,hadoop,hdfs","A_Id":70988898,"CreationDate":"2022-02-03T13:40:00.000","Title":"How many types of HDFS Clusters are there and what is the best way to connect to HDFS Cluster using Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"Excuse me if this post should be on a forum other than Stack Overflow.\nOn a Linux cluster I am running a Python script 24\/7 to connect to a data stream, process it, and push it to a database. A crontab file is set up to monitor that the script is running, and if it stops it will be started again.\nI need to edit the script and test it, so I created a git branch for that. I want to make sure that if the original script (on branch main) stops, the crontab will run it and not the modified file on the new branch.\nMy two questions are:\n\nDoes crontab run scripts on main (or master) by default?\nHow can I specify the git branch of the file I want to run when calling it?
(Mostly for verbosity)","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":52,"Q_Id":70977997,"Users Score":1,"Answer":"Crontab runs a file, so whatever state the local repo is in, whether it's updated to main\/master, pulled to another branch, or edited locally, it will run that file.\nIf you want to explicitly run a particular branch, my advice is to point cron at a separate script (which could take the branch as a parameter) that prepares the script location however you want and then runs it.","Q_Score":0,"Tags":"python,git,cron","A_Id":70978168,"CreationDate":"2022-02-03T20:41:00.000","Title":"Will crontab run scripts on git branch master or main by default?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"Hello, can someone help me with this?\nSo the issue is that I started using Linux and I need to make my \"python3\" command just \"python\". For example, \"python3 run.py\" is what I have now, but I want just \"python run.py\". If someone can help me it would make my day","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":83,"Q_Id":70987477,"Users Score":0,"Answer":"1. Go to the terminal and type python --version\n2. Log in with root privileges: sudo su\n3. Execute this command: update-alternatives --install \/usr\/bin\/python python \/usr\/bin\/python3 1\n4.
Check the version again.","Q_Score":0,"Tags":"python,linux,terminal,line,default","A_Id":70987583,"CreationDate":"2022-02-04T13:50:00.000","Title":"How to set \"python\" as default","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"Hello, can someone help me with this?\nSo the issue is that I started using Linux and I need to make my \"python3\" command just \"python\". For example, \"python3 run.py\" is what I have now, but I want just \"python run.py\". If someone can help me it would make my day","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":83,"Q_Id":70987477,"Users Score":0,"Answer":"You can install the python-is-python3 package:\n$ sudo apt install python-is-python3","Q_Score":0,"Tags":"python,linux,terminal,line,default","A_Id":70987617,"CreationDate":"2022-02-04T13:50:00.000","Title":"How to set \"python\" as default","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am running a cryptocurrency-trading Python program on my laptop.\nIt monitors the market every second, and when a condition is satisfied, it attempts transactions.\nWhen I run it in Windows CMD, it causes the problems below.\n\n1. Sometimes it halts until I click the CMD window and press the 'enter' key\n\n\n2.
Sometimes it causes many unknown errors.\n\nHowever, when I run it in VSCode, it does not cause any problems.\nI wonder what makes the difference between those two environments.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":24,"Q_Id":71033972,"Users Score":1,"Answer":"I have had some issues with VSCode not finding libraries and similar, and the reason for that is that VSCode runs its own Python. In IDLE (and CMD) you run the raw Python in AppData, but VSCode runs an executable downloaded via extensions in the Program Files folder.\nMy hypothesis is that the Python in the AppData folder is corrupted or wrong in some way, but the one in the Program Files folder is correct. It may even be a different Python version.\nTry reinstalling Python from the official Python website, and run it again. Also double-check that the VSCode Python extension version is the same as the one saved in \"C:\\Users\\YOURUSERNAME\\AppData\\Local\\Programs\\Python\\Python39\".\nHope it works!","Q_Score":1,"Tags":"python,visual-studio,windows-10","A_Id":71034207,"CreationDate":"2022-02-08T12:32:00.000","Title":"Running python file on windows CMD vs VScode","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I use BashOperator to execute a Python file called app.py in Airflow.\nI wrote another Python script called to_es.py.
There is a function called \"df_to_es()\" in it.\napp.py should call df_to_es() via from utils.to_es import df_to_es, but Airflow throws an error in red words: 'there is no module called \"def_to_es\"'.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":92,"Q_Id":71034221,"Users Score":0,"Answer":"Finally I have come back to answer my own question.\nEven though Airflow may indicate that there is a DAG import error, if you use BashOperator to execute your Python script and you import your own Python functions, classes and modules in that script, they work smoothly as long as you don't have other errors. Just double-check that you are using the correct Airflow DAG directory.\nSo just ignore that DAG import error if you are in my situation. This is something the Airflow development team needs to improve, with something like a unit test.","Q_Score":0,"Tags":"python,module,airflow","A_Id":71115168,"CreationDate":"2022-02-08T12:50:00.000","Title":"I use BashOperator to execute a python file in Airflow, how to import other self-defined function?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I just set up an Ubuntu deep-learning instance on AWS and would like to run my existing Jupyter notebook there. I'm working on creating a CNN model on a new image dataset.\nI'm stuck at reading my huge image files on my local drive from this remote server.\nHow can I read the files\/folders on my local drive via this Jupyter notebook on the instance?\nIs there another solution besides uploading the dataset?","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":96,"Q_Id":71041103,"Users Score":2,"Answer":"I'm not familiar yet with awscli; instead I transfer my dataset to the instance using WinSCP. So far, it has worked well.
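For reference, the same kind of transfer can also be done from a terminal with scp; this is only a sketch, and the key file, dataset path, and host below are placeholder values:

```shell
# Recursively copy a local dataset folder to the EC2 instance over SSH.
scp -i my-key.pem -r ./dataset ubuntu@ec2-12-34-56-78.compute-1.amazonaws.com:~/data/
```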
But I would appreciate any advice or suggestions about other methods that can be used besides WinSCP.","Q_Score":1,"Tags":"python,amazon-ec2,jupyter-notebook,remote-server","A_Id":71063874,"CreationDate":"2022-02-08T21:26:00.000","Title":"Accessing local files via jupyter notebook on remote AWS server","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"Error Trace:\nImportError: \/lib\/arm-linux-gnueabihf\/libc.so.6: version `GLIBC_2.33' not found (required by \/home\/pi\/.local\/lib\/python3.7\/site-packages\/grpc\/_cython\/cygrpc.cpython-37m-arm-linux-gnueabihf.so)\nScenario:\nI'm using the Google Cloud Vision API to detect text in images. The program works fine on my laptop but gives the above-mentioned error when run on a Raspberry Pi. I've searched a lot but couldn't find any working solution. I'd really appreciate it if anyone could let me know how to solve this.","AnswerCount":3,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":1957,"Q_Id":71054519,"Users Score":1,"Answer":"GLIBC and the kernel of the OS go hand in hand; you basically need a newer version of your OS if you need a more recent GLIBC.\nThe installed GLIBC version can be quickly found with the following command:\nldd --version","Q_Score":4,"Tags":"python,raspberry-pi,glibc,libc,google-cloud-vision","A_Id":71055926,"CreationDate":"2022-02-09T18:02:00.000","Title":"GLIBC_2.33 not found in raspberry pi python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"We have a microservice application deployed in a Kubernetes cluster. We need to run the Locust load-test tool on the Kubernetes cluster and send the metrics to Prometheus.
How can we get the Locust load-test data into Prometheus in the Kubernetes cluster environment?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":66,"Q_Id":71062315,"Users Score":0,"Answer":"If your service already sends metrics to Prometheus, then any load test will trigger them just like any other REST calls.","Q_Score":0,"Tags":"python,kubernetes,microservices,prometheus,locust","A_Id":71109600,"CreationDate":"2022-02-10T09:01:00.000","Title":"How to get the locust loadtest data to prometheus in the kubernetes cluster environment","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm working on a Windows application that will show the time when my system was started. I tried to do something with WMI: I took SystemUpTime, but it gave me the time since the last startup. I am looking for the first startup per day, so for example if the user turns the computer on at 7:00 and later restarts it at 9:00, it should still show 7:00.
Is there any library which can be helpful?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":25,"Q_Id":71066872,"Users Score":0,"Answer":"I'm not aware of a library that would do this.\nOne option would be to:\n\nMake your program run on startup.\nSave the boot time each time your program runs.\nDisplay the earliest boot time for the current day.\n\nSave the boot times in sqlite3; it's built into Python these days.","Q_Score":0,"Tags":"python,windows","A_Id":71066976,"CreationDate":"2022-02-10T14:26:00.000","Title":"Time of system startup in python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I believe this question already shows that I am new to docker and alembic. I am building a flask+sqlalchemy app using docker and postgres. So far I am not using alembic, but I am about to plug it in and some questions came up. I will have to create a pg_trgm extension and also populate one of the tables with data I already have. Until now I have only created brand new databases using sqlalchemy for the tests. So here is what I am thinking\/doing:\n\nTo create the extension I could simply add a volume to the postgres docker service like: .\/pg_dump.sql:\/docker-entrypoint-initdb.d\/pg_dump.sql. The extension does not depend on any specific db, so a simple \"CREATE EXTENSION IF NOT EXISTS pg_trgm WITH SCHEMA public;\" would do it, right?\n\nIf I use the same strategy to populate the tables I need a pg_dump.sql that creates the complete db and tables. To accomplish that I first created the brand new database with sqlalchemy, then I used a script to populate the tables with data I have in a json file.
I then generated the complete pg_dump.sql, and now I can place this complete .sql file on the docker service volume, and when I run my docker-compose the postgres container will have the database ready to go.\n\nNow I am starting with alembic, and I am thinking I could just keep the pg_dump.sql to create the extension and have an alembic migration script to populate the empty tables (dropping item 2 above).\n\n\nWhich is the better way: 2, 3, or none of them? Thanks","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":304,"Q_Id":71079732,"Users Score":0,"Answer":"Create the extension in a \/docker-entrypoint-initdb.d script (1). Load the data using your application's migration system (3).\nMechanically, one good reason to do this is that the database init scripts only run the very first time you create a database container on a given storage. If you add a column to a table and need to run migrations, the init-script sequence requires you to completely throw away and recreate the database.
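As an illustration of the data-loading migration the answer recommends, a seeding migration might look roughly like this; the table name, columns, revision ids, and seed.json file are hypothetical examples, not taken from the question:

```python
"""Sketch of an Alembic migration that seeds a table from a JSON file.

All names here (the items table, seed.json, the revision ids) are made up.
"""
import json

import sqlalchemy as sa
from alembic import op

# Hypothetical revision identifiers.
revision = "0002_seed_items"
down_revision = "0001_create_items"


def upgrade():
    # Lightweight table description; just enough columns for bulk_insert.
    items = sa.table(
        "items",
        sa.column("id", sa.Integer),
        sa.column("name", sa.String),
    )
    with open("seed.json") as f:
        rows = json.load(f)  # e.g. [{"id": 1, "name": "foo"}, ...]
    op.bulk_insert(items, rows)  # runs inside the migration's transaction


def downgrade():
    op.execute("DELETE FROM items")
```

This keeps the pg_trgm extension in the init script while the seed data lives in a migration that can evolve with the schema.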
The init script setup is Docker-specific and requires deleting the database to make any change.","Q_Score":0,"Tags":"python,docker,sqlalchemy,alembic","A_Id":71079959,"CreationDate":"2022-02-11T11:58:00.000","Title":"Use alembic migration or docker volumes to populate docker postgres database?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I had miniconda3 installed in ~\/miniconda. I had to reinstall my OS, so I had the entire home directory backed up. After that, I copied (most) dirs back into the newly created home dir. As well as .bashrc (which contains a few lines that make sure conda ends up on the $PATH). Pretty much everything else is the same (same distro, python still installed, the same username).\nWhen trying to run any conda command, I get the error bash: \/home\/andrei\/miniconda3\/bin\/conda: Permission denied. I tried running sudo chown -R andrei:andrei miniconda3 in ~, but I still get the same error when trying to run any conda command.\nHow would I fix this?\nI would prefer to just access the environments I have, as some of the packages were actually compiled\/took a very long time to download.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":15,"Q_Id":71087781,"Users Score":1,"Answer":"Turns out the solution was sudo chmod -R 777 miniconda3. 
Not sure why no other answer on SO mentioned it.","Q_Score":1,"Tags":"python,linux,conda,miniconda,chown","A_Id":71087830,"CreationDate":"2022-02-11T23:51:00.000","Title":"Using conda environments after copying entire home dir","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"Whenever I enter pipenv install django in the cmd an error appears:\n\" 'pipenv' is not recognized as an internal or external command,\noperable program or batch file. \"\nI can run:\npip install pipenv\nand:\npip install django","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":119,"Q_Id":71103384,"Users Score":0,"Answer":"Checking \"add python to PATH\" when installing Python will only work for Python and pip commands. However, while installing \"pipenv\" using \"pip install pipenv\" command, pipenv will be installed but it might not be added to the PATH environment variables and you have to do it manually. Doing this, most probably the error \"pipenv is not recognized...\" will disappear. I solved this issue when I realized pipenv is not added to the environment variables.","Q_Score":0,"Tags":"python,pip,pipenv","A_Id":71926630,"CreationDate":"2022-02-13T17:52:00.000","Title":"I can't run pipenv install django","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":1},{"Question":"Whenever I enter pipenv install django in the cmd an error appears:\n\" 'pipenv' is not recognized as an internal or external command,\noperable program or batch file. 
\"\nI can run:\npip install pipenv\nand:\npip install django","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":119,"Q_Id":71103384,"Users Score":0,"Answer":"I solved it by re-installing Python, this time checking the check box called \"add python to PATH\", and it worked perfectly with no errors.","Q_Score":0,"Tags":"python,pip,pipenv","A_Id":71110405,"CreationDate":"2022-02-13T17:52:00.000","Title":"I can't run pipenv install django","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":1},{"Question":"I am working on a script that needs to be run both from a command prompt, such as BASH, and from the Console in Spyder. Running from a command prompt allows the script file name to be followed by several arguments which can then be utilized within the script; >python script1.py dataFile.csv Results outputFile.csv. These arguments are referenced within the script as elements of the list sys.argv.\nI've tried using subprocess.run(\"python script1.py dataFile.csv Results outputFile.csv\") to enable the console to behave as the command line, but sometimes it works fine and other times it needs certain arguments, like -f between python and the file name, before it will display what is displayed in the command line. Different computers disagree on whether such arguments help or hurt.\nI've searched and searched, and found some clever ways to use technicalities of the specific operating system to distinguish, but is there something native to Python I can use?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":63,"Q_Id":71118560,"Users Score":0,"Answer":"If you import sys in the console and then call sys.argv, it will show you the value ['']. 
While running a script within Spyder expands that array to ['script1.py'] (plus the file address), it will still not get larger than one entry.\nIf, on the other hand, you run the script from the command line the way you showed above, then sys.argv will have a value of ['script1.py', 'dataFile.csv', 'Results', 'outputFile.csv']. You can utilize the differences between these to distinguish between the cases.\nWhat are the best differences to use? You want to distinguish between two possibilities, so an if - else pair would be best in the code. What's true in one case and false in the other? if sys.argv will not work, because in both cases the list contains at least one string, so that will be effectively True in both cases.\nif len(sys.argv) > 1 works, and it adds the capability to run from the command line and go with what is coded for the console case.","Q_Score":0,"Tags":"python,bash,command-line,ipython,spyder","A_Id":71118561,"CreationDate":"2022-02-14T21:34:00.000","Title":"Distinguish runs from Command Prompt vs Spyder console","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am creating a subprocess using this line of code:\np = subprocess.Popen([\"doesItemExist.exe\", id], shell=False)\nand when I run the script while I have the Task Manager open, I can see that it creates two processes and not one. The issue is that when I go to kill it, it kills one (using p.kill()), but not the other. I've tried looking online but the only examples I find are about shell=True and their solutions don't work for me. I've confirmed that that line only gets called once.\nWhat can I do? 
Popen is only giving me back the one pid so I don't understand how to get the other so I can kill both.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":61,"Q_Id":71129345,"Users Score":0,"Answer":"I ended up being able to deal with this issue by creating a clean up function which just uses the following:\nsubprocess.run([\"taskkill\", \"\/IM\", \"doesItemExist.exe\", \"\/F\"], shell=True)\nThis will kill any leftover tasks. If anyone uses this, be careful that your exe has a unique name to prevent you from killing anything you don't mean to. If you want to hide the output\/errors, just set the stdout and stderr to subprocess.PIPE.\nAlso, if there is no process to kill it will report that as an error.","Q_Score":0,"Tags":"python,subprocess,popen,kill","A_Id":71166078,"CreationDate":"2022-02-15T15:46:00.000","Title":"subprocess.Popen is creating two processes instead of one","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"So i don't have any code to show because this isn't a code issue.\nI have my pathing for Python set to this:\nC:\\Users\\USERNAME\\AppData\\Local\\Programs\\Python\\Python310;C:\\Users\\stone\\AppData\\Local\\Programs\\Python\\Python310\\Scripts\nbut in command prompt it still paths:\nC:\\Users\\USERNAME>\nand delivers a \"errno 2\" error whenever I try to run a file. The only way I have gotten files to actually launch is to move the .py folders to the username folder.","AnswerCount":1,"Available Count":1,"Score":-0.1973753202,"is_accepted":false,"ViewCount":42,"Q_Id":71146543,"Users Score":-1,"Answer":"I had this issue before in the past and what I ended up doing is reinstalling python and making sure to hit the \"add to path\" option. 
Sorry I don't have a better way but that is what worked for me.","Q_Score":0,"Tags":"python,path","A_Id":71146581,"CreationDate":"2022-02-16T17:27:00.000","Title":"Python Pathing Issue","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm reading that using Redis Pipeline could improve performance by sending a batch of commands to the Redis server rather than sending separate messages one by one which could add up in latency time. In this way, there is a rough correlation between the number of separate commands that you have in a pipeline batch and how much you improve in speed. My question is that, is there an overhead or a downside to using Redis Pipeline that would make it not worth it in certain situations, especially when there are just a few simple commands that are being executed not so often? I understand the actual improvement in these cases would be very marginal, but I'm wondering if using Pipeline could worsen the execution time actually by introducing some sort of overhead?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":45,"Q_Id":71149506,"Users Score":0,"Answer":"The overhead of pipeline is that Redis needs to queue replies for these piped commands before sending to client, i.e. cost memory. 
So, normally, you'd better not create a huge pipeline.\nIn your case, since your pipeline only has a few simple commands, it's not a problem.","Q_Score":0,"Tags":"python,redis,query-optimization,py-redis","A_Id":71151071,"CreationDate":"2022-02-16T21:29:00.000","Title":"Is it any downside or overhead to Redis Pipeline for executing small set of commands?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm currently trying to read from one Kafka topic, doing some transformation and producing the messages to another topic.\nHowever, I am having a lot of issues with the Consumer. First of all, if we set reasonable session timeout\/max poll records values (like 10 s), the consumer takes super long, constantly rebalances and sometimes sends duplicated messages. If we increase the params to crazy values like 30 min, the speed increases dramatically. But the problem is once it reaches the 30 min mark, it rebalances and takes around 30 min to start up again.\nI have been playing with a lot of different params but still lost on how to fix this. Any ideas? Thanks","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":58,"Q_Id":71171812,"Users Score":0,"Answer":"This may be due to some configuration issues. Based on your question I would suggest checking your auto-commit property.\nBecause ideally Kafka does rebalance if it does not receive the acknowledgement of the message read before the session timeout happens. 
If it is set to false, then either set it to true or make sure to commit to Kafka once you are done processing the message.","Q_Score":0,"Tags":"python,apache-kafka,kafka-consumer-api","A_Id":71180171,"CreationDate":"2022-02-18T10:18:00.000","Title":"Kafka consumer speed increases with session timeout\/max poll records","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"LibreOffice on Windows comes with its own Python version. How can one install new packages for LibreOffice-Python?\nI use Linux myself and I've written a working macro that I would like to be usable for Windows users as well, but it uses packages that aren't available in standard LibreOffice.\nWe tried updating by pip, but as expected it only updates the system's python. We are aware that zazpip exists, but apparently it didn't work with the tester. Therefore I am explicitly looking for other solutions.","AnswerCount":3,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":137,"Q_Id":71175384,"Users Score":1,"Answer":"If it comes with a specific version of Python, it may need to reference specific functions from that version. The best answer I can give you is: If Python is included in the source code, try forking the source code with your own version of Python, and compiling that. 
\nOr, \nIf there's a specific package manager for Python included, try using that to update Python.","Q_Score":0,"Tags":"python,windows,libreoffice","A_Id":71175614,"CreationDate":"2022-02-18T14:47:00.000","Title":"How update Libre Office Python on windows?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I schedule my DAGs daily, but some DAGs run for 2-3 days. I have also set max_active_runs=1, which means exactly one DAG run will be active at a time and the others will be queued.\nSo is there a way to get exactly when my DAG was triggered and when it was queued?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":49,"Q_Id":71178713,"Users Score":0,"Answer":"You can get this info from the scheduler logs.\nSearch for the DAG name and you will find the state change and the scheduler picking up the DAG.","Q_Score":1,"Tags":"python-3.x,airflow","A_Id":71204353,"CreationDate":"2022-02-18T19:11:00.000","Title":"Airflow: Get trigger date of DAG Airflow","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have installed pyenv on Mac using homebrew and downloaded the version of Python 3.7.9. Everything works except when I use pyenv global 3.7.9, python3 -V still gives me version 3.9.7. 
How do I fix this?","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":370,"Q_Id":71188577,"Users Score":0,"Answer":"You could do so by searching the process bin path (which python3 will give you the path of your Python 3.9.7; which python3.7 will give you the path of your Python 3.7) and by adding an alias to your ~\/.bashrc (assuming you're using it from your terminal); then you should be fine.","Q_Score":2,"Tags":"python,macos,path,homebrew,pyenv","A_Id":71361928,"CreationDate":"2022-02-19T19:45:00.000","Title":"Having trouble switching python versions using pyenv global command","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I also want to know: since heartbeat requests are sent in the Apache Kafka Consumer, do they also affect connections.max.idle.ms?\nHow do you handle errors in the Apache Kafka client (Producer and Consumer), and what are the best practices around them?\nThanks :]","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":321,"Q_Id":71208769,"Users Score":0,"Answer":"The idle socket timeout is reset on poll and heartbeat connections and could be considered an upper bound for any protocol request.\nIf the idle timeout is less than either the poll interval, session timeout, or heartbeat interval, then you might expect to see some dropped network connections.","Q_Score":1,"Tags":"apache-kafka,kafka-python","A_Id":71211098,"CreationDate":"2022-02-21T15:19:00.000","Title":"Difference between connections.max.idle.ms and max.poll.interval.ms in Kafka configuration?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"Does anyone know if the python3 command on Linux can 
have some sort of SOME_NAMED_OPTION=filename.py after it rather than just calling python3 filename.py?\nI have a job scheduling tool I'd like to execute a Python script with that's kind of dumb.\nIt can only run Linux CLI commands commandname param1 param2 param3 or as commandname AAA=param1 BBB=param2 CCC=param3.\nThere's an existing business convention of putting filename.py as param #1 and then just making sure your script has a lot of comments about what the subsequent numerically ordered sys.argv list members mean, and you set the scheduling tool into its first mode, so it runs python3 filename.py world mundo monde, but it'd be awesome to be able to name the scheduling tool's parameters #2+ so I can write more human-friendly Python programs.\nWith python3 -h I'm not finding a way to give a parameter-name to filename.py, but I thought I'd see if anyone else had done it and I'm just missing it.\nIt'd be cool if I could have my scheduling tool run the a command more like python3 --scriptsource=filename.py --salut=monde --hola=mundo --hello=world and then write filename.py to use argparse to grab hola's, hello's, and salut's values by name instead of by position.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":29,"Q_Id":71212432,"Users Score":0,"Answer":"Using the Shebang as suggested by @daniboy000 is the preferred method for doing this, but if for some reason you can't do that, you should be able to do python3 mycommand --arg1 --arg2=x","Q_Score":2,"Tags":"python,python-3.x,command-line-interface,command-line-arguments","A_Id":71212774,"CreationDate":"2022-02-21T20:07:00.000","Title":"Can Python be invoked against a script file with a parameter-name for the script filepath?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"For example I have multiple 
lines in a log file.\nI have mapper.py; this script parses the file.\nIn this case I want to run my mapper on each file independently.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":30,"Q_Id":71217146,"Users Score":0,"Answer":"Hadoop Streaming is already \"distributed\", but is isolated to one input and output stream. You would need to write a script to loop over the files and run individual streaming jobs per-file.\nIf you want to batch process many files, then you should upload all files to a single HDFS folder, and then you can use mrjob (assuming you actually want MapReduce), or you could switch to pyspark to process them all in parallel, since I see no need to do that sequentially.","Q_Score":0,"Tags":"python,hadoop,mapreduce,hadoop-streaming","A_Id":71224112,"CreationDate":"2022-02-22T07:05:00.000","Title":"How to distribute Mapreduce task in hadoop streaming","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"os.system() invokes a command, and the argument can be a single string, e.g.,\nos.system(\"cmd -input xx --output yy\").\nBut with subprocess, I have to pass a list for args, e.g.,\nsubprocess.run([\"cmd\", \"-input\", \"xx\", \"--output\", \"yy\"]).\nFor complex arguments, passing a list is tedious. 
So how can I pass a single string to run a command, and also catch exceptions?\nThanks.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":22,"Q_Id":71232791,"Users Score":0,"Answer":"cmd_string = 'cmd -input xx --output yy'\nsubprocess.run(cmd_string.split())","Q_Score":0,"Tags":"python","A_Id":71232865,"CreationDate":"2022-02-23T07:25:00.000","Title":"Run shell in subprocess.run(cmd_line) using one line cmd string","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"os.system() invokes a command, and the argument can be a single string, e.g.,\nos.system(\"cmd -input xx --output yy\").\nBut with subprocess, I have to pass a list for args, e.g.,\nsubprocess.run([\"cmd\", \"-input\", \"xx\", \"--output\", \"yy\"]).\nFor complex arguments, passing a list is tedious. So how can I pass a single string to run a command, and also catch exceptions?\nThanks.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":22,"Q_Id":71232791,"Users Score":0,"Answer":"You can simply split the command string and pass it to subprocess like below:\nsubprocess.run(\"your cmd command\".split())","Q_Score":0,"Tags":"python","A_Id":71232913,"CreationDate":"2022-02-23T07:25:00.000","Title":"Run shell in subprocess.run(cmd_line) using one line cmd string","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":1},{"Question":"I am unable to install the pandas module in my Linux VM. I tried all ways to install it, but it says version 1.1.5 requirement already satisfied. But when I try running the code, it says no module found. 
The latest version of python in it is 2.7.3; I want to install 3.8 or 3.7, but I'm unable to. Where am I going wrong?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":24,"Q_Id":71235741,"Users Score":0,"Answer":"Did you try installing python3 from your package manager? You can install python 3.9 from apt using the below command:\napt install python3 pip -y\nYou can also install the below package to use python in the terminal instead of python3 every time:\napt install python-is-python3 -y\nI can't comment yet so I am using the answer section; kindly give me an upvote so I can start using the comment feature, sorry for the trouble.","Q_Score":0,"Tags":"python,pandas","A_Id":71236899,"CreationDate":"2022-02-23T11:04:00.000","Title":"Unable to install pandas or other packages in linux virtual environment","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a Python script, which connects to a data feed in the early hours of each morning, downloads a JSON data file, then moves it from where it saved it to a dedicated folder.\nWhen the script is run from the command line on the Linux Centos7 server, all works perfectly. When run through Cron (either on schedule or a run now through the Plesk Cron screen) it errors out. 
Having commented out the os.rename(startfile, sendfile) line, the script runs perfectly in Cron.\nI thought it was to do with permissions, but if I get the same script to open a text file in the eventual destination directory that I am trying to move the datafile to, write 'JSON file downloaded ok' and close the file - that runs perfectly under CRON, so I don't think it can be permissions.\nI have run os.path.isfile(StartFile) on the datafile after it downloaded (and got TRUE) and run os.path.isdir(EndFile) and got TRUE, so I know the paths are correct. I have replaced os.rename with os.replace and the same thing happens.\nWhen the script downloads it, it goes into the \/root\/ folder, as CRON runs as root. When the script exits with an error, the file is there and visible in \/root\/ and I can manually do mv file.gz to \/path\/to\/folder\/file.gz and it moves fine.\nIt is just CRON that is having some issue with the Python file movement commands - can anyone offer any advice? I just don't know where to check next!","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":33,"Q_Id":71272253,"Users Score":1,"Answer":"Thanks to @kpie in the above comments, the answer (although I suppose it is a workaround rather than a solution) was to change the working directory with os.chdir(path) before downloading the file.\nThat way the file downloads into the folder I need it in, rather than downloading and then moving it.","Q_Score":1,"Tags":"python,centos7","A_Id":71272321,"CreationDate":"2022-02-25T22:14:00.000","Title":"Python script will not move a downloaded file when run as CRON","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"My computer:\nMacBook Air M1\nRunning 11.6.3\nMemory 8GB\nThings I have tried:\n\nReinstalled Visual studio code\n\nUse pyenv to install Python 
3.10.2\n\npyenv install 3.10\n\npyenv global 3.10\n\nHad brew reinstall pyenv 2.2.4\n\nAlso tried command python3.10 -V just reverts back to python 2.7 right away.\n\n\nEvery time I open Visual studio code, to get python 3 to run I must run the two commands below. Just wondering if there is a more permanent solution to the problem. Once I run the command, I have no problems, and can run python programs normally, but the next time I restart Visual studio code, the same issue comes back. Any solution would be helpful... many thanks!!\nalias python=\"python3\" # to use python3 rather than python2.7\nalias idle=\"idle3\" # to use python3 idle rather than 2.7","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":80,"Q_Id":71272938,"Users Score":0,"Answer":"Simply running the command python3.10 fixed the problem. I was just using python nameoffile.py to run a program. Not aware that VSC was defaulting to Python 2.7 built into the Mac OS.\npython3.10 nameoffile.py\nMany Thanks everyone","Q_Score":0,"Tags":"python,visual-studio-code,apple-m1","A_Id":71272995,"CreationDate":"2022-02-25T23:58:00.000","Title":"Visual studio Code not running Python 3","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have just started working on my new pc and just to get a feel for it I wanted first to start working on python files, so I started first by just wanting to run WSL on windows and it installed correctly but when I want to run any python using the run python file on the top right on VS code, this is what gets executed $ C:\/Users\/jaffe\/AppData\/Local\/Microsoft\/WindowsApps\/python3.10.exe f:\/Projects\/hello.py\nAnd this is the error: -bash: C:\/Users\/jaffe\/AppData\/Local\/Microsoft\/WindowsApps\/python3.10.exe: No such file or directory\nI have no idea what's causing it 
but when I run the file using 'Shift + Enter', which is: Python: Run Selection\/Line in Python Terminal, it seems to run the single line correctly but it gives me this error instead:\nprint(\"Hello, world\")\n-bash: syntax error near unexpected token `\"Hello, world\"'\nbut when I run it using python3 hello.py, it works perfectly fine?! I'm so lost as to why this is happening and how I could fix it.\nMight be relevant: I'm using Windows 10 and installed Python 3.10.2 from the Windows store; all of that is in VS Code and the python code is one line: print(\"Hello, world\"). I changed the permissions of Local\/Microsoft\/WindowsApps so it's now accessible by all users to view\/read\/edit\/run, made sure that python3.10.exe exists (in the WindowsApps folder, and it works perfectly) and reinstalled it many times, tried python3.9, and tried to install Python from the website instead of the Windows store and still the same; manually added Python to PATH and tried .venv and it didn't work. When I launch python3.10.exe outside VS Code it seems to run perfectly. I have worked with Python before and it used to work fine; now I don't know what's wrong.\nI have seen other questions of the same problem I'm having here but none of them solve the problem.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":626,"Q_Id":71276677,"Users Score":0,"Answer":"No such file or directory C:\/Users\/...\nFor WSL, the Windows filesystem is accessible, but it has a different path. It is mounted under the \/mnt folder. So you would find your Python .exe under \/mnt\/c\/Users\/jaffe\/AppData\/Local\/Microsoft\/WindowsApps\/python3.10.exe. This said, the executable file is meant to work on Windows, and it doesn't really make sense to use it on Linux when you could run Python within your WSL distro.\npython3 works perfectly fine\nThis is because most Linux distributions come with python3 pre-installed, so you can use it already. 
To see where it is located, you can run the command which python3, or python3 --version to check its version.\nIf you want to change version, you may consider downloading it from your package manager, apt.\nI also suggest installing python3-pip if you don't have it already, to get the pip package manager for Python.","Q_Score":0,"Tags":"python,ubuntu,visual-studio-code,windows-subsystem-for-linux","A_Id":71276777,"CreationDate":"2022-02-26T12:32:00.000","Title":"Can't run any Python files on Ubuntu(WSL)","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am currently writing a python file that is meant to be run through the command line like pip and npm, but I also need to know when the user launches it directly through the file explorer (as on Windows). Is this completely impossible (restricted to the program only knowing that it's run with no sys.argv arguments), or is there a way to make the program differentiate if it's being run directly through something like the file explorer, or if it's being run through the command line? Thanks!","AnswerCount":3,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":55,"Q_Id":71297138,"Users Score":1,"Answer":"Be sure to capture the user's operating system first before implementing a Windows-specific approach.","Q_Score":0,"Tags":"python","A_Id":71297572,"CreationDate":"2022-02-28T15:16:00.000","Title":"how can you know if the python file is run directly and not from the command line?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am trying to load the GPT-J model, which is 12 GB in size, and I want to download\/install it on my HDD (drive E). 
I changed the startup location of Jupyter to the E disk (using c.NotebookApp.notebook_dir = 'E:\\Jupyter'), but Jupyter still loads files in the C:'user name'\\ path. How do I change the download\/install path of Jupyter?","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":89,"Q_Id":71298898,"Users Score":0,"Answer":"If you're using Anaconda, you need to do one extra thing. Anaconda, by default, suppresses loading the config file.\nTo enable this file, edit the shortcut that launches Jupyter Notebook. You will find a target that looks like this:\nC:\\...\\python.exe C:\\...\\cwp.py C:\\...\\python.exe C:\\...\\jupyter-notebook-script.py \"%USERPROFILE%\/\"\nRemove the \"%USERPROFILE%\".","Q_Score":0,"Tags":"python,jupyter-notebook","A_Id":71299028,"CreationDate":"2022-02-28T17:36:00.000","Title":"How to change Jupyter installation path","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"Let's say that on Google Cloud Storage I have a bucket: bucket1, and inside this bucket I have thousands of blobs I want to rename in this way:\nOriginal blob:\nbucket1\/subfolder1\/subfolder2\/data_filename.csv\nto: bucket1\/subfolder1\/subfolder2\/data_filename\/data_filename_backup.csv\nsubfolder1, subfolder2 and data_filename.csv - they can have different names, however the way to change the names of all blobs is as above.\nWhat is the most efficient way to do this? Can I use Python for that?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":58,"Q_Id":71308957,"Users Score":0,"Answer":"If you have a lot of renames to perform, I recommend performing the operation concurrently (use several threads rather than performing the renames sequentially).\nIndeed, you have to know how Cloud Storage works: rename doesn't exist. 
You can go into the Python library and see what is done: copy, then delete.\nThe copy can take time if your files are large. Delete is pretty fast. But in both cases it's an API call, and it takes time (about 50ms if you are in the same region).\nIf you can perform 200 or 500 operations concurrently, you will significantly reduce the processing time. It's easier with Go or Node, but you can do the same in Python with the await keyword.","Q_Score":0,"Tags":"python,google-cloud-platform,google-cloud-storage,gsutil","A_Id":71315148,"CreationDate":"2022-03-01T13:08:00.000","Title":"how to efficiently rename a lot of blobs in GCS","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"When starting IDA in GUI mode to analyze a binary, it automatically locates and displays the actual main function code (not the entry point encapsulated by the compiler, but the main function corresponding to the source code).\nMy question is how to get that address in batch mode (without GUI) via an IDAPython script?
I don't see the relevant interface in the IDAPython documentation.\nFor example, _mainCRTStartup --> ___mingw_CRTStartup --> _main is a sequence of function calls, where _mainCRTStartup is the entry point of the binary, but I want to get the address of _main. Can it be done?\nAny help or direction would be much appreciated!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":96,"Q_Id":71319464,"Users Score":0,"Answer":"I found the answer: it is idaapi.inf_get_main()","Q_Score":1,"Tags":"python,ida","A_Id":71566465,"CreationDate":"2022-03-02T08:23:00.000","Title":"How to get the actual main address use idapython when starting IDA in batch mode?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am developing an AWS Lambda function which uses a runtime of Python 3.8. The source code is packaged into a custom Docker image and then passed to the Lambda service.\nIn the Python program itself, I am executing various Terraform commands including \"plan\" and \"show\" using subprocess. I am writing the output of the plan to the \/tmp directory using the \"terraform plan -out=plan.txt\" flag. Then, I convert the plan into JSON for processing using \"terraform show -json plan.txt\".\nSince the plan file could contain sensitive data, I do not want to write it to the \/tmp directory; rather I want to keep it in-memory to increase security. I have explored mounting tmpfs to \/tmp which is not possible in this context. How can I override the behavior of Terraform's \"-out=\" flag or create an in-memory filesystem in the container?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":136,"Q_Id":71328986,"Users Score":0,"Answer":"Terraform itself can only write a saved plan file to something which behaves like a file.
If you are on a Unix system then you may be able to exploit the \"everything is a file\" principle to trick Terraform into writing to something that isn't a file on disk, such as a pipe passed in from the parent process as an additional inherited file descriptor. However, there is no built-in mechanism for doing so, and Terraform may require the filehandle to support system calls other than just write that are available for normal files, such as seek.","Q_Score":0,"Tags":"python,docker,aws-lambda,terraform","A_Id":71344917,"CreationDate":"2022-03-02T20:51:00.000","Title":"Write Terraform plan output in-memory rather than on a filesystem","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have an Apache NiFi Workflow, which takes some Files out of an FTP and puts them into a separate Folder, using ListFTP - FetchFTP and PutFile.\nThe problem is:\n\nThose Files, which were copied into the new Folder, need to be decoded, which will result in several other files. From File 1, I will generate around 300 other files and the original File will be deleted. (This is how it works and I cannot modify the behavior when it comes to the deletion of the file)\nNow, I want to have these Files decoded via a Python Script, called from an ExecuteScript Process in NiFi and eventually put the new Files into another Folder --> either via PutFile or PutHDFS (irrelevant at this point)\n\nAs far as I know, NiFi will execute its entire Flow based on the UUID of each FlowFile.
In my case, the original UUID of the FlowFile will no longer exist and 300 other files will be generated without an UUID, as these Files were never acknowledged by NiFi.\nIs it possible to generate a new UUID for each of these new Files and send them afterwards to REL_SUCCESS?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":63,"Q_Id":71336922,"Users Score":0,"Answer":"When you call session.create(original) where original is the incoming FlowFile, the returned FlowFile reference is a child of the original. It will have its own UUID and possibly the parent UUID as a separate attribute (although that might be up to the processor to do, such as SplitText).","Q_Score":0,"Tags":"python,apache-nifi","A_Id":71339997,"CreationDate":"2022-03-03T12:04:00.000","Title":"Generate a new UUID for each new file and send them afterwards to REL_SUCCESS","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I need to use SWIG to support both Python 2.7 and Python 3.10. [Yes, I know that Python 2.7 is dead, and we're doing our best to migrate users away from it as quickly as we can.]\nI generate my module via setuptools.setup with a Swig extension. I can run the setup using both Python2 and Python3. The setuptools program creates a separate shareable library for the Python2 and Python3 runs. However both runs generate a myswig.py file in the same location.\nIt turns out that the Py2 and Py3 generated files are identical, except that the Py3 generated file contains annotations for the functions and the Py2 doesn't. Python3 can read the Python2 generated file and works just fine. 
Python2 cannot read the Python3 generated file.\nI've tried both %feature(\"autodoc\", 0); and leaving out this line completely, and I still get annotations.\nSo is there some way of either:\n\nTurning off annotations in the generated file\nAdding from __future__ import annotations automatically to the generated file","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":37,"Q_Id":71357758,"Users Score":0,"Answer":"Don't use -py3. It's not required for Python 3, but enables Python 3-specific code features like annotations.","Q_Score":0,"Tags":"python-3.x,python-2.7,setuptools,swig","A_Id":71358783,"CreationDate":"2022-03-04T22:15:00.000","Title":"How do tell \"swig -python -py3 myswig.i\" not to include annotations","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"this is in Python, python 3.10.2 if we need specifics.\nI\u2019m using the Mac Terminal and I want to access a folder from it.\nI have a folder on my desktop that has a bunch of my modules. I want to access these from the terminal in a style like \u201cimport module\u201d, using some command to get to that folder.\nHelp would be appreciated!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":56,"Q_Id":71385164,"Users Score":0,"Answer":"You can import sys, then sys.path.append(path_to_desired_modules).
sys.path is a list of the directories where Python will search for the modules to import, so if you append the target directory to that, it should be retrievable.","Q_Score":0,"Tags":"python,python-3.x,directory","A_Id":71385229,"CreationDate":"2022-03-07T17:41:00.000","Title":"How to get module from another directory from terminal?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a spring boot application that runs certain python scripts using the process class combined with the buffered reader to read the output. This project works fine within the intellij tomcat embedded server. However when we try to run it on a stand-alone tomcat server we get the error Cannot run program \"python\": CreateProcess error=2, The system cannot find the file specified. Keep in mind this program works fine within the intellij embedded tomcat server. We have come to the conclusion that the stand-alone tomcat is not picking up our python environment variables. How can we resolve this problem? Is there anything we need to add to tomcat to get the server to recognize the python environment variables?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":28,"Q_Id":71398799,"Users Score":0,"Answer":"Here is the answer: When running python scripts from java you must provide the full python executable path.
For some reason tomcat does not recognize the system variables.","Q_Score":0,"Tags":"python,spring,spring-boot,tomcat,tomcat9","A_Id":71400817,"CreationDate":"2022-03-08T17:00:00.000","Title":"How to add python to tomcat 9 list of environment variables?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":1},{"Question":"I've been using Termux for a long time, but I can't solve this problem: when I try to install Rust using pip, after the message Using cached matplotlib-3.5.1.tar.gz (35.3 MB), nothing has happened for an hour now.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":68,"Q_Id":71407797,"Users Score":1,"Answer":"You could try updating pip to the latest version with python -m pip install --upgrade pip and then try pip install RUST.\nIf that doesn't work, you could try uninstalling Rust first with pip uninstall RUST and then reinstalling it with pip install RUST.","Q_Score":0,"Tags":"python,python-3.x,linux,termux","A_Id":71408006,"CreationDate":"2022-03-09T10:24:00.000","Title":"Rust Python not installing","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have made an application using Python, with some libraries installed as needed on the go.
Now I want to make it usable for a person who doesn't know how to install dependencies or which ones are needed.\nWhat should I do to transform it into an easy-to-use application, and ideally make it able to run on Mac too?\nPlease suggest some resources that might help me learn more.","AnswerCount":3,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":285,"Q_Id":71408434,"Users Score":1,"Answer":"As Wouter K mentioned, you should install PyInstaller (pip install pyinstaller with pip) and then cd to the directory you want and type pyinstaller --onefile file.py in the terminal. If you cd'ed into the directory your file is in, you just type the name and the extension (.py) of your file. Otherwise, you will have to specify the full path of the file. Also, you can't make a macOS executable from a non-macOS PC. You will need to do what I mentioned above on a Mac.","Q_Score":1,"Tags":"python,dependency-management,.app","A_Id":71408912,"CreationDate":"2022-03-09T11:14:00.000","Title":"How to make a Python exe file automatically install dependancies?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"Normally when I would click on the \"Run\" button in VSCode for a Python script, it would activate the currently selected virtual environment and simply call python in the terminal, and it all worked fine.\nNow all of a sudden, every time I try to run a script, what it does instead is call a subprocess via conda like so:\nconda run -n --no-capture-output --live-stream python \nAnd this new version is causing some issues because, for whatever reason, conda refuses to recognise some of the packages as having been installed.
The code still works fine when I manually type the run command in the terminal, but I want the old behaviour back.\nDoes anyone know how to fix this?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":71,"Q_Id":71419174,"Users Score":0,"Answer":"Input: Ctrl + Shift + P .\nEnter: Terminal: select default profile .\nChange the default to CMD .\nMaybe this can help you.","Q_Score":0,"Tags":"python,visual-studio-code,conda","A_Id":71419633,"CreationDate":"2022-03-10T05:16:00.000","Title":"VSCode suddenly started executing Python scripts as a subprocess","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"We're developing a custom runtime for a Databricks cluster. We need to version and archive our clusters for a client. We made it run successfully in our own environment, but we're not able to make it work in the client's environment. It's a large corporation with many restrictions.\nWe\u2019re able to start the EC2 instance and pull the image, but there must be some other blocker. I think the EC2 instance is successfully running, but I get this error in Databricks:\n\nCluster terminated.Reason:Container launch failure\nAn unexpected error was encountered while launching containers on\nworker instances for the cluster. Please retry and contact Databricks\nif the problem persists.\nInstance ID: i-0fb50653895453fdf\nInternal error message: Failed to launch spark container on instance\ni-0fb50653895453fdf. Exception: Container setup has timed out\n\nIt should be some setting\/permission inside the client's environment.\nHere is the end of the EC2 log:\n\n-----END SSH HOST KEY KEYS----- [ 59.876874] cloud-init[1705]: Cloud-init v. 21.4-0ubuntu1~18.04.1 running 'modules:final' at Wed, 09\nMar 2022 15:05:30 +0000. Up 17.38 seconds. [ 59.877016]\ncloud-init[1705]: Cloud-init v.
21.4-0ubuntu1~18.04.1 finished at Wed,\n09 Mar 2022 15:06:13 +0000. Datasource DataSourceEc2Local. Up 59.86\nseconds [ 59.819059] audit: kauditd hold queue overflow [\n66.068641] audit: kauditd hold queue overflow [ 66.070755] audit: kauditd hold queue overflow [ 66.072833] audit: kauditd hold queue\noverflow [ 74.733249] audit: kauditd hold queue overflow [\n74.735227] audit: kauditd hold queue overflow [ 74.737109] audit: kauditd hold queue overflow [ 79.899966] audit: kauditd hold queue\noverflow [ 79.903557] audit: kauditd hold queue overflow [\n79.907108] audit: kauditd hold queue overflow [ 89.324990] audit: kauditd hold queue overflow [ 89.329193] audit: kauditd hold queue\noverflow [ 89.333125] audit: kauditd hold queue overflow [\n106.617320] audit: kauditd hold queue overflow [ 106.620980] audit: kauditd hold queue overflow [ 107.464865] audit: kauditd hold queue\noverflow [ 127.175767] audit: kauditd hold queue overflow [\n127.179897] audit: kauditd hold queue overflow [ 127.215281] audit: kauditd hold queue overflow [ 132.190357] audit: kauditd hold queue\noverflow [ 132.193968] audit: kauditd hold queue overflow [\n132.197546] audit: kauditd hold queue overflow [ 156.211713] audit: kauditd hold queue overflow [ 156.215388] audit: kauditd hold queue\noverflow [ 228.558571] audit: kauditd hold queue overflow [\n228.562120] audit: kauditd hold queue overflow [ 228.565629] audit: kauditd hold queue overflow [ 316.405562] audit: kauditd hold queue\noverflow [ 316.409136] audit: kauditd hold queue overflow","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":107,"Q_Id":71419465,"Users Score":0,"Answer":"This is usually caused by slowness in downloading the custom docker image, please check if you can download from the docker repository properly from the network where your VMs are launched.","Q_Score":0,"Tags":"python,linux,amazon-web-services,docker,databricks","A_Id":72107176,"CreationDate":"2022-03-10T06:00:00.000","Title":"AWS 
Databricks Cluster terminated.Reason:Container launch failure","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm using WSL 2 to run a Debian. I created a virtual environment called \"ansible\" on it. What I want is that when I open a new tab of this Debian, I see (ansible) XXXXX$:\nI have tried two things so far.\n\nCrontab (but it's not appropriate for how WSL works, as I understood it)\nModifying my Debian shortcut target to wsl -e source \/home\/w123183\/ansible\/bin\/activate\nThe tricky thing here is not that it doesn't work. It's just that the shell closes immediately after it runs.\nI believe that all this shell stuff and how it is managed is still blurry for me.\nI hope someone will be able to help me, and I thank them in advance for that.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":33,"Q_Id":71424814,"Users Score":0,"Answer":"Ok, I actually found a solution.
I just added \"source ~\/ansible\/bin\/activate\" to my .bashrc.\nIt works as I wanted now.","Q_Score":0,"Tags":"python,debian,windows-subsystem-for-linux,windows-terminal","A_Id":71426684,"CreationDate":"2022-03-10T13:20:00.000","Title":"Automate venv activation when I start a new tab of Debian WSL","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"After starting my app using dev_appserver.py --enable_console true my_app\/, I go to localhost:8000, select the interactive console, and then run a Python script that initializes the datastore.\nIs there a way to run this init script from the command line?\nI looked at --python_startup_script my_init_script.py, but that is called before the app is started, and so it doesn't make sense.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":49,"Q_Id":71427629,"Users Score":2,"Answer":"Moving details from the comments section to a full answer.\nI don't know if that is possible, but a possible workaround could be to put this code at the beginning of your main.py or whatever your main file is.\nThe process then becomes: your app starts and loads your main file (maybe when someone visits your home page), which checks if a flag is set. If the flag is not set, it runs your datastore init script and sets the flag (maybe the flag is set in the datastore itself).\nI had something similar to what I described, but the code is triggered when you try to access the home page url.","Q_Score":0,"Tags":"python,google-app-engine,google-cloud-datastore,development-environment,dev-appserver","A_Id":71441991,"CreationDate":"2022-03-10T16:33:00.000","Title":"dev_appserver.py: Run init script from command line?
(instead of interactive console)","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"Hello dear Stack Overflow community, I'm Firas and I'm doing an apprenticeship in system integration. I have a project to build a CO2 sensor with a Raspberry Pi, and I was planning to connect it to Azure so I can analyze the data there and maybe set a trigger alarm to notify me per email or MS Teams channel when the CO2 concentration in the room becomes high. But my problem is that when I calculated the price in Azure, the estimated price was high (around 139 euros per month).\nDoes anyone here have experience with this type of project, and is there another, cheaper way to implement it? I will be very thankful if someone can guide me and give me suggestions and solutions.\nThank you very much in advance","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":48,"Q_Id":71429240,"Users Score":0,"Answer":"Why don't you deploy an \"alarming system\" on the RPI itself?\n(If you don't want to use the RPI, I highly recommend Scaleway Stardust as a solution. It's only about 2.5 EUR per month.)","Q_Score":0,"Tags":"python,azure,azure-iot-hub","A_Id":71429346,"CreationDate":"2022-03-10T18:44:00.000","Title":"Raspberry PI and co2 sensor (CCS811)","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have deployed a Pod with several containers. In my Pod I have certain environment variables that I can access in a Python script with os.getenv(). However, if I try to use os.getenv to access the Container's environment variables, I get an error stating they don't exist (NoneType).
When I write kubectl describe pod, I see that all the environment variables (both Pod and Container) are set.\nAny ideas?","AnswerCount":2,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":77,"Q_Id":71437581,"Users Score":2,"Answer":"The issue was in creating helm tests. In order to get the environment variables from the containers in a helm test, the environment variables need to be duplicated in the test.yaml file or injected from a shared configmap.","Q_Score":1,"Tags":"python,kubernetes,environment-variables","A_Id":71438849,"CreationDate":"2022-03-11T11:07:00.000","Title":"Environment variables: Pod vs Container. Trying to access Container envvar with Python os.getenv","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"When trying to write scripts with Python, I have a fundamental hole in my knowledge.\nUpdate: Thanks to the answers, I corrected the word shell to process\/subprocess\nNomenclature\n\nStarting with a Bash prompt, let's call this BASH_PROCESS\nThen within BASH_PROCESS I run python3 foo.py; the python script runs in, say, PYTHON_SUBPROCESS\nWithin foo.py is a call to subprocess.run(...); this subprocess command runs in, say, SUBPROCESS_SUBPROCESS\nWithin foo.py is subprocess.run(..., shell=True); this subprocess command runs in, say, SUBPROCESS_SUBPROCESS=True\n\nTest for if a process\/subprocess is equal\nSay SUBPROCESS_A starts SUBPROCESS_B. In the below questions, when I say is SUBPROCESS_A == SUBPROCESS_B, what I mean is: if SUBPROCESS_B sets an env variable, when it runs to completion, will that env variable be set in SUBPROCESS_A?
If one runs eval \"$(ssh-agent -s)\" in SUBPROCESS_B, will SUBPROCESS_A now have an ssh agent too?\nQuestion\nUsing the above nomenclature and equality tests\n\nIs BASH_PROCESS == PYTHON_SUBPROCESS?\nIs PYTHON_SUBPROCESS == SUBPROCESS_SUBPROCESS?\nIs PYTHON_SUBPROCESS == SUBPROCESS_SUBPROCESS=True?\nIf SUBPROCESS_SUBPROCESS=True is not equal to BASH_PROCESS, then how does one alter the executing environment (e.g. eval \"$(ssh-agent -s)\") so that a python script can set up the env for the caller?","AnswerCount":2,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":83,"Q_Id":71456175,"Users Score":2,"Answer":"None of those equalities are true, and half of those \"shells\" aren't actually shells.\nYour bash shell is a shell. When you launch your Python script from that shell, the Python process that runs the script is a child process of the bash shell process. When you launch a subprocess from the Python script, that subprocess is a child process of the Python process. If you launch the subprocess with shell=True, Python invokes a shell to parse and run the command, but otherwise, no shell is involved in running the subprocess.\nChild processes inherit environment variables from their parent on startup (unless you take specific steps to avoid that), but they cannot set environment variables for their parent. You cannot run a Python script to set environment variables in your shell, or run a subprocess from Python to set your Python script's environment variables.","Q_Score":1,"Tags":"python,linux,process,subprocess","A_Id":71456241,"CreationDate":"2022-03-13T11:12:00.000","Title":"Python, the relationship between the bash\/python\/subprocess processes (shells)?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am quite new to Airflow.
However, I tried to delete some DAGs in Airflow (manually, using just the button), but after the deletion I got this message (even though the DAG no longer physically exists):\nBroken DAG: [\/usr\/local\/airflow\/dags\/reports_general\/templates\/data_quality_report_airflow__.py] Invalid control character at: line 2 column 116 (char 118)\nDoes anyone have an idea how to resolve this?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":47,"Q_Id":71460988,"Users Score":0,"Answer":"You have to delete the .py dag file from the dag storage volume and then also from the UI. In the version we use, the UI only deletes the data for that dag from the metadata db.","Q_Score":0,"Tags":"python,airflow","A_Id":71488988,"CreationDate":"2022-03-13T21:26:00.000","Title":"delete Airflow DAG","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"So you know how all coding languages usually have a terminal command to run them, like this:\npython3 main.py\nAnd then it runs whatever is in 'main.py'? I'm trying to make something similar to that, except it's for txt files, so when you run:\nCUSTOM greeting.txt\nIt will tell Python to read everything in greeting.txt, so if 'Hello' is in greeting.txt, and you run CUSTOM greeting.txt, it will print out 'Hello' in the terminal.
Any help is appreciated!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":27,"Q_Id":71473284,"Users Score":0,"Answer":"In your example case, alias CUSTOM=cat in your shell to have cat do the heavy lifting, but in general, yeah, just like Python or any other program can read command line arguments, so could your hypothetical interpreter.\nIf you were to implement your language in Python, I'd tell you to look at sys.argv...","Q_Score":0,"Tags":"python,shell","A_Id":71473334,"CreationDate":"2022-03-14T19:34:00.000","Title":"Is there a way to make your own custom terminal command via .sh and .py?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I installed Python 3.9, but when in cmd I use python3 --version it gives python 3.10.2.\nCould this be the reason for the WinError 5 access denied when I try to install tensorflow?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":18,"Q_Id":71480367,"Users Score":0,"Answer":"You most likely installed multiple Python versions. Open up your cmd, run where python3, and delete the ones you don't want.","Q_Score":0,"Tags":"python-3.x,tensorflow","A_Id":71480651,"CreationDate":"2022-03-15T10:08:00.000","Title":"Conflict of python versions and winerror 5","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"When I want to use a remote Python interpreter to debug my code, an error appears:\nError running 'test': Can't run remote python interpreter: {0}. But I can run this code directly with the remote Python interpreter.
I tried using the commands 'which python' and 'which python3' to get a different interpreter, but the same error appeared.\nCould anybody help me solve this problem? It's my first time debugging code remotely with PyCharm. Thank you.\nEnvironment: CentOS.\nPython Environment: \/home\/jumpserver\/miniconda3\/envs\/vec\/bin\/python3. (get the directory using command: 'which python3').","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":83,"Q_Id":71494752,"Users Score":1,"Answer":"After testing several methods, I finally solved this problem. I cleared all the remote interpreters in PyCharm, then restarted PyCharm. After adding the remote Python interpreter a second time, debugging works normally.","Q_Score":1,"Tags":"python,pycharm,remote-server","A_Id":71509237,"CreationDate":"2022-03-16T09:43:00.000","Title":"Pycharm Error running 'test': can't run remote python interpreter: {0}","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I need to spawn a child process from python with only output privilege (absolutely NO access to file system, network, OS, input etc). It needs to terminate after the specified amount of time and only use the specified amount of memory. How can I accomplish that?\nThis will run in a Linux container so any OS\/Linux-based solution (or system calls) is also welcome.
(No constraint on the distro; whatever works)","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":30,"Q_Id":71505998,"Users Score":1,"Answer":"Because you are using a container to define your client, I think the best idea would be to create a user in the client Dockerfile with limited privileges and make that user start the process.","Q_Score":0,"Tags":"python-3.x,linux,subprocess,privileges","A_Id":71560925,"CreationDate":"2022-03-17T01:16:00.000","Title":"How to control privileges, runtime, and memory usages of a child process?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"How can I mock existing Azure Databricks PySpark code of a project (written by others) and run it locally on a Windows machine\/Anaconda to test and practice?\nIs it possible to mock the code, or do I need to create a new cluster on Databricks for my own testing purposes?\nHow can I connect to the storage account, use the Databricks Utilities, etc.? I only have experience with Python & GCP, and I just joined a Databricks project and need to run the cells one by one to see the result and modify them if required.\nThanks","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":73,"Q_Id":71516084,"Users Score":0,"Answer":"You can test\/run PySpark code from your IDE by installing PySpark on your local computer.\nNow, to use Databricks Utilities, you would in fact need a Databricks instance, and that's not available locally. You can try Databricks Community Edition for free, but with some limitations.\nAccessing a cloud storage account can be done locally from your computer or from your own Databricks instance.
In both cases you will have to set up the endpoint of this storage account using its secrets.","Q_Score":1,"Tags":"python,azure,pyspark,databricks","A_Id":71524640,"CreationDate":"2022-03-17T16:45:00.000","Title":"How to mock and test Databricks Pyspark notebooks Locally","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"The problem is that I have a small program that works on its own. I'm using vscode.\nThis program is currently located in a subfolder of the folder that I have opened.\nHowever, if I go to File>Open Folder and select the subfolder with the program as the new main folder from which I work, nothing seems to work anymore. The program itself doesn't even start to run, no text is coloured, etc.\nDoes someone know the reason for this?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":31,"Q_Id":71518023,"Users Score":0,"Answer":"When you open the subfolder as a new workspace, you will\n\nlose the settings.json under the .vscode folder\nthe cwd will be changed\nthe virtual environment under the previous folder will be lost\n\nand so on.\nSo you need to follow the error message to fix the problem.
If you can provide the error message and a screenshot, maybe we can provide some help.\nBut the fact that no text is colored is weird; have you selected the Dark (Visual Studio) theme?","Q_Score":0,"Tags":"python,visual-studio-code,subdirectory","A_Id":71522063,"CreationDate":"2022-03-17T19:28:00.000","Title":"Program works in a subfolder in vscode but not if only the folder is opened","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"In python 3.9 I wrote a TCP server that never calls receive(), and a client that sends 1KB chunks to the server. Beforehand, I set send and receive buffer sizes in the KB range.\nMy expectation was to be able to send (send-buffer + receive-buffer) bytes before send() would block. However:\n\nOn Windows 10: send() consistently blocks only after (2 x send-buffer + receive-buffer) bytes.\nOn Raspberry Debian GNU\/Linux 11 (bullseye):\n\nsetting buffer sizes (with setsockopt) results in twice the buffer (as reported by getsockopt).\nsend() blocks after roughly (send-buffer + 2 x receive-buffer) bytes wrt the buffer sizes set with setsockopt.\n\n\n\nQuestions: Where does the \"excess\" data go? How come the implementations behave so differently?\nAll tests were done on the same machine (win->win, raspi->raspi) with various send\/receive buffer sizes in the range 5 - 50 KB.","AnswerCount":1,"Available Count":1,"Score":-0.1973753202,"is_accepted":false,"ViewCount":34,"Q_Id":71541204,"Users Score":-1,"Answer":"TCP is a byte stream; there is no 1:1 relationship between sends and reads. send() copies data from the sender's buffer into a local kernel buffer, which is then transmitted to the remote peer in the background, where it is received into a kernel buffer, and finally copied by receive() into the receiver's buffer.
send() will not block as long as the local kernel still has buffer space available. In the background, the sending kernel will transmit buffered data as long as the receiving kernel still has buffer space available. receive() will block only when the receiving kernel has no data available.","Q_Score":0,"Tags":"python,sockets,tcp,buffer","A_Id":71541508,"CreationDate":"2022-03-19T19:07:00.000","Title":"How many bytes can be send() over tcp without ever receive(), before send() blocks -- dependent on buffer sizes?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a topic with 100 partitions and a single application running to consume messages from this topic. Since there is only 1 consumer, all 100 partitions would get assigned to this particular consumer. I wanted to understand what would happen in the case when all partitions have messages that need to be consumed.\n\nWould it go in order? i.e. first consume all messages from partition #1, then move to partition #2 and so on...\nWould it consume in a round-robin fashion?
i.e. first consume 1 message from partition #1, then move to partition #2 and so on; after consuming 1 message from all partitions, come back to partition #1?\n\nI have observed #1, but I want behaviour which is more like #2; can that be achieved in any way?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":136,"Q_Id":71555963,"Users Score":0,"Answer":"Configure max.poll.records=1 in the configuration.\nThis will poll a single Kafka message in one consumer call.","Q_Score":0,"Tags":"python,apache-kafka,kafka-consumer-api,producer-consumer","A_Id":71557033,"CreationDate":"2022-03-21T10:19:00.000","Title":"Kafka python client consumer behaviour in case of 1 consumer and multiple partitions","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"Is there a way of listing blobs with the GCP python storage package under a specific folder, and excluding a specific subfolder path?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":38,"Q_Id":71560509,"Users Score":3,"Answer":"The API does not have an exclude option for prefixes.","Q_Score":0,"Tags":"python-3.x,google-cloud-storage","A_Id":71562371,"CreationDate":"2022-03-21T15:57:00.000","Title":"GCP storage list_blobs with prefix and exclude a specific subfolder","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I import my own python modules via\npip install -e path\nThe webserver warns with No module named and cannot import the dag directly.\nHowever, after refreshing, it sometimes imports the dag successfully and sometimes fails.\nIf I execute it, the tasks work without error.\nHow can I solve this
issue?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":42,"Q_Id":71561210,"Users Score":0,"Answer":"I finally figured it out. Because I am using Jupyter Notebook, it creates .ipynb_checkpoints for me.\nAfter I removed the folder, all dags work.","Q_Score":0,"Tags":"python,airflow","A_Id":71575711,"CreationDate":"2022-03-21T16:49:00.000","Title":"Airflow occasionally cannot find the local python module","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have installed and uninstalled azure.storage.blob multiple times, but this error persists. I have also tried installing typing-extensions myself. Any guidance would be greatly appreciated. As far as I can see, there weren't any updates to my existing code or the package itself. I did run pip list and typing_extensions was in there.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":110,"Q_Id":71563440,"Users Score":0,"Answer":"Uninstalling and reinstalling typing_extensions itself fixed my problem.","Q_Score":0,"Tags":"python,python-3.x,azure,package","A_Id":71563592,"CreationDate":"2022-03-21T19:58:00.000","Title":"No Module named 'typing_extensions' 'from azure.storage.blob import BlobServiceClient'","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"Monit is not invoking my python script <--> OS is CentOS. The first line in the python script is \"#!\/usr\/bin\/env python3\". When I tried to invoke the python script from my terminal it worked, but monit is not able to trigger the script.\nI tried to call the python script from a shell script in monit, but no luck.
I even tried adding the PATH variable as the second line of the shell script.\nAny help will be appreciated.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":17,"Q_Id":71597506,"Users Score":0,"Answer":"The issue is with the PATH environment variable: like cron, monit only takes a subset of values from the PATH variable instead of all values. So after explicitly adding the $PATH variable after the shell interpreter, the issue got resolved.","Q_Score":0,"Tags":"python-3.x,monit","A_Id":71665032,"CreationDate":"2022-03-24T05:36:00.000","Title":"Monit not invoking python script <--> OS is CentOS","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"My question is about garbage-collection in general but I'm going to use python as an example. But I need to build up my question first with some facts, or at least what I think are facts, feel free to correct me.\nThe CPython implementation of python is a virtual machine written in C which contains a garbage-collector. But Jython, which is an implementation of python that runs on JVM, does not need to implement a garbage-collector, because all it's doing is compiling python to java bytecode that the JVM is going to run and JVM already has a garbage-collector.\nBut suppose I want to implement a python VM using Golang. As far as I know, when you compile your Go code, the Go compiler attaches a runtime to your compiled binary which contains a garbage-collector, so when I compile my VM, it's going to contain the Go runtime and garbage-collector.
Does that mean that if I write a python VM in Golang, I won't need to implement a garbage-collector?\nThanks","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":44,"Q_Id":71638331,"Users Score":1,"Answer":"The answer depends on how you implement the Python VM.\nUsually, an interpreter will need to keep track of all the symbols declared in a program. Thus even if the program removes all the references to an object, the runtime (Go program) will keep a reference to it, preventing it from being garbage collected. Such an implementation will require a custom garbage collector.\nA custom garbage collector can be avoided if your interpreter does reference counting for every symbol, and releases it as soon as all the references to it are gone. In that case, the Go garbage collector will collect it eventually. Reference counting is usually easier to implement, but may be less efficient for long-running programs.","Q_Score":0,"Tags":"python,go,garbage-collection","A_Id":71638507,"CreationDate":"2022-03-27T16:49:00.000","Title":"garbage-collection for VM implemented in a garbage-collected language","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"My question is about garbage-collection in general but I'm going to use python as an example. But I need to build up my question first with some facts, or at least what I think are facts, feel free to correct me.\nThe CPython implementation of python is a virtual machine written in C which contains a garbage-collector. But Jython, which is an implementation of python that runs on JVM, does not need to implement a garbage-collector, because all it's doing is compiling python to java bytecode that the JVM is going to run and JVM already has a garbage-collector.\nBut suppose I want to implement a python VM using Golang. 
As far as I know, when you compile your Go code, the Go compiler attaches a runtime to your compiled binary which contains a garbage-collector, so when I compile my VM, it's going to contain the Go runtime and garbage-collector. Does that mean that if I write a python VM in Golang, I won't need to implement a garbage-collector?\nThanks","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":44,"Q_Id":71638331,"Users Score":1,"Answer":"The Go garbage collector understands the structures of your VM, not the code that the VM itself will handle.\nSo the answer is: you still need to implement a garbage collector for your VM.\nFor instance: imagine a very simple virtual machine with some memory and a few instructions. The memory can be a slice of bytes in Go, but it will be one slice, used from beginning to end. The GC will not clear and return this piece of memory to the OS while the VM is using it.\nBut if you can map some aspects of this VM to Go, perhaps you may have some advantage. For instance, you may map a thread to a goroutine. But a GC needs some extra effort: you need to know that there are no references to that portion of memory to clean it.
And this is totally specific to the VM.","Q_Score":0,"Tags":"python,go,garbage-collection","A_Id":71638484,"CreationDate":"2022-03-27T16:49:00.000","Title":"garbage-collection for VM implemented in a garbage-collected language","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm new to macOS and trying to get a dev environment going...I need pip but I get an error message with just pip.\nI'm getting the error pkg_resources.DistributionNotFound: The 'pip==20.0.2' distribution was not found and is required by the application\nPython3 was installed with macOS and I tried to install a 2.x version and made it the global default. I think that's why I'm getting the above error.\nI uninstalled the 2.x python using pyenv.\nHow can I remove pip (I've got pip, pip3 and pip 3.8) and start over?\nThanks.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":41,"Q_Id":71651049,"Users Score":0,"Answer":"Can you try removing pip using this in the directory where the pip package is installed?\nsudo rm -r pip \nYou can get the directory where pip is installed with which pip, and then cd into that path and run the above command.","Q_Score":0,"Tags":"python,macos,pip","A_Id":71651396,"CreationDate":"2022-03-28T17:05:00.000","Title":"Macos uninstall pip and start over","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am facing an issue where my DAG isn't importing into Airflow due to a \"ModuleNotFoundError: No Module Name package\" error.\nThe provider throwing the error shows in the Airflow UI and works if I import it inside of a Python Operator.
I am installing the provider via a requirements file in a Docker image, which also shows correct installation and shows it's installed in site-packages. I am running via a Celery executor on Kubernetes. Could this have something to do with the issue?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":56,"Q_Id":71651584,"Users Score":0,"Answer":"This occurred due to several Kubernetes pods not being rebuilt. This caused my scheduler to not have the required packages, which caused the import errors. I manually restarted those nodes and rebuilt everything from the last Dockerfile, and everything ran smoothly.","Q_Score":0,"Tags":"airflow,python-import,directed-acyclic-graphs","A_Id":71932178,"CreationDate":"2022-03-28T17:48:00.000","Title":"Broken Airflow DAG: No Module Found From Docker Requirement Instillation","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am on a Mac OS. Why are there two of them: \"Python\", and the other one on top, 'Python launcher\u2019?\nimage - (file:\/\/\/Users\/Buddhikawijegunarathna\/Desktop\/Screenshot%202022-03-28%20at%203.22.27%20PM.png)\nWhat is the difference?\n\"Python launcher\" is in Applications\/python 3.10\/Python launcher\n(the current version I use of python is 3.11)\n\"python\" is in the Macintosh HD\/library\/frameworks\/python.frameworks\/resources\/python app\nI can't run python files that use modules using 'Python launcher' but can using the \u2018Python' app.\nAnd I can run a python file from anywhere, maybe the desktop or in a folder or anything, by using 'Python launcher\u2019, but the \u2018Python' app either works on the desktop or in a specific place, and strictly not inside folders.
(if I run one, it displays an error that the directory cannot be found.)","AnswerCount":1,"Available Count":1,"Score":-0.1973753202,"is_accepted":false,"ViewCount":50,"Q_Id":71655147,"Users Score":-1,"Answer":"Python is a computer language; the name also refers to the Python system as a whole.\nPython Launcher, meanwhile, is an executable file which launches the Python compiler & functions when you use Python.","Q_Score":1,"Tags":"python,module","A_Id":71655177,"CreationDate":"2022-03-29T00:28:00.000","Title":"Why are there two called \"Python\u201d and \"Python launcher\u201d seperately?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am trying to compile some C++ code that uses matplotlibcpp and I am having quite a difficult time. In my makefile, I am following the example in the matplotlibcpp documentation and adding a -lpython3.9 flag (I am using python 3.9 because my Mac's python2.7 doesn't allow me to link -lpython2.7).\nWhen I try to compile I get an error stating:\n\"ld: library not found for -lpython3.9\"\nI would like to know what the correct library name I need to use is so I can have access to python 3.9.\nFor context, I am using clang++ and python 3.9 installed using homebrew.\nPS: I searched in my Versions\/3.9\/lib folder and it has a file called \"libpython3.9.dylib\", which seems like it might be what I want, but I don't know how to include it the same way I would -lpython3.9.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":25,"Q_Id":71655157,"Users Score":1,"Answer":"What is the equivalent of -lpython2.7 for python 3.9 (ie where is the python3 library?)\n\nIt doesn't have much to do either with C++ or with makefiles, but with a conventional Python 3.9 installation, the analogue of -lpython2.7 is -lpython3.9.\n\nWhen I try to compile I get an
error stating:\n\"ld: library not found for -lpython3.9\"\n\nThat is almost certainly a sign that the library is outside the applicable library search path, not that you have an inappropriate -l option. You would typically resolve that by using an -L\/path\/to\/directory\/of\/libpython option earlier on the command line.","Q_Score":1,"Tags":"c++,python-3.x,matplotlib,makefile,compiler-errors","A_Id":71655222,"CreationDate":"2022-03-29T00:30:00.000","Title":"C++ Makefiles: What is the equivalent of -lpython2.7 for python 3.9 (ie where is the python3 library?)","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"In a jupyter notebook, I can fairly easily check with python code if some libraries are installed on the current kernel of the python notebook.\nHowever there is also the \"host\" kernel which has its own python env (ie the python process that was launched when jupyter notebook was called). Depending on what libraries\/extensions were installed on the host, it may not be possible to do specific things on the jupyter notebook client itself.\nIs there a way to query what libraries\/modules\/extensions are installed on the host, from the \"client\" notebook?
thanks","AnswerCount":6,"Available Count":3,"Score":-0.0333209931,"is_accepted":false,"ViewCount":144,"Q_Id":71668268,"Users Score":-1,"Answer":"I believe !pip list should work; if not, then pydoc most likely would (see above).","Q_Score":2,"Tags":"python,jupyter-notebook","A_Id":71785424,"CreationDate":"2022-03-29T20:13:00.000","Title":"query what libraries are on the host of the python notebook","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"In a jupyter notebook, I can fairly easily check with python code if some libraries are installed on the current kernel of the python notebook.\nHowever there is also the \"host\" kernel which has its own python env (ie the python process that was launched when jupyter notebook was called). Depending on what libraries\/extensions were installed on the host, it may not be possible to do specific things on the jupyter notebook client itself.\nIs there a way to query what libraries\/modules\/extensions are installed on the host, from the \"client\" notebook?
thanks","AnswerCount":6,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":144,"Q_Id":71668268,"Users Score":0,"Answer":"Please use !pip list --local or !pip freeze --local if you're using a virtual environment.","Q_Score":2,"Tags":"python,jupyter-notebook","A_Id":71727485,"CreationDate":"2022-03-29T20:13:00.000","Title":"query what libraries are on the host of the python notebook","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"In a jupyter notebook, I can fairly easily check with python code if some libraries are installed on the current kernel of the python notebook.\nHowever there is also the \"host\" kernel which has its own python env (ie the python process that was launched when jupyter notebook was called). Depending on what libraries\/extensions were installed on the host, it may not be possible to do specific things on the jupyter notebook client itself.\nIs there a way to query what libraries\/modules\/extensions are installed on the host, from the \"client\" notebook? thanks","AnswerCount":6,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":144,"Q_Id":71668268,"Users Score":0,"Answer":"I guess you can use !pip list to show the modules\/libraries installed on the current env.
But I don't think you can view extensions.","Q_Score":2,"Tags":"python,jupyter-notebook","A_Id":71703766,"CreationDate":"2022-03-29T20:13:00.000","Title":"query what libraries are on the host of the python notebook","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I currently have a silly problem, because I wanted to switch from iTerm to Hyper, which seems interesting to me with its plugins.\nPS: I'm on an M1 Mac\nHowever, when I try to run hyper i... I get these 2 stupid errors:\n\/opt\/homebrew\/bin\/hyper: line 4: \/usr\/bin\/python: No such file or directory\n\/opt\/homebrew\/bin\/hyper: line 8: .\/MacOS\/Hyper: No such file or directory\nFor python, it is installed, except that it is called python3 and not python; the problem is that, thanks to SIP, I can't create a symlink called 'python'.\nFor the second error, I don't know what it is.\nThanks in advance to anyone","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":85,"Q_Id":71694519,"Users Score":0,"Answer":"I had the same error.
It turned out my path to python is \/usr\/bin\/python3 and I changed that in \/opt\/homebrew\/bin\/hyper: line 4","Q_Score":0,"Tags":"python,hyperterminal","A_Id":71707645,"CreationDate":"2022-03-31T14:39:00.000","Title":"2 errors appear when I want to use Hyper CLI","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a simple python project to water my plants; it runs on a Raspberry Pi with Raspberry Pi OS Lite.\nIn the project I have a package named app with a __main__.py to launch it; I just type python -m app in the terminal and it works fine.\nI tried to make a crontab with * * * * * \/usr\/bin\/python \/home\/pi\/djangeau\/app\nNothing happens, whereas if I launch a simple python test script it does.\nThe cron log gives me the error 'No MTA installed, discarding output'; I'm not sure whether this is useful to solve the problem.\nI hope I was clear enough. Thank you for your answers.
Vincent","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":20,"Q_Id":71696609,"Users Score":0,"Answer":"Finally I figured it out:\nFor example, this command in bash: python -m app\nis equivalent to this one for a crontab:\n* * * * * cd \/home\/pi\/djangeau && \/usr\/bin\/python -m app\nJust substitute the correct path, and replace the stars with the schedule you want.","Q_Score":0,"Tags":"cron,python-3.9,raspberry-pi-os","A_Id":71707728,"CreationDate":"2022-03-31T17:07:00.000","Title":"Crontab and python project","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I came across a surprising result while fixing a cross-platform bug in a python script.\n(sidenote: same venv specified by poetry, otoh this is basic python library functionality)\nWanting to reduce some path-trunk by a header part, I used:\nos.path.relpath('..\/data\/what\/ever\/path\/trunk.', '..\/data\/')\nwhich on Linux yields:\n'what\/ever\/path\/trunk.'\nwhile Windows produces:\n'what\/ever\/path\/trunk' (mark the missing dot at the end)\nWhy is this, and how can such aberrant behavior be justified?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":35,"Q_Id":71698569,"Users Score":0,"Answer":"On Windows, the last . in a filename is used to separate the file extension from the rest of the filename. If the extension is blank, the . is optional; both forms would reference the same file.\nLinux doesn't have filename extensions, so the trailing .
is a valid part of the filename and must be preserved.","Q_Score":0,"Tags":"python-3.x,linux,windows,cross-platform","A_Id":71698734,"CreationDate":"2022-03-31T20:06:00.000","Title":"differences in python path handling with trailing dot ('.') between linux and windows","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have multiple python scripts that keep running in the background, and I need to distinguish between them. So I thought about giving every script a name or title that shows up in Task Manager or Process Explorer.\nFrom what I searched, this is possible in Linux, and impossible in Windows.\nI tried the module SETPROCTITLE but it didn't work.\nI tried using pytoexe; it works but it's difficult, as it requires generating an exe file every time you edit your code.\nIs there any portable method to give every python script a name that displays it in Process Explorer?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":61,"Q_Id":71718139,"Users Score":1,"Answer":"Lots of applications have zillions of instances that can be hard to distinguish.\nIn Task Manager, I use the Details tab. Right click on one of the column headers, and choose \"Select columns\" from the contextual pop-up menu.
If you select the \"Command line\" column (and make your window wide enough), you might be able to distinguish between your python processes by the name of the script.\nI'm pretty sure Process Explorer can also show the command line, but I don't use it frequently enough to remember the details.","Q_Score":0,"Tags":"python,windows,automation,taskmanager","A_Id":71718634,"CreationDate":"2022-04-02T13:52:00.000","Title":"Give python process a name or title for windows","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am sending and receiving packets between two boards (a Jetson and a Pi). I tried using TCP, then UDP; theoretically UDP is faster, but I want to verify this with numbers. I want to be able to run my scripts and send and receive my packets while also calculating the latency. I will later study the effect of using RF modules instead of direct cables between the two boards on the latency (this is another reason why I want the numbers).\nWhat is the right way to tackle this?\nI tried sending the timestamps to get the difference, but their times are not synced. I read about NTP and iperf, but I am not sure how they can be run within my scripts. iperf measures the traffic, but how can that be accurate if your real TCP or UDP application is not running with real packets being exchanged?","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":77,"Q_Id":71723297,"Users Score":0,"Answer":"It is provably impossible to measure (with 100% accuracy) the latency, since there is no global clock. NTP estimates it by presuming the upstream and downstream delays are equal (but actually upstream buffer delay\/jitter is often greater).\nUDP is only \"faster\" because it does not use acks and has lower overhead. This \"faster\" is not latency.
Datacom \"speed\" is a combination of latency, BW, serialization delay (time to \"clock out\" data), buffer delay, packet overhead, and sometimes processing delay and\/or protocol overhead.","Q_Score":0,"Tags":"python,tcp,udp,ntp","A_Id":71924115,"CreationDate":"2022-04-03T05:42:00.000","Title":"What is the right way to measure server\/client latency (TCP & UDP)","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am sending and receiving packets between two boards (a Jetson and a Pi). I tried using TCP, then UDP; theoretically UDP is faster, but I want to verify this with numbers. I want to be able to run my scripts and send and receive my packets while also calculating the latency. I will later study the effect of using RF modules instead of direct cables between the two boards on the latency (this is another reason why I want the numbers).\nWhat is the right way to tackle this?\nI tried sending the timestamps to get the difference, but their times are not synced. I read about NTP and iperf, but I am not sure how they can be run within my scripts. iperf measures the traffic, but how can that be accurate if your real TCP or UDP application is not running with real packets being exchanged?","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":77,"Q_Id":71723297,"Users Score":0,"Answer":"While getting one-way latency can be rather difficult and depends on very well synchronized clocks, you could make the simplifying assumption that the latency in one direction is the same as in the other (and no, that isn't always the case) and measure round-trip time and divide by two.
Ping would be one way to do that, netperf and a \"TCP_RR\" test would be another.\nDepending on the network\/link speed and the packet size and the CPU \"horsepower,\" much if not most of the latency is in the packet processing overhead on either side. You can get an idea of that with the service demand figures netperf will report if you have it include CPU utilization. (n.b. - netperf assumes it is the only thing meaningfully consuming CPU on either end at the time of the test)","Q_Score":0,"Tags":"python,tcp,udp,ntp","A_Id":72481274,"CreationDate":"2022-04-03T05:42:00.000","Title":"What is the right way to measure server\/client latency (TCP & UDP)","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am attempting to find the original file that is being printed. I am aware that there is an SPL file stored in C:\\Windows\\System32\\spool\\PRINTERS that triggers the print job, but I would like to find the file used to create this spool file.\nIs there a way to get the full path of the document printed using winspool or win32 API?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":87,"Q_Id":71724652,"Users Score":1,"Answer":"In general, no. 
Only the application calling the print APIs is aware of any file involved (if any).","Q_Score":1,"Tags":"python,windows,winapi,printing,print-spooler-api","A_Id":71731740,"CreationDate":"2022-04-03T09:34:00.000","Title":"Is there a way to get the full path of a printed file in windows?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"If I have a main.py file in a folder, how can I create a command on my PC so that calling only main from any point in the terminal runs my main.py file? Thanks","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":25,"Q_Id":71756133,"Users Score":0,"Answer":"Setting the path and variables in Windows 10\n\nPress the Windows key+X to access the Power User Task Menu. In the\nPower User Task Menu, select the System option. In the About window,\nclick the Advanced system settings link under Related settings on\nthe far-right side.\n\nIn the System Properties window, click the Advanced tab, then\nclick the Environment Variables button near the bottom of that\ntab.\n\nIn the Environment Variables window,\nhighlight the Path variable in the System variables section and\nclick the Edit button.\n\nAdd or modify the path lines with the paths you want the computer\nto access. 
Each directory path is separated with a semicolon, as\nshown below.\nC:...\\main.py","Q_Score":0,"Tags":"python","A_Id":71756217,"CreationDate":"2022-04-05T17:38:00.000","Title":"How to create a python program that run calling name instead of name.py from root terminal?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"If I have a main.py file in a folder, how can I create a command on my PC so that calling only main from any point in the terminal runs my main.py file? Thanks","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":25,"Q_Id":71756133,"Users Score":0,"Answer":"If you are on Linux or Mac OS X, make sure the program has a valid shebang line such as #! \/usr\/bin\/python that points to the Python executable you want to run the script with, then enter chmod +x main.py. Finally, rename the script to remove the .py extension.\nYou can now invoke it (if you're in the same directory) with .\/main. 
If you want to be able to invoke it regardless of the directory you're in, add the script's directory to your PATH variable.","Q_Score":0,"Tags":"python","A_Id":71756214,"CreationDate":"2022-04-05T17:38:00.000","Title":"How to create a python program that run calling name instead of name.py from root terminal?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"Disclaimer\nI'm new to poetry and I've searched around, but apologies if I've missed something obvious..\nWhat I want to do is:\n\nspecify an arbitrary python version in a poetry project (I don't care how I specify it, but presumably in pyproject.toml)\nrun a command (presumably poetry shell or poetry install, maybe in conjunction with poetry env use) that puts me into an environment that uses the python version I specified above (I don't mind if it's a few commands instead of just one)\n\nI've already tried:\n\npoetry env use 3.10 (when I don't have python 3.10 installed already)\n\nThis gives me an error: \/bin\/sh: python3.10: command not found\nNotably, I get why this error is showing up, I'd just like to have poetry install python 3.10 in this case\n\n\npoetry env use 3.10 (when I'm in a conda env that has python 3.10 installed)\n\nThis works! But... the python executable is symlinked to the one in the conda env\nMaybe this is fine, but my goal is to use poetry instead of conda so I'd like to avoid relying on conda to install python versions\n\n\n\nWhat I've seen people do is:\nUse poetry in conjunction with pyenv (where pyenv is used to install the python version).\nSo, my question is:\nIs using something like pyenv to install python versions necessary? 
Is there no way to tell poetry that I want it to install a given python version?\nI've looked through the poetry docs, their GH issues, and some SO posts, but (again) apologies if I'm missing something.\nAdditional Info\n\npoetry version: 1.1.13\nOS: MacOS (Catalina)\npython version used to install poetry (and therefore the one it seems to default to): 3.7.6\nhappy to add anything else relevant :)","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":295,"Q_Id":71788294,"Users Score":1,"Answer":"Poetry cannot and will not install any python interpreter for you. This is out of scope. It's the responsibility of the user to provide one.\npyenv is one way of doing it.","Q_Score":1,"Tags":"python,python-poetry","A_Id":71805500,"CreationDate":"2022-04-07T20:04:00.000","Title":"Can I install a new version of python using only poetry?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"When I access the folder using os.path.getmtime(folder_path), it does not return the modified time of a file. os.path.getatime(folder_path) returns the correct last modified time instead.","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":46,"Q_Id":71792404,"Users Score":2,"Answer":"On Windows, a folder's modified time is the time when the folder was last updated, not when a file in the folder was last updated.\nWhen you create a file in a folder, getmtime(folder) and getatime(folder) are both updated.\nWhen you edit an existing file in a folder, only getatime(folder) is updated, not getmtime(folder).\nWhen you read an existing file in a folder, only getatime(folder) is updated, not getmtime(folder).\nTo find when the latest file was updated in a folder, neither getatime(folder) nor getmtime(folder) will help. 
You need to loop through the files under the folder and use getmtime(each_file_in_folder).","Q_Score":2,"Tags":"python,python-3.x,windows","A_Id":71793830,"CreationDate":"2022-04-08T06:14:00.000","Title":"What's the difference between 'access' and 'modified' times in Windows folder structure?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a stepper motor controller that I can command through a USB COM on windows. The manufacturer provided software, but I want to create my own in python (in fact, I want to include the control of the stepper in python code that controls another device using the stepper). The problem is that I don't have any information about the commands to send to the controller to move the motor. I want to know if there is a way to read the command sent to the controller by the manufacturer software (like move the motor and read the command sent) and then use that command to write my own code in python? I want to know if my idea is pure fantasy or if this can actually be done? Thanks","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":40,"Q_Id":71814391,"Users Score":0,"Answer":"I think it's a bit hard since the manufacturer already has its own software, meaning their software is already bound to the firmware of the controller.\nOne way to do that is to look for a way for python to communicate with the controller's firmware. Who knows how to do this? The manufacturer. 
If you have a basic knowledge of electrical engineering, I think it's possible, but still hard.","Q_Score":1,"Tags":"python,windows,libusb,pyusb,stepper","A_Id":71814721,"CreationDate":"2022-04-10T06:53:00.000","Title":"Communication with a stepper motor controller through a USB COM?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"Getting the below error while configuring python3.9.10 with the below command\n.\/configure --prefix=\/opt\/python3 --with-openssl=\/usr\/local\/openssl\/include\/openssl\nchecking for openssl\/ssl.h in \/usr\/local\/openssl\/bin\/openssl... no\nchecking whether compiling and linking against OpenSSL works... no\nUsing linux version 7 and openssl version OpenSSL 1.1.1n 15 Mar 2022","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":49,"Q_Id":71829083,"Users Score":0,"Answer":"Install libssl-dev in Debian\/Ubuntu or openssl-devel in RH\/CentOS","Q_Score":0,"Tags":"python-3.x,linux,openssl,python-3.9","A_Id":71829440,"CreationDate":"2022-04-11T13:54:00.000","Title":"Unable to install Python 3.9.10 on Linux 7","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm trying to write a basic program with Python. I'm typing Windows commands with the os library. Because of that, it doesn't work and wants admin rights. On Linux there is a command to become superuser (sudo). I couldn't find any way to run my program as administrator. I tried wmic and got an error named \"Alias not found\". 
Is there any way to run a program as administrator?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":68,"Q_Id":71843876,"Users Score":0,"Answer":"If you are trying to make a program run in admin mode (the short handle we use for Windows): if the object is an executable file or a script (and a few other file types; I am not 100% certain of the full list), you would just right-click the file and select run as admin mode. If you are trying to run your program in admin mode, I do believe that if you make python run in admin mode all the time, your .py files should inherit the administrative privileges; I would need someone to clarify that though. I am slightly confused as to your question though: when you say typing commands with the os library, are you referring to the CMD (Command Prompt)?\nEdit: In case you were referring to CMD, open the Start menu, type CMD, right-click and open in admin mode.","Q_Score":0,"Tags":"python,windows","A_Id":71844122,"CreationDate":"2022-04-12T13:47:00.000","Title":"Running as administrator in Windows","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"When I run my python script from the terminal I get an error due to a missing env variable. I can fix this by using export to set it. But there are other python scripts running fine as cron jobs that require this as well. And I see this env variable being set in the crontab. So my question is: is an env variable set in cron available for scripts run by user\/root from the CLI? Running the env command as root doesn\u2019t show this variable.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":55,"Q_Id":71880095,"Users Score":0,"Answer":"Environment variables are private to each process, not global. 
When a new process is started it inherits its initial environment from its parent. Your terminal shell isn't started by cron, so it doesn't inherit the env vars you've set in your crontab.\nYour OS may provide a standard mechanism for setting env vars for all processes (e.g., your terminal, which then starts your CLI shell). What I prefer to do is create a file named ~\/.environ that I then source in my ~\/.bashrc and cron jobs.","Q_Score":0,"Tags":"python,shell,cron,environment-variables","A_Id":71889044,"CreationDate":"2022-04-15T04:29:00.000","Title":"python script doesn\u2019t pick up environment variable that is set in crontab","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm writing a script that processes files in directories, but I want the directories' timestamps to remain the same. So I want to get the timestamp before the operation, then set it back after.\nGetting the timestamp is easy enough (os.path.getmtime()), but I can't seem to find an equivalent set method.\nAny suggestion?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":27,"Q_Id":71897351,"Users Score":0,"Answer":"Use os.utime().\nIt should work fine.\nDefine variables with the date, time and so on beforehand so the actual setting function doesn't get that cluttered.","Q_Score":0,"Tags":"python,timestamp","A_Id":71897385,"CreationDate":"2022-04-16T20:41:00.000","Title":"Set folder timestamp with Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I know that batch processing relies on collections of data, while stream processing relies on continuous data.\nPlease explain to me, in simple words, why Apache Airflow is not
a data streaming solution but a batch processing one.","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":53,"Q_Id":71909751,"Users Score":2,"Answer":"Airflow is not a data processing solution at all, stream or batch. Airflow is a \"platform to programmatically author, schedule and monitor workflows\"\nIf you want to build a data processing workflow, you should delegate all calculations to data processing tools, such as Apache Spark. So Airflow itself has no limitations (nor opportunities) for processing data in streaming or batch ways\nBut you may notice that streaming workflows are more difficult to coordinate with Airflow. Workflows in Airflow are written as directed graphs: after one statement completes, execution moves to the next. In the case of stream processing, there is no moment of \"completion\": all processes work continuously and in parallel\nSummarizing: you can use Airflow to \"coordinate\" stream processing, but you won't get any benefit from using it","Q_Score":0,"Tags":"python,airflow,airflow-scheduler,airflow-2.x","A_Id":71954837,"CreationDate":"2022-04-18T09:07:00.000","Title":"Why Apache Airflow is not a data streaming solution","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"When I enter\npython --version\nit gives:\nbash: python: command not found\nbut when I enter\nsudo apt-get install python\nit gives:\npython is already the newest version (2.7.16-1).\nWhen trying to locate the python files, they appear mostly in \/var\/lib\/dpkg\/info\/ or in \/home\/pi\/.local\/bin\/, so they are present in the system; however, they do not appear where they would normally be found, in \/usr\/local\/bin.\nThe same thing goes for the pip, python3 and pip3 files.\nHow do I fix this so that I can use the python
command?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":45,"Q_Id":71918706,"Users Score":1,"Answer":"To install python to the newest version, run sudo apt-get install python3.\nYou then need to run python3 on Linux-based systems, so python3 --version should work.","Q_Score":0,"Tags":"python,installation,pip,raspberry-pi","A_Id":71936570,"CreationDate":"2022-04-19T01:29:00.000","Title":"Python installed on raspberry pi, but not accessible in \/usr\/local\/bin","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"What are best-practice approaches for properly getting messages from Kafka and generating INSERT\/UPDATE\/DELETE statements for relational dbs using Python?\nSay I have events that Create Entity\/Update Entity\/Delete Entity and I want those messages to be transformed into the relevant SQL script.\nIs there any suggestion other than writing the serialization manually?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":30,"Q_Id":71937526,"Users Score":1,"Answer":"There is no way around deserializing the record from Kafka and serializing into the appropriate database query. 
I would not recommend writing literal DDL statements as Kafka records and running those directly against a database client.\nAs commented, you can instead produce data in a supported format (JSONSchema, Avro, or Protobuf being the most common \/ well-documented) from Kafka Connect (optionally using a Schema Registry), then use a Sink Connector for your database.","Q_Score":0,"Tags":"python,sql,apache-kafka","A_Id":71944221,"CreationDate":"2022-04-20T09:47:00.000","Title":"Serialize Kafka message into DB","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I\u2019ve downloaded the latest version of Delphi 11.1 and all is OK, but when I try to debug a project on Mac Monterey 12.3.1 using PAServer 22, I get an error in the PAServer terminal window; it says that the Python framework can\u2019t be found in System\/library\/Frameworks\/.., I installed Python 2.7 but it\u2019s not installed in System\/... it is installed in Library\/... 
and PAServer can\u2019t find it.\nDo you have any solutions for that problem?\nThank you.","AnswerCount":3,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":251,"Q_Id":71944001,"Users Score":0,"Answer":"Finally I have the solution; use:\nsudo install_name_tool -change '\/System\/Library\/Frameworks\/Python.framework\/Versions\/2.7\/Python' \/Library\/Frameworks\/Python.framework\/Versions\/2.7\/Python liblldb.3.8.0.dylib\nI changed the dylib that crashed to use the new path for Python.","Q_Score":1,"Tags":"python,delphi,macos-monterey,paserver","A_Id":72048605,"CreationDate":"2022-04-20T17:42:00.000","Title":"PAServer 22 is not working in Mac Monterey, it use dylib that link with framework Python 2.7 and not exist in \/System\/library\/Frameworks of Monterey","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have developed a django REST API. I send request\/data to it to perform a task and it does it nicely. Though, in a way, I can send it multiple requests\/data to perform the task on each of them. The issue is that the server where the task gets performed has limited memory and I need to perform these tasks one by one. So, I am thinking of having a queue system in the django pipeline which can keep requests on hold till the task at the front of the queue is done.\nI am not sure if I am on the right path, and not sure if celery is the option to solve my issue.\nIt seems a simple task, and I didn't understand if celery is what I need. Can you point me to what I should be looking at?
If you have some longer job done during API calls (sending emails, for example), then you'd better use celery. But the only thing you can get as a response from your API is that the task was queued.","Q_Score":0,"Tags":"python,django","A_Id":71955639,"CreationDate":"2022-04-21T13:43:00.000","Title":"Queue in django\/python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I want to change the title text to other names but I don't know the command.\nLike from C:\\windows\\py.exe to a better-looking title like Python, but I cannot find how to do it.\nI need the Python 3.10 version of the Windows command 'title ...'\nThanks for your help!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":45,"Q_Id":71993399,"Users Score":0,"Answer":"You can use the os module's system command: os.system(\"title your title\").","Q_Score":1,"Tags":"python,python-3.x","A_Id":71993411,"CreationDate":"2022-04-25T00:33:00.000","Title":"How to change python window title?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"As CTT is a Windows application, I couldn't find a way to make API calls to it. Is there a way to open and run it?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":20,"Q_Id":71993926,"Users Score":1,"Answer":"Read the documentation for the CTT; you'll see it has a CLI that allows it to be run as part of a (Windows-based) build\/CI environment.","Q_Score":1,"Tags":"python,opc-ua,opc","A_Id":72006102,"CreationDate":"2022-04-25T02:30:00.000","Title":"Is there a way to call the OPC compliance test tool using python?","Data Science and Machine Learning":0,"Database and 
SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am trying to use wand with python on my Mac Monterey. When I run the command on python from wand.image import Image as Img I get the error ImportError: MagickWand shared library not found. You probably had not installed ImageMagick library.\nThis is what I have done, following various guidelines. Any suggestions as to how to make wand find imagemagick?\nI installed imagemagick via homebrew. I can confirm that the following items exist on my computer: \/opt\/homebrew\/Cellar\/imagemagick and \/opt\/homebrew\/Cellar\/imagemagick@6.\nI also did a brew unlink imagemagick && brew link imagemagick\nI added the following line to the end of my .zshrc:\nexport MAGICK_HOME=\"\/opt\/homebrew\/Cellar\/imagemagick\"\nI installed Wand via pip install in my local environment","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":20,"Q_Id":72029757,"Users Score":0,"Answer":"That was the wrong path to put in the .zshrc. The correct path was export MAGICK_HOME=\"\/opt\/homebrew","Q_Score":0,"Tags":"python,imagemagick,homebrew,wand","A_Id":72041140,"CreationDate":"2022-04-27T13:34:00.000","Title":"Installation of imagemagick with Homebrew not found by python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"For my work I need to write a GUI using PySide6 for a remote system. 
The OS is RHEL 7.9 and I have neither admin privileges nor pip working (blocked by admins), so I can't install anything by myself (and I'm not allowed to anyway).\nThe script runs perfectly on Windows and Fedora, but it doesn't work on RHEL 7.9:\n\nSince the machine doesn't allow pip, I've included PySide6 in my virtual environment, but there are missing libraries in the system itself, like CXXABI_1.3.9 and GLIBC_2.33 that Shiboken6 needs.\nIt also didn't work in compiled form (with PyInstaller) because GLIBC_2.29 is missing.\nNaively I copied libstdc++.so.6 and libc.so.6 from a Fedora machine to RHEL and redirected the linking to the libraries with the LD_LIBRARY_PATH environment variable, but because of other dependencies it didn't work.\n\nIs there a solution to make the script work cross-platform and independently?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":65,"Q_Id":72047360,"Users Score":0,"Answer":"This won't answer your question about the missing libraries, but I hope it helps solve your current issue with PySide.\nI've had a similar problem before; you should always develop on the target platform to get comparable results.\nThis means that you theoretically have to write, compile and package your program on the RHEL machine. You also need to always develop on the older platform. Forward compatibility is not always guaranteed. I therefore suggest that you install CentOS 7 in a virtual machine and, if your program is not too complicated, try to use PySide2 instead of PySide6.","Q_Score":1,"Tags":"python,redhat,pyside","A_Id":72115390,"CreationDate":"2022-04-28T16:38:00.000","Title":"How to deal with missing libraries on Linux? 
[No Admin privileges and PIP is blocked]","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I've been wanting to make a program that hides a folder that has a lot of my sensitive files in it. I use python as my main programming language.\nI need a way to hide the folder from Windows Explorer, even if the \"Show Hidden Files\" option is checked. I know how to hide it normally, right-clicking on the folder and checking the \"hidden\" option, but I need users to not be able to see it at all. I also need to be able to unhide it using the same program.\nIf anyone knows a solution, please let me know!\nThank you!","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":44,"Q_Id":72062151,"Users Score":0,"Answer":"I have opted for a secured, hidden folder marked as hidden, system reserved and encrypted, and requiring a python script to unlock. It just uses the attrib DOS command, but it does the trick. I'm also using EFS encryption on the folder, so even if the system and hidden attributes are cleared, people still can't access it.","Q_Score":0,"Tags":"python","A_Id":72067157,"CreationDate":"2022-04-29T18:13:00.000","Title":"Is there a way to hide a folder even when \"show hidden files\" is checked using python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I've been wanting to make a program that hides a folder that has a lot of my sensitive files in it. I use python as my main programming language.\nI need a way to hide the folder from Windows Explorer, even if the \"Show Hidden Files\" option is checked. 
I know how to hide it normally, right-clicking on the folder and checking the \"hidden\" option, but I need users to not be able to see it at all. I also need to be able to unhide it using the same program.\nIf anyone knows a solution, please let me know!\nThank you!","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":44,"Q_Id":72062151,"Users Score":0,"Answer":"Theoretically, you could make a program that loads your folder into RAM on startup while deleting it, and re-saves the folder on shutdown; however, this would be very risky when you press \"shutdown anyway\". If a certain condition is met, you could show the folder (e.g. by saving it to the file system).\nAnother idea would be to try to remake it as the very infuriating folder in C:\\Program Files\\WindowsApps, which you can only access from the command line.\nAlso, if you are able to complete this project, I would be very interested in using it too.","Q_Score":0,"Tags":"python","A_Id":72062347,"CreationDate":"2022-04-29T18:13:00.000","Title":"Is there a way to hide a folder even when \"show hidden files\" is checked using python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I finished my python application and converted it to a .exe,\nbut when I open the .exe file, I found that the cmd window also opens with the application window.\nI used python3.10 and tkinter\nand also used auto-py-to-exe to convert","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":19,"Q_Id":72072190,"Users Score":0,"Answer":"You can add the --noconsole flag to the command when creating the .exe\npyinstaller \"other tags\" --noconsole app.py","Q_Score":0,"Tags":"python,cmd,auto-py-to-exe","A_Id":72072372,"CreationDate":"2022-04-30T21:07:00.000","Title":"cmd window always open when I open my .exe 
application","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I want my python code to be run in an external terminal (window).\nObviously I have to edit the 'launch.json' file, where I should\nchange the option 'console:internalConsole' to 'console:externalTerminal'.\nThe problem is, I can't find a 'launch.json' file. I guess I have\nto set one up, but I'm not sure how to do it.\nIt seems that the extension 'code runner' could do this trick,\nbut the extension breaks down.\nI tried to make changes in the settings menu; I chose code to be run\nin an external terminal, but it still uses the internal one.\nMaybe you can give me a direction?\nThanks","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":31,"Q_Id":72077096,"Users Score":0,"Answer":"It's done. I created a launch.json file and set 'console: externalTerminal'.","Q_Score":0,"Tags":"python,visual-studio-code,terminal","A_Id":72078811,"CreationDate":"2022-05-01T13:20:00.000","Title":"How to run my code in external window (terminal)","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I've been unable to find what status 'Active' tasks are. I'm using JUG 2.1.1, and I don't see that word appear anywhere in the manual, except in a footnote about 'active-wait'.\nI'm using an LSF array to run a large number (hundreds of thousands) of minutes-long single core jobs. Peculiarly, although jobs do move from 'Ready' to 'Complete', and none are listed as 'Failed' or 'Waiting', I have no column in the output from status for 'Running' (which I've seen in the worked examples) and instead have a column called 'Active'. 
The number of active tasks varies, but is between 800 and 950 for an LSF array with 2000 elements. According to LSF (output of bjobs -r), each of the elements in the job array shows status 'RUN'. Although I have not done it exhaustively, manually sshing to a node some of my jobs have landed on and then running 'htop' to look at utilization shows the expected number of processes, each pinning an available core. It is conceivable that there are some processes in my job array that are not doing this, however, since what I did amounts to a spot-check.\nDoes Running == Active for the output of jug status? Am I failing to use about 1100 processors that I am nonetheless occupying with nominally single-threaded jobs?\nThanks for the input. Happy to provide more details as needed.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":18,"Q_Id":72080275,"Users Score":1,"Answer":"(author of jug here): It does mean \"jobs running right now\".\nIf you are using the file backend, and are running 1,000s of jobs simultaneously, it may just be that the counting is not syncing properly: as jug status is working, some jobs may be running, but it does not see them as running because between the moment it starts listing the locks and going through the list of jobs, they have finished and others started. 
Also, the listing of locks can be out of sync on a network filesystem (it should not matter for actually creating locks, but that process is much slower and we do not wish to pay the cost for jug status).\nThis should be much less serious with the redis backend, btw.","Q_Score":1,"Tags":"python,lsf,embarrassingly-parallel","A_Id":72087940,"CreationDate":"2022-05-01T20:42:00.000","Title":"What does jug status 'Active' mean, and why does it not equal the number of procs requested?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"macos monterey 12.3.1\npython -V: python3\nwhich python3: \/usr\/local\/bin\/python3\nalias python=\"\/usr\/bin\/python3\"\npath: \/Users\/Bill\/Public\/browser\/depot_tools \/usr\/local\/bin \/usr\/bin \/bin \/usr\/sbin \/sbin\nproxy: proxychains4 + tor socks5 127.0.0.1 9150\ncloned dedepot_tools in git clone https:\/\/chromium.googlesource.com\/chromium\/tools\/depot_tools.git\nrun gclient working good\nrun: fetch v8\nrun: gclient sync working good\nrun: tools\/dev\/gm.py x64.release\nshow:\nenv: python: No such file or directory\nhow to fix it?\nshould install python-is-python3?\nbrew info python-is-python3\nError: No available formula with the name \"python-is-python3\". Did you mean python-tk@3.9?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":75,"Q_Id":72082131,"Users Score":0,"Answer":"As a short-term workaround, you have two options:\n(1) Create a symlink python in your $PATH that references python3. 
This can be done in one of several ways:\n\nSome Linux distros have a python-is-python3 package for that\nSome distros have other \"official\" ways to do it\nOr you can just sudo ln -s \/usr\/local\/bin\/python3 \/usr\/local\/bin\/python (or whatever the correct paths are on your system).\n\n(2) You can call gm.py with an explicit Python binary:\npython3 tools\/dev\/gm.py\nMedium-term, gm.py should be updated to require python3 directly. The fact that it doesn't do so yet is an artifact of the Python2-to-Python3 migration.","Q_Score":0,"Tags":"python-3.x,build,v8,gm,gn","A_Id":72084334,"CreationDate":"2022-05-02T03:43:00.000","Title":"gm.py x64.release Error---env: python: No such file or directory","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"As the title says, I've just upgraded to Ubuntu 22.04 LTS and my previously working setup now says ImportError: libssl.so.1.1: cannot open shared object file: No such file or directory when starting Jupyter, and equivalently throws Could not fetch URL https:\/\/pypi.org\/simple\/jupyter\/: There was a problem confirming the ssl certificate: HTTPSConnectionPool(host='pypi.org', port=443): Max retries exceeded with url: \/simple\/jupyter\/ (Caused by SSLError(\"Can't connect to HTTPS URL because the SSL module is not available.\")) - skipping whenever trying to use pip.\nLibssl is actually available at \/usr\/lib\/x86_64-linux-gnu\/libssl.so.1.1. I could change LD_LIBRARY_PATH, but this seems like a workaround.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":429,"Q_Id":72119046,"Users Score":0,"Answer":"I resolved this problem by reinstalling the environment.\nI use pipenv and pyenv. I removed the pipenv environment and the Python version using pyenv.
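Option (1)'s symlink can be illustrated from Python itself. A sketch that creates the python -> python3 link in a temporary directory (a real install would target a $PATH directory such as /usr/local/bin and need sudo):

```python
import os
import sys
import tempfile

# Sketch of option (1): a "python" symlink pointing at a python3 binary.
# Done in a temp dir here for safety; on a real system the link would go
# in a directory on $PATH, e.g. sudo ln -s /usr/local/bin/python3 /usr/local/bin/python
bindir = tempfile.mkdtemp()
link = os.path.join(bindir, "python")
os.symlink(sys.executable, link)   # sys.executable is the running python3

print(os.path.realpath(link) == os.path.realpath(sys.executable))  # True
```

Prepending that directory to PATH would then make a bare `python` resolve to python3, which is exactly what `env: python: No such file or directory` is complaining about.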
Then reinstalled both.","Q_Score":0,"Tags":"python,ubuntu,libssl,ubuntu-22.04","A_Id":72216913,"CreationDate":"2022-05-04T20:20:00.000","Title":"libssl not found by Python on Ubuntu 22.04","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I once did something similar under Windows, copying the whole Python installation and specifying PYTHONPATH via a .bat script to make it work locally.\nBut today I got a Linux server that has a strict working environment and won't allow me to install anything. And unfortunately I know little about Linux. I wonder whether there is a similar way that I can run python on the server?","AnswerCount":4,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":85,"Q_Id":72123285,"Users Score":0,"Answer":"Sorry, I cannot put a comment because of my low reputation.\nIn short, you cannot run a Python script directly without the interpreter installed. Fortunately, you can install a Python environment without root permission by using Miniconda (or Anaconda), then make a virtual environment and install the required packages to run your code locally for your use only.","Q_Score":0,"Tags":"python,linux","A_Id":72123419,"CreationDate":"2022-05-05T07:15:00.000","Title":"Is it possible to run python scripts without python installed in Linux?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I once did something similar under Windows, copying the whole Python installation and specifying PYTHONPATH via a .bat script to make it work locally.\nBut today I got a Linux server that has a strict working environment and won't allow me to install anything. And unfortunately I know little about Linux.
I wonder whether there is a similar way that I can run python on the server?","AnswerCount":4,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":85,"Q_Id":72123285,"Users Score":0,"Answer":"Yes, you can use python docker images for running python scripts.","Q_Score":0,"Tags":"python,linux","A_Id":72123308,"CreationDate":"2022-05-05T07:15:00.000","Title":"Is it possible to run python scripts without python installed in Linux?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a file for my Discord bot which I run on boot, and now I have also made a react.js app that I also want to run on boot. But when I put:\ncd \/home\/pi\/kickzraptor-site && npm run start\nin my .bashrc file as well, only the first one gets run, because it's an infinite loop I think. How can I run both on startup? Thanks! (This is the line already at the bottom of my bashrc file)\necho Running bot.py scipt...\nsudo python3 \/home\/pi\/bot.py","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":25,"Q_Id":72165603,"Users Score":1,"Answer":"Fastest way (not recommended) is to add & at the end of the command so that the program doesn't block further processes: sudo python3 \/home\/pi\/bot.py &\nRecommended way is to create a systemd service that runs during or after boot (depending on its configuration).
This method is also good for error handling, and it gives you more control over the program.","Q_Score":0,"Tags":"python,reactjs,terminal,raspberry-pi","A_Id":72167532,"CreationDate":"2022-05-08T22:48:00.000","Title":"How can I run multiple infinite files on boot of a raspberry pi?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have Arch Linux and so the latest NeoVim release is installed (0.7.0 at this moment). About a month ago I started using GitHub Copilot and it worked well in Bash, SH, JS and others. Yesterday I wanted to rewrite some program in Python but Copilot didn't work. Tried it in different files and languages - works everywhere but not Python! :Copilot status shows \"Copilot: Enabled and online\", but gives no suggestions. :Copilot panel shows \"Synthesizing 0\/10 solutions (Duplicates hidden)\". :Copilot log contains nothing. I remember that some time ago it worked as expected but now it does not. I don't have any ideas why that is happening. As an Arch user I reject VisualStudio Code and other IDEs and prefer working in the terminal. Anything that may help?\nEdit: just discovered that opening a file without .py and printing #!\/usr\/bin\/env python3 works for Copilot, but in this case there's no syntax highlighting. Reopening with :edit adds colors but breaks Copilot.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":171,"Q_Id":72174839,"Users Score":4,"Answer":"I've just solved it on my machine. I used nvm to set my NodeJS back to v16.13.0, reloaded Neovim, and Copilot is now working as expected.\nCopilot was not working on NodeJS v18.0.0.\nWhat's annoying is tim-pope doesn't have the issues section active on the repo. So I'm sure others will run into this.
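The systemd route recommended for the Raspberry Pi boot question might look like this for the bot from that question. A hypothetical /etc/systemd/system/kickzraptor-bot.service (unit and file names are made up; the paths come from the question); a second unit for the npm app would work the same way:

```ini
[Unit]
Description=Discord bot
After=network-online.target

[Service]
ExecStart=/usr/bin/python3 /home/pi/bot.py
Restart=on-failure
User=pi

[Install]
WantedBy=multi-user.target
```

Enable it with `systemctl enable --now kickzraptor-bot.service`. Unlike .bashrc (which runs on every interactive login and blocks on the first infinite loop), systemd starts each unit independently and can restart them on failure.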
Let me know if this solves it for you.","Q_Score":1,"Tags":"python,archlinux,neovim,github-copilot","A_Id":72203607,"CreationDate":"2022-05-09T15:53:00.000","Title":"GitHub Copilot does not work in NeoVim when editing Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am very new on this platform and I need help reading a file in python. It was a docx file at first; because I am using a Mac, I converted it into a txt file, but I still have a file not found error.\nHere is some code:\nopdoc = open('PRACT.txt')\nprint(opdoc.readline())\nfor each in opdoc:\nprint(each)\nand this is the error output: FileNotFoundError: [Errno 2] No such file or directory: 'PRACT.txt'","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":26,"Q_Id":72190278,"Users Score":0,"Answer":"Put the txt file and the program itself in the same directory.\nAs the error (No such file or directory: 'PRACT.txt') shows, your code couldn't find the file because they were not in the same directory. When you use open(\"PRACT.txt\"), the program assumes the text file is in the directory you are running the code from. If you don't want them in the same directory, you can also pass the full path, open(\"[file_path]\"), and that should work.","Q_Score":0,"Tags":"python,readfile,filenotfounderror","A_Id":72190884,"CreationDate":"2022-05-10T16:55:00.000","Title":"Reading a file in python on the Mac","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have been given a windows laptop at work to work on a web app using flask which involves a little bit of computer vision and nlp as well.
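One detail worth adding to the FileNotFoundError answer: open('PRACT.txt') is resolved against the current working directory (the directory you ran python from), not the directory the code file lives in. A quick sketch to see where the lookup actually happens:

```python
import os

# A relative open("PRACT.txt") is resolved against the *current working
# directory*, not the script's own folder -- printing both usually
# explains the FileNotFoundError.
print(os.getcwd())                          # where 'PRACT.txt' is looked up
path = os.path.join(os.getcwd(), "PRACT.txt")
print("exists:", os.path.exists(path))      # False unless the file is there
```

Either `cd` into the folder containing PRACT.txt before running the script, or open it by full path.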
I wanted to take this chance to transition to the command-line style of Linux. Since it is my office laptop I am not allowed to dual-boot or install anything unless it is specified. So I was wondering if Git Bash can be used to transition to Linux commands instead of the Windows cmd or the Anaconda prompt?\nI did try using Git Bash, but other than commits it is a hassle when using pip or sudo. Many times it does not recognize the command even though I have the python scripts added to PATH, and it still isn't working with sudo and python. I felt I will face similar issues ahead, so I wanted to know if it is a good idea at all to use Git Bash to practice transitioning to Linux, or if there is any alternative that is smoother and less hassle.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":35,"Q_Id":72197569,"Users Score":1,"Answer":"In my experience the best way to learn Linux when you only have Windows installed is to use either Windows Subsystem for Linux (WSL) or Linux-based Docker containers.\nGit Bash won't offer the full functionality of Linux, since it largely relies on the binary commands and various components of Windows.\nDocker has a number of other benefits.\nFor example, if your Flask application is Dockerised, it would make it easier to deploy on the server when needed.","Q_Score":1,"Tags":"python,linux,windows,command-line,git-bash","A_Id":72197705,"CreationDate":"2022-05-11T08:10:00.000","Title":"Using Git Bash for python installations and tasks","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a container that I am working on. The container was running perfectly fine before; I was able to do a docker-compose --build and it rebuilt without any issues.
I went ahead and upgraded Docker Desktop on my Mac to version 4.8.1(78998); the container was running and the upgrade restarted it. I was able to down the container and start it back up without any issues. The problem is that when I attempt to rebuild the container\n\n\"docker-compose up -d --build\"\n\nI get the following error message:\n\nERROR: for secure_upload Cannot start service python: failed to\ncreate shim: OCI runtime create failed: container_linux.go:380:\nstarting container process caused: exec: \"uwsgi\": executable file not\nfound in $PATH: unknown\nERROR: for python Cannot start service python: failed to create shim:\nOCI runtime create failed: container_linux.go:380: starting container\nprocess caused: exec: \"uwsgi\": executable file not found in $PATH:\nunknown ERROR: Encountered errors while bringing up the project.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":49,"Q_Id":72219252,"Users Score":0,"Answer":"I removed the container completely and deleted the image. I then started it and it rebuilt without any issues.
I guess somehow the image got corrupted.","Q_Score":0,"Tags":"python,docker,docker-compose","A_Id":72219588,"CreationDate":"2022-05-12T16:36:00.000","Title":"container won't start after Docker Desktop upgrade","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"We use openerp (odoo) on a linux server (debian). I want to locate the python interpreter used by the odoo daemon,\nso the question is how I can change the path to my new python interpreter.\nIn other words, how does odoo choose its interpreter to run the modules?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":30,"Q_Id":72221834,"Users Score":1,"Answer":"In odoo-bin it's called out like #!\/usr\/bin\/env python3","Q_Score":1,"Tags":"python,linux,odoo","A_Id":72222480,"CreationDate":"2022-05-12T20:38:00.000","Title":"what is the python interpreter for the odoo server (openERP)","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I've been looking around here and on the Internet, but it seems that I'm the first one having this question.\nI'd like to train an ML model (let's say something with PyTorch) and write it to an Apache Kafka cluster. On the other side, there should be the possibility of loading the model again from the received array of bytes. It seems that almost all the frameworks only offer methods to load from a path, i.e. a file.\nThe only constraint I'm trying to satisfy is to not save the model as a file, so I won't need any storage.\nAm I missing something?
Do you have any idea how to solve it?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":30,"Q_Id":72223191,"Users Score":1,"Answer":"One reason to avoid this is that Kafka messages have a default maximum size of 1MB. Sending models around in topics therefore wouldn't be the best idea; instead, you could store model files in a shared filesystem and send URIs to the files (strings) for the consumer clients to download.\nFor small model files, there is nothing preventing you from dumping the Kafka record bytes to a local file, but if you happen to change the model input parameters, then you'd need to edit the consumer code anyway.\nOr you can embed the models in other stream processing engines (still on local filesystems), as linked in the comments.","Q_Score":0,"Tags":"python,apache-kafka,scikit-learn,pytorch","A_Id":72223262,"CreationDate":"2022-05-12T23:58:00.000","Title":"Send and load an ML model over Apache Kafka","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"Suppose I wrote a docker-compose.dev.yml file to set up the development environment of a Flask project (web application) using Docker. In docker-compose.dev.yml I have set up two services, one for the database and one to run the Flask application in debug mode (which allows me to make hot changes without having to recreate\/restart the containers). This allows everyone on the development team to use the same development environment very easily. However, there is a problem: it is evident that while developing an application it is necessary to install libraries, as well as to list them in the requirements.txt file (in the case of Python).
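The "load the model from bytes without a file" part of the Kafka question can usually be solved with an in-memory buffer: most serializers accept file-like objects, not just paths. A sketch (pickle and a plain dict stand in for a real framework and model; torch.save/torch.load accept file-like objects the same way):

```python
import io
import pickle

# Round-trip a "model" as raw bytes, the way a Kafka record value travels.
# A dict stands in for a real model object here.
model = {"weights": [0.1, 0.2, 0.3], "bias": 0.5}

buf = io.BytesIO()
pickle.dump(model, buf)            # "producer" side: model -> bytes
payload = buf.getvalue()           # this is what you'd send as the record value

restored = pickle.load(io.BytesIO(payload))  # "consumer" side: bytes -> model
print(restored == model)  # True
```

The 1MB default message limit from the answer still applies, which is why the shared-filesystem-plus-URI pattern is usually preferred for anything but tiny models.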
For this I only see two alternatives using a Docker development environment:\n\nEnter the console of the container where the Flask application is running and use the pip install ... and pip freeze > requirements.txt commands.\nManually write the dependencies to the requirements.txt file and rebuild the containers.\n\nThe first option is a bit laborious, while the second is a bit \"dirty\". Is there any more suitable option than the two mentioned alternatives?\nEdit: I don't know if I'm asking something that doesn't make sense, but I'd appreciate it if someone could give me some guidance on what I'm trying to accomplish.","AnswerCount":4,"Available Count":2,"Score":0.049958375,"is_accepted":false,"ViewCount":239,"Q_Id":72236497,"Users Score":1,"Answer":"Install requirements in a virtualenv inside the container, in an externally mounted volume. Note that the virtualenv creation and installation should happen at container run time, NOT at image build time (because there is no mounted volume then).\nAssuming you are already mounting (not copying!) your project sources, you can keep it in a .\/.venv folder, which is a rather standard procedure.\nThen you work just as you would locally: issue the install once when setting up the project for the first time; requirements need not be reinstalled unless they change; you can keep the venv even if the container is rebuilt; restarting the app does not reinstall the requirements every time; etc.\nJust don't expect the virtualenv to be usable outside the container, e.g. by your IDE (but a bit of hacking with the site module would let you share the site-packages with a virtualenv for your machine).\n\nThis is a very different approach to how requirements are usually managed in production docker images, where sources and requirements are copied and installed at image build time.
So you'll probably need two very different Dockerfiles for production deployment and for local development, just as you already have different docker-compose.yml files.\nBut, if you wanted them both to be more similar, remember there is no harm in also using a virtualenv inside the production docker image, despite the trend of not doing so.","Q_Score":2,"Tags":"python,docker,flask,docker-compose,requirements.txt","A_Id":72314260,"CreationDate":"2022-05-14T00:28:00.000","Title":"How to deal with Python's requirements.txt while using a Docker development environment?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"Suppose I wrote a docker-compose.dev.yml file to set up the development environment of a Flask project (web application) using Docker. In docker-compose.dev.yml I have set up two services, one for the database and one to run the Flask application in debug mode (which allows me to make hot changes without having to recreate\/restart the containers). This allows everyone on the development team to use the same development environment very easily. However, there is a problem: it is evident that while developing an application it is necessary to install libraries, as well as to list them in the requirements.txt file (in the case of Python). For this I only see two alternatives using a Docker development environment:\n\nEnter the console of the container where the Flask application is running and use the pip install ... and pip freeze > requirements.txt commands.\nManually write the dependencies to the requirements.txt file and rebuild the containers.\n\nThe first option is a bit laborious, while the second is a bit \"dirty\".
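For option 1, the pip freeze step can also be done from inside Python, which is convenient inside a container console. A sketch using only the standard library (importlib.metadata, Python 3.8+); it pins names and versions the way pip freeze does, though without the VCS/editable-install special cases:

```python
from importlib import metadata

# List installed distributions in requirements.txt pin format,
# similar in spirit to `pip freeze` (plain name==version pins only).
lines = sorted(
    f"{dist.metadata['Name']}=={dist.version}"
    for dist in metadata.distributions()
    if dist.metadata["Name"]          # skip malformed/nameless metadata
)
print("\n".join(lines[:5]))           # first few pinned requirements
```

Writing `"\n".join(lines)` to a bind-mounted requirements.txt keeps the pin file on the host even though the install happened in the container.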
Is there any more suitable option than the two mentioned alternatives?\nEdit: I don't know if I'm asking something that doesn't make sense, but I'd appreciate it if someone could give me some guidance on what I'm trying to accomplish.","AnswerCount":4,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":239,"Q_Id":72236497,"Users Score":0,"Answer":"The second option is generally used in python environments. You just add new packages to requirements.txt and restart the container, which has a line with pip install -r requirements.txt in its dockerfile that does the installing.","Q_Score":2,"Tags":"python,docker,flask,docker-compose,requirements.txt","A_Id":72294372,"CreationDate":"2022-05-14T00:28:00.000","Title":"How to deal with Python's requirements.txt while using a Docker development environment?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I'm trying to start the interpreter in VS Code, and I want my script to be loaded so that I can experiment with it, like in python IDLE.\nI tried using the REPL; on the terminal it shows \"c:\\currentWorkingDir> & ~...\/Python3.10.exe\", it starts the interpreter, but the script is not loaded.\nAnother way: if I manually type in the terminal \"python -t script.py\", the interpreter loads the script, but if there is a prompt for input in it and I decide to stop it (Ctrl+C), it throws me out of the interpreter back to c:.\nIs there a way to load the current code in the interpreter and, if force-stopped, have it stay loaded, so I can do stuff?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":46,"Q_Id":72251341,"Users Score":0,"Answer":"In the terminal, \"python -i script_name.py\" is working.","Q_Score":0,"Tags":"python,visual-studio-code","A_Id":72251632,"CreationDate":"2022-05-15T18:58:00.000","Title":"vs code\/python
interpreter - problem with working with current script","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I recently have had trouble getting my command line to work in the Mac terminal. My knowledge isn't great, so I assume this is a simple fix, but as of a day ago I can't get any commands to work in the Mac terminal (I think I may have updated recently, which might have something to do with it). When I try to install a Python module by running the command \"pip3 install ...\" I get the error \"-bash: pip3: No such file or directory\". Before, this would work fine. I have been using bash without any issues, but when I open up the terminal I get this message: \"The default interactive shell is now zsh.\nTo update your account to use zsh, please run chsh -s \/bin\/zsh.\" but when I try to run that I get this error: \"-bash: chsh: No such file or directory\". Is the cause that I'm in the wrong directory? I've tried using the cd command and that works without an error, but none of the other commands do. Additionally, I also get this message when opening up the terminal: \"-bash: export: `\/Users\/nyname\/Library\/Python\/3.7\/bin:$PATH': not a valid identifier\" Any help would be appreciated, thanks!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":33,"Q_Id":72253128,"Users Score":0,"Answer":"That last error is suspicious. Did you modify your .bashrc or .bash_profile manually? Can you show the contents of those?
It looks like you've gotten the syntax for exporting a variable incorrect.","Q_Score":0,"Tags":"python,terminal","A_Id":72254146,"CreationDate":"2022-05-16T00:33:00.000","Title":"Having trouble getting command line to work in Mac Terminal","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"Every day, when I open an Ubuntu terminal and want to run a python project, I have to run previously export PYTHONPATH=$(pwd). Is there a way to avoid doing this every time I switch on my computer? Is there a way to set my PYTHONPATH permanently for that project?","AnswerCount":3,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":40,"Q_Id":72253327,"Users Score":3,"Answer":"Put the following line in your ~\/.bashrc file:\nexport PYTHONPATH=\/the\/location\/of\/the\/path","Q_Score":1,"Tags":"python,linux,pythonpath","A_Id":72253339,"CreationDate":"2022-05-16T01:18:00.000","Title":"Is there a way to set PYTHONPATH permanently?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"Every day, when I open an Ubuntu terminal and want to run a python project, I have to run previously export PYTHONPATH=$(pwd). Is there a way to avoid doing this every time I switch on my computer? Is there a way to set my PYTHONPATH permanently for that project?","AnswerCount":3,"Available Count":2,"Score":0.0665680765,"is_accepted":false,"ViewCount":40,"Q_Id":72253327,"Users Score":1,"Answer":"You should be able to set it permanently through the ~\/.bashrc file or ~\/.profile file for your user. 
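A quick way to verify the PYTHONPATH export works as intended: anything in PYTHONPATH is prepended to sys.path of every newly started interpreter. A sketch (the project path is a placeholder):

```python
import os
import subprocess
import sys

# Show that a PYTHONPATH entry lands on sys.path of a child interpreter --
# the same effect the ~/.bashrc export line makes permanent.
env = dict(os.environ, PYTHONPATH="/my/project")   # placeholder path
out = subprocess.check_output(
    [sys.executable, "-c", "import sys; print('/my/project' in sys.path)"],
    env=env,
)
print(out.decode().strip())  # True
```

After adding the export line, run `source ~/.bashrc` (or open a new terminal) so the current shell picks it up.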
Just enter the line you showed into either of those files.","Q_Score":1,"Tags":"python,linux,pythonpath","A_Id":72253349,"CreationDate":"2022-05-16T01:18:00.000","Title":"Is there a way to set PYTHONPATH permanently?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a problem when I install python modules using Cygwin on Windows. First, I installed anaconda. Secondly, I installed Cygwin. If I install any modules using the normal Windows Command Prompt, the library is installed in the anaconda directory and works perfectly. I'm trying to install pywin32 inside Cygwin itself. Every time I get this error:\n$ pip install pywin32\nERROR: Could not find a version that satisfies the requirement pywin32 (from versions: none)\nERROR: No matching distribution found for pywin32\nI can install this library easily via anaconda, but I get the above error when I try to install it inside Cygwin. It seems that some python modules can be installed smoothly inside Cygwin while others cannot. Any suggestions?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":20,"Q_Id":72261499,"Users Score":0,"Answer":"This is what I do:\n\nClose all Cygwin processes.\nExecute the Cygwin setup app.\nLeave all automatic updates selected.\nSearch for any desired but uninstalled packages under \"Full\", and select them.\nComplete the setup.\nOpen a Cygwin terminal.\nExecute python -V or python2 -V to confirm the version.
Use python or python2 as required below.\nExecute python -m pip … where … is a pip command.\nExecute python -m pip list to see what packages are installed.\nExecute python -m pip install -U pylint, for example.\n\nThere are a couple of restrictions:\n\nAlways install the package from Cygwin setup, if it is available, because that package will have Cygwin-specific changes.\nDo not use pip to install or update a package that can be installed by Cygwin setup, as the PyPI package may not include Cygwin-specific changes, and will not have the correct binary content for Cygwin.","Q_Score":0,"Tags":"python,python-3.x,cygwin,pywin32","A_Id":72276077,"CreationDate":"2022-05-16T15:12:00.000","Title":"Installing python modules inside Cygwin","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"The command: pyAI3.6\/Scripts\/activate\nThe results:\npyAI3.6\/Scripts\/activate: line 3: $'\\r': command not found\npyAI3.6\/Scripts\/activate: line 4: syntax error near unexpected token $'{\\r''\npyAI3.6\/Scripts\/activate: line 4: deactivate () {\nThe command pyAI3.6\/Scripts\/activate works perfectly on Windows.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":26,"Q_Id":72283902,"Users Score":0,"Answer":"Please use the command below to activate:\nsource \/bin\/activate","Q_Score":0,"Tags":"linux,python-3.6","A_Id":72283939,"CreationDate":"2022-05-18T06:02:00.000","Title":"Trying to run a venv of python 3.6 on linux as an interpreter, but even its activation in the terminal is not possible due to the following errors","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a script 'myscript.py',
I would like to run this script from my bash shell multiple times in parallel. The script takes input parameters.\nI've tried the following:\n$python myscript.py &\n$python myscript.py &\nand\n$python myscript.py &&\n$python myscript.py &&\nIs this even possible?\nThanks in advance.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":25,"Q_Id":72290763,"Users Score":0,"Answer":"You could try opening two separate terminal instances and then typing python myscript.py in each of them.","Q_Score":0,"Tags":"python,multithreading,parallel-processing,script","A_Id":72291029,"CreationDate":"2022-05-18T14:10:00.000","Title":"How can I run a python script multiple times in parallel?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"Can an Azure Logic App which is under a consumption plan call an Azure Function which is under an App Service plan with VNet integration?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":45,"Q_Id":72293330,"Users Score":0,"Answer":"Yes, you can. Depending on the size of your project, I'd recommend considering use of an APIM resource in between the Logic App and the Function App for abstraction and reduction of fragility. Not to mention APIM giving you the ability to support customer-level api-keys with different rate limits, etc.
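On the parallel-runs question: `$ python myscript.py &` does work in bash (each `&` backgrounds one copy), while `&&` does not parallelize anything; it chains commands sequentially and only runs the next one on success. The same fan-out can also be done from Python. A sketch (the `-c` payload stands in for myscript.py and its input parameters):

```python
import subprocess
import sys

# Launch the same command several times in parallel, then wait for all
# of them. Replace ["-c", ...] with ["myscript.py", "arg1"] for a real
# script; sys.executable avoids relying on what "python" resolves to.
procs = [
    subprocess.Popen([sys.executable, "-c", f"print('worker {i} done')"])
    for i in range(3)
]
exit_codes = [p.wait() for p in procs]   # block until every copy finishes
print(exit_codes)  # [0, 0, 0]
```

Because `Popen` does not block, all three interpreters run concurrently; `wait()` afterwards mirrors the shell's `wait` builtin.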
in the future.","Q_Score":0,"Tags":"python,azure,azure-functions,azure-logic-apps,azure-app-service-plans","A_Id":72368048,"CreationDate":"2022-05-18T17:08:00.000","Title":"Can an Azure Logic app which is under a consumption plan call an Azure Function which is under an app service plan with VNet integration","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"What if the containers (producer, consumer and Kafka) are on the same n\/w bridge?\nI am new to Kafka, just trying to run a simple producer and consumer example. I have a Docker container which produces messages and pushes them to Kafka (this works by declaring kafka:9092 as a bootstrap server, since my Docker container for Kafka is called kafka).\nDo I still need to declare inside and outside ports for Kafka? Can't the consumer listen to the same port as the producer?\nUsing kafka-python to send and receive messages.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":12,"Q_Id":72313249,"Users Score":0,"Answer":"Consumers and producers don't listen on ports, but as long as you have (at least) PLAINTEXT:\/\/kafka:9092 as the advertised listener, and listeners includes port 9092, then you don't necessarily need any other listener.\nHowever, if you add other brokers in the same network for replication, I'd strongly recommend using at least SASL_PLAINTEXT for the inter-broker communication.
That way all brokers in the same network \"trust\" each other as a cluster (and you can fine tune network traffic for replication, but that's not really needed for Docker)","Q_Score":0,"Tags":"python,docker,apache-kafka","A_Id":72314179,"CreationDate":"2022-05-20T03:24:00.000","Title":"Should we always have a KAFKA_LISTENERS (inside and outside) specified even if the producer and consumer are on the same n\/w?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I've got a new MacBook Pro with an M1 chip and it seems as if not all of the open source software has been converted to arm64.\nNevertheless, I try to compile missing tools (like numpy) manually from source, like in the good ole days. As I try to do it I have to install and uninstall a lot, and one tool is \"port\", which is working except for one thing: it complains all the time about libraries I don't use anymore (because some tools (like Eclipse) for example are not working with Python 3.10). Therefore I reverted to Python 3.9, but every time I want to install something with \"port\" (which it does), it complains about the \"old\" Python 3.10 libs. How could I get rid of these messages?\n\nWarning: Error parsing file\n\/opt\/local\/Library\/Frameworks\/Python.framework\/Versions\/3.10\/lib\/python3.10\/site-packages\/lxml\/html\/diff.cpython-310-darwin.so: Error opening or reading file","AnswerCount":1,"Available Count":1,"Score":-0.1973753202,"is_accepted":false,"ViewCount":102,"Q_Id":72328621,"Users Score":-1,"Answer":"I was in a sort of \"Dead-Lock\" because versions, libraries and executables were not consistent with Python, Eclipse, Python pip packages, etc., because some packages haven't been ported to arm64 yet.
For example numpy: I tried to compile it from source, which was possible, but it was still not working.\nThen I stumbled over a hint for a different problem where Rosetta was recommended for that specific problem. (I've never worked with Rosetta because most applications were running.)\nSo I duplicated the \"Terminal\" application and configured one for Rosetta, started it and installed Python 3.10 in it with all new packages, and started all executables from that terminal.\nAfter some fiddling, Eclipse started with Python and the packages which my application needed, like numpy.\n(And in addition to that, it seems as if it is very much faster than before.)","Q_Score":2,"Tags":"python,numpy,apple-m1,macports","A_Id":72329648,"CreationDate":"2022-05-21T10:15:00.000","Title":"Mac OS M1 Monterey 12.4: wrong python library with \"Port\"","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm building a testing suite using the Python Pytest library.\nMy challenge is that I want to run the tests on remote Windows machines without the overhead of deploying my Python code on those remote machines.\nCurrent solution:\nUsing Jenkins, I'm cloning the tests repository from Bitbucket to the remote machine, and then using a PowerShell command through WinRM to trigger the execution of the pytest script on the remote machine.\nDesired solution:\nThe pytest code\/repository will reside on a machine (local\/cloud) and will execute on remote Windows machines (possibly in parallel on multiple machines).\nI've investigated the paramiko\/fabric packages, but they both require the code to be present on the remote machines.\nHas anyone encountered a similar requirement or
implemented something similar?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":44,"Q_Id":72337410,"Users Score":0,"Answer":"You can try a pub-sub mechanism with AWS SSM.","Q_Score":0,"Tags":"python,pytest,remote-execution","A_Id":72337656,"CreationDate":"2022-05-22T12:02:00.000","Title":"Execution of python code on remote machines","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"ERROR:: --system is intended to be used for pre-existing Pipfile installation, not installation of specific packages. Aborting.\nI can't use pipenv on Ubuntu 18.04.\nHow can I fix it?","AnswerCount":2,"Available Count":1,"Score":-0.0996679946,"is_accepted":false,"ViewCount":34,"Q_Id":72345696,"Users Score":-1,"Answer":"You can use a Python virtual environment:\n\npython -m venv venv # Creates the virtual environment\nTo activate the virtual environment, run source venv\/bin\/activate\nThen you can install your packages using pip install lib\n\nTo deactivate the virtual environment, type deactivate","Q_Score":0,"Tags":"python,python-3.x,pipenv","A_Id":72345750,"CreationDate":"2022-05-23T08:49:00.000","Title":"I can't do anything with pipenv","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am doing a Heroku tutorial and web: python manage.py runserver 0.0.0.0:$PORT creates the error when I run heroku local or heroku local -p 5000 (or one of several more variants). However, web: python manage.py runserver 0.0.0.0:5000 works fine.
I suspect I am making a simple error with how to pass an environment variable into the Procfile.\nThe error message is: CommandError: \"0.0.0.0:$PORT\" is not a valid port number or address:port pair.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":22,"Q_Id":72369000,"Users Score":0,"Answer":"The problem was a difference between Windows and Linux. The solution is below for others' benefit.\nOn Linux, $PORT references the variable called PORT.\nOn Windows, we instead need %PORT%.\nHence, web: python manage.py runserver 0.0.0.0:%PORT% works for Procfile.windows. To run the Windows-specific Procfile, run heroku local web -f Procfile.windows, as the normal Procfile should be left with web: python manage.py runserver 0.0.0.0:$PORT so it works when deployed (as Heroku machines use Linux).","Q_Score":1,"Tags":"python,powershell,heroku,procfile","A_Id":72369638,"CreationDate":"2022-05-24T20:17:00.000","Title":"environment variable issue with procfile heroku","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I have a Python Azure Function which normally does the ETL process. My function also includes downloading files from an API to the temp directory and uploading the files to a container. I am getting the following error: [Errno 28] No space left on device. I had tried to check every possible place since it is a space issue; I think I have enough space in my storage account, and I had also restarted my function app to clear out my temp directory.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":67,"Q_Id":72386568,"Users Score":0,"Answer":"Azure Functions have a limit of 5GB data per session max. So even if your storage account can take unlimited data, an Azure Function will not be able to handle such huge data at a time.
So most probably this error comes from the function itself.","Q_Score":0,"Tags":"python,azure,azure-functions,azure-web-app-service","A_Id":72386601,"CreationDate":"2022-05-26T04:07:00.000","Title":"How to debug the Python Azure function Error : [Errno 28] No space left on device?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a running Python file \"cepusender\/main.py\" (and other Python files); how can I restart\/kill only the main.py file?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":29,"Q_Id":72410838,"Users Score":0,"Answer":"kill is the command to send signals to processes.\nYou can use kill -9 PID to kill your python process, where 9 is the number for SIGKILL and PID is the python Process ID.","Q_Score":1,"Tags":"python,linux,ubuntu,script,terminate","A_Id":72410937,"CreationDate":"2022-05-27T20:37:00.000","Title":"How to restart specific running python file Ubuntu","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"Is it possible to give an application read and write permissions to folders and files while it works, without giving the users who use the application these permissions?\nI am writing software in Python which works with some files. I want to ensure that these files can only be manipulated by the software.
So my thought was to create a folder without read and write rights, allowing only the application to read and write in this folder.\nIs this possible?","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":27,"Q_Id":72459498,"Users Score":2,"Answer":"Not sure if this helps, but what I would try to do is set the application to create a local service account with admin rights first; then, when the application relaunches, set it to run as that service account. Any folders created will then be created via that service account, and therefore you can create a folder and modify its permissions so it is only managed by that service account. Brief steps below:\n\nthe application launches and checks if the service account exists; if not, it creates the service account and assigns it to the relevant admin group; once complete, it re-launches as that service account\n\nonce launched, create the relevant directory\n\nmodify the directory's settings so all users are removed except the service account\n\n\nI think the above might work, good luck!","Q_Score":3,"Tags":"python,security,permissions,windows-10","A_Id":72461232,"CreationDate":"2022-06-01T09:30:00.000","Title":"How to give applications (not users itself) permission to folder and files","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"When I run my code in Python or Colaboratory I get the following error. I installed all the Apache Beam libraries; has anybody gotten this error at some point or knows about it?\nusage: aaa_users_py.py [-h] [--runner RUNNER] [--streaming] [--resource_hint RESOURCE_HINTS] [--beam_services BEAM_SERVICES]\n[--type_check_strictness {ALL_REQUIRED,DEFAULT_TO_ANY}] [--type_check_additional TYPE_CHECK_ADDITIONAL]\n[--no_pipeline_type_check] [--runtime_type_check]
[--performance_runtime_type_check]\n[--allow_non_deterministic_key_coders] [--allow_unsafe_triggers]\n[--no_direct_runner_use_stacked_bundle] [--direct_runner_bundle_repeat DIRECT_RUNNER_BUNDLE_REPEAT]\n[--direct_num_workers DIRECT_NUM_WORKERS]\n[--direct_running_mode {in_memory,multi_threading,multi_processing}]\n[--dataflow_endpoint DATAFLOW_ENDPOINT] [--project PROJECT] [--job_name JOB_NAME]\n[--staging_location STAGING_LOCATION] [--temp_location TEMP_LOCATION] [--region REGION]\n[--service_account_email SERVICE_ACCOUNT_EMAIL] [--no_auth] [--template_location TEMPLATE_LOCATION]\n[--label LABELS] [--update] [--transform_name_mapping TRANSFORM_NAME_MAPPING]\n[--enable_streaming_engine] [--dataflow_kms_key DATAFLOW_KMS_KEY]\n[--create_from_snapshot CREATE_FROM_SNAPSHOT] [--flexrs_goal {COST_OPTIMIZED,SPEED_OPTIMIZED}]\n[--dataflow_service_option DATAFLOW_SERVICE_OPTIONS] [--enable_hot_key_logging]\n[--enable_artifact_caching] [--impersonate_service_account IMPERSONATE_SERVICE_ACCOUNT]\n[--hdfs_host HDFS_HOST] [--hdfs_port HDFS_PORT] [--hdfs_user HDFS_USER] [--hdfs_full_urls]\n[--num_workers NUM_WORKERS] [--max_num_workers MAX_NUM_WORKERS]\n[--autoscaling_algorithm {NONE,THROUGHPUT_BASED}] [--worker_machine_type MACHINE_TYPE]\n[--disk_size_gb DISK_SIZE_GB] [--worker_disk_type DISK_TYPE] [--worker_region WORKER_REGION]\n[--worker_zone WORKER_ZONE] [--zone ZONE] [--network NETWORK] [--subnetwork SUBNETWORK]\n[--worker_harness_container_image WORKER_HARNESS_CONTAINER_IMAGE]\n[--sdk_container_image SDK_CONTAINER_IMAGE]\n[--sdk_harness_container_image_overrides SDK_HARNESS_CONTAINER_IMAGE_OVERRIDES] [--use_public_ips]\n[--no_use_public_ips] [--min_cpu_platform MIN_CPU_PLATFORM] [--dataflow_worker_jar DATAFLOW_WORKER_JAR]\n[--dataflow_job_file DATAFLOW_JOB_FILE] [--experiment EXPERIMENTS]\n[--number_of_worker_harness_threads NUMBER_OF_WORKER_HARNESS_THREADS] [--profile_cpu]\n[--profile_memory] [--profile_location PROFILE_LOCATION] [--profile_sample_rate 
PROFILE_SAMPLE_RATE]\n[--requirements_file REQUIREMENTS_FILE] [--requirements_cache REQUIREMENTS_CACHE]\n[--requirements_cache_only_sources] [--setup_file SETUP_FILE] [--beam_plugin BEAM_PLUGINS]\n[--pickle_library {cloudpickle,default,dill}] [--save_main_session] [--sdk_location SDK_LOCATION]\n[--extra_package EXTRA_PACKAGES] [--prebuild_sdk_container_engine PREBUILD_SDK_CONTAINER_ENGINE]\n[--prebuild_sdk_container_base_image PREBUILD_SDK_CONTAINER_BASE_IMAGE]\n[--cloud_build_machine_type CLOUD_BUILD_MACHINE_TYPE]\n[--docker_registry_push_url DOCKER_REGISTRY_PUSH_URL] [--job_endpoint JOB_ENDPOINT]\n[--artifact_endpoint ARTIFACT_ENDPOINT] [--job_server_timeout JOB_SERVER_TIMEOUT]\n[--environment_type ENVIRONMENT_TYPE] [--environment_config ENVIRONMENT_CONFIG]\n[--environment_option ENVIRONMENT_OPTIONS] [--sdk_worker_parallelism SDK_WORKER_PARALLELISM]\n[--environment_cache_millis ENVIRONMENT_CACHE_MILLIS] [--output_executable_path OUTPUT_EXECUTABLE_PATH]\n[--artifacts_dir ARTIFACTS_DIR] [--job_port JOB_PORT] [--artifact_port ARTIFACT_PORT]\n[--expansion_port EXPANSION_PORT] [--job_server_java_launcher JOB_SERVER_JAVA_LAUNCHER]\n[--job_server_jvm_properties JOB_SERVER_JVM_PROPERTIES] [--flink_master FLINK_MASTER]\n[--flink_version {1.12,1.13,1.14}] [--flink_job_server_jar FLINK_JOB_SERVER_JAR]\n[--flink_submit_uber_jar] [--spark_master_url SPARK_MASTER_URL]\n[--spark_job_server_jar SPARK_JOB_SERVER_JAR] [--spark_submit_uber_jar]\n[--spark_rest_url SPARK_REST_URL] [--spark_version {2,3}] [--on_success_matcher ON_SUCCESS_MATCHER]\n[--dry_run DRY_RUN] [--wait_until_finish_duration WAIT_UNTIL_FINISH_DURATION]\n[--pubsub_root_url PUBSUBROOTURL] [--s3_access_key_id S3_ACCESS_KEY_ID]\n[--s3_secret_access_key S3_SECRET_ACCESS_KEY] [--s3_session_token S3_SESSION_TOKEN]\n[--s3_endpoint_url S3_ENDPOINT_URL] [--s3_region_name S3_REGION_NAME] [--s3_api_version S3_API_VERSION]\n[--s3_verify S3_VERIFY] [--s3_disable_ssl] [--input-file INPUT_FILE] --output-path 
OUTPUT_PATH\naaa_users_py.py: error: the following arguments are required: --output-path","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":10,"Q_Id":72480505,"Users Score":0,"Answer":"It probably means that the pipeline built in this Python script has a required customized pipeline option that includes a field named --output-path. Think of it as a \"template\" that spawns jobs which ETL data from the --input-path to the --output-path; you have to tell the pipeline where to read and write before submitting it as a job.","Q_Score":0,"Tags":"python,pipeline,apache-beam","A_Id":72482201,"CreationDate":"2022-06-02T18:01:00.000","Title":"Apache_beam--python --error: the following arguments are required: --output-path","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"WARNING: Ignoring invalid distribution -yproj (c:\\users\\space_junk\\appdata\\local\\programs\\python\\python310\\lib\\site-packages)\nWARNING: Ignoring invalid distribution -yproj (c:\\users\\space_junk\\appdata\\local\\programs\\python\\python310\\lib\\site-packages)\nWARNING: Ignoring invalid distribution -yproj (c:\\users\\space_junk\\appdata\\local\\programs\\python\\python310\\lib\\site-packages)","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":3284,"Q_Id":72547834,"Users Score":1,"Answer":"I had the same problem with matplotlib. It looked like I wanted to install a package from some sort of unauthorized source.\nYou should delete the folder that caused the problem in the site-packages folder.
In your case, it is ~yproj (in my case, it was ~atplotlib).\nIn short:\n\nFind the site-packages folder -> Type \"pip show pyproj\" or any other library you want!\nDelete the folder mentioned in the warning: ~yproj in your case","Q_Score":2,"Tags":"python,geopandas,torch,fiona,osgeo","A_Id":74625588,"CreationDate":"2022-06-08T14:49:00.000","Title":"why do I receive these errors \"WARNING: Ignoring invalid distribution -yproj \" while installing any python module in cmd","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"WARNING: Ignoring invalid distribution -yproj (c:\\users\\space_junk\\appdata\\local\\programs\\python\\python310\\lib\\site-packages)\nWARNING: Ignoring invalid distribution -yproj (c:\\users\\space_junk\\appdata\\local\\programs\\python\\python310\\lib\\site-packages)\nWARNING: Ignoring invalid distribution -yproj (c:\\users\\space_junk\\appdata\\local\\programs\\python\\python310\\lib\\site-packages)","AnswerCount":2,"Available Count":2,"Score":1.0,"is_accepted":false,"ViewCount":3284,"Q_Id":72547834,"Users Score":8,"Answer":"I was getting a similar message that turned out to be caused by a previous failed pip upgrade. I had attempted to upgrade pip from a user account that didn't have the proper rights. There was a temp directory left behind in site-packages that began with ~ip which was causing pip to complain every time it ran. I removed the directory and was able to re-upgrade pip using an account that had proper permissions. No more warnings from pip.\nDid you have a problem installing something like pyproj by any chance?
The temp directory seems to be named by replacing the first letter of the library with a ~.","Q_Score":2,"Tags":"python,geopandas,torch,fiona,osgeo","A_Id":72622986,"CreationDate":"2022-06-08T14:49:00.000","Title":"why do I receive these errors \"WARNING: Ignoring invalid distribution -yproj \" while installing any python module in cmd","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm not sure if node.js 18 supports centos 7 and is it a requirement to install python 3 for node.js 18?","AnswerCount":4,"Available Count":3,"Score":-0.049958375,"is_accepted":false,"ViewCount":5884,"Q_Id":72571235,"Users Score":-1,"Answer":"I am sure you can install Node.js 18 on CentOS 7.\nRegarding the Python requirement:\nYes, you
will need Python installed; Node.js uses some Python code. Python is required for building Node from source.","Q_Score":4,"Tags":"python,node.js,linux,centos","A_Id":72572197,"CreationDate":"2022-06-10T08:26:00.000","Title":"Can I install node.js 18 on Centos 7 and do I need python 3 install too?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm not sure if node.js 18 supports centos 7 and is it a requirement to install python 3 for node.js 18?","AnswerCount":4,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":5884,"Q_Id":72571235,"Users Score":0,"Answer":"getting error\n\n Loaded plugins: fastestmirror\n Loading mirror speeds from cached hostfile\n * epel: mirror.sabay.com.kh\n Resolving Dependencies\n --> Running transaction check\n ---> Package nodejs.x86_64 1:16.18.1-3.el7 will be updated\n ---> Package nodejs.x86_64 2:18.14.0-1nodesource will be an update\n --> Processing Dependency: libc.so.6(GLIBC_2.28)(64bit) for package: 2:nodejs-18.14.0-1nodesource.x86_64\n --> Processing Dependency: libm.so.6(GLIBC_2.27)(64bit) for package: 2:nodejs-18.14.0-1nodesource.x86_64\n --> Finished Dependency Resolution\n Error: Package: 2:nodejs-18.14.0-1nodesource.x86_64 (nodesource)\n Requires: libc.so.6(GLIBC_2.28)(64bit)\n Error: Package: 2:nodejs-18.14.0-1nodesource.x86_64 (nodesource)\n Requires: libm.so.6(GLIBC_2.27)(64bit)\n You could try using --skip-broken to work around the problem\n You could try running: rpm -Va --nofiles --nodigest","Q_Score":4,"Tags":"python,node.js,linux,centos","A_Id":75339544,"CreationDate":"2022-06-10T08:26:00.000","Title":"Can I install node.js 18 on Centos 7 and do I need python 3 install too?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and
Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm not sure if node.js 18 supports centos 7 and is it a requirement to install python 3 for node.js 18?","AnswerCount":4,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":5884,"Q_Id":72571235,"Users Score":0,"Answer":"Yes you can,\nbut you have to solve related issues (upgrading make, gcc, glibc, Python versions, etc.)","Q_Score":4,"Tags":"python,node.js,linux,centos","A_Id":76285567,"CreationDate":"2022-06-10T08:26:00.000","Title":"Can I install node.js 18 on Centos 7 and do I need python 3 install too?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I recently came across the Google Coral dev board mini in search of an ML microcontroller platform for a robotics and speech recognition project. I realized that there are minimal tutorials on creating projects from scratch for the dev board mini, but a ton of example projects. The problem with these example projects is that they get imported through a git clone in the Mendel Linux terminal, which doesn't really tell me how to create my own project and where to compile and code it. To make things more clear I will use the ESP32 dev board as an example:\nTo write a program (C++) on an ESP32 dev board that controls the I\/O pins, I used PlatformIO to compile and flash the microcontroller. What IDE can be used to perform the same functionality on the Google Coral dev board mini? Does there exist an article about this?\nSorry if my question seems obvious, but I feel that I spent too much time searching for the solution. Thanks in advance!
:)","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":89,"Q_Id":72573384,"Users Score":0,"Answer":"The Google Coral dev board mini is a single board computer running a full Linux operating system, whereas the ESP32 is a microcontroller which does not run an OS; it is only flashed with a single C++ program. The difference is the same as between a Raspberry Pi and an Arduino. Since the Coral dev board has a full OS, you can plug a monitor, mouse, and keyboard into it and develop code directly on it. Or you can use your PC to ssh into the Coral board to copy files over and remotely run commands. Through these ways you can use any IDE you want.","Q_Score":0,"Tags":"python,google-coral","A_Id":73831913,"CreationDate":"2022-06-10T11:14:00.000","Title":"How does one run a blink LED program on Google-Coral dev board mini and what IDE is recommended?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm working on a backend microservice, using the built-in Python logging library to monitor what is happening in my Flask app, and using Gunicorn to spin up multiple workers with it.\nIdeally I would like to create a parent logger (logging.getLogger('Service')), and each worker would use a child logger (logging.getLogger('Service.Worker1'), logging.getLogger('Service.Worker2'), etc.). However, I'm not sure if I have a way to differentiate worker processes.\nCan I somehow pass different arguments to each new Gunicorn worker?\nRight now, when I'm using the same logger in each worker (logging.getLogger('Service') for all), I have no way to tell which worker created the log.
This might not be an issue, but if I'm using a logging.FileHandler then separate processes may collide.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":621,"Q_Id":72576891,"Users Score":1,"Answer":"One workaround that I came up with is simply using os.getpid(), which will be unique in each worker. While this does the job and allows me to create unique names for logging, I'm still wondering if there is a better way.","Q_Score":2,"Tags":"python,logging,gunicorn","A_Id":72576942,"CreationDate":"2022-06-10T15:44:00.000","Title":"Logging from multiple workers in Gunicorn","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I wrote a python script to run some simple tasks and this issue has been happening randomly at times.\nUpon clicking on the .py file, a cmd prompt window appears for some microseconds and closes immediately without showing any text.\nAt first I thought this issue was because the code finished running too fast, but the code doesn't actually run. I know this because the code involves sending a text message on Discord through the requests module, and I can see after running that no text has been sent.\nIf it were the issue assumed earlier, I would've just added some input for the program to receive, but my program has an infinite while loop in it, which should already be enough to keep the cmd window open. Hence I don't understand what's causing this.\nThe last time it happened I somehow found a solution which I followed step by step and was able to resolve it, but it's happening again now and I can't find that solution again, unfortunately.
From vague memory, I recall the solution involved running some commands in the Windows terminal.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":29,"Q_Id":72584829,"Users Score":0,"Answer":"Are you double clicking the .py file? If so, it may not work in some situations. What you should do is open up your cmd and type the following.\nC:\\Users\\Admin> python filename.py\nYou can replace filename with your file's name.\nIf you are using a Mac or any Apple device, you will need to replace python with python3.","Q_Score":0,"Tags":"python","A_Id":72584863,"CreationDate":"2022-06-11T13:40:00.000","Title":"Python script closes without even running","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a python script program.py and I wanted to have access to it everywhere in the system. Therefore, I used chmod +x and used a hard link to put it in \/somewhere\/bin\/.\nThis successfully made program.py executable anywhere, but at the same time I lost access to the original directory where the program resides: \/original\/dir\/program.py.\nThere is a configuration file in the original directory, in a folder: \/original\/dir\/configurations\/cfg.txt, and I want to also access it everywhere in the system, for example: program.py configurations\/cfg.txt.
How can I achieve this?","AnswerCount":4,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":30,"Q_Id":72596533,"Users Score":0,"Answer":"You can use .bashrc to change your path by adding this line in your .bashrc file:\nexport PATH=$PATH:\/original\/dir:\/original\/dir\/configurations","Q_Score":0,"Tags":"python-3.x,linux,bash","A_Id":72598671,"CreationDate":"2022-06-12T23:19:00.000","Title":"Making the original linked executable directory accessible","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am using self-hosted and Microsoft-hosted agents to run the pipeline.\nOne of the pipeline steps is to install certain Python packages on the agent so that the project unit tests can then be executed.\nDoes the agent retain the installed packages, or is a clean slate given to each pipeline?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":194,"Q_Id":72598937,"Users Score":1,"Answer":"Microsoft-hosted agent: No, it's a clean slate for every job.\nSelf-hosted: That depends on how you configure your agent.
But assuming it's just a single VM, then yes, what you install\/cache\/etc. on that agent will still be available for the next job to use.\nBe careful, however, as this can of course also have unintended consequences if left-over files mess up a subsequent job.","Q_Score":0,"Tags":"python,azure-devops,azure-pipelines,azure-devops-self-hosted-agent","A_Id":72599073,"CreationDate":"2022-06-13T07:13:00.000","Title":"Does the self-hosted agent retain installations made via pipeline?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"So one of our servers has v2.6 installed. I\u2019ve checked, and most of the scripts that run on this server are Unix shell scripts. There are a few Python scripts, but I\u2019m not sure if they\u2019re still being used. They are not in cron, and I don\u2019t know who uses them.\nNow I want to install another version, which is v3.10. 
In short, there will be two versions on the server \u2014 v2.6 and 3.10.\nMy question is \u2014 is there a chance that those scripts running on v2.6 will encounter any issues once we install v3.10?","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":27,"Q_Id":72600473,"Users Score":2,"Answer":"If these scripts explicitly point to your python 2.x interpreter (with a shebang), then no, you won't have any issues.\nHowever, if your question is: 'will my scripts written for python 2.x run without issue under python 3.10', then the answer is: it depends.\nSome Python 2 code will run perfectly fine with Python 3.\nNote that even if you install python3.10 on, let's say, Ubuntu, python will still refer to your python2 installation by default.","Q_Score":1,"Tags":"python,python-3.x","A_Id":72600623,"CreationDate":"2022-06-13T09:21:00.000","Title":"Install 2 python versions on linux","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"My project is in container A and has a function to restore data, which may need to send files to container B.\nOr should I not do it like this?","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":172,"Q_Id":72610869,"Users Score":0,"Answer":"Try using docker-compose and a shared volume between your containers.","Q_Score":0,"Tags":"python,docker","A_Id":72610887,"CreationDate":"2022-06-14T02:21:00.000","Title":"Is there any method to transfer files between docker containers using python scripts?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"My project is in container A and has a function to restore data, which may need to send files to container 
B.\nOr should I not do it like this?","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":172,"Q_Id":72610869,"Users Score":0,"Answer":"Try using the volume option when you run containers A & B:\ndocker run -t -v \/shared\/path\/on\/host:\/data\/path\/in\/container --name containerA imagename:tag\ndocker run -t -v \/shared\/path\/on\/host:\/data\/path\/in\/container --name containerB imagename:tag\nWhen container A restores a new file into the shared host path, container B can access and use it.","Q_Score":0,"Tags":"python,docker","A_Id":72632268,"CreationDate":"2022-06-14T02:21:00.000","Title":"Is there any method to transfer files between docker containers using python scripts?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm using a miniforge environment on an M1 Mac, and am unable to import psutil:\nImportError: dlopen(\/Users\/caspsea\/miniforge3\/lib\/python3.9\/site-packages\/psutil\/_psutil_osx.cpython-39-darwin.so, 0x0002): tried: '\/Users\/caspsea\/miniforge3\/lib\/python3.9\/site-packages\/psutil\/_psutil_osx.cpython-39-darwin.so' (mach-o file, but is an incompatible architecture (have 'x86_64', need 'arm64e')), '\/usr\/local\/lib\/_psutil_osx.cpython-39-darwin.so' (no such file), '\/usr\/lib\/_psutil_osx.cpython-39-darwin.so' (no such file)\nI tried uninstalling and reinstalling using pip, but that did not work. 
I'm using Python 3.9 on macOS Monterey 12.2.1.","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":4004,"Q_Id":72619143,"Users Score":7,"Answer":"Have you tried:\npip uninstall psutil\nfollowed by:\npip install --no-binary :all: psutil","Q_Score":3,"Tags":"python,apple-m1,psutil","A_Id":72619209,"CreationDate":"2022-06-14T14:54:00.000","Title":"Unable to import psutil on M1 mac with miniforge: (mach-o file, but is an incompatible architecture (have 'x86_64', need 'arm64e'))","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have many FastApi apps that are running in kubernetes. All of them use some common kubernetes functionalities like liveness and readiness probes, and discovering neighbour pods using ordinals (this is my next challenge). There is some logic in the code that I need to implement, but in general many parts of the code stay the same, like:\n\ncreating routes for liveness and readiness probes\nsending requests to different ordinals of the statefulset to find neighbours, and implementing the endpoints for those requests.\n\nIs there a library that I can use in my python\/FastApi code to implement generic features that are available in Kubernetes?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":814,"Q_Id":72621698,"Users Score":1,"Answer":"What specific feature of K8S do you want to implement in your FastAPI apps? Liveness and readiness endpoints are easy (as in, they are endpoints you can define in FastAPI and then declare in the YAML definition of your pod).\nMy understanding is that you want pods in a StatefulSet to communicate with each other, but you would need information from K8S to do so. E.g. you want FastAPI-Pod-1 to know it is pod 1 out of, let's say, 4. 
I would recommend the Downward API that K8S offers and build your logic around that (e.g. read pod information from environment variables): kubernetes.io\/docs\/tasks\/inject-data-application\/\u2026 I do not know of any standard framework that implements any of this logic for you.","Q_Score":1,"Tags":"python,kubernetes,fastapi,kubernetes-statefulset","A_Id":72666305,"CreationDate":"2022-06-14T18:18:00.000","Title":"Kubernetes functionalities for fastapi","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I get the following error when trying to import the librosa library into my python project and running it in the global python environment:\n\nTraceback (most recent call last): File\n\"\/Library\/Frameworks\/Python.framework\/Versions\/3.9\/lib\/python3.9\/site-packages\/soundfile.py\",\nline 142, in \nraise OSError('sndfile library not found') OSError: sndfile library not found\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last): File\n\"Bloompipe\/Synthesis_Module\/bloompipe_synthesis\/testSynthesis.py\",\nline 6, in \nfrom LSD.lucidsonicdreams import LucidSonicDream File \"Bloompipe\/Synthesis_Module\/bloompipe_synthesis\/LSD\/lucidsonicdreams\/init.py\",\nline 1, in \nfrom .main import * File \"Bloompipe\/Synthesis_Module\/bloompipe_synthesis\/LSD\/lucidsonicdreams\/main.py\",\nline 15, in \nfrom .AudioAnalyse import * File \"Bloompipe\/Synthesis_Module\/bloompipe_synthesis\/LSD\/lucidsonicdreams\/AudioAnalyse.py\",\nline 3, in \nimport librosa.display File \"\/Library\/Frameworks\/Python.framework\/Versions\/3.9\/lib\/python3.9\/site-packages\/librosa\/init.py\",\nline 209, in \nfrom . 
import core File \"\/Library\/Frameworks\/Python.framework\/Versions\/3.9\/lib\/python3.9\/site-packages\/librosa\/core\/init.py\",\nline 6, in \nfrom .audio import * # pylint: disable=wildcard-import File \"\/Library\/Frameworks\/Python.framework\/Versions\/3.9\/lib\/python3.9\/site-packages\/librosa\/core\/audio.py\",\nline 8, in \nimport soundfile as sf File \"\/Library\/Frameworks\/Python.framework\/Versions\/3.9\/lib\/python3.9\/site-packages\/soundfile.py\",\nline 162, in \n_snd = _ffi.dlopen(_os.path.join( OSError: cannot load library '\/Library\/Frameworks\/Python.framework\/Versions\/3.9\/lib\/python3.9\/site-packages\/_soundfile_data\/libsndfile.dylib':\ndlopen(\/Library\/Frameworks\/Python.framework\/Versions\/3.9\/lib\/python3.9\/site-packages\/_soundfile_data\/libsndfile.dylib,\n0x0002): tried:\n'\/Library\/Frameworks\/Python.framework\/Versions\/3.9\/lib\/python3.9\/site-packages\/_soundfile_data\/libsndfile.dylib'\n(no such file)\nProcess finished with exit code 1\n\nI installed the libsndfile library with homebrew and also for a virtual conda environment. 
When trying to run the program in the conda environment it produces the following error:\n\nTraceback (most recent call last): File\n\".conda\/envs\/bloompipe_synthesis\/lib\/python3.9\/site-packages\/soundfile.py\",\nline 143, in \n_snd = _ffi.dlopen(_libname) OSError: cannot load library '.conda\/envs\/bloompipe_synthesis\/bin\/..\/lib\/libsndfile.dylib':\ndlopen(.conda\/envs\/bloompipe_synthesis\/bin\/..\/lib\/libsndfile.dylib,\n0x0002): Library not loaded: @rpath\/libvorbis.0.4.9.dylib Referenced\nfrom:\n.conda\/envs\/bloompipe_synthesis\/lib\/libsndfile.1.0.31.dylib\nReason: tried:\n'.conda\/envs\/bloompipe_synthesis\/lib\/libvorbis.0.4.9.dylib'\n(no such file),\n'.conda\/envs\/bloompipe_synthesis\/lib\/libvorbis.0.4.9.dylib'\n(no such file),\n'.conda\/envs\/bloompipe_synthesis\/lib\/libvorbis.0.4.9.dylib'\n(no such file),\n'.conda\/envs\/bloompipe_synthesis\/lib\/libvorbis.0.4.9.dylib'\n(no such file),\n'.conda\/envs\/bloompipe_synthesis\/lib\/python3.9\/site-packages\/..\/..\/libvorbis.0.4.9.dylib'\n(no such file),\n'.conda\/envs\/bloompipe_synthesis\/lib\/libvorbis.0.4.9.dylib'\n(no such file),\n'.conda\/envs\/bloompipe_synthesis\/bin\/..\/lib\/libvorbis.0.4.9.dylib'\n(no such file), '\/usr\/local\/lib\/libvorbis.0.4.9.dylib' (no such file),\n'\/usr\/lib\/libvorbis.0.4.9.dylib' (no such file)\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last): File\n\"Bloompipe\/Synthesis_Module\/bloompipe_synthesis\/testSynthesis.py\",\nline 6, in \nfrom LSD.lucidsonicdreams import LucidSonicDream File \"Bloompipe\/Synthesis_Module\/bloompipe_synthesis\/LSD\/lucidsonicdreams\/init.py\",\nline 1, in \nfrom .main import * File \"Bloompipe\/Synthesis_Module\/bloompipe_synthesis\/LSD\/lucidsonicdreams\/main.py\",\nline 15, in \nfrom .AudioAnalyse import * File \"Bloompipe\/Synthesis_Module\/bloompipe_synthesis\/LSD\/lucidsonicdreams\/AudioAnalyse.py\",\nline 3, in \nimport librosa.display File 
\".conda\/envs\/bloompipe_synthesis\/lib\/python3.9\/site-packages\/librosa\/init.py\",\nline 209, in \nfrom . import core File \".conda\/envs\/bloompipe_synthesis\/lib\/python3.9\/site-packages\/librosa\/core\/init.py\",\nline 6, in \nfrom .audio import * # pylint: disable=wildcard-import File \".conda\/envs\/bloompipe_synthesis\/lib\/python3.9\/site-packages\/librosa\/core\/audio.py\",\nline 8, in \nimport soundfile as sf File \".conda\/envs\/bloompipe_synthesis\/lib\/python3.9\/site-packages\/soundfile.py\",\nline 162, in \n_snd = _ffi.dlopen(_os.path.join( OSError: cannot load library '.conda\/envs\/bloompipe_synthesis\/lib\/python3.9\/site-packages\/_soundfile_data\/libsndfile.dylib':\ndlopen(.conda\/envs\/bloompipe_synthesis\/lib\/python3.9\/site-packages\/_soundfile_data\/libsndfile.dylib,\n0x0002): tried:\n'.conda\/envs\/bloompipe_synthesis\/lib\/python3.9\/site-packages\/_soundfile_data\/libsndfile.dylib'\n(no such file)\nProcess finished with exit code 1\n\nThe thing is that in both cases it is looking for the .dylib files in the wrong directories. My homebrew installation is in \/opt\/homebrew\/lib and has the files libsndfile.dylib and libsndfile.1.dylib in it and also the libvorbis.dylib file. When trying to run on the global python environment it is looking for those files in \/Library\/Frameworks\/Python.framework\/Versions\/3.9\/lib\/python3.9\/site-packages\/_soundfile_data\/ though.\nMy conda installation is in \/opt\/anaconda3\/lib and has the files libsndfile.dylib, libsndfile.1.0.31.dylib and libsndfile.1.dylib in it and also the libvorbis.dylib and libvorbis.0.4.9.dylib file. When trying to run on the conda python environment it is looking for those files in .conda\/envs\/bloompipe_synthesis\/lib\/python3.9\/site-packages\/_soundfile_data\/.\nIn both cases when looking in those site-packages directories, the _soundfile_data folder doesn't exist even when activating the hidden files. 
I don't know why that doesn't exist.\nI tried executing:\n\nexport CPATH=\/opt\/homebrew\/include\nexport LIBRARY_PATH=\/opt\/homebrew\/lib\nexport PYTHONPATH=\/opt\/homebrew\/lib\n\nTo include the paths into the python path when running\nThen I printed the path variables with import sys and print(sys.path), this was the output for my global python:\n\n['Bloompipe\/Synthesis_Module\/bloompipe_synthesis',\n'Bloompipe\/Synthesis_Module\/bloompipe_synthesis',\n'\/Library\/Frameworks\/Python.framework\/Versions\/3.9\/lib\/python39.zip',\n'\/Library\/Frameworks\/Python.framework\/Versions\/3.9\/lib\/python3.9',\n'\/Library\/Frameworks\/Python.framework\/Versions\/3.9\/lib\/python3.9\/lib-dynload',\n'\/Library\/Frameworks\/Python.framework\/Versions\/3.9\/lib\/python3.9\/site-packages',\n'opt\/homebrew\/lib']\n\nAnd for the conda environment I tried:\n\nconda develop .conda\/envs\/bloompipe_synthesis\/lib\nconda develop \/opt\/homebrew\/lib\nconda develop \/opt\/anaconda3\/lib\n\nThen the sys.path output is:\n\n['Bloompipe\/Synthesis_Module\/bloompipe_synthesis',\n'.conda\/envs\/bloompipe_synthesis\/lib\/python39.zip',\n'.conda\/envs\/bloompipe_synthesis\/lib\/python3.9',\n'.conda\/envs\/bloompipe_synthesis\/lib\/python3.9\/lib-dynload',\n'.conda\/envs\/bloompipe_synthesis\/lib\/python3.9\/site-packages',\n'.conda\/envs\/bloompipe_synthesis\/lib',\n'\/opt\/homebrew\/lib',\n'\/opt\/anaconda3\/lib']\n\nWeirdly, python is still not looking in those directories when executing the librosa import.\nFinally, I tried adding the path to the homebrew installation manually by putting sys.path.append(\"\/opt\/homebrew\/lib\") in the beginning of the python file. 
It still produces the exact same errors.\nSo my question is: why does the _soundfile_data directory not exist in my site-packages folders for the global python and the conda environment, and why doesn't it include the .dylib files for libsndfile?\nSecondly, why do:\n\nexport LIBRARY_PATH=\/opt\/homebrew\/lib\nexport PYTHONPATH=\/opt\/homebrew\/lib\n\nnot make those paths appear when printing the sys.path content?\nThirdly, why does python not find the libsndfile.dylib files in the conda environment, even though I added the homebrew and the conda installations of libsndfile to the sys path with the conda develop command?\nMy python3.9 is installed in \/usr\/local\/bin\/python3.9 and my conda python3.9 environment is installed in \/.conda\/envs\/bloompipe_synthesis\/bin\/python\nI'm on a new Mac with macOS Monterey.\nAny help is greatly appreciated!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":664,"Q_Id":72623930,"Users Score":0,"Answer":"As far as I know, lucidsonicdreams only works with Python 3.6 and 3.7, although I didn't have success on 3.6.\nI had to create a virtual environment through conda and run the code through a Jupyter notebook. 
Run conda install tensorflow==1.15 (it will not work with higher versions) and python==3.7, then pip install lucidsonicdreams in your new Python 3.7 environment.\nMake sure module versions line up with your Nvidia CUDA drivers, or lucidsonicdreams won't work.","Q_Score":0,"Tags":"python,anaconda,pythonpath,librosa,libsndfile","A_Id":74643934,"CreationDate":"2022-06-14T22:10:00.000","Title":"Python linking to wrong library folder - sndfile library not found","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I was trying to reconnect to a local instance, but I'm getting this error; what could be a possible workaround?\n('Unable to connect to any servers', {'127.0.0.1:8000': ConnectionRefusedError(10061, \"Tried connecting to [('127.0.0.1', 6000)]. Last error: No connection could be made because the target machine actively refused it\")})\nMy docker-compose.yml file looks like this:\nnetworks:\n  app-tier:\n    driver: bridge\nservices:\n  cassandra:\n    image: 'cassandra:latest'\n    networks:\n      - app-tier\n    expose:\n      - '6000'\n    ports:\n      - '6000:9042'","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":59,"Q_Id":72634931,"Users Score":0,"Answer":"Your client tries to connect to port 8000 while you are exposing port 6000. 
Change the port settings.","Q_Score":0,"Tags":"cassandra,cassandra-2.0,cassandra-python-driver","A_Id":72635062,"CreationDate":"2022-06-15T16:37:00.000","Title":"ConnectionRefusedError No connection could be made because the target machine actively refused it cassandra","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"Hi, I am having a really weird issue with a simple python script.\nI created it in one folder, A, and later copied it to a new folder, B.\nHowever, when I add folder B as a project folder in Atom and run the script, it behaves as if it is still located in folder A.\nI can find no references to folder A (nothing in the script, for sure) and can't see anything in the file properties that would explain this result.\nRunning os.path.realpath() gives me the old folder A, and any output files I generate while running the script in folder B get saved to folder A.\nAm I missing a \"magic\" way of copying python scripts to new locations?\nHope someone can help :)\n-Thomas\nedit: just realised it might be important that I am using Atom with the Hydrogen plugin to run the script, and I have added folder B as a project folder.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":23,"Q_Id":72656518,"Users Score":0,"Answer":"\/etc\/profile.d would add it to every user's path; ~\/.bashrc would affect just your own.\nYou can always do \"$ source ~\/.bashrc\" to re-read the config files.","Q_Score":0,"Tags":"python,path,os.path,working-directory","A_Id":72656624,"CreationDate":"2022-06-17T08:36:00.000","Title":"copy python script to new location won't update path","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web 
Development":0},{"Question":"The title really says it all. Files in my PYTHONPATH are not recognized when I run scripts (ModuleNotFoundError: No module named ), but they are when I open the interactive prompt in the command line. I'm running ubuntu 22.04. What could be causing this?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":27,"Q_Id":72665661,"Users Score":0,"Answer":"I had a similar problem that was caused by python cache directories in a parent directory of my script. I deleted them and now the path works as expected.","Q_Score":0,"Tags":"python,linux,ubuntu,path,pythonpath","A_Id":72673963,"CreationDate":"2022-06-17T23:34:00.000","Title":"Pythonpath only recognized in interactive teminal","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"celery.conf.update(result_serializer='pickle') uses pickle for serializing results generated by Celery tasks. Is there a way to tell which serializer (JSON, pickle, etc...) 
should be used at the individual task level?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":61,"Q_Id":72695668,"Users Score":0,"Answer":"As far as I know, that is not possible.","Q_Score":0,"Tags":"python,rabbitmq,celery","A_Id":72712131,"CreationDate":"2022-06-21T05:18:00.000","Title":"Task-specific result_serializer in Celery","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm trying to build a C++ application with embedded python using pybind11 on Windows.\nI've installed python 3.7, 3.8, and 3.9; none of them is on PATH.\nNow, no matter what python version I want to use in cmake (by setting pybind11_DIR to the pybind11 folder in the python folder), it always links to python3.9.dll.\nWhen I rename the folder where 3.9 is installed, I get the following error:\nFindPythonLibsNew.cmake:133: error: Python config failure:\n....\n\/Python37\/Lib\/site-packages\/pybind11\/share\/cmake\/pybind11\/pybind11Config.cmake:250 (include)\nCMakeLists.txt:131 (find_package)\nAdding \"-DPYBIND11_PYTHON_VERSION=3.7\" or \"-DPY_PYTHON_VERSION=3.7\"\ndoes not help. So where can I tell cmake to use 3.7 and not 3.9?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":222,"Q_Id":72718123,"Users Score":0,"Answer":"Got the solution myself. The python3.9 was somewhere in the cache from an earlier build. 
So after deleting the whole build folder and starting everything from scratch, it directly worked with python3.7.","Q_Score":0,"Tags":"python,pybind11","A_Id":72718576,"CreationDate":"2022-06-22T15:21:00.000","Title":"pybind11 embedded python: multiple python versions, cmake cannot find correct version","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"Currently I have a group of tasks running in parallel. When an error occurs, celery sends the retried task back to the queue and moves on with the next task in the queue, but the problem is that the next task will also face the same error and cause a retry. I only fix the problem at the 3rd retry, because most times just retrying the task will get the job done, so every task in the queue needlessly goes through this retry phase 3 times before the problem can be fixed. This could be avoided if I could force celery to execute the retried tasks locally. So is there any way to tell celery to retry the tasks locally?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":62,"Q_Id":72718762,"Users Score":0,"Answer":"One (me included) could argue that if you want to retry locally, you should simply handle the exception(s) in your task yourself, especially considering that you want to preserve some state between retries.\nAs far as I know, Celery will not do it for you. 
If I am not mistaken, you could file a feature request and hope they do it in the foreseeable future.","Q_Score":0,"Tags":"python,parallel-processing,celery","A_Id":72719716,"CreationDate":"2022-06-22T16:09:00.000","Title":"Celery retry task locally?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm running a docker-compose setup that has a php front end for uploading files, python watchdog for monitoring uploads via php, and pandas for processing the resulting excel files (which are later passed to a neo4j server).\nMy issue is that when pd.read_excel is reached in python, it just hangs with idle CPU. The read_excel call is reading a local file. There are no resulting error messages. When I run the same combo on my host, it works fine. I'm using ubuntu:focal as the base image for the php\/python container.\nHas anyone run into a similar issue before, or what could be the cause? Thanks!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":38,"Q_Id":72721159,"Users Score":0,"Answer":"Fixed:\nI wasn't properly logging Python exceptions and was missing the openpyxl module.\nA simple pip install openpyxl fixed it.","Q_Score":0,"Tags":"python-3.x,pandas,docker,docker-compose","A_Id":72736445,"CreationDate":"2022-06-22T19:40:00.000","Title":"Pandas' read_excel function stalling in Docker container","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm using Apache Airflow 2.2.4. When I trigger a DAG run via UI click or via API call, I get context['dag_run'].external_trigger = True and context['dag_run'].run_type = 'scheduled' in both cases. I would like to distinguish between those two cases though. 
How can I do so?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":88,"Q_Id":72729696,"Users Score":1,"Answer":"Create a new Role that doesn't have the permission action = website.\nCreate a new user that has this role for your API calls.\nFrom the context[\"dag_run\"] you can get the \"owner\".","Q_Score":1,"Tags":"python,airflow","A_Id":72733376,"CreationDate":"2022-06-23T11:51:00.000","Title":"Airflow: distinguish API- and UI-triggered dag runs","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am trying to launch a server on Heroku using Flask and Gunicorn. I have a Procfile, which I have tried creating both with echo \"web: gunicorn annallAPI:app\" > Procfile\nand by adding the line to a Procfile in VS Code. Still, every time I get:\nremote: -----> Discovering process types remote: Procfile declares types -> (none)\nI have seen similar problems traced to a wrong name, e.g. if it is ProcFile, and others to incorrect encoding, saying it needs to be UTF-8. I have the correct name, and I can't convert to UTF-8 because the original_charset is already UTF-8.\nThe build succeeds, and then if I make a request to the server, it of course fails. For the record, I am using an M1 Mac.\nThe error I get in the Heroku log is:\nat=error code=H14 desc=\"No web processes running\" method=GET path=\"\/\" host=xxx-api.herokuapp.com request_id=65f19b67-87a1-46bf-84f4-20f4ab36e85b fwd=\"130.208.240.12\" dyno= connect= service= status=503 bytes= protocol=https\nI tried doing heroku ps:scale web=1 to start a web process, which I don't think should work if the Procfile doesn't declare a web process. It gave me the error:\nScaling dynos... ! 
\u25b8 Couldn't find that process type (web).","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":36,"Q_Id":72746164,"Users Score":0,"Answer":"Make sure that you have created requirements.txt, which lists all libraries including gunicorn.\n\nCreate the Procfile - this file has no extension and contains web: gunicorn yourwsginame:app.\nPersonally, I recommend you create this file manually and edit it using a text editor.","Q_Score":0,"Tags":"python,flask,heroku","A_Id":72746366,"CreationDate":"2022-06-24T15:17:00.000","Title":"Heroku Procfile not finding web process","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I'm trying to create a python script that automatically runs at startup, but since I need it to work on all platforms, it cannot rely on something like Task Scheduler on Windows. Is it possible to do that with python?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":46,"Q_Id":72746318,"Users Score":1,"Answer":"You could try making a batch script that runs at startup. 
This batch script would cd to the directory containing the python file that you want to run, and ultimately run the python file by doing python main.py.","Q_Score":0,"Tags":"python,operating-system,startup","A_Id":72747541,"CreationDate":"2022-06-24T15:30:00.000","Title":"Running script at startup with python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am trying to list the Storage Buckets within an Organization using the REST API.\nI am running this code in a VM; currently I have created a user-managed Service Account and am passing its key as a credential in the code.\nInstead of passing the Service Account key as a credential:\n\nCan I use the service account of the VM to list the Storage Buckets in an Organization?\nHow can we configure the code to use the VM service account?","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":322,"Q_Id":72767298,"Users Score":0,"Answer":"I understand you run your code inside Google Cloud under some service account, and you would like to use some Google Cloud services' APIs; in your example, the storage API.\nIn that case you might not need any keys or json files. You might prefer to grant your service account the relevant IAM roles explicitly, maybe at deployment time. 
And in your code it would not be necessary to create a storage client using some key file.","Q_Score":0,"Tags":"python,google-cloud-platform,google-cloud-storage,google-compute-engine,google-iam","A_Id":72768188,"CreationDate":"2022-06-27T05:23:00.000","Title":"Can we use the default service account of a VM in gcp to call api's?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I would like to run a python script called process.py.\nIt did the job when I typed it in the terminal, such as python3.9 \/home\/pi\/Program\/process.py.\nThen I set it up in crontab so it can run automatically at 5 PM, with a format like this --> 0 17 * * 1-5 python3.9 \/home\/pi\/Program\/process.py, but it doesn't work like it does when I type it manually in the terminal.\nPlease help, I have been going through this issue for hours and cannot find the solution.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":65,"Q_Id":72772071,"Users Score":1,"Answer":"If python3.9 is not in \/bin it'll crash.\nAlso, cron runs with sh; if you need bash, wrap the command in a bash script.\n\nTry providing a full path, like \/usr\/local\/bin\/python3.9 \/home\/pi\/Program\/process.py\n\nRun sudo tail -f \/var\/log\/syslog | grep -C 5 cron to see what the exact error was and continue from there.\n\n\n\nsyslog can also be under \/var\/log\/messages","Q_Score":0,"Tags":"python,cron,raspberry-pi","A_Id":72772702,"CreationDate":"2022-06-27T12:24:00.000","Title":"Python script can be run manually in my Raspberry Pi 4, but it cannot be executed automatically every 5 PM through crontab","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration 
and DevOps":1,"Web Development":0},{"Question":"We are creating tasks to load data from GCS to BigQuery date-wise, sequentially. Consider\ntask a(03rd June)>>task b(04th June)>>task c(05th June) .\nIf task a fails, we don't want to mark the entire flow as a failure, but should skip the failed task and execute the next task.\nCan anyone suggest the approach to follow? As we are newbies, it would be great if anyone could guide us","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":488,"Q_Id":72804067,"Users Score":0,"Answer":"You can set the trigger_rule of task_b and task_c to be \"ALWAYS\"","Q_Score":0,"Tags":"python,google-cloud-platform,airflow","A_Id":72804289,"CreationDate":"2022-06-29T15:28:00.000","Title":"Skipping task on failure in Airflow","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm currently building a docker image that can be used to deploy a deep learning application. The image is fairly large, with a size of roughly 6GB. Since the deployment time is affected by the size of the docker container, I wonder if there are some best practices to reduce the image size of ml-related applications.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":73,"Q_Id":72811155,"Users Score":1,"Answer":"First, keep the data (if any) apart from the image (in volumes, for example). Also, use .dockerignore to ignore files you don't want in your image.\nNow some techniques:\nA first technique is to use multistage builds. For example, an image just to install dependencies and another image that starts from the first one and runs the app.\nA second technique is to minimize the number of image layers. Each RUN, COPY and FROM command creates a different layer.
Try to combine commands into a single one using linux operators (like &&).\nA third technique is to take advantage of the caching in docker image builds. Run every command you can before copying the actual content into the image. For example, for a python app, you might install dependencies before copying the contents of the app inside the image.","Q_Score":0,"Tags":"python,docker,machine-learning,deep-learning","A_Id":72811405,"CreationDate":"2022-06-30T06:25:00.000","Title":"What are some of the best practices to reduce the size of ml-related docker image?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have Python 3.10 and I am trying to install a python module but when I try to I get this:\npip : The term 'pip' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path\nwas included, verify that the path is correct and try again.\nAt line:1 char:1","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":96,"Q_Id":72811276,"Users Score":0,"Answer":"Go to your python path, usually C:\\Users\\USERNAME\\AppData\\Local\\Programs\\Python. In there, you should find a folder, like Python310. Open that folder and then find the folder called Scripts. In cmd, use the command cd C:\\Users\\USERNAME\\AppData\\Local\\Programs\\Python\\Python310\\Scripts. Then use pip normally.
It should work.","Q_Score":1,"Tags":"python,module,pip","A_Id":72811328,"CreationDate":"2022-06-30T06:39:00.000","Title":"Why does my Windows terminal not recognize pip?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I want to use \".\/manage.py shell\" to run some Python commands with a specific tenant, but the code to do so is quite cumbersome because I first have to look up the tenant and then use with tenant_context(tenant)): and then write my code into this block.\nI thought that there should be a command for that provided by django-tenants, but there isn't.","AnswerCount":2,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":284,"Q_Id":72814364,"Users Score":2,"Answer":"I've just looked at this myself and this will work, where tenant1 is your chosen tenant:\npython3 manage.py tenant_command shell --schema=tenant1","Q_Score":2,"Tags":"python,django,shell,django-tenants","A_Id":72963091,"CreationDate":"2022-06-30T10:39:00.000","Title":"django-tenants: Python shell with specific tenant","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm in my script using the os.startfile(\"program_path\") command and it opens the program without any problem in visual studio.\nHowever as soon as I close visual studio, the program I started also closes.\nIs there a way to keep the program running even though I've closed visual studio?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":40,"Q_Id":72841840,"Users Score":1,"Answer":"you could start the script in cmd (or terminal if you are on linux) instead of visual studio using python or python3 
\\AppData\\Local\\Programs\\Python\\Python310\\Scripts\nC:\\Users\\AppData\\Local\\Programs\\Python\\Python310\n\nAnd then bring the 2 paths above the mingw64 path so that they will have a higher priority. Check again in the terminal if this time your python version is correct. If it still shows the previous version, you might want to close your cmd and reopen it again.\nI hope this helps.\nCheers.","Q_Score":1,"Tags":"python,mingw-w64","A_Id":73090579,"CreationDate":"2022-07-08T16:38:00.000","Title":"How to change Python path from using MINGW64's path to original windows path in Windows 11","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"Is there a way to generate short shellcode from a malware executable (PE\/.exe)? Sometimes malware executables (PE\/.exe) are big, which when converted to shellcode leads to longer and bigger shellcode, making analysis and obfuscation difficult and time intensive. Imagine trying to obfuscate a shellcode generated from a 1.5KB trojan by inserting new instructions before, after and between existing ones, replacing existing instructions with alternative ones and inserting jump instructions that will change the execution flow and randomly divide the shellcode into separate blocks. Performing these insertion operations on such a big shellcode will take many hours. If anyone has an idea on how to shorten these long shellcodes I will be grateful.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":204,"Q_Id":72965314,"Users Score":1,"Answer":"While I hate helping people that do these kinds of things, I have a feeling you won't get anywhere anyway.\nYour payload is your payload.\nYou don't try to minimize a payload.
You find a way to encode it, a way that suits you.\nYou can compress it of course but you must treat a payload as a completely opaque blob of data, it could be almost incompressible as far as you know.\nFor example, a simple way to encode arbitrary data in a shellcode is by applying any transformation T to it (e.g. compress it) and then converting the result to a modified base64 where arbitrary letter pairs are swapped.\nThis prevents antiviruses from detecting the payload (checking memory in real-time is too expensive so the final payload won't be checked), uses only printable characters, lets you reduce the payload size if possible (thanks to T), and is easily automated.\nIf you need to have a shorter payload, then reduce its size and not the size of the payload plus the shellcode that bootstraps it.\nHowever, what is usually done is to adopt the well-known kill-chain: vector -> dropper -> packer -> malware.\nThe vector is how you gain execution in a particular context (e.g. a malicious MS Office macro or a process injection) and the dropper is a piece of code or an executable that will download or load the payload.\nYour shellcode should act as a dropper, shellcodes are typically very constrained (in size and shape) so they are kept short by loading the payload from somewhere else.\nIf you need to embed your payload in the shellcode then analyze the constraints and work on the payload.\nIf your payload can't satisfy them, you need to change it.\nI've only seen plain PE\/ELF payloads mostly in process injections, where the attacker can allocate remote memory for the payload and the code (which is often called a shellcode but it is not really one).\nAll shellcodes used in real binary exploitation either needed no payload (eg: spawn a reverse shell) or were droppers.","Q_Score":0,"Tags":"python,assembly,nasm,obfuscation,shellcode","A_Id":72968829,"CreationDate":"2022-07-13T11:11:00.000","Title":"Generate shorter shellcode","Data Science and Machine 
Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a docker container running which start up few daemon processes post run with normal user (say with non-root privileges) id. The process which was running with normal user has to create some files and directories under \/dev inside the container by calling python function which executes os.system('mkdir -p \/dev\/some_dir') calls. However when run, these calls are failing without the directory being created. But I can run those cmds from container bash prompt where my id is uid=0(root) gid=0(root) groups=0(root).\nEven providing sudo before the cmd inside os.system('sudo mkdir -p \/dev\/some_dir') is not working.\nIs there any way I can make it work. I can not run the process with root user id due to security implications, but I need to create this directory as well.\nthanks for your pointers","AnswerCount":1,"Available Count":1,"Score":-0.1973753202,"is_accepted":false,"ViewCount":86,"Q_Id":72979676,"Users Score":-1,"Answer":"You should give \/dev directory a permission to write files for your non-root user.","Q_Score":0,"Tags":"python-3.x,docker","A_Id":72981126,"CreationDate":"2022-07-14T11:18:00.000","Title":"how to execute root command inside non-root process running inside docker container","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I recently installed python3 on my vps, I want to enable it as default, so that when I type\npython I get python 3. 
I think the problem is it's installed in \/usr\/local\/bin instead of \/usr\/bin\/. Typing python on the terminal accesses python2; typing python3 returns bash: python3: command not found \nMost answers I have seen are a bit confusing as I am not a centos expert.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":355,"Q_Id":72979838,"Users Score":0,"Answer":"for a simple fix, you can use an alias.\nAdd the alias to your .bashrc file:\nsudo vi ~\/.bashrc\nthen add your alias at the bottom\nalias python=\"python3.9\"\nSo that when you type python you'll get python 3","Q_Score":0,"Tags":"python,centos,vps","A_Id":74793748,"CreationDate":"2022-07-14T11:33:00.000","Title":"How to make python 3.9 my default python interpreter on centos","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I understand that I need to use\nshutil.copyfile() but I keep getting an error when actually trying to copy a file to said read-only folder. I tried turning off read-only manually at the folder but it just turns itself back on.\nError: PermissionError: [Errno 13] Permission Denied: [long folder path]\nI also tried running as administrator but that did nothing.\nAlso note that I am using a windows 11 pc\nEdit: I have also tried using os.system(\"attrib -r \" + path)\nwhich led to the same exact error.
Couldn't get an explanation on that.","Q_Score":1,"Tags":"python,windows,cmd","A_Id":72986425,"CreationDate":"2022-07-14T14:45:00.000","Title":"How do I copy files to a read-only folder in python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have virtualenvwrapper-win installed from the command line. When I try to do virtualenv env (env is the name of my virtual environment I'd like to set up) I get this:\n\n'virtualenv' is not recognized as an internal or external command, operable program or batch file\n\nI'm not sure what's wrong. Anyone know how to fix?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":141,"Q_Id":72983949,"Users Score":0,"Answer":"Try these commands:\nTo create virtual env - mkvirtualenv venv_name\nTo view all virtual env - lsvirtualenv\nTo activate any virtual env - workon venv_name","Q_Score":0,"Tags":"python,django,shell,virtualenv","A_Id":72984159,"CreationDate":"2022-07-14T16:40:00.000","Title":"Cannot make virtual environment","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have an issue with my virtual environment and I couldn't find a clear and straightforward answer for it. \nI had a fully working virtual environment with a lot of packages. My directory changed from \"..\/Desktop\/..\" to \"..\/Cloud_Name\/Desktop\/..\" and let's assume I can't change that anymore. \nI'm now able to cd into my environment and activate it.
\nIf I now want to use any kind of command I get: \n\nFatal error in launcher: Unable to create process using \"C: ...\" \"C: ...\" the system cannot find the specified file.\n\nI have tried so far to change the directory in \"eviroment\/Scripts\/activate\" and \"eviroment\/Scripts\/activate.bat\", but it doesn't work. \nI don't want to install a new environment. \nI'd be very thankful if someone has a working solution to show my environment where its packages are.\nThank you in advance for your time and have a great day!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":68,"Q_Id":72990965,"Users Score":0,"Answer":"If you are able to activate your virtual environment, I suggest storing the installed packages (their names and versions) in a requirements file by running pip freeze > requirements.txt Then recreate a new environment. After which you can reinstall your previous packages through pip install -r requirements.txt.\nVirtualenv usually creates a symbolic link to reference package locations and I think after you changed the location of the environments, it did not (though it usually does not) update the symbolic links to the new location.","Q_Score":1,"Tags":"python,django,virtualenv,filepath,directory-structure","A_Id":72993441,"CreationDate":"2022-07-15T08:12:00.000","Title":"Python Virtual Environment: changed path of environment - can't find packages anymore (How to show the environment where its packages are located)","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm working on an app using the Spotipy library and would like to use the Spotify recommendations call to get recommended songs by only a specific artist.
I know there are tuneable attributes for more specific characteristics but is there a parameter\/argument that would allow you to restrict the results by artists? Scoured the docs and couldn't find anything.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":110,"Q_Id":72999367,"Users Score":0,"Answer":"This would require access to the underlying representation of tracks\/artists (to be able to compute similarity with specific input artists as you'd like, for example) used in Spotify's own recommender models, which is unfortunately not public.\nAs a workaround, something you could try is to leverage the seeds in the Recommendation endpoint, and add to these some tracks from the desired artist in the hope that this will also steer recommendations toward the artist.","Q_Score":2,"Tags":"python,spotify,spotipy","A_Id":73023708,"CreationDate":"2022-07-15T20:37:00.000","Title":"Spotify API: Restricting recommendations to specific artists?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"im pretty new to coding and stuff but for whatever reason, python, pip or even doing cd C:\\Python38\\scripts cmd will just tell me the directory isnt found or that it isnt a command, i did echo %PATH% and it is in there. 
(this is probably a really simple solution, im not advanced with stuff like this at all)","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":61,"Q_Id":73008919,"Users Score":0,"Answer":"Open a new CMD with the start menu\ngo to the location where python is installed.\npress on the Path next to Search Bar.\nCopy, back to CMD, cd (paste)\nThis will set the working directory to where python is installed.\nYou can test now with python command, and check it it works , then the issue is only path related.\nNow for checking with Path, You will need to add the complete path to the python.exe the one you just copied into CMD.\nFor example\nC:\\Users\\George\\AppData\\Local\\Programs\\Python\\Python39 at this path there will be a python.exe where You can execute with CMD.\nIf you have an issue with the path and have to update to the new one, Make sure to start a new CMD window to take the effects of the Path update.","Q_Score":1,"Tags":"python,windows","A_Id":73011318,"CreationDate":"2022-07-17T02:40:00.000","Title":"Python isnt recognized in cmd and i cant use pip","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I've set up I18n for my Python3 Django4.0 app and it runs locally without problems. When I deploy it to GAE standard the translated text isn't shown. The active language changes, but the text does not change.\nThe catalog files exist, but it looks like they are not being loaded.\nI am aware that GAE-standard only allows access to tmp\/ directory. Could this be the reason? 
Are there particular cache requirements?\nAny advice or examples would be super useful.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":30,"Q_Id":73013328,"Users Score":2,"Answer":"The problem was caused by an auto-generated .gcloudignore file including .mo files.\nIf the .mo file (created by running compilemessages) doesn't exist, Django won't raise any errors.","Q_Score":0,"Tags":"python-3.x,django,google-app-engine,django-i18n,django-cache","A_Id":73013942,"CreationDate":"2022-07-17T16:04:00.000","Title":"Cannot load translation catalog for Django I18n on Google App Engine Standard","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I created a docker image for my FastAPI application then I created a container from that image. Now when I connect to that container using docker exec -it through my terminal, I am able to access it but the problem is that autocomplete doesn't work when I press TAB.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":607,"Q_Id":73013781,"Users Score":5,"Answer":"What I have understood from your question is when you enter into the docker environment, you are unable to autocomplete filenames and folders.\nUsually when you enter the container via shell, autocomplete does not work properly. Try entering the container using a bash environment instead, i.e., docker exec -it <container> bash.
Now you can use TAB to autocomplete files and folders.","Q_Score":0,"Tags":"python,docker,terminal","A_Id":73014382,"CreationDate":"2022-07-17T17:05:00.000","Title":"How to enable autocomplete when connecting to a docker container through CLI?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm using the SQL transform of apache_beam Python and deploying to Dataflow by Flex Template. The pipeline shows the error: Java must be installed on this system to use. I know the SQL transform of Beam Python uses Java; I researched ways to add Java to the pipeline but all failed.\nCan you give any advice on how to fix this error? Thanks a lot.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":137,"Q_Id":73034979,"Users Score":0,"Answer":"You need to have either Java8 or Java11 installed locally to start a Java expansion service to expand your SqlTransforms into Java SDK transforms. This happens at pipeline construction time, which is different from later pipeline execution, and could be where your issue occurred.","Q_Score":1,"Tags":"python,java,sql,apache-beam","A_Id":73040886,"CreationDate":"2022-07-19T10:06:00.000","Title":"Java must be installed on this system to use this when using dataflow flex template python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I'm using VSCode v1.69.0. My OS is MacOS v10.15.7 (Catalina).\nI have a python code that I'm usually debugging with VSCode Python debugger (using the launch.json file).\nFor no apparent reason the debugger recently stopped working, i.e. when I click on \"Run Debug\", the debug icons (stop, resume, step over etc.)
show up for 1 second, then disappear and nothing else happens. There is no error message in the terminal or anything (no command is launched apparently).\nI tried to uninstall VSCode and reinstall it => no success.\nI deleted the Python extension in \/userprofile\/.vscode\/extensions and then reinstalled it (both the current v2022.10.1 and pre-release v2022.11.12011103) => no success.\nThe python program runs properly on another IDE.\nAny idea?\nThanks a lot!\nSOLUTION: as pointed out by JialeDu, the last versions of the python extension are no longer compatible with python 3.6. I could have installed an older version of the extension but instead I created a new python environment based on python 3.7.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":319,"Q_Id":73040397,"Users Score":1,"Answer":"If you are using python3.6 version then latest version of debugger no longer supports it.\nYou can use the historical version 2022.08.*. Or use a new python version.","Q_Score":1,"Tags":"python,visual-studio-code,vscode-debugger","A_Id":73073946,"CreationDate":"2022-07-19T16:19:00.000","Title":"VSCode Python debugger stopped launching","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm writing a simple CLI application using python.\nI have a list of records that I want to print in the terminal and I would like to output them just like git log does. 
So as a partial list you can load more using the down arrow, from which you quit by typing \"q\" (basically like the output of less, but without opening and closing a file).\nHow does git log do that?","AnswerCount":3,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":87,"Q_Id":73048269,"Users Score":1,"Answer":"How does git log do that?\n\ngit log invokes less when the output will not fit on the terminal. You can check that by running git log (if the repo doesn't have a lot of commits you can just resize the terminal before running the command) and then checking the running processes like so ps aux | grep less","Q_Score":1,"Tags":"python,linux,stdout","A_Id":73048724,"CreationDate":"2022-07-20T08:27:00.000","Title":"How to print to terminal like git log?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I want to deploy a python streamlit app using compute engine as my company does not provide access to app engine yet. Is there a way to deploy the app using compute engine rather than app engine on google cloud. I have searched multiple forum but unable to find relevant answers.\nSorry for the more general question; I hope someone can help me get over this hurdle or maybe point me to a resource.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":113,"Q_Id":73049228,"Users Score":0,"Answer":"If your application is containerized, I would suggest deploying it to Google Cloud Run instead of Compute Engine. Cloud Run is a serverless cloud service used to easily deploy pre-built applications. One of its main advantages is that it automates most of the resources management process. 
Therefore, all you have to do is to tell Cloud Run where your Docker image is, and then Cloud Run will deploy it on a serverless environment without needing to specify the optimal number of resources, for example.","Q_Score":1,"Tags":"python,deployment,google-compute-engine,streamlit","A_Id":73051459,"CreationDate":"2022-07-20T09:36:00.000","Title":"How to deploy a streamlit app in compute engine","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I use Odoo 14 enterprise hosted on premise\nI duplicated my database, then I had this error: 502 Bad Gateway nginx\/1.14.0 (Ubuntu)\nI restarted the nginx and Odoo services; nothing changed.\nHow can I solve it?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":263,"Q_Id":73066738,"Users Score":0,"Answer":"First switch to the database user, then stop the server, and then restart it.","Q_Score":0,"Tags":"python,ubuntu,nginx,odoo,odoo-14","A_Id":73370856,"CreationDate":"2022-07-21T13:02:00.000","Title":"502 Bad Gateway nginx\/1.14.0 (Ubuntu) odoo14","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"As the title suggests, I need a way to get the resolution of the media running in the Gstreamer pipeline. I know it has something to do with the caps in the pipeline.
But how do I access it?\nAlso, let's say I somehow got it and it is of the type Gst.caps, how do I get the actual width and height from this?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":45,"Q_Id":73071640,"Users Score":0,"Answer":"If running your pipeline by gst-launch-1.0, you can actually see the caps negotiation by putting GST_DEBUG=7 before gst-launch-1.0. However, GST_DEBUG=5 may get you the information you need without the huge amount of trace information given by GST_DEBUG=7.","Q_Score":0,"Tags":"python,gstreamer","A_Id":73296919,"CreationDate":"2022-07-21T19:23:00.000","Title":"Python: Getting the resolution of the media in a gstreamer pipeline","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"What does hashed mean when applied to a path in linux or Mac bash?\nWhen I use the command in bash:\n\n\n\ntype python3\nI get:\npython3 is hashed (\/usr\/local\/bin\/python3)\n\n\n\nWhat does hashed mean. Sometimes I get hashed and sometimes just the path line.","AnswerCount":2,"Available Count":2,"Score":0.1973753202,"is_accepted":false,"ViewCount":154,"Q_Id":73072290,"Users Score":2,"Answer":"Theoretically, every time you type a command name like foo that doesn't include a \/, the shell looks at each directory in your PATH variable to find a command named foo to execute.\nThis is somewhat time-consuming and redundant (your commands don't really move much), so the shell only performs the full PATH lookup once, and caches the result. 
Typically, it uses a hash table so that command names can be looked up quickly, so \"python3 is hashed (\/usr\/local\/bin\/python3)\" is short for\n\npython3 was found in the hash table and mapped to the path \/usr\/local\/bin\/python3\n\nThe difference between foo is bar and foo is hashed (bar) is that in the former, foo hasn't been executed yet; type itself does not store the result in the hash table on a successful lookup.","Q_Score":1,"Tags":"python,linux,path","A_Id":73072314,"CreationDate":"2022-07-21T20:29:00.000","Title":"What does hashed mean","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"What does hashed mean when applied to a path in linux or Mac bash?\nWhen I use the command in bash:\n\n\n\ntype python3\nI get:\npython3 is hashed (\/usr\/local\/bin\/python3)\n\n\n\nWhat does hashed mean. Sometimes I get hashed and sometimes just the path line.","AnswerCount":2,"Available Count":2,"Score":0.1973753202,"is_accepted":false,"ViewCount":154,"Q_Id":73072290,"Users Score":2,"Answer":"It's a performance thing; instead of searching the whole path for the binary every time it is called, it's put into a hash table for quicker lookup. So any binary that's already in this hash table, is hashed. 
If you move binaries around when they're already hashed, it will still try to call them in their old location.\nSee also help hash, or man bash and search for hash under builtin commands there.","Q_Score":1,"Tags":"python,linux,path","A_Id":73072320,"CreationDate":"2022-07-21T20:29:00.000","Title":"What does hashed mean","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm trying to write a script:\nenv PYTHONPATH=$PYTHONPATH: $Dir\/scripts find * -name '*.py' -exec pylint {} \\\\; | grep . && exit 1\nThe code is finding all scripts in the root directory instead of using the environment variable I set. Any help on writing this code to only look for files in the directory I set as the value of PYTHONPATH?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":245,"Q_Id":73074234,"Users Score":0,"Answer":"There's no space between the PYTHONPATH value; it was a typo. I want to run the command on a CLI instead of in a script.","Q_Score":0,"Tags":"python,linux,shell","A_Id":73087742,"CreationDate":"2022-07-22T01:39:00.000","Title":"env command not working with find command","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"You know that a python file converted to an exe file can be deciphered and its code can be displayed.\nWhat is the way to prevent this? I don't want people to see the code.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":50,"Q_Id":73090144,"Users Score":0,"Answer":"You can use Nuitka to convert a .py file into a C-based standalone executable.
Then, pass a resulting .exe file through VMProtect to obfuscate the binaries.","Q_Score":0,"Tags":"python,python-3.x,encryption,cryptography,exe","A_Id":73090266,"CreationDate":"2022-07-23T10:58:00.000","Title":"Blocking deciphering py.exe file","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am building a desktop python application that uses the MSAL authorization code workflow by opening up a browser window for authentication. I keep getting back an invalid grant error (code 70000) for some accounts but not others when trying to get an authorization token. It seems to work just fine for the personal Microsoft account through which the application is registered in the Azure portal. It also works fine for my university account (a school Microsoft account), but not for other personal Microsoft accounts.\nThrough the Azure portal, the application is registered with the ability for all Microsoft accounts to work with it. The scopes listed there also match the scopes that I am requesting in the python application.\nThe authorize endpoint does return a valid looking authorization code, but then when I try to use that code to get a valid token, I get the error. More specifically, the message associated with the error says:\nAADSTS70000: The request was denied because one or more scopes requested are unauthorized or expired. 
The user must first sign in and grant the client application access to the requested scope.\\r\\nTrace ID: 6afddbd2-308e-44df-8640-976dc1c1f601\\r\\nCorrelation ID: bdb626d0-0a3d-4333-ac8f-b5ff510ca046\\r\\nTimestamp: 2022-07-24 18:50:23Z\nWhat might be causing this issue to occur?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":196,"Q_Id":73101247,"Users Score":1,"Answer":"It turns out this was an issue with the scopes I was providing to the authorization endpoint. The scopes profile, openid, and offline_access should be specified to allow some features of Microsoft's Graph API to function properly. In my case, it was the offline_access scope that did the trick. Also note that these scopes cannot be added to the authorization token request, at least through the Python MSAL library. These scopes need to be specified during the process of getting the authorization code only, not the token.","Q_Score":0,"Tags":"python,msal","A_Id":73101480,"CreationDate":"2022-07-24T19:05:00.000","Title":"Having trouble getting MSAL Authorization Code workflow to work","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I'm using Pycharm to create a simple program that uses snap7 lib. 
to read an S7-1200 PLC, it works correctly when I run it from PyCharm, but when I try to run it as an .exe file it prompts this error message:\nTraceback (most recent call last):\nFile \"main.py\", line 1, in \nModuleNotFoundError: No module named 'snap7'\nThe snap7 .dll and .lib are both in the sys32 folder, as well as in a directory on the PATH environment variable.\nBoth the Python file and my PC are x64, so I used the x64 version of the DLL and lib files.\nWhat am I missing?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":98,"Q_Id":73102473,"Users Score":0,"Answer":"How did you install the snap7 library?\npy -m pip install python-snap7 is the way I installed it. I never had any problems with it.","Q_Score":0,"Tags":"python,snap7","A_Id":73952223,"CreationDate":"2022-07-24T22:20:00.000","Title":"Snap7 module not found with python .exe","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am trying to make a simple azure function app that uses tabula-py. However, this package has a java dependency. When I try to run it, I get an error about java not being in PATH.\nI've tried to add java to the fileshare but I get the same error.\nWhenever I use jdk, I get the error PermissionError: [Errno 13] Permission denied: '\/home\/.jre\/jdk-15.0.2+7-jre\/lib\/server\/classes.jsa'\nIs there a way to get java and python running in the same function app?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":151,"Q_Id":73112431,"Users Score":0,"Answer":"Officially, only one runtime is supported on an Azure function app; the language of your function app is maintained in the FUNCTIONS_WORKER_RUNTIME setting, as documented here: learn.microsoft.com\/en-us\/azure\/azure-functions\/\u2026 In case of any issue with your function app, it may then not be supported.
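Whichever route you take, since tabula-py shells out to a JVM, it helps to fail fast with a clear error when java is missing; a small sketch (the helper name is my own, not part of tabula-py):

```python
import shutil

def require_binary(name):
    """Return the full path of a required executable, or raise a clear error."""
    path = shutil.which(name)     # honors the PATH the worker process actually sees
    if path is None:
        raise RuntimeError(f"{name!r} not found on PATH")
    return path
```

Calling require_binary("java") before invoking tabula-py turns a confusing mid-run failure into an immediate, explicit one.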
I think a function on Linux using a custom container would be a possible solution, since there you have full control.","Q_Score":0,"Tags":"python,azure,azure-functions","A_Id":73189023,"CreationDate":"2022-07-25T16:19:00.000","Title":"How to get Python Azure Function App with Java","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"Is there any way to accurately convert RTF files to PDF and DOC files on a Linux server with Python? I have gone through a number of past questions and here is what I concluded:\n\nThe libreoffice command line converter is not accurate for my PDF and it does not work at all for DOC.\nPython libraries like PyWin32 work on Windows. I would have to make scripts and host them separately on Windows Server to work with the Windows\/Microsoft environment. Although I am not sure if it's worth giving it a try?\nThere are .NET libraries like Aspose.Words which will do the work but they are way too costly for a startup.\n\nAny help would be much appreciated.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":345,"Q_Id":73121352,"Users Score":1,"Answer":"As per the suggestions and my further research, I concluded that it is best to implement the 2nd option mentioned in my question because of its balance between a quality solution and being cost optimal. Utilities like libreoffice, latex, etc. would do the work on Linux, but none of them are as accurate as an MS Word generated report. For my use case, I implemented a separate Flask API to run on a Windows server with an MS Office license on it.
It contains just a couple of endpoints, each taking an RTF file as input and generating a PDF or DOCX.","Q_Score":1,"Tags":"python,linux,pdf,docx,rtf","A_Id":73266088,"CreationDate":"2022-07-26T09:57:00.000","Title":"Converting RTF Files to DOCX and PDF Files on Python-Django and linux server","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I installed Anaconda, but it did not include the GUI app, Anaconda-Navigator, in the Applications folder. What do I need to do to get the GUI app?\nDetails:\nComputer: 2021 14-inch MacBook Pro, M1 Max\nOS: macOS Monterey 12.5\nA month ago, I had the full Intel version of Anaconda, including Anaconda-Navigator, running fine.\nI decided I wanted the M1 version, so I uninstalled it using the method detailed on the Anaconda website (anaconda-clean), rm -rf ~\/opt\/anaconda3, and removing the conda section of .zshrc. I also deleted Anaconda-Navigator from the Applications folder, removed all ~\/Library\/Receipts, and restarted the laptop.\nI then used the GUI installer for the M1 version, which set up conda and seemingly the complete ~\/anaconda3 folder, but it didn't install the Anaconda-Navigator app.\nI repeated the full uninstall and used the shell installer, getting the same result - no Anaconda-Navigator.\nAny suggestions on how I can get the Anaconda-Navigator GUI?\nThanks!!\nMike","AnswerCount":4,"Available Count":2,"Score":0.049958375,"is_accepted":false,"ViewCount":3768,"Q_Id":73188906,"Users Score":1,"Answer":"I had the same issue.
After installing with conda install anaconda-navigator, run it from the terminal with anaconda-navigator and lock it to the Dock (right click -> Options -> Keep in Dock).","Q_Score":0,"Tags":"python,anaconda3","A_Id":74452739,"CreationDate":"2022-08-01T04:40:00.000","Title":"Anaconda-Navigator.app missing after installation on M1 macOS Monterey","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I installed Anaconda, but it did not include the GUI app, Anaconda-Navigator, in the Applications folder. What do I need to do to get the GUI app?\nDetails:\nComputer: 2021 14-inch MacBook Pro, M1 Max\nOS: macOS Monterey 12.5\nA month ago, I had the full Intel version of Anaconda, including Anaconda-Navigator, running fine.\nI decided I wanted the M1 version, so I uninstalled it using the method detailed on the Anaconda website (anaconda-clean), rm -rf ~\/opt\/anaconda3, and removing the conda section of .zshrc. I also deleted Anaconda-Navigator from the Applications folder, removed all ~\/Library\/Receipts, and restarted the laptop.\nI then used the GUI installer for the M1 version, which set up conda and seemingly the complete ~\/anaconda3 folder, but it didn't install the Anaconda-Navigator app.\nI repeated the full uninstall and used the shell installer, getting the same result - no Anaconda-Navigator.\nAny suggestions on how I can get the Anaconda-Navigator GUI?\nThanks!!\nMike","AnswerCount":4,"Available Count":2,"Score":0.1488850336,"is_accepted":false,"ViewCount":3768,"Q_Id":73188906,"Users Score":3,"Answer":"run -> conda install anaconda-navigator\nWorked for me on my Mac mini M1. Was looking for ages to find this.
Hope this helps someone looking for it as well.","Q_Score":0,"Tags":"python,anaconda3","A_Id":73827377,"CreationDate":"2022-08-01T04:40:00.000","Title":"Anaconda-Navigator.app missing after installation on M1 macOS Monterey","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I've just installed Python 3.10.6 and checked the box \"Add python to your path\", and now when I open cmd to check for Python availability, I get \"python not found\":\n\"Python was not found; run without arguments to install from the Microsoft Store, or disable this shortcut from Settings > Manage App Execution Aliases.\"\nAny help please!!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":535,"Q_Id":73216491,"Users Score":0,"Answer":"Exit the current session or reboot (rebooting is not recommended). You can also set PATH for the current cmd session with set Path=<full path to the folder containing python.exe>, but all the other paths in the current cmd PATH will then be cleared, so first check the current PATH and then append the new item.\nCheck the path from cmd with echo %Path%, or in PowerShell with $Env:Path.","Q_Score":0,"Tags":"python,path,environment-variables","A_Id":73216669,"CreationDate":"2022-08-03T05:00:00.000","Title":"Python 3.10.6(64 bit) PATH","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am trying to add OpenCV to my Python by using pip install, but I get the error\n##'pip' is not recognized as an internal or external command,\noperable program or batch file.\nWhen I use echo %PATH% I get this\n##C:\\Program Files (x86)\\Common Files\\Oracle\\Java\\javapath;C:\\Users\\jashp\\AppData\\Local\\Programs\\Python\\Python39;C:\\Program Files\\NVIDIA
Corporation\\NVIDIA NvDLISR;C:\\Program Files\\dotnet;C:\\Program Files (x86)\\Common Files\\Oracle\\Java\\javapath;C:\\Users\\jashp\\AppData\\Local\\Programs\\Python\\Python39;C:\\Program Files\\NVIDIA Corporation\\NVIDIA NvDLISR;C:\\Program Files\\dotnet;C:\\Program Files (x86)\\Common Files\\Oracle\\Java\\javapath;C:\\ProgramData\\Oracle\\Java\\javapath;C:\\Windows\\system32;C:\\Windows;C:\\Windows\\System32\\Wbem;C:\\Windows\\System32\\WindowsPowerShell\\v1.0;C:\\Windows\\System32\\OpenSSH;C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common;C:\\Program Files\\NVIDIA Corporation\\NVIDIA NvDLISR;C:\\Program Files\\dotnet;C:\\Users\\jashp\\AppData\\Local\\Microsoft\\WindowsApps;C:\\Python34\\Scripts;;C:\\Python34\\Scripts\nI even tried C:\\Users\\jashp>setx PATH \"%PATH%;C:\\pip\" and got\n##SUCCESS: Specified value was saved.\nThen I tried C:\\Users\\jashp>pip install numpy and got\n'pip' is not recognized as an internal or external command,\noperable program or batch file.\nThe path to my Python is C:\\Users\\jashp\\AppData\\Roaming\\Python","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":58,"Q_Id":73221449,"Users Score":0,"Answer":"You need to add the path of your pip installation to your PATH system variable. By default, pip is installed to C:\\Python34\\Scripts\\pip (pip now comes bundled with new versions of Python), so the path \"C:\\Python34\\Scripts\" needs to be added to your PATH variable.\nTo check if it is already in your PATH variable, type echo %PATH% at the CMD prompt.\nTo add the path of your pip installation to your PATH variable, you can use the Control Panel or the setx command. For example:\nsetx PATH \"%PATH%;C:\\Python34\\Scripts\"\nNote: According to the official documentation, variables set with setx \"are available in future command windows only, not in the current command window\".
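Rather than guessing where the Scripts directory is, you can also ask the interpreter itself (standard library; the exact path varies per machine):

```python
import sysconfig

# Directory where pip and other console scripts are installed for this
# interpreter, e.g. C:\Python34\Scripts in the setup described above.
scripts_dir = sysconfig.get_path("scripts")
print(scripts_dir)
```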
In particular, you will need to start a new cmd.exe instance after entering the above command in order to utilize the new environment variable.","Q_Score":0,"Tags":"python","A_Id":73221544,"CreationDate":"2022-08-03T12:07:00.000","Title":"How to install OpenCV on python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I want to create a Python .exe file in order to sell it to a client; however, I'm worried that Python can easily be reversed and the source code can be found, rendering my work less valuable in the future. Is there any way I can compile Python code and make it irreversible?\nAny other suggestions for solving this issue are welcome.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":272,"Q_Id":73228990,"Users Score":0,"Answer":"You can also install auto_py_to_exe with pip. Call it in your terminal to open it, then just select your .py file and it'll convert it to an .exe for you.","Q_Score":0,"Tags":"python,compilation,exe,reverse-engineering","A_Id":73229064,"CreationDate":"2022-08-03T23:56:00.000","Title":"Can I compile a python script to an executable file, so that cannot be reversed back to the source code?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a Python program that is feeding JSON input to a binary running in GDB, which then reads that input using the C function fread().\nThe problem is that fread() needs an EOF\/Ctrl-D to stop reading, but the JSON input I am passing to the binary in GDB consists of strings, so the binary just hangs waiting for more input or a Ctrl-D.\nI have read that apparently there isn't a way to pass an EOF instruction using
Python. Is there a way to define an instruction in GDB that sends an EOF to the binary instead maybe?\nIf this was possible I could then trigger that instruction in Python after I send the input.\nI am not sure if something like this is possible in GDB using the 'define' feature.\nThanks","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":86,"Q_Id":73274020,"Users Score":0,"Answer":"Once you've sent the generated JSON, close the file descriptor to signal EOF to your C program.\nIf your generator in written in Python, you should just be able to call f.close() on the file object you wrote the JSON string to. If that is stdout, sys.stdout.close() should do the trick. I would suggest posting the source of your generator program if you want an exact answer.","Q_Score":0,"Tags":"python,python-3.x,gdb,gdbserver,gdb-python","A_Id":73274222,"CreationDate":"2022-08-08T07:09:00.000","Title":"Sending an automated Ctrl+D within GDB","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I've started running a Python script that takes about 3 days to finish. My understanding is that Linux processes make a copy of a python script at the time they're created, so further edits on the file won't impact the running process.\nHowever, I just realized I've made a mistake on the code and the processes running for over 2 days are going to crash. Is it possible to edit the copy of the python script the process loaded?\nI'm basically trying to figure out if there's a way to fix the mistake I've made without having to restart the script execution.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":33,"Q_Id":73284997,"Users Score":1,"Answer":"No, you can't. The interpreter byte compiles your source code when it initially reads it. 
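A quick way to see this for yourself (a self-contained sketch; the module name mymod is made up):

```python
import importlib
import pathlib
import sys
import tempfile

sys.dont_write_bytecode = True          # keep the demo free of stale .pyc caches

with tempfile.TemporaryDirectory() as d:
    source = pathlib.Path(d) / "mymod.py"
    source.write_text("VALUE = 1\n")
    sys.path.insert(0, d)
    import mymod
    source.write_text("VALUE = 2\n")    # "fix" the file while the module is loaded
    print(mymod.VALUE)                  # still 1: the running code is unaffected
    importlib.reload(mymod)             # only an explicit reload re-reads the file
    print(mymod.VALUE)                  # now 2
```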
Updating the file won't change the byte code that is running.","Q_Score":1,"Tags":"python,linux,process","A_Id":73285057,"CreationDate":"2022-08-08T23:21:00.000","Title":"Can I edit the copy of the python script a process is running?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I've been having trouble getting python-confluent-kafka to work on my windows server.\nWhen creating a simple consumer on my local machine, everything works fine.\nHowever, once on the windows server, I will receive the messages but get the following error:\n\nb'Decompression (codec 0x4) of message at 24023756 of 9550 bytes failed: Local: Not implemented'\n\nI copied the exact conda environment I have on my local machine to the server.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":129,"Q_Id":73298124,"Users Score":0,"Answer":"Turns out it had to do with the .dll's not being found due to the Path\/Environment Variables not being configured properly.","Q_Score":0,"Tags":"python,apache-kafka,compression,confluent-kafka-python","A_Id":73912662,"CreationDate":"2022-08-09T21:08:00.000","Title":"0x4 Decompression error in Python Confluent Kafka","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have been trying to no avail to pull an environment variable from my windows system with Python. I have tried os.getenv() and os.environ.get(), but neither seem to have worked.\nI have a feeling that \"Environment Variable\" might refer to two different things, or that python may not have access to the environment variables so it makes up its own. 
I am trying to pull the information stored in the environment variable that is set by going to system > advanced system settings > environment variables.\nI am able to get the \"PATH\" environment variable in python, but it seems to be different than the one in the windows setting.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":357,"Q_Id":73311917,"Users Score":0,"Answer":"The answer to my question was that I was trying to run my code in an environment that was different from my system environment. Running the code via command prompt worked, and it worked in VS Code when I defined the environment in the \"launch.json\" file.","Q_Score":0,"Tags":"python,environment-variables","A_Id":73312735,"CreationDate":"2022-08-10T19:45:00.000","Title":"Python won't pull Windows system environment variables","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I want to scale on cloud a one off pipeline I have locally.\n\nThe script takes data from a large (30TB), static S3 bucket made up of PDFs\nI pass these PDFs in a ThreadPool to a Docker container, which gives me an output\nI save the output to a file.\n\nI can only test it locally on a small fraction of this dataset. 
The whole pipeline would take a couple of days to run on a MacBook Pro.\nI've been trying to replicate this on GCP - which I am still discovering.\n\nUsing Cloud Functions doesn't work well because of its max timeout\nA full Cloud Composer architecture seems a bit of overkill for a very straightforward pipeline which doesn't require Airflow.\nI'd like to avoid coding this in Apache Beam format for Dataflow.\n\nWhat is the best way to run such a Python data processing pipeline with a container on GCP?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":157,"Q_Id":73319079,"Users Score":1,"Answer":"Thanks to the useful comments on the original post, I explored other alternatives on GCP.\nUsing a VM on Compute Engine worked perfectly. The overhead and setup are much less than I expected; the setup went smoothly.","Q_Score":0,"Tags":"python,google-cloud-platform,google-cloud-run,dataflow","A_Id":73379502,"CreationDate":"2022-08-11T10:21:00.000","Title":"Running large pipelines on GCP","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I'm new to this environment and currently learning the basics of Python through YouTube courses. While using VSCode, I always encounter the same problem when I try to use the terminal.
The command (like a pip install, for example) won't run because my user folder name has a space in it.\nIt shows \"C:\\User \" is not valid.\nOtherwise, when I run code with CodeRunner everything works fine.\nI'm sorry not to attach a screenshot; it seems I'm not able to.\nIs there any way to fix this?","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":103,"Q_Id":73326679,"Users Score":0,"Answer":"People will probably say:\n\nDon't put spaces in your paths to begin with.\nTry updating VSCode?\n\nIf you're running the commands in the terminal, put quotes around the path. E.g., python \"path\/to\/spaced file.py\".\nOtherwise, if this is a task that you're running, simply put \"\" around ${file} in tasks.json.\nThough, I strongly recommend not using spaces, and replacing them with something like - or _, or just using a capitalisation rule like camelCase or PascalCase.","Q_Score":0,"Tags":"python,visual-studio-code","A_Id":73326762,"CreationDate":"2022-08-11T20:40:00.000","Title":"Troubles in VSCode terminal because of user folder name","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a webserver hosted on Cloud Run that loads a tensorflow model from cloud file store on start. To know which model to load, it looks up the latest reference in a psql db.\nOccasionally a retrain script runs using Google Cloud Functions. This stores a new model in cloud file store and a new reference in the psql db.\nCurrently, in order to use this new model I would need to redeploy the Cloud Run instance so it grabs the new model on start. How can I automate using the newest model instead? Of course something elegant, robust, and scalable is ideal, but if something hacky\/clunky but functional is much easier, that would be preferred.
This is a throw-away prototype but it needs to be available and usable.\nI have considered a few options but I'm not sure how feasible any of them are:\n\nCreate some sort of postgres trigger\/notification that the Cloud Run server listens to. I guess this would require another thread. This ups complexity and I'm unsure how multiple threads work with Cloud Run.\nSimilar, but use an HTTP pub\/sub. Make an endpoint on the server to re-lookup and get the latest model. Publish on retrainer finish.\nI could deploy a new instance and remove the old one after the retrainer runs. Simple in some regards, but it seems riskier and it might be hard to accomplish programmatically.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":95,"Q_Id":73353518,"Users Score":2,"Answer":"Your current pattern should implement cache management (because you cache a model). How can you invalidate the cache?\n\nRestart the instance? Cloud Run doesn't allow you to control the instances. The easiest way is to redeploy a new revision to force the current instance to stop and new ones to start.\nSetting a TTL? It's an option: load a model for XX hours, and then reload it from the source. Problem: you could have glitches (instances with the new model and instances with the old one, until the cache TTL expires for all the instances).\nOffering a cache invalidation mechanism? As said before, it's hard because Cloud Run doesn't allow you to communicate with all the instances directly. So a push mechanism is very hard and tricky to implement (not impossible, but I don't recommend wasting time on that). A pull mechanism is an option: check a \"latest updated date\" somewhere (a record in Firestore, a file in Cloud Storage, an entry in Cloud SQL, ...) and compare it with your model's updated date. If similar, great. If not, reload the latest model.\n\nYou have several solutions; it all depends on what you want.\n\nBut you have another solution, my preference.
In fact, every time you have a new model, recreate a new container with the new model already loaded in it (with Cloud Build) and deploy that new container on Cloud Run.\nThat solution solves your cache management issue, and you will have a better cold start latency for all your new instances. (In addition to easier rollback, A\/B testing or canary release capability, version management and control, portability, local\/other env testing, ...)","Q_Score":0,"Tags":"python-3.x,tensorflow,google-cloud-platform,fastapi,distributed-system","A_Id":73366096,"CreationDate":"2022-08-14T17:01:00.000","Title":"How to reload tensorflow model in Google Cloud Run server?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I am running Flask 1.1.4 via Python 3.5.3, hosted via an Apache 2 server on Debian Stretch. I am attempting to log various messages from the program, using the python logging module. This works normally. However, if I restart the Apache server using sudo service apache2 restart, the Flask application errors out with PermissionError: [Errno 13] Permission denied: (log file name). Is this an issue anyone else has run into?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":262,"Q_Id":73367502,"Users Score":0,"Answer":"It turns out that there was a cron process that was editing the ownership of all files in my log folder to root.
Once the program was restarted and needed to re-obtain the file reference, it was unable to do so.","Q_Score":0,"Tags":"python,apache,flask,python-logging","A_Id":73392656,"CreationDate":"2022-08-16T00:03:00.000","Title":"Why am I getting PermissionError [Errno 13] when attempting to write to log in Flask hosted by Apache?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"When I connect a node to my head it gives me this message:\n\n(gcs_server) gcs_server.cc:283: Failed to get the resource load: GrpcUnavailable: RPC Error message: failed to connect to all addresses; RPC Error details:\n\nI'm using ray 1.13.0\nMy head node is running on a WSL2 instance with all possible ports open and forward to it.\n\nray start --head --port 6379 --ray-client-server-port 10001 --redis-shard-ports 20000,20001,20002,20003,20004 --dashboard-port 8265 --node-manager-port 6380 --object-manager-port 6381 --worker-port-list=10000,10002,10003,10004 --dashboard-host 0.0.0.0\n\nOn another pc running windows (also happens with another one running Ubuntu 20.04) i'm able to start ray with:\n\nray start --address='MY_IP:6379'\nAnd it gives me:\nRay runtime started.\n\nNow, as soon as I connect a node, it starts giving me the message:\n\n(gcs_server) gcs_server.cc:283: Failed to get the resource load: GrpcUnavailable: RPC Error message: failed to connect to all addresses; RPC Error details:\n\nand if I try to run ray memory I get:\n\ngrpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:\nstatus = StatusCode.UNAVAILABLE\ndetails = \"failed to connect to all addresses\"\ndebug_error_string = \"{\"created\":\"@1660685529.361531380\",\"description\":\"Failed to pick 
subchannel\",\"file\":\"src\/core\/ext\/filters\/client_channel\/client_channel.cc\",\"file_line\":3134,\"referenced_errors\":[{\"created\":\"@1660685529.361530960\",\"description\":\"failed to connect to all addresses\",\"file\":\"src\/core\/lib\/transport\/error_utils.cc\",\"file_line\":163,\"grpc_status\":14}]}\"\n\nI've been at this for days and nothing I find on Google makes any difference.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":301,"Q_Id":73380596,"Users Score":1,"Answer":"I was having a similar issue and managed to solve it by specifying the node's public IP address.\nOn the node, try a command similar to the following:\nray start --address='HEAD_IP:6379' --node-ip-address='NODE_PUBLIC_IP'\nIf you don't specify node-ip-address, the node registers itself as 127.0.0.1 and the head node is unable to communicate with it.\nHope this helps.","Q_Score":1,"Tags":"python,windows-subsystem-for-linux,ray","A_Id":73382902,"CreationDate":"2022-08-16T21:34:00.000","Title":"GrpcUnavailable when setting up a local Ray Cluster","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"How do I add health checks (liveness\/readiness) to the Kafka consumer pods in my Django application? I'm new to these infrastructure-related things.
Please help me implement a periodic health check for the Kafka consumer pods in my application, so that unhealthy pods are not kept in rotation when there is an issue.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":719,"Q_Id":73417557,"Users Score":1,"Answer":"You could add a simple GET \/health endpoint that returns a 200 status.\nOr, if you want something more closely tied to what the Kafka consumer is doing, catch exceptions in your consumer and flip a boolean variable that is inspected as part of your \/health route, returning a 500 code upon any severe error. Kubernetes will then terminate any pod whose probe returns a non-200 HTTP status.","Q_Score":1,"Tags":"python,django,apache-kafka,kafka-python,health-check","A_Id":73419399,"CreationDate":"2022-08-19T13:28:00.000","Title":"Add health check to Kafka consumer k8s pods","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I'd like to stream audio in real-time from mic to speaker using PyAudio, with an opportunity to read \/ modify \/ write the sample buffers as they go by.\nWhat is the idiomatically correct way to do this in PyAudio?\nI understand that in callback mode, the output stream driving the speaker wants to \"pull\" samples in its callback function. Similarly, the input stream consuming samples from the microphone wants to \"push\" samples in its callback function. I also understand that callbacks run in their own threads, and that the docs say:\n| Do not call Stream.read() or Stream.write() if using non-blocking operation.\nGiven those constraints, it's not clear how to connect a microphone's stream to a speaker's stream.
(And I understand the complexities if the microphone and speaker clocks are not synchronized.)\nAssuming that the microphone and speaker clocks ARE synchronized, how would you stream from mic to speaker?\nUpdate: I tried allocating multiple buffers, initially passing them to the mic stream callback and then to the speaker stream callback in round-robin style, but I got three mic callbacks in a row before getting three speaker callbacks, so clearly that doesn't work.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":228,"Q_Id":73430801,"Users Score":0,"Answer":"[UPDATE: I thought the following would work, but it doesn't -- see \"Update\" in the original question. But I'm leaving in place for posterity.]\nYou could allocate three frames ahead of time and cycle through them: one gets passed to the microphone callback, one is available for read\/modify\/write processing, the third gets passed to the speaker callback. (You might actually need four frames to allow for latency and delays.)\nI didn't see any documentation on the format of the frames themselves; are they just arrays of ints (for 16 bit sample data)? [UPDATE: \"out_data is a byte array whose length should be the (frame_count * channels * bytes-per-channel)\"] So that's easy...","Q_Score":0,"Tags":"python-3.x,pyaudio","A_Id":73430836,"CreationDate":"2022-08-20T23:50:00.000","Title":"real-time streaming from mic to speaker in PyAudio","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a problem with accessing system variables by regular (non-admin) users.\nLet's say I created a system variable MY_VAR. 
When I access it via Python from the admin account, it works fine, but when I'm trying to do that on regular user's account, it cannot see the variable.\nI could set it as a user environment variable for the regular user account, but I need this variable to be available for all users (current and future), so it won't work on a long run.\nI will appreciate any help.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":48,"Q_Id":73436584,"Users Score":1,"Answer":"It turned out a system restart was needed. After that it started to work as expected.","Q_Score":0,"Tags":"python,windows,environment-variables","A_Id":73437271,"CreationDate":"2022-08-21T17:06:00.000","Title":"Windows system variables not visible for non-admin user","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I seem to have a \"rogue\" copy of python.exe on my Windows 11 machine. When I use it to create a virtual environment with the command \"python -m venv venv\", it produces an environment in which pip always fails. I have uninstalled Python from the add\/remove probrams menu but will when I open a command prompt or a power shell and give the command Python, it responds cheerfully with Python 3.10.5 (main) [GCC 12.1...\"\nI can't use pip in any virtual environment\nHow can I determine where it is finding Python? How can I override it with a good Python?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":34,"Q_Id":73448445,"Users Score":0,"Answer":"In the Windows Powershell you can use the Get-Command command to find where a specific executable is being ran from. 
In CMD you can use where.\nIf you want to use a Python runtime that you have recently installed, and you know where it is located, I would just add that location to my PATH towards the top of the list so that when you run python in your terminal, you get that python.","Q_Score":0,"Tags":"python","A_Id":73448508,"CreationDate":"2022-08-22T16:42:00.000","Title":"How to find location of an executable?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I already installed VS CODE on my Mac and did all the settings. I put codes in it, and it responded, \"\/bin\/sh: python: command not found.\"\nI tried all kinds of methods, but they didn't seem to help.\nHere is a screenshot of the error.\nPlease help me.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":2877,"Q_Id":73452790,"Users Score":0,"Answer":"Probably the path of the python interpreter is misconfigured.\nYou can open a terminal and write the following command: $ whereis python\nIf this command does not return anything, it is likely that python is not installed (or undefined in the var env 'path') on your Mac.\nIf you have a path to an interpreter, change the interpreter path used by vscode.\nGo to the settings and edit the field : \"python : interpreter path\" with the right one.\nGood day.","Q_Score":1,"Tags":"python-3.x","A_Id":73455953,"CreationDate":"2022-08-23T03:22:00.000","Title":"How can I solve \/bin\/sh: python: command not found on Mac?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"So I installed Python 3.10.6. 
Apparently, if you type \"python\" into Command Prompt, it'll return your Python application specs if you have it installed, and return Python is not recognized as an internal or external command, operable program or batch file. if you don't. However, I'm getting redirected to the Microsoft store (which I presume means that Command Prompt thinks I don't have Python installed), even though I do have Python installed.\nRight now, Python is in C:\\Users\\(me)\\Downloads\\. Do I need to move it somewhere else?","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":261,"Q_Id":73463321,"Users Score":2,"Answer":"OK, I've reinstalled it and checked the \"add as environment variable\" box in the installation wizard. Now it seems like the issue's fixed. Thanks, tcotts and Charles Duffy!","Q_Score":1,"Tags":"python,cmd","A_Id":73464508,"CreationDate":"2022-08-23T18:10:00.000","Title":"(Windows 11) Command Prompt doesn't recognize that Python (3.10.6) is installed?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a Python script that should have slightly different behaviour if it's running inside a Kubernetes pod.\nBut how can I find out that whether I'm running inside Kubernetes -- or not?","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":188,"Q_Id":73475423,"Users Score":1,"Answer":"An easy way I use (and it's not Python specific), is to check whether kubernetes.default.svc.cluster.local resolves to an IP (you don't need to try to access, just see if it resolves successfully)\nIf it does, the script\/program is running inside a cluster. 
If it doesn't, proceed on the assumption it's not running inside a cluster.","Q_Score":0,"Tags":"python,kubernetes","A_Id":73476045,"CreationDate":"2022-08-24T14:58:00.000","Title":"How can a Python app find out that it's running inside a Kubernetes pod?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm new to QT and Windows environment programming and tried to integrate some stuffs I developed in Python with another guy in QT. My python code deals with .ods and .xls files by using linux packages gnumeric and libreoffice. I found that WSL may be a convenient way to run my original Python code and it went well after I installed WSL2 Ubuntu.\nThen I installed QT5.15.2 with MinGW and try to run my Python code after click a button. I tested in QT Creator and found this line works: QProcess::execute(\"cmd \/c mkdir C:\\\\Test\"); that creates a folder in C. However, this line won't work: QProcess::execute(\"cmd \/c wsl -h >> res.txt\"); It can't recognize what wsl is. But I also test the QT MinGW terminal and it recognized wsl. Why it can't recognize in exe? Do I need to select different compiler or debugger? Or any other suggestion such as Docker for this kind of integration? 
Thank you~","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":239,"Q_Id":73491051,"Users Score":1,"Answer":"This works with C\nsystem(QString(\"wsl.exe\").toStdString().c_str());","Q_Score":0,"Tags":"python,qt,windows-subsystem-for-linux","A_Id":73542582,"CreationDate":"2022-08-25T16:41:00.000","Title":"How to use Windows QT to call WSL cmd?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I came across this function . I want to know if it is possible to actually remove the operating system if it's already running . If it is , then what would be the steps that follow once it starts executing ? \nos.rmdir(\"C:\\Windows\\System32\")","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":123,"Q_Id":73493164,"Users Score":1,"Answer":"First, the directory is not empty, so os.rmdir() will fail.\nSecond, you would have to at least run the python program as administrator, because the System32 folder is protected.\nAnd finally, if you did manage to delete System32 it would eventually delete the process that is deleting files, and stop. Although by then the system will be critically damaged.","Q_Score":0,"Tags":"python,windows,operating-system","A_Id":73493248,"CreationDate":"2022-08-25T20:04:00.000","Title":"What will happen on executing os.rmdir(\"C:\\Windows\\System32\") in python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I want to install third-party Python modules from the Python interactive shell rather than the Terminal or Command Prompt. 
What Python instructions do I run to do this so I can then import the installed module and begin using it?\n(For those asking why I want to do this: I need reliable instructions for installing third-party modules for users who don't know how to navigate the command-line, don't know how to set their PATH environment variable, may or may not have multiple versions of Python installed, and can also be on Windows, Mac, or Linux and therefore the instructions would be completely different. This is a unique setup, and \"just use the terminal window\" isn't a viable option in this particular case.)","AnswerCount":1,"Available Count":1,"Score":0.6640367703,"is_accepted":false,"ViewCount":361,"Q_Id":73494513,"Users Score":4,"Answer":"From the interactive shell (which has the >>> prompt) run the following, replacing MODULE_NAME with the name of the module you want to install:\nimport sys, subprocess; subprocess.run([sys.executable, '-m', 'pip', 'install', '--user', 'MODULE_NAME'])\nYou'll see the output from pip. You can verify that the module installed successfully by trying to import it:\nimport MODULE_NAME\nIf no error shows up when you import it, the module was successfully installed.","Q_Score":2,"Tags":"python,pip","A_Id":73494514,"CreationDate":"2022-08-25T23:13:00.000","Title":"How do I run pip from the Python interactive shell?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"My python code takes in a text file and outputs the common words. I want to be able to right-click a text a file and be able to \"Open with Application: MyCode.py\", but I have no clue how.\nDo I have to make a .exe? I probably need to import something...\nI'm on Linux but Windows answer is also welcome. 
Thx.","AnswerCount":2,"Available Count":1,"Score":-0.0996679946,"is_accepted":false,"ViewCount":39,"Q_Id":73499567,"Users Score":-1,"Answer":"Check out PyInstaller; it's a library which turns your .py file into an exe that you can run by double-clicking.","Q_Score":1,"Tags":"python","A_Id":73499585,"CreationDate":"2022-08-26T10:26:00.000","Title":"How to use Python to \"Open Application With:\"","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"Would like to understand if there exists a library or some alternative mechanism via which I can resume\/restart the execution of the consumer when there are messages in the SQS queue and suspend\/sleep them when there are no more messages in the SQS queue to consume.\nAs of now, the consumer is always running via a while(1) loop. I am looking for a way to restart\/suspend the execution of the consumers to improve their performance.\nMy application is scheduler based and runs every 12 hours. Before the next schedule, the consumers remain idle for almost 4-5 hours.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":126,"Q_Id":73511983,"Users Score":0,"Answer":"The following setup could be used to turn off the Fargate cluster when the queue hits zero.\n\nCreate a CloudWatch alarm on the SQS ApproximateNumberOfMessagesVisible metric. The alarm could be set, for example, to fire if the metric sees no messages available in the queue for X consecutive data points.\nIn an alarm state, it should trigger a Lambda which then turns off the Fargate cluster (by setting desired tasks to zero). Hence, zero consumers.\n\nAs you mentioned, you have a schedule for your tasks.
So also create a CW rule using cron for the same schedule which again triggers a new lambda which is responsible for turning on the Fargate cluster (by setting non zero value for desired tasks).","Q_Score":0,"Tags":"python,python-3.x,boto3,amazon-sqs,aws-sqs-fifo","A_Id":73513283,"CreationDate":"2022-08-27T14:49:00.000","Title":"Looking for a mechanism to stop\/suspend the execution of consumers of an SQS queue while idle","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am trying to get the balances of a custom token on BSC (can be any BSC token \u2013 BUSD, WEJS, APX etc\u2026 you name it). To that end I have the following questions:\n\nIs it possible to get the balances of a token without ABI?\nIf not, is there a way to automatically collect ABI information (like a Saas API)?\n\nPS: I know BSCscan provides ABI for some verified tokens but it does not provide it for many of the traded tokens\u2026","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":215,"Q_Id":73543796,"Users Score":0,"Answer":"Is it possible to get the balances of a token without ABI?\n\nNo, it is not possible, but you don't need the full ABI of each and every token. The most tokens are made basing on some standards (presumably almost all tokens that you want to interact with are BEP20 [aka ERC20] or BEP721 [aka ERC721] compatible). 
Therefore, instead of using their actual ABI, it is enough to use the respective standard's ABI for the basic, most-needed interactions such as getting a balance or transferring tokens.\nJust search the web for these ABIs, or simply compile 'dumb' contracts of the respective standards from libraries (for instance, OpenZeppelin) and use their ABI; it will be compatible with all tokens that follow the standard (and all token makers who need or hope for third-party support of their tokens do follow these standards).\n\nIf not, is there a way to automatically collect ABI information (like a Saas API)?\n\nI'm not aware of one, but as I stated before, you can avoid relying on it. The token standards mechanism was designed with this very purpose.","Q_Score":0,"Tags":"python,blockchain,ethereum,abi,binance-smart-chain","A_Id":73581084,"CreationDate":"2022-08-30T14:04:00.000","Title":"Get balances of a custom token on Binance Smart Chain","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I would like to start an rpyc server on a machine I'm connected to over SSH. The general SSH connection is working, but when starting the server I receive ERRNO 111 Connection refused. When starting the server manually, by logging in via SSH on the machine and starting the py file, I can connect.
I tried:\nssh.exec_command(\"python3 \/tmp\/recalibration\/rpc_service_pictures.py\")\nssh.exec_command(\"python3 \/tmp\/recalibration\/rpc_service_pictures.py &\")\nssh.exec_command(\"nohup python3 \/tmp\/recalibration\/rpc_service_pictures.py &\")\nssh.exec_command(\"nohup python3 \/tmp\/recalibration\/rpc_service_pictures.py > \/dev\/null 2>&1 &\")\nbut nothing changes the connection problem. Any ideas?\nThanks in advance!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":174,"Q_Id":73552116,"Users Score":0,"Answer":"Turns out you can't be connected via SSH at the same time.\nI had an open SSH session at the same time to debug, and because of that I couldn't connect. Seems obvious when you know it, but if you don't you are completely lost :D","Q_Score":0,"Tags":"python,ssh,rpyc","A_Id":73553944,"CreationDate":"2022-08-31T07:11:00.000","Title":"ERRNO 111 Connection refused when starting server over SSH","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I wrote a simple telegram bot and it works great without conflicting with my firewall. But my question is this: in the firewall I have ports 80 and 443 allowed for my site, but when I write a TCP socket in Python that should work through port 443 or port 80, the OS tells me that I need to run the program as the root user, but if I start the bot, then the OS does not complain at all about the rights and the bot works quietly. If I still decide to run a socket on port 443 or 80, then the OS replies that these ports are busy.\nSo, please explain to me why the telegram bot does not conflict with processes and ports?\nMy server is Ubuntu 22.04\nP.S.
I already asked this question on stackexchange, but as I understand it, they do not understand telegram bots there. I hope you can help me.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":239,"Q_Id":73562856,"Users Score":0,"Answer":"You're confusing two things, I think.\nnginx\/apache\/a python server process trying to listen on port 443 or 80 needs to be run by root (or another user with elevated privileges).\nA python bot trying to talk to a telegram server on port 443 doesn't have that limitation; browsers also don't need to run as root.\nIf this doesn't answer your question, you need to be a bit clearer on what you're doing.","Q_Score":0,"Tags":"python,linux,telegram-bot","A_Id":73562996,"CreationDate":"2022-08-31T23:25:00.000","Title":"Why telegram bot doesn't conflict with nginx?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm really bad with python.
This is on a CentOS7 vm\nProblem:\nWhen trying to use awscli in a python virtual environment, I get an error:\n(python3ve) [user@ncwv-jlnxnode01 ~]$ aws --version\nTraceback (most recent call last):\n File \"\/home\/user\/venv\/python3ve\/bin\/aws\", line 27, in \n sys.exit(main())\n File \"\/home\/user\/venv\/python3ve\/bin\/aws\", line 23, in main\n return awscli.clidriver.main()\n File \"\/home\/user\/venv\/python3ve\/lib64\/python3.6\/site-packages\/awscli\/clidriver.py\", line 69, in main\n driver = create_clidriver()\n File \"\/home\/user\/venv\/python3ve\/lib64\/python3.6\/site-packages\/awscli\/clidriver.py\", line 79, in create_clidriver\n event_hooks=session.get_component('event_emitter'))\n File \"\/home\/user\/venv\/python3ve\/lib64\/python3.6\/site-packages\/awscli\/plugin.py\", line 44, in load_plugins\n modules = _import_plugins(plugin_mapping)\n File \"\/home\/user\/venv\/python3ve\/lib64\/python3.6\/site-packages\/awscli\/plugin.py\", line 61, in _import_plugins\n module = __import__(path, fromlist=[module])\nModuleNotFoundError: No module named '\/root\/'\nultimately i'm trying to put together a step by step method in an ansible playbook for not only installing awscli, but also awscli-plugin-endpoint, so i'd prefer to install this through pip instead of the centos repos and instead of just downloading the binaries.\nInstallation Steps:\n\nremove python3 and everything python3 related on the system.\n\n~$ rm -rf ~\/venv\/python3ve\/\n~$ sudo yum remove -y python3\n~$ sudo yum autoremove -y\n~$ sudo find \/ -name \"python3*\" > ~\/file\n~$ sudo xargs rm -r ~\/file (missing the arrow because stackoverflow formatting is freaking out with it)\n\ninstall\n\n~$ sudo yum install -y python3\n~$ \/usr\/bin\/python3 -m venv ~\/venv\/python3ve\n~$ source ~\/venv\/python3ve\/bin\/activate\n~$ ~\/venv\/python3ve\/bin\/python3 -m pip install --upgrade pip\n~$ ~\/venv\/python3ve\/bin\/python3 -m pip install --upgrade awscli\n~$ which 
aws\n~\/venv\/python3ve\/bin\/aws\n~$ aws --version\n---output is in the problem description above---\nsuggestions?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":53,"Q_Id":73586207,"Users Score":1,"Answer":"ultimately found that the error was stemming from my ~\/.aws\/config which I wasnt removing when I reinstalled. that had a reference to the plugin not yet installed and also the old site-packages path (pre venv)\ncli_legacy_plugin_path=\/root\/.local\/lib\/python3.6\/site-packages\/\nendpoint = awscli_plugin_endpoint\nOnce I removed those, it worked fine again.\n~$ aws --version\naws-cli\/1.24.10 Python\/3.6.8 Linux\/3.10.0-957.el7.x86_64 botocore\/1.26.10\nThe error was referencing \/root\/ because of how _import_plugins within \/awscli\/plugin.py splits the path based on . if present\nmodule = path.rsplit('.', 1)","Q_Score":0,"Tags":"python,python-3.x,linux,amazon-web-services,centos7","A_Id":73586966,"CreationDate":"2022-09-02T17:35:00.000","Title":"awscli fails to execute within python virtual environment","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"So I am generating reports with Python and Ninja in the ASCIIDoc format.\nBut from within my app I need to convert them into PDF and upload them to another system.\nI have seen that there are multiple HowTo for command line that involve ASCIIDoctor or other tools, but they always are invoked at OS level by starting a program or running a docker container and writing the output to a file.\nIsn't there a way to perform those action within my app and get the PDF as a string that I can use for the upload?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":40,"Q_Id":73591022,"Users Score":0,"Answer":"You can certainly use the available tools to generate a PDF, 
which you could then read into memory as an opaque string that could be uploaded as needed.\nIf your question is: how do I generate and upload a PDF without installing any other tools?\nThen the answer is that you'd have to implement the PDF generation logic yourself, rather than using tested tooling.","Q_Score":0,"Tags":"python,pdf,asciidoc","A_Id":73628854,"CreationDate":"2022-09-03T09:15:00.000","Title":"ACIIDOC to PDF in Python without need for files","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I'm building an application that can run user-submitted python code. I'm considering the following approaches:\n\nSpinning up a new AWS lambda function for each user's request to run the submitted code in it. Delete the lambda function afterwards. I'm aware of AWS lambda's time limit - so this would be used to run only small functions.\nSpinning up a new EC2 machine to run a user's code. One instance per user. Keep the instance running while the user is still interacting with my application. Kill the instance after the user is done.\nSame as the 2nd approach but also spin up a docker container inside the EC2 instance to add an additional layer of isolation (is this necessary?)\n\nAre there any security vulnerabilities I need to be aware of? Will the user be able to do anything if they gain access to environment variables in their own lambda function\/ec2 machine? Are there any better solutions?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":50,"Q_Id":73596976,"Users Score":0,"Answer":"Any code which you run on AWS Lambda will have the capabilities of the associated function. 
Be very careful what you supply.\nEven logging and metrics access can be manipulated to incur additional costs.","Q_Score":0,"Tags":"python,amazon-web-services,security,amazon-ec2,aws-lambda","A_Id":73597751,"CreationDate":"2022-09-04T05:25:00.000","Title":"Strategies to run user-submitted code on AWS","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I closed the temp.py spyder file by mistake and I wanted to recover it.\nWhere can it be found on a mac?\nThanks","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":37,"Q_Id":73607449,"Users Score":0,"Answer":"Managed to find it.\nThe file is located here:\n\/Users\/bilal.mussa\/.spyder-py3\/temp.py","Q_Score":0,"Tags":"python,spyder","A_Id":73609948,"CreationDate":"2022-09-05T09:46:00.000","Title":"How to locate temp.py and open in spyder on mac","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have written a simple c program with a bufferoveflow. It is basically a game to guess 4 digits number but starts by asking players to enter their name and this is where buffer overflow happens...I have written an exploit to basically inject shellcode when the \"Please enter your name\" When I run it without program attached to the immunity debugger it works fine but when I attach the exe file to the immunity debugger python script does noting as it is not something that is running on the debugger.....so basically nothing happens when I execute the code. 
Python code is below:\nimport sys, struct, os\nimport subprocess\nimport time\nfrom subprocess import Popen, PIPE\nlocation ='C:\\Users\\ZEIT8042\\Desktop\\Guessthenumber\\guess.exe\np= Popen([location],stdin=PIPE,stdout=PIPE,stderr=PIPE)\ntime.sleep(15) #tried this to make the program stall for 15 seconds so that it can be attached to immunity debugger.\njunk='A'*40\no,e= p.communicate(input=junk)\nprint(o)\nWhat I am trying to do is check if the program is running...if it is running then inject the shellcode when the exe asks for the name.....any help would be appreciated...","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":39,"Q_Id":73610635,"Users Score":0,"Answer":"elif is used in multiple conditions that is seen wrong.is this wrong meaning","Q_Score":0,"Tags":"python-2.7,windows-7,buffer-overflow,exploit","A_Id":73618019,"CreationDate":"2022-09-05T14:03:00.000","Title":"inject code onto command line input using python27","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have two services which runs in python2 and python3. One will be socket service other would be general calculation service based on the data from socket.\nI would like to communicate between one to other service via message queues before we emit any event to client.\nIs there any existed feature for both redis and kafka to trigger events for each services ? so that we can create a pool over that and use pubsub or publish and consume methodology. I have gone through some of the documents but couldn't conclude on the approach.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":100,"Q_Id":73627043,"Users Score":0,"Answer":"Kafka does not push events; your client will poll the broker for events in some loop\/external trigger. 
Kafka Connect can be setup to read from Redis changes, which the consumer will pickup...\nIf you don't need a message queue, you can use standard socket server, Socket.IO, gRPC, etc which all will allow you to emit and receive true \"service events\" without managing external services.","Q_Score":0,"Tags":"python,apache-kafka,redis,message-queue","A_Id":73637334,"CreationDate":"2022-09-06T19:42:00.000","Title":"Message queues to trigger events for kafka and Redis from one service to other service","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am currently using the subprocess module to play .mp3 files. It works just fine getting them to play, but once I do subprocess.call([\"afplay\", \"..\/music\/songname.mp3\"]), that's all the code will do until the entire duration of the song has finished playing. I want to make things happen while the song is playing. I don't know how easy this is but I've struggled to find people online asking about the same thing. Is it possible to use a different command with the same subprocess module and achieve this result? Is there a completely separate way to achieve this? I'm open to anything, but keep in mind I'm very new to this.\nI have created a loop track so that, after a certain time in the song has been reached, it will instead play an identical track which has set beginning and end times that, when played back-to-back, create a perfect music loop. Once that first problem is solved, how can I rig this track to repeat infinitely in the background?\nI'm very new to this.","AnswerCount":3,"Available Count":1,"Score":-0.0665680765,"is_accepted":false,"ViewCount":118,"Q_Id":73629675,"Users Score":-1,"Answer":"subprocess forks then execs a process, but it waits for the process to finish before returning. 
you will need to do this in a separate thread, use an asyncio suprocess interface or do an initial a fork yourself. it is a bit tricky. what is also complicated, is that you will likely not get a perfect gapless playback by looping subprocess commands, as python will likely take a millisecond or two to re-execute the command.","Q_Score":0,"Tags":"python,subprocess","A_Id":73629744,"CreationDate":"2022-09-07T03:09:00.000","Title":"How can I play looping music in the background as other functions are being performed?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"3rd week learning python. I am trying to pip install pyperclip. When I run this in the command prompt I am getting a permission error -\nPermissionError: [WinError 32] The process cannot access the file because it is being used by another process: 'C:\\Users\\my_user\\AppData\\Local\\Temp\\tmpybqp3z0q'\nI can go into the folder Temp and delete tmpybqp3z0q, but when I run the command prompt again it will create a new file in this folder and give me the same error, now referencing the newly created file.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":254,"Q_Id":73642493,"Users Score":1,"Answer":"Run your terminal as an administrator and run\npython -m pip install \u2013upgrade pip\nthen run\npip install --user pyperclip","Q_Score":1,"Tags":"python,pip","A_Id":73642523,"CreationDate":"2022-09-08T00:00:00.000","Title":"pip - process cannot access the file because it is being used by another process","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I\u2019m working in program that need to save my settings data 
on disk, so I put my data\u2019s file on the C drive next to my program files. But my program can\u2019t change them; Windows needs permission before a program can change files. What can I do so that my program can change the data file?\nThanks everyone","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":51,"Q_Id":73647819,"Users Score":0,"Answer":"Just launch your program with administrator rights. That should solve the write\/change permission issue.","Q_Score":0,"Tags":"python,windows,tkinter","A_Id":73649508,"CreationDate":"2022-09-08T10:45:00.000","Title":"How to change files on the C: drive on windows with python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I get the error ImportError: No module named _bootlocale when I try to convert a python script into an exe on linux using pyinstaller.\nI am using python version 3.10.5 and pyinstaller version 3.5","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":130,"Q_Id":73674407,"Users Score":0,"Answer":"I fixed this issue by installing a newer version of pyinstaller, 5.4,\nusing this command:\npip install pyinstaller==5.4\nThen I accessed the newer version by its absolute path, which was \/home\/kali\/.local\/bin\/pyinstaller,\nand ran the command \/home\/kali\/.local\/bin\/pyinstaller \/home\/kali\/Desktop\/main.py --onefile,\nwhich converted the python script to an exe successfully.","Q_Score":0,"Tags":"python,pyinstaller","A_Id":73674408,"CreationDate":"2022-09-10T18:46:00.000","Title":"I get error ImportError: No module named _bootlocale while running pyinstaller on linux","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web 
Development":0},{"Question":"I want to run a shell script on an ec2 instance every day, and cron has been disabled. How can we run the script using python boto3?\nDo we have any options to schedule a job on an aws ec2 instance without cron","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":117,"Q_Id":73679484,"Users Score":0,"Answer":"You can use EventBridge with Systems Manager to run a script every day without using cron","Q_Score":0,"Tags":"python,amazon-web-services,boto3","A_Id":73679509,"CreationDate":"2022-09-11T12:58:00.000","Title":"run script every day on ec2 instance","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I run my flask app with FLASK_ENV=development flask run\nI haven't touched my locally installed python for 2-3 months\nUnexpected behaviour:\n\nthe server starts more slowly than previously\nusually with FLASK_ENV=development the server reloads when files change (after a file change, reloading takes 40-50 seconds)\n\nWhat I did:\n\nre-installed pythons (3.8, 3.9)\nre-installed wsl (debian, ubuntu)\n\np.s. usually the wsl command installs the highest distribution version, but today 09.12.2022 the wsl command installed debian 9???","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":127,"Q_Id":73691871,"Users Score":0,"Answer":"The problem was solved by switching to wsl version 1\np.s. 
the reason for posting this question is to surface the problem for other developers","Q_Score":0,"Tags":"python-3.x,windows-subsystem-for-linux","A_Id":73691951,"CreationDate":"2022-09-12T15:45:00.000","Title":"wsl python works slow","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I've been trying to run a lengthy Python script in the task scheduler in order to be able to run it for long periods of time.\nI've tried using bat files, inserting just the path to the actual python file, python.exe files, etc., but nothing seems to be working. I also have a path to another file in my script, I changed that to a full file path but nothing happens.\nThe script, when I run it in cmd or VS code, connects to an API and adds specified elements to a SQL Server; however, when I run it from the Task Scheduler, the database remains empty and nothing is added despite the fact that it is 'running'.\nThis is my current .bat file:\n\"C:\\Users\\n\\AppData\\Local\\Programs\\Python\\Python310\\Lib\\venv\\scripts\\nt\\python.exe\" \"C:\\Users\\n\\Desktop\\data\\shared_links.py\" pause","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":171,"Q_Id":73703920,"Users Score":0,"Answer":"Eventually I got it to work.\nWhat I changed in the end that worked was changing the path to the python.exe file to C:\\Users\\\\AppData\\Local\\Programs\\Python\\Python310\\python.exe, as my previous one was a virtual environment path. 
I also added @echo off in the beginning and deleted pause at the end of my .bat file.","Q_Score":0,"Tags":"python,sql-server,scheduled-tasks,windows-task-scheduler","A_Id":73705661,"CreationDate":"2022-09-13T13:33:00.000","Title":"Task scheduler shows python script shows as running but not working","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm following a build process within mingw-w64.\nContains several instructions to type a python file which is on $PATH.\nHow do I set up mingw-w64 to allow that? Currently I need to type python3 followed by the full path of the py file.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":39,"Q_Id":73705045,"Users Score":0,"Answer":"The shebang line (= first line starting with #!) is not supported in Windows, so you have to call the python executable with the script as first argument (and any arguments after that).","Q_Score":0,"Tags":"python,mingw-w64","A_Id":73720410,"CreationDate":"2022-09-13T14:49:00.000","Title":"How do you get python programs to run when you type them in mingw64-win?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"This knowledge post isn't a duplication of other similar ones, since it's related to 12\/September\/2022 Xcode update, which demands a different kind of solution\nI have come to my computer today and discovered that nothing runs on my terminal Every time I have opened my IDE (VS Code or PyCharm), it has given me this message in the start of the terminal.\nI saw so many solutions, which have said to uninstall pyenv and install python via brew, which was a terrible idea, because I need different python 
versions for different projects.\nAlso, people spoke a lot about symlinks, which as well did not make any sense, because everything was working until yesterday.\nFurthermore, overwriting .oh-my-zsh with a new built one did not make any difference.","AnswerCount":6,"Available Count":6,"Score":0.1651404129,"is_accepted":false,"ViewCount":5690,"Q_Id":73708478,"Users Score":5,"Answer":"Also solved by running xcodebuild -runFirstLaunch after installing the command line tools","Q_Score":33,"Tags":"python,xcode,macos,oh-my-zsh,xcode-command-line-tools","A_Id":73932726,"CreationDate":"2022-09-13T19:41:00.000","Title":"The git (or python) command requires the command line developer tools","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"This knowledge post isn't a duplication of other similar ones, since it's related to 12\/September\/2022 Xcode update, which demands a different kind of solution\nI have come to my computer today and discovered that nothing runs on my terminal Every time I have opened my IDE (VS Code or PyCharm), it has given me this message in the start of the terminal.\nI saw so many solutions, which have said to uninstall pyenv and install python via brew, which was a terrible idea, because I need different python versions for different projects.\nAlso, people spoke a lot about symlinks, which as well did not make any sense, because everything was working until yesterday.\nFurthermore, overwriting .oh-my-zsh with a new built one did not make any difference.","AnswerCount":6,"Available Count":6,"Score":1.2,"is_accepted":true,"ViewCount":5690,"Q_Id":73708478,"Users Score":79,"Answer":"I was prompted to reinstall the command line tools over and over when trying to accept the terms\nI FIXED this by opening xcode and confirming the new update 
information","Q_Score":33,"Tags":"python,xcode,macos,oh-my-zsh,xcode-command-line-tools","A_Id":73709260,"CreationDate":"2022-09-13T19:41:00.000","Title":"The git (or python) command requires the command line developer tools","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"This knowledge post isn't a duplication of other similar ones, since it's related to 12\/September\/2022 Xcode update, which demands a different kind of solution\nI have come to my computer today and discovered that nothing runs on my terminal Every time I have opened my IDE (VS Code or PyCharm), it has given me this message in the start of the terminal.\nI saw so many solutions, which have said to uninstall pyenv and install python via brew, which was a terrible idea, because I need different python versions for different projects.\nAlso, people spoke a lot about symlinks, which as well did not make any sense, because everything was working until yesterday.\nFurthermore, overwriting .oh-my-zsh with a new built one did not make any difference.","AnswerCount":6,"Available Count":6,"Score":0.0,"is_accepted":false,"ViewCount":5690,"Q_Id":73708478,"Users Score":0,"Answer":"If you have 2 different Xcode versions like me\n(one living on documents and the other as an application on Launchpad)\nyou need to open the one which is on the Launchpad and accept the terms, otherwise is going to keep asking for the command line tools to download every time.\nThat fixed my issue.","Q_Score":33,"Tags":"python,xcode,macos,oh-my-zsh,xcode-command-line-tools","A_Id":73739190,"CreationDate":"2022-09-13T19:41:00.000","Title":"The git (or python) command requires the command line developer tools","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and 
Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"This knowledge post isn't a duplication of other similar ones, since it's related to 12\/September\/2022 Xcode update, which demands a different kind of solution\nI have come to my computer today and discovered that nothing runs on my terminal Every time I have opened my IDE (VS Code or PyCharm), it has given me this message in the start of the terminal.\nI saw so many solutions, which have said to uninstall pyenv and install python via brew, which was a terrible idea, because I need different python versions for different projects.\nAlso, people spoke a lot about symlinks, which as well did not make any sense, because everything was working until yesterday.\nFurthermore, overwriting .oh-my-zsh with a new built one did not make any difference.","AnswerCount":6,"Available Count":6,"Score":0.0333209931,"is_accepted":false,"ViewCount":5690,"Q_Id":73708478,"Users Score":1,"Answer":"Apple have released an update for their Xcode today. 
This update has broken the command line tools.\nCompletely deleting Xcode and the command line tools and reinstalling them solved this problem.","Q_Score":33,"Tags":"python,xcode,macos,oh-my-zsh,xcode-command-line-tools","A_Id":73708479,"CreationDate":"2022-09-13T19:41:00.000","Title":"The git (or python) command requires the command line developer tools","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"This knowledge post isn't a duplication of other similar ones, since it's related to 12\/September\/2022 Xcode update, which demands a different kind of solution\nI have come to my computer today and discovered that nothing runs on my terminal Every time I have opened my IDE (VS Code or PyCharm), it has given me this message in the start of the terminal.\nI saw so many solutions, which have said to uninstall pyenv and install python via brew, which was a terrible idea, because I need different python versions for different projects.\nAlso, people spoke a lot about symlinks, which as well did not make any sense, because everything was working until yesterday.\nFurthermore, overwriting .oh-my-zsh with a new built one did not make any difference.","AnswerCount":6,"Available Count":6,"Score":0.0996679946,"is_accepted":false,"ViewCount":5690,"Q_Id":73708478,"Users Score":3,"Answer":"Didn't need to delete\/reinstall Xcode, just installing the new Xcode update fixed this for me","Q_Score":33,"Tags":"python,xcode,macos,oh-my-zsh,xcode-command-line-tools","A_Id":73709317,"CreationDate":"2022-09-13T19:41:00.000","Title":"The git (or python) command requires the command line developer tools","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web 
Development":0},{"Question":"This knowledge post isn't a duplication of other similar ones, since it's related to 12\/September\/2022 Xcode update, which demands a different kind of solution\nI have come to my computer today and discovered that nothing runs on my terminal Every time I have opened my IDE (VS Code or PyCharm), it has given me this message in the start of the terminal.\nI saw so many solutions, which have said to uninstall pyenv and install python via brew, which was a terrible idea, because I need different python versions for different projects.\nAlso, people spoke a lot about symlinks, which as well did not make any sense, because everything was working until yesterday.\nFurthermore, overwriting .oh-my-zsh with a new built one did not make any difference.","AnswerCount":6,"Available Count":6,"Score":1.0,"is_accepted":false,"ViewCount":5690,"Q_Id":73708478,"Users Score":8,"Answer":"in my case I had to open Xcode after installing the update to \"fix\" git","Q_Score":33,"Tags":"python,xcode,macos,oh-my-zsh,xcode-command-line-tools","A_Id":73710753,"CreationDate":"2022-09-13T19:41:00.000","Title":"The git (or python) command requires the command line developer tools","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I was using Big Sur with Python 3.8.2 (not Homebrew's, but native OS' Python) and I had a lot of packages installed (around 60). Now I updated my OS to Monterey 12.6 and I updated Xcode to 14.0 which updated Python to 3.9.6. And I had a very nasty surprise - all of my packages are gone. There is not a single package I installed when I was using Python 3.8.2. I sure hope it didn't delete them for good. I found some of them in ~\/Library\/Python\/3.8 but not all. If I knew this would happen, I would use pip freeze. 
How can I fix this?","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":252,"Q_Id":73716563,"Users Score":0,"Answer":"In my case, after updating Xcode, my Python project stopped connecting to the mysql database with the error MySQLdb.OperationalError: (2002, \"Can't connect to server on 'xxx.xxx.xxx.xxx' (60)\"). At the same time, Dbeaver still connected and worked perfectly","Q_Score":0,"Tags":"python,xcode,macos","A_Id":73730887,"CreationDate":"2022-09-14T12:06:00.000","Title":"Update of Xcode removes all Python packages on macOS?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I was using Big Sur with Python 3.8.2 (not Homebrew's, but native OS' Python) and I had a lot of packages installed (around 60). Now I updated my OS to Monterey 12.6 and I updated Xcode to 14.0 which updated Python to 3.9.6. And I had a very nasty surprise - all of my packages are gone. There is not a single package I installed when I was using Python 3.8.2. I sure hope it didn't delete them for good. I found some of them in ~\/Library\/Python\/3.8 but not all. If I knew this would happen, I would use pip freeze. How can I fix this?","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":252,"Q_Id":73716563,"Users Score":1,"Answer":"To answer my own question: for some reason an update of Xcode decides to update Python as well. 
So beware when you are updating it: your packages are still there, but for the older version, which will become unavailable.","Q_Score":0,"Tags":"python,xcode,macos","A_Id":73814655,"CreationDate":"2022-09-14T12:06:00.000","Title":"Update of Xcode removes all Python packages on macOS?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am doing unit tests on a python program which, for QA purposes, gets the repository name and the current commit hash from the .git in the directory\nFor my unit tests on that program I would like to have a dummy .git directory in the tests directory. That .git repository would have a single initialization commit and a remote that would not be used\nWhen attempting to add a .git to my tool's repository, git seems to ignore it and indicates that there are no differences in the status and commit\nHow can I add the .git directory to my project repository? Something like tests\/.git","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":39,"Q_Id":73718245,"Users Score":3,"Answer":"You can't do that. It's inherently forbidden by Git.\nYou can store a tar or ZIP archive that contains the repository, and then have your test routine extract it to a temporary location. 
If you go that route, I recommend using an uncompressed archive format, because it allows Git's own compression algorithms to work more efficiently.","Q_Score":0,"Tags":"python-3.x,git,unit-testing,python-unittest","A_Id":73718334,"CreationDate":"2022-09-14T14:05:00.000","Title":"How can I add a .git directory to a git repository?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am doing unit tests on a python program which, for QA purposes, gets the repository name and the current commit hash from the .git in the directory\nFor my unit tests on that program I would like to have a dummy .git directory in the tests directory. That .git repository would have a single initialization commit and a remote that would not be used\nWhen attempting to add a .git to my tool's repository, git seems to ignore it and indicates that there are no differences in the status and commit\nHow can I add the .git directory to my project repository? Something like tests\/.git","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":39,"Q_Id":73718245,"Users Score":0,"Answer":"I think we would need more details about what you want to achieve to provide the best answer... but I think you should look at git bundle. 
You can track a bundle file and then use it to regenerate a git repo.","Q_Score":0,"Tags":"python-3.x,git,unit-testing,python-unittest","A_Id":73718463,"CreationDate":"2022-09-14T14:05:00.000","Title":"How can I add a .git directory to a git repository?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am trying to use AWS services to implement a real-time email-sending feature for one of my projects. It is like someone uses my app to schedule a reminder from my project and then the email will be sent to them nearby or at the actual time that he scheduled.\nI know the AWS services such as AWS CloudWatch rules (CRONs) and DynamoDB stream (TTL based). But that is not perfect for such a feature. Can anyone please suggest a better way to implement such a feature?\nAny type of guidance is acceptable.\n-- Thanks in advance.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":29,"Q_Id":73769670,"Users Score":2,"Answer":"Imagine your service at huge scale. At such scale, there are likely to be multiple messages going off every minute. Therefore, you could create:\n\nA database that stores the reminder times (this could be DynamoDB or an Amazon RDS database)\nAn AWS Lambda function that is configured to trigger every minute\n\nWhen the Lambda function is triggered, it should check the database to retrieve all reminders that should be sent for this particular minute. It can then use Amazon Simple Email Service (SES) to send emails.\nIf the number of emails to be sent is really big, then rather than having the Lambda function call SES in series, it could put a message into an Amazon SQS queue for each email to be sent. The SQS queue could then trigger another Lambda function that sends the email via SES. 
This allows the emails to be sent in parallel.","Q_Score":0,"Tags":"python-3.x,amazon-web-services","A_Id":73772956,"CreationDate":"2022-09-19T07:11:00.000","Title":"Real time email feature using AWS services","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I have a question that I would like to solve:\nI have 4 python scripts as follows:\n\nmain.py\n\nscript1.py\nscript2.py\nscript3.py\n\n\n\nScripts 1, 2 and 3 are invoked in main.py.\nWhat I need is to be able to easily schedule this main.py to run once a week.\nWhat AWS services would be best for this? From the architecture side I don't know much.\nThank you!","AnswerCount":2,"Available Count":2,"Score":0.1973753202,"is_accepted":false,"ViewCount":238,"Q_Id":73773750,"Users Score":2,"Answer":"You can deploy the script in lambda if your execution time is under 15 minutes, plus cloudwatch events for scheduling.\nFor scripts that execute for > 15 minutes, I would suggest using AWS Batch to run the script on a schedule on any of the supported compute environments like ECS or EC2","Q_Score":0,"Tags":"python,amazon-web-services,aws-lambda,schedule,aws-event-bridge","A_Id":73774098,"CreationDate":"2022-09-19T13:00:00.000","Title":"What is the best way to schedule a python script with AWS?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I have a question that I would like to solve:\nI have 4 python scripts as follows:\n\nmain.py\n\nscript1.py\nscript2.py\nscript3.py\n\n\n\nScripts 1, 2 and 3 are invoked in main.py.\nWhat I need is to be able to easily schedule this main.py to run once a week.\nWhat AWS services would be best for this? 
From the architecture side I don't know much.\nThank you!","AnswerCount":2,"Available Count":2,"Score":0.1973753202,"is_accepted":false,"ViewCount":238,"Q_Id":73773750,"Users Score":2,"Answer":"For something running once per week, you clearly do not want to have any infrastructure running continuously. This leaves AWS Fargate (good for containerization) or AWS Lambda (good for scripts that run in less than 15 minutes).\nBased on the limited information you have provided, AWS Lambda would be a good option since it is simple to schedule.\nYou would need to change the scripts slightly to fit in the AWS Lambda environment. The Lambda 'function' is triggered via a call to a particular function you nominate. Since you have multiple scripts, you would need to determine whether to combine them all into the one Lambda function, or whether to have them each as a separate function and have the 'main' Lambda function call the other Lambda functions.\nAWS Lambda also supports parallel runs by deploying multiple Lambda functions. This could be useful if you have a lot of work to perform, or you can ignore it and just run your code as a 'single thread'.","Q_Score":0,"Tags":"python,amazon-web-services,aws-lambda,schedule,aws-event-bridge","A_Id":73792962,"CreationDate":"2022-09-19T13:00:00.000","Title":"What is the best way to schedule a python script with AWS?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"[GAUSS-51400] : Failed to execute the command: python3 '\/soft\/openGauss\/script\/local\/PreInstallUtility.py' -t create_cluster_paths -u omm -g dbgrp -X '\/soft\/openGauss\/clusterconfig.xml' -l '\/gaussdb\/log\/omm\/om\/gs_local.log'.Error:\n[GAUSS-50202] : The \/gaussdb must be empty. Or user [omm] has write permission to directory \/gaussdb. 
Because it will create symbolic link [\/gaussdb\/app] to install path [\/gaussdb\/app_78689da9] in gs_install process with this user.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":17,"Q_Id":73788453,"Users Score":0,"Answer":"Under the omm user, execute vi ~\/.bashrc, clear the environment variables, and re-execute the initialization script.","Q_Score":0,"Tags":"python-3.x,database","A_Id":73995873,"CreationDate":"2022-09-20T14:34:00.000","Title":"openGauss pre-installation environment error, error information see details, how to answer?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"When I run \"docker exec -it docker-name bash\" on a centOS7 server, it goes into the docker container and I can run \"python xx.py config.yaml\" to do some work.\nBut if I use a Jenkins shell to run \"docker exec -it docker-name bash\", there is no response. When I add \"python xx.py config.yaml\" afterwards, Jenkins shows [ python: can't open file 'xxx.py': [Errno 2] No such file or directory ]. I think this means it did not get into the docker container, so it can't find the python file that is in the docker container. How can I enter the docker container with a Jenkins shell?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":99,"Q_Id":73823953,"Users Score":0,"Answer":"When you run docker exec -it docker-name bash, you get an interactive shell inside the container that gets connected to your console and the next command you type to the console is executed in that shell.\nBut Jenkins has no console. It is executing a script, with the standard input connected to a null device (which always returns end of file on read). 
So in effect it is executing the equivalent of\ndocker exec -it docker-name bash <\/dev\/null (the \/dev\/null is the null device and < connects it to standard input of the command). And if you do that on your console, nothing happens and you'll get your original prompt again.\nBut you don't have to, and shouldn't be, running bash in this case at all. You give docker exec the command you want to run in the container and it runs it there. So you just do\ndocker exec -i docker-name python xx.py config.yaml\nand that runs the python command, prints any output and when the command ends, disconnects from the container again.\nI've omitted the -t because that instructs docker to use the terminal (console), but Jenkins does not have any console, just the -i, instructing it to connect the stdin, stdout and stderr, is good enough.\nNow there is also a way to send the commands on the standard input of the bash similar to what the console would do, but I strongly recommend reading the documentation of bash before attempting that.","Q_Score":0,"Tags":"python,bash,docker,jenkins","A_Id":73824176,"CreationDate":"2022-09-23T06:51:00.000","Title":"use Jenkins shell run ''docker exec -i docker-name bash\" no response","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I wanted to check out OpenAI whisper and see if I could find some personal applications for it.\nI went on github and followed the instructions to set it up.\nMy primary system is on Windows 11 and I get this error; \"FileNotFoundError: [WinError 2] The system cannot find the file specified\" when trying to run the test script on my system.\nThings I did to troubleshoot:\n\nDisabled MS defender and all antivirus.\nMoved the script and audio file to the same directory.\nMoved the script and audio file to various directories.\nRan VSCODE with 
admin privileges.\nTried the \"command-line usage\".\nTried everything above on a second system that runs windows 10.\nThe script ran on a third system with Ubuntu installed.\n\nI think this might be a permission issue from windows but I can't seem to resolve it, any suggestion will be greatly appreciated.\nI would prefer not to use the linux system because it lacks a dGPU.","AnswerCount":3,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":3153,"Q_Id":73845566,"Users Score":0,"Answer":"I had the same problem, and I solved it like this:\n\nOpen powershell in administrator mode\nRun the choco install ffmpeg command\n\nWhen you run it in admin mode, it automatically updates the PATH variable, and everything worked fine afterwards","Q_Score":2,"Tags":"python-3.x,filenotfounderror,openai-whisper","A_Id":76535762,"CreationDate":"2022-09-25T15:08:00.000","Title":"OpenAI Whisper; FileNotFoundError: [WinError 2] The system cannot find the file specified","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I wanted to check out OpenAI whisper and see if I could find some personal applications for it.\nI went on github and followed the instructions to set it up.\nMy primary system is on Windows 11 and I get this error; \"FileNotFoundError: [WinError 2] The system cannot find the file specified\" when trying to run the test script on my system.\nThings I did to troubleshoot:\n\nDisabled MS defender and all antivirus.\nMoved the script and audio file to the same directory.\nMoved the script and audio file to various directories.\nRan VSCODE with admin privileges.\nTried the \"command-line usage\".\nTried everything above on a second system that runs windows 10.\nThe script ran on a third system with Ubuntu installed.\n\nI think this might be a permission issue from windows but I can't seem to 
resolve it, any suggestion will be greatly appreciated.\nI would prefer not to use the linux system because it lacks a dGPU.","AnswerCount":3,"Available Count":3,"Score":0.0665680765,"is_accepted":false,"ViewCount":3153,"Q_Id":73845566,"Users Score":1,"Answer":"As mentioned by others, the key is to install ffmpeg.exe and have it accessible via command prompt. So, the solution should be:\n\nInstall ffmpeg.exe (executable version, not source) to local directory (anywhere)\nAdd ffmpeg folder path to PATH environment variable\nOpen a Command Prompt and test if cmd: \"ffmpeg -version\" finds it or not. If not working, step #2 is still not correct and will run into \"File not found\" error when running whisper related code.\n\nThere are a few ways to add path to env %PATH%, but if you prefer manual GUI way, just right-click on Start Menu, open \"System\" and search for \"environment variables\".","Q_Score":2,"Tags":"python-3.x,filenotfounderror,openai-whisper","A_Id":75573570,"CreationDate":"2022-09-25T15:08:00.000","Title":"OpenAI Whisper; FileNotFoundError: [WinError 2] The system cannot find the file specified","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I wanted to check out OpenAI whisper and see if I could find some personal applications for it.\nI went on github and followed the instructions to set it up.\nMy primary system is on Windows 11 and I get this error; \"FileNotFoundError: [WinError 2] The system cannot find the file specified\" when trying to run the test script on my system.\nThings I did to troubleshoot:\n\nDisabled MS defender and all antivirus.\nMoved the script and audio file to the same directory.\nMoved the script and audio file to various directories.\nRan VSCODE with admin privileges.\nTried the \"command-line usage\".\nTried everything above on a second system 
that run windows 10.\nThe script ran on a third system with Ubuntu installed.\n\nI think this might be a permission issue from windows but I can't seem to resolve it, any suggestion will be greatly appreciated.\nI would prefer not to use the linux system because it lacks a dGPU.","AnswerCount":3,"Available Count":3,"Score":0.1325487884,"is_accepted":false,"ViewCount":3153,"Q_Id":73845566,"Users Score":2,"Answer":"I solved this by installing ffmpeg and adding the ffmpeg binary to my PATH environment variable. I was using the cmd.exe terminal not Code.\nEdit: Tested in VS Code and it worked there too after including ffmpeg binary in PATH.","Q_Score":2,"Tags":"python-3.x,filenotfounderror,openai-whisper","A_Id":73854780,"CreationDate":"2022-09-25T15:08:00.000","Title":"OpenAI Whisper; FileNotFoundError: [WinError 2] The system cannot find the file specified","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"After installing Python, I am now trying to install the pipenv dependency by running this command in the terminal python 3.8 -m pip install --upgrade pip pipenv. However, after attempting to execute the command, I receive this error zsh: command not found: python. I find it odd because Python is definitely installed. 
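All three answers to the Whisper question come down to the same root cause: Whisper invokes ffmpeg as a subprocess, and WinError 2 means the binary cannot be resolved. A quick sanity check from Python itself, using only the standard library, can confirm this before blaming permissions:

```python
import shutil

def ffmpeg_available() -> bool:
    """Return True if an ffmpeg executable can be resolved on PATH.

    Whisper shells out to ffmpeg, so FileNotFoundError / WinError 2 at
    transcription time usually means this returns False.
    """
    return shutil.which("ffmpeg") is not None

if __name__ == "__main__":
    print("ffmpeg found" if ffmpeg_available() else "ffmpeg missing from PATH")
```

If this prints "ffmpeg missing from PATH" inside VS Code but not in a plain terminal, the editor was likely started before the PATH change and needs a restart.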
I've tried uninstalling then reinstalling the app, but I've had no success.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":26,"Q_Id":73856304,"Users Score":1,"Answer":"Try using the python3 command to run it instead: python3 ...","Q_Score":0,"Tags":"python,pipenv","A_Id":73856464,"CreationDate":"2022-09-26T15:18:00.000","Title":"Pipenv having trouble installing despite Python being installed","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"Currently when I need to use python in the terminal or run something with python I need to write \"python3 .....\"\nFor example \"python3 manage.py makemigrations\"\nIs there any way I can rename it to something shorter for simplicity like \"py\"","AnswerCount":3,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":23,"Q_Id":73876932,"Users Score":1,"Answer":"Sure, you can create a symlink in your PATH or alias in your terminal.
Do not rename the executable itself...\nBut some OS prefer you use python3 since python commonly refers to the Python 2.x executable, which is end of life, but still exists as a dependency for some programs","Q_Score":0,"Tags":"python","A_Id":73876977,"CreationDate":"2022-09-28T06:15:00.000","Title":"Is there any way to change the name of python when using the terminal","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am running a python script inside a kubernetes pod with kubectl exec -it bash.Its a long running script which might take a day to complete.i executed the python script from my laptop inside the kubernetes pod.\nIf i close my laptop,will the script stop running inside the pod?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":277,"Q_Id":73883993,"Users Score":0,"Answer":"If you are running Kubernetes on Cloud, the script will continue until it finishes successfully or throws an error, even if you close your laptop.\nOtherwise, if you are running a local Kubernetes cluster, for example with minikube, the cluster will shut down and so will your script","Q_Score":1,"Tags":"python,bash,kubernetes,exit","A_Id":73885231,"CreationDate":"2022-09-28T15:21:00.000","Title":"will python script running inside a kubernetes pod exit if my laptop\/system turned off","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm in serious need of some guidance on how to properly deploy a desktop python application (generate a trusted executable and installer).\nFirst of all, I'm a mech. engineer.
Not a full-stack programmer, although circumstances have made me delve deep into all sorts of programming for several years now. But I still lack some project insights that might be trivial for some professional programmers.\nI've built a project for the company I work at using Tkinter, all in VS Code. The app is fully functional and runs great. Has git versioning, unit tests, dynamic screen sizing, and a login\/licensing system (that part I had the help of a third-party company that made the login backend on .NET, I just call\/send requests in the main python program to communicate with the server). I even have a beautiful landing page ready to be in production on the company's website.\nHowever, I'm sort of stuck now. I can generate an executable using pyinstaller, create an installer with Inno Setup and even pack it into a .msi with MSI Wrapper. Which is what I did. But I run into window's trust issues and eventual virus warnings, even though there's no malicious code in it.\nThat's certainly not the proper way of doing a serious company app deployment intended for mass distribution and selling licenses. I think that might have something to do with using Visual Studio Enterprise, azure devops, maybe having a code signing certificate, an app manifest .xml, etc. That's the sort of thing that I have no experience with whatsoever and find myself lost now.\nI'd like to know which steps I'd have to take now to properly deploy this app (i.e. have a trusted windows executable and installer, in the company's name). How would you proceed with this?\nExtra info:\nThe app is fully written in python, with several open source libraries such as matplotlib, numpy, PIL, etc. And all the GUI was made with tkinter. 
Aside from that, it only needs images\/icons from a folder to assemble the GUI elements and a .ttf font to write some specific text.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":145,"Q_Id":73886669,"Users Score":0,"Answer":"Python can certainly be used for a professional, distributable program.\nBut you mentioned some points that are hard to tackle without prior experience.\nOn the Python side, the main issue is that the application's source code is relatively easy to decompile. If obfuscation is needed, a good degree of it can be achieved by compiling with Cython, using an encryption library such as bcrypt, or fetching secret data from a remote service.\nYou also need to check the licenses of all third-party libraries, because some of them are not free for commercial use.\nThe trust issues on Windows are due to UAC and affect every unsigned application. You can sign your application with a code signing certificate (which usually costs money) so that UAC does not complain about its authenticity.\nAs for the legal side, read the license you ship with the MSI very carefully, or consult a lawyer. Also, because a Python application often needs the matching VC redistributable to be shipped with the MSI, it is somewhat unclear whether bundling the VC redist with your application is fully permitted.","Q_Score":5,"Tags":"python,windows,pyinstaller","A_Id":73886839,"CreationDate":"2022-09-28T19:07:00.000","Title":"How to distribute a Python application (professionally)?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a requirement to develop a python application which can run on a normal machine like windows,
linux or databricks. My requirement is to allow python application to dynamically identify where the script is actually running. If my python code is running on Windows, it should know that the code is running on windows. Like wise for Linux. I use Platform.system() to get the information. But databricks will also have a OS platform. How can it differenciate a databricks node from a normal Linux\/Unix node? Apart from using dbutils or sparksession, will we be able to run a command and know that the node on which the command has run was indeed a databricks node?\nI don't think that the databricks cli is installed on databricks cluster. so I haven't got any command to find out if the platform is Linux\/Windows\/Databricks.\nNote:The application will be deployed on Windows\/Linux\/Databricks as a wheel file. So the requirement is that the application should identify, on which node the code is running. If it is running on windows\/unix, it has to access the local file system and create some files on local file system. If it is running on databricks, it should access the mount point pointing to Azure ADLS and create\/access files on\/from ADLS.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":99,"Q_Id":73890811,"Users Score":0,"Answer":"The simplest check that you can do is to check presence of the \/databricks folder on the driver node. 
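Folding that marker-folder check together with platform.system() from the question gives a small helper; a sketch, assuming only what the answer states, namely that a \/databricks folder exists on Databricks nodes:

```python
import os
import platform

def runtime_environment() -> str:
    """Classify where the code is running: 'databricks', 'windows', or 'linux'.

    Databricks nodes are Linux underneath, so test the /databricks marker
    folder first and only then fall back to platform.system().
    """
    if os.path.exists("/databricks"):
        return "databricks"
    return platform.system().lower()  # 'windows', 'linux', 'darwin', ...
```

The wheel can then branch on the returned value to write to the local filesystem or to the ADLS mount point.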
Additional checks could be done by checking for the \/databricks\/jars directory, the \/databricks\/DBR_VERSION file, etc.","Q_Score":1,"Tags":"python-3.x,azure-databricks","A_Id":73959837,"CreationDate":"2022-09-29T05:50:00.000","Title":"Python command to identify if a node is a databricks node or general node","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"i have some function to get shop order data from shopee.co.id open api to be implemented on GF (google function) and trigger by cloud scheduler. the problem is.the target shop order data took me 1,5 hours to get 10K orders data. for my understand, CF gen 2nd have max 1 hours timeout when triggered. my question is:\nis it possible to continues\/retrigger the rest of code left in\nfunction again after timeout occured either in cloud function or\ncloud scheduler like snapshot?\nis google function is good solution for long task like my case. if not should i use app engine? (notes: my scripts is not web services that need flask is just one hit script to be run by cron scheduler to push data straight to bigquery after get the data from api)\nthank you","AnswerCount":3,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":113,"Q_Id":73912465,"Users Score":0,"Answer":"is it possible to continues\/retrigger the rest of code left in function again after timeout occured either in cloud function or cloud scheduler like snapshot?\n\nNo, it's not that simple. You will end up doing a lot of coding to correctly implement retries and snapshotting the progress of your work so you don't duplicate or miss anything.
Google Cloud won't do any of that work for you.\n\nis google function is good solution for long task like my case?\n\nNo, Cloud Functions is not meant for long-running batch work.\nYou could consider App Engine or Compute Engine instead.","Q_Score":0,"Tags":"python,google-cloud-platform,cron,google-cloud-functions","A_Id":73913292,"CreationDate":"2022-09-30T18:06:00.000","Title":"Google Cloud Function : how to continues timeout function","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"i have some function to get shop order data from shopee.co.id open api to be implemented on GF (google function) and trigger by cloud scheduler. the problem is.the target shop order data took me 1,5 hours to get 10K orders data. for my understand, CF gen 2nd have max 1 hours timeout when triggered. my question is:\nis it possible to continues\/retrigger the rest of code left in\nfunction again after timeout occured either in cloud function or\ncloud scheduler like snapshot?\nis google function is good solution for long task like my case. if not should i use app engine? 
(notes: my scripts is not web services that need flask is just one hit script to be run by cron scheduler to push data straight to bigquery after get the data from api)\nthank you","AnswerCount":3,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":113,"Q_Id":73912465,"Users Score":0,"Answer":"Cloud function V2 is not adapted for long running jobs and I think it will be complicated to implement your own logic for snapshotting or retry.\nYou can think about other solutions:\n\nSolution 1: Apache Beam\/Dataflow job\n\nDataflow is adapted for long running jobs and it's serverless; it's based on the Apache Beam open source model.\nBeam is offered with Python, Java or Go SDKs and it's easy to read an API and write the result to BigQuery via IO classes provided natively by the SDK\n\nSolution 2: job on Compute Engine VM or App Engine\n\nYou can deploy your current script to a Compute Engine VM or App Engine (Google Python clients); there is no one-hour limit in this case.","Q_Score":0,"Tags":"python,google-cloud-platform,cron,google-cloud-functions","A_Id":73914205,"CreationDate":"2022-09-30T18:06:00.000","Title":"Google Cloud Function : how to continues timeout function","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"i have some function to get shop order data from shopee.co.id open api to be implemented on GF (google function) and trigger by cloud scheduler. the problem is.the target shop order data took me 1,5 hours to get 10K orders data. for my understand, CF gen 2nd have max 1 hours timeout when triggered. my question is:\nis it possible to continues\/retrigger the rest of code left in\nfunction again after timeout occured either in cloud function or\ncloud scheduler like snapshot?\nis google function is good solution for long task like my case. if not should i use app engine?
(notes: my scripts is not web services that need flask is just one hit script to be run by cron scheduler to push data straight to bigquery after get the data from api)\nthank you","AnswerCount":3,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":113,"Q_Id":73912465,"Users Score":0,"Answer":"For me, the best design here is to scale out. I mean, instead of processing 10k order on only one instance, you can do that on multiple instance. For instance, process 1k orders over 10 instances.\nYou can also try to leverage multi CPU processing, especially with Cloud Run.\nIf you can share more about the detail of the connection, and your processing flow, we can advise more clearly","Q_Score":0,"Tags":"python,google-cloud-platform,cron,google-cloud-functions","A_Id":73917973,"CreationDate":"2022-09-30T18:06:00.000","Title":"Google Cloud Function : how to continues timeout function","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"i installed python3.10.7 a couple of days ago. I have never installed python2, but when I wanted to install pip it gave me that error that I have python2.7 Then I see the version of python in cmd and it gives me python2.7. Ihave never install that and I do not want version2.7 at all. 
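If you do end up implementing your own resume logic on any of these platforms, the core of it is just persisting a cursor between runs so each invocation handles a bounded slice of the 10K orders. A rough stdlib-only sketch (all names here are illustrative, not a Google or Shopee API):

```python
import json
import os

CHECKPOINT = "checkpoint.json"  # illustrative path, not a real API

def load_cursor() -> int:
    """Resume point: index of the first page not yet processed."""
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT) as f:
            return json.load(f)["next_page"]
    return 0

def save_cursor(next_page: int) -> None:
    with open(CHECKPOINT, "w") as f:
        json.dump({"next_page": next_page}, f)

def run_batch(fetch_page, total_pages: int, budget: int) -> None:
    """Process at most `budget` pages, committing progress after each one.

    Each scheduled invocation resumes where the previous one stopped, so no
    single run has to outlive the one-hour function timeout.
    """
    start = load_cursor()
    for page in range(start, min(start + budget, total_pages)):
        fetch_page(page)       # fetch one page of orders and push to BigQuery
        save_cursor(page + 1)  # commit progress before moving on
```

In a Cloud Function the checkpoint would have to live somewhere durable (e.g. Cloud Storage or BigQuery itself) rather than on local disk, since instances are ephemeral.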
what should I do","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":96,"Q_Id":73940016,"Users Score":1,"Answer":"use python3 python-script.py\ninstead of\npython python-script.py","Q_Score":1,"Tags":"python-3.x,python-2.7","A_Id":73940334,"CreationDate":"2022-10-03T19:32:00.000","Title":"How can I fix the problem of python2.7 instead of pytho","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a linux machine which I do not have administrator access to. I wish to run a python program (locally) that uses TKinter import. How would I go about installing both into userspace so that the gui would run given the command .\/(unknown). I have gotten rid of the .py extension and marked it as executable already.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":82,"Q_Id":73940501,"Users Score":1,"Answer":"If you do not have administrator access to install it, download the tarball, untar and run it in terminal. Download all dependencies and access them in the same shell to execute the required command.","Q_Score":0,"Tags":"python,linux,user-interface,tkinter","A_Id":73941742,"CreationDate":"2022-10-03T20:25:00.000","Title":"installing python and imports locally on Linux","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have the next question. I have installed python3.9 and python3.10 on my Windows. Can I choose python version directly in work in cmd?
Or I must do something else like to modify system environment variables list?\nThanks for answers!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":55,"Q_Id":73964917,"Users Score":0,"Answer":"If you run py in cmd, you will get the most recent version, in your case Python 3.10. If you want the other version, you have to call it as py -3.9 \"name of the program\".py; the same works for the first one: py -3.10 \"name of the program\".py","Q_Score":0,"Tags":"python,windows,cmd","A_Id":73965837,"CreationDate":"2022-10-05T18:39:00.000","Title":"How can I choose the Python version directly in cmd?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a simple python pipeline that crawls a property page for data, the data on the page is divided into states and what type of property. The steps of the pipeline are as follow:\n\nA loop over all combinations of state and property type\nFor each combination, a crawler goes through the corresponding page and collects the URLs of all properties\nFor each property, data is crawled, cleaned and enriched before being stored in a SQLite DB\n\nCurrently this is a single-threaded and very simple process. I want to improve this and I am looking for modern tools to use in my new pipeline. Both to visualize the status of the processing and run it as a multiprocessing pipeline.\nCurrently I have a first idea of using Kafka and Airflow. One process crawls a page for property URLs, and creates Kafka messages for each URL. A second process then takes a single Kafka message and processes it; crawl, clean, enrich, store. Meanwhile in Airflow I can have a nice overview of the status of processes and even retry failed ones.
There each combination of state and property type is split into separate DAGs.\nThe issue is however that crawling is not something I can do with multiprocessing, as that will cause too many request to the target page and calls will become blocked eventually. The pipeline will fail.\nMy new idea is to also include Kubernetes. I will have one pod that does the crawling of property URLs. Then a second pod will crawl one property URL at a time. The final pod would be responsible for processing the property data (clean,enrich,store), but this pod I want to have X instances of because crawling the data will be faster than processing it.\nBecause there is a lot of data crawled for each property (around 20 fields, at least one contains a long description of the property), I do not think Kafka is a good option to transfer information between the pods. But I see no other option to include a work queue. The only option I could think of was that messages always only include the URL of a listing. But after crawling, data is stored in SQLite, and the final pod that will clean and enrich the data, will instead need to pull the data from the SQLite DB. Is that a reasonable idea, or are there better options?\nI have tried to google for tutorials and suggestions on how to setup a system with Kubernetes+Airflow+Kafka, but I find nothing. Some pages are specifically only about running Airflow withing Kubernetes, but there is never information about Kafka. Does this mean the combination is not possible, and if so, why not? 
Also do you have suggestions for better tools or complete systems that I should look into instead?\nApologies if my question is too vague or open, I could not find other places where I could find suggestions for building up this pipeline in the best way possible and give me skills to find a job.","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":165,"Q_Id":73999818,"Users Score":0,"Answer":"Airflow is for scheduling programs, maybe within Kubernetes pods, but not needed; you could run a standalone Airflow worker cluster.\nRegarding Kafka, Airflow isn't really needed for consuming since Kafka topics are endless and continuous. You could publish a url to a multi partitioned Kafka topic, then run as many consumers (threads or processes) as partitions, then you have your parallel processing. Since processing is in parallel, don't use sqlite, since that would require only one instance consuming all data.\nYou can still use Kubernetes to do that processing with Knative or OpenFaaS, instead, for example.\nYou could also use NATS or RabbitMQ since you just need a Queue. Or Celery and Redis are commonly used with Python.","Q_Score":0,"Tags":"python,kubernetes,apache-kafka,multiprocessing,airflow","A_Id":74001436,"CreationDate":"2022-10-08T19:08:00.000","Title":"Multiprocessing pipeline with Kafka+Airflow+Kubernetes","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"We are building our data pipeline in pyspark and the first step is to fetch metadata (for each dataset) which is stored in HBase. 
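The partition-per-consumer idea in that answer can be prototyped locally without standing up a broker; the fan-out shape is the same whether the queue is Kafka, RabbitMQ, or an in-process executor. A sketch with a placeholder worker:

```python
from concurrent.futures import ThreadPoolExecutor

def process_listing(url: str) -> tuple:
    """Placeholder for the real crawl -> clean -> enrich -> store worker."""
    return (url, len(url))

def fan_out(urls, workers: int = 4) -> list:
    """One stream of work items, N concurrent consumers.

    This is the same shape as a Kafka topic with N partitions and one
    consumer per partition; map() preserves the input order of results.
    """
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(process_listing, urls))
```

For CPU-bound cleaning and enriching, concurrent.futures.ProcessPoolExecutor offers the same interface; the crawl stage can stay rate-limited and single-threaded while only the processing stage fans out, which addresses the concern about hammering the target site.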
Also we have a couple of steps in the pipeline and are publishing our logs in the form of Kafka events.\n\nDo we use the hbase-python api in the driver to fetch the metadata or use spark.read.format(\"org.apache.hadoop.hbase.spark\") - which is more efficient.\n\nFor publishing log messages to kafka, do we use a standard python-kafka producer in the driver or use the df.write.format(\"kafka\") - does one of the approaches have better performance.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":19,"Q_Id":74024540,"Users Score":0,"Answer":"You should always use Spark's supported formats rather than include standalone Python libraries in your applications.\nThis way, there will be no issues with serialization in Py4j between Spark executors.\nYou will also stay within SparkSQL dataframe formats, and should therefore be able to reduce code necessary to use HBase and Kafka from Spark methods.","Q_Score":0,"Tags":"python,apache-spark,pyspark,apache-kafka,hbase","A_Id":74034296,"CreationDate":"2022-10-11T07:36:00.000","Title":"Fetching Job metadata and publishing logs - driver or executor","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am making use of Databricks Workflows. I have a job that consists of three tasks:\n\nExtract - references a normal databricks notebook\nDLT - references a dlt pipeline\nPostExec - references a normal databricks notebook\n\nI pass a parameter into the first task making use of the parameters options. 
In the notebook, I register the parameter with the following code, so that I can reference it later in the following tasks:\ndbutils.jobs.taskValues.set(\"parmater_1\", parameter_value)\nI can then reference this parameter in the tasks that also reference notebooks with the following code:\nparameter_1 = dbutils.jobs.taskValues.get(taskKey=\"Extract\",key=\"parmater_1\")\nBut I cannot reference this value in the tasks that refer to DLT pipelines. When I run the above code, it produces the following error:\nTypeError: Must pass debugValue when calling get outside of a job context. debugValue cannot be None.\nI know DLT uses configuration, but is it possible to persist a parameter in the first step to be passed programmatically to the DLT step?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":66,"Q_Id":74024920,"Users Score":0,"Answer":"Task values aren't supported for DLT yet... You can pass only the configuration parameters defined in the pipeline's settings or Spark conf.","Q_Score":1,"Tags":"python,databricks,delta-live-tables","A_Id":74040608,"CreationDate":"2022-10-11T08:08:00.000","Title":"Passing Parameters to DLT task in Databricks workflows","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"My requirement is to generate a list of the absolute path of all the files in a Windows folder recursively.\nI have tried glob.glob and glob.iglob but both take around 1.5 hours to generate the list.\nNow it wouldn\u2019t be a problem if it was a one-off situation. This program would run daily - and this is where the problem starts.\nAfter the first time run, I would only need to get the list of the latest modified files (after the last run, using the timestamp of that run). 
This is so that even if the first run takes 1.5 hours, the daily run should take a few minutes at max. Now there is no way I can get the timestamp of each individual file without going through each of the files which means I would need to go through the entire folder regardless and check for the timestamp on each of those.\nCan I optimize this in any way? Sure ideally, the data would be arranged by date but it\u2019s not the case.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":34,"Q_Id":74025094,"Users Score":0,"Answer":"I'm not sure if there are pure Python approaches that will be faster. You basically want to have a service running that keeps track of changes in real-time.\nIn principle you could put the whole folder under version control (Git), and check the pending changes once per day.","Q_Score":0,"Tags":"python,python-3.x,windows,list,glob","A_Id":74025175,"CreationDate":"2022-10-11T08:23:00.000","Title":"Fastest way to generate the absolute paths of all the files in a folder recursively in Python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have issue to configure PyDev Python interpreter with Eclipse in order to be able to run script with \"-m\" python interpreter argument\/option. Does anybody know where exactly one could set that argument? 
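As an alternative sketch to putting the folder under version control: os.scandir keeps the directory-entry metadata it already read while listing, which on Windows typically means the modification time comes for free during the walk instead of costing a separate stat call per file. The daily incremental pass might look like:

```python
import os

def files_modified_since(root: str, since: float):
    """Yield absolute paths under `root` whose mtime is newer than `since`."""
    stack = [root]
    while stack:
        with os.scandir(stack.pop()) as entries:
            for entry in entries:
                if entry.is_dir(follow_symlinks=False):
                    stack.append(entry.path)
                elif entry.stat().st_mtime > since:
                    # DirEntry.stat() reuses data fetched during the listing,
                    # so on Windows this usually needs no extra system call
                    yield os.path.abspath(entry.path)
```

The first full run is still unavoidable, but each daily run only pays for the traversal itself, with `since` set to the timestamp of the previous run.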
In command line one would do like this:\npython -m some_package.tests.core_test\nBut in order to start same package in PyDev I am missing place where I could enter pythong argument \"-m\".\nregards,\nMilan","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":29,"Q_Id":74040415,"Users Score":0,"Answer":"You need to set that in the preferences.\ni.e.: in Preferences > PyDev > Run select Launch modules with \"python -m mod.name\" instead of \"python filename.py\".\nAfter changing that, any Python launch done in PyDev should automatically compute the module name based on your PYTHONPATH and then use the module name in the launch along with the -m.","Q_Score":0,"Tags":"python,configuration,arguments,pydev,interpreter","A_Id":74060036,"CreationDate":"2022-10-12T10:34:00.000","Title":"PyDev python interpreter argument\/option field missing?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I want to schedule my dag to run on the last Tuesday of every month, so for OCT, my dag should run on the 25 whereas for nov the dag should run on the 29th day. Any ideas on how I could schedule this?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":90,"Q_Id":74043746,"Users Score":0,"Answer":"Not possible with Cron schedule, at least in the current version of Airflow.\nBut you can do a little hack to achieve that.\nSchedule dag for any day between 21-31 of the month. Then add a new \"sensor\" task at the beginning of that dag which will just check what day of the week is today and if it isn't Tuesday just skip execution of downstream tasks. 
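The check that sensor task needs is short: a date is the last Tuesday of its month exactly when it is a Tuesday and the date seven days later falls in the next month. A sketch:

```python
from datetime import date, timedelta

def is_last_tuesday(d: date) -> bool:
    """True when d is a Tuesday (weekday 1) and the next Tuesday is in the
    following month, i.e. d is the last Tuesday of its month."""
    return d.weekday() == 1 and (d + timedelta(days=7)).month != d.month
```

For October 2022 this is True only on the 25th and for November 2022 only on the 29th, matching the dates in the question.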
It should run then every day till it will match your desired day of the week.","Q_Score":0,"Tags":"python-3.x,airflow","A_Id":74060446,"CreationDate":"2022-10-12T14:39:00.000","Title":"Airflow DAG Scheduling at last tuesday of every month","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm writing a Python script to load, filter, and transform a large dataset using pandas. Iteratively changing and testing the script is very slow due to the load time of the dataset: loading the parquet files into memory takes 45 minutes while the transformation takes 5 minutes.\nIs there a tool or development workflow that will let me test changes I make to the transformation without having to reload the dataset every time?\nHere are some options I'm considering:\n\nDevelop in a jupyter-notebook: I use notebooks for prototyping and scratch work, but I find myself making mistakes or accidentally making my code un-reproducible when I develop in them. I'd like a solution that doesn't rely on a notebook if possible, as reproducibility is a priority.\nUse Apache Airflow (or a similar tool): I know Airflow lets you define specific steps in a data pipeline that flow into one another, so I could break my script into separate \"load\" and \"transform\" steps. Is there a way to use Airflow to \"freeze\" the results of the load step in memory and iteratively run variations on the transformation step that follows?\nStore the dataset in a proper Database on the cloud: I don't know much about databases, and I'm not sure how to evaluate if this would be more efficient. 
I imagine there is zero load time to interact with a remote database (because it's already loaded into memory on the remote machine), but there would likely be a delay in transmitting the results of each query from the remote database to my local machine?\n\nThanks in advance for your advice on this open ended question.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":33,"Q_Id":74047401,"Users Score":0,"Answer":"For a lot of work like that, I'll break it up into intermediate steps and pickle the results. I'll check if the pickle file exists before running the data load or transformation.","Q_Score":0,"Tags":"python,pandas,database,jupyter-notebook,airflow","A_Id":74047421,"CreationDate":"2022-10-12T19:58:00.000","Title":"How to efficiently test code that loads a large dataset?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I deployed a DolphinDB service using Docker and subscribed to a stream table from the Python client. The subscription was successful but I didn\u2019t receive any data. Below are the node log entries:\n\n2022-09-23 20:03:26.084867 :The publish connection to site localhost:20001 doesn't exist.\n2022-09-23 20:03:26.084839 :Received a request to stop publishing table [trades] to site localhost:20001\n2022-09-23 20:03:26.083032 :New connection from ip = 172.17.0.1 port = 58250*\n2022-09-23 20:03:26.082969 :Created a new socket connection. Number of connections: 9* \n2022-09-23 20:03:26.081608 :Close a connection with index=9. Number of remaining connections: 8*\n2022-09-23 20:03:26.081052 :AsynchronousPublisherImp::closeConnection 172.17.0.1:20001 #0 with error: Failed to connect. 
Connection refused*\n\nThe port on the host machine telnet 127.0.0.1 20001 was working properly.\nWhy was the subscribed data pushed to the port in the Docker bridge network instead? What configurations are required to receive data on the client side?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":28,"Q_Id":74049852,"Users Score":0,"Answer":"Stream data is now pushed to the listening port on the client side. Please check whether your Docker application can access the subscription port on your host machine.","Q_Score":0,"Tags":"python,docker,streaming,subscription,dolphindb","A_Id":74063553,"CreationDate":"2022-10-13T02:23:00.000","Title":"Subscribing to DolphinDB stream table deployed on Docker via Python API","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have an MPI application written in C++. When I run the application using mpirun or mpiexec on my local machine shell, it works fine. The problem is that the mpi application should be executed after a request from the network (e.g. HTTP request) is received. Assuming that the application server is written in python, how can I execute the mpi application using mpiexec command from a python script? I used subprocesses.run() command but nothing happens.\nMy general question: What is the best way to run an MPI application in client\/server architecture?\nThank you","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":104,"Q_Id":74061836,"Users Score":0,"Answer":"I found the solution. In the main python script, MPI should not get imported from mpi4py package. 
Otherwise subprocess.run(\"mpiexec\") does not do anything.","Q_Score":0,"Tags":"python,mpi,client-server,hpc,mpiexec","A_Id":74068830,"CreationDate":"2022-10-13T21:10:00.000","Title":"How to execute mpirun or mpiexec command from a python script?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have deployed a service in cloud run that takes about 40\/45 minutes to run.\nIt should run every day but I don't know how to cron it.\nI have tried with cloud scheduler but I received a status DEADLINE_EXCEEDED message.\nCan you recommend another service?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":63,"Q_Id":74073637,"Users Score":0,"Answer":"Sadly, there is no serverless service with a timeout above 30 minutes for now (Cloud Workflows, Cloud Task, Scheduler are limited to 30 minutes max, 10 minutes for PubSub).\nHowever, even if Cloud Scheduler logs the invocation as an error (because no answer has been received within the 30 minutes), your process continues to run.
You simply don't have the retry features (retries on error, timeout, ...).\nIf you want to implement retries, you have a complex architecture to build, based on Cloud Logging, sinks, PubSub and Cloud Functions\/Cloud Run, to be able to re-invoke your long-running service (in \"fire and forget\" mode)","Q_Score":0,"Tags":"python,google-cloud-platform,google-cloud-run,google-cloud-scheduler","A_Id":74074699,"CreationDate":"2022-10-14T18:50:00.000","Title":"How to execute Google Cloud Run service every day?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I was hoping to get input into a python design problem that I am facing at work.\nMy team is currently developing end-user facing utility CLI tools in Python for our data scientists to use; the tools mostly automate system actions\/interactions that we are trying to abstract away from users. We think in total we will have something like 20-30 such python tools to maintain. Within the system the data scientists are able to work in a number of pre-defined docker containers; namely:\n\na RHEL container with SAS installed\na Ubuntu-Focal container with R installed\na Ubuntu-Focal container with Python installed\n\nThat is, users can create & delete new containers at will based upon their current task. I.e. if they want to run their project from scratch they may delete their current container and re-load a fresh new one.\nOur key design challenge is how to deploy\/install our utility tools into those containers whilst also respecting some key constraints that we think are important to ensure a good UX, namely:\n\nAvoiding having to have the users manually install our tools as many of our users have 0 knowledge of python \/ shebangs\nAvoiding having to have users restart their containers if the tool is updated. I.e.
we want to be able to update our tools and make those updates available to users with no \/little action from the end user\nHaving a process for upgrading to new versions of python for our tools as older versions of python become retired\/no longer supported (we expect the system to live for 10-15 years).\nAllow tool developers to have freedom to select which python modules and corresponding module versions their tools use without having to worry \/ be constrained by what other tools have used.\nEnsuring that tools can run on multiple OS\u2019s i.e. as stated above at the very least we need our tools to run on Focal and RHEL\n\nWe would be super grateful if anyone has any ideas that could help with this. Thank you!","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":41,"Q_Id":74088144,"Users Score":1,"Answer":"There are a few different ways to approach this problem, and the best solution will likely depend on your specific needs and constraints.\nOne option would be to create a custom Python virtual environment for each tool, and then install the tool into that environment. This would allow each tool to have its own set of dependencies, and would also make it easy to upgrade to new versions of Python as needed.\nAnother option would be to use a tool like Docker Compose to manage your different tools. This would allow you to define each tool as a separate service, and then easily deploy and update them as needed.\nFinally, you could also use a tool like Ansible to manage your different tools. This would allow you to define each tool as an Ansible playbook, and then easily deploy and update them as needed.\nUltimately, the best solution for you will likely depend on your specific needs and constraints. 
However, all of the options above should be able to meet your requirements.","Q_Score":0,"Tags":"python,docker,architecture,command-line-interface,sysadmin","A_Id":74088154,"CreationDate":"2022-10-16T15:00:00.000","Title":"How to deploy and manage multiple custom python cli tools in docker containers?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I can find python version with python --version\nBut I cannot find the location of python executable. Is there a command like python --path? If not, is there a reason why?","AnswerCount":5,"Available Count":2,"Score":0.0798297691,"is_accepted":false,"ViewCount":37,"Q_Id":74102484,"Users Score":2,"Answer":"use 'where python' in your terminal to get the path to it\nedit\nwhere python works for windows and which python works for linux","Q_Score":1,"Tags":"python,windows","A_Id":74102496,"CreationDate":"2022-10-17T19:48:00.000","Title":"Why is there no command line to find python location?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I can find python version with python --version\nBut I cannot find the location of python executable. Is there a command like python --path? 
If not, is there a reason why?","AnswerCount":5,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":37,"Q_Id":74102484,"Users Score":0,"Answer":"Use which python or which python3.\nWorks on Unix-based OSes.\nFor Windows, see other answers.","Q_Score":1,"Tags":"python,windows","A_Id":74102493,"CreationDate":"2022-10-17T19:48:00.000","Title":"Why is there no command line to find python location?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"Recently I discovered that the little arrow next to lines in vscode, that allows you to fold parts of the code, had disappeared. I then noticed this was the case only in my Python files.\nI scoured the internet looking for an answer, but nothing worked.\nI've tried fixing the setting (by checking that the \"folding\" setting in the settings UI was ticked) but it did nothing, and I tried removing the last extensions I had installed to see if they were interfering or something, but no.\nThanks for the info on #region, but even that doesn't allow me to fold the code. I've tried with the command \"fold\" from the command palette and with 'Ctrl+Shift+[' and 'Ctrl+Shift+]' but it didn't work.\nI'm on Arch Linux using VsCode-OSS btw","AnswerCount":4,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":641,"Q_Id":74117813,"Users Score":0,"Answer":"If none of the other answers helps you, you can create your own custom folding range using Ctrl+K Ctrl+, on Windows (I hope something similar exists on Linux too; try searching for \"folding range\"). Selected lines will be folded.
To delete a folding region, use Ctrl+K Ctrl+..","Q_Score":2,"Tags":"python,visual-studio-code,code-folding","A_Id":74500657,"CreationDate":"2022-10-18T21:50:00.000","Title":"I don't have the option to fold code anymore in vscode in python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am very new to GCP; my plan is to create a webhook target on GCP to listen for events from a third-party application, kick off scripts to download files from the webhook event, and push to JIRA\/Github. During my research I read a lot about cloud functions, but there were also cloud run, app engine and PubSub. Any suggestions on which path to follow?\nThanks!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":155,"Q_Id":74119702,"Users Score":1,"Answer":"There are use cases in which Cloud Functions, Cloud Run and App Engine can be used interchangeably (not Pubsub, as it is a messaging service). There are however use cases that do not fit some of them properly.\nCloud Functions must be triggered and each execution is (should be) isolated, which implies you cannot expect them to keep a connection alive to your third party. Also, they have limited time per execution. They tend to be atomic, in a way that if you have complex logic between them you must be careful in your design, otherwise you will end up with a distributed solution that is very difficult to manage.\nApp Engine is an application you deploy and it is permanently active, therefore you can maintain a connection to your third-party app.\nCloud Run is somewhere in the middle, being triggered when it is used, but it can share a context, and different requests benefit from that (keeping connections alive temporarily or caching, for instance).
It also has more capabilities in terms of technologies you can use.\nPubSub, as mentioned, is a service where you can send information (fire and forget) and allows you to have one or more listeners on the other side that may be your Cloud Function, App Engine or Cloud Run to process the information and proceed.\nBTW consider using Cloud Storage for your files, especially if you expect the files to persist between different service calls.","Q_Score":0,"Tags":"python,google-cloud-platform,google-cloud-functions,cloud","A_Id":74121239,"CreationDate":"2022-10-19T03:57:00.000","Title":"Hosting webhook target GCP Cloud Function","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I installed Pyenv using brew and set path using\necho 'eval \"$(pyenv init --path)\"' >> ~\/.zshrc\nNow I deleted it with brew, and I got .zshrc:1: command not found: pyenv every time I opened my terminal. I understand that I need to simply remove the pyenv init invocations from my shell startup configuration. But can someone give me the right command line to do so? Thanks\nI am on MacOS by the way","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":124,"Q_Id":74155774,"Users Score":0,"Answer":"Assuming you have Vim installed, you can use vim ~\/.zshrc to open the file and then edit it out.","Q_Score":1,"Tags":"python,pyenv","A_Id":74155817,"CreationDate":"2022-10-21T15:15:00.000","Title":"How to delete Pyenv","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I've used Python under Cygwin for a number of years, but it stopped working, I think when I installed Python under the base Win10.
In desperation, I blew away my Cygwin and reinstalled it from scratch, together with Python and Vim. It still didn't work, it seems because Cygwin's $PATH was including Windows' %PATH%, and picking up the Windows Python executable and libraries. I found a way to stop that, but the Windows Python now doesn't work, though as of when, I'm not sure.\nSo I've just uninstalled the Windows Python (and a Windows Python2) and reinstalled the latest version (3.10.8). It works from the command line, but Idle doesn't. Calling it up from the icon in the start menu under Recently Added, it says this action is only valid for products that are installed. Invoking it from the newly added Python 3.10 group in my start menu just gives a busy cursor momentarily, then nothing. That icon points to\nC:\\Users\\Philip\\AppData\\Local\\Programs\\Python\\Python310\\pythonw.exe \"C:\\Users\\Philip\\AppData\\Local\\Programs\\Python\\Python310\\Lib\\idlelib\\idle.pyw\"\nHelp, anyone?\n(And yes, I've just rebooted one more time - no change. And yes, I did reboot after uninstalling the previous Python.)","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":39,"Q_Id":74159266,"Users Score":0,"Answer":"Further googling came up with at least a partial answer. My system environment variables contained\nTCL_LIBRARY C:\\Program Files (x86)\\CSR\\BlueSuite 2.6.6\nThat other program installations should fiddle with this is a bit naughty, in my book.\nHowever, the IDLE entry in Recently Added Programs at the top of my start menu still gives a pop-up \"This action is only valid for products that are currently installed\". In fact it seems to refer to a Python2.7 which I've removed. Can't imagine what Windows was thinking of.
I'll just remove that from the top of the start menu.","Q_Score":0,"Tags":"windows-10,cygwin,python-idle","A_Id":74166554,"CreationDate":"2022-10-21T21:22:00.000","Title":"Python-Idle not working after reinstall - cygwin conflict?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a microservice project. I am using DRF. Now I need to add a notification system to this project. Since we have multiple services, a notification can come from any service. We also have a web version and a mobile version in the frontend.\nWhich will be the better option for the notification service? Notification with Firebase or notification with Celery?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":118,"Q_Id":74161274,"Users Score":0,"Answer":"If you have a mobile version, use Firebase Cloud Messaging and Firestore, listening for realtime updates.
It will speed up your development, with the possibility to add new Firebase features later.","Q_Score":0,"Tags":"python,django-rest-framework,architecture,notifications,microservices","A_Id":74175437,"CreationDate":"2022-10-22T05:22:00.000","Title":"Microservice notification better with Firebase or celery?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"Short description:\nDataflow is processing the same input element many times, even at the same time in parallel (so this is not the built-in fail-retry mechanism of Dataflow, because the previous process didn't fail).\nLong description:\nThe pipeline gets a pubsub message in which the path to a GCS file is stored.\nIn the next step (a DoFn class) this file is opened and read line by line, so for very big files this is a long process and takes up to 1 hour (per file).\nVery often those big files are being processed at the same time.\nI can see from the log messages that the first process has already loaded 500k rows, another one 300k rows, and a third one has just started; all of them are related to the same file and all of them are based on the same pubsub message (the same message_id).\nAlso the pubsub queue chart is ugly: those messages are not acked, so the unacked chart does not decrease.\nAny idea what is going on?
Have you experienced something similar?\nI want to underline that this is not an issue related to the fail-and-retry process.\nIf the first process fails and a second one starts for the same file - that is fine and expected.\nWhat is unexpected is those two processes living at the same time.","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":79,"Q_Id":74179315,"Users Score":1,"Answer":"Here is a likely possibility:\n\nreading the file is being \"fused\" with reading the message from Cloud Pubsub, so that the hour of processing happens before the result is saved to Dataflow's internal storage and the message can be ACKed\nsince your processing is so long, Cloud Pubsub will deliver the message again\nthere is no way that Dataflow cancels your DoFn processing, so you will see them both processing at the same time, even though one of them is expired and will be rejected when processing is complete\n\nWhat you really want is for the large file reads to be split and parallelized. Beam can do this easily (and currently I believe is the only framework that can). You pass the filenames to the TextIO.readFiles() transform and the reading of each large file will be split and performed in parallel, and there will be enough checkpointing that the pubsub message will be ACKed before it expires.\nOne thing you might try is to put a Reshuffle in between the PubsubIO.read() and your processing.","Q_Score":1,"Tags":"python,google-cloud-dataflow,apache-beam","A_Id":74214957,"CreationDate":"2022-10-24T09:48:00.000","Title":"Dataflow streaming job processes the same element many times at the same time","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I'm using Django on Ubuntu 18.04.\nI've got everything set up.
And I type python manage.py run_huey on the server (through an SSH connection) to start huey, and it works.\nHowever this is done through the command line over SSH, and it will shut off when I close the SSH connection.\nHow do I keep run_huey running so that it will stay active at all times? Furthermore, after a system reboot, how do I get run_huey to automatically start?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":33,"Q_Id":74243241,"Users Score":0,"Answer":"You may explore the supervisorctl utility on Ubuntu; it keeps processes running, can log to a file, and offers other features. Google it.","Q_Score":0,"Tags":"python-huey","A_Id":74613571,"CreationDate":"2022-10-29T06:24:00.000","Title":"Using Huey with Django: How to keep run_huey command running?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I am using airflow to invoke a lambda function using a lambda hook.\nI have been trying to get the lambda execution results back in airflow but have been unable to do so.\nWhile checking the airflow logs, I could see this:\n{{python.py:152}} INFO- Done. Returned value was: None\nCan anyone please help me with this?\nI have tried using airflow xcoms but that too isn't working out.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":104,"Q_Id":74264944,"Users Score":1,"Answer":"I am using Managed Apache Airflow from AWS; in my use case the upstream team was returning a small JSON and it was getting captured in XComs.
Though it's not recommended to pass large datasets using XComs.","Q_Score":1,"Tags":"python,amazon-web-services,aws-lambda,airflow,mwaa","A_Id":76529567,"CreationDate":"2022-10-31T14:37:00.000","Title":"Is there a way to get AWS Lambda's Execution Results in Airflow?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm creating a microservice and need some guidance in deciding the architecture. I have a \"create\" operation that can take up to an hour, and needs to be able to handle a \"cancel\" request from a user.\nThe only way I can think of to achieve this (in Python) is\n\ncall an async helper function to run the main functionality and write to an event log when complete\nOpen an infinite while loop with 2 exit conditions - either the create() function has written to the event log that it is complete; or a user requests a \"cancel\" event. If the user issues a cancel command, then I need to run a shutdown function. \\\n\nAm I on the right track? Or is this where I should look at event driven microservices? Should I be looking at running 2 threads in my microservice - one executing the create() and one looking for user-input?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":174,"Q_Id":74308788,"Users Score":0,"Answer":"The trick here is to understand that your request is a persistent piece of state that needs to be tracked outside the currently executing thread. As such, you really want to externalize it like you would any other piece of state. As external state, it should be persistent, atomic, and scalable.\nThis could be a file, a database, or, as the commenter mentioned, a task queue like celery.
It all depends on the kind of scaling factors you need, whether the data is updateable, do you need to report on it, etc.\nPersonally, I tend toward a database in this kind of situation, something queryable, not too normalized, etc. Depending on your tech-stack you may already have one, and you should strongly tend toward that solution.\nOnce you have decided upon your data store, the rest of this is pretty straightforward, decide on the data to store, how often your executing thread needs to check-in, how to clean-up if everything crashes, etc. An hour is a LONG time in compute time, and you should plan for things to go sideways so that you don't leave things dangling.","Q_Score":0,"Tags":"python,architecture,event-handling,microservices","A_Id":74352866,"CreationDate":"2022-11-03T19:46:00.000","Title":"Handling user-requests while executing a microservice","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I made some scraping files and to run them at the same time I usually open multiple terminals in VS code and write \" python filename.py \" for each file, there's any method to automate this process to call each file and run it?","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":29,"Q_Id":74323319,"Users Score":1,"Answer":"Simplest solution would be to run python script1.py & python script2.py & python script3.py &","Q_Score":0,"Tags":"python,automation","A_Id":74323348,"CreationDate":"2022-11-04T21:50:00.000","Title":"How to make a python script to run other files in separate terminals?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have to delete my unattached 
EBS volumes, but only if the volume has been unattached for the last week.\nFor that I need to find the exact date & time when the EBS volume got detached from EC2.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":55,"Q_Id":74344135,"Users Score":0,"Answer":"There are two ways to achieve this:\nManual - Use AWS CloudTrail to find the event and get the timestamp from the event. Delete the resource manually.\nAutomatic - Create a script, probably using Python boto3, to find EBS volumes unattached for 1 week and delete them. Either schedule it using Lambda or add it as a cron job on any server having access to the required AWS resources.","Q_Score":0,"Tags":"python,amazon-web-services,amazon-ec2,boto3","A_Id":74350576,"CreationDate":"2022-11-07T09:07:00.000","Title":"how to know at what date time my ebs volume got unattached","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm running into a weird error when trying to install pip\nThis is the sequence that I typed into my command line:\npip install opencv-python\nWhat could be causing this?\nThis is what I get when I type in echo %PATH%:\nC:\\Program Files (x86)\\Common Files\\Intel\\Shared Libraries\\redist\\intel64\\compiler;C:\\Windows\\system32;C:\\Windows;C:\\Windows\\System32\\Wbem;C:\\Windows\\System32\\WindowsPowerShell\\v1.0;C:\\Windows\\System32\\OpenSSH;C:\\Program Files\\Microsoft SQL Server\\Client SDK\\ODBC\\170\\Tools\\Binn;C:\\Program Files (x86)\\Microsoft SQL Server\\150\\Tools\\Binn;C:\\Program Files\\Microsoft SQL Server\\150\\Tools\\Binn;C:\\Program Files\\Microsoft SQL Server\\150\\DTS\\Binn;C:\\Program Files (x86)\\Microsoft SQL Server\\150\\DTS\\Binn;C:\\Program
Files\\Java\\jdk1.7.0_80\\bin;C:\\WINDOWS\\system32;C:\\WINDOWS;C:\\WINDOWS\\System32\\Wbem;C:\\WINDOWS\\System32\\WindowsPowerShell\\v1.0;C:\\WINDOWS\\System32\\OpenSSH;C:\\Program Files\\dotnet;C:\\Program Files (x86)\\Intel\\Intel(R) Management Engine Components\\DAL;C:\\Program Files\\Intel\\Intel(R) Management Engine Components\\DAL;E:\\MATLAB\\R2022b\\bin;D:\\nodejs;:\\users\\\u201cnaglaa\u201c\\AppData\\Programs\\Python\\Python310;C:\\Program Files (x86)\\Common Files\\Intel\\Shared Libraries\\redist\\intel64\\compiler;C:\\Windows\\system32;C:\\Windows;C:\\Windows\\System32\\Wbem;C:\\Windows\\System32\\WindowsPowerShell\\v1.0;C:\\Windows\\System32\\OpenSSH;C:\\Program Files\\Microsoft SQL Server\\Client SDK\\ODBC\\170\\Tools\\Binn;C:\\Program Files (x86)\\Microsoft SQL Server\\150\\Tools\\Binn;C:\\Program Files\\Microsoft SQL Server\\150\\Tools\\Binn;C:\\Program Files\\Microsoft SQL Server\\150\\DTS\\Binn;C:\\Program Files (x86)\\Microsoft SQL Server\\150\\DTS\\Binn;C:\\Program Files\\Java\\jdk1.7.0_80\\bin;C:\\WINDOWS\\system32;C:\\WINDOWS;C:\\WINDOWS\\System32\\Wbem;C:\\WINDOWS\\System32\\WindowsPowerShell\\v1.0;C:\\WINDOWS\\System32\\OpenSSH;C:\\Program Files\\dotnet;C:\\Program Files (x86)\\Intel\\Intel(R) Management Engine Components\\DAL;C:\\Program Files\\Intel\\Intel(R) Management Engine Components\\DAL;E:\\MATLAB\\R2022b\\bin;D:\\nodejs;:\\users\\\u201cnaglaa\u201c\\AppData\\Programs\\Python\\Python310;C:\\Users\\naglaa\\AppData\\Local\\Microsoft\\WindowsApps;;C:\\Program Files\\Azure Data Studio\\bin;C:\\Users\\naglaa","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":65,"Q_Id":74356221,"Users Score":0,"Answer":"I don't think pip has been added into your PATH.\nUsually pip.exe is located in your \"...\\Python\\Python310\\Scripts\". 
(You do have python added in the PATH though, since you have \"...\\Python\\Python310\" in the list).\nSo all you have to do is add the existing \"Scripts\" folder to the PATH variable and you will be good to go.","Q_Score":0,"Tags":"python","A_Id":74356692,"CreationDate":"2022-11-08T05:54:00.000","Title":"'pip' is not recognized as an internal or external command in python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm running Python 3.6 on Ubuntu 18.06. I wanted to know about python-pptx module's OS dependencies rather than Python dependency as I need to launch the functionality on a server after developing a model on either Ubuntu 18.04 or 20.04. I looked into the documentation of the module but the information needed is not provided. Does the module update fit my requirements?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":24,"Q_Id":74357478,"Users Score":0,"Answer":"yes, python-pptx will work on Ubuntu","Q_Score":0,"Tags":"python,ubuntu,powerpoint,ubuntu-18.04,python-pptx","A_Id":74406185,"CreationDate":"2022-11-08T08:08:00.000","Title":"OS dependencies and compatibility of python-pptx module","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm making an rpi based terminal in python and I want to run a powershell command on my computer.
How can I send a command to a usb device","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":35,"Q_Id":74360724,"Users Score":0,"Answer":"You could run socat on your Windows PC to read from serial and execute whatever you receive - if you like big security holes\ud83d\ude09 Try adding socat tag to attract the right folk if that's an option.\nOr you could run a Python script that sits in a loop reading from serial and then using subprocess.run() to execute the commands it receives.","Q_Score":0,"Tags":"python,linux,windows,powershell","A_Id":74361649,"CreationDate":"2022-11-08T12:25:00.000","Title":"How to send powershell command to pc through usb with python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I can't find the exact guide of what I want to do, it's more of a structural and architectural issue:\nTooling:\n\nPython 3.9\nFastAPI\nUvicorn\nSome scripts to monitor the folders\nIt'll run under docker when its done\n\nThe exact task:\nI want to build a web-app that lists the photos in a directory and shows them in a grid in the browser.\nThe key points here:\n\nIt will use watchdog to immediately get any added or removed items.\nClients will connect with a web socket (I've followed those tutorials)\nDeltas will be send to observing clients\n\nThe last bit is my issue, and to the point of the question:\nWhat is a \"accepted\/best practice\" for having my watchdog script send the added\/removed items to the connected client web sockets.\nI can't for the life of me work out how they communicate, running in uvicorn I just can't start an arbitrary background job.... 
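The Python-loop option from the serial-command answer above (read a line, run it with subprocess.run()) can be sketched as below. The serial port itself is left out: in the real setup each line would come from something like pyserial's readline(), and on the Windows side you would likely prefix the command with powershell -Command. All names here are illustrative:

```python
import shlex
import subprocess

def execute_line(line: str) -> str:
    """Run one received command and return its stdout.
    In the real tool, `line` would come from serial.Serial(...).readline()."""
    result = subprocess.run(shlex.split(line), capture_output=True, text=True)
    return result.stdout

# Stand-in for the serial read loop (each string = one line off the wire):
for cmd in ["echo hello"]:
    print(execute_line(cmd).strip())  # → hello
```

As the answer warns, executing whatever arrives on the wire is a large security hole; this is only a sketch of the mechanism.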
I know in a dev environment I can start uvicorn manually, but I want it to follow scalable patterns even if it's just for light usage.\nIn short: how can a listening python script inform my fastAPI app there's new data?\nThe easy\/obvious answer is to expose a management API that the watchdog script can send to... but is there any sort of message bus that fastAPI can listen to?\nASGI is new to me, I have some experience with python async\/scheduler, but have mostly used WSGI frameworks like Bottle where scheduling\/threading isn't an issue.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":37,"Q_Id":74369127,"Users Score":0,"Answer":"Ok, unless anyone has any amazing ideas, the best solution is:\nConnect to REDIS, pull existing values at the time the client web socket connects.\nThe worker process(es) can push new values via REDIS.\nSince the connected client handlers can use asyncio, they can subscribe to the pub\/sub model.\nProblem solved, yes it requires REDIS but that\u2019s easy enough in docker.\nWhy REDIS?\nLow boilerplate code needed, full pub\/sub support and low setup pain.","Q_Score":2,"Tags":"python,websocket,fastapi","A_Id":74373639,"CreationDate":"2022-11-09T01:57:00.000","Title":"How to LISTEN\/GET for updated data to send to subscribed websocket clients using FastAPI","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I was using librosa with a conda virtual environment on a Mac M1 silicon machine.
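The pub\/sub shape the FastAPI answer settles on can be illustrated in-process with plain asyncio; Redis replaces this Broadcaster once the watchdog worker and the web app are separate processes. All names below are illustrative, not from the original post:

```python
import asyncio

class Broadcaster:
    """In-process stand-in for the Redis pub/sub channel: the watchdog side
    publishes deltas, each connected websocket handler subscribes."""
    def __init__(self) -> None:
        self.subscribers = set()

    def subscribe(self) -> asyncio.Queue:
        q: asyncio.Queue = asyncio.Queue()
        self.subscribers.add(q)
        return q

    async def publish(self, delta: dict) -> None:
        for q in self.subscribers:
            await q.put(delta)

async def demo() -> dict:
    bus = Broadcaster()
    q = bus.subscribe()                          # a websocket handler connects
    await bus.publish({"added": "photo1.jpg"})   # the watchdog sees a new file
    return await q.get()                         # the handler forwards this delta

print(asyncio.run(demo()))  # → {'added': 'photo1.jpg'}
```

With Redis, subscribe/publish map onto a pub/sub channel, which is what makes the pattern survive multiple uvicorn workers.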
But it fails even on an import librosa code snippet, popping up this error message.\nOSError: cannot load library '\/opt\/homebrew\/lib\/libsndfile.dylib': dlopen(\/opt\/homebrew\/lib\/libsndfile.dylib, 0x0002): tried: '\/opt\/homebrew\/lib\/libsndfile.dylib' (no such file)\nWhat could be the error I made?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":159,"Q_Id":74386679,"Users Score":0,"Answer":"This error popped up because I had three separate Python environments on my M1 machine and the Python interpreter was unable to locate the lib directory and load it for the code.\nMy advice to anyone who refers to this question: if you own a Mac M1 environment, set up only one Conda environment; having multiple virtual environments will cause errors, since the Mac uses swap memory to load libs in Python","Q_Score":0,"Tags":"python,conda,librosa","A_Id":74422668,"CreationDate":"2022-11-10T09:22:00.000","Title":"OSError: cannot load library '\/opt\/homebrew\/lib\/libsndfile.dylib': dlopen(\/opt\/homebrew\/lib\/libsndfile.dylib, 0x0002)","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"pip install kivy\nCollecting kivy\nUsing cached Kivy-2.1.0.tar.gz (23.8 MB)\nInstalling build dependencies ...
error\nerror: subprocess-exited-with-error\n\u00d7 pip subprocess to install build dependencies did not run successfully.\n\u2502 exit code: 1\n\u2570\u2500> [10 lines of output]\nCollecting setuptools\nUsing cached setuptools-65.5.1-py3-none-any.whl (1.2 MB)\nCollecting wheel\nUsing cached wheel-0.38.4-py3-none-any.whl (36 kB)\nCollecting cython!=0.27,!=0.27.2,<=0.29.28,>=0.24\nUsing cached Cython-0.29.28-py2.py3-none-any.whl (983 kB)\nCollecting kivy_deps.gstreamer_dev~=0.3.3\nUsing cached kivy_deps.gstreamer_dev-0.3.3-cp311-cp311-win_amd64.whl (3.9 MB)\nERROR: Could not find a version that satisfies the requirement kivy_deps.sdl2_dev~=0.4.5 (from versions: 0.5.1)\nERROR: No matching distribution found for kivy_deps.sdl2_dev~=0.4.5\n[end of output]\nnote: This error originates from a subprocess, and is likely not a problem with pip.\nerror: subprocess-exited-with-error\n\u00d7 pip subprocess to install build dependencies did not run successfully.\n\u2502 exit code: 1\n\u2570\u2500> See above for output.\nnote: This error originates from a subprocess, and is likely not a problem with pip.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":196,"Q_Id":74406314,"Users Score":0,"Answer":"From their docs:\nKivy 2.1.0 officially supports Python versions 3.7 - 3.10.\nYou are using 3.11; try using Python 3.10.\n(Although from your output it looks like they are in the process of supporting 3.11, since pip could install other Kivy-specific requirements for 3.11.)","Q_Score":0,"Tags":"python,kivy,subprocess","A_Id":74406407,"CreationDate":"2022-11-11T17:44:00.000","Title":"Unable To Install Kivy","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am running some dramatiq workers as kubernetes jobs. The use case requires the pods to shut down if no messages are in the queue.
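Following the accepted Kivy answer, a one-line check of the running interpreter tells you immediately whether you are outside the supported range (the 3.7–3.10 range is the answer's quote from Kivy 2.1.0's docs; the check itself is just a sketch):

```python
import sys

# Kivy 2.1.0 documents support for Python 3.7-3.10, so flag anything newer.
supported = (3, 7) <= sys.version_info[:2] <= (3, 10)
print(sys.version_info[:2], "supported by Kivy 2.1.0:", supported)
```

On the asker's Python 3.11 this prints False, which matches the missing cp311 wheels in the error output.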
So what I need is for the dramatiq worker process to shut down if no messages are in the queue.\nIs it possible to do that in dramatiq?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":120,"Q_Id":74414830,"Users Score":1,"Answer":"You could use the @dramatiq.actor(max_retries=-1) decorator; that will result in the worker quitting once it has handled all of the messages in the queue","Q_Score":1,"Tags":"python,rabbitmq,dramatiq","A_Id":75025738,"CreationDate":"2022-11-12T16:52:00.000","Title":"Is it possible for dramatiq worker to auto shutdown if no messages are present in the queue?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am trying to trigger an Azure Function from Logic Apps. Running the Azure function takes more than 2 minutes as it reads a file from one location, converts it to another format and then writes it to a different location. The problem is that the logic app creates a request, waits for 2 minutes to get a response, but this response doesn't come because the function is not finishing that fast. So the logic app assumes there is an error and recreates the request.\nI read in the documentation that there is no way to increase the timeout period. I tried creating two threads in the azure function. One returns a 202 http status code to the logic app, and the other one would remain as a daemon and keep running.
But the file doesn't seem to be copied.\nDoes anyone have any idea how could this be achieved?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":101,"Q_Id":74446349,"Users Score":0,"Answer":"Continue the work on another logic app.\nJust change your logic app to return Accepted\/OK response and calls the function.\nThe function does the work and after it finishes (or fails) it calls another logic app where it continues the work (or deal with the error).","Q_Score":0,"Tags":"python,azure-functions,azure-logic-apps","A_Id":74476921,"CreationDate":"2022-11-15T13:20:00.000","Title":"Triggering an Azure Function that takes more than 2 minutes to run from logic apps","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a dockerized Python code that exposes some APIs via Swagger docs using FastAPI. This software allows me to schedule tasks that run every 10 minutes indefinitely until I delete the task.\nAfter running the application for 20-30 days, it gets exceptionally slow (going from 9 seconds to over 5 minutes per execution). I believe there's some memory leak occurring and want to implement garbage collection.\nHowever, I am not sure where to put garbage collection. Would I write import gc and gc.enable() in my FastAPI main.py file where all my APIs are? 
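On the garbage-collection question: a quick check, safe to paste into main.py or a REPL, shows that CPython's collector is already active, so calling gc.enable() anywhere (main.py or per module) adds nothing:

```python
import gc

# The cyclic garbage collector is on by default in CPython:
print(gc.isenabled())  # → True, so gc.enable() changes nothing

# gc.collect() forces a full pass and reports how many unreachable objects
# it found; note that a "leak" caused by live references (caches, global
# task lists) will not show up here, since those objects are reachable.
print(type(gc.collect()) is int)  # → True
```

If memory still grows, the stdlib tracemalloc module is usually a better tool than gc for locating where the allocations come from.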
Or would I have to import gc in each Python module?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":327,"Q_Id":74452013,"Users Score":0,"Answer":"Garbage collection is enabled by default, so running gc.enable() or gc.collect() will not change anything.","Q_Score":2,"Tags":"python,memory-management,garbage-collection","A_Id":75353321,"CreationDate":"2022-11-15T20:50:00.000","Title":"Where to implement garbage collection in a Dockerized FastAPI application?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I have a python CLI interface tool which is used by more than 80 people in my team.\nEvery time we make changes to our code and release it, users also have to download the latest code on their\nWindows Machine and then run it.\nLike this we have other tools as well like GCC, Company's internal software to be installed on every users PC.\nUsers sometimes face issues in installing the software and upgrading to newer version of Python CLI tool.\nI want these tools and softwares to be managed at a single place and then user can access them from there.\nHow to resolve this problem on Windows Platform?","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":25,"Q_Id":74457197,"Users Score":1,"Answer":"I'm not sure about the environment, some ideas;\n\ncould share a onedrive folder and sync it from there.\nGroup policy that runs an install script on startup","Q_Score":0,"Tags":"python,windows,shell,cloud","A_Id":74457227,"CreationDate":"2022-11-16T08:19:00.000","Title":"How to keep python code in cloud and then make multiple users execute the code on their local machines?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and 
Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"How can I remove existing and future pycache files from a git repository on Windows? The commands I found online are not working; for example, when I run the command \"git rm -r --cached __pycache__\" I get the message \"pathspec '__pycache__' did not match any files\".","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":76,"Q_Id":74462238,"Users Score":0,"Answer":"Well, you don't need __pycache__ files in your git repositories, and you'd better ignore all related files by adding __pycache__\/ to your .gitignore file.","Q_Score":0,"Tags":"python,git,pyc","A_Id":74528490,"CreationDate":"2022-11-16T14:30:00.000","Title":"Removing pycache in git","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a ML model with a fast api wrapper running on a google cloud VM; it runs fine when the ssh terminal is open. But once I close the terminal it runs for maybe 10 more minutes and then the api returns 502 Bad Gateway.\nI'm using nginx with this config\n server{listen 80; server_name: public ip; location \/{proxy_pass http:\/\/127.0.0.1:8000;}}\nPlease let me know if there is any way I can fix this problem.\nI reran everything, still the same error","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":20,"Q_Id":74481058,"Users Score":0,"Answer":"When you close the SSH terminal session, the applications that you started will be killed. Use a program such as tmux, screen, etc. to create sessions that you can attach to and detach from.\nHowever, since you are using Nginx, there are better methods of managing applications that are being proxied. For development, your current method is OK.
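On the __pycache__ question above: the pathspec error usually means the tracked files sit in a subfolder (or were never committed at all). Letting git list the tracked entries first sidesteps that, and the .gitignore line handles the future. A sketch, run from the repository root:

```shell
# Untrack every committed bytecode entry, wherever it is nested
# (-z/-0 keep filenames with spaces safe):
git ls-files -z '*__pycache__*' '*.pyc' | xargs -0 -r git rm --cached --
# Keep future bytecode out of the repo:
echo '__pycache__/' >> .gitignore
```

If `git ls-files` prints nothing, the files were never tracked in the first place, and only the .gitignore line is needed.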
For production, your proxied applications should be started and managed by the system as a service.","Q_Score":0,"Tags":"python,python-3.x,nginx,google-cloud-platform,fastapi","A_Id":74482469,"CreationDate":"2022-11-17T19:26:00.000","Title":"fast api stopping after a while on google cloud vm","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I can enter python3 filename.py into my terminal (wsl) and the code executes in the terminal just fine. But when I hit the play button (Run Python File) I get errors\nC:\/Users\/user1\/AppData\/Local\/Programs\/Python\/Python311\/python.exe \"c:\/Online Learning\/Coder Academy\/Python\/Lesson-3\/test.py\"\nzsh: no such file or directory: C:\/Users\/user1\/AppData\/Local\/Programs\/Python\/Python311\/python.exe\nI don't see why this fails if the code executes fine from the terminal by typing in the command. Why can't I hit the play button without error?\nI've tried a lot of things, including using the extension Code Runner, uninstalling and re-installing various versions of python, and trying pyenv and defining various different interpreter paths.\nI'm thinking it's not the setup of my python in wsl; it's something to do with a setting in VSCode.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":181,"Q_Id":74497048,"Users Score":0,"Answer":"I would agree with the setting in vscode.\nHitting Ctrl+Shift+P, typing python, and choosing 'Python: Select Interpreter' will let you pinpoint exactly where vscode thinks the interpreter is.\nFor many reasons I would strongly recommend using a virtual env. For the purposes of this discussion, you know that you'll be creating a folder that has a version of python in it that you can select.\nBest of luck.
These issues can be very frustrating.","Q_Score":0,"Tags":"python,visual-studio-code","A_Id":74497531,"CreationDate":"2022-11-19T02:06:00.000","Title":"I have this issue with my vscode executing a python file","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I can enter into my terminal (wsl) python3 filename.py and the code executes in the terminal just fine. But when I hit the play button (Run Python File) I get errors\nC:\/Users\/user1\/AppData\/Local\/Programs\/Python\/Python311\/python.exe \"c:\/Online Learning\/Coder Academy\/Python\/Lesson-3\/test.py\"\nzsh: no such file or directory: C:\/Users\/user1\/AppData\/Local\/Programs\/Python\/Python311\/python.exe\nI don't see why if the code is executing fine from the terminal by typing in the command. Why can't I hit the play button without error?\nI've tried a lot of things including using extension Code-Runner. Uninstalling and re-installing various versions of python. I've tried pyenv, defining various different interpreter paths.\nI'm thinking it's not the set up of my python in wsl it's something to do with a setting in VSCode.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":181,"Q_Id":74497048,"Users Score":0,"Answer":"I'm not totally sure, but it seems to be complaining that \"python.exe\" doesn't exist. What I remember doing is checking if \"py.exe\" works and seeing if the problem is resolved. If so, go to where VSCode says Python is and copy py.exe to your desktop, rename it to python.exe and paste it back to the folder where py.exe was. It may not be good practice, but it was my workaround.
I guess the best thing to do is just to reinstall Python.","Q_Score":0,"Tags":"python,visual-studio-code","A_Id":74497243,"CreationDate":"2022-11-19T02:06:00.000","Title":"I have this issue with my vscode executing a python file","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"Traceback (most recent call last):\nFile \"\/home\/airflow\/.local\/lib\/python3.8\/site-packages\/airflow\/www\/templates\/airflow\/dags.html\", line 44, in top-level template code\n{% elif curr_ordering_direction == 'asc' and request.args.get('sorting_key') == attribute_name %}\nFile \"\/home\/airflow\/.local\/lib\/python3.8\/site-packages\/airflow\/www\/templates\/airflow\/main.html\", line 21, in top-level template code\n{% from 'airflow\/_messages.html' import show_message %}\nFile \"\/usr\/local\/lib\/python3.8\/site-packages\/flask_appbuilder\/templates\/appbuilder\/baselayout.html\", line 2, in top-level template code\n{% import 'appbuilder\/baselib.html' as baselib %}\nFile \"\/usr\/local\/lib\/python3.8\/site-packages\/flask_appbuilder\/templates\/appbuilder\/init.html\", line 12, in top-level template code\n{% block head_meta %}\nFile \"\/home\/airflow\/.local\/lib\/python3.8\/site-packages\/airflow\/www\/templates\/airflow\/dags.html\", line 63, in block 'head_meta'\n\nFile \"\/usr\/local\/lib\/python3.8\/site-packages\/flask\/helpers.py\", line 370, in url_for\nwith_categories: bool = False, category_filter: t.Iterable[str] = ()\nFile \"\/usr\/local\/lib\/python3.8\/site-packages\/flask\/app.py\", line 2216, in handle_url_build_error\nsubdomain = None\nFile \"\/usr\/local\/lib\/python3.8\/site-packages\/flask\/_compat.py\", line 39, in reraise\nFile \"\/usr\/local\/lib\/python3.8\/site-packages\/flask\/helpers.py\", line 357, in url_for\n# always in sync with the session object, which is not true for 
session\nFile \"\/usr\/local\/lib\/python3.8\/site-packages\/werkzeug\/routing.py\", line 2179, in build\nwerkzeug.routing.BuildError: Could not build url for endpoint 'Airflow.legacy_graph'. Did you mean 'Airflow.graph' instead?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":18,"Q_Id":74515301,"Users Score":0,"Answer":"While starting the Airflow webserver, I get the Airflow.legacy_graph error.\nHow can we resolve this kind of error?","Q_Score":0,"Tags":"python-3.x,airflow","A_Id":74515302,"CreationDate":"2022-11-21T07:05:00.000","Title":"Airflow 2.0, Could not build url for endpoint 'Airflow.legacy_graph'","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I'm struggling with some kind of connection problem.\nHere's the problem that I wanted to resolve:\n\nWhat I want to do is get video streaming data from an IP camera (RTSP).\nThe IP camera is attached to a router which has access to the internet.\nI want to connect to this IP camera from a remote computer.\nIP cam --- Router --- Internet --- My computer\nI know that I can do this by setting the port forwarding option of the router.\nHowever, I cannot set the option because the router is not mine, which means I cannot access the router's administration server (192.168.0.1).\nI'm trying to work around this by connecting a small edge computer (e.g., a raspberry pi) to the router's subnetwork and sending streaming data to my computer through the Internet.\nIP cam --------- Router --- Internet --- My computer\nminicomputer ---\nIt's certain that the minicomputer can access my computer through ssh, so I think it's possible to use the minicomputer as a proxy.\nWhat is the best way to get the IP camera's stream in my circumstances?\nPlease help.","AnswerCount":1,"Available
Count":1,"Score":1.2,"is_accepted":true,"ViewCount":58,"Q_Id":74558953,"Users Score":0,"Answer":"I think a good idea would be to use a VPN. Install a VPN server (openvpn, wireguard, etc...) on your minicomputer in the same network as your camera. Then connect to your vpn from your computer. Now you should be able to access the camera.\nI have a few ideas how to view the camera stream, depending on how you would normally access it.\n\nIf it is a piece of software to connect to the camera, install a desktop environment on your minicomputer and connect to it via VNC (more or less a linux equivalent to rdp on windows) or RDP. Then open the software and view your stream. It could be a bit laggy because it has to be transmitted two times (camera -> minipc -> your pc)\n\nIf you can access the stream via a url, you could set up a webserver (nginx or apache2) on your minicomputer and build a small HTML website that displays the stream. This should be more performant than the first solution, but involves a bit more tinkering. If you should decide to use this solution, I should have an example HTML page somewhere. Just let me know and I will try to find it and share it.\n\nDepending on how you set up your VPN server, maybe you can connect to your camera directly via its IP. To do that, your VPN server has to do some routing between the subnets.\n\n\nI know these are just some ideas from the top of my head, but I hope I can help a bit.
If you have more questions or I didn't explain it in an understandable way, feel free to ask again.","Q_Score":0,"Tags":"python,connection,streaming,router","A_Id":74559656,"CreationDate":"2022-11-24T10:09:00.000","Title":"How can I access to the IP camera connected to a subnetwork of a router without port forwarding?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"In April 2021 Kafka released a version with early access to eliminating its dependency on Zookeeper. I've read many posts (mostly from 2021) saying that it was still not a good idea to use those versions in prod because they were too new. Every tutorial for kafka-python I've read starts with building a local Kafka instance and running Zookeeper for that. Are those tutorials outdated when it comes to building the instance, or is it still better to download older Kafka versions and continue using Zookeeper?\nI have no code to show because it's more of a theoretical question.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":24,"Q_Id":74563293,"Users Score":0,"Answer":"Kafka clients newer than Kafka 0.9 have never needed to connect to Zookeeper.
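The point the Kafka answer makes about clients is visible directly in kafka-python's configuration: KafkaProducer and KafkaConsumer are pointed at broker bootstrap_servers, and no Zookeeper address appears anywhere in the client config. A sketch (the broker hosts are placeholders):

```python
# The arguments kafka-python's KafkaProducer/KafkaConsumer take point at
# brokers, never at Zookeeper:
producer_config = {"bootstrap_servers": ["broker1:9092", "broker2:9092"]}
consumer_config = {"bootstrap_servers": ["broker1:9092"], "group_id": "demo"}

print(any("zookeeper" in key
          for cfg in (producer_config, consumer_config)
          for key in cfg))  # → False
```

So whether the local broker runs ZooKeeper or KRaft only changes how the broker is started; the kafka-python tutorial code is unaffected.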
The brokers still require it, but that has nothing to do with Python.\nKafka 3.3.x announced KRaft mode as production ready, by the way.","Q_Score":0,"Tags":"apache-kafka,apache-zookeeper,kafka-python,kraft","A_Id":74567074,"CreationDate":"2022-11-24T15:47:00.000","Title":"When creating a local Kafka instance for kafka-python, is it a good idea using Kafka 2.8+ that don't need Zookeeper?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I've made a Python+Django+git docker container.\nNow, I would like to 'Attach to a running container..' with VSCode to develop, i.e. run and debug, a Python app inside.\nIs it a good idea? Or is it better to only set up VSCode to run the app inside the container?\nI don't want VSCode to make a docker container by itself.\nThanks.\nI tried to 'Attach to a running container..' but got 'error xhr failed...' etc.","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":109,"Q_Id":74572709,"Users Score":0,"Answer":"Visual Studio Code and Docker Desktop each offer a feature called \"Dev Containers\" (VSCode), \"Dev Environments\" (DD), or Codespaces (GitHub).\nIn this approach, a Docker container is created by scanning the source and generating a container that contains the development toolchain. Visual Studio then attaches to the container and allows you to develop even though you do not have node\/python3\/dotnet\/etc.
installed on your development PC.\nThe xhr error indicates something went wrong downloading a scanning image, or something else is strange about your project.\nThere is an optional Dockerfile that can be created if scanning fails to find an image; it is normally kept in a .devcontainers \/ .devenvironments folder depending on which of Docker \/ VSCode \/ GitHub \/ other you are using.\nYour project might also have one (or more) Dockerfiles that are used to package the running app up as a docker image, so don't be confused if you end up with 2. That's not a problem and is expected, really.","Q_Score":0,"Tags":"python,docker,visual-studio-code","A_Id":74573101,"CreationDate":"2022-11-25T12:19:00.000","Title":"How to use VSCode with the existing docker container","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a Django application with waitress (gunicorn doesn't work on windows) to serve it, because it's production code and it's based on Windows Server 2012. But I want the Django application to run in daemon mode; is it possible?\nDaemon mode - the app running without a command prompt opening\/visible; also it'll be helpful to open the shell without closing the server.
AutoStart if for some reason system has to restart.\nNote:\nLimitations: The project cannot be moved to UNIX based system.\nThird-Party applications like any .exe file cannot be used.\nYou cannot use Docker as it consumes a lot of space.","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":142,"Q_Id":74574627,"Users Score":0,"Answer":"Another solution - to use Docker, and in your docker you can use gunicorn or any other linux feature","Q_Score":0,"Tags":"python,django,windows,daemon,waitress","A_Id":74674579,"CreationDate":"2022-11-25T15:02:00.000","Title":"Django waitress- How to run it in Daemon Mode","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I've a django application with waitress (gunicorn doesn't work on windows) to serve it. Because its production code and its based on windows 2012 server. But I want the django application to run in daemon mode is it possible?\nDaemon mode - app running without command prompt opening\/visible also it'll be helpful to open the shell without closing the server. 
AutoStart if for some reason the system has to restart.\nNote:\nLimitations: The project cannot be moved to a UNIX-based system.\nThird-party applications like any .exe file cannot be used.\nYou cannot use Docker as it consumes a lot of space.","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":142,"Q_Id":74574627,"Users Score":0,"Answer":"Just add & at the end of the command","Q_Score":0,"Tags":"python,django,windows,daemon,waitress","A_Id":74574671,"CreationDate":"2022-11-25T15:02:00.000","Title":"Django waitress- How to run it in Daemon Mode","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I want to share a python class instance between my child processes that are created with subprocess.Popen.\nHow can I do it? What arguments of Popen should I use?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":27,"Q_Id":74602221,"Users Score":0,"Answer":"You can share a pickle over a named pipe.","Q_Score":0,"Tags":"python,process,multiprocessing,subprocess","A_Id":74602455,"CreationDate":"2022-11-28T14:37:00.000","Title":"Python: Share a python object instance between child processes","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"When I am trying to call the API from POSTMAN in an Airflow DAG, I am facing a 403 Forbidden error.\nI have enabled the headers for basic authentication with the username and password in Postman. In the airflow.cfg file, I have enabled auth_backend = airflow.contrib.auth.backends.password_auth. This error occurs when I attempt to work solely in Postman.
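The pickle-over-a-pipe idea from the subprocess.Popen answer can be shown with the pipe Popen already gives you (stdin); an explicit os.mkfifo named pipe works the same way when stdin is needed for other input. The payload here is a dict standing in for the instance's state, since unpickling a custom class would require the class to be importable in the child:

```python
import pickle
import subprocess
import sys

# State of the object we want the child to see (a dict pickles without the
# child needing to import our class definition):
payload = {"name": "model-a", "threshold": 0.5}

child = subprocess.Popen(
    [sys.executable, "-c",
     "import pickle, sys; obj = pickle.load(sys.stdin.buffer); print(obj['name'])"],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
)
out, _ = child.communicate(pickle.dumps(payload))
print(out.decode().strip())  # → model-a
```

Note this copies state at send time; for live sharing between processes, multiprocessing.Manager or shared memory is the usual route instead of Popen.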
When I copy the same URL and try it directly in the browser, I am able to access the link.\nI'm having trouble with authorization now that I've enabled authentication.\nI attempted to use the curl command but received the same forbidden error.\nThe airflow version is 1.10.","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":142,"Q_Id":74609067,"Users Score":2,"Answer":"The basic auth seems fine; it is base64 encoded already. 403 means you are authenticated in the application but this specific action is forbidden for you. In airflow there are different roles (admin\/dag manager\/operator) and not all roles are allowed to do DAG operations. Can you specify the user role and the operations you try to do? Keep in mind that the base64 auth string can be easily decoded to plain text, so people can see your username and password.\nIn the picture you have shared, the verb you are using is POST; opening the link in the browser is probably a GET operation, which is different in terms of permissions required.","Q_Score":0,"Tags":"python,postman,airflow","A_Id":74610905,"CreationDate":"2022-11-29T04:07:00.000","Title":"403 Forbidden in airflow DAG Triggering API","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am running python on an M1 Mac with Rosetta, on an x86_64 architecture.\nDuring the execution I need to use subprocess.run to launch some external program. However, that program needs to run under the arm64 architecture.\nIs there a possible solution for doing that?
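For the Airflow 403 comparison between Postman, curl, and the browser, it can help to build the request by hand and inspect exactly which verb and Authorization header go over the wire. A standard-library sketch; the URL, credentials, and the Airflow 1.10 experimental endpoint path are all placeholders:

```python
import base64
import urllib.request

url = "http://localhost:8080/api/experimental/dags/my_dag/dag_runs"  # placeholder
token = base64.b64encode(b"user:password").decode()  # placeholder credentials

req = urllib.request.Request(
    url,
    data=b"{}",
    method="POST",
    headers={"Authorization": "Basic " + token,
             "Content-Type": "application/json"},
)
print(req.get_header("Authorization"))  # → Basic dXNlcjpwYXNzd29yZA==
# urllib.request.urlopen(req)  # a 403 here = authenticated but not permitted
```

As the answer notes, a browser GET and a POST to the trigger endpoint can have different permission requirements, so comparing the two requests verb-for-verb is the useful diagnostic.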
Simply running from an arm64 terminal does not do the trick, and it gets overridden by the Python architecture.\nI am using python==3.8.2.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":66,"Q_Id":74615048,"Users Score":0,"Answer":"The root of the problem was actually not in the subprocess.run, the process I was trying to spawn was compiled such that the binary is multi-arch support, supporting both arm64 and x86_64 (the support for the latter was mainly launching the program and crashing after not supported error).\nAs the call for subprocess.run came from a x86_64 architecture, the binary defaulted to that architecture.\nThe solution was to just compile the binary only for arm64, with no multi-arch support. After that the process was spawned with the correct architecture, even tho the call was made from a different architecture.","Q_Score":0,"Tags":"python,macos,subprocess,apple-m1,rosetta-2","A_Id":74639952,"CreationDate":"2022-11-29T13:42:00.000","Title":"Change subprocess.run architecture from x86 to arm","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a Docker container running python code on an ubuntu 20 image, the host is also ubuntu 20.\nInconsistently sometimes the container just gets stuck \/ freezes.\nLogs stop being added to the console, the docker's status is \"running\".\nEven when I try to kill the process that runs the python code inside the Docker, it does not affect it, the process does not die.\nRestarting the container solves it.\nI put a Python code into my service that listens to a specific signal and when I send the signal it should print the stack trace for me, but as mentioned, the processor does not respond to my signals...\nDoes anyone have an idea what is causing this or how I can debug it?","AnswerCount":1,"Available 
Count":1,"Score":1.2,"is_accepted":true,"ViewCount":352,"Q_Id":74625646,"Users Score":0,"Answer":"The problem was that the code used the requests.post function without setting a timeout, the server was probably not available or changed address (Docker's internal network) and it just waited there.","Q_Score":1,"Tags":"python,docker,ubuntu,debugging,devops","A_Id":75444209,"CreationDate":"2022-11-30T09:45:00.000","Title":"Docker stuck \\ frozen in \"running\" status","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am trying to install octoprint on ubuntu 18 using python 3.7\nThe installation fails with the message:\nModuleNotFoundError: No module named 'wrapt'\nI naturally tried installing\npip3 install wrapt\nAnd it fails too with the same message. It looks like I am in a loop where I need wrapt, but wrapt needs itself.\nplease advise","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":441,"Q_Id":74646961,"Users Score":1,"Answer":"If your using an Anaconda Virtiual Enviorment try using conda install wrapt That worked for me.","Q_Score":0,"Tags":"python,python-3.x,octoprint","A_Id":74868625,"CreationDate":"2022-12-01T19:03:00.000","Title":"Installing Module wrapt says - ModuleNotFoundError: No module named 'wrapt'","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm deploying django app to pythonanywhere where i used APScheduler for automatically send expire mail whenever subscription end date exceed.\nI don't know how to enable threads, so that my web app runs perfectly on pythonanywhere.","AnswerCount":1,"Available 
Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":64,"Q_Id":74651948,"Users Score":1,"Answer":"On hosting platforms like PythonAnywhere, there might be multiple copies of your site running at different times, in order to serve the traffic that you get. So you should not use an in-process scheduler to perform periodic tasks; instead, you should use the platform's built-in scheduled tasks function.","Q_Score":0,"Tags":"django,django-rest-framework,deployment,pythonanywhere","A_Id":74655227,"CreationDate":"2022-12-02T07:00:00.000","Title":"The scheduler seems to be running under uWSGI, but threads have disabled.You must run uWSGI with the --enable-threads option for the scheduler to work","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I am designing a string-based game where the real-time positions of characters are represented as a string as follows:\n -----A-----o-----\nI am changing the position of the character \"A\" based upon user keyboard inputs,\neg:\nupdated position:\n --------A--o-----\nI don't want to print the string line by line as it gets updated; instead I want to modify it in place every time it is output on the command line, as the constraint I am working with is:\nThe entire game map should run on a single line on the command line -\nchanging the state of the game should not spawn a new line every time.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":62,"Q_Id":74655251,"Users Score":0,"Answer":"When printing the output, use the end parameter of the print statement and set it to \"\\r\" to move the output position back to the start of the line.","Q_Score":0,"Tags":"python,string,input,output","A_Id":74659375,"CreationDate":"2022-12-02T11:41:00.000","Title":"How to get a continuously changing user input dependent output in 
the same place at command line","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a ray cluster that is started manually on several nodes using ray start. How can I schedule tasks to run on the cluster such that they are exclusive, i.e., no tasks run in parallel on one node?\nOne option would be to specify each node as having only 1 CPU. Another would be to introduce a custom resource 'node', with 1 instance per node.\nBut this seems like a common scenario; is there a cleaner way to handle this?","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":41,"Q_Id":74692174,"Users Score":1,"Answer":"Using custom resources is the way to go right now.","Q_Score":0,"Tags":"python,ray","A_Id":74733743,"CreationDate":"2022-12-05T17:34:00.000","Title":"How to stop ray from running multiple tasks on the same cluster node","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have an embedded linux system on which I need to run a python script whenever it boots. The python script needs to have a terminal interface so the user can interact and see outputs. The script also spawns another process to transfer large amounts of data over SPI; this part was written in C.\nI've managed to get the script to start on launch and have terminal access by adding\n@reboot \/usr\/bin\/screen -d -m python3 \/scripts\/my_script.py\nto the crontab. I can then do \"screen -r\" and interact with the script. However, if launched this way, the script fails to start the external SPI script. 
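The "\r" technique from the answer above can be sketched like this; the track size and symbols are invented for illustration:

```python
import sys
import time

def render(width: int, player: int, goal: int) -> str:
    # Build the single-line game map, e.g. ---A----o---
    track = ["-"] * width
    track[goal] = "o"
    track[player] = "A"
    return "".join(track)

def draw(state: str) -> None:
    # "\r" returns the cursor to column 0, so the next draw overwrites
    # this one instead of spawning a new line.
    sys.stdout.write("\r" + state)
    sys.stdout.flush()

for pos in range(5):
    draw(render(12, pos, 8))
    time.sleep(0.05)
print()  # move off the game line when done
```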
In python I launch the script with subprocess.Popen:\nproc=subprocess.Popen([\".\/spi_newpins\",\"-o\",\"\/media\/SD\/\"+latest_file])\nand this works perfectly whenever I manually launch the script, even within screen. Just not when it is launched by crontab. Does anyone have any ideas on how to get the spi subprocess to also work from crontab?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":20,"Q_Id":74731780,"Users Score":0,"Answer":"Fixed now; I had to add an absolute path to the spi_newpins function call:\nproc=subprocess.Popen([\"\/scripts\/.\/spi_newpins\",\"-o\",\"\/media\/SD\/\"+latest_file])","Q_Score":0,"Tags":"python,cron,embedded-linux","A_Id":74733159,"CreationDate":"2022-12-08T14:26:00.000","Title":"Embedded linux start python from crontab with terminal access and subprocess permissions","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am new to airflow and would appreciate your help:\nMy project looks like this:\n--AIRFLOWHOME\n----dags\n------my_dag.py\n------labs\n--------_init_.py\n--------db_connections.py\nIn the file my_dag.py I'm trying to import my module like this:\nfrom labs import db_connection\nIt looks fine, but when I try to run the following command\nairflow dags list-import-errors\nI get an error:\nImportError: cannot import name 'db_connection' from 'labs'\nMy airflow is not installed on Docker.\nWhat am I doing wrong?\nI tried to do this,\nsys.path.append('C:\\Users\\xxxx\\AIRFLOWHOME\\dags\\labs')\nbut it didn't help.\nThank You!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":31,"Q_Id":74768032,"Users Score":0,"Answer":"Airflow allows you to use your own Python modules in the DAG and in the Airflow configuration. 
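The crontab fix above generalizes: cron starts jobs with a minimal environment and a different working directory, so a relative path like ./spi_newpins does not resolve the way it does in an interactive shell. A sketch of building the path explicitly; /bin/echo stands in for the real binary here:

```python
import os
import subprocess

def launch(tool_dir: str, binary: str, *args: str):
    # Resolve the executable against a known directory instead of relying
    # on cron's working directory or PATH.
    path = os.path.join(tool_dir, binary)
    return subprocess.run([path, *args], capture_output=True, text=True)

# Stand-in for launch("/scripts", "spi_newpins", "-o", "/media/SD/" + latest_file):
result = launch("/bin", "echo", "-o", "/media/SD/latest")
print(result.stdout.strip())
```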
The following article will describe how you can create your own module so that Airflow can load it correctly, as well as diagnose problems when modules are not loaded properly.\nOften you want to use your own python code in your Airflow deployment, for example common code, libraries, you might want to generate DAGs using shared python code and have several DAG python files.\nYou can do it in one of those ways:\nadd your modules to one of the folders that Airflow automatically adds to PYTHONPATH\nadd extra folders where you keep your code to PYTHONPATH\npackage your code into a Python package and install it together with Airflow.","Q_Score":0,"Tags":"python,airflow","A_Id":74768067,"CreationDate":"2022-12-12T07:48:00.000","Title":"How can i import my own modules from DAG airflow","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I get this error \"ImportError: The 'pyparsing' package is required\" after trying to run .py file with from google.cloud import bigquery line. Import was working before and is still working in the Jupyter Notebook or in Ipython.\nI looked at existing options here and tried:\n\npip install pyparsing\ndowngrade setuptools\nuninstall pyparsing and setuptools and installing them back\nuninistall and purge pip and install it back\n\nDoes anyone have suggestions? Thanks","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":28,"Q_Id":74799058,"Users Score":1,"Answer":"I found the problem. It is silly, but happens to me from time to time. Do not name files in your project like - html.py =) . It was in one of the folders of my project. Really annoying, but nevertheless, hope it will help someone. 
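Two details in the Airflow question are worth checking against the advice above: the listing shows _init_.py (the package marker must be named __init__.py, with double underscores), and the import says db_connection while the file is db_connections.py. A self-contained sketch of the working layout, built in a scratch directory so it runs anywhere:

```python
import os
import sys
import tempfile

# Recreate dags/labs/{__init__.py, db_connections.py} in a scratch folder.
root = tempfile.mkdtemp()
pkg = os.path.join(root, "labs")
os.makedirs(pkg)
open(os.path.join(pkg, "__init__.py"), "w").close()    # double underscores!
with open(os.path.join(pkg, "db_connections.py"), "w") as fh:
    fh.write("CONNECTION_STRING = 'placeholder'\n")

# Airflow puts the dags/ folder itself on sys.path; we mimic that here.
sys.path.append(root)
from labs import db_connections                         # name must match the file

print(db_connections.CONNECTION_STRING)
```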
Maybe you have the same problem with a different file name, so look out for files with commonly used names!)","Q_Score":0,"Tags":"python,google-bigquery,setuptools,pyparsing","A_Id":74803188,"CreationDate":"2022-12-14T13:32:00.000","Title":"Bigquery import asks for pyparsing in shell run","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am running a command os.system(\"unit run\" + directoryPath + \" urun shell\"), which opens the shell prompt of the unit. How should I run commands on that shell prompt, which is a whole new prompt opened up by Python?\nI tried executing the command os.system(\"unit run\" + directoryPath + \" urun shell \/c command\"), but that didn't work as I was expecting; the command should have run on the shell prompt.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":25,"Q_Id":74803234,"Users Score":0,"Answer":"As far as I know you can just call os.system() again with your shell command.","Q_Score":0,"Tags":"python,automation,teamcenter","A_Id":74803359,"CreationDate":"2022-12-14T19:18:00.000","Title":"I am executing os.system (\"unit run\"+ directoryPath+\" urun shell\"), which opens the shell prompt of the unit, how to run commands on the shell prompt?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"This is the error I am receiving:\nRuntimeError: The 'apxs' command appears not to be installed or is not executable. Please check the list of prerequisites in the documentation for this package and install any missing Apache httpd server packages.\nHow can I get around this? I have received this while trying to install different packages. I am working on a Django project. I have already made sure the apache2-dev package is installed. I am developing on linux.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":171,"Q_Id":74829709,"Users Score":0,"Answer":"You can add some dependencies for apache2:\nsudo apt-get -y install apache2 apache2-utils apache2-dev\nThis helped me get around the missing apxs.","Q_Score":1,"Tags":"python,django","A_Id":76635507,"CreationDate":"2022-12-16T20:59:00.000","Title":"Receiving Error: 'apxs' command appears not to be installed","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I have a RHEL 7 Linux server using Apache 2.4 as the httpd daemon. One of the pages served by Apache is a simple https form that is generated using Python 3.11. Currently, the form is submitting and being processed properly, but we have no way to track where the form was submitted from.\nIdeally, there would be a field for users to enter their user name, but we have no way of validating if the user name is valid or not.\nI would like to add a hidden field to the form that would contain one of the following:\n\nUser name used to log into the client's computer from where the form was submitted.\nComputer name of the client's computer from where the form was submitted.\nIP address of the client's computer from where the form was submitted.\n\nI do not care if this data is discovered by Python while the page is being generated, or by a client side script embedded in the generated web page.\nThe majority of users will be using Windows 10 and Chrome or Edge as their browser, but there will be Apple and Linux users and other browsers as well.\nIs this possible? If so, how?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":15,"Q_Id":74880709,"Users Score":0,"Answer":"Would you like every website to have access to your local user- or computer-name, or other local information available to your browser? While there is an awful lot of information available to webapps, this privacy invasion is not.\nThe server will have a record of the sending IP address though, naturally - even without it being part of a form.\nAs to the \"how\": The Python script that processes the submitted form does have access to the request parameters, and with them typically the remote IP address. What you do with it (e.g. save it) is up to you. You'll obviously also find the remote IP address in the Apache logs - but there it's disassociated from the actual form submission.","Q_Score":0,"Tags":"python,apache,authentication,https,rhel","A_Id":74887410,"CreationDate":"2022-12-21T19:06:00.000","Title":"How to identify the user or machine making an HTTP request to an Apache web server on RHEL 7 using server side Python or client side script?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"Background:\nIn WSL2 (ubuntu 20.04) I created a python virtual environment inside a directory using the command python3 -m venv venv. My system's python version was set to python3.11 (after downloading) via sudo update-alternatives --config python3 and then choosing the version. I noticed I was having some errors about missing modules when I started WSL2 (happening after a computer restart). I read this was because I was using a different python version than the one ubuntu 20.04 came with, so I switched back to 3.8 via the config menu as before. 
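As the answer above notes, of the three candidate hidden-field values only the client IP ever reaches the server. Apache exposes it to CGI scripts through the environment (WSGI apps read the same value from the request environ); a sketch, with the address below simulated rather than taken from a real request:

```python
import os

def client_ip() -> str:
    # Apache sets REMOTE_ADDR for CGI scripts; WSGI applications read the
    # same value from the per-request environ. The visitor's local user
    # name and computer name are never exposed to the server.
    return os.environ.get("REMOTE_ADDR", "unknown")

# Simulate what Apache would provide for one request (documentation IP):
os.environ["REMOTE_ADDR"] = "203.0.113.7"
print(client_ip())
```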
I am also using VS code that's connected to my WSL2.\nThese are some of the contents of my venv directory: venv\/bin\/python venv\/bin\/python3 venv\/bin\/python3.11 venv\/bin\/pip venv\/bin\/pip3\nQuestion:\nAfter activating my virtual env via source venv\/bin\/activate, when I do python3 --version I still get a version of 3.8.10 despite creating the virtual environment with 3.11. I was able to get the interpreter set to 3.11 in VS code. I know I was in the virtual environment since my command prompt had (venv) in front. I went into the python console while in the virtual env and did import sys and sys.path; this was my output: ['', '\/usr\/lib\/python38.zip', '\/usr\/lib\/python3.8', '\/usr\/lib\/python3.8\/lib-dynload']. Why isn't the python version changing; am I misunderstanding something or did I not do something correctly? Seems like pip isn't working either but works when I switch my system python to 3.11 (I tried installing it on 3.8 but it said it was already installed).\nSolved:\nAnswered below; just re-created the virtual env while making sure my system python version was 3.11 (may have been some mixup earlier).","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":45,"Q_Id":74914720,"Users Score":0,"Answer":"I deleted my venv directory and recreated my virtual environment while on python3.11. This has resolved my issue.","Q_Score":2,"Tags":"python,python-3.x,virtualenv,wsl-2","A_Id":74915745,"CreationDate":"2022-12-25T17:05:00.000","Title":"Python versions are not changing despite activating virtual environment in WSL2","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"Background:\nIn WSL2 (ubuntu 20.04) I created a python virtual environment inside a directory using the command python3 -m venv venv. My system's python version was set to python3.11 (after downloading) via sudo update-alternatives --config python3 and then choosing the version. I noticed I was having some errors about missing modules when I started WSL2 (happening after a computer restart). I read this was because I was using a different python version than the one ubuntu 20.04 came with, so I switched back to 3.8 via the config menu as before. I am also using VS code that's connected to my WSL2.\nThese are some of the contents of my venv directory: venv\/bin\/python venv\/bin\/python3 venv\/bin\/python3.11 venv\/bin\/pip venv\/bin\/pip3\nQuestion:\nAfter activating my virtual env via source venv\/bin\/activate, when I do python3 --version I still get a version of 3.8.10 despite creating the virtual environment with 3.11. I was able to get the interpreter set to 3.11 in VS code. I know I was in the virtual environment since my command prompt had (venv) in front. I went into the python console while in the virtual env and did import sys and sys.path; this was my output: ['', '\/usr\/lib\/python38.zip', '\/usr\/lib\/python3.8', '\/usr\/lib\/python3.8\/lib-dynload']. Why isn't the python version changing; am I misunderstanding something or did I not do something correctly? Seems like pip isn't working either but works when I switch my system python to 3.11 (I tried installing it on 3.8 but it said it was already installed).\nSolved:\nAnswered below; just re-created the virtual env while making sure my system python version was 3.11 (may have been some mixup earlier).","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":45,"Q_Id":74914720,"Users Score":0,"Answer":"By changing the selection in sudo update-alternatives --config python3 you also change the selected python version for the chosen virtual environment (at least when using venv; it might be different with other tools).\nThat can cause issues, because when creating a new virtual environment envname using venv from a specific python version xx.xx, a directory named pythonxx.xx is created in \/envname\/lib\/, and inside it a directory named site-packages that contains the packages installed by the pip of this specific environment.\nSo changing back to the original python version of the environment through sudo update-alternatives --config python3 should solve the issue, and probably the errors about missing modules are due to the incompatibility of the currently selected python version with the original version from which you installed the virtual environment.\nPersonally, to avoid confusion, I name my virtual environments with the python version as a suffix, e.g. envname_py3.11.1. 
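Both answers to this question converge on the same practical rule: a venv is pinned to the interpreter that created it, so recreate it from the Python you actually want. A runnable illustration that the created environment's python matches its creator (the current interpreter stands in for python3.11 here):

```python
import os
import subprocess
import sys
import tempfile
import venv

# Create the environment from the interpreter you actually want to use;
# on a real system you would run: python3.11 -m venv venv
target = os.path.join(tempfile.mkdtemp(), "venv")
venv.create(target, with_pip=False)

# The venv's python is tied to its creator, not to the current
# update-alternatives selection. (POSIX layout assumed: bin/python.)
out = subprocess.run(
    [os.path.join(target, "bin", "python"), "--version"],
    capture_output=True, text=True,
).stdout
print(out.strip())
```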
But there might be a better method which I am not aware of.","Q_Score":2,"Tags":"python,python-3.x,virtualenv,wsl-2","A_Id":74914934,"CreationDate":"2022-12-25T17:05:00.000","Title":"Python versions are not changing despite activating virtual environment in WSL2","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have been looking to make two or more microservices in django communicate with each other. I've researched it and didn't get proper info about it. What I understood is that each microservice application is completely independent of the others, including the database. Now, how do I make the microservices communicate with one another? There are 2 methods: synchronous and asynchronous.\nI don't want to use synchronous. How do I make the API endpoints communicate in an asynchronous way? I found some message brokers like RabbitMQ, Kafka, gRPC... Which is the best broker, and how do I communicate using those services? I didn't get any proper guidance. I'm willing to learn; can anybody please explain with some example? It would be a huge boost for my work.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":212,"Q_Id":74926266,"Users Score":0,"Answer":"There are a few different ways to communicate between microservices in a Django Rest Framework (DRF) application. Here are a few options:\nUse HTTP requests: One option is to use HTTP requests to send data between microservices. This can be done using the requests library in Python or using JavaScript's fetch API in the frontend.\nUse a message queue: Another option is to use a message queue, such as RabbitMQ or Kafka, to send messages between microservices. 
This can be useful if you need to decouple the services and handle asynchronous communication.\nUse a database: You can also use a database, such as PostgreSQL or MongoDB, to store data that is shared between microservices. This can be done using Django's ORM or a database driver in your preferred language.\nWhich method you choose will depend on your specific requirements and the nature of the communication between your microservices.","Q_Score":0,"Tags":"python,django,rabbitmq,microservices,rabbitmq-exchange","A_Id":74926309,"CreationDate":"2022-12-27T06:21:00.000","Title":"How to communicate two or more microservices in django rest framework?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"How can I install a python module in the Android Linux environment Termux? If anyone has an idea about this topic, let me know.\nI think I can solve my problem from here","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":193,"Q_Id":74939034,"Users Score":0,"Answer":"You can try pkg install python to install python, then verify with python --version. If you want a specific python version, use pkg search python and verify that your desired version is available, then pkg install python3.9, for example, to install python3.9.\nWith python installed, you can install your modules or packages using pip like this: pip install your_module_name.","Q_Score":0,"Tags":"python,linux,python-2.7,module,termux","A_Id":74940238,"CreationDate":"2022-12-28T10:23:00.000","Title":"How to use python module in termux","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"How can I install a python module in the Android Linux environment Termux? If anyone has an idea about this topic, let me know.\nI think I can solve my problem from here","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":193,"Q_Id":74939034,"Users Score":0,"Answer":"Use these commands:\npkg install python\npkg install python2\npkg install python3","Q_Score":0,"Tags":"python,linux,python-2.7,module,termux","A_Id":74941417,"CreationDate":"2022-12-28T10:23:00.000","Title":"How to use python module in termux","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I was hoping you would be able to help me get pylint fully functional in nvim.\nEnv:\nmacOS Ventura 13.1\nnvim v0.8.1\npylint 2.15.9\npython 3.11.1 (accessed through 'python3')\npip 22.3.1 (accessed through 'pip3')\nI am using the latest versions of null-ls and Mason and the related libraries to tie all of this together.\nMy problem is that pylint does not recognise any of the packages I have fetched with pip3. My code executes as expected when I run it using python3, so the packages are installed and the modules are loaded correctly. 
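For the pylint question above: pylint resolves imports against the interpreter and paths it runs under, so an external pylint has to be told where pip3 put the packages. Instead of passing --init-hook on every run, the hook can live in a .pylintrc; the path below is a placeholder for your actual site-packages directory (discoverable via python3 -c "import site; print(site.getsitepackages())"), since the question's own path is truncated.

```ini
[MASTER]
; Placeholder path: replace with your real site-packages directory.
init-hook='import sys; sys.path.append("/path/to/your/site-packages")'
```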
I have checked :Mason in nvim and it has access to the right python and pip executables.\nIf I install pylint outside Neovim, it gives me the same error. I can correct it by running it with --init-hook=\"import sys; sys.path.append('\/Library\/Fr...)\" which points to the directory where pip3 saves the packages that are installed.\nHow do I check which paths pylint uses to search for packages to import? And how can I neatly add the right paths to direct it to the correct place?\nI seem to be missing some fundamental piece of information to understand the problem. Any ideas?\nThank you all so much for the help and support! <3 And I look forward to continue my coding journey!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":904,"Q_Id":74943296,"Users Score":0,"Answer":"I reinstalled pylint using mason and that fixed it for me.","Q_Score":2,"Tags":"python,macos,pylint,neovim,pythonpath","A_Id":76198263,"CreationDate":"2022-12-28T17:33:00.000","Title":"Pylint in Neovim using Mason and null-ls cannot load packages","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am new to fabric.\nI am running a command as res = fabric.api.sudo(f\"pip install {something}\",user=user)\nI expect the command to return stderr or abort when the package\/version is not found i.e. pip install fails. However I am getting a res.return_code=0, res.stderr, as empty on an error condition. I do get the ERROR message on stdout. Is it expected behavior ? How can I make the stderr have the error condition and the correct return_code?\nVersion:\nUsing Fabric3 with version 1.14.post1\nAny help would be great, thanks.","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":19,"Q_Id":74955225,"Users Score":0,"Answer":"The command had multiple commands with pipes. 
So I needed to leverage PIPESTATUS to get the right return code.","Q_Score":0,"Tags":"python,python-3.x,paramiko,invoke,fabric","A_Id":74995229,"CreationDate":"2022-12-29T19:16:00.000","Title":"fabric.api.sudo() returning empty stderr on error condition","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have correctly got a microbit working with serial communication via COM port USB.\nMy aim is to use COM over bluetooth to do the same.\nSteps I have taken:\n\n(on windows 10) bluetooth settings -> more bluetooth settings -> COM ports -> add -> incoming\nin device manager changed the baud rate to match that of the microbit (115,200)\npaired and connected to the microbit\ntried to write to both the serial and uart bluetooth connection from the microbit to the PC (using a flashed python script)\nusing Tera Term, setup -> serial port... -> COM(number - in my case 4), with all necessary values (including 115,200 baud rate)\n\nAfter doing all of these, I see no incoming message on Tera Term. 
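The PIPESTATUS fix above can be made concrete. When the remote command is a pipeline, the exit status fabric sees is the last stage's; bash's pipefail option (or an explicit exit ${PIPESTATUS[0]} appended to the piped command) surfaces the failing stage instead. A local sketch of the difference, using subprocess in place of fabric.api.sudo:

```python
import subprocess

def pipeline_status(cmd: str, pipefail: bool = False) -> int:
    # Without pipefail, a pipeline's exit status is the LAST stage's;
    # with it, the first failing stage's status wins.
    opts = ["-o", "pipefail"] if pipefail else []
    return subprocess.run(["bash", *opts, "-c", cmd]).returncode

masked = pipeline_status("false | true")                   # 0: failure hidden
surfaced = pipeline_status("false | true", pipefail=True)  # 1: failure reported
print(masked, surfaced)
```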
Have I missed anything?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":44,"Q_Id":74963246,"Users Score":1,"Answer":"This is not directly possible via BLE UART communication because it uses different protocols (as mentioned above by ukBaz).\nYou are able, however, to communicate via custom BLE libraries such as bleak.\nBleak has some good examples on its github repo of how to scan GATT characteristics and services to find the TX and RX characteristics of your BLE device.\nFrom there you're able to connect to the microbit directly over bluetooth and read and write to its GATT table rather than using the proprietary BLE protocols etc.\nI'll make a tutorial at some point and link it back here when it's done.","Q_Score":0,"Tags":"python,bluetooth,serial-port,bbc-microbit,spp","A_Id":75134319,"CreationDate":"2022-12-30T15:34:00.000","Title":"Using serial ports over bluetooth with micro bit","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I deployed a model using an Azure ML managed endpoint, but I found a bottleneck.\nI'm using Azure ML Managed Endpoint to host ML models for object prediction. Our endpoint receives the URL of a picture and is responsible for downloading and predicting the image.\nThe problem is the bottleneck: each image is downloaded one at a time (synchronously), which is very slow.\nIs there a way to download images async or to create multiple threads? 
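Downloading images is I/O bound, so the one-at-a-time bottleneck described in the question above can be relaxed with a thread pool, whatever storage backend ends up hosting the images. A sketch; fetch_one is a stub standing in for the real download function:

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_all(urls, fetch_one, max_workers=8):
    # Each worker thread spends most of its time waiting on the network,
    # so the downloads overlap instead of running one after another.
    # Results come back in the same order as the input URLs.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(fetch_one, urls))

# Stub download so the sketch is runnable without network access:
images = fetch_all(["img1.jpg", "img2.jpg"], lambda url: f"bytes-of-{url}")
print(images)
```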
I expected a way to make if faster.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":42,"Q_Id":74986098,"Users Score":0,"Answer":"We recommend to use Azure Blob storage to host the images and then use blob storage SDK to fetch the images.","Q_Score":1,"Tags":"python,machine-learning,azure-machine-learning-service,azure-machine-learning-studio","A_Id":75105118,"CreationDate":"2023-01-02T19:00:00.000","Title":"How to use Async or Multithread on Azure Managed Endpoint","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":1},{"Question":"The organization I work for solely relies on windows task scheduler for running their daily python scripts. However, this makes it hard to monitor all the scripts and can be a bit unreliable at times.\nAlso; I can't imagine that it is best practice for a medium sized company to use windows task scheduler to automatically run Python scripts.\nWhat is best practice in this case? I heard from other that Azure is frequently used but this is not possible for us yet. I heard of applications like cron but it seems that these are mostly used for personal use.","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":53,"Q_Id":75028214,"Users Score":0,"Answer":"It depends on your needs and your constraints, also you must consider costs.\nHowever there are plenty solution that you can use:\n\nWindows Task Scheduler: it can be suffiscent except if you are\nfinding it unreliable or difficult to manage.\n\nCloud: Azure or another provider such as Amazon Web Services (AWS) or\nGoogle Cloud Platform (GCP), which also offer scheduled execution of\nscripts. for example with azure, you can create a \"Logic App\" that is\ntriggered by a specified schedule and runs your Python script. 
You can also use Azure Functions, which is a serverless computing platform. With AWS, there is Lambda, which I have tested; it is a good option to handle parallelism and to optimize costs.\n\ncron: cron is a Unix utility that allows you to schedule scripts or other commands to run automatically. I personally use it to automate running some processes on an Ubuntu system. For example, to run your script every day at 8:00 am you can use:\n0 8 * * * \/path\/to\/script.py\n\nThird-party scheduling tools: There are also a number of third-party tools available, such as Jenkins, that allow you to schedule scripts.","Q_Score":0,"Tags":"python,automation,scheduling","A_Id":75028405,"CreationDate":"2023-01-06T07:43:00.000","Title":"For a medium sized company, what is the best, most consistent way to schedule Python scripts which have to run every day?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I want to open files from a webpage. For example, when we try to download a torrent file it redirects us to the uTorrent app, which continues the work. I also want to open a local file somehow using OS software, like a video file using PotPlayer. Is there any possible solution for me, like making an autorun on the PC to run that?
Whatever it may be, please help me.\ud83d\ude14\ud83d\ude14\nI searched and found a solution for opening a piece of software using a protocol, but that way I cannot open a file in that software.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":44,"Q_Id":75031941,"Users Score":0,"Answer":"The link acts as a magnet, so your torrent application is opened; maybe delete the torrent for some time until you finish the project. I know how to open an image from local files in HTML, but it will only be visible to you; you can also do audio and video files using ","Q_Score":0,"Tags":"javascript,python,html,protocols","A_Id":75032911,"CreationDate":"2023-01-06T14:03:00.000","Title":"Cannot open a local file from webpage","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I want to open files from a webpage. For example, when we try to download a torrent file it redirects us to the uTorrent app, which continues the work. I also want to open a local file somehow using OS software, like a video file using PotPlayer. Is there any possible solution for me, like making an autorun on the PC to run that?
Whatever it may be, please help me.\ud83d\ude14\ud83d\ude14\nI searched and found a solution for opening a piece of software using a protocol, but that way I cannot open a file in that software.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":44,"Q_Id":75031941,"Users Score":0,"Answer":"Opening a specific file in a specific piece of software would usually depend on passing some URL parameters to the protocol URL of the app (e.g., opening a file in VS Code would use a URL like vscode:\/\/\/Users\/me\/file.html, but this functionality would have to be explicitly handled by the app itself, so the solution for each app would be different).\nOtherwise, if the app doesn't support opening a specific file through a URL, you'd have to use some scripting software (e.g. AppleScript if you're on macOS) to dynamically click\/open certain programs on a user's computer.","Q_Score":0,"Tags":"javascript,python,html,protocols","A_Id":75031988,"CreationDate":"2023-01-06T14:03:00.000","Title":"Cannot open a local file from webpage","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"How can I change my Python scripts while a bash script that runs them is simultaneously running, without the bash script picking up the new changes?\nFor example, I run bash script.sh whose content is\n\npython train1.py\npython train2.py\n\nWhile train1.py is running, I edit train2.py. This means that train2.py will use the old code, not the new one.\nHow do I set things up such that train2.py uses the old code?\nRunning and editing on two different PCs is not really a solution, since I need the GPU to debug for editing.
Merging them is also not a good idea because of the abstraction.\nSpecs:\nRemote server\nUbuntu 20.04\nPython - PyTorch\nI imagine there is some git solution but I have not found one.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":37,"Q_Id":75040517,"Users Score":0,"Answer":"Any changes to train2.py that are committed to disk before the bash script executes train2.py will be used by the script.\nThere is no avoiding that, because the contents of train2.py are not loaded into memory until the shell attempts to execute train2.py. That behaviour is the same regardless of the OS distro or release.\nKeep the \"master\" for train2.py in a sub-directory, then have the bash script remove train2.done at the start of the script, and touch train2.done when it has completed that step.\nThen have a routine that only \"copies\" train2.py from the subdir to the production dir if it sees the file train2.done is present, and waits for it if it is missing.\nIf you are doing this constantly during repeated runs of the bash script, you probably want to have the script that copies train2.py touch train2.update before copying the file and remove that after a successful copy of train2.py ... then have the bash script check for the presence of train2.update and, if present, go into a loop with a short sleep, then check for the presence again, continuing with the script ONLY once that file has been removed.","Q_Score":0,"Tags":"python,bash,deep-learning,version-control","A_Id":75044583,"CreationDate":"2023-01-07T12:58:00.000","Title":"Editing Python scripts while running bash script that contains the Python scripts","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a problem starting Playwright in Python maximized.
I found some articles for other languages, but they don't work in Python; also, nothing is written about maximizing the window in Python in the official documentation.\nI tried browser = p.chromium.launch(headless=False, args=[\"--start-maximized\"])\nand it starts maximized but then automatically restores back to the default small window size.\nAny ideas?\nThanks","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":45,"Q_Id":75144059,"Users Score":2,"Answer":"I just found the answer:\nI also need to set the following and it works: browser.new_context(no_viewport=True)","Q_Score":2,"Tags":"python,playwright,playwright-python","A_Id":75144132,"CreationDate":"2023-01-17T09:03:00.000","Title":"Python Playwright start maximized window","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am building a DWH based on data I am collecting from an ERP API.\nCurrently, I am fetching the data from the API based on an incremental mechanism I built using Python: the Python script fetches all invoices whose last modified date is in the last 24 hours and inserts the data into a \"staging table\" (no changes are required during this step).\nThe next step is to insert all data from the staging area into the \"final tables\". The final tables include primary keys according to the ERP (for example, invoice number).\nThere are no primary keys defined on the staging tables.\nFor now, I am putting aside the data manipulation and transformation.\nIn some cases, it's possible that a specific invoice is already in the \"final tables\", but then the user updates the invoice in the ERP system, which causes the Python script to fetch the data again from the API into the staging tables.
In that case, when I try to insert the invoice into the \"final table\", I will get a conflict due to the primary key restriction on the \"final tables\".\nAny idea how to solve this?\nI am thinking of adding a field that records the date and timestamp at which the record lands in the staging table (\"insert date\") and then upserting the records if\ninsert date at the staging table > insert date at the final tables\nIs this best practice?\nAny other suggestions? Maybe use a specific tool\/data solution?\nI prefer using Python scripts since this is part of a wider project.\nThank you!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":15,"Q_Id":75161670,"Users Score":0,"Answer":"Instead of a straight INSERT, use an UPSERT pattern: either the MERGE statement if your database has it, or UPDATE the existing rows, followed by INSERTing the new ones.","Q_Score":0,"Tags":"python,etl,data-warehouse","A_Id":75196474,"CreationDate":"2023-01-18T15:36:00.000","Title":"DWH primary key conflict between staging tables and DWH tables","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I have a Python script which is executed from the terminal as\nscript.py 0001\nwhere 0001 indicates the subcase to be run. If I have to run different subcases, then I use\nscript.py 0001 0002\nThe question is how to specify a range as input. Let's say I want to run 0001..0008. I learned that seq -w 0001 0008 outputs what I desire. How do I pipe this to Python as input from the terminal? Or is there a different way to get this done?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":32,"Q_Id":75161984,"Users Score":0,"Answer":"I had already tried the following, but it did not work earlier because I did not have the subcases pulled into the script repo.
The following works:\nscript.py 000{1..8}","Q_Score":0,"Tags":"python,bash,terminal,sequence","A_Id":75162059,"CreationDate":"2023-01-18T16:02:00.000","Title":"Passing range of numbers from terminal to Python script","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm searching for a way to check the current value of the Couchbase cluster timeout, and for how to set a desired timeout using the Python SDK.\nI know the method for setting a timeout using ClusterTimeoutOptions, but it doesn't work.\nThere are no problems with timeouts if I disable them using couchbase-cli:\ncouchbase-cli setting-query --set --timeout -1","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":43,"Q_Id":75171894,"Users Score":0,"Answer":"I resolved it and it works as I expected. I converted cURL commands that I found on the Couchbase documentation website to Python requests calls, and I was able to check the timeout and update it.\nTo check:\nrequests.get('http:\/\/localhost:8093\/admin\/settings', auth=(user, password))\nTo update:\nrequests.post('http:\/\/localhost:8091\/settings\/querySettings', headers=headers, data=data, auth=(user, password))","Q_Score":1,"Tags":"python,timeout,couchbase","A_Id":75244354,"CreationDate":"2023-01-19T11:58:00.000","Title":"How to check and set up the Couchbase timeout using the Python SDK?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"So I know how to create topics on Confluent Cloud with the confluent_kafka AdminClient instance, but I\u2019m not sure how to set the topic\u2019s message schema programmatically.
To clarify, I have the schema I want to use saved locally in an Avro schema file (.avsc).","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":28,"Q_Id":75179766,"Users Score":0,"Answer":"Use the AdminClient to create the topic and then use the SchemaRegistryClient to register the schema for the topic.","Q_Score":0,"Tags":"apache-kafka,avro,confluent-schema-registry,confluent-kafka-python","A_Id":75179798,"CreationDate":"2023-01-20T01:18:00.000","Title":"How do I tell a topic on confluent cloud to use a specific schema programmatically?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"There may very well be an answer to this question, but it's really hard to google for.\nYou can add commands to gdb by writing them in Python. I am interested in debugging one of those Python scripts while it's running in a gdb session.\nMy best guess is to run gdb on gdb, execute the user-added command, and somehow magically break on the Python program code?\nHas anybody done anything like this before? I don't know the mechanism by which gdb calls Python code, so if it's not in the same process space as the gdb that's calling it, I don't see how I'd be able to set breakpoints in the Python program.\nOr do I somehow get pdb to run in gdb? I guess I can put pdb.set_trace() in the Python program, but here's the extra catch: I'd like to be able to do all this from VS Code.\nSo I guess my question is: what do I need to run, and in what order, to be able to debug in VS Code a Python script that was initiated by gdb?\nAnybody have any idea?\nThanks.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":24,"Q_Id":75189350,"Users Score":0,"Answer":"So I figured it out.
It's kinda neat.\nYou run gdb to debug your program as normal; then in another window you attach to a running Python program.\nIn this case the running Python program is the gdb process.\nOnce you attach, you can set breakpoints in the Python program, and then when you run commands in the first window, where the gdb session is, if it hits a breakpoint in the Python code, it will pop up in the second window.\nThe tipoff was that when you run gdb, there does not appear to be any other Python process that's a child of gdb or related anywhere, so I figured gdb must dynamically link to some Python library, meaning the Python interpreter must be running in the gdb process space; I figured I'd try attaching to that, and it worked.","Q_Score":0,"Tags":"python,gdb,gdb-python","A_Id":75190767,"CreationDate":"2023-01-20T21:07:00.000","Title":"How do I debug through a gdb helper script written in python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"Just for reference, I am coming from AWS, so any comparisons would be welcome.\nI need to create a function which detects when a blob is placed into a storage container and then downloads the blob to perform some actions on the data in it.\nI have created a storage account with a container in it, and a function app with a Python function in it. I have then set up an Event Grid topic and subscription so that blob creation events trigger the event. I can verify that this is working. This gives me the URL of the blob, which looks something like https:\/\/.blob.core.windows.net\/\/. However, when I then try to download this blob using BlobClient, I get various errors about not having the correct authentication or key.
Is there a way in which I can just allow the function to access the container, in the same way that in AWS I would give a Lambda an execution role with S3 permissions, or do I need to create some key to pass through somehow?\nEdit: I need this to run ASAP when the blob is put in the container, so as far as I can tell I need to use Event Grid triggers, not the normal blob triggers.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":51,"Q_Id":75223506,"Users Score":0,"Answer":"The answer lay somewhere between @rickvdbosch's answer and Abdul's comment. I first had to assign an identity to the function, giving it permission to access the storage account. Then I was able to use the azure.identity.DefaultAzureCredential class to automatically handle the credentials for the BlobClient.","Q_Score":0,"Tags":"python,amazon-web-services,azure,azure-functions,azure-blob-storage","A_Id":75231879,"CreationDate":"2023-01-24T15:15:00.000","Title":"Access blob in storage container from function triggered by Event Grid","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"Someone set up CMake for us to use pybind to create a .pyd module that we package, together with some pure Python files, into a wheel.\nWe are switching from an old Python 3.7 to a modern one, so we want to support wheels for both the old and the new Python version, at least for now.\nI've read the pybind documentation and, due to my unfamiliarity with CMake, I found it unclear. So I'm looking for clarification.\nMy understanding is that you would have to compile twice, one time \"targeting\" 3.7 and another time targeting the newer version. But I wouldn't expect this to matter at all (if you were to hand-code the wrapping to Python), or at most I'd expect it to matter if we were targeting two different major versions (i.e.
python2 vs python3).\nMy question is whether this is really needed. Can I just avoid a second compilation and put the .pyd I get when compiling \"for Python 3.7\" into the wheel we build for the newer Python too?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":23,"Q_Id":75226530,"Users Score":1,"Answer":"Yes, it is necessary. The CPython ABI changes from version to version, often in incompatible ways, so you have to compile for each version separately.","Q_Score":0,"Tags":"c++,python-3.x,pybind11,python-wheel","A_Id":75230484,"CreationDate":"2023-01-24T19:46:00.000","Title":"pybind c++ for multiple python versions","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a problem with the speed of MapReduce. Is there any faster library I could use instead?\nI have tried this many times, but it does not work as well as we want.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":23,"Q_Id":75227813,"Users Score":0,"Answer":"You can use Apache Spark MLlib; it's 100x faster than MapReduce.","Q_Score":1,"Tags":"python,python-imaging-library","A_Id":75227842,"CreationDate":"2023-01-24T22:12:00.000","Title":"Running a job on mapreduce produces error code 2","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"`Fatal error from pip prevented installation.
Full pip output in file:\nC:\\Users\\arman.local\\pipx\\logs\\cmd_2023-01-24_23.27.56_pip_errors.log\npip failed to build packages:\nbitarray\ncytoolz\nyarl\nSome possibly relevant errors from pip install:\nerror: subprocess-exited-with-error\nerror: command 'C:\\Program Files (x86)\\Microsoft Visual Studio\\2019\\BuildTools\\VC\\Tools\\MSVC\\14.29.30133\\bin\\HostX86\\x64\\cl.exe' failed with exit code 2\ncytoolz\/dicttoolz.c(209): fatal error C1083: Cannot open include file: 'longintrepr.h': No such file or directory\nyarl\/_quoting_c.c(196): fatal error C1083: Cannot open include file: 'longintrepr.h': No such file or directory\nError installing eth-brownie.`\nAfter I run the command above, it outputs this error. I've tried uninstalling and reinstalling pipx, but this just doesn't work.","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":141,"Q_Id":75230007,"Users Score":2,"Answer":"This error is generated by an incompatibility between Python 3.11 and Cython. While it will most likely get fixed in later builds, downgrading to an earlier Python version usually does the trick. Here are a few steps I would recommend:\n\nInstall Python 3.10 or lower. This can run concurrently with your current Python version, but you need to change the priority version in PATH, use a virtual environment, or directly call it in the shell using py -3.10.\nUninstall pipx (run pip uninstall pipx) and reinstall it using the lower Python version: py -3.10 -m pip install --user pipx.
You might need to clean up earlier attempts to install brownie by deleting the eth-brownie folder under users\/your-username\/.local\/pipx\/venvs.\nAlso, remember to call pipx ensurepath after reinstallation.\nUninstall Cython (run pip uninstall cython) and reinstall it using the lower Python version (run pip install cython).\nReattempt installing eth-brownie: pipx install eth-brownie.\n\nIf the brownie installation still doesn't work, try one or both of the following:\n\nForget pipx altogether and pip install brownie instead.\nDownload the Visual Studio Build Tools 2019, and install all the dependencies.","Q_Score":2,"Tags":"python,solidity","A_Id":75773803,"CreationDate":"2023-01-25T05:33:00.000","Title":"Error installing eth-brownie with `pipx install eth-brownie`","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I would like to have a separate window open for the output of the console, like when I run my program without PyCharm, instead of the output going to the \"Run\" tab of PyCharm.\nThanks in advance","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":24,"Q_Id":75276590,"Users Score":0,"Answer":"First of all, you didn't provide information that is really needed to solve your problem, such as which OS you use or what your preferences are for visualizing your output, so I'll answer your question generally.\n\nIf you are using Windows, you can use the Command Prompt, also called cmd, to run your Python file. Press WinKey + R and type cmd, which will open a Command Prompt; then you will need to navigate to your Python file's path with the cd command (which you can read about on the internet) and then run python file_name or python3 file_name, depending on what you have. This will give you your code's output.\nOn a Linux distribution it
will be very similar to the Windows one: pressing ALT+T will open the terminal for you, which is like the twin brother of your cmd, and then you'll need to follow the cd step and onwards from my first note.\n\nBoth ways will let you run your code and will show you the output of your code without using PyCharm.\nHope I helped :)","Q_Score":0,"Tags":"python,pycharm","A_Id":75277378,"CreationDate":"2023-01-29T16:43:00.000","Title":"How to not use the embedded console of pycharm","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"My py -0e shows two versions:\n3.19\n3.7\nI need 3.7 uninstalled, but Programs\/Features on Windows only shows the Python Launcher installation.\nI checked the folder for 3.7 - but it has nothing to uninstall. Neither is there anything for 3.19.\nI see multiple registry entries for 3.7, which means it was probably installed properly.\nBut I don't see any option to cleanly uninstall it so that the references are removed.\n3.19 is updated and marked as the default in the Windows variables and also when I use py -0e.\nAny help would be greatly appreciated.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":30,"Q_Id":75299965,"Users Score":0,"Answer":"If you still have the installer for Python 3.7, you can open it and select Uninstall.","Q_Score":1,"Tags":"python","A_Id":75300053,"CreationDate":"2023-01-31T16:12:00.000","Title":"Uninstall a python version doesnt show up on programs","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop
Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"My py -0e shows two versions:\n3.19\n3.7\nI need 3.7 uninstalled, but Programs\/Features on Windows only shows the Python Launcher installation.\nI checked the folder for 3.7 - but it has nothing to uninstall. Neither is there anything for 3.19.\nI see multiple registry entries for 3.7, which means it was probably installed properly.\nBut I don't see any option to cleanly uninstall it so that the references are removed.\n3.19 is updated and marked as the default in the Windows variables and also when I use py -0e.\nAny help would be greatly appreciated.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":30,"Q_Id":75299965,"Users Score":0,"Answer":"Open Control Panel.\nClick \"Uninstall a Program\".\nScroll down to Python and click Uninstall for the 3.7 version, which is the one you don't want anymore.","Q_Score":1,"Tags":"python","A_Id":75300000,"CreationDate":"2023-01-31T16:12:00.000","Title":"Uninstall a python version doesnt show up on programs","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"Recently I installed Python 3.9.9 on my Windows 10, but it won't show the path.\nI have typed \"which python\" at the cmd prompt, but it won't show.","AnswerCount":4,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":36,"Q_Id":75305542,"Users Score":0,"Answer":"In Command Prompt, either which python or where python will print the path to your Python
executable.\nIf which python or where python does not show the path to your Python executable, it is likely that it is not in your PATH variable.\nTo add your executable to the PATH variable, search for Environment Variables in the Settings application. This will open the Advanced tab in System Properties. Click the Environment Variables button towards the bottom. You can then edit the PATH variable to include the path to your Python executable. Once you have applied the changes and restarted Command Prompt, you can run which python or where python to confirm your changes have taken effect.","Q_Score":0,"Tags":"python","A_Id":75305674,"CreationDate":"2023-02-01T04:06:00.000","Title":"How to identify python in windows 10","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"Recently I installed Python 3.9.9 on my Windows 10, but it won't show the path.\nI have typed \"which python\" at the cmd prompt, but it won't show.","AnswerCount":4,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":36,"Q_Id":75305542,"Users Score":0,"Answer":"Just type python or python3 in cmd.","Q_Score":0,"Tags":"python","A_Id":75305623,"CreationDate":"2023-02-01T04:06:00.000","Title":"How to identify python in windows 10","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"Recently I installed Python 3.9.9 on my Windows 10, but it won't show the path.\nI have typed \"which python\" at the cmd prompt, but it won't show.","AnswerCount":4,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":36,"Q_Id":75305542,"Users Score":0,"Answer":"You can use in your cmd:\n\nwhere python\n\nIt will show you the paths of all installed Python versions on your
device","Q_Score":0,"Tags":"python","A_Id":75305641,"CreationDate":"2023-02-01T04:06:00.000","Title":"How to identify python in windows 10","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I need to start a Python program when the system boots. It must run in the background (forever), such that opening a terminal session and closing it does not affect the program.\nI have demonstrated that by using tmux this can be done manually from a terminal session. Can the equivalent be done from a script that is run at bootup?\nAnd where does one put that script so that it will be run on bootup?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":26,"Q_Id":75330853,"Users Score":0,"Answer":"It appears that in addition to putting a script that starts the program in \/etc\/init.d, one also has to put a link in \/etc\/rc2.d with\nsudo ln -s \/etc\/init.d\/scriptname.sh\nsudo mv scriptname.sh S01scriptname.sh\nThe S01 prefix was just copied from all the other files in \/etc\/rc2.d.","Q_Score":0,"Tags":"python,background,boot","A_Id":75346392,"CreationDate":"2023-02-03T02:15:00.000","Title":"ubuntu run python program in background on startup","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I was trying to learn about logging in Python for the first time today. I discovered that when I tried running my code from VS Code, I received this error message:\n\/bin\/sh: 1: python: not found\nHowever, when I run the code directly from my terminal, I get the expected result.
I need help figuring out the reason for the error message when I run the code from VS Code.\nI've tried checking the internet for a suitable solution; no fix yet. I will appreciate your responses.","AnswerCount":1,"Available Count":1,"Score":-0.1973753202,"is_accepted":false,"ViewCount":24,"Q_Id":75344761,"Users Score":-1,"Answer":"The error message you are receiving indicates that the \"python\" executable is not found in the PATH environment variable of the terminal you are using from within Visual Studio Code.\nAdd the location of the Python executable to the PATH environment variable in your terminal, or specify the full path to the Python executable in your Visual Studio Code terminal.\nYou can find the full path to the Python executable by running the command \"which python\" in your terminal.","Q_Score":2,"Tags":"python,python-3.x,visual-studio-code,logging,error-log","A_Id":75357343,"CreationDate":"2023-02-04T11:26:00.000","Title":"Configuring Python execution from VS Code","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am receiving the following error while converting a Python file to .exe.\nI have tried to uninstall and reinstall pyinstaller, but it didn't help. I upgraded conda but am still facing the same error. Please support me in resolving this issue.\nCommand\n(base) G:>pyinstaller --onefile grp.py\nError\nThe 'pathlib' package is an obsolete backport of a standard library package and is incompatible with PyInstaller. Please remove this package (located in C:\\Users\\alpha\\anaconda3\\lib\\site-packages) using conda remove then try again.\nPython Version\n(base) G:>python --version\nPython 3.9.16","AnswerCount":6,"Available Count":3,"Score":1.0,"is_accepted":false,"ViewCount":9580,"Q_Id":75476135,"Users Score":6,"Answer":"I've experienced the same problem.
I managed to solve it by downgrading pyInstaller to 5.1 (from 5.8) without touching pathlib. An additional possibility to consider.","Q_Score":2,"Tags":"python,python-3.x,anaconda,conda,exe","A_Id":75687401,"CreationDate":"2023-02-16T17:49:00.000","Title":"How to convert python file to exe? The 'pathlib' package is an obsolete backport of a standard library package and is incompatible with PyInstaller","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am receiving the following error while converting a python file to .exe.\nI have tried to uninstall and reinstall pyinstaller but it didn't help. I upgraded conda but am still facing the same error. Please help me resolve this issue.\nCommand\n(base) G:>pyinstaller --onefile grp.py\nError\nThe 'pathlib' package is an obsolete backport of a standard library package and is incompatible with PyInstaller. Please remove this package (located in C:\\Users\\alpha\\anaconda3\\lib\\site-packages) using conda remove then try again.\nPython Version\n(base) G:>python --version\nPython 3.9.16","AnswerCount":6,"Available Count":3,"Score":1.2,"is_accepted":true,"ViewCount":9580,"Q_Id":75476135,"Users Score":2,"Answer":"I faced the same problem and ran 'conda remove pathlib', but it didn't work; the result was that the package was not found. So I looked in the 'lib' directory, where there was a folder named 'path-list-....'; I finally deleted it, and it began working!","Q_Score":2,"Tags":"python,python-3.x,anaconda,conda,exe","A_Id":75640516,"CreationDate":"2023-02-16T17:49:00.000","Title":"How to convert python file to exe? 
The 'pathlib' package is an obsolete backport of a standard library package and is incompatible with PyInstaller","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am receiving the following error while converting a python file to .exe.\nI have tried to uninstall and reinstall pyinstaller but it didn't help. I upgraded conda but am still facing the same error. Please help me resolve this issue.\nCommand\n(base) G:>pyinstaller --onefile grp.py\nError\nThe 'pathlib' package is an obsolete backport of a standard library package and is incompatible with PyInstaller. Please remove this package (located in C:\\Users\\alpha\\anaconda3\\lib\\site-packages) using conda remove then try again.\nPython Version\n(base) G:>python --version\nPython 3.9.16","AnswerCount":6,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":9580,"Q_Id":75476135,"Users Score":0,"Answer":"The error message you received suggests that the 'pathlib' package installed in your Anaconda environment is causing compatibility issues with PyInstaller. As a result, PyInstaller is unable to create a standalone executable from your Python script.","Q_Score":2,"Tags":"python,python-3.x,anaconda,conda,exe","A_Id":75640542,"CreationDate":"2023-02-16T17:49:00.000","Title":"How to convert python file to exe? 
The 'pathlib' package is an obsolete backport of a standard library package and is incompatible with PyInstaller","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"error: subprocess-exited-with-error\n\u00d7 pip subprocess to install build dependencies did not run successfully.\n\u2502 exit code: 1\n\u2570\u2500> [10 lines of output]\nCollecting setuptools\nUsing cached setuptools-67.4.0-py3-none-any.whl (1.1 MB)\nCollecting wheel\nUsing cached wheel-0.38.4-py3-none-any.whl (36 kB)\nCollecting cython!=0.27,!=0.27.2,<=0.29.28,>=0.24\nUsing cached Cython-0.29.28-py2.py3-none-any.whl (983 kB)\nCollecting kivy_deps.gstreamer_dev~=0.3.3\nUsing cached kivy_deps.gstreamer_dev-0.3.3-cp311-cp311-win_amd64.whl (3.9 MB)\nERROR: Could not find a version that satisfies the requirement kivy_deps.sdl2_dev~=0.4.5 (from versions: 0.5.1)\nERROR: No matching distribution found for kivy_deps.sdl2_dev~=0.4.5\n[end of output]\nI was trying to install kivy with \"pip install kivy[full]\", but instead of installing kivy, a subprocess error occurred. Then I tried installing subprocess with \"pip install subprocess.run\"; it was installed successfully, but again the same error is occurring.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":83,"Q_Id":75600538,"Users Score":0,"Answer":"To fix this error, try to install Kivy using the pre-built wheel from the Kivy website. 
Download the wheel and install it using the pip command:\npip install ","Q_Score":1,"Tags":"python,android,windows,kivy","A_Id":75600592,"CreationDate":"2023-03-01T07:12:00.000","Title":"How to install Kivy with dependencies in Windows 10 using pip?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am using python and confluent_kafka\nI am building a queue management tool for Kafka where we can view the pending (uncommitted) messages of each topic, delete a topic, and purge a topic. I am facing the following problems.\n\nI am using the same group ID for all the consumers so that I can get the uncommitted messages.\nI have 2 consumers: one (say consumer1) consuming and committing, and another one (say consumer2) just consuming without committing.\n\nIf I run consumer1 and consumer2 simultaneously, only one consumer will start consuming and the other just keeps on waiting, which causes heavy loading time in the frontend.\nIf I assign a different group ID for each, it works, but the messages committed by consumer1 are still readable by consumer2.\nExample:\nIf I have pushed 100 messages and say consumer1 consumed 80 messages, then when I try to consume from consumer2 it should consume only the remaining 20, but it is consuming all 100, including the messages committed by consumer1.\nHow can I avoid or solve that?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":159,"Q_Id":75673129,"Users Score":1,"Answer":"Unclear what you mean by uncommitted. Any message in a topic has been committed by a producer.\nFrom the consumer perspective, this isn't possible. 
Active Kafka consumers in the same group cannot be assigned the same partitions\nMore specifically, how would \"consumer2\" know when\/if \"consumer1\" was \"done consuming 80 records\" without consumer1 becoming inactive?\nIf you have an idle consumer with only two consumers in the same group, it sounds like you only have one partition... If you want both to be active at the same time, you'll need multiple partitions, but that won't help with any \"visualizations\" unless you persist your consumed data in some central location. At which point, Kafka Connect might be a better solution than Python.\nIf you want to view consumer lag (how far behind a consumer is processing), then there are other tools to do this, such as Burrow with its REST API. Otherwise, you need to use the get_watermark_offsets() function to find the topic's offsets and compare to the current polled record offset","Q_Score":1,"Tags":"python,apache-kafka,confluent-kafka-python","A_Id":75674699,"CreationDate":"2023-03-08T12:38:00.000","Title":"Consuming messages from Kafka with different group IDs using confluent_kafka","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am in a situation that requires (decrypting\/removeLabel) in Linux, e.g. Debian or Raspbian.\nI have looked into Azure products and they clearly stated that there is no \"official\" support for Linux..... but my question here is .... is there any workaround to achieve this in Linux?\nOne silly way I could think of is to get this protected file on a Windows PC, decrypt it and send it to the Linux PC over scp.... but this requires me to have a sort of a bypass PC to run just for this ...seems pretty silly ... 
Any ideas\/suggestions?\nThanks in advance","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":65,"Q_Id":75919230,"Users Score":0,"Answer":"Azure Information Protection doesn't have an official client for Linux. One solution would be to use a Windows virtual machine on your Linux system and install the AIP client in it; VirtualBox or VMware are good choices here. You can also use the AIP PowerShell module to automate the removal of labels","Q_Score":1,"Tags":"python,linux,azure","A_Id":75925218,"CreationDate":"2023-04-03T11:58:00.000","Title":"azure information protection remove label in linux","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"Poetry install fails with ChefBuildError: Backend operation failed: HookMissing('build_editable')\nMy poetry version is 1.4.2","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":1582,"Q_Id":75934738,"Users Score":1,"Answer":"This worked for me.\nI believe this is caused by a change to how the build-backend is defined in the pyproject.toml between poetry ^1.3 and poetry ^1.4. Assuming you have poetry ^1.4 installed, you have two options:\n\nIn your pyproject.toml change build-backend = \"poetry.masonry.api\" to build-backend = \"poetry.core.masonry.api\"\n\nIf, like me, you have other code that assumes poetry ^1.3, then simply downgrade your poetry version: poetry self update 1.3.2\n\n\n\nIf you go with option 2 you may get a bunch of RuntimeError hash for xxx errors. 
If that's the case you will also need to rm -r ~\/.cache\/pypoetry\/artifacts and rm -r ~\/.cache\/pypoetry\/cache.","Q_Score":1,"Tags":"python-3.x,python-poetry","A_Id":75934739,"CreationDate":"2023-04-05T00:09:00.000","Title":"Poetry install ChefBuildError","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"This is my first question here :)\nWhen I try to install Anaconda on my MacBook (M1) with Ventura 13.3.1, I receive the following error:\n\"This package is incompatible with this version of macOS.\"\nI tried the arm64 installer and the x86 installer; both lead to the same error message.\nI used Anaconda on the same MacBook just a few days ago, but after the update from Ventura 13.2.1 to 13.3 I couldn't open Jupyter Notebooks from within the Anaconda Navigator. First I thought that the problem might be caused by Anaconda, so I uninstalled it. However, now here I am, unable to install it again. 
I also did a complete reset of my MacBook, nothing changed.\nDoes anyone have the same issue or know how I can fix this?\nThanks a lot","AnswerCount":5,"Available Count":4,"Score":0.1586485043,"is_accepted":false,"ViewCount":4353,"Q_Id":75968081,"Users Score":4,"Answer":"I had the same problem as you; I installed on my account only instead of Macintosh HD and it worked like a breeze.","Q_Score":2,"Tags":"python,installation,anaconda,macos-ventura","A_Id":76047692,"CreationDate":"2023-04-08T23:09:00.000","Title":"I can't install Anaconda on a MacBook Pro M1 with Ventura 13.3.1","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"This is my first question here :)\nWhen I try to install Anaconda on my MacBook (M1) with Ventura 13.3.1, I receive the following error:\n\"This package is incompatible with this version of macOS.\"\nI tried the arm64 installer and the x86 installer; both lead to the same error message.\nI used Anaconda on the same MacBook just a few days ago, but after the update from Ventura 13.2.1 to 13.3 I couldn't open Jupyter Notebooks from within the Anaconda Navigator. First I thought that the problem might be caused by Anaconda, so I uninstalled it. However, now here I am, unable to install it again. 
I also did a complete reset of my MacBook, nothing changed.\nDoes anyone have the same issue or know how I can fix this?\nThanks a lot","AnswerCount":5,"Available Count":4,"Score":0.0399786803,"is_accepted":false,"ViewCount":4353,"Q_Id":75968081,"Users Score":1,"Answer":"Try selecting a different partition\/folder for the installation.","Q_Score":2,"Tags":"python,installation,anaconda,macos-ventura","A_Id":75985899,"CreationDate":"2023-04-08T23:09:00.000","Title":"I can't install Anaconda on a MacBook Pro M1 with Ventura 13.3.1","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"This is my first question here :)\nWhen I try to install Anaconda on my MacBook (M1) with Ventura 13.3.1, I receive the following error:\n\"This package is incompatible with this version of macOS.\"\nI tried the arm64 installer and the x86 installer; both lead to the same error message.\nI used Anaconda on the same MacBook just a few days ago, but after the update from Ventura 13.2.1 to 13.3 I couldn't open Jupyter Notebooks from within the Anaconda Navigator. First I thought that the problem might be caused by Anaconda, so I uninstalled it. However, now here I am, unable to install it again. 
I also did a complete reset of my MacBook, nothing changed.\nDoes anyone have the same issue or know how I can fix this?\nThanks a lot","AnswerCount":5,"Available Count":4,"Score":1.2,"is_accepted":true,"ViewCount":4353,"Q_Id":75968081,"Users Score":3,"Answer":"If you have Homebrew installed, you should be able to run \"brew install anaconda\"","Q_Score":2,"Tags":"python,installation,anaconda,macos-ventura","A_Id":75988377,"CreationDate":"2023-04-08T23:09:00.000","Title":"I can't install Anaconda on a MacBook Pro M1 with Ventura 13.3.1","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"This is my first question here :)\nWhen I try to install Anaconda on my MacBook (M1) with Ventura 13.3.1, I receive the following error:\n\"This package is incompatible with this version of macOS.\"\nI tried the arm64 installer and the x86 installer; both lead to the same error message.\nI used Anaconda on the same MacBook just a few days ago, but after the update from Ventura 13.2.1 to 13.3 I couldn't open Jupyter Notebooks from within the Anaconda Navigator. First I thought that the problem might be caused by Anaconda, so I uninstalled it. However, now here I am, unable to install it again. 
I also did a complete reset of my MacBook, nothing changed.\nDoes anyone have the same issue or know how I can fix this?\nThanks a lot","AnswerCount":5,"Available Count":4,"Score":-0.0399786803,"is_accepted":false,"ViewCount":4353,"Q_Id":75968081,"Users Score":-1,"Answer":"thanks \"brew install anaconda\".","Q_Score":2,"Tags":"python,installation,anaconda,macos-ventura","A_Id":76641130,"CreationDate":"2023-04-08T23:09:00.000","Title":"I can't install Anaconda on a MacBook Pro M1 with Ventura 13.3.1","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I use Pycharm with python version 3.9 and scripts run just fine,\nbut when I write python in cmd, it opens Microsoft Store on the page of Python3.10.\nDo I need to give cmd some kind of permission to python? How do I do that?\nI searched online, but I couldn't find a way to make cmd run python without downloading it again.\nedit: I want to use cmd as a terminal and to be able to run scripts through it.\nIn order to use cmd as a python terminal I write python, but cmd can't find it.\nThis is the location\/path of python:\nC:\\Users\\ofira\\AppData\\Local\\Programs\\Python\\Python39\\python.exe","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":54,"Q_Id":75984001,"Users Score":1,"Answer":"The solution is easy. When the Microsoft Store opens, type python3.9 in the search tab. Download it and everything will work like a charm. 
Downloading it from the store will automatically add the path in the system environment variable.","Q_Score":1,"Tags":"python,cmd,pycharm","A_Id":75984201,"CreationDate":"2023-04-11T08:47:00.000","Title":"Python script running in pycharm but not in cmd","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a flask project. Usually I push it to dockerhub and run it using docker run dockerhub-image. After I updated things inside the static folder, I stopped and removed the container and also removed the image; after that I pushed it again to dockerhub and re-ran it, but when visiting it on the web, the files inside the static folder do not change at all (other files outside the static folder are changed). I have no idea how to fix this.\nI have searched questions related to this issue but couldn't find any answers.\nQ: how do I fix this issue?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":51,"Q_Id":76097898,"Users Score":0,"Answer":"Docker tags should ideally be immutable. If you're just using docker run rather than docker run :, then you're just running the previously pulled, locally cached latest tag. Running an image doesn't automatically pull the newest latest tag\nAlso, as commented, your browser can cache static web assets","Q_Score":1,"Tags":"python,docker,flask","A_Id":76248524,"CreationDate":"2023-04-25T05:37:00.000","Title":"Files inside static folder are not updated on docker","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I'm trying to set up my Python application to send data to AppDynamics. 
I have the AppDynamics controller up and running, and on my local machine my application works, but no data is found on AppDynamics.\nI've been told to use this repo as a template (and I can confirm it works, sending data to the AppDynamics instance I'm working on) https:\/\/github.com\/jaymku\/py-k8s-init-scar\/blob\/master\/kube\/web-api.yaml\nI have some doubts though, and they might be the cause of the issues that I'm having.\nI had in my Dockerfile a CMD at the end like first.sh && python3 second and I've changed it to be ENTRYPOINT \"first.sh && python3 second\". Note no [] format here and also that there are two concatenated commands.\nFor the value of the APP_ENTRY_POINT variable I'm trying just the same.\nThere are no errors when I run this, my application works correctly, except the data is not sent to AppDynamics. Nothing seems to fail, I can't find any error messages. Any ideas what I'm missing?\nAlso, where can I find out, within AppDynamics, the value that we need to set for the APPDYNAMICS_CONTROLLER_PORT variable? I'm pretty sure it will be 443 in our case, since we seem to be using that in other projects in AppDynamics that are working, but checking it would be a good idea. It might also be related to this issue, I don't know.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":39,"Q_Id":76136697,"Users Score":0,"Answer":"I managed to have this working by using CMD instead of ENTRYPOINT, using the command that is found inside the suggested entrypoint. 
So, I did the same thing the entrypoint was supposed to do, but inserted the command myself","Q_Score":1,"Tags":"python,dockerfile,appdynamics","A_Id":76294759,"CreationDate":"2023-04-29T14:16:00.000","Title":"Set AppDynamics integration with Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I have been trying to install pyrebase4 using pip install pyrebase4, but when run it throws the error below\n\"C:\\Users\\ChodDungaTujheMadhadchod\\anaconda3\\envs\\sam_upgraded\\lib\\site-packages\\requests_toolbelt\\adapters\\appengine.py\", line 42, in from .._compat import gaecontrib ImportError: cannot import name 'gaecontrib' from 'requests_toolbelt._compat'\nAs I see it, the error points directly to requests_toolbelt, but I cannot figure out a way to fix it. I tried upgrading to the latest version, which is requests-toolbelt==1.0.0. So is there any way to fix it?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1817,"Q_Id":76208396,"Users Score":10,"Answer":"Okay, so what I found is that the latest requests-toolbelt 1.0.1 is currently throwing this issue. So downgrading it to the previous version, requests-toolbelt==0.10.1, fixes the issue.","Q_Score":4,"Tags":"python,pyrebase,python-requests-toolbelt","A_Id":76208526,"CreationDate":"2023-05-09T10:34:00.000","Title":"Pyrebase4 error cannot import name 'gaecontrib'","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":1},{"Question":"I have an executable file I made using Python and I am trying to get it to run automatically upon being plugged in via a USB on an external drive. 
I've run into the issue that Windows seems to have removed that feature, but autorun is a key feature of what I am trying to create. Is there some workaround that has been found, or am I out of luck?\nI tried making an INF file, here is what that looked like:\n[autorun]\nopen=myfilename.exe\nThis doesn't work (meaning plugging in the USB acts as normal as opposed to starting up my executable), likely due to Windows disabling that feature past Windows 7.\nOne last thing is that it needs to work on any computer running Windows 10 with zero user input if at all possible. Is it time to go back to the drawing board?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":46,"Q_Id":76441877,"Users Score":0,"Answer":"This has been permanently revoked due to security problems. If you can find a way to do this, file a vulnerability report.\nAccording to documentation, autorun still works on CD-ROMs but I don't trust it at all. If it works on USB CD-ROMs, expect that to be taken away.\n\"One last thing is that it needs to work on any computer running Windows 10 with zero user input if at all possible.\" We don't want this anymore. It was a good idea in 1995. It was a bad idea in 1998. It was an unacceptable risk in 2005. It's gone in 2020.","Q_Score":1,"Tags":"python,windows,executable,autorun,inf","A_Id":76443548,"CreationDate":"2023-06-09T15:52:00.000","Title":"Is there a workaround for getting an executable to autorun in Windows 10?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm building an app based on python3 that uses tkinter and several other packages, and I would like to create an appImage or pyinstaller onefile I can deliver to my remote team members. 
After spending 4 days trying to get pyinstaller working without success, I decided to just create a venv with the required python packages and minimal bash scripting and distribute a tar file instead. I thought that would be a quick & straightforward way to go, but even that is proving not to be as easy as I thought it would be. I'm open to suggestions.\nI started by creating a folder with a python3 venv (python3 -m venv .) and added all my app files. I activate the venv and use pip to install the python dependencies. I test my app and it works as expected, then I create a tar image of the folder.\nWhen I extract the tar file on a new VM and activate the venv to test, it fails because the packages aren't found. Why? The VM is the same OS and machine architecture I used to create the app. I do a pip install of one of the packages that should already be in the venv and sure enough none of them are showing up.\nGoing back to the dev system, I double-checked whether the packages were in the folder I tarred up, and they were \"already satisfied\". So what is happening?\nMoreover, I discovered that the tcl\/tk that tkinter relies on isn't installed by default, so that is an external dependency the venv can't resolve, so my choices seem to be narrowing.\nI'm just puzzled why the venv didn't preserve the packages my app requires.\nNext I'll look into what it will take to create an appImage.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":46,"Q_Id":76561226,"Users Score":0,"Answer":"Yes, but not the way you are doing it. venv\/virtualenv ARE MADE TO SOLVE THIS (mostly). The problem is that you aren't using venv properly.\nYou are trying to distribute the virtual environment itself along with your code, which is not how it works. You need to create a requirements file with pip freeze > requirements.txt and include the requirements.txt file in your tar along with your project code, that's it. 
Then the recipient, after untarring, has to recreate the venv on their host or shell and then install the requirements (pip install -r requirements.txt).\nAs for external dependencies, that's a different issue. But a combination of Docker and VENV could solve that. Or you could denote what libraries the user needs to install on their system in a README.\nThe main key is that you DO NOT put your project files to be distributed into the venv\/virtualenv folder itself. That folder is just used by pip for managing package installations for your virtual environment. That folder has to be recreated by the other users manually using the requirements file you produce by freezing your requirements (packages that have been installed into your virtual environment)","Q_Score":1,"Tags":"python,virtualenv,python-packaging,python-venv","A_Id":76590670,"CreationDate":"2023-06-27T02:17:00.000","Title":"Is a python venv portable to other machines with same architecture and OS type?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm building an app based on python3 that uses tkinter and several other packages, and I would like to create an appImage or pyinstaller onefile I can deliver to my remote team members. After spending 4 days trying to get pyinstaller working without success, I decided to just create a venv with the required python packages and minimal bash scripting and distribute a tar file instead. I thought that would be a quick & straightforward way to go, but even that is proving not to be as easy as I thought it would be. I'm open to suggestions.\nI started by creating a folder with a python3 venv (python3 -m venv .) and added all my app files. I activate the venv and use pip to install the python dependencies. 
I test my app and it works as expected, then I create a tar image of the folder.\nWhen I extract the tar file on a new VM and activate the venv to test, it fails because the packages aren't found. Why? The VM is the same OS and machine architecture I used to create the app. I do a pip install of one of the packages that should already be in the venv and sure enough none of them are showing up.\nGoing back to the dev system, I double-checked whether the packages were in the folder I tarred up, and they were \"already satisfied\". So what is happening?\nMoreover, I discovered that the tcl\/tk that tkinter relies on isn't installed by default, so that is an external dependency the venv can't resolve, so my choices seem to be narrowing.\nI'm just puzzled why the venv didn't preserve the packages my app requires.\nNext I'll look into what it will take to create an appImage.","AnswerCount":2,"Available Count":2,"Score":-0.0996679946,"is_accepted":false,"ViewCount":46,"Q_Id":76561226,"Users Score":-1,"Answer":"A virtual environment (venv) serves as a separate environment for your project and its dependencies. Please ensure that you verify whether your venv is activated or not within the virtual machine (VM). It is important to note that the venv can only be activated through the command prompt.\nWhen it comes to porting the project to a different machine, you can utilize Docker.","Q_Score":1,"Tags":"python,virtualenv,python-packaging,python-venv","A_Id":76561249,"CreationDate":"2023-06-27T02:17:00.000","Title":"Is a python venv portable to other machines with same architecture and OS type?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0}]