[{"Question":"I changed my project code from python 2.7 to 3.x. \nAfter these changes i get a message \"cannot find declaration to go to\" when hover over any method and press ctrl \nI'm tryinig update pycharm from 2017.3 to 18.1, remove directory .idea but my issue still exist.\nDo you have any idea how can i fix it?","AnswerCount":5,"Available Count":5,"Score":0.0399786803,"is_accepted":false,"ViewCount":28559,"Q_Id":49749981,"Users Score":1,"Answer":"I had a case where the method was implemented in a base class and Pycharm couldn't find it.\nI solved it by importing the base class into the module I was having trouble with.","Q_Score":24,"Tags":"python-3.x,pycharm","A_Id":52749421,"CreationDate":"2018-04-10T09:23:00.000","Title":"Pycharm - Cannot find declaration to go to","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I changed my project code from python 2.7 to 3.x. \nAfter these changes i get a message \"cannot find declaration to go to\" when hover over any method and press ctrl \nI'm tryinig update pycharm from 2017.3 to 18.1, remove directory .idea but my issue still exist.\nDo you have any idea how can i fix it?","AnswerCount":5,"Available Count":5,"Score":0.1973753202,"is_accepted":false,"ViewCount":28559,"Q_Id":49749981,"Users Score":5,"Answer":"I had same issue and invalidating cache or reinstalling the app didn't help.\nAs it turned out the problem was next: for some reasons *.py files were registered as a text files, not python ones. After I changed it, code completion and other IDE features started to work again.\nTo change file type go Preferences -> Editor -> File types","Q_Score":24,"Tags":"python-3.x,pycharm","A_Id":70948307,"CreationDate":"2018-04-10T09:23:00.000","Title":"Pycharm - Cannot find declaration to go to","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I changed my project code from python 2.7 to 3.x. \nAfter these changes i get a message \"cannot find declaration to go to\" when hover over any method and press ctrl \nI'm tryinig update pycharm from 2017.3 to 18.1, remove directory .idea but my issue still exist.\nDo you have any idea how can i fix it?","AnswerCount":5,"Available Count":5,"Score":1.2,"is_accepted":true,"ViewCount":28559,"Q_Id":49749981,"Users Score":66,"Answer":"Right click on the folders where you believe relevant code is located ->Mark Directory as-> Sources Root\nNote that the menu's wording \"Sources Root\" is misleading: the indexing process is not recursive. 
You need to mark all the relevant folders.","Q_Score":24,"Tags":"python-3.x,pycharm","A_Id":50335132,"CreationDate":"2018-04-10T09:23:00.000","Title":"Pycharm - Cannot find declaration to go to","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I changed my project code from python 2.7 to 3.x. \nAfter these changes i get a message \"cannot find declaration to go to\" when hover over any method and press ctrl \nI'm tryinig update pycharm from 2017.3 to 18.1, remove directory .idea but my issue still exist.\nDo you have any idea how can i fix it?","AnswerCount":5,"Available Count":5,"Score":0.1973753202,"is_accepted":false,"ViewCount":28559,"Q_Id":49749981,"Users Score":5,"Answer":"What worked for me was right-click on the folder that has the manage.py > Mark Directory as > Source Root.","Q_Score":24,"Tags":"python-3.x,pycharm","A_Id":63009947,"CreationDate":"2018-04-10T09:23:00.000","Title":"Pycharm - Cannot find declaration to go to","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I changed my project code from python 2.7 to 3.x. \nAfter these changes i get a message \"cannot find declaration to go to\" when hover over any method and press ctrl \nI'm tryinig update pycharm from 2017.3 to 18.1, remove directory .idea but my issue still exist.\nDo you have any idea how can i fix it?","AnswerCount":5,"Available Count":5,"Score":0.0399786803,"is_accepted":false,"ViewCount":28559,"Q_Id":49749981,"Users Score":1,"Answer":"The solution for me: remember to add an interpreter to the project, it usually says in the bottom right corner if one is set up or not. Just an alternate solution than the others.\nThis happened after reinstalling PyCharm and not fully setting up the ide.","Q_Score":24,"Tags":"python-3.x,pycharm","A_Id":68959559,"CreationDate":"2018-04-10T09:23:00.000","Title":"Pycharm - Cannot find declaration to go to","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I just made the transition from Spyder to VScode for my python endeavours. Is there a way to run individual lines of code? That's how I used to do my on-the-spot debugging, but I can't find an option for it in VScode and really don't want to keep setting and removing breakpoints.\nThanks.","AnswerCount":5,"Available Count":2,"Score":0.1194272985,"is_accepted":false,"ViewCount":18994,"Q_Id":49771589,"Users Score":3,"Answer":"In my ver of VSCode (1.25), shift+enter will run selection. 
Note that you will want to have your integrated terminal running python.","Q_Score":15,"Tags":"python,debugging,visual-studio-code","A_Id":52346976,"CreationDate":"2018-04-11T09:36:00.000","Title":"VScode run code selection","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I just made the transition from Spyder to VScode for my python endeavours. Is there a way to run individual lines of code? That's how I used to do my on-the-spot debugging, but I can't find an option for it in VScode and really don't want to keep setting and removing breakpoints.\nThanks.","AnswerCount":5,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":18994,"Q_Id":49771589,"Users Score":0,"Answer":"I'm still trying to figure out how to make vscode do what I need (interactive python plots), but I can offer a more complete answer to the question at hand than what has been given so far:\n1- Evaluate current selection in debug terminal is an option that is not enabled by default, so you may want to bind the 'editor.debug.action.selectionToRepl' action to whatever keyboard shortcut you choose (I'm using F9). As of today, there still appears to be no option to evaluate current line while debugging, only current selection.\n2- Evaluate current line or selection in python terminal is enabled by default, but I'm on Windows where this isn't doing what I would expect - it evaluates in a new runtime, which does no good if you're trying to debug an existing runtime. So I can't say much about how useful this option is, or even if it is necessary since anytime you'd want to evaluate line-by-line, you'll be in debug mode anyway and sending to debug console as in 1 above. The Windows issue might have something to do with the settings.json entry\n\"terminal.integrated.inheritEnv\": true,\nnot having an affect in Windows as of yet, per vscode documentation.","Q_Score":15,"Tags":"python,debugging,visual-studio-code","A_Id":67669127,"CreationDate":"2018-04-11T09:36:00.000","Title":"VScode run code selection","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"Basically I downloaded django project from SCM, Usually I run the project with with these steps\n\ngit clone repository\nextract\nchange directory to project folder\npython manage.py runserver \n\nBut this project does not contains manage.py , how to run this project in my local machine???\nbr","AnswerCount":4,"Available Count":1,"Score":0.2449186624,"is_accepted":false,"ViewCount":4321,"Q_Id":49775020,"Users Score":5,"Answer":"Most likely, this is not supposed to be a complete project, but a plugin application. 
You should create your own project in the normal way with django-admin.py startproject and add the downloaded app to INSTALLED_APPS.","Q_Score":1,"Tags":"python,django","A_Id":49775359,"CreationDate":"2018-04-11T12:21:00.000","Title":"How to run a django project without manage.py","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am new to Python programming and stumbled across this feature of subtracting in python that I can't figure out. I have two 0\/1 arrays, both of size 400. I want to subtract each element of array one from its corresponding element in array 2. \nFor example say you have two arrays A = [0, 1, 1, 0, 0] and B = [1, 1, 1, 0, 1].\nThen I would expect A - B = [0 - 1, 1 - 1, 1 - 1, 0 - 0, 0 - 1] = [-1, 0, 0, 0, -1]\nHowever in python I get [255, 0, 0, 0, 255].\nWhere does this 255 come from and how do I get -1 instead?\nHere's some additional information:\nThe real variables I'm working with are Y and LR_predictions.\nY = array([[0, 0, 0, ..., 1, 1, 1]], dtype=uint8)\nLR_predictions = array([0, 1, 1, ..., 0, 1, 0], dtype=uint8)\nWhen I use either Y - LR_predictions or numpy.subtract(Y, LR_predictions) \nI get: array([[ 0, 255, 255, ..., 1, 0, 1]], dtype=uint8)\nThanks","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":5861,"Q_Id":49784499,"Users Score":0,"Answer":"I can't replicate this but it looks like the numbers are 8 bit and wrapping some how","Q_Score":0,"Tags":"python,arrays,math,subtraction","A_Id":49784641,"CreationDate":"2018-04-11T21:08:00.000","Title":"Python - Subtracting the Elements of Two Arrays","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I've been using Python for a few months, but I'm sort of new to Files. I would like to know how to save text files into my Documents, using \".txt\".","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":546,"Q_Id":49786119,"Users Score":0,"Answer":"If you do not like to overwrite existing file then use a or a+ mode. This just appends to existing file. a+ is able to read the file as well","Q_Score":2,"Tags":"python","A_Id":49787478,"CreationDate":"2018-04-12T00:14:00.000","Title":"How do I save a text file in python, to my File Explorer?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I would highly appreciate any help on this. I'm constructing dynamic highcharts at the backend and would like to send the data along with html to the frontend. \nIn highcharts, there is a specific field to accept Date such as:\nx:Date.UTC(2018,01,01)\nor x:2018-01-01. However, when I send dates from the backend, it is always surrounded by quotes,so it becomes: x:'Date.UTC(2018,01,01)'\nand x:'2018-01-01', which does not render the chart. 
Any suggestions on how to escape these quotes?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":49,"Q_Id":49804663,"Users Score":0,"Answer":"Highcharts expects the values on datetime axes to be timestamps (number of miliseconds from 01.01.1970). Date.UTC is a JS function that returns a timestamp as Number. Values surrounded by apostrophes are Strings.\nI'd rather suggest to return a timestamp as a String from backend (e.g. '1514764800000') and then convert it to Number in JS (you can use parseInt function for that.)","Q_Score":0,"Tags":"python,jquery,web,highcharts,web.py","A_Id":49812323,"CreationDate":"2018-04-12T19:41:00.000","Title":"Send data from Python backend to Highcharts while escaping quotes for date","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have around 20TB of time series data stored in big query.\nThe current pipeline I have is:\nraw data in big query => joins in big query to create more big query datasets => store them in buckets\nThen I download a subset of the files in the bucket:\nWork on interpolation\/resampling of data using Python\/SFrame, because some of the time series data have missing times and they are not evenly sampled.\nHowever, it takes a long time on a local PC, and I'm guessing it will take days to go through that 20TB of data.\n\nSince the data are already in buckets, I'm wondering what would the best Google tools for interpolation and resampling?\nAfter resampling and interpolation I might use Facebook's Prophet or Auto ARIMA to create some forecasts. But that would be done locally.\n\nThere's a few services from Google that seems are like good options.\n\nCloud DataFlow: I have no experience in Apache Beam, but it looks like the Python API with Apache Beam have missing functions compared to the Java version? I know how to write Java, but I'd like to use one programming language for this task.\nCloud DataProc: I know how to write PySpark, but I don't really need any real time processing or stream processing, however spark has time series interpolation, so this might be the only option?\nCloud Dataprep: Looks like a GUI for cleaning data, but it's in beta. Not sure if it can do time series resampling\/interpolation.\n\nDoes anyone have any idea which might best fit my use case?\nThanks","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":522,"Q_Id":49809084,"Users Score":0,"Answer":"I would use PySpark on Dataproc, since Spark is not just realtime\/streaming but also for batch processing. \nYou can choose the size of your cluster (and use some preemptibles to save costs) and run this cluster only for the time you actually need to process this data. 
Afterwards kill the cluster.\nSpark also works very nicely with Python (not as nice as Scala) but for all intents and purposes the main difference is performance, not reduced API functionality.\nEven with the batch processing you can use the WindowSpec for effective time series interpolation.\nTo be fair: I don't have a lot of experience with DataFlow or DataPrep, but that's because our use case is somewhat similar to yours and Dataproc works well for that","Q_Score":1,"Tags":"python,apache-spark,google-cloud-platform,google-cloud-dataflow,google-cloud-dataproc","A_Id":49812019,"CreationDate":"2018-04-13T03:56:00.000","Title":"Google Cloud - What products for time series data cleaning?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I'm quite new to Python, and was wondering how I flatten the following nested list using list comprehension, and also use conditional logic.\nnested_list = [[1,2,3], [4,5,6], [7,8,9]]\nThe following returns a nested list, but when I try to flatten the list by removing the inner square brackets I get errors. \nodds_evens = [['odd' if n % 2 != 0 else 'even' for n in l] for l in nested_list]","AnswerCount":4,"Available Count":1,"Score":-0.049958375,"is_accepted":false,"ViewCount":630,"Q_Id":49850821,"Users Score":-1,"Answer":"To create a flat list, you need only one set of brackets in the comprehension, with the loops ordered outer-to-inner. Try the below code:\nodds_evens = ['odd' if n%2!=0 else 'even' for l in nested_list for n in l]\nOutput:\n['odd', 'even', 'odd', 'even', 'odd', 'even', 'odd', 'even', 'odd']","Q_Score":3,"Tags":"python","A_Id":49850915,"CreationDate":"2018-04-16T06:29:00.000","Title":"Nested list comprehension to flatten nested list","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":1,"REVIEW":0},{"Question":"I have a python script that is scheduled to run at a fixed time daily\nIf I am not around my colleague will be able to access my computer to run the script if there is any error with the windows task scheduler\nI like to allow him to run my windows task scheduler but also to protect my source code in the script...
is there any good way to do this, please?\n(I have read methods to use C code to hide it but I am only familiar with Python)\nThank you","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":656,"Q_Id":49874829,"Users Score":1,"Answer":"Compile the source to the .pyc bytecode, and then move the source somewhere inaccessible.\n\nOpen a terminal window in the directory containing your script\nRun python -m py_compile yourfile.py (you should get a yourfile.pyc file)\nMove yourfile.py somewhere secure\nyour script can now be run as python yourfile.pyc\n\nNote that this is not necessarily secure as such - there are ways to decompile the bytecode - but it does obfuscate it, if that is your requirement.","Q_Score":0,"Tags":"python,password-protection","A_Id":49877159,"CreationDate":"2018-04-17T09:44:00.000","Title":"Password protect a Python Script that is Scheduled to run daily","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":1,"REVIEW":0},{"Question":"I've got an issue with scrapy and python.\nI have several links. I crawl data from each of them in one script with the use of loop. But the order of crawled data is random or at least doesn't match to the link.\nSo I can't match url of each subpage with the outputed data.\nLike: crawled url, data1, data2, data3.\nData 1, data2, data3 => It's ok, because it comes from one loop, but how can I add to the loop current url or can I set the order of link's list? Like first from the list is crawled as first, second is crawled as second...","AnswerCount":3,"Available Count":2,"Score":-0.0665680765,"is_accepted":false,"ViewCount":79,"Q_Id":49896079,"Users Score":-1,"Answer":"Ok, it seems that the solution is in the settings.py file in scrapy:\nDOWNLOAD_DELAY = 3\nbetween requests.\nIt should be uncommented. By default it's commented.","Q_Score":0,"Tags":"python,scrapy","A_Id":49899202,"CreationDate":"2018-04-18T09:29:00.000","Title":"Scrapy - order of crawled urls","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I've got an issue with scrapy and python.\nI have several links. I crawl data from each of them in one script with the use of loop. But the order of crawled data is random or at least doesn't match to the link.\nSo I can't match url of each subpage with the outputed data.\nLike: crawled url, data1, data2, data3.\nData 1, data2, data3 => It's ok, because it comes from one loop, but how can I add to the loop current url or can I set the order of link's list?
Like first from the list is crawled as first, second is crawled as second...","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":79,"Q_Id":49896079,"Users Score":0,"Answer":"time.sleep() - would it be a solution?","Q_Score":0,"Tags":"python,scrapy","A_Id":49898314,"CreationDate":"2018-04-18T09:29:00.000","Title":"Scrapy - order of crawled urls","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am installing pyodbc on Redhat 6.5. Python 2.6 and 2.7.4 are installed. I get the following error below even though the header files needed for gcc are in the \/usr\/include\/python2.6.\nI have updated every dev package: yum groupinstall -y 'development tools'\nAny ideas on how to resolve this issue would be greatly appreciated???\nInstalling pyodbc...\nProcessing .\/pyodbc-3.0.10.tar.gz\nInstalling collected packages: pyodbc\n Running setup.py install for pyodbc ... error\n Complete output from command \/opt\/rh\/python27\/root\/usr\/bin\/python -u -c \"import setuptools, tokenize;file='\/tmp\/pip-JAGZDD-build\/setup.py';exec(compile(getattr(tokenize, 'open', open)(file).read().replace('\\r\\n', '\\n'), file, 'exec'))\" install --record \/tmp\/pip-QJasL0-record\/install-record.txt --single-version-externally-managed --compile:\n running install\n running build\n running build_ext\n building 'pyodbc' extension\n creating build\n creating build\/temp.linux-x86_64-2.7\n creating build\/temp.linux-x86_64-2.7\/tmp\n creating build\/temp.linux-x86_64-2.7\/tmp\/pip-JAGZDD-build\n creating build\/temp.linux-x86_64-2.7\/tmp\/pip-JAGZDD-build\/src\n gcc -pthread -fno-strict-aliasing -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -fPIC -DPYODBC_VERSION=3.0.10 -DPYODBC_UNICODE_WIDTH=4 -DSQL_WCHART_CONVERT=1 -I\/Applications\/Xcode.app\/Contents\/Developer\/Platforms\/MacOSX.platform\/Developer\/SDKs\/MacOSX10.8.sdk\/usr\/include -I\/opt\/rh\/python27\/root\/usr\/include\/python2.7 -c \/tmp\/pip-JAGZDD-build\/src\/cnxninfo.cpp -o build\/temp.linux-x86_64-2.7\/tmp\/pip-JAGZDD-build\/src\/cnxninfo.o -Wno-write-strings\n In file included from \/tmp\/pip-JAGZDD-build\/src\/cnxninfo.cpp:8:\n **\n**\/tmp\/pip-JAGZDD-build\/src\/pyodbc.h:41:20: error: Python.h: No such file or directory \/tmp\/pip-JAGZDD-build\/src\/pyodbc.h:42:25: error: floatobject.h: No such file or directory \/tmp\/pip-JAGZDD-build\/src\/pyodbc.h:43:24: error: longobject.h: No such file or directory \/tmp\/pip-JAGZDD-build\/src\/pyodbc.h:44:24: error: boolobject.h: No such file or directory \/tmp\/pip-JAGZDD-build\/src\/pyodbc.h:45:27: error: unicodeobject.h: No such file or directory \/tmp\/pip-JAGZDD-build\/src\/pyodbc.h:46:26: error: structmember.h: No such file or directory\n**\n In file included from \/tmp\/pip-JAGZDD-build\/src\/pyodbc.h:137,\n from \/tmp\/pip-JAGZDD-build\/src\/cnxninfo.cpp:8:\n \/tmp\/pip-JAGZDD-build\/src\/pyodbccompat.h:61:28: error: stringobject.h: No such file or directory\n \/tmp\/pip-JAGZDD-build\/src\/pyodbccompat.h:62:25: error: intobject.h: No such file or 
directory\n \/tmp\/pip-JAGZDD-build\/src\/pyodbccompat.h:63:28: error: bufferobject.h: No such file or directory\n In file included from \/tmp\/pip-JAGZDD-build\/src\/cnxninfo.cpp:8:\n \/tmp\/pip-JAGZDD-build\/src\/pyodbc.h: In function \u2018void _strlwr(char*)\u2019:\n \/tmp\/pip-JAGZDD-build\/src\/pyodbc.h:92: error: \u2018tolower\u2019 was not declared in this scope\n In file included from \/tmp\/pip-JAGZDD-build\/src\/pyodbc.h:137,\n from \/tmp\/pip-JAGZDD-build\/src\/cnxninfo.cpp:8:\n \/tmp\/pip-JAGZDD-build\/src\/pyodbccompat.h: At global scope:\n \/tmp\/pip-JAGZDD-build\/src\/pyodbccompat.h:71: error: expected initializer before \u2018*\u2019 token\n \/tmp\/pip-JAGZDD-build\/src\/pyodbccompat.h:81: error: \u2018Text_Buffer\u2019 declared as an \u2018inline\u2019 variable\n \/tmp\/pip-JAGZDD-build\/src\/pyodbccompat.h:81: error: \u2018PyObject\u2019 was not declared in this scope\n \/tmp\/pip-JAGZDD-build\/src\/pyodbccompat.h:81: error: \u2018o\u2019 was not declared in this scope\n \/tmp\/pip-JAGZDD-build\/src\/pyodbccompat.h:82: error: expected \u2018,\u2019 or \u2018;\u2019 before \u2018{\u2019 token\n \/tmp\/pip-JAGZDD-build\/src\/pyodbccompat.h:93: error: \u2018Text_Check\u2019 declared as an \u2018inline\u2019 variable\n \/tmp\/pip-JAGZDD-build\/src\/pyodbccompat.h:93: error: \u2018PyObject\u2019 was not declared in this scope\n \/tmp\/pip-JAGZDD-build\/src\/pyodbccompat.h:93: error: \u2018o\u2019 was not declared in this scope\n \/tmp\/pip-JAGZDD-build\/src\/pyodbccompat.h:94: error: expected \u2018,\u2019 or \u2018;\u2019 before \u2018{\u2019 token\n \/tmp\/pip-JAGZDD-build\/src\/pyodbccompat.h:104: error: \u2018PyObject\u2019 was not declared in this scope\n \/tmp\/pip-JAGZDD-build\/src\/pyodbccompat.h:104: error: \u2018lhs\u2019 was not declared in this scope\n \/tmp\/pip-JAGZDD-build\/src\/pyodbccompat.h:104: error: expected primary-expression before \u2018const\u2019\n \/tmp\/pip-JAGZDD-build\/src\/pyodbccompat.h:104: error: initializer expression list treated as compound expression\n \/tmp\/pip-JAGZDD-build\/src\/pyodbccompat.h:109: error: \u2018Text_Size\u2019 declared as an \u2018inline\u2019 variable\n \/tmp\/pip-JAGZDD-build\/src\/pyodbccompat.h:109: error: \u2018PyObject\u2019 was not declared in this scope\n \/tmp\/pip-JAGZDD-build\/src\/pyodbccompat.h:109: error: \u2018o\u2019 was not declared in this scope\n \/tmp\/pip-JAGZDD-build\/src\/pyodbccompat.h:110: error: expected \u2018,\u2019 or \u2018;\u2019 before \u2018{\u2019 token\n \/tmp\/pip-JAGZDD-build\/src\/pyodbccompat.h:118: error: \u2018TextCopyToUnicode\u2019 declared as an \u2018inline\u2019 variable\n \/tmp\/pip-JAGZDD-build\/src\/pyodbccompat.h:118: error: \u2018Py_UNICODE\u2019 was not declared in this scope\n \/tmp\/pip-JAGZDD-build\/src\/pyodbccompat.h:118: error: \u2018buffer\u2019 was not declared in this scope\n \/tmp\/pip-JAGZDD-build\/src\/pyodbccompat.h:118: error: \u2018PyObject\u2019 was not declared in this scope\n \/tmp\/pip-JAGZDD-build\/src\/pyodbccompat.h:118: error: \u2018o\u2019 was not declared in this scope\n \/tmp\/pip-JAGZDD-build\/src\/pyodbccompat.h:118: error: initializer expression list treated as compound expression\n \/tmp\/pip-JAGZDD-build\/src\/pyodbccompat.h:119: error: expected \u2018,\u2019 or \u2018;\u2019 before \u2018{\u2019 token\n error: command 'gcc' failed with exit status 1","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":441,"Q_Id":49908470,"Users Score":0,"Answer":"The resolution was to re-install 
Python2.7","Q_Score":0,"Tags":"python,python-2.7,gcc,pyodbc,python-2.6","A_Id":50045919,"CreationDate":"2018-04-18T20:24:00.000","Title":"gcc error when installing pyodbc","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":1,"REVIEW":0},{"Question":"I've been searching for a bit now and haven't been able to find anything similar to my question. Maybe i'm just not searching correctly. Anyways this is a question from my exam review. Given a binary tree, I need to output a list such that each item in the list is the number of nodes on a level in a binary tree at the items list index. What I mean, lst = [1,2,1] and the 0th index is the 0th level in the tree and the 1 is how many nodes are in that level. lst[1] will represent the number of nodes (2) in that binary tree at level 1. The tree isn't guaranteed to be balanced. We've only been taught preorder,inorder and postorder traversals, and I don't see how they would be useful in this question. I'm not asking for specific code, just an idea on how I could figure this out or the logic behind it. Any help is appreciated.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":1773,"Q_Id":49909109,"Users Score":1,"Answer":"The search ordering doesn't really matter as long as you only count each node once. A depth-first search solution with recursion would be:\n\nCreate a map counters to store a counter for each level. E.g. counters[i] is the number of nodes found so far at level i. Let's say level 0 is the root.\nDefine a recursive function count_subtree(node, level): Increment counters[level] once. Then for each child of the given node, call count_subtree(child, level + 1) (the child is at a 1-deeper level).\nCall count_subtree(root_node, 0) to count starting at the root. This will result in count_subtree being run exactly once on each node because each node only has one parent, so counters[level] will be incremented once per node. A leaf node is the base case (no children to call the recursive function on).\nBuild your final list from the values of counters, ordered by their keys ascending.\n\nThis would work with any kind of tree, not just binary. Running time is O(number of nodes in tree). Side note: The depth-first search solution would be easier to divide and run on parallel processors or machines than a similar breadth-first search solution.","Q_Score":3,"Tags":"python,python-3.x,tree,binary-tree,tree-traversal","A_Id":49911007,"CreationDate":"2018-04-18T21:14:00.000","Title":"Count number of nodes per level in a binary tree","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am using the Spyder editor and I have to go back and forth from the piece of code that I am writing to the definition of the functions I am calling. I am looking for shortcuts to move given this issue. I know how to go to the function definition (using Ctrl + g), but I don't know how to go back to the piece of code that I am writing. 
Is there an easy way to do this?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":2018,"Q_Id":49915867,"Users Score":4,"Answer":"(Spyder maintainer here) You can use the shortcuts Ctrl+Alt+Left and Ctrl+Alt+Right to move to the previous\/next cursor position, respectively.","Q_Score":3,"Tags":"python,keyboard-shortcuts,spyder","A_Id":49985209,"CreationDate":"2018-04-19T08:06:00.000","Title":"Going back to previous line in Spyder","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I tried to run a python script on my mac computer, but I ended up in troubles as it needed to install pandas as a dependency.\nI tried to get this dependency, but to do so I installed different components like brew, pip, wget and others including different versions of python using brew, .pkg package downloaded from python.org.\nIn the end, I was not able to run the script anyway.\nNow I would like to sort out the things and have only one version of python (3 probably) working correctly.\nCan you suggest me the way how to get the overview what I have installed on my computer and how can I clean it up?\nThank you in advance","AnswerCount":1,"Available Count":1,"Score":0.6640367703,"is_accepted":false,"ViewCount":5129,"Q_Id":49920886,"Users Score":4,"Answer":"Use brew list to see what you've installed with Brew. And Brew Uninstall as needed. Likewise, review the logs from wget to see where it installed things. Keep in mind that MacOS uses Python 2.7 for system critical tasks; it's baked-into the OS so don't touch it.\nAnything you installed with pip is saved to the \/site-packages directory of the Python version in which you installed it so it will disappear when you remove that version of Python.\nThe .pkg files installed directly into your Applications folder and can be deleted safely like any normal app.","Q_Score":2,"Tags":"python,macos,pip,homebrew","A_Id":49921063,"CreationDate":"2018-04-19T12:15:00.000","Title":"clean up python versions mac osx","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0,"API_CHANGE":1,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have a API build using Python\/Flask, and I have a endpoint called \/build-task that called by the system, and this endpoint takes about 30 minutes to run. \nMy question is that how do I lock the \/build-task endpoint when it's started and running already? 
So so other user, or system CANNOT call this endpoint.","AnswerCount":2,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":709,"Q_Id":49930033,"Users Score":2,"Answer":"You have some approaches for this problem:\n1 - You can create a session object, save a flag in the object and check if the endpoint is already running and respond accordingly.\n2 - Flag on the database, check if the endpoint is already running and respond accordingly.","Q_Score":1,"Tags":"python,flask","A_Id":49930152,"CreationDate":"2018-04-19T20:49:00.000","Title":"Python\/Flask: only one user can call a endpoint at one time","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have Redis as my Cache Server. When I call delay() on a task,it takes more than 10 tasks to even start executing. Any idea how to reduce this unnecessary lag?\nShould I replace Redis with RabbitMQ?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":788,"Q_Id":49930990,"Users Score":0,"Answer":"It's very difficult to say what the cause of the delay is without being able to inspect your application and server logs, but I can reassure you that the delay is not normal and not an effect specific to either Celery or using Redis as the broker. I've used this combination a lot in the past and execution of tasks happens in a number of milliseconds.\nI'd start by ensuring there are no network related issues between your client creating the tasks, your broker (Redis) and your task consumers (celery workers).\nGood luck!","Q_Score":1,"Tags":"python,django,asynchronous,redis,celery","A_Id":49931421,"CreationDate":"2018-04-19T22:13:00.000","Title":"After delay() is called on a celery task, it takes more than 5 to 10 seconds for the tasks to even start executing with redis as the server","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have code like this, I want to check in the time range that has overtime and sum it.\ncurrently, am trying out.hour+1 with this code, but didn't work.\n\n\n overtime_all = 5\n overtime_total_hours = 0\n out = datetime.time(14, 30)\n\n while overtime_all > 0:\n overtime200 = object.filter(time__range=(out, out.hour+1)).count()\n overtime_total_hours = overtime_total_hours + overtime200\n overtime_all -=1\n\n print overtime_total_hours\n\n\nhow to add 1 hour every loop?...","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":2174,"Q_Id":49955821,"Users Score":1,"Answer":"Timedelta (from datetime) can be used to increment or decrement a datatime objects. Unfortunately, it cannot be directly combined with datetime.time objects. \nIf the values that are stored in your time column are datetime objects, you can use them (e.g.: my_datetime + timedelta(hours=1)). 
If they are time objects, you'll need to think if they represent a moment in time (in that case, they should be converted to datetime objects) or a duration (in that case, it's probably easier to store it as an integer representing the total amount of minutes, and to perform all operations on integers).","Q_Score":2,"Tags":"python,django","A_Id":49955858,"CreationDate":"2018-04-21T12:34:00.000","Title":"add +1 hour to datetime.time() django on forloop","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have code like this, I want to check in the time range that has overtime and sum it.\ncurrently, am trying out.hour+1 with this code, but didn't work.\n\n\n overtime_all = 5\n overtime_total_hours = 0\n out = datetime.time(14, 30)\n\n while overtime_all > 0:\n overtime200 = object.filter(time__range=(out, out.hour+1)).count()\n overtime_total_hours = overtime_total_hours + overtime200\n overtime_all -=1\n\n print overtime_total_hours\n\n\nhow to add 1 hour every loop?...","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":2174,"Q_Id":49955821,"Users Score":1,"Answer":"I found the solution now, and this is work.\n\n\n overtime_all = 5\n overtime_total_hours = 0\n out = datetime.time(14, 30)\n\n while overtime_all > 0:\n overtime200 = object.filter(time__range=(out,datetime.time(out.hour+1, 30))).count()\n overtime_total_hours = overtime_total_hours + overtime200\n overtime_all -=1\n\n print overtime_total_hours\n\ni do change out.hour+1 to datetime.time(out.hour+1, 30) its work fine now, but i dont know maybe there more compact\/best solution.\nthank you guys for your answer.","Q_Score":2,"Tags":"python,django","A_Id":49956349,"CreationDate":"2018-04-21T12:34:00.000","Title":"add +1 hour to datetime.time() django on forloop","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I Have data-set for which consist 2000 lines in a text file.\nEach line represents x,y,z (3D coordinates location) of 20 skeleton joint points of human body (eg: head, shoulder center, shoulder left, shoulder right,......, elbow left, elbow right). I want to do k-means clustering of this data.\nData is separated by 'spaces ', each joint is represented by 3 values (Which represents x,y,z coordinates). Like head and shoulder center represented by \n.0255... .01556600 1.3000... .0243333 .010000 .1.3102000 .... \nSo basically I have 60 columns in each row, which which represents 20 joints and each joins consist of three points. 
\nMy question is how do I format or use this data for k-means clustering?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":394,"Q_Id":49961977,"Users Score":0,"Answer":"You don't need to reformat anything.\nEach row is a 60-dimensional vector of continuous values with a comparable scale (coordinates), as needed for k-means.\nYou can just run k-means on this.\nBut assuming that the measurements were taken in sequence, you may observe a strong correlation between rows, so I wouldn't expect the data to cluster extremely well, unless you set up the user to do and hold certain poses.","Q_Score":0,"Tags":"python,cluster-analysis,k-means,data-science","A_Id":49968114,"CreationDate":"2018-04-22T02:32:00.000","Title":"k-means clustering multi column data in python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I want to create table in odoo 10 with the following columns: quantity_in_the_first_day_of_month,input_quantity,output_quantity,quantity_in_the_last_day_of_the_month.\nbut i don't know how to get the quantity of the specified date","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":185,"Q_Id":49965402,"Users Score":0,"Answer":"You can join the sale order and sale order line to get the specified date.\nselect \n sum(sol.product_uom_qty)\nfrom \n sale_order s,sale_order_line sol \nwhere \n sol.order_id=s.id and\n DATE(s.date_order) = '2018-01-01'","Q_Score":0,"Tags":"python,python-3.x,python-2.7,odoo,odoo-10","A_Id":50134759,"CreationDate":"2018-04-22T11:28:00.000","Title":"How to get the quantity of products in specified date in odoo 10","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"Let's say I am running multiple python processes(not threads) on a multi core CPU (say 4). GIL is process level so GIL within a particular process won't affect other processes.\nMy question here is if the GIL within one process will take hold of only single core out of 4 cores or will it take hold of all 4 cores?\nIf one process locks all cores at once, then multiprocessing should not be any better than multi threading in python. If not how do the cores get allocated to various processes?\n\nAs an observation, in my system which is 8 cores (4*2 because of\n hyperthreading), when I run a single CPU bound process, the CPU usage\n of 4 out of 8 cores goes up.\n\nSimplifying this:\n4 python threads (in one process) running on a 4 core CPU will take more time than single thread doing same work (considering the work is fully CPU bound).
Will 4 different process doing that amount of work reduce the time taken by a factor of near 4?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":3869,"Q_Id":49993687,"Users Score":0,"Answer":"Process to CPU\/CPU core allocation is handled by the Operating System.","Q_Score":0,"Tags":"python,multiprocessing,gil","A_Id":49993795,"CreationDate":"2018-04-24T04:53:00.000","Title":"How do CPU cores get allocated to python processes in multiprocessing?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have a model already trained by dynet library. But i forget the --dynet-seed parameter when training this model. \nDoes anyone know how to read back this parameter from the saved model?\nThank you in advance for any feedback.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":78,"Q_Id":50003397,"Users Score":0,"Answer":"You can't read back the seed parameter. Dynet model does not save the seed parameter. The obvious reason is, it is not required at testing time. Seed is only used to set fixed initial weights, random shuffling etc. for different experimental runs. At testing time no parameter initialisation or shuffling is required. So, no need to save seed parameter.\nTo the best of my knowledge, none of the other libraries like tensorflow, pytorch etc. save the seed parameter as well.","Q_Score":0,"Tags":"python,lstm,dynet","A_Id":50051554,"CreationDate":"2018-04-24T13:49:00.000","Title":"How to read back the \"random-seed\" from a saved model of Dynet","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I'm currently working on gateway with an embedded Linux and a Webserver. The goal of the gateway is to retrieve data from electrical devices through a RS485\/Modbus line, and to display them on a server.\nI'm using Nginx and Django, and the web front-end is delivered by \"static\" files. Repeatedly, a Javascript script file makes AJAX calls that send CGI requests to Nginx. These CGI requests are answered with JSON responses thanks to Django. The responses are mostly data that as been read on the appropriate Modbus device.\nThe exact path is the following :\nRandomly timed CGI call -> urls.py -> ModbusCGI.py (import an other script ModbusComm.py)-> ModbusComm.py create a Modbus client and instantly try to read with it.\nNext to that, I wanted to implement a Datalogger, to store data in a DB at regular intervals. I made a script that also import the ModbusComm.py script, but it doesn't work : sometime multiple Modbus frames are sent at the same time (datalogger and cgi scripts call the same function in ModbusComm.py \"files\" at the same time) which results in an error.\nI'm sure this problem would also occur if there are a lot of users on the server (CGI requests sent at the same time). Or not ? (queue system already managed for CGI requests? 
I'm a bit lost)\nSo my goal would be to make a queue system that could handle calls from several python scripts => make them wait while it's not their turn => call a function with the right arguments when it's their turn (actually using the modbus line), and send back the response to the python script so it can generate the JSON response.\nI really don't know how to achieve that, and I'm sure there are better ways to do this. \nIf I'm not clear enough, don't hesitate to make me aware of it :)","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":389,"Q_Id":50010615,"Users Score":0,"Answer":"I had the same problem when I had to allow multiple processes to read some Modbus (and not only Modbus) data through a serial port. I ended up with a standalone process (\u201cserial port server\u201d) that exclusively works with a serial port. All other processes work with that port through that standalone process via some inter-process communication mechanism (we used Unix sockets).\nThis way when an application wants to read a Modbus register it connects to the \u201cserial port server\u201d, sends its request and receives the response. All the actual serial port communication is done by the \u201cserial port server\u201d in a sequential way to ensure consistency.","Q_Score":0,"Tags":"python,django,multithreading,asynchronous,concurrency","A_Id":50105654,"CreationDate":"2018-04-24T20:57:00.000","Title":"Django\/Python - Serial line concurrency","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I would like to ask if it is possible to make Python 3 a default interpreter on Mac OS 10 when typing python right away from the terminal? If so, can somebody help how to do it? I'm avoiding switching between the environments.\nCheers","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":13084,"Q_Id":50011518,"Users Score":1,"Answer":"You can do that by setting an alias, typing in something like $ alias python=python3 in the terminal.\nIf you want the change to persist open ~\/.bash_profile using nano and then add alias python=python3. CTRL+O to save and CTRL+X to close.\nThen type $ source ~\/.bash_profile in the terminal.","Q_Score":3,"Tags":"python,python-3.x,macos","A_Id":50011593,"CreationDate":"2018-04-24T22:23:00.000","Title":"Make Python 3 default on Mac OS?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"Trying to Import 200 contacts from CSV file to telegram using Python3 Code. It's working with first 50 contacts and then stop and showing below:\ntelethon.errors.rpc_error_list.FloodWaitError: A wait of 101 seconds is required\nAny idea how I can import all list without waiting?? Thanks!!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":287,"Q_Id":50012489,"Users Score":0,"Answer":"You cannot import a large number of contacts sequentially.
\u064fThe telegram finds you're sperm.\nAs a result, you must use \u200dsleep between your requests","Q_Score":0,"Tags":"python,csv,telegram,telethon","A_Id":50310718,"CreationDate":"2018-04-25T00:38:00.000","Title":"can't import more than 50 contacts from csv file to telegram using Python3","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I'm using pytest to test my app.\npytest supports 2 approaches (that I'm aware of) of how to write tests:\n\nIn classes:\n\n\ntest_feature.py -> class TestFeature -> def test_feature_sanity\n\n\nIn functions:\n\n\ntest_feature.py -> def test_feature_sanity\n\nIs the approach of grouping tests in a class needed? Is it allowed to backport unittest builtin module?\nWhich approach would you say is better and why?","AnswerCount":4,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":39428,"Q_Id":50016862,"Users Score":57,"Answer":"There are no strict rules regarding organizing tests into modules vs classes. It is a matter of personal preference. Initially I tried organizing tests into classes, after some time I realized I had no use for another level of organization. Nowadays I just collect test functions into modules (files).\nI could see a valid use case when some tests could be logically organized into same file, but still have additional level of organization into classes (for instance to make use of class scoped fixture). But this can also be done just splitting into multiple modules.","Q_Score":59,"Tags":"python,pytest","A_Id":50028551,"CreationDate":"2018-04-25T07:54:00.000","Title":"Grouping tests in pytest: Classes vs plain functions","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I'm using pytest to test my app.\npytest supports 2 approaches (that I'm aware of) of how to write tests:\n\nIn classes:\n\n\ntest_feature.py -> class TestFeature -> def test_feature_sanity\n\n\nIn functions:\n\n\ntest_feature.py -> def test_feature_sanity\n\nIs the approach of grouping tests in a class needed? Is it allowed to backport unittest builtin module?\nWhich approach would you say is better and why?","AnswerCount":4,"Available Count":2,"Score":1.0,"is_accepted":false,"ViewCount":39428,"Q_Id":50016862,"Users Score":20,"Answer":"Typically in unit testing, the object of our tests is a single function. That is, a single function gives rise to multiple tests. In reading through test code, it's useful to have tests for a single unit be grouped together in some way (which also allows us to e.g. run all tests for a specific function), so this leaves us with two options:\n\nPut all tests for each function in a dedicated module\nPut all tests for each function in a class\n\nIn the first approach we would still be interested in grouping all tests related to a source module (e.g. utils.py) in some way. 
Now, since we are already using modules to group tests for a function, this means that we should like to use a package to group tests for a source module.\nThe result is one source function maps to one test module, and one source module maps to one test package.\nIn the second approach, we would instead have one source function map to one test class (e.g. my_function() -> TestMyFunction), and one source module map to one test module (e.g. utils.py -> test_utils.py).\nIt depends on the situation, perhaps, but the second approach, i.e. a class of tests for each function you are testing, seems more clear to me. Additionally, if we are testing source classes\/methods, then we could simply use an inheritance hierarchy of test classes, and still retain the one source module -> one test module mapping.\nFinally, another benefit to either approach over just a flat file containing tests for multiple functions, is that with classes\/modules already identifying which function is being tested, you can have better names for the actual tests, e.g. test_does_x and test_handles_y instead of test_my_function_does_x and test_my_function_handles_y.","Q_Score":59,"Tags":"python,pytest","A_Id":57532692,"CreationDate":"2018-04-25T07:54:00.000","Title":"Grouping tests in pytest: Classes vs plain functions","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"Suppose we have a joint distribution p(x_1,x_2), and we know x_1,x_2,p. Both are discrete, (x_1,x_2) is scatter, its contour could be drawn, marginal as well. I would like to show the area of 95% quantile (a scale of 95% data will be contained) of the joint distribution, how can I do that?","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":305,"Q_Id":50017251,"Users Score":0,"Answer":"If you are interested in finding a pair x_1, x_2 of real numbers such that\nP(X_1<=x_1, X_2<=x_2) = 0.95 and your distribution is continuous then there will be infinitely many of these pairs. You might be better of just fixing one of them and then finding the other","Q_Score":0,"Tags":"python,numpy,scipy,confidence-interval,credible-interval","A_Id":50017530,"CreationDate":"2018-04-25T08:16:00.000","Title":"How to calculate a 95 credible region for a 2D joint distribution?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"Suppose we have a joint distribution p(x_1,x_2), and we know x_1,x_2,p. Both are discrete, (x_1,x_2) is scatter, its contour could be drawn, marginal as well. I would like to show the area of 95% quantile (a scale of 95% data will be contained) of the joint distribution, how can I do that?","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":305,"Q_Id":50017251,"Users Score":1,"Answer":"As the other points out, there are infinitely many solutions to this problem. A practical one is to find the approximate center of the point cloud and extend a circle from there until it contains approximately 95% of the data. 
Then, find the convex hull of the selected points and compute its area.\nOf course, this will only work if the data is sort of concentrated in a single area. This won't work if there are several clusters.","Q_Score":0,"Tags":"python,numpy,scipy,confidence-interval,credible-interval","A_Id":50017700,"CreationDate":"2018-04-25T08:16:00.000","Title":"How to calculate a 95 credible region for a 2D joint distribution?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"Recently started working on influxDB, can't find how to add new measurements or make a table of data from separate measurements, like in SQL where we have to join tables or so.\nThe influxdb docs aren't that clear. I'm currently using the terminal for everything and wouldn't mind switching to python but most of it is about HTTP post schemes in the docs, is there any other alternative?\nI would prefer influxDB in python if the community support is good","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":176,"Q_Id":50022212,"Users Score":1,"Answer":"The InfluxDB query language does not support joins across measurements. \nJoins instead need to be done client side after querying the data. Querying data from multiple measurements, without a join, can still be done with one query.","Q_Score":1,"Tags":"influxdb,influxdb-python","A_Id":50022881,"CreationDate":"2018-04-25T12:20:00.000","Title":"queries and advanced operations in influxdb","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":1},{"Question":"I try to write a defense system by using mininet + pox.\nI have an l3_edited file to calculate entropy. I understand when a host is attacked. \nI have my myTopo.py file that creates a topo with Mininet. \nNow my question:\nI want to change hosts' ips when l3_edited detects an attack. Where should I do it?\nI believe I should write a program and run it in mininet. (not like a custom topo but run it after creating mininet, in the command line). If it's true, how can I get the hosts' objects? If I can get them, I can change their IPs.\nOr should I do it in my myTopo.py ??? Then, how can I run my defense code, when I detect an attack?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":185,"Q_Id":50052832,"Users Score":0,"Answer":"If someone is looking for an answer...\nYou can use your custom topology file to do other tasks. Multithreading solved my problem.","Q_Score":0,"Tags":"python,mininet,pox","A_Id":51859775,"CreationDate":"2018-04-26T22:40:00.000","Title":"Run external python file with Mininet","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I'm working on a Dataframe with 1116 columns, how could I select just the columns in a period of 17?\nMore clearly, select the 12th, 29th, 46th, 63rd... 
columns","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":81,"Q_Id":50062936,"Users Score":0,"Answer":"df.iloc[:,[i*17 for i in range(0,65)]]","Q_Score":1,"Tags":"python,pandas,dataframe","A_Id":50063200,"CreationDate":"2018-04-27T12:58:00.000","Title":"Select columns periodically on pandas DataFrame","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have a Cython-based package which depends on other C++ SO libraries. Those libraries are binary different between Ubuntu (dev) and RedHat (prod). So the SO file generated by Cython has to be different as well. If I use Wheel to package it the file name is same for both environments:\npackage-version-cp27-cp27mu-linux_x86_64.whl\nSo if I upload it to pypi it will conflict with RedHat based distribution of the same package. I have to upload it to pypi because the project is then PEX-ed (via Pants) and PEX tries to download from pypi and fails if it does not find it with the following exception.\nException caught: 'pex.resolver.Unsatisfiable'\nAny ideas how to resolve it?\nThx.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":211,"Q_Id":50065595,"Users Score":0,"Answer":"I found a solution by using a different PyPi instance. So our DEV Ubuntu environment and PROD RedHat just use two different PyPi sources.\nTo do that I had to make two configurations ~\/.pypic and ~\/.pip\/pip.conf to upload.","Q_Score":0,"Tags":"python,ubuntu,cython,redhat","A_Id":50101049,"CreationDate":"2018-04-27T15:23:00.000","Title":"How to create different Python Wheel distributions for Ubuntu and RedHat","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am new to python and thought it would be great to have my very first python project running on AWS infrastructure. Given my previous node.js experience with lambdas, I thought that every function would have its own code and the app is only glued together by the persistence layer, everything else are decoupled separate functions.\nIn Python lambdas there are serverless microframeworks like Chalice or Zappa that seem to be an accepted practice. For me though it feels like they are hacking around the concept of serverless approach. You still have a full-blown app build on let's say Flask, or even Django, and that app is served through lambda. There is still one application that has all the routing, configs, boilerplate code, etc instead of small independent functions that just do their job. I currently do not see how and if this makes like any easier.\n\nWhat is the benefit \/ reason for having the whole code base served through lambdas as opposed to individual functions?\nIs there an execution time penalty if using flask\/django\/whatever else with serverless apps? 
\nIf this depends on the particular project, what would be the guidance on when to use a framework, and when to use individual functions?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":596,"Q_Id":50080592,"Users Score":0,"Answer":"Benefits: you can use known concepts and adopt them in serverless.\nPerformance: the smaller the code is, the less RAM it takes. It must be loaded, processed, and so on. Just to process a single request? For me that was always too much.\nLet's say you have a Django project that is working on elastic beanstalk, and you need some lambdas to deal with limited problems. Now, do you want to have two separate configurations? What about common functions? \n\nServerless looks nice, but... let's assume that you have permissions, so your app will pull that stuff on every call. Perhaps you have it cached - in redis, as a whole the permissions for a user... The other option is dynamodb, which is even more expensive. Yes, there is a nice SLA, but the API is quite strange; also if you plan on keeping more data there... the more data you have the slower it works - for the same money. In other words - if you put in more data, fetching will cost more - if you want the same speed.","Q_Score":3,"Tags":"python,python-3.x,aws-lambda","A_Id":50080743,"CreationDate":"2018-04-28T20:06:00.000","Title":"Why use zappa\/chalice in serverless python apps?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I'm currently developing a keyword-spotting system that recognizes digits from 0 to 9 using deep neural networks. I have a dataset of people saying the numbers (namely the TIDIGITS dataset, collected at Texas Instruments, Inc), however the data is not prepared to be fed into a neural network, because not all the audio data have the same audio length, plus some of the files contain several digits being spoken in sequence, like \"one two three\".\nCan anyone tell me how would I transform these wav files into 1 second wav files containing only the sound of one digit? Is there any way to automatically do this? Preparing the audio files individually would be time-expensive.\nThank you in advance!","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":3097,"Q_Id":50087271,"Users Score":1,"Answer":"I would split each wav by the areas of silence. Trim the silence from beginning and end. Then I'd run each one through an FFT for different sections. Smaller ones at the beginning of the sound. Then I'd normalise the frequencies against the fundamental. 
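A rough sketch of the silence-splitting and per-segment FFT steps just described, run on a synthetic signal (the energy threshold and frame size are illustrative guesses, not values from the answer); the feeding-into-the-NN step continues below:

    import numpy as np

    sr = 8000
    t = np.linspace(0, 1, sr, endpoint=False)
    # synthetic stand-in: silence, a 440 Hz "digit", silence
    signal = np.concatenate([np.zeros(2000), np.sin(2*np.pi*440*t[:3000]), np.zeros(3000)])

    # mark frames as voiced when their RMS energy exceeds a threshold
    frame = 256
    energy = np.array([np.sqrt(np.mean(signal[i:i+frame]**2))
                       for i in range(0, len(signal) - frame, frame)])
    voiced = energy > 0.05

    # collect contiguous voiced runs as candidate digit segments
    segments, start = [], None
    for i, v in enumerate(voiced):
        if v and start is None:
            start = i * frame
        elif not v and start is not None:
            segments.append(signal[start:i * frame])
            start = None
    if start is not None:
        segments.append(signal[start:])

    # one FFT magnitude spectrum per segment (the answer suggests several,
    # smaller windows near the onset instead of one per segment)
    spectra = [np.abs(np.fft.rfft(seg)) for seg in segments]
    print(len(segments), [len(s) for s in spectra])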
Then I'd feed the results into the NN as a 3d array of volumes, frequencies and times.","Q_Score":3,"Tags":"python,audio,machine-learning,deep-learning,speech-recognition","A_Id":50088667,"CreationDate":"2018-04-29T13:47:00.000","Title":"How to preprocess audio data for input into a Neural Network","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I was wondering how to generate a random 4 digit number that has no duplicates in python 3.6\nI could generate 0000-9999 but that would give me a number with a duplicate like 3445. Anyone have any ideas?\nThanks in advance","AnswerCount":1,"Available Count":1,"Score":-0.1973753202,"is_accepted":false,"ViewCount":537,"Q_Id":50091226,"Users Score":-1,"Answer":"Generate a random number\nCheck if there are any duplicates; if so, go back to 1\nYou have a number with no duplicates\n\nOR\nGenerate it one digit at a time from a list, removing the digit from the list at each iteration.\n\nGenerate a list with numbers 0 to 9 in it.\nCreate two variables, the result holding value 0, and multiplier holding 1.\nRemove a random element from the list, multiply it by the multiplier variable, add it to the result.\nMultiply the multiplier by 10.\nGo to step 3 and repeat for the next digit (up to the desired digits).\nYou now have a random number with no repeats.","Q_Score":1,"Tags":"python,python-3.x,random","A_Id":50091280,"CreationDate":"2018-04-29T20:58:00.000","Title":"How would i generate a random number in python without duplicating numbers","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have this doubt when I fit a neural network in a regression problem. I preprocessed the predictors (features) of my train and test data using the methods of Imputers and Scale from sklearn.preprocessing, but I did not preprocess the class or target of my train data or test data.\nIn the architecture of my neural network all the layers have relu as activation function except the last layer that has the sigmoid function. I have chosen the sigmoid function for the last layer because the values of the predictions are between 0 and 1.\ntl;dr: In summary, my question is: should I deprocess the output of my neuralnet? If I don't use the sigmoid function, the values of my output are < 0 and > 1. In this case, how should I do it?\nThanks","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":99,"Q_Id":50103377,"Users Score":0,"Answer":"Usually, if you are doing regression you should use a 'linear' activation in the last layer. A sigmoid function will 'favor' values closer to 0 and 1, so it would be harder for your model to output intermediate values. \nIf the distribution of your targets is gaussian or uniform I would go with a linear output layer. De-processing shouldn't be necessary unless you have very large targets.","Q_Score":0,"Tags":"python,scikit-learn,neural-network,keras","A_Id":53323802,"CreationDate":"2018-04-30T15:12:00.000","Title":"Keras Neural Network. 
Preprocessing","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I'm working on Json Web Tokens and wanted to reproduce it using python, but I'm struggling on how to calculate the HMAC_SHA256 of the texts using a public certificate (pem file) as a key.\nDoes anyone know how I can accomplish that!?\nTks","AnswerCount":2,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":1106,"Q_Id":50110748,"Users Score":2,"Answer":"In case any one found this question. The answer provided by the host works, but the idea is wrong. You don't use any RSA keys with HMAC method. The RSA key pair (public and private) are used for asymmetric algorithm while HMAC is symmetric algorithm.\nIn HMAC, the two sides of the communication keep the same secret text(bytes) as the key. It can be a public_cert.pem as long as you keep it secretly. But a public.pem is usually shared publicly, which makes it unsafe.","Q_Score":1,"Tags":"python,jwt,hmac","A_Id":51356271,"CreationDate":"2018-05-01T02:43:00.000","Title":"How to calculate the HMAC(hsa256) of a text using a public certificate (.pem) as key","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I'm coding watermarking images in JES and I was wondering how to Watermark a picture by automatically scaling a watermark image?\nIf anyone can help me that would be great.\nThanks.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":174,"Q_Id":50111820,"Users Score":0,"Answer":"Ill start by giving you a quote from the INFT1004 assignment you are asking for help with.\n\"In particular, you should try not to use code or algorithms from external sources, and not to obtain help from people other than your instructors, as this can prevent you from mastering these concepts\"\nIt specifically says in this assignment that you should not ask people online or use code you find or request online, and is a breach of the University of Newcastle academic integrity code - you know the thing you did a module on before you started the course. A copy of this post will be sent along to the course instructor.","Q_Score":1,"Tags":"python,python-3.x,python-2.7,jython,jes","A_Id":50282582,"CreationDate":"2018-05-01T05:40:00.000","Title":"How to auto scale in JES","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I realize there's another question with a similar title, but my dataset is very different.\nI have nearly 40 million rows and about 3 thousand labels. 
Running a simple sklearn train_test_split takes nearly 20 minutes.\nI initially was using multi-class classification models as that's all I had experience with, and realized that since I needed to come up with all the possible labels a particular record could be tied to, I should be using a multi-label classification method.\nI'm looking for recommendations on how to do this efficiently. I tried binary relevance, which took nearly 4 hours to train. Classifier chains errored out with a memory error after 22 hours. I'm afraid to try a label powerset as I've read they don't work well with a ton of data. Lastly, I've got adapted algorithms like MLkNN, and then ensemble approaches (which I'm also worried about performance-wise).\nDoes anyone else have experience with this type of problem and volume of data? In addition to suggested models, I'm also hoping for advice on best training methods, like train_test_split ratios or different\/better methods.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":450,"Q_Id":50117450,"Users Score":2,"Answer":"20 minutes for this size of a job doesn't seem that long, neither does 4 hours for training. \nI would really try vowpal wabbit. It excels at this sort of multilabel problem and will probably give unmatched performance if that's what you're after. It requires significant tuning and will still require quality training data, but it's well worth it. This is essentially just a binary classification problem. An ensemble will of course take longer so consider whether or not it's necessary given your accuracy requirements.","Q_Score":2,"Tags":"python,scikit-learn,multilabel-classification","A_Id":50117662,"CreationDate":"2018-05-01T13:33:00.000","Title":"Multi-label classification methods for large dataset","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":1,"REVIEW":0},{"Question":"I'm looking to set up a constraint-check in Python using PULP. Suppose I had variables X1,..,Xn and a constraint (AffineExpression) A1X1 + ... + AnXn <= B, where A1,..,An and B are all constants. \nGiven an assignment for X (e.g. X1=1, X2=4,...Xn=2), how can I check if the constraints are satisfied? I know how to do this with matrices using Numpy, but wondering if it's possible to do using PULP to let the library handle the work.\nMy hope here is that I can check specific variable assignments. I do not want to run an optimization algorithm on the problem (e.g. prob.solve()).\nCan PULP do this? Is there a different Python library that would be better? I've thought about Google's OR-Tools but have found the documentation is a little bit harder to parse through than PULP's.","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":965,"Q_Id":50123308,"Users Score":1,"Answer":"It looks like this is possible by doing the following:\n\nDefine PULP variables and constraints and add them to an LpProblem\nMake a dictionary of your assignments in the form {'variable name': value}\nUse LpProblem.assignVarsVals(your_assignment_dict) to assign those values\nRun LpProblem.valid() to check that your assignment meets all constraints and variable restrictions\n\nNote that this will almost certainly be slower than using numpy and Ax <= b. 
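A sketch of the four steps listed in the PULP answer above; the variable names and coefficients are illustrative:

    from pulp import LpProblem, LpVariable, LpMinimize

    # step 1: define variables and constraints on an LpProblem
    prob = LpProblem("check_only", LpMinimize)
    x1 = LpVariable("x1", lowBound=0)
    x2 = LpVariable("x2", lowBound=0)
    prob += 2 * x1 + 3 * x2 <= 12, "capacity"   # A1*X1 + A2*X2 <= B

    # steps 2-4: assign candidate values, then validate without solving
    assignment = {"x1": 1, "x2": 4}
    prob.assignVarsVals(assignment)
    print(prob.valid())   # False here: 2*1 + 3*4 = 14 > 12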
Formulating the problem might be easier, but performance will suffer due to how PULP runs these checks.","Q_Score":1,"Tags":"python,optimization,constraints,pulp","A_Id":50138245,"CreationDate":"2018-05-01T20:23:00.000","Title":"PULP: Check variable setting against constraints","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":1},{"Question":"I am quite new to Python coding, and I am dealing with a big dataframe for my internship.\nI had an issue as sometimes there are wrong values in my dataframe. For example I find string type values (\"broken leaf\") instead of integer type values like (\"120 cm\") or (NaN).\nI know there is the df.replace() function, but for that you need to know that there are wrong values. So how do I find if there are any wrong values inside my dataframe?\nThank you in advance","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1435,"Q_Id":50138110,"Users Score":0,"Answer":"\"120 cm\" is a string, not an integer, so that's a confusing example. Some ways to find \"unexpected\" values include:\nUse \"describe\" to examine the range of numerical values, to see if there are any far outside of your expected range.\nUse \"unique\" to see the set of all values for cases where you expect a small number of permitted values, like a gender field.\nLook at the datatypes of columns to see whether there are strings creeping into fields that are supposed to be numerical.\nUse regexps if valid values for a particular column follow a predictable pattern.","Q_Score":0,"Tags":"python,pandas,dataframe","A_Id":50140995,"CreationDate":"2018-05-02T15:18:00.000","Title":"How to find if there are wrong values in a pandas dataframe?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have a device which is sending packets with its own specific construction (header, data, crc) through its ethernet port.\nWhat I would like to do is to communicate with this device using a Raspberry and Python 3.x.\nI am already able to send raw ethernet packets using the \"socket\" library; I've checked with wireshark on my computer and everything seems to be transmitted as expected.\nBut now I would like to read the incoming raw packets sent by the device and store them somewhere on my RPI to use them later.\nI don't know how to use the \"socket\" library to read raw packets (I mean layer 2 packets); I only find tutorials for reading higher level packets like TCP\/IP.\nWhat I would like to do is something similar to what wireshark does on my computer, that is to say read all raw packets going through the ethernet port.\nThanks,\nAlban","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":914,"Q_Id":50151655,"Users Score":0,"Answer":"Did you try using the ettercap package (ettercap-graphical)? \nIt should be available with apt. 
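The answer above points at external tools; for the "read layer 2 packets in Python" part of the question, a commonly used alternative (not mentioned in the answer) is a raw AF_PACKET socket, which is Linux-only and requires root; the interface name is illustrative:

    import socket

    # 0x0003 = ETH_P_ALL, i.e. capture frames of every protocol
    s = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.ntohs(0x0003))
    s.bind(("eth0", 0))   # illustrative interface name

    frame, addr = s.recvfrom(65535)   # one raw ethernet frame
    dst, src, ethertype = frame[:6], frame[6:12], frame[12:14]
    print(src.hex(), "->", dst.hex(), "type", ethertype.hex())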
\nAlternatively you can try using tcpdump (a command-line packet analyzer) or even check iptables","Q_Score":2,"Tags":"python,linux,sockets,raspberry-pi,ethernet","A_Id":50182669,"CreationDate":"2018-05-03T09:34:00.000","Title":"Read raw ethernet packet using python on Raspberry","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":1,"ERRORS":0,"REVIEW":0},{"Question":"I am using server(server_name.corp.com) inside a corporate company. On the server i am running a flask server to listen on 0.0.0.0:5000.\nservers are not exposed to outside world but accessible via vpns.\nNow when i run host server_name.corp.com in the box i get some ip1(10.*.*.*)\nWhen i run ifconfig in the box it gives me ip2(10.*.*.*).\nAlso if i run ping server_name.corp.com in same box i get ip2.\nAlso i can ssh into server with ip1 not ip2\nI am able to access the flask server at ip1:5000 but not on ip2:5000.\nI am not into networking so I'm fully confused about why there are 2 different ips and why i can access ip1:5000 from browser not ip2:5000.\nAlso what is the equivalent of the host command in python ( how to get ip1 from python. I am using socket.gethostbyname(server_name.corp.com) which gives me ip2)","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":84,"Q_Id":50166145,"Users Score":0,"Answer":"The network status isn't quite clear from your statements; I can only tell that if you want to get ip1 from python, you could use the standard lib subprocess, which is usually used to execute os commands. (See subprocess.Popen)","Q_Score":0,"Tags":"python,linux,networking,server,ip","A_Id":50166912,"CreationDate":"2018-05-04T02:07:00.000","Title":"Host command and ifconfig giving different ips","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"Does anyone know how to check if a subdomain exists on a website?\nI am doing a sign up form and everyone gets their own subdomain, I have some javascript written on the front end but I need to find a way to check on the backend.","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":237,"Q_Id":50185118,"Users Score":1,"Answer":"Do a curl or http request on the subdomain which you want to verify; if you get 404 that means it doesn't exist, if you get 200 it definitely exists","Q_Score":0,"Tags":"javascript,python,subdomain","A_Id":50185326,"CreationDate":"2018-05-05T02:23:00.000","Title":"how to use python to check if subdomain exists?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"Does anyone know how to check if a subdomain exists on a website?\nI am doing a sign up form and everyone gets their own subdomain, I have some javascript written on the front end but I need to find a way to check on the backend.","AnswerCount":2,"Available 
Count":2,"Score":0.0,"is_accepted":false,"ViewCount":237,"Q_Id":50185118,"Users Score":0,"Answer":"Put the assigned subdomain in a database table within unique indexed column. It will be easier to check from python (sqlalchemy, pymysql ect...) if subdomain has already been used + will automatically prevent duplicates to be assigned\/inserted.","Q_Score":0,"Tags":"javascript,python,subdomain","A_Id":50185317,"CreationDate":"2018-05-05T02:23:00.000","Title":"how to use python to check if subdomain exists?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"If you have never installed anaconda, it seems to be rather simple. In the installation process of Anaconda, you choose to install visual studio code and that is it. \nBut I would like some help in my situation:\nMy objective: I want to use visual studio code with anaconda\n\nI have a mac with anaconda 1.5.1 installed. \nI installed visual studio code.\nI updated anaconda (from the terminal) now it is 1.6.9\n\nFrom there, I don't know how to proceed. \nany help please","AnswerCount":2,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":2644,"Q_Id":50190482,"Users Score":2,"Answer":"You need to select the correct python interpreter. When you are in a .py file, there's a blue bar in the bottom of the window (if you have the dark theme), there you can select the anaconda python interpreter.\nElse you can open the command window with ctrl+p or command+p and type '>' for running vscode commands and search '> Python Interpreter'. \nIf you don't see anaconda there google how to add a new python interpreter to vscode","Q_Score":0,"Tags":"python,anaconda","A_Id":50190821,"CreationDate":"2018-05-05T14:24:00.000","Title":"How to use visual studio code >after< installing anaconda","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"We have a Java application in our project and what we want is to call some Python script and return results from it. What is the best way to do this?\nWe want to isolate Python execution to avoid affecting Java application at all. Probably, Dockerizing Python is the best solution. I don't know any other way.\nThen, a question is how to call it from Java.\nAs far as I understand there are several ways:\n\nstart some web-server inside Docker which accepts REST calls from Java App and runs Python scripts and returns results to Java via REST too.\nhandle request and response via Docker CLI somehow.\nuse Java Docker API to send REST request to Docker which then converted by Docker to Stdin\/Stdout of Python script inside Docker.\n\nWhat is the most effective and correct way to connect Java App with Python, running inside Docker?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1012,"Q_Id":50191768,"Users Score":2,"Answer":"You don\u2019t need docker for this. There are a couple of options, you should choose depending on what your Java application is doing. 
\n\nIf the Java application is a client - based on swing, weblaunch, or providing UI directly - you will want to wrap the python functionality in REST\/HTTP calls. \nIf the Java application is a server\/webapp - executing within Tomcat, JBoss or other application container - you should simply wrap the python script inside an exec call. See the Java Runtime and ProcessBuilder API for this purpose.","Q_Score":0,"Tags":"java,python,docker","A_Id":50197429,"CreationDate":"2018-05-05T16:40:00.000","Title":"Calling Python scripts from Java. Should I use Docker?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":1},{"Question":"I'm playing around with ethereum and python and I'm running into some weird behavior I can't make sense of. I'm having trouble understanding how return values work when calling a contract function with the python w3 client. Here's a minimal example which is confusing me in several different ways:\nContract:\n\npragma solidity ^0.4.0;\n\ncontract test {\n function test(){\n\n }\n\n function return_true() public returns (bool) {\n return true;\n }\n\n function return_address() public returns (address) {\n return 0x111111111111111111111111111111111111111;\n }\n}\n\nPython unittest code\n\nfrom web3 import Web3, EthereumTesterProvider\nfrom solc import compile_source\nfrom web3.contract import ConciseContract\nimport unittest\nimport os\n\n\ndef get_contract_source(file_name):\n with open(file_name) as f:\n return f.read()\n\n\nclass TestContract(unittest.TestCase):\n CONTRACT_FILE_PATH = \"test.sol\"\n DEFAULT_PROPOSAL_ADDRESS = \"0x1111111111111111111111111111111111111111\"\n\n def setUp(self):\n # copied from https:\/\/github.com\/ethereum\/web3.py\/tree\/1802e0f6c7871d921e6c5f6e43db6bf2ef06d8d1 with MIT licence\n # has slight modifications to work with this unittest\n contract_source_code = get_contract_source(self.CONTRACT_FILE_PATH)\n compiled_sol = compile_source(contract_source_code) # Compiled source code\n contract_interface = compiled_sol[':test']\n # web3.py instance\n self.w3 = Web3(EthereumTesterProvider())\n # Instantiate and deploy contract\n self.contract = self.w3.eth.contract(abi=contract_interface['abi'], bytecode=contract_interface['bin'])\n # Get transaction hash from deployed contract\n tx_hash = self.contract.constructor().transact({'from': self.w3.eth.accounts[0]})\n # Get tx receipt to get contract address\n tx_receipt = self.w3.eth.getTransactionReceipt(tx_hash)\n self.contract_address = tx_receipt['contractAddress']\n # Contract instance in concise mode\n abi = contract_interface['abi']\n self.contract_instance = self.w3.eth.contract(address=self.contract_address, abi=abi,\n ContractFactoryClass=ConciseContract)\n\n def test_return_true_with_gas(self):\n # Fails with HexBytes('0xd302f7841b5d7c1b6dcff6fca0cd039666dbd0cba6e8827e72edb4d06bbab38f') != True\n self.assertEqual(True, self.contract_instance.return_true(transact={\"from\": self.w3.eth.accounts[0]}))\n\n def test_return_true_no_gas(self):\n # passes\n self.assertEqual(True, self.contract_instance.return_true())\n\n def test_return_address(self):\n # fails with AssertionError: '0x1111111111111111111111111111111111111111' != '0x0111111111111111111111111111111111111111'\n self.assertEqual(self.DEFAULT_PROPOSAL_ADDRESS, 
self.contract_instance.return_address())\n\nI have three methods performing tests on the functions in the contract. In one of them, a non-True value is returned and instead HexBytes are returned. In another, the contract function returns an address constant but python sees a different value from what's expected. In yet another case I call the return_true contract function without gas and the True constant is seen by python.\n\nWhy does calling return_true with transact={\"from\": self.w3.eth.accounts[0]} cause the return value of the function to be HexBytes(...)?\nWhy does the address returned by return_address differ from what I expect?\n\nI think I have some sort of fundamental misunderstanding of how gas affects function calls.","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":792,"Q_Id":50194364,"Users Score":2,"Answer":"The returned value is the transaction hash on the blockchain. When transacting (i.e., when using \"transact\" rather than \"call\") the blockchain gets modified, and the library you are using returns the transaction hash. During that process you must have paid ether in order to be able to modify the blockchain. However, operating in read-only mode costs no ether at all, so there is no need to specify gas.\nDiscounting the \"0x\" at the beginning, ethereum addresses have a length of 40, but in your test you are using a 39-character-long address, so there is a \"1\" missing there. Meaning, the tests are correct, you have an error in your input.\n\nOfftopic, both return_true and return_address should be marked as view in Solidity, since they are not actually modifying the state. I'm pretty sure you get a warning in remix. Once you do that, there is no need to access both methods using \"transact\" and paying ether, and you can do it using \"call\" for free.\nEDIT\nForgot to mention: in case you need to access the transaction hash after using transact you can do so by calling the .hex() method on the returned HexBytes object. That'll give you the transaction hash as a string, which is usually way more useful than as a HexBytes.\nI hope it helps!","Q_Score":0,"Tags":"python,ethereum,solidity,web3","A_Id":58494505,"CreationDate":"2018-05-05T21:56:00.000","Title":"Unintuitive solidity contract return values in ethereum python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"Google Colab is awesome to work with, but I wish I could run Colab Notebooks completely locally and offline, just like Jupyter notebooks served locally?\nHow do I do this? Is there a Colab package which I can install?\n\nEDIT: Some previous answers to the question seem to give methods to access Colab hosted by Google. But that's not what I'm looking for.\nMy question is how do I pip install colab so I can run it locally like jupyter after pip install jupyter. 
Colab package doesn't seem to exist, so if I want it, what do I do to install it from the source?","AnswerCount":3,"Available Count":1,"Score":-1.0,"is_accepted":false,"ViewCount":41922,"Q_Id":50194637,"Users Score":-4,"Answer":"Google Colab is a cloud computer; it only runs through the Internet. You can design your Python script and run it through Colab; running it will use Google Colab hardware, and Google will allocate CPU, RAM, GPU etc. for your Python script. Your local computer just submits the Python code to Google Colab to run, then Google Colab returns the result to your local computer. Cloud computation is stronger than local computation if your local computer hardware is limited. This question link, asked by me, may inspire you: https:\/\/stackoverflow.com\/questions\/48879495\/how-to-apply-googlecolab-stronger-cpu-and-more-ram\/48922199#48922199","Q_Score":23,"Tags":"python,pip,google-colaboratory,productivity-power-tools","A_Id":50194820,"CreationDate":"2018-05-05T22:40:00.000","Title":"Colaboratory: How to install and use on local machine?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have been self-learning machine learning lately, and I am now trying to solve a binary classification problem (i.e: one label which can either be true or false). I was representing this as a single column which can be 1 or 0 (true or false).\nNonetheless, I was researching and read about how categorical variables can reduce the effectiveness of an algorithm, and how one should one-hot encode them or translate them into dummy variables, thus ending with 2 labels (variable_true, variable_false).\nWhich is the correct way to go about this? 
Should one predict a single variable with two possible values or 2 simultaneous variables with a fixed unique value?\nAs an example, let's say we want to predict whether a person is a male or female:\nShould we have a single label Gender and predict 1 or 0 for that variable, or Gender_Male and Gender_Female?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":381,"Q_Id":50198094,"Users Score":0,"Answer":"It's basically the same. When talking about binary classification, you can think of a final layer for each model that adapts the output to the other model,\ne.g. if the model outputs 0 or 1 then the final layer will translate it to a vector like [1,0] or [0,1], and vice versa, by a threshold criterion, usually >= 0.5.\nA nice byproduct of 2 nodes in the final layer is the confidence level of the model in its predictions: [0.80, 0.20] and [0.55, 0.45] will both yield a [1,0] classification, but the first prediction has more confidence.\nThis can also be extrapolated from a 1-node output by the distance of the output from the fringes 1 and 0, so 0.1 will be considered a 0 prediction with more confidence than 0.3.","Q_Score":0,"Tags":"python,machine-learning","A_Id":50198173,"CreationDate":"2018-05-06T09:13:00.000","Title":"Predicting binary classification","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I haven't found any examples of how to add retry logic on some rpc call. Does gRPC have the ability to add a maximum retry for call? \nIf so, is it a built-in function?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1347,"Q_Id":50204638,"Users Score":1,"Answer":"Retries are not a feature of gRPC Python at this time.","Q_Score":4,"Tags":"python,grpc","A_Id":50340007,"CreationDate":"2018-05-06T21:22:00.000","Title":"Does gRPC have the ability to add a maximum retry for call?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I need to create a classifier to identify some aphids.\nMy project has two parts, one with computer vision (OpenCV), which I have already concluded. The second part is with Machine Learning using TensorFlow. 
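To illustrate the binary-classification answer above: a sketch of the two equivalent output layouts in Keras (the layer sizes, input shape, and the tensorflow.keras import are illustrative assumptions):

    from tensorflow.keras import Sequential
    from tensorflow.keras.layers import Dense

    # Option A: one sigmoid node; labels are 0/1
    model_a = Sequential([Dense(16, activation="relu", input_shape=(10,)),
                          Dense(1, activation="sigmoid")])
    model_a.compile(optimizer="adam", loss="binary_crossentropy")

    # Option B: two softmax nodes; labels are one-hot [1,0] / [0,1]
    model_b = Sequential([Dense(16, activation="relu", input_shape=(10,)),
                          Dense(2, activation="softmax")])
    model_b.compile(optimizer="adam", loss="categorical_crossentropy")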
But I have no idea how to do it.\nI have the data below, which were extracted using OpenCV; they are HuMoments (I believe that is the path I must follow). Each line is the HuMoments of an aphid (insect), and I have 500 more data lines that I passed to one CSV file.\nHow can I make a classifier from a CSV file using TensorFlow?\n\nHuMoments (in CSV file):\n 0.27356047,0.04652453,0.00084231,7.79486673,-1.4484489,-1.4727380,-1.3752532\n 0.27455502,0.04913969,3.91102408,1.35705980,3.08570234,2.71530819,-5.0277362\n 0.20708829,0.01563241,3.20141907,9.45211423,1.53559373,1.08038279,-5.8776765\n 0.23454372,0.02820523,5.91665789,6.96682467,1.02919203,7.58756583,-9.7028848","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":102,"Q_Id":50206222,"Users Score":0,"Answer":"You can start with this tutorial, and try it first without changing anything; I strongly suggest this unless you are already familiar with Tensorflow, so that you gain some familiarity with it.\nNow you can modify the input layer of this network to match the dimensions of the HuMoments. Next, you can give a numeric label to each type of aphid that you want to recognize, and adjust the size of the output layer to match them.\nYou can now read the CSV file using python, and remove any text like \"HuMoments\". If your file has names of aphids, remove them and replace them with numerical class labels. Replace the training data of the code in the above link with these data.\nNow you can train the network according to the description under the title \"Train the Model\".\nOne more note. Unless it is essential to use Tensorflow to match your project requirements, I suggest using Keras. Keras is a higher level library that is much easier to learn than Tensorflow, and you have more sample code online.","Q_Score":0,"Tags":"python,csv,tensorflow","A_Id":50206803,"CreationDate":"2018-05-07T02:06:00.000","Title":"Tensorflow How can I make a classifier from a CSV file using TensorFlow?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I'm working with Python and Selenium to do some automation in the office, and I need to fill in an \"upload file\" dialog box (a windows \"open\" dialog box), which was invoked from a site using a headless chrome browser. Does anyone have any idea on how this could be done?\nIf I wasn't using a headless browser, Pywinauto could be used with a line similar to the following, for example, but this doesn't appear to be an option in headless chrome:\napp.pane.open.ComboBox.Edit.type_keys(uploadfilename + \"{ENTER}\")\nThank you in advance!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":533,"Q_Id":50223735,"Users Score":0,"Answer":"This turned out to not be possible. 
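For the HuMoments CSV question above, a rough sketch along the lines of the answer, using Keras as it suggests; it assumes (as an assumption, not shown in the question) that a numeric class label 0..K-1 has been appended as an eighth column, and the file path is illustrative:

    import numpy as np
    from tensorflow.keras import Sequential
    from tensorflow.keras.layers import Dense

    data = np.loadtxt("humoments.csv", delimiter=",")   # illustrative path
    X, y = data[:, :7], data[:, 7].astype(int)          # 7 HuMoments + label

    model = Sequential([Dense(32, activation="relu", input_shape=(7,)),
                        Dense(len(np.unique(y)), activation="softmax")])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(X, y, epochs=20, batch_size=16)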
I ended up running the code on a VM and setting a registry key to allow automation to be run while the VM was minimized, disconnected, or otherwise not being interacted with by users.","Q_Score":1,"Tags":"python,google-chrome,selenium,automation,headless","A_Id":50398443,"CreationDate":"2018-05-07T23:22:00.000","Title":"How can you fill in an open dialog box in headless chrome in Python and Selenium?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I know how to run a python script as a background process, but is there any way to compile a python script into an exe file using pyinstaller or other tools so it could have no console or window?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":427,"Q_Id":50231913,"Users Score":0,"Answer":"If you want to run it in the background without \"console\" and \"window\" you have to run it as a service.","Q_Score":0,"Tags":"python,compilation","A_Id":50232373,"CreationDate":"2018-05-08T10:55:00.000","Title":"How to \"compile\" a python script to an \"exe\" file in a way it would be run as background process?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I want to let a class run on my server, which contains a connected bluetooth socket and continuously checks for incoming data, which can then be interpreted. In principle the class structure would look like this:\nInterpreter:\n-> connect (initializes the class and starts the loop)\n-> loop (runs continuously in the background)\n-> disconnect (stops the loop)\nThis class should be initiated at some point and then run continuously in the background; from time to time an http request would perhaps need data from the attributes of the class, but it should run on its own.\nI don't know how to accomplish this and don't want to get a description on how to do it, but would like to know where I should start, like what this kind of process is called.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":359,"Q_Id":50233238,"Users Score":0,"Answer":"Django on its own doesn't support any background processes - everything is request-response cycle based.\nI don't know if what you're trying to do even has a dedicated name. But most certainly - it's possible. But don't tie yourself to Django with this solution.\nThe way I would accomplish this is I'd run a separate Python process that would be responsible for keeping the connection to the device and upon request return the required data in some way. \nThe only difficulty you'd have is determining how to communicate with that process from Django. 
Since, like I said, django is request based, that secondary app could expose some data to your Django app - it could do any of the following:\n\nExpose a dead-simple HTTP Rest API\nExpose a UNIX socket that would just return data immediately after connection\nContinuously dump data to some file\/database\/mmap\/queue that Django could read","Q_Score":1,"Tags":"python,django,asynchronous,webserver,channels","A_Id":50233425,"CreationDate":"2018-05-08T12:08:00.000","Title":"(Django) Running asynchronous server task continously in the background","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"When searching my db all special characters work aside from the \"+\" - it thinks it's a space. Looking at the backend, which is python, there are no issues with it receiving special chars, so I believe the problem is the frontend, which is Javascript.\nWhat I need to do is replace \"+\" with \"%2b\". Is there a way for me to create this so it has this value going forward?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":105,"Q_Id":50240427,"Users Score":1,"Answer":"You can use decodeURIComponent('%2b'), or encodeURIComponent('+');\nif you decode the response from the server, you get the + sign.\nIf you want to replace all occurrences, just place the whole string inside the method and it decodes\/encodes the whole string.","Q_Score":0,"Tags":"javascript,python,replace,syntax","A_Id":50240590,"CreationDate":"2018-05-08T18:49:00.000","Title":"Replace character with a absolute value","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"This is my first time coding a \"project\" (something more than solving exercises in single files). A number of my .py files have variables imported from a specific path. I also have a main \"Run\" file where I import things I've written in other files and execute the project as a whole. \nRecently I've started working on this project on several different machines (home, work, laptop etc) and have just started learning how to use GitHub.\nMy question is, how do I deal with the fact that every time I open up my code on a different machine I need to go around changing all the paths to fit the new machine, and then change them back again when I'm home? I started writing a Run file for each location I work at so that my sys.path commands are ok with that machine, but it doesn't solve the problem of my other modules importing variables from specific paths that vary from machine to machine. Is there a way round this or is the problem in how I'm setting up the project itself?\nIn an ideal world it would all just work without me having to change something before I run, depending on the machine I'm working on, but I don't know if that's possible.\nMy current thoughts are whether there is some command I'm not aware of that can set variables inside a .py file from my main Run.py file - that way I can just have a run file for each machine.\nAny suggestions are gladly taken! 
Whatever it is, it must be better than commenting back in the correct file path each time I open it on a different machine!","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":201,"Q_Id":50242220,"Users Score":0,"Answer":"You should always use relative paths, not static ones, which I assume you have now.\nAssuming you're in an index file and you need to access the images folder, you probably have something like \/users\/username\/project\/images\/image.png\nInstead you want something like ..\/images\/image.png; this tells your index file to go back one folder, say to the root of the project, then proceed into the images folder etc.\nRelative paths mean you create a path from where your file exists, and not an entire path from the ground up.","Q_Score":0,"Tags":"python,github,import,path,project","A_Id":50242289,"CreationDate":"2018-05-08T21:02:00.000","Title":"How to deal with working on one project on different machines (paths)?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"We are having n number of documents. Upon submission of a new document by a user, our goal is to inform them about possible duplication of an existing document (just like stackoverflow suggests questions may already have an answer).\nIn our system, a new document is uploaded every minute and mostly about the same topic (where there are more chances of duplication).\nOur current implementation includes a gensim doc2vec model trained on documents (tagged with unique document ids). We infer a vector for the new document and find the most_similar docs (ids) with it. The reason behind choosing the doc2vec model is that we wanted to take advantage of semantics to improve results. As far as we know, it does not support online training, so we might have to schedule a cron or something that periodically updates the model. But scheduling a cron will be disadvantageous as documents come in bursts. Users may upload duplicates while the model is not yet trained for new data. Also, given the huge amount of data, training time will be higher.\nSo I would like to know how such cases are handled in big companies. Are there any better alternatives? or a better algorithm for such a problem?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":418,"Q_Id":50264369,"Users Score":2,"Answer":"You don't have to take the old model down to start training a new model, so despite any training lags, or new-document bursts, you'll always have a live model doing the best it can.\nDepending on how much the document space changes over time, you might find retraining to have a negligible benefit. (One good model, built on a large historical record, might remain fine for inferring new vectors indefinitely.)\nNote that tuning inference to use more steps (especially for short documents), or a lower starting alpha (more like the training default of 0.025) may give better results. \nIf word-vectors are available, there is also the \"Word Mover's Distance\" (WMD) calculation of document similarity, which might be even better at identifying close duplicates. Note, though, it can be quite expensive to calculate \u2013 you might want to do it only against a subset of likely candidates, or have to add many parallel processors, to do it in bulk. 
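A minimal sketch of the infer-and-compare flow this Q&A describes, with gensim; the model path, tags and the 0.9 threshold are illustrative (model.dv is the gensim 4 name, older releases call it docvecs, and infer_vector's keyword was steps rather than epochs in gensim 3):

    from gensim.models.doc2vec import Doc2Vec

    model = Doc2Vec.load("docs.d2v")              # previously trained model
    new_doc = "text of the newly uploaded document".split()

    vec = model.infer_vector(new_doc, epochs=50)  # more inference steps can help
    for doc_id, sim in model.dv.most_similar([vec], topn=5):
        if sim > 0.9:                             # tunable duplicate threshold
            print("possible duplicate:", doc_id, sim)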
There's another newer distance metric called 'soft cosine similarity' (available in recent gensim) that's somewhere between simple vector-to-vector cosine-similarity and full WMD in its complexity, which may be worth trying. \nTo the extent the vocabulary hasn't expanded, you can load an old Doc2Vec model, and continue to train() it \u2013 and starting from an already working model may help you achieve similar results with fewer passes. But note: it currently doesn't support learning any new words, and the safest practice is to re-train with a mix of all known examples interleaved. (If you only train on incremental new examples, the model may lose a balanced understanding of the older documents that aren't re-presented.)\n(If your chief concern is documents that duplicate exact runs-of-words, rather than just similar fuzzy topics, you might look at mixing-in other techniques, such as breaking a document into a bag-of-character-ngrams, or 'shingleprinting' as is common in plagiarism-detection applications.)","Q_Score":3,"Tags":"python,machine-learning,nlp,gensim,doc2vec","A_Id":50265639,"CreationDate":"2018-05-10T01:53:00.000","Title":"Document similarity in production environment","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I'm trying to run Apache Airflow's webserver from a virtualenv on a Redhat machine, with some configuration options from a Gunicorn config file. Gunicorn and Airflow are both installed in the virtualenv. The command airflow webserver starts Airflow's webserver and the Gunicorn server. The config file has options to make sure Gunicorn uses\/accepts TLSv1.2 only, as well as a list of ciphers to use. \nThe Gunicorn config file is gunicorn.py. This file is referenced through an environment variable GUNICORN_CMD_ARGS=\"--config=\/path\/to\/gunicorn.py ...\" in .bashrc. This variable also sets a couple of other variables in addition to --config. However, when I run the airflow webserver command, the options in GUNICORN_CMD_ARGS are never applied. \nSeeing as how Gunicorn is not called from the command line, but instead by Airflow, I'm assuming this is why the GUNICORN_CMD_ARGS environment variable is not read, but I'm not sure and I'm new to both technologies...\nTL;DR:\nIs there another way to set up Gunicorn to automatically reference a config file, without the GUNICORN_CMD_ARGS environment variable? 
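A sketch of a gunicorn.py along the lines the question describes, using Gunicorn's file-based settings (the paths and cipher list are illustrative):

    import ssl

    certfile = "/path/to/cert.pem"      # illustrative paths
    keyfile = "/path/to/key.pem"
    ssl_version = ssl.PROTOCOL_TLSv1_2  # accept TLSv1.2 only
    ciphers = "ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256"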
\nHere's what I'm using:\n\ngunicorn 19.8.1\napache-airflow 1.9.0\npython 2.7.5","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":2243,"Q_Id":50264760,"Users Score":1,"Answer":"When Gunicorn is called by Airflow, it uses ~\\airflow\\www\\gunicorn_config.py as its config file.","Q_Score":1,"Tags":"python,virtualenv,config,gunicorn,airflow","A_Id":50646740,"CreationDate":"2018-05-10T02:52:00.000","Title":"Apache Airflow: Gunicorn Configuration File Not Being Read?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am from R background where we can use Plumber kind tool which provide visualization\/graph as Image via end points so we can integrate in our Java application.\nNow I want to integrate my Python\/Juypter visualization graph with my Java application but not sure how to host it and make it as endpoint. Right now I using AWS sagemaker to host Juypter notebook","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":75,"Q_Id":50271174,"Users Score":1,"Answer":"Amazon SageMaker is a set of different services for data scientists. You are using the notebook service that is used for developing ML models in an interactive way. The hosting service in SageMaker is creating an endpoint based on a trained model. You can call this endpoint with invoke-endpoint API call for real time inference. \nIt seems that you are looking for a different type of hosting that is more suitable for serving HTML media rich pages, and doesn\u2019t fit into the hosting model of SageMaker. A combination of EC2 instances, with pre-built AMI or installation scripts, Congnito for authentication, S3 and EBS for object and block storage, and similar building blocks should give you a scalable and cost effective solution.","Q_Score":0,"Tags":"python,amazon-sagemaker","A_Id":50300373,"CreationDate":"2018-05-10T10:48:00.000","Title":"How to make a Python Visualization as service | Integrate with website | specially sagemaker","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have a virtualenv environment running python 3.5\nToday, when I booted up my MacBook, I found myself unable to install python packages for my Django project. I get the following error:\n\nCould not fetch URL : There was a problem confirming the ssl certificate: [SSL: TLSV1_ALERT_PROTOCOL_VERSION] tlsv1 alert protocol version (_ssl.c:646) - skipping\n\nI gather that TLS 1.0 has been discontinued, but from what I understand, newer versions of Python should be using TLS1.2, correct? Even outside of my environment, running pip3 trips the same error. I've updated to the latest version of Sierra and have updated Xcode as well. 
Does anyone know how to resolve this?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":105,"Q_Id":50284838,"Users Score":1,"Answer":"Here is the fix:\ncurl https:\/\/bootstrap.pypa.io\/get-pip.py | python\nExecute from within the appropriate virtual environment.","Q_Score":0,"Tags":"python,django,macos","A_Id":50379383,"CreationDate":"2018-05-11T04:04:00.000","Title":"Python - Enable TLS1.2 on OSX","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":1,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":1,"REVIEW":0},{"Question":"Basically, it is a multi-threaded crawler program, which uses requests mainly. After running the program for a few hours, I keep getting the error \"Too many open files\".\nBy running: lsof -p pid, I saw a huge number of entries like below:\npython 75452 xxx 396u a_inode 0,11 0 8121 [eventpoll]\nI cannot figure out what it is and how to trace back to the problem.\nPreviously, I tried to have it running in Windows and never saw this error.\nAny idea how to continue investigating this issue? Thanks.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":528,"Q_Id":50300407,"Users Score":0,"Answer":"I have figured out that it is caused by Gevent. After replacing gevent with multi-threading, everything is just OK. \nHowever, I still don't know what's wrong with gevent, which keeps opening new files (eventpoll).","Q_Score":0,"Tags":"python,ubuntu,memory-leaks,python-requests","A_Id":50348191,"CreationDate":"2018-05-11T21:19:00.000","Title":"python Ubuntu: too many open files [eventpoll]","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":1,"REVIEW":0},{"Question":"I've recently started freelance python programming, and was hired to write a script that scraped certain info online (nothing nefarious, just checking how often keywords appear in search results).\nI wrote this script with Selenium, and now that it's done, I'm not quite sure how to prepare it to run on the client's machine. \nSelenium requires a path to your chromedriver file. Am I just going to have to compile the py file as an exe and accept the path to his chromedriver as an argument, then show him how to download chromedriver and how to write the path?\nEDIT: Just actually had a thought while typing this out. Would it work if I sent the client a folder including a chromedriver.exe inside of said folder, so the path was always consistent?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":582,"Q_Id":50301263,"Users Score":0,"Answer":"Option 1) Deliver a Docker image, if the customer doesn't need to watch the browser while it runs and can set up a Docker environment.
The Docker image should include the following items:\n\nPython\nDependencies for running your script, like selenium\nHeadless chrome browser and a compatible chrome webdriver binary\nYour script; put it in GitHub and\nfetch it when the docker container starts, so that the customer can always get\nyour latest code\n\nThe benefits of this approach: \n\nYou only need to focus on the scripts, like bug fixes and improvements, after delivery \nThe customer only needs to execute the same docker command\n\nOption 2) Deliver a shell script to do most of the steps automatically. It should accomplish the following items: \n\nInstall Python (or leave this for the customer to complete)\nInstall the Selenium library and others needed\nInstall the latest chrome webdriver binary (which is backward compatible)\nFetch your script from a code repo like GitHub, or simply deliver it as a packaged folder\nRun your script.\n\nOption 3) Deliver your script and a user guide; the customer has to do many steps themselves. You can supply a config file along with your script for the customer to specify the chrome driver binary path after they download it. Your script reads the path from this file, which is better than entering it on the command line every time.","Q_Score":0,"Tags":"python,selenium","A_Id":50301550,"CreationDate":"2018-05-11T22:56:00.000","Title":"How to prepare Python Selenium project to be used on client's machine?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"The default version of python installed on my mac is python 2. I also have python 3 installed but can't install python 2.\nI'd like to configure Hydrogen on Atom to run my script using python 3 instead.\nDoes anybody know how to do this?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":4266,"Q_Id":50304519,"Users Score":0,"Answer":"I used jupyter kernelspec list and I found 2 kernels available, one for python2 and another for python3\nSo I pasted the python3 kernel folder in the same directory where the python2 kernel is installed and removed the python2 kernel using 'rm -rf python2'","Q_Score":3,"Tags":"python,python-3.x,jupyter,atom-editor,hydrogen","A_Id":56453522,"CreationDate":"2018-05-12T09:01:00.000","Title":"Using Hydrogen with Python 3","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am using Ubuntu 16.04. Where is the python 3 installation directory?\nRunning \"whereis python3\" in terminal gives me:\n\npython3: \/usr\/bin\/python3.5m-config \/usr\/bin\/python3 \n \/usr\/bin\/python3.5m \/usr\/bin\/python3.5-config \/usr\/bin\/python3.5 \n \/usr\/lib\/python3 \/usr\/lib\/python3.5 \/etc\/python3 \/etc\/python3.5\n \/usr\/local\/lib\/python3.5 \/usr\/include\/python3.5m \n \/usr\/include\/python3.5 \/usr\/share\/python3 \n \/usr\/share\/man\/man1\/python3.1.gz\n\nAlso where is the interpreter, i.e. the python 3 executable?
And how would I add this path to Pycharm ?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":10136,"Q_Id":50304901,"Users Score":5,"Answer":"You can try this:\nwhich python3","Q_Score":3,"Tags":"python,python-3.x","A_Id":50304923,"CreationDate":"2018-05-12T09:48:00.000","Title":"Python 3 install location","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have trained a model successfully and now I want to continue training it with new data. Given data with the same number of classes, it works fine. But having more data than initially, it will give me the error:\n\nValueError: Shapes (?, 14) and (?, 21) are not compatible\n\nHow can I dynamically increase the number of classes in my trained model, or how can I make the model accept a smaller number of classes? Do I need to save the classes in a pickle file?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":168,"Q_Id":50305294,"Users Score":0,"Answer":"Best thing to do is to train your network from scratch with the output layers adjusted to the new output class size.\nIf retraining is an issue, then keep the trained network as it is and only drop the last layer. Add a new layer with the proper output size, initialized to random weights, and then fine-tune (train) the entire network.","Q_Score":1,"Tags":"python-3.x,tensorflow,machine-learning,rnn","A_Id":50305713,"CreationDate":"2018-05-12T10:36:00.000","Title":"How to continue to train a model with new classes and data?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":1,"REVIEW":0},{"Question":"I want to set up access from a remote Ubuntu server to my local machine, because I have multiple files on this machine that I want to transfer periodically (every minute) to the server. How can I do that using Python?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":312,"Q_Id":50315821,"Users Score":0,"Answer":"You can easily transfer files between local and remote or between two remote servers.
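Since the question asks for Python specifically, here is a minimal sketch using the third-party paramiko library as one option (host, user, key path and file paths are placeholders); the shell-based scp route follows below:

```python
import paramiko

# Connect to the remote Ubuntu server.
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect("server.example.com", username="user",
            key_filename="/home/user/.ssh/id_rsa")

# Push a local file to the server over SFTP; schedule this script
# (e.g. via cron) to run every minute.
sftp = ssh.open_sftp()
sftp.put("/local/path/data.csv", "/remote/path/data.csv")
sftp.close()
ssh.close()
```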
If both servers are Linux-based and you need to transfer multiple files and folders using a single command, follow the steps below:\n\nA user on one remote server should have access to the corresponding directory on the other remote server that you want to transfer the file to.\n\nYou might need to create a policy or group, assign the list of servers\nyou want to access to that group, and assign the user to that group so the two remote\nservers can talk to each other.\n\nRun the following scp command:\n\n\n scp [options] username1@source_host:directory1\/filename1 \n username2@destination_host:directory2\/filename2","Q_Score":0,"Tags":"python","A_Id":68788882,"CreationDate":"2018-05-13T11:51:00.000","Title":"transfer files between local machine and remote server","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"My goal is to make an easy neural network fit by providing 2 vertices of a certain Graph and 1 if there's a link or 0 if there's none.\nI fit my model, it gets loss of about 0.40, accuracy of about 83% during fitting. I then evaluate the model by providing a batch of all positive samples and several batches of negative ones (utilising random.sample). My model gets loss of ~0.35 and 1.0 accuracy for positive samples and ~0.46 loss 0.68 accuracy for negative ones.\nMy understanding of neural networks is extremely limited, but to my understanding the above means it theoretically always is right when it outputs 0 when there's no link, but can sometimes output 1 even if there is none.\nNow for my actual problem: I try to \"reconstruct\" the original graph with my neural network via model.predict. The problem is I don't understand what the predict output means. At first I assumed values above 0.5 mean 1, else 0. But if that's the case the model doesn't even come close to rebuilding the original.\nI get that it won't be perfect, but it simply returns values above 0.5 for random link candidates.\nCan someone explain to me how exactly model.predict works and how to properly use it to rebuild my graph?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":343,"Q_Id":50319873,"Users Score":0,"Answer":"The model that you trained is not directly optimized w.r.t. the graph reconstruction. Without loss of generality, for an N-node graph, you need to predict N choose 2 links. And it may be reasonable to assume that the true values of most of these links are 0.\nWhen looking into your model accuracy on the 0-class and 1-class, it is clear that your model is prone to predicting the 1-class, assuming your training data is balanced. Therefore, your reconstructed graph contains many false alarm links. This is the exact reason why the performance of your reconstructed graph is poor.\nIf it is possible to retrain the model, I suggest you do it and use more negative samples.\nIf not, you need to consider applying some post-processing.
For example, instead of finding a threshold to decide which two nodes have a link, use the raw predicted link probabilities to form a node-to-node linkage matrix, and apply something like the minimum spanning tree to further decide what are appropriate links.","Q_Score":2,"Tags":"python,tensorflow,machine-learning,keras,predict","A_Id":50326237,"CreationDate":"2018-05-13T19:28:00.000","Title":"Need help using Keras' model.predict","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I build two graphs in my code, graph1 and graph2. \nThere is a tensor, named embedding, in graph1. I tied to use it in graph2 by using get_variable, while the error is tensor must be from the same graph as Tensor. I found that this error occurs because they are in different graphs.\nSo how can I use a tensor in graph1 to graph2?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":39,"Q_Id":50323744,"Users Score":0,"Answer":"expanding on @jdehesa's comment,\nembedding could be trained initially, saved from graph1 and restored to graph2 using tensorflows saver\/restore tools. for this to work you should assign embedding to a name\/variable scope in graph1 and reuse the scope in graph2","Q_Score":0,"Tags":"python,tensorflow,graph","A_Id":50329741,"CreationDate":"2018-05-14T05:48:00.000","Title":"How to used a tensor in different graphs?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":1,"REVIEW":0},{"Question":"I'm sorry if the title is a little ambiguous. Let me explain what I mean by that : \nI have a python script that does a few things : creates a row in a MySQL table, inserts a json document to a MongoDB, Updates stuff in a local file, and some other stuff, mostly related to databases. Thing is, I want the whole operation to be atomic. Means - If anything during the process I mentioned failed, I want to rollback everything I did. I thought of implementing a rollback function for every 'create' function I have. But I'd love to hear your opinion for how to make some sort of a linked list of operations, in which if any of the nodes failed, I want to discard all the changes done in the process.\nHow would you design such a thing? Is there a library in Python for such things?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":46,"Q_Id":50336747,"Users Score":0,"Answer":"You should implement every action to be reversible and the reverse action to be executable even if the original action has failed. 
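A minimal sketch of that reversible-action pattern (all names here are illustrative, not a library API):

```python
def run_atomically(steps):
    """Run (action, rollback) pairs; on any failure, run every reversal."""
    registered = []
    try:
        for action, rollback in steps:
            # Register the reversal before running, so even a partially
            # failed action gets rolled back, per the advice above.
            registered.append(rollback)
            action()
    except Exception:
        for rollback in reversed(registered):
            rollback()
        raise

# Illustrative usage with stand-in actions:
run_atomically([
    (lambda: print("insert MySQL row"), lambda: print("delete MySQL row")),
    (lambda: print("insert Mongo doc"), lambda: print("remove Mongo doc")),
])
```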
Then if you have any failures, you execute every reversal.","Q_Score":2,"Tags":"python,database,design-patterns","A_Id":50347347,"CreationDate":"2018-05-14T18:25:00.000","Title":"Best practice for rollbacking a multi-purpose python script","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have looked at a few python GUI frameworks like PyQt, wxPython and Kivy, but have noticed there aren\u2019t many popular (used widely) python applications, from what I can find, that use them.\nBlender, which is pretty popular, doesn\u2019t seem to use them. How would one go about doing what they did\/what did they do and what are the potential benefits over using the previously mentioned frameworks?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":469,"Q_Id":50346411,"Users Score":1,"Answer":"I would say that python isn't a popular choice when it comes to making a GUI application, which is why you don't find many examples of using the GUI frameworks. tkinter, which is part of the python development is another option for GUI's.\nBlender isn't really a good example as it isn't a GUI framework, it is a 3D application that integrates python as a means for users to manipulate it's data. It was started over 25 years ago when the choice of cross platform frameworks was limited, so making their own was an easier choice to make. Python support was added to blender about 13 years ago. One of the factors in blender's choice was to make each platform look identical. That goes against most frameworks that aim to implement a native look and feel for each target platform.\nSo you make your own framework when the work of starting your own framework seems easier than adjusting an existing framework to your needs, or the existing frameworks all fail to meet your needs, one of those needs may be licensing with Qt and wxWidgets both available under (L)GPL, while Qt also sells non-GPL licensing.\nThe benefit to using an existing framework is the amount of work that is already done, you will find there is more than you first think in a GUI framework, especially when you start supporting multiple operating systems.","Q_Score":0,"Tags":"python,python-3.x,user-interface,blender","A_Id":50404266,"CreationDate":"2018-05-15T09:13:00.000","Title":"Why and how would you not use a python GUI framework and make one yourself like many applications including Blender do?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have Python version 3.5 which is located here C:\\Program Files(x86)\\Microsoft Visual Studio\\Shared\\Python35_64 If I install kivy and its components and add-ons with this command: python -m pip install kivy, then it does not install in the place that I need. 
I want to install kivy in this location C:\\Program Files(x86)\\ Microsoft Visual Studio\\Shared\\Python35_64\\Lib\\site-packages, how can I do this?\nI did not understand how to do this from the explanations on the official website.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":247,"Q_Id":50358110,"Users Score":1,"Answer":"So it turned out that I solved my problem myself again: I had both Python 3.5 and Python 3.6 installed on my PC, and Kivy was installed into Python 3.6 by default while my development environment was using Python 3.5. I switched it to 3.6 and it all worked.","Q_Score":1,"Tags":"python,python-3.x,kivy,python-3.5,site-packages","A_Id":50400643,"CreationDate":"2018-05-15T19:46:00.000","Title":"Installing Kivy to an alternate location","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I want to write an application which is portable.\nWith \"portable\" I mean that it can be used to access these storages:\n\namazon s3\ngoogle cloud storage\nEucalyptus Storage\n\nThe software should be developed using Python.\nI am unsure how to start, since I could not find a library which supports all three storages.","AnswerCount":3,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":225,"Q_Id":50364766,"Users Score":3,"Answer":"You can use boto3 for accessing any Amazon service.","Q_Score":7,"Tags":"python,amazon-s3,google-cloud-storage,portability","A_Id":50364799,"CreationDate":"2018-05-16T07:28:00.000","Title":"Portable application: s3 and Google cloud storage","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am new to mininet. I created a custom topology with 2 linear switches and 4 nodes. I need to write a python module accessing each node in that topology and do something, but I don't know how.\nAny idea please?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":155,"Q_Id":50373532,"Users Score":0,"Answer":"try the following:\ns1.cmd('ifconfig s1 192.168.1.0')\nh1.cmd('ifconfig h1 192.168.2.0')","Q_Score":0,"Tags":"python,mininet,pox","A_Id":50385916,"CreationDate":"2018-05-16T14:25:00.000","Title":"How to access created nodes in a mininet topology?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I've been researching like forever, but couldn't find an answer. I'm using OpenCV to detect faces, now I want to calculate the distance to the face. When I detect a face, I get a matofrect (which I can visualize with a rectangle). Pretty clear so far. But now: how do I get the width of the rectangle in the real world? There have to be some average values that represent the width of the human face.
If I have that value (in inch, mm or whatever), I can calculate the distance using real width, pixel width and focal length. Please, can anyone help me?\nNote: I'm comparing the \"simple\" rectangle solution against a Facemark based distance measuring solution, so no landmark based answers. I just need the damn average face \/ matofrectwidth :D\nThank you so much!","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":138,"Q_Id":50375489,"Users Score":2,"Answer":"OpenCV's face detection rectangle is slightly larger than the face itself, therefore an average face width alone may not be helpful. Instead, just take pictures of a face at different distances from the camera and record the distance from the camera along with the pixel width of the face at each distance. After plotting the two variables on a graph, use a trendline to come up with a predictive model.","Q_Score":1,"Tags":"python,opencv,distance,face-detection,measure","A_Id":50375996,"CreationDate":"2018-05-16T16:07:00.000","Title":"Real width of detected face","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have a ton of PDF files that are laid out in two columns. When I use PyPDF2 to extract the text, it reads the entire first column (which are like headers) and the entire second column. This makes splitting on the headers impossible. It's laid out in two columns:\n____ __________\n|Col1 Col2 |\n|Col1 Col2 |\n|Col1 Col2 |\n|Col1 Col2 |\n____ __________ \nI think I need to split the PDF in half along the edge of the column, then read each column left to right. It's 2.26 inches width on an 8x11 PDF. I can also get the coordinates using PyPDF2.\nDoes anyone have any experience doing this or know how I would do it?\nEdit: When I extractText using PyPDF2, the output has no spaces: Col1Col1Col1Col1Col2Col2Col2Col2","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":783,"Q_Id":50376895,"Users Score":1,"Answer":"Using pdfminer.six, I successfully read from left to right with spaces in between.","Q_Score":1,"Tags":"python,pdf,pypdf2","A_Id":50377964,"CreationDate":"2018-05-16T17:31:00.000","Title":"Split a PDF file into two columns along a certain measurement in Python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I get a db record as an sqlalchemy object and I need to consult the original values during some calculation process, so I need the original record till the end. However, the current code modifies the object as it goes and I don't want to refactor it too much at the moment. \nHow can I make a copy of the original data? The deepcopy seems to create a problem, as expected.
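To make the calibration idea from the face-distance answer concrete, here is a small sketch of the standard triangle-similarity calculation (all measured numbers are made-up placeholders):

```python
# Calibrate once: photograph a face of known width at a known distance
# and measure the pixel width of the detected rectangle.
KNOWN_DISTANCE_CM = 60.0       # how far the face was from the camera
KNOWN_WIDTH_CM = 15.0          # real width of that face
PIXEL_WIDTH_AT_CALIB = 220.0   # detected rectangle width in pixels

# Perceived focal length derived from the calibration shot.
focal_length = (PIXEL_WIDTH_AT_CALIB * KNOWN_DISTANCE_CM) / KNOWN_WIDTH_CM

def distance_cm(pixel_width):
    """Estimate camera-to-face distance from a detected rectangle width."""
    return (KNOWN_WIDTH_CM * focal_length) / pixel_width

print(distance_cm(110.0))  # half the pixel width => twice the distance (120 cm)
```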
I definitely prefer not to copy all the fields manually, as someone will forget to update this code when modifying the db object.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1099,"Q_Id":50396458,"Users Score":0,"Answer":"You have many options here to copy your object. Two that I can think of are:\n\nUsing __dict__, which will give you the dictionary of the original sqlalchemy object; you can iterate through all the attributes using its .keys() method.\nYou can also use the inspect module and getmembers() to get all the attributes defined, and set the required attributes using the setattr() method.","Q_Score":1,"Tags":"python,python-2.7,sqlalchemy,copy","A_Id":50397021,"CreationDate":"2018-05-17T16:34:00.000","Title":"how to make a copy of an sqlalchemy object (data only)","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am using a pi3 which talks to an arduino via serial0 (ttyAMA0)\nIt all works fine. I can talk to it with minicom, bidirectionally. However, a python based server also wants this port. I notice when minicom is running, the python code can write to serial0 but not read from it. At least minicom reports the python server has sent a message.\nCan someone let me know how this serial port handles contention, if at all? I notice running two minicom sessions to the same serial port wrecks both sessions. Is it possible to have multiple writers and readers if they are coordinated not to act at the same time? Or can there be multiple readers (several terms running cat \/dev\/serial0)\nI have googled around for answers but most hits are about using multiple serial ports or getting a serial port to work at all.\nCheers","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":68,"Q_Id":50404863,"Users Score":1,"Answer":"Since two minicoms can attempt to use the port and there are collisions, minicom must not set an advisory lock on local writes to the serial port. I guess that the first app to read a received remote serial message clears it, since serial doesn't buffer. When a local app writes to serial, minicom displays this and it gets sent. Here is my assumed summary:\n\nwhen a local process puts a message on the serial port, everyone can\nsee it and it gets sent to the remote. \nwhen a remote message arrives on\nserial, the first local process to get it, gets it. The others\ncan't see it. \nfor some reason, minicom has privilege over arriving\nmessages. This is why two minicoms break the message.","Q_Score":0,"Tags":"python,serial-port,raspberry-pi3","A_Id":50575853,"CreationDate":"2018-05-18T06:14:00.000","Title":"basic serial port contention","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am coming from a C++ programming background and am wondering if there is a pass by reference equivalent in python.
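As a small sketch of the inspection approach from the sqlalchemy-copy answer above, SQLAlchemy's inspect() can list the mapped columns so the copy never goes stale (record is a hypothetical mapped object):

```python
from sqlalchemy import inspect

def snapshot(obj):
    """Copy only the mapped column values of a SQLAlchemy object into a dict."""
    mapper = inspect(obj).mapper
    return {attr.key: getattr(obj, attr.key) for attr in mapper.column_attrs}

original = snapshot(record)   # take the snapshot before any mutation
record.name = "changed"
print(original["name"])       # still holds the original value
```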
The reason I am asking is that I am passing very large arrays into different functions and want to know how to do it in a way that does not waste time or memory by having to copy the array to a new temporary variable each time I pass it. It would also be nice if, like in C++, changes I make to the array would persist outside of the function.\nThanks in advance,\nJared","AnswerCount":1,"Available Count":1,"Score":0.6640367703,"is_accepted":false,"ViewCount":507,"Q_Id":50414041,"Users Score":4,"Answer":"Python handles function arguments in the same manner as most common languages: Java, JavaScript, C (pointers), C++ (pointers, references).\nAll objects are allocated on the heap. Variables are always a reference\/pointer to the object. The value, which is the pointer, is copied. The object remains on the heap and is not copied.","Q_Score":0,"Tags":"python,python-3.x","A_Id":50414131,"CreationDate":"2018-05-18T14:53:00.000","Title":"Effective passing of large data to python 3 functions","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I need to make a program which will differentiate a function, but I have no idea how to do this. I've only made a part which transforms the regular expression (x ^ 2 + 2 for example) into reverse polish notation. Can anybody help me with creating a program which will find symbolic derivatives of expressions with + * \/ - ^","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1646,"Q_Id":50424344,"Users Score":1,"Answer":"Hint: Use a recursive routine. If an operation is unary plus or minus, leave the plus or minus sign alone and continue with the operand. (That means, recursively call the derivative routine on the operand.) If an operation is addition or subtraction, leave the plus or minus sign alone and recursively find the derivative of each operand. If the operation is multiplication, use the product rule. If the operation is division, use the quotient rule. If the operation is exponentiation, use the generalized power rule. (Do you know that rule, for u ^ v? It is not given in most first-year calculus books but is easy to find using logarithmic differentiation.) (Now that you have clarified in a comment that there will be no variable in the exponent, you can use the regular power rule (u^n)' = n * u^(n-1) * u' where n is a constant.) And at the base of the recursion, the derivative of x is 1 and the derivative of a constant is zero.\nThe result of such an algorithm would be very un-simplified but it would meet your stated requirements.
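A compact sketch of the recursive rules just described, operating on nested tuples such as ('+', ('^', 'x', 2), 2) instead of a parsed string, and assuming constant exponents per the clarification (binary operators only; unary minus could be added the same way):

```python
def d(e):
    """Derivative of an expression tree with respect to x (unsimplified)."""
    if e == 'x':
        return 1
    if isinstance(e, (int, float)):          # constant
        return 0
    op, u, v = e
    if op in ('+', '-'):                     # sum/difference rule
        return (op, d(u), d(v))
    if op == '*':                            # product rule
        return ('+', ('*', d(u), v), ('*', u, d(v)))
    if op == '/':                            # quotient rule
        return ('/', ('-', ('*', d(u), v), ('*', u, d(v))), ('^', v, 2))
    if op == '^':                            # power rule, constant exponent n
        return ('*', ('*', v, ('^', u, v - 1)), d(u))
    raise ValueError("unknown operator: %r" % op)

print(d(('+', ('^', 'x', 2), 2)))  # derivative of x^2 + 2
```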
Since this algorithm looks at an operation then looks at the operands, having the expression in Polish notation may be simpler than reverse Polish or \"regular expression.\" But you could still do it for the expression in those forms.\nIf you need more detail, show us more of your work.","Q_Score":0,"Tags":"python,math","A_Id":50424748,"CreationDate":"2018-05-19T10:36:00.000","Title":"How to find symbolic derivative using python without sympy?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have an array of integers (nodes or destinations), i.e. array[2,3,4,5,6,8], that need to be visited in the given sequence.\nWhat I want is to get the shortest distance using pgr_dijkstra. But pgr_dijkstra finds the shortest path for two points, therefore I need to find the distance of each pair using pgr_dijkstra and add all distances to get the total distance.\nThe pairs will be like\n2,3\n3,4 \n4,5 \n5,6 \n6,8.\nIs there any way to define a function that takes this array and finds the shortest path using pgr_dijkstra?\nQuery is:\nfor 1st pair(2,3)\nSELECT * FROM pgr_dijkstra('SELECT gid as id,source, target, rcost_len AS cost FROM finalroads',2,3, false);\nfor 2nd pair(3,4)\nSELECT * FROM pgr_dijkstra('SELECT gid as id,source, target, rcost_len AS cost FROM finalroads'***,3,4,*** false)\nfor 3rd pair(4,5)\nSELECT * FROM pgr_dijkstra('SELECT gid as id,source, target, rcost_len AS cost FROM finalroads'***,4,5,*** false);\nNOTE: The array size is not fixed, it can be different.\nIs there any way to automate this in postgres sql, maybe using a loop etc.?\nPlease let me know how to do it.\nThank you.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":708,"Q_Id":50429760,"Users Score":0,"Answer":"If you want all-pairs distances then use\nselect * from pgr_apspJohnson('SELECT gid as id,source, target, rcost_len AS cost FROM finalroads')","Q_Score":1,"Tags":"mysql,sql,postgresql,mysql-python,pgrouting","A_Id":51161622,"CreationDate":"2018-05-19T21:46:00.000","Title":"how to get the distance of sequence of nodes in pgr_dijkstra pgrouting?","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have built a chatbot using AWS Lex and lambda. I have a use case wherein a user enters a question (For example: What is the sale of an item in a particular region). I want that once this question is asked, an html form\/pop-up appears that asks the user to select the value of region and item from dropdown menus, fills the slot of the question with the value selected by the user, and then returns a response. Can someone guide me on how this can be achieved? Thanks.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":224,"Q_Id":50447302,"Users Score":0,"Answer":"Lex has something called response cards where you can add all the possible values. These are called prompts. The user can simply select his\/her choice and the slot gets filled. Lex response cards work in Facebook and slack.
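For reference, a hedged sketch of what such a response card looks like in a Lex (v1) Lambda response, built as a plain Python dict (the slot name and button values are illustrative):

```python
def elicit_region_slot(intent_name, slots):
    """Build a Lex v1 response that shows clickable region choices."""
    return {
        "dialogAction": {
            "type": "ElicitSlot",
            "intentName": intent_name,
            "slots": slots,
            "slotToElicit": "Region",  # assumed slot name
            "message": {"contentType": "PlainText",
                        "content": "Which region?"},
            "responseCard": {
                "version": 1,
                "contentType": "application/vnd.amazonaws.card.generic",
                "genericAttachments": [{
                    "title": "Region",
                    "buttons": [
                        {"text": "North", "value": "north"},
                        {"text": "South", "value": "south"},
                    ],
                }],
            },
        }
    }
```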
\nIn the case of a custom channel, you will have to custom-develop the UI components.","Q_Score":0,"Tags":"python-3.x,amazon-web-services,aws-lambda,chatbot,amazon-lex","A_Id":50462554,"CreationDate":"2018-05-21T10:54:00.000","Title":"Using aws lambda to render an html page in aws lex chatbot","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am new to Python and I am using Python 3.6.4. I also use the PyCharm editor to write all my code. Please let me know how I can install the Image library in Windows 7 and whether it would work in PyCharm too.","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":434,"Q_Id":50461995,"Users Score":0,"Answer":"From PyCharm: \n\nGo to Settings -> Project Interpreter\nClick the + button in the top right corner and you will get a pop-up window of \navailable packages. Then search for the pillow (PIL image) python package.\nThen click on Install Package to install it.","Q_Score":0,"Tags":"python,windows,python-3.x,image,pycharm","A_Id":50463283,"CreationDate":"2018-05-22T07:22:00.000","Title":"How to install image library in python 3.6.4 in windows 7?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am attempting to learn how to create a website using python. I have been going off the advice of various websites including stackoverflow. Currently I can run code in eclipse using pydev, but I need to install django. I have no idea how to do this and I don't know who to ask or where to begin. Please help","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":68,"Q_Id":50478204,"Users Score":0,"Answer":"I would recommend the following:\n\nInstall virtualenv\n\n$ pip install virtualenv\n\nCreate a new virtual environment\n\n$ virtualenv django-venv\n\nActivate the virtual environment to use it\n\n$ source django-venv\/bin\/activate\n\nAnd install django as expected\n\n(django-venv)$ pip install django==1.11.13\n(Replace with django version as needed)","Q_Score":0,"Tags":"python,django,eclipse,pydev","A_Id":50478352,"CreationDate":"2018-05-23T00:28:00.000","Title":"I have downloaded eclipse and pydev, but I am unsure how to get install django","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"i have a webservice which gets user requests and produces (multiple) solution(s) to this request.\nI want to return a solution as soon as possible, and send the remaining solutions when they are ready.\nIn order to do this, I thought about using Django's Http stream response. Unfortunately, I am not sure if this is the most adequate way of doing so, because of the problem I will describe below.\nI have a Django view, which receives a query and answers with a stream response.
This stream returns the data returned by a generator, which is always a python dictionary.\nThe problem is that upon the second return action of the stream, the Json content breaks.\nIf the python dictionary, which serves as a response, is something like {key: val}, after the second yield the returned response is {key: val} {key: val}, which is not valid Json.\nAny suggestions on how to return multiple Json objects at different moments in time?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":928,"Q_Id":50491360,"Users Score":0,"Answer":"Each yielded chunk must be a complete, self-contained piece of JSON. For example, wrap the objects in a single array:\n\n\n\nimport json\njson.dumps([{key: val}, {key: val}])\n\nor yield one complete JSON object per line (newline-delimited JSON) and have the client parse line by line.","Q_Score":1,"Tags":"python,json,django","A_Id":58790635,"CreationDate":"2018-05-23T14:46:00.000","Title":"Proper way of streaming JSON with Django","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have been using pycharm for a while now, and I have to say that I am a real fan of its features. I have one issue though, when I try to run a .py file from either the desktop or command prompt, I am instead prompted to use the run feature in pycharm. I consider this an issue because if I try to create a program for someone who doesn't know how to code, they would probably be scared off by opening pycharm. I don't, however, want to uninstall pycharm because it is so useful when writing code. Does anyone have any ideas for me? By the way, I am using a dell Inspiron 15 7000 Gaming laptop with the current version of Windows 10 installed.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":65,"Q_Id":50492727,"Users Score":0,"Answer":"You can try running the file by its direct path; I'm not sure what you have tried. If you wanted to run it as I just described, you would do:\npy C:\\~AppData\\Local\\Programs\\Python\\Python36-32\\hello.py\nIf you move the file into your current working directory when programming, you should just be able to run py hello.py.","Q_Score":0,"Tags":"python,pycharm","A_Id":50492782,"CreationDate":"2018-05-23T15:52:00.000","Title":"pycharm won't let me run from desktop","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I'm programming a bit of server code and the MQTT side of it runs in its own thread using the threading module, which works great with no issues, but now I'm wondering how to proceed. \nI have two MariaDB databases, one of them is local and the other is remote (There is a good and niche reason for this.) and I'm writing a class which handles the databases. This class will start new threads of classes that submit the data to their respective databases. If conditions are true, then it tells the data to start a new thread to push data to one database, if they are false, the data will go to the other database.
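A minimal sketch of the newline-delimited variant of that streaming fix, using Django's StreamingHttpResponse (compute_solutions is a hypothetical generator standing in for the view's solver):

```python
import json
from django.http import StreamingHttpResponse

def solutions_view(request):
    """Stream each solution as one complete JSON object per line (NDJSON)."""
    def stream():
        for solution in compute_solutions(request):  # hypothetical solver generator
            yield json.dumps(solution) + "\n"  # every chunk is valid JSON on its own
    return StreamingHttpResponse(stream(), content_type="application/x-ndjson")
```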
The MQTT thread has an instance of the \"Database handler\" class and passes data to it through different calling functions within the class.\nWill this work to allow a thread to concentrate on MQTT tasks while another does the database work? There are other threads as well, I've just never combined databases and threads before so I'd like an opinion or any information that would help me out from more seasoned programmers.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":159,"Q_Id":50497250,"Users Score":0,"Answer":"Writing code that is \"thread safe\" can be tricky. I doubt if the Python connector to MySQL is thread safe; there is very little need for it.\nMySQL is quite happy to have multiple connections to it from clients. But they must be separate connections, not the same connection running in separate threads.\nVery few projects need multi-threaded access to the database. Do you have a particular need? If so let's hear about it, and discuss the 'right' way to do it.\nFor now, each of your threads that needs to talk to the database should create its own connection. Generally, such a connection can be created soon after starting the thread (or process) and kept open until close to the end of the thread. That is, normally you should have only one connection per thread.","Q_Score":0,"Tags":"python,database,multithreading,mariadb","A_Id":50634340,"CreationDate":"2018-05-23T20:49:00.000","Title":"Calling database handler class in a python thread","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have multiple modules and they each have their own log. They all write to the log correctly, however when a class is instantiated more than once the log will write the same line multiple times, depending on the number of times it was created. \nIf I create the object twice it will log every message twice, create the object three times and it will log every message three times, etc...\nI was wondering how I could fix this without having to create each object only once.\nAny help would be appreciated.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":10,"Q_Id":50535298,"Users Score":0,"Answer":"I was adding the handler multiple times after each instantiation of a log. I checked if the handler had already been added at the instantiation and that fixed the multiple writes.","Q_Score":0,"Tags":"python-2.7,logging","A_Id":50535472,"CreationDate":"2018-05-25T18:54:00.000","Title":"python logging multiple calls after each instantiation","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I use celery for doing snmp requests with the easysnmp library, which has a C interface.\nThe problem is that lots of time is being wasted on I\/O.
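To illustrate the one-connection-per-thread advice from the database answer above, here is a minimal sketch using threading.local with the third-party PyMySQL driver (connection parameters are placeholders):

```python
import threading
import pymysql  # any DB-API driver works the same way

_tls = threading.local()

def get_connection():
    """Give each thread its own MySQL connection, created lazily."""
    if not hasattr(_tls, "conn"):
        _tls.conn = pymysql.connect(host="localhost", user="app",
                                    password="secret", database="appdb")
    return _tls.conn

def worker():
    # Each thread talks to the database over its own connection.
    with get_connection().cursor() as cur:
        cur.execute("SELECT 1")
        print(threading.current_thread().name, cur.fetchone())

threads = [threading.Thread(target=worker) for _ in range(3)]
for t in threads: t.start()
for t in threads: t.join()
```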
I know that I should use eventlet or gevent in this kind of situations, but I don't know how to handle patching a third party library when it uses C extensions.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":283,"Q_Id":50569140,"Users Score":1,"Answer":"Eventlet and gevent can't monkey-patch C code.\nYou can offload blocking calls to OS threads with eventlet.tpool.execute(library.io_func)","Q_Score":0,"Tags":"python,celery,gevent,eventlet","A_Id":50585353,"CreationDate":"2018-05-28T15:00:00.000","Title":"using c extension library with gevent","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"Python Ray looks interesting for machine learning applications. However, I wonder how large Python Ray can handle. Is it limited by memory or can it actually handle data that exceeds memory?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":466,"Q_Id":50575443,"Users Score":2,"Answer":"It currently works best when the data fits in memory (if you're on a cluster, then that means the aggregate memory of the cluster). If the data exceeds the available memory, then Ray will evict the least recently used objects. If those objects are needed later on, they will be reconstructed by rerunning the tasks that created them.","Q_Score":2,"Tags":"python,ray","A_Id":51431522,"CreationDate":"2018-05-29T02:13:00.000","Title":"How large data can Python Ray handle?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I'm trying to make a Discord bot in Python that a user can request a unit every few minutes, and later ask the bot how many units they have. Would creating a google spreadsheet for the bot to write each user's number of units to be a good idea, or is there a better way to do this?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":252,"Q_Id":50590788,"Users Score":0,"Answer":"Using a database is the best option. If you're working with a small number of users and requests you could use something even simpler like a text file for ease of use, but I'd recommend a database.\nEasy to use database options include sqlite (use the sqlite3 python library) and MongoDB (I use the mongoengine python library for my Slack bot).","Q_Score":0,"Tags":"python,python-3.x,discord.py","A_Id":50590881,"CreationDate":"2018-05-29T18:31:00.000","Title":"Discord bot with user specific counter","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":1},{"Question":"I have created virtual environment named virualenv. I have scrapy project and I am using there some programs installed in my virtualenv. 
When I run it from terminal in VSC I can see errors even when I set up my virtual environment via Ctrl+Shift+P -> Python: Select Interpreter -> Python 3.5.2(virtualenv). The interpreter works in some way, I can import libs without errors etc, but I am not able to start my scrapy project from the terminal. I have to activate my virtual environment first via \/{path_to_virtualenv}\/bin\/activate. Is there a way to automatically activate it? Now I am using PyCharm and it is possible there, but VSC looks much better according to me.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":2194,"Q_Id":50593213,"Users Score":1,"Answer":"One way I know of:\nStart cmd\nStart your virtual env\n(helloworld) \\path\\etc> code .\nIt will start VS Code in this environment. Hope it helps","Q_Score":3,"Tags":"python-3.x,visual-studio-code,virtualenv","A_Id":54950619,"CreationDate":"2018-05-29T21:28:00.000","Title":"How execute python command within virtualenv with Visual Studio Code","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":1,"REVIEW":0},{"Question":"I'm new (obviously) to python, but not so new to TensorFlow.\nI've been trying to debug my program using breakpoints, but every time I try to check the content of a tensor in the variable view of my Visual Studio Code debugger, the content doesn't show and I get this warning in the console:\n\nWARNING:tensorflow:Tensor._shape is private, use Tensor.shape instead. Tensor._shape will eventually be removed.\n\nI'm a bit confused on how to fix this issue. Do I have to wait for an update of TensorFlow before it works?","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":1348,"Q_Id":50609002,"Users Score":0,"Answer":"You can simply stop at the breakpoint, switch to the DEBUG CONSOLE panel, and type var.shape. It's not that convenient, but at least you don't need to write any extra debug code in your code.","Q_Score":0,"Tags":"python,debugging,tensorflow,visual-studio-code","A_Id":51165112,"CreationDate":"2018-05-30T15:56:00.000","Title":"TensorFlow debug: WARNING:tensorflow:Tensor._shape is private, use Tensor.shape instead. Tensor._shape will eventually be removed","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I'm new (obviously) to python, but not so new to TensorFlow.\nI've been trying to debug my program using breakpoints, but every time I try to check the content of a tensor in the variable view of my Visual Studio Code debugger, the content doesn't show and I get this warning in the console:\n\nWARNING:tensorflow:Tensor._shape is private, use Tensor.shape instead. Tensor._shape will eventually be removed.\n\nI'm a bit confused on how to fix this issue. Do I have to wait for an update of TensorFlow before it works?","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":1348,"Q_Id":50609002,"Users Score":0,"Answer":"Probably yes, you may have to wait. In the debug mode a deprecated function is being called.
\nYou can print out the shape explicitly by checking var.shape in the code as a workaround (note that shape is a property, not a method). I know, not very convenient.","Q_Score":0,"Tags":"python,debugging,tensorflow,visual-studio-code","A_Id":50609100,"CreationDate":"2018-05-30T15:56:00.000","Title":"TensorFlow debug: WARNING:tensorflow:Tensor._shape is private, use Tensor.shape instead. Tensor._shape will eventually be removed","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have django 1.11 with latest django-storages, setup with S3 backend.\nI am trying to programmatically instantiate an ImageFile, using the AWS image link as a starting point. I cannot figure out how to do this looking at the source \/ documentation. \nI assume I need to create a file, and give it the path derived from the url without the domain, but I can't find exactly how.\nThe final aim of this is to programmatically create wagtail Image objects that point to S3 images (so pass the new ImageFile to the ImageField of the image). I own the S3 bucket the images are stored in.\nUploading images works correctly, so the system is set up correctly.\nUpdate\nTo clarify, I need to do the reverse of the normal process. Normally a physical image is given to the system, which then creates an ImageFile, the file is then uploaded to S3, and a URL is assigned to the File.url. I have the File.url and need an ImageFile object.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":2104,"Q_Id":50609686,"Users Score":6,"Answer":"It turns out that, in several models that expect files, when using DjangoStorages, all I had to do was pass the AWS S3 object key on the file field instead of a File (so not a URL, just the object key).\nWhen model.save() is called, a boto call is made to S3 to verify an object with the provided key is there, and the item is saved.","Q_Score":4,"Tags":"django,boto3,wagtail,python-django-storages","A_Id":50804853,"CreationDate":"2018-05-30T16:38:00.000","Title":"Django storages S3 - Store existing file","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":1,"ERRORS":0,"REVIEW":0},{"Question":"I installed miniconda for Windows10 successfully and then I could install numpy, scipy, sklearn successfully, but when I run import sklearn in python IDLE I receive No module named 'sklearn' in anaconda prompt. It recognized my python version, which was 3.6.5, correctly.
I don't know what's wrong; can anyone tell me how to import modules in IDLE?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1095,"Q_Id":50633488,"Users Score":0,"Answer":"Why not download the full Anaconda? This will install everything you need to start, which includes the Spyder IDE, RStudio, Jupyter and all the needed modules.\nI have been using Anaconda without any error and I recommend you try it out.","Q_Score":2,"Tags":"python,import,scikit-learn","A_Id":50634001,"CreationDate":"2018-05-31T22:09:00.000","Title":"import sklearn in python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I cannot install tensorflow in pycharm on windows 10, though I have tried many different things:\n\nwent to settings > project interpreter and tried clicking the green plus button to install it, gave me the error: non-zero exit code (1) and told me to try installing via pip in the command line, which was successful, but I can't figure out how to make Pycharm use it when it's installed there\ntried changing to a Conda environment, which still would not allow me to run tensorflow since when I input into the python command line: pip.main(['install', 'tensorflow']) it gave me another error and told me to update pip\nupdated pip then tried step 2 again, but now that I have pip 10.0.1, I get the error 'pip has no attribute main'. I tried reverting pip to 9.0.3 in the command line, but this won't change the version used in pycharm, which makes no sense to me. I reinstalled anaconda, as well as pip, and deleted and made a new project and yet it still says that it is using pip 10.0.1 which makes no sense to me\n\nSo in summary, I still can't install tensorflow, and I now have the wrong version of pip being used in Pycharm. I realize that there are many other posts about this issue but I'm pretty sure I've been to all of them and either didn't get an applicable answer or an answer that I understand.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":16417,"Q_Id":50634751,"Users Score":0,"Answer":"What worked for me is this:\n\nI installed TensorFlow on the command prompt as an administrator using this command: pip install tensorflow\nthen I jumped back to my PyCharm and clicked the red light bulb pop-up icon; it will have a few options when you click it, just select the one that says install tensorflow. This does not install it from scratch but basically rebuilds and updates your PyCharm workspace to note the newly installed tensorflow","Q_Score":3,"Tags":"python,tensorflow,pip,pycharm,conda","A_Id":69603564,"CreationDate":"2018-06-01T01:04:00.000","Title":"Pycharm Can't install TensorFlow","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":1,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":1,"REVIEW":0},{"Question":"I started learning django a few days back and started a project; luckily the project turned out well and I'm thinking of deploying it. However, I didn't initiate it in a virtual environment. I have made a virtual environment now and want to move the project to that. 
I want to know how I can do that. I have created requirements.txt, however it has included all the irrelevant library names. How can I get rid of them and keep only those that are required for the project?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":863,"Q_Id":50654965,"Users Score":1,"Answer":"Django is completely unrelated to the environment you run it on. \nThe environment represents which python version you are using (2, 3...) and the libraries installed.\nTo answer your question, the only thing you need to do is run your manage.py commands from the python executable in the new virtual environment. Of course, install all of the necessary libraries in the new environment if you haven't already done so.\nIt might be a problem if you created a python3 environment while the original one was in python2, but at that point it's a code portability issue.","Q_Score":1,"Tags":"python,django,virtualenv","A_Id":50655762,"CreationDate":"2018-06-02T08:27:00.000","Title":"How should I move my completed Django Project into a Virtual Environment?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am developing a convolution neural network (CNN) model to predict whether a patient is in category 1, 2, 3 or 4. I use Keras on top of TensorFlow.\nI have 64 breast cancer patients' data, classified into four categories (1=no disease, 2= \u2026., 3=\u2026.., 4=progressive disease). In each patient's data, I have 3 sets of MRI scan images taken at different dates, and inside each MRI folder I have 7 to 8 sub folders containing MRI images in different planes (such as coronal plane\/sagittal plane etc). \nI learned how to deal with a basic \u201cCat-Dog-CNN-Classifier\u201d; it was easy as I put all the cat & dog images into a single folder to train the network. But how do I tackle the problem in my breast cancer patient data? It has multiple folders and sub-folders.\nPlease suggest.","AnswerCount":3,"Available Count":1,"Score":-0.0665680765,"is_accepted":false,"ViewCount":1045,"Q_Id":50664485,"Users Score":-1,"Answer":"Use os.walk to access all the files in sub-directories recursively and append to the dataset.","Q_Score":0,"Tags":"python,tensorflow,keras","A_Id":55304529,"CreationDate":"2018-06-03T08:14:00.000","Title":"Train CNN model with multiple folders and sub-folders","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have started using Atom recently and for the past few days, I have been searching on how to change the default version used in Atom (The default version is currently python 2.7 but I want to use 3.6). Is there anyone I can change the default path? (I have tried adding a profile to the \"script\" package but it still reverts to python 2.7 when I restart Atom. Any help will be hugely appreciated!! Thank you very much in advance.","AnswerCount":4,"Available Count":4,"Score":0.0,"is_accepted":false,"ViewCount":11411,"Q_Id":50667214,"Users Score":0,"Answer":"I would look in the installed plugins in Atom's settings; 
you can get there by pressing command + shift + p, then searching for settings.\nThe only reason I suggest this is that plugins are where I set up Swift language support in Atom, through a plugin that manages that.\nAnother term for plugins in Atom is \"community packages\".\nHope this helps.","Q_Score":3,"Tags":"python,python-3.x,editor,atom-editor","A_Id":50682856,"CreationDate":"2018-06-03T14:02:00.000","Title":"How can I change the default version of Python Used by Atom?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0,"API_CHANGE":1,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have started using Atom recently and for the past few days, I have been searching on how to change the default version used in Atom (The default version is currently python 2.7 but I want to use 3.6). Is there anyone I can change the default path? (I have tried adding a profile to the \"script\" package but it still reverts to python 2.7 when I restart Atom. Any help will be hugely appreciated!! Thank you very much in advance.","AnswerCount":4,"Available Count":4,"Score":0.0,"is_accepted":false,"ViewCount":11411,"Q_Id":50667214,"Users Score":0,"Answer":"Yes, there is. After starting Atom, open the script you wish to run. Then open the command palette and select 'Python: Select interpreter'. A list appears with the available python versions listed. Select the one you want and hit return. Now you can run the script by placing the cursor in the edit window and right-clicking the mouse. A long menu appears and you should choose 'Run python in the terminal window'. This is towards the bottom of the long menu list. The script will run using the interpreter you selected.","Q_Score":3,"Tags":"python,python-3.x,editor,atom-editor","A_Id":50846354,"CreationDate":"2018-06-03T14:02:00.000","Title":"How can I change the default version of Python Used by Atom?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0,"API_CHANGE":1,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have started using Atom recently and for the past few days, I have been searching on how to change the default version used in Atom (The default version is currently python 2.7 but I want to use 3.6). Is there anyone I can change the default path? (I have tried adding a profile to the \"script\" package but it still reverts to python 2.7 when I restart Atom. Any help will be hugely appreciated!! 
Thank you very much in advance.","AnswerCount":4,"Available Count":4,"Score":1.0,"is_accepted":false,"ViewCount":11411,"Q_Id":50667214,"Users Score":10,"Answer":"I am using script 3.18.1 in Atom 1.32.2\nNavigate to Atom (at top left) > Open Preferences > Open Config folder.\nNow, Expand the tree as script > lib > grammars\nOpen python.coffee and change 'python' to 'python3' in both the places in command argument","Q_Score":3,"Tags":"python,python-3.x,editor,atom-editor","A_Id":53507376,"CreationDate":"2018-06-03T14:02:00.000","Title":"How can I change the default version of Python Used by Atom?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0,"API_CHANGE":1,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have started using Atom recently and for the past few days, I have been searching on how to change the default version used in Atom (The default version is currently python 2.7 but I want to use 3.6). Is there anyone I can change the default path? (I have tried adding a profile to the \"script\" package but it still reverts to python 2.7 when I restart Atom. Any help will be hugely appreciated!! Thank you very much in advance.","AnswerCount":4,"Available Count":4,"Score":0.0,"is_accepted":false,"ViewCount":11411,"Q_Id":50667214,"Users Score":0,"Answer":"I came up with an inelegant solution that may not be universal. Using platformio-ide-terminal, I simply had to call python3.9 instead of python or python3. Not sure if that is exactly what you're looking for.","Q_Score":3,"Tags":"python,python-3.x,editor,atom-editor","A_Id":71299879,"CreationDate":"2018-06-03T14:02:00.000","Title":"How can I change the default version of Python Used by Atom?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0,"API_CHANGE":1,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have scanned PDFs (image based) of bank statements.\nGoogle vision API is able to detect the text pretty accurately but it returns blocks of text and I need line by line text (bank transactions).\nAny idea how to go about it?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":3081,"Q_Id":50673749,"Users Score":0,"Answer":"In Google Vision API there is a method fullTextAnnotation which returns a full text string with \\n specifying the end of the line, You can try that.","Q_Score":2,"Tags":"python,pdf,ocr,google-cloud-vision","A_Id":62009637,"CreationDate":"2018-06-04T05:13:00.000","Title":"Line by line data from Google cloud vision API OCR","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"The 'merror' and 'logloss' result from XGB multiclass classification differs by about 0.01 or 0.02 on each run, with the same parameters. Is this normal? \nI want 'merror' and 'logloss' to be constant when I run XGB with the same parameters so I can evaluate the model precisely (e.g. 
when I add a new feature).\nNow, if I add a new feature I can't really tell whether it had a positive impact on my model's accuracy or not, because my 'merror' and 'logloss' differ on each run regardless of whether I made any changes to the model or the data fed into it since the last run.\nShould I try to fix this and if I should, how can I do it?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1725,"Q_Id":50688189,"Users Score":0,"Answer":"Managed to solve this. First I set the 'seed' parameter of XgBoost to a fixed value, as Hadus suggested. Then I found out that I used sklearn's train_test_split function earlier in the notebook, without setting the random_state parameter to a fixed value. So I set the random_state parameter to 22 (you can use whichever integer you want) and now I'm getting constant results.","Q_Score":1,"Tags":"python,performance,machine-learning,xgboost","A_Id":50723630,"CreationDate":"2018-06-04T20:20:00.000","Title":"XgBoost accuracy results differ on each run, with the same parameters. How can I make them constant?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I made a program that grabs the top three new posts on the r\/wallpaper subreddit. It downloads the pictures every 24 hours and adds them to my wallpapers folder. What I'm running into is how to have the program running in the background. The program resumes every time I turn the computer on, but it pauses whenever I close the computer. Is there a way to close the computer without pausing the program? I'm on a mac.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":90,"Q_Id":50690141,"Users Score":0,"Answer":"Programs can't run when the computer is powered off. However, you can run a computer headlessly (without mouse, keyboard, and monitor) to save resources. Just ensure your program runs over the command line interface.","Q_Score":0,"Tags":"python,macos,praw","A_Id":50690315,"CreationDate":"2018-06-04T23:38:00.000","Title":"How to keep python programming running constantly","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have a script that I am trying to execute every 2 seconds.. to begin it reads a .csv with pd.read_csv. 
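As a side note on the XGBoost answer above: a minimal sketch of pinning both sources of randomness it describes, i.e. sklearn's random_state for the split and XGBoost's seed (called random_state in newer xgboost versions). The toy data and the value 22 are illustrative, not from the original post.

```python
# Sketch: fix both RNG seeds so merror/logloss are repeatable across runs.
import numpy as np
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

X = np.random.rand(200, 5)             # toy features (illustrative)
y = np.random.randint(0, 4, size=200)  # toy multiclass labels

# Fixed random_state -> identical train/test split on every run.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=22)

# Fixed seed -> identical training results on the same data.
model = XGBClassifier(seed=22)
model.fit(X_train, y_train)
```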
Then executes modifications on the df and finally overwrites the original .csv with to_csv.\nI'm running into a PermissionError: [Errno 13] Permission denied: and from my searches I believe it's due to trying to open\/write too often to the same file though I could be wrong.\n\nAny suggestions how to avoid this?\nNot sure if relevant but the file is stored in one-drive folder.\nIt does save on occasion, seemingly randomly.\nIncreasing the timeout so the script executes slower helps but I want it running fast!\n\nThanks","AnswerCount":2,"Available Count":1,"Score":-0.0996679946,"is_accepted":false,"ViewCount":1596,"Q_Id":50692295,"Users Score":-1,"Answer":"Close the file that you are trying to read and write and then try running your script.\nHope it helps","Q_Score":0,"Tags":"python,pandas,csv,io","A_Id":50692350,"CreationDate":"2018-06-05T04:53:00.000","Title":"Pandas - Read\/Write to the same csv quickly.. getting permissions error","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":1,"REVIEW":0},{"Question":"I have been working on creating and training a Deep Learning model for the first time. I did not have any knowledge about the subject prior to the project and therefor my knowledge is limited even now.\nI used to run the model on my own laptop but after implementing a well working OHE and SMOTE I simply couldnt run it on my own device anymore due to MemoryError (8GB of RAM). Therefor I am currently running the model on a 30GB RAM RDP which allows me to do so much more, I thought. \nMy code seems to have some horribly inefficiencies of which I wonder if they can be solved. One example is that by using pandas.concat my model's RAM usages skyrockets from 3GB to 11GB which seems very extreme, afterwards I drop a few columns making the RAm spike to 19GB but actually returning back to 11GB after the computation is completed (unlike the concat). I also forced myself to stop using the SMOTE for now just because the RAM usage would just go up way too much. \nAt the end of the code, where the training happens the model breaths its final breath while trying to fit the model. What can I do to optimize this?\nI have thought about splitting the code into multiple parts (for exmaple preprocessing and training) but to do so I would need to store massive datasets in a pickle which can only reach 4GB (correct me if I'm wrong). I have also given thought about using pre-trained models but I truely did not understand how this process goes to work and how to use one in Python. 
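On the PermissionError answer above: if the file is only locked briefly (a sync client such as OneDrive can do this), one hedged workaround is to retry the write a few times rather than slowing the whole loop down. This is a sketch with an illustrative path and helper name, not the poster's code.

```python
# Sketch: retry to_csv when the file is temporarily locked.
import time
import pandas as pd

def save_with_retry(df: pd.DataFrame, path: str, attempts: int = 5,
                    wait: float = 0.5) -> None:
    for _ in range(attempts):
        try:
            df.to_csv(path, index=False)
            return                      # write succeeded
        except PermissionError:
            time.sleep(wait)            # lock not yet released; retry
    raise PermissionError(f"could not write {path} after {attempts} attempts")
```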
\nP.S.: I would also like my SMOTE back if possible \nThank you all in advance!","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1499,"Q_Id":50719405,"Users Score":0,"Answer":"Slightly orthogonal to your actual question, if your high RAM usage is caused by having entire dataset in memory for the training, you could eliminate such memory footprint by reading and storing only one batch at a time: read a batch, train on this batch, read next batch and so on.","Q_Score":0,"Tags":"python,deep-learning,ram,rdp","A_Id":50723414,"CreationDate":"2018-06-06T11:33:00.000","Title":"Optimizing RAM usage when training a learning model","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have a time series data which looks something like this\nLoan_id Loan_amount Loan_drawn_date\n id_001 2000000 2015-7-15\n id_003 100 2014-7-8\n id_009 78650 2012-12-23\n id_990 100 2018-11-12\nI am trying to build a Arima forecasting model on this data which has round about 550 observations. These are the steps i have followed \n\nConverted the time series data into daily data and replaced NA values with 0. the data look something like this \nLoan_id Loan_amount Loan_drawn_date \nid_001 2000000 2015-7-15\nid_001 0 2015-7-16\nid_001 0 2015-7-17\nid_001 0 2015-7-18\nid_001 0 2015-7-19\nid_001 0 2015-7-20\n....\nid_003 100 2014-7-8\nid_003 0 2014-7-9\nid_003 0 2014-7-10\nid_003 0 2014-7-11\nid_003 0 2014-7-12\nid_003 0 2014-7-13\n....\nid_009 78650 2012-12-23\nid_009 0 2012-12-24\nid_009 0 2012-12-25\nid_009 0 2012-12-26\nid_009 0 2012-12-27\nid_009 0 2012-12-28\n...\nid_990 100 2018-11-12\nid_990 0 2018-11-13\nid_990 0 2018-11-14\nid_990 0 2018-11-15\nid_990 0 2018-11-16\nid_990 0 2018-11-17\nid_990 0 2018-11-18\nid_990 0 2018-11-19\nCan Anyone please suggest me how do i proceed ahead with these 0 values now?\nSeeing the variance in the loan amount numbers i would take log of the of the loan amount. i am trying to build the ARIMA model for the first time and I have read about all the methods of imputation but there is nothing i can find. Can anyone please tell me how do i proceed ahead in this data","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":148,"Q_Id":50747097,"Users Score":0,"Answer":"I don't know exactly about your specific domain problem, but these things apply usually in general:\n\nIf the NA values represent 0 values for your domain specific problem, then replace them with 0 and then fit the ARIMA model (this would for example be the case if you are looking at daily sales and on some days you have 0 sales)\nIf the NA values represent unknown values for your domain specific problem then do not replace them and fit your ARIMA model. (this would be the case, if on a specific day the employee forgot to write down the amount of sales and it could be any number). \n\nI probably would not use imputation at all. There are methods to fit an ARIMA model on time series that have missing values. Usually these algorithms should probably also implemented somewhere in python. 
(but I don't know since I am mostly using R)","Q_Score":0,"Tags":"python,time-series,missing-data,arima","A_Id":50774913,"CreationDate":"2018-06-07T17:31:00.000","Title":"ARIMA Forecasting","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I'm looking for a way to randomize lists in python (which I already know how to do) but to then make sure that two things aren't next to each other. For example, if I were to be seating people and numbering the listing going down by 0, 1, 2, 3, 4, 5 based on tables but 2 people couldn't sit next to each other how would I make the list organized in a way to prohibit the 2 people from sitting next to each other.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":70,"Q_Id":50759439,"Users Score":1,"Answer":"As you say that you know how to shuffle a list, the only requirement is that two elements are not next to each other.\nA simple way is to:\n\nshuffle the full list\nif the two elements are close, choose a random possible position for the second one\nexchange the two elements\n\nMaximum cost: one shuffle, one random choice, one exchange","Q_Score":2,"Tags":"python,python-3.x","A_Id":50759628,"CreationDate":"2018-06-08T11:15:00.000","Title":"Randomizing lists with variables in Python 3","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I want to check the SD card size in bash or python. Right now I know df can check it when the SD card is mounted or fdisk -l if root is available.\nBut I want to know how to check the SD card size without requiring mounting the card to the file system or requiring the root permission? For example, if the SD card is not mounted and I issue df -h \/dev\/sdc, this will return a wrong size. In python, os.statvfs this function returns the same content as well. I search on stack overflow but did not find a solution yet.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":413,"Q_Id":50769974,"Users Score":0,"Answer":"Well, I found the lsblk -l can do the job. It tells the total size of the partitions.","Q_Score":0,"Tags":"python,linux,bash","A_Id":50802107,"CreationDate":"2018-06-09T00:49:00.000","Title":"how to check the SD card size before mounted and do not require root","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"There is a website that claims to predict the approximate salary of an individual on the basis of the following criteria presented in the form of individual drop-down\n\nAge : 5 options\nEducation : 3 Options\nSex : 3 Options\nWork Experience : 4 Options\nNationality: 12 Options\n\nOn clicking the Submit button, the website gives a bunch of text as output on a new page with an estimate of the salary in numerals. 
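The seating answer above can be made concrete. Here is a sketch of its three steps, shuffle, detect adjacency, swap to a random legal position; the names are illustrative:

```python
import random

def shuffle_apart(seats, a, b):
    """Shuffle seats, then ensure a and b are not next to each other."""
    random.shuffle(seats)                     # step 1: plain shuffle
    i, j = seats.index(a), seats.index(b)
    if abs(i - j) == 1:                       # step 2: they ended up adjacent
        legal = [k for k in range(len(seats)) if abs(k - i) > 1 and k != j]
        k = random.choice(legal)              # step 3: random legal position
        seats[j], seats[k] = seats[k], seats[j]
    return seats

print(shuffle_apart(["ann", "bob", "carl", "dee", "ed"], "ann", "bob"))
```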
\nSo, there are technically 5*3*3*4*12 = 2160 data points. I want to get that and arrange it in an excel sheet. Then I would run a regression algorithm to guess the function this website has used. This is what I am looking forward to achieve through this exercise. This is entirely for learning purposes since I'm keen on learning these tools.\nBut I don't know how to go about it? Any relevant tutorial, documentation, guide would help! I am programming in python and I'd love to use it to achieve this task!\nThanks!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":50,"Q_Id":50776071,"Users Score":1,"Answer":"If you are uncomfortable asking them for database as roganjosh suggested :) use Selenium. Write in Python a script that controls Web Driver and repeatedly sends requests to all possible combinations. The script is pretty simple, just a nested loop for each type of parameter\/drop down.\nIf you are sure that value of each type do not depend on each other, check what request is sent to the server. If it is simple URL encoded, like age=...&sex=...&..., then Selenium is not needed. Just generate such URLa for all possible combinations and call the server.","Q_Score":1,"Tags":"python,selenium,selenium-webdriver,web-scraping,regression","A_Id":50776223,"CreationDate":"2018-06-09T15:59:00.000","Title":"How to write a python program that 'scrapes' the results from a website for all possible combinations chosen from the given drop down menus?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have a problem with rasa core, let's suppose that I have a rasa-nlu able to detect time \neg \"let's start tomorrow\" would get the entity time: 2018-06-10:T18:39:155Z\nOk, now I want next branches, or decisions to be conditioned by: \n\ntime is in the past \ntime before one month from now \ntime is beyond 1\nmonth\n\nI do not know how to do that. I do not know how to convert it to a slot able to influence the dialog. 
My only idea would be to have an action that converts the date to a categorical slot right after detecting time, but I see two problems with that approach: \n\none, it would already be too late, meaning that if I do it with a\nposterior action it means the rasa-core has already decided what\ndecision to take without using the date\nand secondly, I do not know how to save it, because if I have a\nstories.md that compares a detected date like in the example with\nthe current time, maybe in the time of the example it was beyond one\nmonth but now it is in the past, so the rest of that story would be\nwrong.\n\nI am pretty lost and I do not know how to deal with this, thanks a lot!!!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":568,"Q_Id":50776518,"Users Score":0,"Answer":"I think you could have validation in the custom form, where it performs validation on the time and chooses the next action based on that time.\nYour stories will have to be trained to handle the different action paths.","Q_Score":0,"Tags":"python,date,rasa-nlu,rasa-core","A_Id":54032632,"CreationDate":"2018-06-09T16:51:00.000","Title":"Rasa-core, dealing with dates","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am building a recommender system which does Multi Criteria based ranking of car alternatives. I just need to do ranking of the alternatives in a meaningful way. I have ways of asking user questions via a form.\nEach car will be judged on the following criteria: price, size, electric\/non electric, distance etc. As you can see it's a mix of various data types, including ordinal, cardinal (count) and quantitative data.\nMy question is as follows:\n\nWhich technique should I use for incorporating all the models into a single score which I can rank? I looked at the normalized Weighted sum model, but I have a hard time assigning weights to ordinal (ranked) data. I tried using the SMARTER approach for assigning numerical weights to ordinal data but I'm not sure if it is appropriate. Please help!\nAfter someone helps me figure out the best ranking method, what if the best ranked alternative isn't good enough on an absolute scale? How do I check that, so that I can enlarge the alternative set further?\n\n3. Since the criteria mentioned above (price, etc.) are all in different units, is there a good method to normalize mixed data types belonging to different scales? Does it even make sense to do so, given that the data belongs to many different types?\nAny help on these problems will be greatly appreciated! Thank you!","AnswerCount":2,"Available Count":1,"Score":-0.1973753202,"is_accepted":false,"ViewCount":972,"Q_Id":50784441,"Users Score":-2,"Answer":"I am happy to see that you are willing to use a multiple criteria decision making tool. You can use the Analytic Hierarchy Process (AHP), Analytic Network Process (ANP), TOPSIS, VIKOR etc. Please refer to the relevant papers. You can also refer to my papers. 
\nKrishnendu Mukherjee","Q_Score":1,"Tags":"python,statistics,ranking,recommendation-engine,economics","A_Id":50991998,"CreationDate":"2018-06-10T13:57:00.000","Title":"Multi criteria alternative ranking based on mixed data types","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":1},{"Question":"There is plenty of info on how to use what seems to be third-party packages that allow you to access your sFTP by inputting your credentials into these packages. \nMy dilemma is this: How do I know that these third-party packages are not sharing my credentials with developers\/etc?\nThank you in advance for your input.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":53,"Q_Id":50806632,"Users Score":0,"Answer":"Thanks everyone for the comments. \nTo distill it: Unless you do a code review yourself or you get the sftp package from a verified vendor (ie - packages made by Amazon for AWS), you cannot assume that these packages are \"safe\" and won't post your info to a third-party site.","Q_Score":0,"Tags":"python,security,sftp,paramiko","A_Id":50827731,"CreationDate":"2018-06-11T22:00:00.000","Title":"Security of SFTP packages in Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"Can someone point me in the right direction on how I can sync up a live video and audio stream?\nI know it sounds simple but here is my issue: \n\nWe have 2 computers streaming to a single computer across multiple networks (which can be up to hundreds of miles away).\nAll three computers have their system clocks synchronized using NTP\nVideo computer gathers video and streams UDP to the Display computer\nAudio computer gathers audio and also streams to the Display computer\n\nThere is an application which accepts the audio stream. This application does two things (plays the audio over the speakers and sends network delay information to my application). I am not privy to the method by which they stream the audio.\nMy application displays the video and has two other tasks (which I haven't been able to figure out how to do yet).\n- I need to be able to determine the network delay on the video stream (ideally, it would be great to have a timestamp on the video stream from the Video computer which is related to that system clock so I can compare that timestamp to my own system clock).\n- I also need to delay the video display to allow it to be synced up with the audio.\nEverything I have found assumes that either the audio and video are being streamed from the same computer, or that the audio stream is being done by gstreamer so I could use some sync function. I am not privy to the actual audio stream. I am only given the amount of time the audio was delayed getting there (network delay).\nSo intermittently, I am given a number as the network delay for the audio (example: 250 ms). I need to be able to determine my own network delay for the video (which I don't know how to do yet). 
Then I need to compare to see if the audio delay is more than the video network delay. Say the video is 100ms ... then I would need to delay the video display by 150ms (which I also don't know how to do). \nANY HELP is appreciated. I am trying to pick up where someone else has left off in this design so it hasn't been easy for me to figure this out and move forward. Also being done in Python ... which further limits the information I have been able to find. Thanks. \nScott","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":3020,"Q_Id":50807114,"Users Score":0,"Answer":"A typical way to sync audio and video tracks or streams is to have a timestamp for each frame or packet, which is relative to the start of the streams.\nThis way you know that no matter how long it took to get to you, the correct audio to match with the video frame which is 20001999 (for example) milliseconds from the start is the audio which is also timestamped as 20001999 milliseconds from the start.\nTrying to sync audio and video based on an estimate of the network delay will be extremely hard as the delay is very unlikely to be constant, especially on any kind of IP network.\nIf you really have no timestamp information available, then you may have to investigate more complex approaches such as 'markers' in the stream metadata or even some intelligent analysis of the audio and video streams to sync on an event in the streams themselves.","Q_Score":0,"Tags":"python,video,udp,gstreamer,ntp","A_Id":50833346,"CreationDate":"2018-06-11T22:56:00.000","Title":"How to sync 2 streams from separate sources","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have a python script that records audio from an I2S MEMS microphone, connected to a Raspberry PI 3. \nThis script runs as it is supposed to when executed from the terminal. The problem appears when I run it as a service in the background. \nFrom what I have seen, the problem is that the script, as a service, has no access to a software_volume I have configured in asoundrc. 
The strange thing is that I can see this \"device\" in the list of devices using the get_device_info_by_index() function.\nFor audio capturing I use the pyaudio library, and for making the script a service I have utilized the supervisor utility.\nAny ideas what the problem might be and how I can make my script have access to asoundrc when it runs as a service?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":127,"Q_Id":50812469,"Users Score":2,"Answer":"The ~\/.asoundrc file is looked for in the home directory of the current user (this is what ~ means).\nPut it into the home directory of the user as which the service runs, or put the definitions into the global ALSA configuration file \/etc\/asound.conf.","Q_Score":1,"Tags":"python,raspbian,audio-recording,alsa","A_Id":50812689,"CreationDate":"2018-06-12T08:22:00.000","Title":"Python script as service has no access to asoundrc configuration file","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I'm using Odoo 10 mass mailing module to send newsletters. I have configured it but I don't know how to configure bounced emails. It is registering correctly sent emails, received (except that it is registering bounced as received), opened and clicks.\nCan anyone please help me?\nRegards","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":520,"Q_Id":50819764,"Users Score":0,"Answer":"I managed to solve this problem. I just configured the 'bounce' system parameter to an email with the same name.\nExample:\nI created an email bounce-register@example.com. Also remember to configure the alias domain in your general settings to 'example.com'\nAfter configuring your email to register bounces you need to configure an incoming mail server for this email (I configured it as an IMAP so I think that should do, although you can also configure it as a POP). That would be it.\nHope this info serves you well","Q_Score":0,"Tags":"python,email,odoo,odoo-10","A_Id":50861301,"CreationDate":"2018-06-12T14:34:00.000","Title":"Odoo 10 mass mailing configure bounces","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I'm using Google's Word2vec and I'm wondering how to get the top words that are predicted by a skipgram model that is trained using hierarchical softmax, given an input word?\nFor instance, when using negative sampling, one can simply multiply an input word's embedding (from the input matrix) with each of the vectors in the output matrix and take the one with the top value. However, in hierarchical softmax, there are multiple output vectors that correspond to each input word, due to the use of the Huffman tree. 
\nHow do we compute the likelihood value\/probability of an output word given an input word in this case?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":342,"Q_Id":50860649,"Users Score":0,"Answer":"I haven't seen any way to do this, and given the way hierarchical-softmax (HS) outputs work, there's no obviously correct way to turn the output nodes' activation levels into a precise per-word likelihood estimation. Note that:\n\nthe predict_output_word() method that (sort-of) simulates a negative-sampling prediction doesn't even try to handle HS mode\nduring training, neither HS nor negative-sampling modes make exact predictions \u2013 they just nudge the outputs to be more like the current training example would require\n\nTo the extent you could calculate all output node activations for a given context, then check each word's unique HS code-point node values for how close they are to \"being predicted\", you could potentially synthesize relative scores for each word \u2013 some measure of how far the values are from a \"certain\" output of that word. But whether and how each node's deviation should contribute to that score, and how that score might be indicative of an interpretable likelihood, is unclear. \nThere could also be issues because of the way HS codes are assigned strictly by word-frequency \u2013 so 'neighbor' words sharing mostly-the-same encoding may be very different semantically. (There were some hints in the original word2vec.c code that it could potentially be beneficial to assign HS-encodings by clustering related words to have similar codings, rather than by strict frequency, but I've seen little practice of that since.)\nI would suggest sticking to negative-sampling if interpretable predictions are important. (But also remember, word2vec isn't mainly used for predictions, it just uses the training-attempts-at-prediction to bootstrap a vector-arrangement that turns out to be useful for other tasks.)","Q_Score":2,"Tags":"python,c++,nlp,word2vec,gensim","A_Id":50868421,"CreationDate":"2018-06-14T15:07:00.000","Title":"How to predict word using trained skipgram model?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I use windows 7 and python 2.7\nWhen I used py2exe to make an .exe file I get the error;\nTraceback (most recent call last):\nFile \"mainpy\", line 17, in\nFile \"main.py\", line 17, in\nFile \"zipextimporter.pyc\", line 82, in load_module\nFile \"zipextimporter.pyc\", line 82, in load_module\nFile \"logging_init_.pyc\", line 26, in\nFile \"zipextimporter.pyc\", line 82, in load_module\nFile \"weakref.pyc\", line 14, in\nImportError: cannot import name _remove_dead_weakref\nThe same code could be used to make an .exe file on another computer, so there is nothing wrong with the code in main.py. A minor environmental difference may cause this problem. I used pycharm, python 2.7.10 and py2exe 0.6.9. 
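Following up on the gensim answer above, which mentions predict_output_word(): with a negative-sampling model it returns the most probable output words for a context. A minimal sketch with toy sentences (parameter names assume gensim 4.x):

```python
from gensim.models import Word2Vec

sentences = [["wild", "flower"], ["pink", "flower"], ["blue", "sky"]] * 100
model = Word2Vec(sentences, vector_size=32, min_count=1,
                 negative=5, hs=0, sg=1)  # skip-gram with negative sampling
# Top predicted output words for the given context words.
print(model.predict_output_word(["pink"], topn=3))
```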
On another computer all other configs are the same except it uses Sublime Text instead of PyCharm.\nCould anyone please tell me how to fix that?\nAnother tricky thing is that","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":475,"Q_Id":50870204,"Users Score":0,"Answer":"It is possible that the library does not exist on the other computer. Please check whether the library exists or not.","Q_Score":0,"Tags":"python-2.7,py2exe","A_Id":50870246,"CreationDate":"2018-06-15T06:29:00.000","Title":"ImportError: cannot import name _remove_dead_weakref python 2.7","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":1,"REVIEW":0},{"Question":"I'm trying to write a function which finds the length of a linked list in O(1).\nI know how to implement it in O(n) but I can't figure out how to do it in constant time... is that even possible?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":383,"Q_Id":50871476,"Users Score":0,"Answer":"It's not possible otherwise, because you have to pass through the entire linked list at least once, and that takes O(n).\nThe alternative is to use a variable which counts elements as they are inserted into the linked list.","Q_Score":0,"Tags":"python,list,linked-list,time-complexity","A_Id":54593506,"CreationDate":"2018-06-15T08:06:00.000","Title":"finding length of linked list in constant time python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I'm using bs4 and urllib.request in python 3.6 to webscrape. I have to open tabs \/ be able to toggle an \"aria-expanded\" in button tabs in order to access the div tabs I need.\nThe button tab when the tab is closed is as follows with <> instead of --:\nbutton id=\"0-accordion-tab-0\" type=\"button\" class=\"accordion-panel-title u-padding-ver-s u-text-left text-l js-accordion-panel-title\" aria-controls=\"0-accordion-panel-0\" aria-expanded=\"false\"\nWhen opened, the aria-expanded=\"true\" and the div tab appears underneath.\nAny idea on how to do this?\nHelp would be super appreciated.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1681,"Q_Id":50882732,"Users Score":0,"Answer":"BeautifulSoup is used to parse HTML\/XML content. You can't click around on a webpage with it. \nI recommend you look through the document to make sure it isn't just moving the content from one place to the other. 
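The linked-list answer above suggests a counter updated on insertion; that is the standard way to get the length in O(1). A minimal sketch:

```python
class Node:
    def __init__(self, value):
        self.value = value
        self.next = None

class LinkedList:
    def __init__(self):
        self.head = None
        self._size = 0            # maintained on every mutation

    def push_front(self, value):
        node = Node(value)
        node.next = self.head
        self.head = node
        self._size += 1           # O(1) bookkeeping instead of O(n) counting

    def __len__(self):
        return self._size         # O(1)

lst = LinkedList()
for v in (1, 2, 3):
    lst.push_front(v)
print(len(lst))                   # 3
```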
If the content is loaded through AJAX when the button is clicked then you will have to use something like selenium to trigger the click.\nAn easier option could be to check what url the content is fetched from when you click the button and make a similar call in your script if possible.","Q_Score":0,"Tags":"python-3.x,dom,web-scraping,beautifulsoup,urlopen","A_Id":50882877,"CreationDate":"2018-06-15T21:13:00.000","Title":"Accessing Hidden Tabs, Web Scraping With Python 3.6","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"When I run this simple code:\nfrom flask import Flask,render_template\napp = Flask(__name__)\n@app.route('\/')\ndef index():\n return 'this is the homepage'\nif __name__ == \"__main__\":\n app.run(debug=True, host=\"0.0.0.0\",port=8080)\nIt works fine but when I close it using ctrl+z in the terminal and try to run it again I get OSError: [Errno 98] Address already in use\nSo I tried changing the port address and re-running it which works for some of the port numbers I enter. But I want to know a graceful way to clear the address being used by previous program so that it is free for the current one. \nAlso is what is the apt way to shutdown a server and free the port address.\nKindly tell a simple way to do so OR explain the method used fully because I read solutions to similar problems but didn't understand any of it.\nWhen I run\nnetstat -tulpn\nThe output is : \n(Not all processes could be identified, non-owned process info\n will not be shown, you would have to be root to see it all.)\nActive Internet connections (only servers)\nProto Recv-Q Send-Q Local Address Foreign Address State PID\/Program name\ntcp 0 0 127.0.1.1:53 0.0.0.0:* LISTEN -\ntcp 0 0 127.0.0.1:631 0.0.0.0:* LISTEN -\ntcp 0 0 0.0.0.0:3689 0.0.0.0:* LISTEN 4361\/rhythmbox\ntcp6 0 0 ::1:631 :::* LISTEN -\ntcp6 0 0 :::3689 :::* LISTEN 4361\/rhythmbox\nudp 0 0 0.0.0.0:5353 0.0.0.0:* 3891\/chrome\nudp 0 0 0.0.0.0:5353 0.0.0.0:* -\nudp 0 0 0.0.0.0:39223 0.0.0.0:* -\nudp 0 0 127.0.1.1:53 0.0.0.0:* -\nudp 0 0 0.0.0.0:68 0.0.0.0:* -\nudp 0 0 0.0.0.0:631 0.0.0.0:* -\nudp 0 0 0.0.0.0:58140 0.0.0.0:* -\nudp6 0 0 :::5353 :::* 3891\/chrome\nudp6 0 0 :::5353 :::* -\nudp6 0 0 :::41938 :::* -\nI'm not sure how to interpret it. \nthe output of ps aux | grep 8080 \nis :\nshreyash 22402 0.0 0.0 14224 928 pts\/2 S+ 01:20 0:00 grep --color=auto 8080\nI don't know how to interpret it.\nWhich one is the the process name and what is it's id?","AnswerCount":4,"Available Count":2,"Score":-0.049958375,"is_accepted":false,"ViewCount":10441,"Q_Id":50891073,"Users Score":-1,"Answer":"You will have another process listening on port 8080. You can check to see what that is and kill it. You can find processes listening on ports with netstat -tulpn. 
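To illustrate the Selenium route from the web-scraping answer above: click the accordion button, then read the panel it expands. The URL is hypothetical; the element ids come from the question (selenium 4 API assumed):

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.com/accordion-page")   # hypothetical URL
button = driver.find_element(By.ID, "0-accordion-tab-0")
button.click()                     # toggles aria-expanded to "true"
panel = driver.find_element(By.ID, "0-accordion-panel-0")
print(panel.text)                  # content loaded after the click
driver.quit()
```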
Before you do that, check to make sure you don't have another terminal window open with the running instance.","Q_Score":0,"Tags":"python,flask","A_Id":50891141,"CreationDate":"2018-06-16T19:30:00.000","Title":"How to I close down a python server built using flask","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"When I run this simple code:\nfrom flask import Flask,render_template\napp = Flask(__name__)\n@app.route('\/')\ndef index():\n return 'this is the homepage'\nif __name__ == \"__main__\":\n app.run(debug=True, host=\"0.0.0.0\",port=8080)\nIt works fine but when I close it using ctrl+z in the terminal and try to run it again I get OSError: [Errno 98] Address already in use\nSo I tried changing the port address and re-running it which works for some of the port numbers I enter. But I want to know a graceful way to clear the address being used by previous program so that it is free for the current one. \nAlso is what is the apt way to shutdown a server and free the port address.\nKindly tell a simple way to do so OR explain the method used fully because I read solutions to similar problems but didn't understand any of it.\nWhen I run\nnetstat -tulpn\nThe output is : \n(Not all processes could be identified, non-owned process info\n will not be shown, you would have to be root to see it all.)\nActive Internet connections (only servers)\nProto Recv-Q Send-Q Local Address Foreign Address State PID\/Program name\ntcp 0 0 127.0.1.1:53 0.0.0.0:* LISTEN -\ntcp 0 0 127.0.0.1:631 0.0.0.0:* LISTEN -\ntcp 0 0 0.0.0.0:3689 0.0.0.0:* LISTEN 4361\/rhythmbox\ntcp6 0 0 ::1:631 :::* LISTEN -\ntcp6 0 0 :::3689 :::* LISTEN 4361\/rhythmbox\nudp 0 0 0.0.0.0:5353 0.0.0.0:* 3891\/chrome\nudp 0 0 0.0.0.0:5353 0.0.0.0:* -\nudp 0 0 0.0.0.0:39223 0.0.0.0:* -\nudp 0 0 127.0.1.1:53 0.0.0.0:* -\nudp 0 0 0.0.0.0:68 0.0.0.0:* -\nudp 0 0 0.0.0.0:631 0.0.0.0:* -\nudp 0 0 0.0.0.0:58140 0.0.0.0:* -\nudp6 0 0 :::5353 :::* 3891\/chrome\nudp6 0 0 :::5353 :::* -\nudp6 0 0 :::41938 :::* -\nI'm not sure how to interpret it. \nthe output of ps aux | grep 8080 \nis :\nshreyash 22402 0.0 0.0 14224 928 pts\/2 S+ 01:20 0:00 grep --color=auto 8080\nI don't know how to interpret it.\nWhich one is the the process name and what is it's id?","AnswerCount":4,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":10441,"Q_Id":50891073,"Users Score":2,"Answer":"It stays alive because you're not closing it. 
With Ctrl+Z you're suspending the process and detaching it from the current terminal without killing it.\nTo stop the execution use Ctrl+C","Q_Score":0,"Tags":"python,flask","A_Id":50891134,"CreationDate":"2018-06-16T19:30:00.000","Title":"How to I close down a python server built using flask","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am a python newbie and I have a controller that gets POST requests.\nI am trying to print the request it receives to a log file; I am able to print the body, but how can I extract the whole request, including the headers?\nI am using request.POST.get() to get the body\/data from the request.\nThanks","AnswerCount":2,"Available Count":1,"Score":-0.0996679946,"is_accepted":false,"ViewCount":264,"Q_Id":50903244,"Users Score":-1,"Answer":"request.POST should give you the POST body. If it is a GET request, use request.GET.\nIf the request body is JSON, use request.data","Q_Score":0,"Tags":"python,django","A_Id":50903304,"CreationDate":"2018-06-18T05:46:00.000","Title":"How to print all received post requests including headers in python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am having trouble adding conda to my environment variables on windows. I installed anaconda 3, though I didn't install python, so neither pip nor pip3 is working in my prompt. I viewed a few posts online but I didn't find anything regarding how to add conda to my environment variables. \nI tried to create a PYTHONPATH variable which contained every single folder in Anaconda 3, though it didn't work.\nMy anaconda prompt isn't working either. :(\nso...How do I add conda and pip to my environment variables or path ?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":20117,"Q_Id":50906037,"Users Score":0,"Answer":"Thanks guys for helping me out. I solved the problem by reinstalling anaconda (several times :[ ), cleaning every log and resetting the path variables via set path= in the windows power shell (since I got some problems reinstalling anaconda adding the folder to PATH[specifically \"unable to load menus\" or something like that])","Q_Score":2,"Tags":"python,path,environment-variables,anaconda","A_Id":50909686,"CreationDate":"2018-06-18T09:08:00.000","Title":"Add conda to my environment variables or path?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I'm trying to set up a beta environment on Heroku for my Django-based project, but when I install I am getting:\n\nerror in cryptography setup command: Invalid environment marker:\n python_version < '3'\n\nI've done some googling, and it is suggested that I upgrade setuptools, but I can't figure out how to do that. (Putting setuptools in requirements.txt gives me a different error message.) 
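On the Django answer above: a sketch of logging the full request, headers included, inside a view. request.headers assumes Django 2.2+; on older versions the same data lives in the HTTP_* keys of request.META:

```python
import logging
from django.http import HttpResponse

logger = logging.getLogger(__name__)

def my_view(request):
    for name, value in request.headers.items():
        logger.info("header %s: %s", name, value)   # all request headers
    logger.info("body: %s", request.body)           # raw request payload
    logger.info("POST data: %s", request.POST)      # parsed form fields
    return HttpResponse("ok")
```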
\nSadly, I'm still on Python 2.7, if that matters.","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":500,"Q_Id":50913601,"Users Score":0,"Answer":"The problem ended up being the Heroku \"buildpack\" that I was using. I had been using the one from \"thenovices\" for a long time so that I could use numpy, scipy, etc. \nSadly, that buildpack specifies an old version of setuptools and python, and those versions did not understand some of the new instructions (python_version) in the newer setup files for cryptography.\nIf you're facing this problem, Heroku's advice is to move to Docker-based Heroku, rather than \"traditional\" Heroku.","Q_Score":0,"Tags":"python,heroku","A_Id":51125635,"CreationDate":"2018-06-18T16:13:00.000","Title":"getting \"invalid environment marker\" when trying to install my python project","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":1,"REVIEW":0},{"Question":"I am building a Flask REST API and I am using Postman to make HTTP POST requests to my API. I want to use the werkzeug debugger, but Postman won't allow me to put in the debugging pin and debug the code from Postman. What can I do?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":331,"Q_Id":50926542,"Users Score":0,"Answer":"I have never needed a debugger for Postman. It is not the kind of tool where you need to step through a long stretch of code to test one endpoint.\nIt offers a good option - the console. I have yet to run into a problem that this simple element couldn't help me with.","Q_Score":2,"Tags":"python-3.x,postman,flask-restful,werkzeug","A_Id":50954032,"CreationDate":"2018-06-19T10:47:00.000","Title":"how to use the Werkzeug debugger in postman?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I'm new to coding and I have been learning it on Jupyter. I have anaconda, Sublime Text 3, and the numpy package installed on my Mac. \nOn Jupyter, we would import numpy by simply typing\n import numpy as np\nHowever, this doesn't seem to work on Sublime as I get the error ModuleNotFoundError: No module named 'numpy'\nI would appreciate it if someone could guide me on how to get this working. Thanks!","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":1514,"Q_Id":50929439,"Users Score":1,"Answer":"If you have Anaconda, install Spyder. 
\nIf you continue to have this problem, you could check all the libraries installed from Anaconda.\nI suggest you install numpy from Anaconda.","Q_Score":1,"Tags":"python,numpy,sublimetext3","A_Id":50929501,"CreationDate":"2018-06-19T13:14:00.000","Title":"Importing Numpy into Sublime Text 3","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":1,"ERRORS":1,"REVIEW":0},{"Question":"I have a script called \"RiskTemplate.py\" which generates a pandas dataframe consisting of 156 columns. I created two additional columns, which gives me a total count of 158 columns. However, when I run this \"RiskTemplate.py\" script in another script using the code below, the dataframe only pulls the original 156 columns I had before the two additional columns were added.\nexec(open(\"RiskTemplate.py\").read()) \nHow can I get the reference script to pull in the revised dataframe from the underlying script \"RiskTemplate.py\"?\nHere are the lines creating the two additional dataframe columns; they work as intended when I run them directly in the \"RiskTemplate.py\" script. The original dataframe is pulled from SQL via df = pd.read_sql(query,connection)\ndf['LMV % of NAV'] = df['longmv']\/df['End of Month NAV']*100\ndf['SMV % of NAV'] = df['shortmv']\/df['End of Month NAV']*100","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":21,"Q_Id":50934946,"Users Score":1,"Answer":"I figured it out, sorry for the confusion. I did not save the updated RiskTemplate.py in the same folder that the other reference script was looking at! Newbie!","Q_Score":0,"Tags":"python,dataframe,reference","A_Id":50935236,"CreationDate":"2018-06-19T18:36:00.000","Title":"dataframe from underlying script not updating","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I want to accept only those strings having the pattern 'wild.flower', 'pink.flower', i.e. any word preceding '.flower', but the word should not contain a dot. For example, \"pink.blue.flower\" is unacceptable. Can anyone help with how to do this in Python using regex?","AnswerCount":5,"Available Count":2,"Score":0.0798297691,"is_accepted":false,"ViewCount":1085,"Q_Id":50939145,"Users Score":2,"Answer":"You are looking for \"^\\w+\\.flower$\".","Q_Score":0,"Tags":"python,regex","A_Id":50939346,"CreationDate":"2018-06-20T01:59:00.000","Title":"Python regex to match words not having dot","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I want to accept only those strings having the pattern 'wild.flower', 'pink.flower', i.e. any word preceding '.flower', but the word should not contain a dot. For example, \"pink.blue.flower\" is unacceptable. Can anyone help with how to do this in Python using regex?
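A quick concreteness check for the two regex approaches given for this question (the ^\w+\.flower$ full match above, and the lookbehind variant in the answer that follows); the sample strings are invented for illustration:

```python
import re

# Full-string form: \w+ cannot contain a dot,
# so 'pink.blue.flower' is rejected outright.
assert re.fullmatch(r"\w+\.flower", "wild.flower")
assert not re.fullmatch(r"\w+\.flower", "pink.blue.flower")

# Search form with a negative lookbehind, for matching inside longer text:
# the word before '.flower' must not itself be preceded by a dot.
pattern = re.compile(r"\b(?<!\.)\w+\.flower")
assert pattern.search("a wild.flower here")
assert not pattern.search("pink.blue.flower")
```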
","AnswerCount":5,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":1085,"Q_Id":50939145,"Users Score":0,"Answer":"Your case of pink.blue.flower is unclear. There are 2 possibilities:\n\nMatch only blue (cut off the preceding dot and what came before it).\nReject this case altogether (you want to match a word preceding .flower\nonly if it is not preceded by a dot).\n\nIn the first case, accept the other answers.\nBut if you want the second solution, use: \\b(?<!\\.)\\w+\\.flower\n Settings > Project > Project Interpreter.","Q_Score":0,"Tags":"python,django,django-models,pycharm,django-celery","A_Id":50993837,"CreationDate":"2018-06-22T17:43:00.000","Title":"module can't be installed in Django virtual environment","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"Quite simple: \nIf I perform t-SNE in Python for high-dimensional data then I get 2 or 3 coordinates that reflect each new point. \nBut how do I map these to the original IDs? \nOne way that I can think of is, if the indices are kept fixed the entire time, then I can: \n\nPick a point in t-SNE\nSee what row it was in t-SNE (e.g. index 7)\nGo to the original data and pick out row\/index 7.\n\nHowever, I don't know how to check if this actually works. My data is super high-dimensional and it is very hard to make sense of it with a normal \"sanity check\".\nThanks a lot!\nBest,","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":660,"Q_Id":50993934,"Users Score":1,"Answer":"If you are using sklearn's t-SNE, then your assumption is correct. The ordering of the inputs matches the ordering of the outputs. So if you do y=TSNE(n_components=n).fit_transform(x) then y and x will be in the same order, so y[7] will be the embedding of x[7]. You can trust scikit-learn that this will be the case.","Q_Score":3,"Tags":"python,mapping","A_Id":50994909,"CreationDate":"2018-06-22T18:41:00.000","Title":"Getting IDs from t-SNE plot?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have this large XML file on my drive. The file is too large to be opened with SublimeText or other text editors. \nIt is also too large to be loaded into memory by the regular XML parsers.\nTherefore, I don't even know what's inside of it!\nIs it possible to \"print\" a few rows of the XML file (as if it were some sort of text document) so that I have an idea of the nodes\/content? \nI am surprised not to find an easy solution to that issue.\nThanks!","AnswerCount":3,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":1845,"Q_Id":50994801,"Users Score":1,"Answer":"This is one of the few things I ever do on the command line: the \"more\" command is your friend. 
Just type\n\nmore big.xml","Q_Score":0,"Tags":"python,xml","A_Id":50995481,"CreationDate":"2018-06-22T19:56:00.000","Title":"how to print the first lines of a large XML?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I used win10. When I installed Visual Studio2017, I configure the Python3 environment. And then after half year I installed Anaconda(Python3) in another directory. Now I have two interpreters in different directories. \n\nNow, no matter in what IDE I code the codes, after I save it and double click it in the directory, the Python File is run by the interpreter configured by VS2017.\n\nWhy do I know that? I use sys.path to get to know it. But when I use VS2017 to run the code, it shows no mistake. The realistic example is that I pip install requests in cmd, then I import it in a Python File. Only when I double click it, the Traceback says I don't have this module. In other cases it works well.\n\nSo, how to change the default python interpreter of the cmd.exe?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":36,"Q_Id":51016767,"Users Score":0,"Answer":"Just change the interpreter order of the python in the PATH is enough.\nIf you want to use python further more, I suggest you to use virtual environment tools like pipenv to control your python interpreters and modules.","Q_Score":0,"Tags":"python,windows,python-3.x,cmd","A_Id":51024834,"CreationDate":"2018-06-25T05:26:00.000","Title":"Two python3 interpreters on win10 cause misunderstanding","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"When I installed the new version of python 3.6.5, JGRASP was using the previous version, how can I use the new version on JGRASP?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":512,"Q_Id":51018026,"Users Score":0,"Answer":"By default, jGRASP will use the first \"python\" on the system path. \nThe new version probably only exists as \"python3\". If that is the case, install jGRASP 2.0.5 Beta if you are using 2.0.4 or a 2.0.5 Alpha. Then, go to \"Settings\" > \"Compiler Settings\" > \"Workspace\", select language \"Python\" if not already selected, select environment \"Python 3 (python 3) - generic\", hit \"Use\" button, and \"OK\" the dialog.","Q_Score":0,"Tags":"python,python-3.x,macos,jgrasp","A_Id":51036825,"CreationDate":"2018-06-25T07:13:00.000","Title":"How can I update Python version when working on JGRASP on mac os?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0,"API_CHANGE":1,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have a html page with text box and submit button. When somebody enters data in text box and click submit, i have to pass that value to a python script which does some operation and print output. 
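For the large-XML question above, the same head-of-file trick is easy in Python itself, since file objects iterate lazily; the filename is illustrative:

```python
from itertools import islice

# Stream the first 20 lines without loading the whole file into memory.
with open("big.xml", "r", encoding="utf-8", errors="replace") as f:
    for line in islice(f, 20):
        print(line.rstrip())
```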
Can someone let me know how to achieve this? I did some research on stackoverflow\/google but found nothing conclusive. I have Python 2.7, Windows 10 and Apache Tomcat. Any help would be greatly appreciated.\nThanks,\nJagadeesh.K","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":222,"Q_Id":51024719,"Users Score":0,"Answer":"Short answer: You can't just run a Python script in the client's browser. It doesn't work that way.\nIf you want to execute some Python when the user does something, you will have to run a web app, as the other answer suggested.","Q_Score":0,"Tags":"python,windows,apache,flask","A_Id":51027426,"CreationDate":"2018-06-25T13:30:00.000","Title":"Passing command line parameters to python script from html page","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I was surprised to be unable to find any information anywhere on the web on how to do this properly, but I suppose my surprise ought to be mitigated by the fact that normally this can be done via Microsoft's 'Add or Remove Programs' in the Control Panel. \nThis option is not available to me at this time, since I had installed Python again elsewhere (without having uninstalled it), then uninstalled that installation the standard way. Now, despite there being no option for uninstalling conda via the Control Panel, conda persists in my command line.\nNow, the goal is to remove every trace of it, to end up in a state as though conda never existed on my machine in the first place, before I reinstall it to the necessary location. \nI have a bad feeling that if I simply delete the files and then reinstall, this will cause problems. Does anyone have any guidance on how to achieve the above?","AnswerCount":2,"Available Count":1,"Score":1.0,"is_accepted":false,"ViewCount":12353,"Q_Id":51039936,"Users Score":6,"Answer":"Open the folder where you installed miniconda, then search for uninstall.exe. Open it and it will erase miniconda for you.","Q_Score":6,"Tags":"python,windows,conda,uninstallation,miniconda","A_Id":65146967,"CreationDate":"2018-06-26T09:53:00.000","Title":"How to uninstall (mini)conda entirely on Windows","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I already installed Python 3.5.2 and TensorFlow (with Python 3.5.2).\nI want to install protobuf now. However, protobuf supports Python 3.5.0, 3.5.1, and 3.6.0,\nso I wonder which version I should install.\nMy question is: should I upgrade Python 3.5.2 to Python 3.6, or downgrade it to Python 3.5.1?\nI see some people are trying to downgrade Python 3.6 to Python 3.5.\nI googled how to change Python 3.5.2 to Python 3.5.1, but found no valuable information. 
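On the HTML-textbox-to-Python question above: as the answer says, the script can't run in the browser, so you need a small web app. A minimal Flask sketch of the idea; the route, field name, and processing step are invented for illustration:

```python
from flask import Flask, request

app = Flask(__name__)

@app.route("/run", methods=["POST"])
def run_task():
    # The submitted <input name="text"> value from the HTML form.
    value = request.form.get("text", "")
    result = value.upper()  # stand-in for the real processing
    return f"Processed: {result}"

if __name__ == "__main__":
    app.run(port=5000)
```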
I guess this is not usual option.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":116,"Q_Id":51053760,"Users Score":0,"Answer":"So it is version problem\none google post says change python version to a more general version.\nI am not sure how to change python3.5.2 to python3.5.1\nI just installed procobuf3.6\nI hope it works","Q_Score":0,"Tags":"python,tensorflow,protocol-buffers","A_Id":51164732,"CreationDate":"2018-06-27T02:35:00.000","Title":"protobuf, and tensorflow installation, which version to choose","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":1,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":1},{"Question":"I'm still new to writing scripts with Python and would really appreciate some guidance.\nI'm wondering how to continue executing my Python script from where it left off after a system restart.\nThe script essentially alternates between restarting and executing a task for example: restart the system, open an application and execute a task, restart the system, open another application and execute another task, etc... \nBut the issue is that once the system restarts and logs back in, all applications shut down including the terminal so the script stops running and never executes the following task. The program shuts down early without an error so the logs are not really of much use. Is there any way to reopen the script and continue from where it left off or prevent applications from being closed during a reboot ? Any guidance on the issue would be appreciated.\nThanks!\nAlso, I'm using a Mac running High Sierra for reference.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1385,"Q_Id":51055649,"Users Score":0,"Answer":"You could write your current progress to a file just before you reboot and read said file on Programm start.\nAbout the automatic restart of the script after reboot: you could have the script to put itself in the Autostart of your system and after everything is done remove itself from it.","Q_Score":2,"Tags":"python,scripting-language","A_Id":51055706,"CreationDate":"2018-06-27T06:09:00.000","Title":"How to Resume Python Script After System Reboot?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":1,"REVIEW":0},{"Question":"I package my python (flask) application with docker. Within my app I'm generating UTC date with datetime library using datetime.utcnow().\nUnfortunately, when I inspect saved data with MongoDB Compass the UTC date is offset two hours (to my local time zone). All my docker containers have time zone set to Etc\/UTC. Morover, mongoengine connection to MongoDB uses tz_aware=False and tzinfo=None, what prevents on fly date conversions.\nWhere does the offset come from and how to fix it?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":5085,"Q_Id":51098943,"Users Score":15,"Answer":"Finally, after trying to prove myself wrong, and hairless head I found the cause and solution for my problem. \nWe are living in the world of illusion and what you see is not what you get!!!. 
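For the resume-after-reboot question above, the checkpoint-file idea from the answer might look like this; the file name and step bookkeeping are illustrative:

```python
import json
import os

STATE_FILE = "progress.json"

def load_step():
    # Resume from the last recorded step, or start at 0 on first run.
    if os.path.exists(STATE_FILE):
        with open(STATE_FILE) as f:
            return json.load(f)["step"]
    return 0

def save_step(step):
    # Record progress just before triggering the reboot.
    with open(STATE_FILE, "w") as f:
        json.dump({"step": step}, f)
```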
I decided to inspect my data with the mongo shell client\nrather than the MongoDB Compass GUI. I found that the data arriving in the database contained the correct UTC date. This ruled out my previous\nassumption that there was something wrong with my Python application or the environment the application lives in. What was left was MongoDB Compass itself.\nAfter changing the time zone on my machine to a random time zone and refreshing the collection within MongoDB Compass, the displayed UTC date changed to a date that fits the random time zone. \nBe aware that MongoDB Compass displays whatever is saved in the database's Date field, offset by your machine's time zone. For example, if you saved a UTC time equivalent to 8:00 am, \nand your machine's time zone is Europe\/Warsaw, then MongoDB Compass will display 10:00 am.","Q_Score":5,"Tags":"python,mongodb,docker,mongoengine,mongodb-compass","A_Id":51099120,"CreationDate":"2018-06-29T09:49:00.000","Title":"Incorrect UTC date in MongoDB Compass","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"In pandas, how do I replace '&amp;' with '&' in all columns, where '&amp;' could be at any position in a string?\nFor example, in column Title if there is a value 'Good &amp; bad', how do I replace it with 'Good & bad'?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":11751,"Q_Id":51121170,"Users Score":0,"Answer":"Try this\ndf['Title'] = df['Title'].str.replace(\"&amp;\", \"&\")","Q_Score":5,"Tags":"python,python-3.x,pandas","A_Id":51121201,"CreationDate":"2018-07-01T07:10:00.000","Title":"How to replace all string in all columns using pandas?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I'm wondering what the symbol is or if I am even able to get historical price data on BTC, ETH, etc. denominated in United States Dollars. \nRight now, when I make a call to the client such as: \nClient.get_symbol_info('BTCUSD')\nit returns nothing.\nDoes anyone have any idea how to get this info? Thanks!","AnswerCount":1,"Available Count":1,"Score":0.537049567,"is_accepted":false,"ViewCount":4098,"Q_Id":51127720,"Users Score":3,"Answer":"You cannot make trades on Binance with dollars, but you can with Tether (USDT), a cryptocurrency that is backed 1-to-1 with the dollar.\nTo solve that, use BTCUSDT\nChange BTCUSD to BTCUSDT","Q_Score":2,"Tags":"python,api,bitcoin,cryptocurrency,binance","A_Id":51128924,"CreationDate":"2018-07-01T23:33:00.000","Title":"Binance API: how to get the USD as the quote asset","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I created one task where I have a white background and black digits.\nI need to find the thickest digit. 
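Extending the pandas answer above from the Title column to every text column, assuming the HTML entity '&amp;' is the literal target, as the question's title suggests (the sample frame is invented):

```python
import pandas as pd

df = pd.DataFrame({"Title": ["Good &amp; bad"], "Note": ["up &amp; down"]})

# Plain substring replacement in every object (string) column.
for col in df.select_dtypes(include="object"):
    df[col] = df[col].str.replace("&amp;", "&", regex=False)

print(df)  # Title: 'Good & bad', Note: 'up & down'
```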
I have made my picture bw, recognized all symbols, but I don't understand, how to scale thickness. I have tried arcLength(contours), but it gave me the largest by size. I have tried morphological operations, but as I undestood, it helps to remove noises and another mistakes in picture, right? And I had a thought to check the distance between neighbour points of contours, but then I thought that it would be hard because of not exact and clear form of symbols(I draw tnem on paint). So, that's all Ideas, that I had. Can you help me in this question by telling names of themes in Comp. vision and OpenCV, that could help me to solve this task? I don't need exact algorithm of solution, only themes. And if that's not OpenCV task, so which is? What library? Should I learn some pack of themes and basics before the solution of my task?","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":2238,"Q_Id":51133962,"Users Score":1,"Answer":"One possible solution that I can think of is to alternate erosion and find contours till you have only one contour left (that should be the thicker). This could work if the difference in thickness is enough, but I can also foresee many particular cases that can prevent a correct identification, so it depends very much on how is your original image.","Q_Score":2,"Tags":"python,image,opencv,computer-vision","A_Id":51134741,"CreationDate":"2018-07-02T10:22:00.000","Title":"How can i scale a thickness of a character in image using python OpenCV?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":1},{"Question":"When I first execute this command it create model in my model.py but when I call it second time for another table in same model.py file then that second table replace model of first can anyone told the reason behind that because I am not able to find perfect solution for that?\n$ python manage.py inspectdb tablename > v1\/projectname\/models.py\nWhen executing this command second time for another table then it replace first table name.\n$ python manage.py inspectdb tablename2 > v1\/projectname\/models.py","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1132,"Q_Id":51137859,"Users Score":0,"Answer":"python manage.py inspectdb table1 table2 table3... > app_name\/models.py\nApply this command for inspection of multiple tables of one database in django.","Q_Score":1,"Tags":"django,python-3.x","A_Id":58061835,"CreationDate":"2018-07-02T13:55:00.000","Title":"django inspectdb, how to write multiple table name during inspection","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"If I had a column in a dataframe, and that column contained two possible categorical variables, how do I count how many times each variable appeared? \nSo e.g, how do I count how many of the participants in the study were male or female?\nI've tried value_counts, groupby, len etc, but seem to be getting it wrong. 
\nThanks","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":2617,"Q_Id":51140765,"Users Score":0,"Answer":"You could use len([x for x in df[\"Sex\"] if x == \"Male\"). This iterates through the Sex column of your dataframe and determines whether an element is \"Male\" or not. If it is, it is appended to a list via list comprehension. The length of that list is the number of Males in your dataframe.","Q_Score":0,"Tags":"python,python-3.x,pandas","A_Id":51140808,"CreationDate":"2018-07-02T17:04:00.000","Title":"Count Specific Values in Dataframe","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":1,"ERRORS":0,"REVIEW":0},{"Question":"We receive a .tar.gz file from a client every day and I am rewriting our import process using SSIS. One of the first steps in my process is to unzip the .tar.gz file which I achieve via a Python script.\nAfter unzipping we are left with a number of CSV files which I then import into SQL Server. As an aside, I am loading using the CozyRoc DataFlow Task Plus.\nMost of my CSV files load without issue but I have five files which fail. By reading the log I can see that the process is reading the Header and First line as though there is no HeaderRow Delimiter (i.e. it is trying to import the column header as ColumnHeader1ColumnValue1\nI took one of these CSVs, copied the top 5 rows into Excel, used Text-To-Columns to delimit the data then saved that as a new CSV file.\nThis version imported successfully.\nThat makes me think that somehow the original CSV isn't using {CR}{LF} as the row delimiter but I don't know how to check. Any suggestions?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":2081,"Q_Id":51160071,"Users Score":0,"Answer":"Seeing that you have EmEditor, you can use EmEditor to find the eol character in two ways:\n\nUse View > Character Code Value... at the end of a line to display a dialog box showing information about the character at the current position.\nGo to View > Marks and turn on Newline Characters and CR and LF with Different Marks to show the eol while editing. LF is displayed with a down arrow while CRLF is a right angle.\n\nSome other things you could try checking for are: file encoding, wrong type of data for a field and an inconsistent number of columns.","Q_Score":2,"Tags":"python,csv,ssis,delimiter,eol","A_Id":51195993,"CreationDate":"2018-07-03T17:27:00.000","Title":"Which newline character is in my CSV?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have two custom-written C routines that I would like to use as a part of a large Python application. I would prefer not to rewrite the C code in pure Python (or Cython, etc.), especially to maintain speed.\nWhat is the cleanest, easiest way that I can use my C code from my Python code? 
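For the category-counting question above, value_counts is the usual tool; the list-comprehension answer works but is more manual. A short sketch with an invented column:

```python
import pandas as pd

df = pd.DataFrame({"Sex": ["Male", "Female", "Male", "Male"]})

print(df["Sex"].value_counts())     # Male 3, Female 1
print((df["Sex"] == "Male").sum())  # just the male count: 3
```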
Or, what is the cleanest, easiest way for me to wrap my C code for use in my Python source?\nI know \"cleanest\" and \"easiest\" will attract opinions, but I really just need some good options for using custom pre-written code, versus many of the other answers\/tutorials which describe how to use full-on C libraries as CPython extensions.\nEDIT:\nCython and ctypes have both been suggested. Which is a better choice in my case? Each of the two routines I mentioned originally are very computationally intensive. They are used for image calculations and reconstructions, so my plan is to build a Python application around their use (with other functionality in mind that I already have in Python) with the C code run as needed for processing.","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":178,"Q_Id":51160830,"Users Score":1,"Answer":"Use cython to wrap your C code. In other words, create a CPython extension using Cython, that calls your C code.","Q_Score":0,"Tags":"python,c,python-3.x,python-3.5,python-c-api","A_Id":51160991,"CreationDate":"2018-07-03T18:21:00.000","Title":"Calling custom C subroutines in a Python application","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have been asked to create a system which has different functionalities. Assume service 1, service 2 and service 3. I need to run these services per hour to do something. \nTo make the system of those services I need: database, web interface for seeing the result of the process, caching and etc.\nThis is what I have thought about so far:\n\nI need kubernetes to orchestrate my services which are packaged as docker containers. I will deploy mySql to save my data and I can use Redis cache for caching.\nMy service are written by python scripts and Java and need to interact with each other through APIs.\nI think I can use AWS EKS for my kubernetes cluster \n\n\nthis is what I need to know: \n\nhow to deploy python or Java applications and connect them to each other and also connect them to a database service\nI also need to know how to schedule the application to run per hour so I can see the results in the web interface.\n\nPlease shoot any ideas or questions you have. \nAny help would be appreciated.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":42,"Q_Id":51164215,"Users Score":1,"Answer":"For python\/java applications, create docker images for both applications. 
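On the custom-C-routines question above: besides the accepted Cython route, ctypes is the lighter-weight alternative mentioned in the edit. A sketch, assuming a hypothetical reconstruct() routine compiled into a shared library; the library and function names are invented:

```python
import ctypes
import numpy as np

# Hypothetical library, e.g. built with: cc -shared -fPIC -o libimg.so img.c
lib = ctypes.CDLL("./libimg.so")

# Declare the C signature: void reconstruct(double *data, int n)
lib.reconstruct.argtypes = [ctypes.POINTER(ctypes.c_double), ctypes.c_int]
lib.reconstruct.restype = None

buf = np.zeros(1024, dtype=np.float64)
# Hand the numpy buffer to C without copying.
lib.reconstruct(buf.ctypes.data_as(ctypes.POINTER(ctypes.c_double)), buf.size)
```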
If these application run forever to serve traffic then deploy them as deployments.If you need to have only cron like functionality, deploy as Job in kubernetes.\nTo make services accessible, create services as selector for applications, so these services can route traffic to specific applications.\nDatabase or cache should be exposed as service endpoints so your applications are environment independent.","Q_Score":0,"Tags":"java,python,docker,kubernetes","A_Id":51166017,"CreationDate":"2018-07-04T00:03:00.000","Title":"kubernetes architecture for microservices application - suggestions","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am using search_ext_s() method of python-ldap to search results on the basis of filter_query, upon completion of search I get msg_id which I passed in result function like this ldap_object.result(msg_id) this returns tuple like this (100, attributes values) which is correct(I also tried result2, result3, result4 method of LDAP object), But how can I get response code for ldap search request, also if there are no result for given filter_criteria I get empty list whereas in case of exception I get proper message like this \nldap.SERVER_DOWN: {u'info': 'Transport endpoint is not connected', 'errno': 107, 'desc': u\"Can't contact LDAP server\"}\nCan somebody please help me if there exists any attribute which can give result code for successful LDAP search operation.\nThanks,\nRadhika","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":549,"Q_Id":51174045,"Users Score":0,"Answer":"An LDAP server simply may not return any results, even if there was nothing wrong with the search operation sent by the client. With python-ldap you get an empty result list. Most times this is due to access control hiding directory content. In general the LDAP server won't tell you why it did not return results.\n(There are some special cases where ldap.INSUFFICIENT_ACCESS is raised but you should expect the behaviour to be different when using different LDAP servers.)\nIn python-ldap if the search operation did not raise an exception the LDAP result code was ok(0). So your application has to deal with an empty search result in some application-specific way, e.g. by also raising a custom exception handled by upper layers.","Q_Score":0,"Tags":"python-ldap","A_Id":51365948,"CreationDate":"2018-07-04T12:45:00.000","Title":"search_s search_ext_s search_s methods of python-ldap library doesn't return any Success response code","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":1,"REVIEW":0},{"Question":"I am trying to learn ML using Kaggle datasets. In one of the problems (using Logistic regression) inputs and parameters matrices are of size (1110001, 8) & (2122640, 8) respectively.\nI am getting memory error while doing it in python. This would be same for any language I guess since it's too big. 
My question is how do they multiply matrices in real life ML implementations (since it would usually be this big)?\nThings bugging me :\n\nSome ppl in SO have suggested to calculate dot product in parts and then combine. But even then matrix would be still too big for RAM (9.42TB? in this case)\n\nAnd If I write it to a file wouldn't it be too slow for optimization algorithms to read from file and minimize function?\n\nEven if I do write it to file how would fmin_bfgs(or any opt. function) read from file?\n\nAlso Kaggle notebook shows only 1GB of storage available. I don't think anyone would allow TBs of storage space.\n\nIn my input matrix many rows have similar values for some columns. Can I use it my advantage to save space? (like sparse matrix for zeros in matrix)\nCan anyone point me to any real life sample implementation of such cases. Thanks!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":102,"Q_Id":51205149,"Users Score":0,"Answer":"I have tried many things. I will be mentioning these here, if anyone needs them in future:\n\nI had already cleaned up data like removing duplicates and\nirrelevant records depending on given problem etc. \nI have stored large matrices which hold mostly 0s as sparse matrix. \nI implemented the gradient descent using mini-batch method instead of plain old Batch method (theta.T dot X).\n\nNow everything is working fine.","Q_Score":0,"Tags":"python,numpy,machine-learning,scipy,logistic-regression","A_Id":51456740,"CreationDate":"2018-07-06T07:29:00.000","Title":"How to find dot product of two very large matrices to avoid memory error?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":1,"REVIEW":0},{"Question":"I use VS code for my Python projects and we have unit tests written using Python's unittest module. I am facing a weird issue with debugging unit tests.\nVSCode Version: May 2018 (1.24)\nOS Version: Windows 10 \nLet's say I have 20 unit tests in a particular project.\nI run the tests by right clicking on a unit test file and click 'Run all unit tests'\nAfter the run is complete, the results bar displays how many tests are passed and how many are failed. (e.g. 15 passed, 5 failed).\nAnd I can run\/debug individual test because there is a small link on every unit test function for that.\nIf I re-run the tests from same file, then the results bar displays the twice number of tests. (e.g. 30 passed, 10 failed)\nAlso the links against individual test functions disappear. 
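The two fixes the asker of the large-matrix question settled on can be sketched briefly: a sparse matrix for the mostly-repeated/zero entries, and a batched dot product so no huge intermediate is materialized at once. Shapes and batch size are illustrative:

```python
import numpy as np
from scipy import sparse

X = sparse.random(1_000_000, 8, density=0.3, format="csr")
w = np.random.rand(8)

# Accumulate X @ w in row batches instead of one giant allocation.
out = np.empty(X.shape[0])
batch = 100_000
for i in range(0, X.shape[0], batch):
    out[i:i + batch] = X[i:i + batch].dot(w)
```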
So I cannot run individual tests.\nThe only way to be able to run\/debug individual tests after this is by re-launching the VS code.\nAny suggestions on how to fix this?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":2562,"Q_Id":51215617,"Users Score":0,"Answer":"This was a bug in Python extension for VS code and it is fixed now.","Q_Score":1,"Tags":"python,python-2.7,visual-studio-code","A_Id":53988353,"CreationDate":"2018-07-06T17:58:00.000","Title":"Python Unit test debugging in VS code","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"So I was trying to install kivy, which lead me to install pip, and I went down a rabbit hole of altering directories. I am using PyCharm for the record.\nI would like to remove everything python related (including all libraries like pip) from my computer, and start fresh with empty directories, so when I download pycharm again, there will be no issues. \nI am using a Mac, so if any of you could let me know how to do that on a Mac, it would be greatly appreciated.\nCould I just open finder, search python, and delete all of the files (there are tons) or would that be too destructive?\nI hope I am making my situation clear enough, please comment any questions to clarify things.\nThanks!","AnswerCount":2,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":1027,"Q_Id":51236750,"Users Score":2,"Answer":"If you are familiar with the Terminal app, you can use command lines to uninstall Python from your Mac. For this, follow these steps:\n\n\nMove Python to Trash.\nOpen the Terminal app and type the following command line in the window: ~ alexa$ sudo rm -rf \/Applications\/Python\\ 3.6\/\nIt will require you to enter your administrator password to confirm the deletion.\n\n\nAnd for the PyCharm:\n\nJust remove the ~\/Library\/Caches\/PyCharm20 and\n ~\/Library\/Preferences\/PyCharm20 directories.\n\nOr if that won't be enough:\n\n\nGo to Applications > right click PyCharm > move to trash\nopen a terminal and run the following: find ~\/Library\/ -iname \"pycharm\"\nverify that all of the results are in fact related to PyCharm and not something else important you need to keep. Then, remove them all\n using the command: find ~\/Library -iname \"pycharm\" -exec rm -r \"{}\"\n \\;","Q_Score":2,"Tags":"python-3.x,directory,pycharm","A_Id":51237145,"CreationDate":"2018-07-08T23:33:00.000","Title":"Wondering how I can delete all of my python related files on Mac","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I'm trying to convert m4a audio file with artwork (cover) to mp3. I'm using ffmpeg to convert the audio.\nOnce it copies, the artwork is lost. I'm quite not sure, how to retain the cover. I found some reference about mutagen library but not sure again how to use to copy the artwork.\nAny help would be great.\n\nffmpeg -i source\/file -acodec libmp3lame -ab 128k destination.mp3\n\nUpdate:\nI'm reading the artwork and m4a to be able to attache it back. 
\nI can get the artwork by using\n\nartwork = audio.tags['covr']\n\nNow my problem is how do I save the artwork as an image in a new file?\nI tried the following:\n\nwith open(path\/to\/write, 'wb') as img:\n img.write(artwork)\n\nThis gives me an error on the write line:\n\n'list' does not support the buffer interface\n\nAny suggestions on how I can save the extracted covr data as an image?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":347,"Q_Id":51262309,"Users Score":0,"Answer":"If anyone is having the same issue:\nI ended up reading the artwork from the original file and attaching it back to the mp3.\n\nif 'covr' in audio.tags: # checks if it has a cover\n cover = audio.tags['covr'][0] # gets the cover","Q_Score":0,"Tags":"python,mp3,m4a,audio-converter","A_Id":51386699,"CreationDate":"2018-07-10T09:58:00.000","Title":"Lost artwork while converting .m4a to .mp3 (Python)","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":1,"REVIEW":0},{"Question":"Gensim's Word2Vec model takes as an input a list of lists, with the inner list containing the individual tokens\/words of a sentence. As I understand it, Word2Vec is used to \"quantify\" the context of words within a text using vectors. \nI am currently dealing with a corpus of text that has already been split into individual tokens and no longer contains an obvious sentence format (punctuation has been removed). I was wondering how I should input this into the Word2Vec model.\nSay I simply split the corpus into \"sentences\" of uniform length (10 tokens per sentence, for example); would this be a good way of inputting the data into the model? \nEssentially, I am wondering how the format of the input sentences (list of lists) affects the output of Word2Vec.","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":98,"Q_Id":51269058,"Users Score":1,"Answer":"That sounds like a reasonable solution. If you have access to data that is similar to your cleaned data, you could get the average sentence length from that data set. Otherwise, you could find other data in the language you are working with (from Wikipedia or another source) and get the average sentence length from there.\nOf course your output vectors will not be as reliable as if you had the correct sentence boundaries, but it sounds like word order was preserved, so there shouldn't be too much noise from incorrect sentence boundaries.","Q_Score":0,"Tags":"python,nlp,gensim,word2vec,word-embedding","A_Id":51270563,"CreationDate":"2018-07-10T15:26:00.000","Title":"Use proxy sentences from cleaned data","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have this issue: \n\nContextualVersionConflict: (pandas 0.22.0 (...),\n Requirement.parse('pandas<0.22,>=0.19'), {'scikit-survival'})\n\nI have even tried to uninstall pandas and install scikit-survival + dependencies via anaconda. 
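A fuller sketch of the cover-art transfer discussed in that artwork question, using mutagen. It assumes a JPEG cover (the 'covr' atom can also hold PNG) and that the mp3 already carries an ID3 tag (ID3() raises ID3NoHeaderError otherwise); the file paths are illustrative:

```python
from mutagen.mp4 import MP4
from mutagen.id3 import ID3, APIC

m4a = MP4("song.m4a")
covers = m4a.tags.get("covr")
if covers:
    cover = bytes(covers[0])  # first cover image as raw bytes

    # Attach it to the converted mp3 as a front-cover APIC frame.
    mp3_tags = ID3("song.mp3")
    mp3_tags.add(APIC(encoding=3, mime="image/jpeg", type=3,
                      desc="Cover", data=cover))
    mp3_tags.save()
```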
But it still does not work....\nAnyone with a suggestion on how to fix?\nThanks!","AnswerCount":1,"Available Count":1,"Score":0.761594156,"is_accepted":false,"ViewCount":2454,"Q_Id":51272642,"Users Score":5,"Answer":"Restarting jupyter notebook fixed it. But I am unsure why this would fix it?","Q_Score":3,"Tags":"python,pandas,scikit-learn","A_Id":51272663,"CreationDate":"2018-07-10T19:19:00.000","Title":"Python: ContextualVersionConflict: pandas 0.22.0; Requirement.parse('pandas<0.22,>=0.19'), {'scikit-survival'})","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am trying to write a program in Python (with OpenCV) that compares 2 images, shows the difference between them, and then informs the user of the percentage of difference between the images. I have already made it so it generates a .jpg showing the difference, but I can't figure out how to make it calculate a percentage. Does anyone know how to do this?\nThanks in advance.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":12248,"Q_Id":51288756,"Users Score":0,"Answer":"You will need to calculate this on your own. You will need the count of diferent pixels and the size of your original image then a simple math: (diferentPixelsCount \/ (mainImage.width * mainImage.height))*100","Q_Score":6,"Tags":"python,opencv","A_Id":51289142,"CreationDate":"2018-07-11T15:01:00.000","Title":"How do I calculate the percentage of difference between two images using Python and OpenCV?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I would like to use Pycharm to write some data science code and I am using Visual Studio Code and run it from terminal. But I would like to know if I could do it on Pycharm? I could not find some modules such as cluster and pylab on Pycharm? Anyone knows how I could import these modules into Pycharm?","AnswerCount":3,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":4718,"Q_Id":51294349,"Users Score":1,"Answer":"Go to the Preferences Tab -> Project Interpreter, there's a + symbol that allows you to view and download packages. From there you should be able to find cluster and pylab and install them to PyCharm's interpreter. After that you can import them and run them in your scripts.\nAlternatively, you may switch the project's interpreter to an interpreter that has the packages installed already. 
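For the image-difference question above, the pixel-counting formula from the answer, written out in OpenCV/NumPy terms; assumes two same-sized BGR images, and the filenames are illustrative:

```python
import cv2
import numpy as np

a = cv2.imread("before.jpg")
b = cv2.imread("after.jpg")

diff = cv2.absdiff(a, b)
# A pixel counts as "different" if any channel differs.
changed = np.count_nonzero(diff.any(axis=2))
percent = changed / (a.shape[0] * a.shape[1]) * 100
print(f"{percent:.2f}% of pixels differ")
```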
This can be done from that same menu.","Q_Score":2,"Tags":"python,pycharm,data-science","A_Id":51294446,"CreationDate":"2018-07-11T21:22:00.000","Title":"How to import 'cluster' and 'pylab' into Pycharm","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I use Keras for a CNN and have two types of Inputs: Images of objects, and one or two more parameters describing the object (e.g. weight). How can I train my network with both data sources? Concatenation doesn't seem to work because the inputs have different dimensions. My idea was to concatenate the output of the image analysis and the parameters somehow, before sending it into the dense layers, but I'm not sure how. Or is it possible to merge two classifications in Keras, i.e. classifying the image and the parameter and then merging the classification somehow?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1717,"Q_Id":51341613,"Users Score":1,"Answer":"You can use Concatenation layer to merge two inputs. Make sure you're converting multiple inputs into same shape; you can do this by adding additional Dense layer to either of your inputs, so that you can get equal length end layers. Use those same shape outputs in Concatenation layer.","Q_Score":2,"Tags":"python,tensorflow,keras,concatenation,conv-neural-network","A_Id":51341789,"CreationDate":"2018-07-14T17:06:00.000","Title":"Multiple Inputs for CNN: images and parameters, how to merge","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I'm clustering data (trying out multiple algorithms) and trying to evaluate the coherence\/integrity of the resulting clusters from each algorithm. I do not have any ground truth labels, which rules out quite a few metrics for analysing the performance. \nSo far, I've been using Silhouette score as well as calinski harabaz score (from sklearn). With these scores, however, I can only compare the integrity of the clustering if my labels produced from an algorithm propose there to be at minimum, 2 clusters - but some of my algorithms propose that one cluster is the most reliable.\nThus, if you don't have any ground truth labels, how do you assess whether the proposed clustering by an algorithm is better than if all of the data was assigned in just one cluster?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1769,"Q_Id":51343116,"Users Score":0,"Answer":"Don't just rely on some heuristic, that someone proposed for a very different problem.\nKey to clustering is to carefully consider the problem that you are working on. What is the proper way of proposing the data? How to scale (or not scale)? How to measure the similarity of two records in a way that it quantifies something meaningful for your domain.\nIt is not about choosing the right algorithm; your task is to do the math that relates your domain problem to what the algorithm does. Don't treat it as a black box. 
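The two-branch layout described in the Keras concatenation answer above, sketched with the functional API (tf.keras here; layer sizes and shapes are invented):

```python
from tensorflow.keras import layers, Model

# Image branch.
img_in = layers.Input(shape=(64, 64, 3))
x = layers.Conv2D(16, 3, activation="relu")(img_in)
x = layers.Flatten()(x)

# Parameter branch (e.g. weight), projected with a Dense layer so the
# two branches can be concatenated into one feature vector.
par_in = layers.Input(shape=(2,))
p = layers.Dense(16, activation="relu")(par_in)

merged = layers.Concatenate()([x, p])
h = layers.Dense(32, activation="relu")(merged)
out = layers.Dense(1, activation="sigmoid")(h)

model = Model(inputs=[img_in, par_in], outputs=out)
model.compile(optimizer="adam", loss="binary_crossentropy")
```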
Choosing the approach based on the evaluation step does not work: it is already too late; you probably did some bad decisions already in the preprocessing, used the wrong distance, scaling, and other parameters.","Q_Score":1,"Tags":"python-3.x,machine-learning,scikit-learn,cluster-analysis,silhouette","A_Id":51346452,"CreationDate":"2018-07-14T20:27:00.000","Title":"How to analyse the integrity of clustering with no ground truth labels?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I'm a beginner in programming and atom so when try to run my python code written in the atom in terminal I don't know how...i tried installing packages like run-in-terminal,platformio-ide-terminal but I don't know how to use them.","AnswerCount":3,"Available Count":3,"Score":0.0665680765,"is_accepted":false,"ViewCount":7068,"Q_Id":51345748,"Users Score":1,"Answer":"I would not try to do it using extensions. I would use the platformio-ide-terminal and just do it from the command line. \nJust type: Python script_name.py and it should run fine. Be sure you are in the same directory as your python script.","Q_Score":0,"Tags":"python,ubuntu,terminal,atom-editor","A_Id":51351691,"CreationDate":"2018-07-15T06:08:00.000","Title":"how to run python code in atom in a terminal?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I'm a beginner in programming and atom so when try to run my python code written in the atom in terminal I don't know how...i tried installing packages like run-in-terminal,platformio-ide-terminal but I don't know how to use them.","AnswerCount":3,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":7068,"Q_Id":51345748,"Users Score":0,"Answer":"\"python filename.py\" should run your python code. 
If you wish to specifically run the program using python 3.6 then it would be \"python3.6 filename.py\".","Q_Score":0,"Tags":"python,ubuntu,terminal,atom-editor","A_Id":51345815,"CreationDate":"2018-07-15T06:08:00.000","Title":"how to run python code in atom in a terminal?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I'm a beginner in programming and atom so when try to run my python code written in the atom in terminal I don't know how...i tried installing packages like run-in-terminal,platformio-ide-terminal but I don't know how to use them.","AnswerCount":3,"Available Count":3,"Score":0.0665680765,"is_accepted":false,"ViewCount":7068,"Q_Id":51345748,"Users Score":1,"Answer":"Save your Script as a .py file in a directory.\nOpen the terminal and navigate to the directory containing your script using cd command.\nRun python .py if you are using python2\nRun python3 if you are using python3","Q_Score":0,"Tags":"python,ubuntu,terminal,atom-editor","A_Id":51345813,"CreationDate":"2018-07-15T06:08:00.000","Title":"how to run python code in atom in a terminal?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I'm trying measure the latency from my publisher to my subscriber in an MQTT network. I was hoping to use the on_message() function to measure how long this trip takes but its not clear to me whether this callback comes after the broker receives the message or after the subscriber receives it? \nAlso does anyone else have any other suggestion on how to measure latency across the network?","AnswerCount":2,"Available Count":2,"Score":0.2913126125,"is_accepted":false,"ViewCount":1991,"Q_Id":51357410,"Users Score":3,"Answer":"I was involved in similar kind of work where I was supposed measure the latency in wireless sensor networks. There are different ways to measure the latencies.\nIf the subscriber and client are synchronized.\n\nFill the payload with the time stamp value at the client and transmit\nthis packet to subscriber. At the subscriber again take the time\nstamp and take the difference between the time stamp at the\nsubscriber and the timestamp value in the packet.\nThis gives the time taken for the packet to reach subscriber from\nclient.\n\nIf the subscriber and client are not synchronized.\nIn this case measurement of latency is little tricky. Assuming the network is symmetrical.\n\nStart the timer at client before sending the packet to subscriber.\nConfigure subscriber to echo back the message to client. Stop the\ntimer at the client take the difference in clock ticks. 
This time\nrepresents the round trip time you divide it by two to get one\ndirection latency.","Q_Score":2,"Tags":"python,c++,mqtt,latency,paho","A_Id":51363050,"CreationDate":"2018-07-16T08:18:00.000","Title":"How to measure latency in paho-mqtt network","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I'm trying measure the latency from my publisher to my subscriber in an MQTT network. I was hoping to use the on_message() function to measure how long this trip takes but its not clear to me whether this callback comes after the broker receives the message or after the subscriber receives it? \nAlso does anyone else have any other suggestion on how to measure latency across the network?","AnswerCount":2,"Available Count":2,"Score":0.1973753202,"is_accepted":false,"ViewCount":1991,"Q_Id":51357410,"Users Score":2,"Answer":"on_message() is called on the subscriber when the message reaches the subscriber.\nOne way to measure latency is to do a loop back publish in the same client e.g.\n\nSetup a client\nSubscribe to a given topic\nPublish a message to the topic and record the current (high resolution) timestamp.\nWhen on_message() is called record the time again\n\nIt is worth pointing out that this sort of test assumes that both publisher\/subscriber will be on similar networks (e.g. not cellular vs gigabit fibre).\nAlso latency will be influenced by the load on the broker and the number of subscribers to a given topic.\nThe other option is to measure latency passively by monitoring the network assuming you can see all the traffic from one location as synchronising clocks across monitoring point is very difficult.","Q_Score":2,"Tags":"python,c++,mqtt,latency,paho","A_Id":51361548,"CreationDate":"2018-07-16T08:18:00.000","Title":"How to measure latency in paho-mqtt network","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I've searched for working mechanism of tensorflow object detection in google. I've searched how tensorflow train models with dataset. It give me suggestion about how to implement rather than how it works.\nCan anyone explain how dataset are trained in fit into models?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":27,"Q_Id":51362567,"Users Score":0,"Answer":"You can't \"simply\" understand how Tensorflow works without a good background on Artificial Intelligence and Machine Learning.\nI suggest you start working on those topics. 
{"Question":"I've searched for the working mechanism of tensorflow object detection on Google. I've searched for how tensorflow trains models with a dataset. It gives me suggestions about how to implement it rather than how it works.\nCan anyone explain how datasets are used to train and fit models?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":27,"Q_Id":51362567,"Users Score":0,"Answer":"You can't \"simply\" understand how Tensorflow works without a good background in Artificial Intelligence and Machine Learning.\nI suggest you start working on those topics. Tensorflow will get much easier to understand and to handle after that.","Q_Score":0,"Tags":"python,tensorflow,object-detection,tensorflow-datasets","A_Id":51362949,"CreationDate":"2018-07-16T13:07:00.000","Title":"Brief explanation on tensorflow object detection working mechanism","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"What's the most elegant way to fetch data from an external API if I want to be faithful to the Single Responsibility Principle? Where\/when exactly should it be made?\nAssuming I've got a POST \/foo endpoint which, after being called, should somehow trigger a call to the external API and fetch\/save some data from it in my local DB.\nShould I add the call in the view? Or the Model?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":230,"Q_Id":51366456,"Users Score":0,"Answer":"I usually add any external API calls into a dedicated services.py module (at the same level as the models.py you're planning to save results into, or in a common app if none of the existing ones are logically related).\nInside that module you can use a class called something like MyExternalService and add all the needed methods for fetching, posting, removing etc., just like you would do with a DRF API view.\nAlso remember to handle exceptions properly (timeouts, connection errors, error response codes) by defining custom error exception classes.","Q_Score":0,"Tags":"python,django,django-rest-framework","A_Id":51407850,"CreationDate":"2018-07-16T16:38:00.000","Title":"fetch data from 3rd party API - Single Responsibility Principle in Django","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},
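The dedicated-services idea from the answer above might look roughly like this; the module layout, class name, endpoint URL, and exception name here are all hypothetical, not from the original answer.

    # services.py -- lives next to models.py
    import requests

    class ExternalServiceError(Exception):
        """Raised for timeouts, connection errors, and error response codes."""

    class FooService:
        BASE_URL = "https://api.example.com"  # hypothetical endpoint

        def fetch_foo(self, foo_id):
            try:
                resp = requests.get("%s/foo/%s" % (self.BASE_URL, foo_id), timeout=5)
                resp.raise_for_status()
            except requests.RequestException as exc:
                raise ExternalServiceError(str(exc)) from exc
            return resp.json()

The POST /foo view then only calls FooService().fetch_foo(...) and saves the result, keeping HTTP plumbing out of the view and the model.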
{"Question":"I am using seasonal.seasonal_decompose in python.\nWhat is the window length of the moving-average trend in the seasonal.seasonal_decompose package?\nBased on my results, I think it is 25. But how can I be sure? How can I change this window length?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":186,"Q_Id":51368153,"Users Score":0,"Answer":"I found the answer. The \"freq\" argument defines the window of the moving average. I am still not sure how the program chooses the window when we do not declare it.","Q_Score":0,"Tags":"python,moving-average","A_Id":51369183,"CreationDate":"2018-07-16T18:35:00.000","Title":"What is the window length of moving average trend in seasonal.seasonal_decompose package?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have a model saved in a graph (.pb file). But now the model is inaccurate and I would like to develop it further. I have pictures of additional data to learn from, but I don't know if that's possible, or how to do it. The result must be a new .pb graph modified with the new data.","AnswerCount":2,"Available Count":1,"Score":0.2913126125,"is_accepted":false,"ViewCount":1718,"Q_Id":51379506,"Users Score":3,"Answer":"It's a good question. Actually it would be nice if someone could explain how to do this. But in addition I can tell you that it would lead to \"catastrophic forgetting\", so it wouldn't work out. You would have to train on all your data again.\nBut anyway, I also would like to know that, especially for SSD, just for test reasons.","Q_Score":2,"Tags":"python,tensorflow","A_Id":51383561,"CreationDate":"2018-07-17T10:48:00.000","Title":"How to retrain model in graph (.pb)?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I'm a junior Django dev. I got my first project. I'm doing quite well, but the senior dev who teaches me went on vacation....\nI have a task in my company to create a function that will remind all people in a specific group, 5 days before an event, by sending mail.\nThere is a TournamentModel that contains a tournament_start_date, for instance '10.08.2018'.\nA player can join a tournament; when he does, he joins the Django group \"Registered\".\nI have to create a function (job?) that will check tournament_start_date, and if the tournament begins in 5 days, this function will send emails to all people in the \"Registered\" group... automatically.\nHow can I do this? What should I use? How do I run it so that it will check automatically? I've been learning python\/django for a few months... but I'm meeting jobs for the first time ;\/\nI will appreciate any help.","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":2515,"Q_Id":51379567,"Users Score":1,"Answer":"You can set this mail-sending function up as a cron job (see the sketch below). You can schedule it with crontab, or with Celery if your team already uses it.","Q_Score":2,"Tags":"python,django,task,jobs","A_Id":51380304,"CreationDate":"2018-07-17T10:52:00.000","Title":"Django - how to send mail 5 days before event?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},
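A sketch of that cron-job idea written as a Django management command; the model and field names follow the question, while the app path, command name, and email text are made up for illustration.

    # management/commands/send_reminders.py
    from datetime import date, timedelta

    from django.contrib.auth.models import Group
    from django.core.mail import send_mail
    from django.core.management.base import BaseCommand

    from tournaments.models import Tournament  # hypothetical app path

    class Command(BaseCommand):
        help = "Mail the Registered group 5 days before a tournament starts"

        def handle(self, *args, **options):
            target = date.today() + timedelta(days=5)
            emails = [u.email for u in
                      Group.objects.get(name="Registered").user_set.all()]
            for t in Tournament.objects.filter(tournament_start_date=target):
                send_mail(
                    "Tournament reminder",
                    "Your tournament starts on %s." % t.tournament_start_date,
                    "noreply@example.com",
                    emails,
                )

Scheduled daily from crontab (or Celery beat), e.g. 0 8 * * * python manage.py send_reminders.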
{"Question":"My computer's system language is zh_cn, so the VS Code python extension set the default language to Chinese. But I want to change the language to English. \nI can't find a reference in the docs or on the internet. Anyone knows how to do it? Thanks for the help\nPS: VS Code's locale is already set to English.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":555,"Q_Id":51422391,"Users Score":0,"Answer":"You probably installed other python extensions for VSCode. Microsoft's official python extension will follow the locale setting in the user\/workspace settings.\nTry uninstalling the other python extensions; you may see it change to English.","Q_Score":2,"Tags":"python,visual-studio-code","A_Id":52695311,"CreationDate":"2018-07-19T12:11:00.000","Title":"how to change vs code python extension's language?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":1,"ERRORS":0,"REVIEW":0},{"Question":"My computer's system language is zh_cn, so the VS Code python extension set the default language to Chinese. But I want to change the language to English. \nI can't find a reference in the docs or on the internet. Anyone knows how to do it? Thanks for the help\nPS: VS Code's locale is already set to English.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":555,"Q_Id":51422391,"Users Score":0,"Answer":"When VSCode is open, go to the View menu and select Command Palette. Once the command palette is open, type display in the box. This should display the message Configure Display Language. Open that and you should be in a locale.json file. The variable locale should be set to en for English.","Q_Score":2,"Tags":"python,visual-studio-code","A_Id":51432241,"CreationDate":"2018-07-19T12:11:00.000","Title":"how to change vs code python extension's language?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":1,"ERRORS":0,"REVIEW":0},{"Question":"I'm currently working on a neural network that evaluates students' answers to exam questions. Therefore, preprocessing the corpora for a Word2Vec network is needed. Hyphenation in German texts is quite common. There are mainly two different types of hyphenation:\n1) End of line: \nThe text reaches the end of the line so the last word is sepa-\nrated.\n2) Short form of enumeration:\nin case of two \"elements\": \nGeistes- und Sozialwissenschaften\nmore \"elements\": \nWirtschafts-, Geistes- und Sozialwissenschaften\nThe de-hyphenated form of these enumerations should be:\nGeisteswissenschaften und Sozialwissenschaften\nWirtschaftswissenschaften, Geisteswissenschaften und Sozialwissenschaften\nI need to remove all hyphenations and put the words back together. I already found several solutions for the first problem.\nBut I have absolutely no clue how to get the second part (in the example above \"wissenschaften\") of the words in the enumeration problem. I don't even know if it is possible at all.\nI hope that I have pointed out my problem properly.\nSo, does anyone have an idea how to solve this problem?\nThank you very much in advance!","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":50,"Q_Id":51430249,"Users Score":1,"Answer":"It's surely possible, as the pattern seems fairly regular. (Something vaguely analogous is sometimes seen in English. 
For example: The new requirements applied to under-, over-, and average-performing employees.)\nThe rule seems to be roughly, \"when you see word-fragments with a trailing hyphen, and then an und, look for known words that begin with the word-fragments, and end the same as the terminal-word-after-und \u2013 and replace the word-fragments with the longer words\". \nNot being a German speaker and without language-specific knowledge, it wouldn't be possible to know exactly where breaks are appropriate. That is, in your Geistes- und Sozialwissenschaften example, without language-specific knowledge, it's unclear whether the first fragment should become Geisteszialwissenschaften or Geisteswissenschaften or Geistesenschaften or Geiestesaften or any other shared-suffix with Sozialwissenschaften. But if you've got a dictionary of word-fragments, or word-frequency info from other text that uses the same full-length word(s) without this particular enumeration-hyphenation, that could help choose. \n(If there's more than one plausible suffix based on known words, this might even be a possible application of word2vec: the best suffix to choose might well be the one that creates a known-word that is closest to the terminal-word in word-vector-space.)\nSince this seems a very German-specific issue, I'd try asking in forums specific to German natural-language-processing, or to libraries with specific German support. (Maybe, NLTK or Spacy?)\nBut also, knowing word2vec, this sort of patch-up may not actually be that important to your end-goals. Training without this logical-reassembly of the intended full words may still let the fragments achieve useful vectors, and the corresponding full words may achieve useful vectors from other usages. The fragments may wind up close enough to the full compound words that they're \"good enough\" for whatever your next regression\/classifier step does. So if this seems a blocker, don't be afraid to just try ignoring it as a non-problem. (Then if you later find an adequate de-hyphenation approach, you can test whether it really helped or not.)","Q_Score":0,"Tags":"python,nlp,word2vec,preprocessor,hyphenation","A_Id":51439004,"CreationDate":"2018-07-19T19:12:00.000","Title":"Python3 remove multiple hyphenations from a german string","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},
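One way to read that rule in code: complete each trailing-hyphen fragment with the longest suffix of the terminal word that yields a known full word. This is a sketch of my own under that reading; the tiny vocabulary is an assumption, and in practice it would come from a corpus or dictionary.

    VOCAB = {"Geisteswissenschaften", "Sozialwissenschaften",
             "Wirtschaftswissenschaften"}

    def dehyphenate(tokens):
        """tokens: e.g. ['Geistes-', 'und', 'Sozialwissenschaften']"""
        full = tokens[-1]                  # terminal word after "und"
        out = []
        for tok in tokens[:-1]:
            if tok.endswith("-"):
                stem = tok[:-1]
                # try suffixes of the terminal word, longest first
                for i in range(1, len(full)):
                    if stem + full[i:] in VOCAB:
                        tok = stem + full[i:]
                        break
            out.append(tok)
        return out + [full]

    print(dehyphenate(["Geistes-", "und", "Sozialwissenschaften"]))
    # ['Geisteswissenschaften', 'und', 'Sozialwissenschaften']

Where several completions are in the vocabulary, the answer's word2vec idea applies: pick the candidate whose vector is closest to the terminal word.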
{"Question":"Does anyone know how to properly install tensorflow on Windows?\nI'm currently using Python 3.7 (also tried with 3.6) and every time I get the same \"Could not find a version that satisfies the requirement tensorflow-gpu (from versions: )\nNo matching distribution found for tensorflow-gpu\" error.\nI tried installing using pip and anaconda; neither works for me.\n\nFound a solution: it seems like Tensorflow doesn't support versions of Python after 3.6.4. This is the version I'm currently using and it works.","AnswerCount":16,"Available Count":5,"Score":0.0,"is_accepted":false,"ViewCount":44333,"Q_Id":51440475,"Users Score":0,"Answer":"Not enabling long paths can be the potential problem. To solve that, the steps are:\n\nGo to the Registry Editor on the Windows laptop.\n\nFind the key \"HKEY_LOCAL_MACHINE\"->\"SYSTEM\"->\"CurrentControlSet\"->\n\"File System\"->\"LongPathsEnabled\", then double-click on that option and change the value from 0 to 1.\n\nNow try to install tensorflow; it will work.","Q_Score":5,"Tags":"python,tensorflow","A_Id":68421682,"CreationDate":"2018-07-20T10:26:00.000","Title":"Can't install tensorflow with pip or anaconda","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":1,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"Does anyone know how to properly install tensorflow on Windows?\nI'm currently using Python 3.7 (also tried with 3.6) and every time I get the same \"Could not find a version that satisfies the requirement tensorflow-gpu (from versions: )\nNo matching distribution found for tensorflow-gpu\" error.\nI tried installing using pip and anaconda; neither works for me.\n\nFound a solution: it seems like Tensorflow doesn't support versions of Python after 3.6.4. This is the version I'm currently using and it works.","AnswerCount":16,"Available Count":5,"Score":0.0,"is_accepted":false,"ViewCount":44333,"Q_Id":51440475,"Users Score":0,"Answer":"As of July 2019, I have installed it on Python 3.7.3 using py -3 -m pip install tensorflow-gpu.\npy -3 in my installation selects the version 3.7.3. \nThe installation can also fail if the python installation is not 64-bit. Install a 64-bit version first.","Q_Score":5,"Tags":"python,tensorflow","A_Id":56944215,"CreationDate":"2018-07-20T10:26:00.000","Title":"Can't install tensorflow with pip or anaconda","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":1,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"Does anyone know how to properly install tensorflow on Windows?\nI'm currently using Python 3.7 (also tried with 3.6) and every time I get the same \"Could not find a version that satisfies the requirement tensorflow-gpu (from versions: )\nNo matching distribution found for tensorflow-gpu\" error.\nI tried installing using pip and anaconda; neither works for me.\n\nFound a solution: it seems like Tensorflow doesn't support versions of Python after 3.6.4. 
This is the version I'm currently using and it works.","AnswerCount":16,"Available Count":5,"Score":0.0,"is_accepted":false,"ViewCount":44333,"Q_Id":51440475,"Users Score":0,"Answer":"Actually the easiest way to install tensorflow is:\ninstall python 3.5 (not 3.6 or 3.7); you can check which version you have by typing \"python\" in the cmd.\nWhen you install it, check in the options that you install pip with it and that you add it to the environment variables.\nWhen it's done, just go into the cmd and type \"pip install tensorflow\".\nIt will download tensorflow automatically.\nIf you want to check that it's been installed, type \"python\" in the cmd; a \">>>\" prompt will appear, then write \"import tensorflow\" and if there's no error, you've done it!","Q_Score":5,"Tags":"python,tensorflow","A_Id":53177064,"CreationDate":"2018-07-20T10:26:00.000","Title":"Can't install tensorflow with pip or anaconda","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":1,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"Does anyone know how to properly install tensorflow on Windows?\nI'm currently using Python 3.7 (also tried with 3.6) and every time I get the same \"Could not find a version that satisfies the requirement tensorflow-gpu (from versions: )\nNo matching distribution found for tensorflow-gpu\" error.\nI tried installing using pip and anaconda; neither works for me.\n\nFound a solution: it seems like Tensorflow doesn't support versions of Python after 3.6.4. This is the version I'm currently using and it works.","AnswerCount":16,"Available Count":5,"Score":1.2,"is_accepted":true,"ViewCount":44333,"Q_Id":51440475,"Users Score":5,"Answer":"Tensorflow or Tensorflow-gpu is supported only for 3.5.X versions of Python. Try installing with any Python 3.5.X version. This should fix your problem.","Q_Score":5,"Tags":"python,tensorflow","A_Id":51706227,"CreationDate":"2018-07-20T10:26:00.000","Title":"Can't install tensorflow with pip or anaconda","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":1,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"Does anyone know how to properly install tensorflow on Windows?\nI'm currently using Python 3.7 (also tried with 3.6) and every time I get the same \"Could not find a version that satisfies the requirement tensorflow-gpu (from versions: )\nNo matching distribution found for tensorflow-gpu\" error.\nI tried installing using pip and anaconda; neither works for me.\n\nFound a solution: it seems like Tensorflow doesn't support versions of Python after 3.6.4. This is the version I'm currently using and it works.","AnswerCount":16,"Available Count":5,"Score":0.0,"is_accepted":false,"ViewCount":44333,"Q_Id":51440475,"Users Score":0,"Answer":"You mentioned Anaconda. Do you run your python through there?\nIf so, check in Anaconda Navigator --> Environments whether your current environment has tensorflow installed. \nIf not, install tensorflow and run from that environment. 
\nShould work.","Q_Score":5,"Tags":"python,tensorflow","A_Id":51440570,"CreationDate":"2018-07-20T10:26:00.000","Title":"Can't install tensorflow with pip or anaconda","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":1,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I'm using ChatterBot to implement a chat bot. I want ChatterBot to train on the data set dynamically. \nWhenever I run my code it should train itself from the beginning, because I require new data for every person who will chat with my bot. \nSo how can I achieve this in python3 and on the Windows platform?\nWhat I want to achieve and the problem I'm facing: I have a python program which creates a text file student_record.txt; this is generated from a database and is almost new whenever a different student signs up or logs in. In ChatterBot, I trained the bot by giving this file name, but it still replies from the previously trained data.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":209,"Q_Id":51455033,"Users Score":0,"Answer":"I got the solution for that: I just delete the database at the beginning of the program, so a new database is created during the execution of the program.\nI used the following command to delete the database:\nimport os\nos.remove(\"database_name\")\nIn my case:\nimport os\nos.remove(\"db.sqlite3\")\nThank you","Q_Score":0,"Tags":"python-3.x,bots,chatterbot","A_Id":51455870,"CreationDate":"2018-07-21T10:12:00.000","Title":"Chatterbot dynamic training","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am pretty new to Python in general and recently started messing with the Google Cloud environment, specifically with the Natural Language API.\nOne thing that I just can't grasp is how I make use of this environment, running scripts that use this API or any API from my local PC, in this case my Anaconda Spyder environment. \nI have my project set up, but from there I am not exactly sure which steps are necessary. 
Do I have to include the authentication somehow in the script inside Spyder?\nSome insights would be really helpful.","AnswerCount":2,"Available Count":1,"Score":-0.0996679946,"is_accepted":false,"ViewCount":2057,"Q_Id":51455781,"Users Score":-1,"Answer":"First install the API via pip install or conda install in the Scripts directory of Anaconda, and then simply import it into your code and start coding.","Q_Score":0,"Tags":"python,google-cloud-platform","A_Id":51455824,"CreationDate":"2018-07-21T11:51:00.000","Title":"How do I use Google Cloud API's via Anaconda Spyder?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have a python script which opens an image file (.png or .ppm) using OpenCV, then loads all the RGB values into a multidimensional Python array (or list), performs some pixel-by-pixel calculations solely on the Python array (OpenCV is not used at all for this stage), then uses the newly created array (containing new RGB values) to write a new image file (.png here) using OpenCV again. Numpy is not used at all in this script. The program works fine.\nThe question is how to do this without using any external libraries, regardless of whether they are for image processing or not (e.g. OpenCV, Numpy, Scipy, Pillow etc.). To summarize, I need to use bare-bones Python's internal modules to: 1. open an image and read the RGB values and 2. write a new image from pre-calculated RGB values. I will use Pypy instead of CPython for this purpose, to speed things up.\nNote: I use Windows 10, if that matters.","AnswerCount":4,"Available Count":1,"Score":0.049958375,"is_accepted":false,"ViewCount":6169,"Q_Id":51457993,"Users Score":1,"Answer":"Working with bare-bones .ppm files is trivial: you have three lines of text (P6, \"width height\", 255), and then you have the 3*width*height bytes of RGB. As long as you don't need more complicated variants of the .ppm format, you can write a loader and a saver in 5 lines of code each.","Q_Score":2,"Tags":"python,image-processing,pypy","A_Id":51458710,"CreationDate":"2018-07-21T16:20:00.000","Title":"How to open\/create images in Python without using external modules","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},
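The P6 loader and saver that answer describes really are that short. A minimal sketch, handling only the simple three-line header variant (no comment lines):

    def write_ppm(path, width, height, pixels):
        # pixels: flat bytes/bytearray of RGB triples, row-major
        with open(path, "wb") as f:
            f.write(b"P6\n%d %d\n255\n" % (width, height))
            f.write(bytes(pixels))

    def read_ppm(path):
        with open(path, "rb") as f:
            assert f.readline().strip() == b"P6"
            width, height = map(int, f.readline().split())
            assert f.readline().strip() == b"255"
            pixels = f.read(3 * width * height)
        return width, height, pixels

Because it is pure Python with no external modules, this also runs unchanged under PyPy, which was the questioner's goal.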
{"Question":"Apologies if my question is stupid. \nI am a newbie in all aspects.\nI used to run my python code straight from the terminal in Linux Ubuntu, \ne.g. I just open the terminal, go to my folder and run my command in my Linux terminal \nCUDA_VISIBLE_DEVICES=0 python trainval_net.py --dataset pascal_voc --net resnet101 --epochs 7 --bs 1 --nw 4 --lr 1e-3 --lr_decay_step 5 --cuda\nNow I'm trying to use Spyder.\nSo for the same project I have a folder with a bunch of functions\/folders\/stuff inside it.\nSo I just open that main folder as a new project, then I have no idea how I can run my code...\nThere is a console on the right side of Spyder which looks like IPython and I can do stuff in there, but I cannot run the code there that I run in the terminal.\nIn IPython or Jupyter I used to use ! at the beginning of the command, but here when I do it (e.g. !CUDA_VISIBLE_DEVICES=0 python trainval_net.py --dataset pascal_voc --net resnet101 --epochs 7 --bs 1 --nw 4 --lr 1e-3 --lr_decay_step 5 --cuda) it does not even know the modules and throws errors (e.g. ImportError: No module named numpy)\nCan anyone tell me how I should run my code here in Spyder?\nThank you in advance! :)","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":447,"Q_Id":51461427,"Users Score":0,"Answer":"Okay, I figured it out.\nI need to go to Run -> Configure per file and in the command line options put the configuration (--dataset pascal_voc --net resnet101 --epochs 7 --bs 1 --nw 4 --lr 1e-3 --lr_decay_step 5 --cuda)","Q_Score":0,"Tags":"python-3.x,anaconda,ubuntu-16.04,spyder","A_Id":51467364,"CreationDate":"2018-07-22T01:51:00.000","Title":"How run my code in spyder as i used to run it in linux terminal","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":1,"REVIEW":1},{"Question":"I am using MIDIUtil to recreate a modified Bach contrapuntal melody and I am having difficulty finding a method for creating chords using MIDIUtil in python. Does anyone know a way to create chords using MIDIUtil, or if there is a way to create chords?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":243,"Q_Id":51462120,"Users Score":0,"Answer":"A chord consists of multiple notes.\nJust add multiple notes with the same timestamp.","Q_Score":0,"Tags":"python,midi","A_Id":51463808,"CreationDate":"2018-07-22T04:44:00.000","Title":"How to use Midiutil to add multiple notes in one timespot (or how to add chords)","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},
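That MIDIUtil answer in code form: three addNote() calls sharing one start time produce a chord. A small sketch; the tempo, pitches, and file name are arbitrary choices.

    from midiutil import MIDIFile

    track, channel, start, duration, volume = 0, 0, 0, 2, 100
    mf = MIDIFile(1)
    mf.addTempo(track, 0, 120)
    for pitch in (60, 64, 67):   # C4, E4, G4 sound together as a C-major chord
        mf.addNote(track, channel, pitch, start, duration, volume)

    with open("chord.mid", "wb") as out:
        mf.writeFile(out)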
{"Question":"So I have just switched over from Spyder to PyCharm. In Spyder, each time you run the program, the console just gets added to, not cleared. This was very useful because I could look through the console to see how my changes to the code were changing the outputs of the program (obviously the console had a maximum length, so stuff would get cleared eventually).\nHowever, in PyCharm each time I run the program the console is cleared. Surely there must be a way to change this, but I can't find the setting. \nThanks.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":798,"Q_Id":51466944,"Users Score":1,"Answer":"In Spyder the output is there because you are running IPython.\nIn PyCharm you can get the same by pressing View -> Scientific Mode.\nThen every time you run, you see the new output and the history there.","Q_Score":1,"Tags":"python,console,pycharm,settings","A_Id":51466984,"CreationDate":"2018-07-22T16:11:00.000","Title":"PyCharm, stop the console from clearing every time you run the program","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I'm running the jupyter notebook (Enthought Canopy python distribution 2.7) on Mac OSX (v 10.13.6). When I try to import pandas (import pandas as pd), I am getting the complaint: ImportError: dateutil 2.5.0 is the minimum required version. I have these package versions:\n\nCanopy version 2.1.3.3542 (64 bit)\njupyter version 1.0.0-25\npandas version 0.23.1-1\npython_dateutil version 2.6.0-1\n\nI'm not getting this complaint when I run with the Canopy Editor, so it must be some jupyter compatibility problem. Does anyone have a solution on how to fix this? All was well a few months ago until I recently (and mindlessly) allowed an update of my packages.","AnswerCount":5,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":17899,"Q_Id":51470373,"Users Score":0,"Answer":"The issue is with the pandas lib;\ndowngrade using the command below:\npip install pandas==0.22.0","Q_Score":7,"Tags":"python,pandas,jupyter-notebook,canopy,python-dateutil","A_Id":64066254,"CreationDate":"2018-07-23T00:44:00.000","Title":"dateutil 2.5.0 is the minimum required version","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I'm running the jupyter notebook (Enthought Canopy python distribution 2.7) on Mac OSX (v 10.13.6). When I try to import pandas (import pandas as pd), I am getting the complaint: ImportError: dateutil 2.5.0 is the minimum required version. I have these package versions:\n\nCanopy version 2.1.3.3542 (64 bit)\njupyter version 1.0.0-25\npandas version 0.23.1-1\npython_dateutil version 2.6.0-1\n\nI'm not getting this complaint when I run with the Canopy Editor, so it must be some jupyter compatibility problem. Does anyone have a solution on how to fix this? 
All was well a few months ago until I recently (and mindlessly) allowed an update of my packages.","AnswerCount":5,"Available Count":3,"Score":0.1194272985,"is_accepted":false,"ViewCount":17899,"Q_Id":51470373,"Users Score":3,"Answer":"I had this same issue using the newest pandas version - downgrading to pandas 0.22.0 fixes the problem.\npip install pandas==0.22.0","Q_Score":7,"Tags":"python,pandas,jupyter-notebook,canopy,python-dateutil","A_Id":55159160,"CreationDate":"2018-07-23T00:44:00.000","Title":"dateutil 2.5.0 is the minimum required version","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I'm running the jupyter notebook (Enthought Canopy python distribution 2.7) on Mac OSX (v 10.13.6). When I try to import pandas (import pandas as pd), I am getting the complaint: ImportError: dateutil 2.5.0 is the minimum required version. I have these package versions:\n\nCanopy version 2.1.3.3542 (64 bit)\njupyter version 1.0.0-25\npandas version 0.23.1-1\npython_dateutil version 2.6.0-1\n\nI'm not getting this complaint when I run with the Canopy Editor, so it must be some jupyter compatibility problem. Does anyone have a solution on how to fix this? All was well a few months ago until I recently (and mindlessly) allowed an update of my packages.","AnswerCount":5,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":17899,"Q_Id":51470373,"Users Score":0,"Answer":"Installed Canopy version 2.1.9. The downloaded version worked without updating any of the packages called out by the Canopy Package Manager. Updated all the packages, but then \"import pandas as pd\" failed when using the jupyter notebook. Downgraded the notebook package from 4.4.1-5 to 4.4.1-4, which cascaded to 35 additional package downgrades. Retested the import of pandas and the issue seems to have disappeared.","Q_Score":7,"Tags":"python,pandas,jupyter-notebook,canopy,python-dateutil","A_Id":51471337,"CreationDate":"2018-07-23T00:44:00.000","Title":"dateutil 2.5.0 is the minimum required version","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have images of vehicles. I need to predict the price of a vehicle based on image extraction. \nWhat I have learnt is that I can use a CNN to extract the image features, but what I am not able to get is how to predict the prices of vehicles. \nI know that I need to train my CNN model before it predicts the price. \nI don't know how to train the model with images along with prices.\nIn the end what I expect is: I will input a vehicle image and I need to get the price of the vehicle. 
\nCan anyone provide an approach for this?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":268,"Q_Id":51484727,"Users Score":0,"Answer":"I would use the CNN to predict the model of the car, and then using a list of all the car prices it's easy enough to get the price; or, if you don't care about the car model, just use the prices as labels.","Q_Score":0,"Tags":"python,tensorflow","A_Id":67088725,"CreationDate":"2018-07-23T17:57:00.000","Title":"CNN image extraction to predict a continuous value","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},
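One way to read the "use the prices as labels" suggestion is a regression CNN with a single linear output trained on mean squared error. This is a rough sketch under that reading, not the answerer's code; the input size and layer sizes are arbitrary.

    from tensorflow.keras import layers, models

    model = models.Sequential([
        layers.Conv2D(32, 3, activation="relu", input_shape=(128, 128, 3)),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.GlobalAveragePooling2D(),
        layers.Dense(1),                       # continuous price output
    ])
    model.compile(optimizer="adam", loss="mse")
    # model.fit(images, prices, ...)  # images: (N, 128, 128, 3), prices: (N,)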
{"Question":"I need to handle the event when the shutdown process is started (for example with a long press of the robot's chest button or when the battery is critically low). The problem is that I didn't find a way to handle the shutdown\/poweroff event. Do you have any idea how this can be done in some convenient way?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":134,"Q_Id":51498056,"Users Score":2,"Answer":"Unfortunately this won't be possible, as when you trigger a shutdown naoqi will exit as well and destroy your service. \nIf you are coding in c++ you could use a destructor, but there is no proper equivalent for python... \nAn alternative would be to execute some code when your script exits, whatever the reason. For this you can start your script as a service and wait for \"the end\" using qiApplication.run(). This method will simply block until naoqi asks your service to exit. \nNote: in case of shutdown, all services are being killed, so you cannot run any command from the robot API (as they are probably not available anymore!)","Q_Score":0,"Tags":"python,nao-robot,pepper,choregraphe","A_Id":51566036,"CreationDate":"2018-07-24T11:59:00.000","Title":"How can I handle Pepper robot shutdown event?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I currently have macros set up to automate all my reports. However, some of my macros can take up to 5-10 minutes due to the size of my data. \nI have been moving away from Excel\/VBA to Python\/pandas for data analysis and manipulation. I still use excel for data visualization (i.e., pivot tables). \nI would like to know how other people use python to automate their reports. What do you guys do? Any tips on how I can start the process? \nThe majority of my macros do the following actions - \n\nImport text file(s)\nPaste the raw data into a table that's linked to pivot tables \/ charts.\nRefresh workbook \nSave as new","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":606,"Q_Id":51503471,"Users Score":0,"Answer":"When using python to automate reports I fully converted the report from Excel to Pandas. I use pd.read_csv or pd.read_excel to read in the data, and export the fully formatted pivot tables into excel for viewing. Doing the 'paste into a table and refresh' is not handled well by python in my experience, and will likely still need macros to handle properly, i.e., export a csv with the formatted data from python, then run a short macro to copy and paste.\nIf you have any more specific questions please ask; I have done a decent bit of this","Q_Score":0,"Tags":"python,excel,vba,pandas,reporting","A_Id":51503697,"CreationDate":"2018-07-24T16:25:00.000","Title":"Python - pandas \/ openpyxl: Tips on Automating Reports (Moving Away from VBA).","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},
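The read-pivot-export workflow from that reporting answer, sketched with pandas; the file, column, and sheet names are invented for illustration.

    import pandas as pd

    raw = pd.read_csv("raw_data.txt", sep="\t")             # 1. import the text file
    pivot = pd.pivot_table(raw, index="region", columns="month",
                           values="sales", aggfunc="sum")   # 2. build the pivot
    pivot.to_excel("report.xlsx", sheet_name="Summary")     # 3. save as a new workbook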
{"Question":"I am currently working on a program that would take the previous 4000 days of stock data about a particular stock and predict the next 90 days of performance.\nThe way I've elected to do this is with an RNN that makes use of LSTM layers to use the previous 90 days to predict the next day's performance (when training, the previous 90 days are the x-values and the next day is used as the y-value). What I would like to do, however, is use the previous 90-180 days to predict all the values for the next 90 days. However, I am unsure of how to implement this in Keras, as all the examples I have seen only predict the next day and then they may loop that prediction into the next day's 90-day x-values.\nIs there any way to just use the previous 180 days to predict the next 90? Or is the LSTM restricted to only predicting the next day?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":299,"Q_Id":51506404,"Users Score":0,"Answer":"I don't have the rep to comment, but I'll say here that I've toyed with a similar task. One could use a sliding window approach for 90 days (I used 30, since 90 is pushing LSTM limits), then predict the price appreciation for the next month (so your prediction is for a single value). @Digital-Thinking is generally right though; you shouldn't expect great performance.","Q_Score":0,"Tags":"python,tensorflow,keras,lstm,rnn","A_Id":51526725,"CreationDate":"2018-07-24T19:41:00.000","Title":"How to make RNN time-forecast multiple days using Keras?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have a Python Kafka worker run by a bash script in a Docker image inside a docker-compose setup that I need to reload and restart whenever a file in its directory changes, as I edit the code. Does anyone know how to accomplish this for a bash script?\nPlease don't merge this with the several answers about running a script whenever a file in a directory changes. I've seen other answers regarding this, but I can't find a way to run a script once, and then stop, reload and re-run it if any files change.\nThanks!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":260,"Q_Id":51507787,"Users Score":1,"Answer":"My suggestion is to let docker start a wrapper script that simply starts the real script in the background.\nThen in an infinite loop:\n\nusing inotifywait the wrapper waits for the appropriate change\nthen kills\/stops\/reloads\/... the child process \nand starts a new one in the background again.","Q_Score":0,"Tags":"python,bash,unix,filesystems,reload","A_Id":51511825,"CreationDate":"2018-07-24T21:28:00.000","Title":"How do you setup script RELOAD\/RESTART upon file changes using bash?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I've created a kivy app that works perfectly as I desire. It's got a few files in a particular folder that it uses. For the life of me, I don't understand how to create an exe on Mac. I know I can use pyinstaller, but how do I create an exe from Mac?\nPlease help!","AnswerCount":2,"Available Count":2,"Score":-0.1973753202,"is_accepted":false,"ViewCount":1152,"Q_Id":51515471,"Users Score":-2,"Answer":"This is easy with Pyinstaller; I've used it recently.\nInstall pyinstaller:\n\npip install pyinstaller\n\nHit the following command on the terminal, where file.py is the path to your main file:\n\npyinstaller -w -F file.py\n\nYour exe will be created inside a folder dist.\nNOTE: verified on windows, not on mac","Q_Score":1,"Tags":"python,kivy,pyinstaller,kivy-language","A_Id":51515669,"CreationDate":"2018-07-25T09:28:00.000","Title":"Creating an exe file for windows using mac for my Kivy app","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I've created a kivy app that works perfectly as I desire. It's got a few files in a particular folder that it uses. For the life of me, I don't understand how to create an exe on Mac. I know I can use pyinstaller, but how do I create an exe from Mac?\nPlease help!","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":1152,"Q_Id":51515471,"Users Score":1,"Answer":"For pyinstaller, they have stated that packaging Windows binaries while running under OS X is NOT supported, and recommended to use Wine for this.\n\n\nCan I package Windows binaries while running under Linux?\n\nNo, this is not supported. Please use Wine for this, PyInstaller runs\n fine in Wine. You may also want to have a look at this thread in the\n mailinglist. In version 1.4 we had build in some support for this, but\n it showed to work only half. It would require some Windows system on\n another partition and would only work for pure Python programs. As\n soon as you want a decent GUI (gtk, qt, wx), you would need to install\n Windows libraries anyhow. So it's much easier to just use Wine.\n\nCan I package Windows binaries while running under OS X?\n\nNo, this is not supported. 
Please try Wine for this.\n\nCan I package OS X binaries while running under Linux?\n\nThis is currently not possible at all. Sorry! If you want to help out,\n you are very welcome.","Q_Score":1,"Tags":"python,kivy,pyinstaller,kivy-language","A_Id":51515517,"CreationDate":"2018-07-25T09:28:00.000","Title":"Creating an exe file for windows using mac for my Kivy app","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am writing a server with multiple gunicorn workers and want to let them all have access to a specific variable. I'm using Redis to do this(it's in RAM, so it's fast, right?) but every GET or SET request adds another client. I'm performing maybe ~150 requests per second, so it quickly reaches the 25 connection limit that Heroku has. To access the database, I'm using db = redis.from_url(os.environ.get(\"REDIS_URL\")) and then db.set() and db.get(). Is there a way to lower that number? For instance, by using the same connection over and over again for each worker? But how would I do that? The 3 gunicorn workers I have are performing around 50 queries each per second.\nIf using redis is a bad idea(which it probably is), it would be great if you could suggest alternatives, but also please include a way to fix my current problem as most of my code is based off of it and I don't have enough time to rewrite the whole thing yet.\nNote: The three pieces of code are the only times redis and db are called. I didn't do any configuration or anything. Maybe that info will help.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":133,"Q_Id":51519333,"Users Score":0,"Answer":"Most likely, your script creates a new connection for each request.\nBut each worker should create it once and use forever.\nWhich framework are you using?\nIt should have some documentation about how to configure Redis for your webapp.\nP.S. Redis is a good choice to handle that :)","Q_Score":1,"Tags":"python,heroku,redis,gunicorn","A_Id":51519644,"CreationDate":"2018-07-25T12:50:00.000","Title":"Python Redis on Heroku reached max clients","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":1},{"Question":"I am basically trying to start an HTTP server which will respond with content from a website which I can crawl using Scrapy. In order to start crawling the website I need to login to it and to do so I need to access a DB with credentials and such. The main issue here is that I need everything to be fully asynchronous and so far I am struggling to find a combination that will make everything work properly without many sloppy implementations.\nI already got Klein + Scrapy working but when I get to implementing DB accesses I get all messed up in my head. Is there any way to make PyMongo asynchronous with twisted or something (yes, I have seen TxMongo but the documentation is quite bad and I would like to avoid it. 
I have also found an implementation with adbapi but I would like something more similar to PyMongo).\nTrying to think things through the other way around, I'm sure aiohttp has many more options to implement async db accesses and stuff, but then I find myself at an impasse with Scrapy integration.\nI have seen things like scrapa, scrapyd and ScrapyRT, but those don't really work for me. Are there any other options?\nFinally, if nothing works, I'll just use aiohttp, and instead of Scrapy I'll do the requests to the website to scrape manually and use beautifulsoup or something like that to get the info I need from the response. Any advice on how to proceed down that road?\nThanks for your attention, I'm quite a noob in this area so I don't know if I'm making complete sense. Regardless, any help will be appreciated :)","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":263,"Q_Id":51525645,"Users Score":0,"Answer":"\n\nIs there any way to make pymongo asynchronous with twisted\n\nNo. pymongo is designed as a synchronous library, and there is no way you can make it asynchronous without basically rewriting it (you could use threads or processes, but that is not what you asked, also you can run into issues with thread-safeness of the code).\n\nTrying to think things through the other way around I'm sure aiohttp has many more options to implement async db accesses and stuff\n\nIt doesn't. aiohttp is a http library - it can do http asynchronously and that is all, it has nothing to help you access databases. You'd have to basically rewrite pymongo on top of it.\n\nFinally, if nothing works, I'll just use aiohttp and instead of scrapy I'll do the requests to the website to scrape manually and use beautifulsoup or something like that to get the info I need from the response.\n\nThat means lots of work for not using scrapy, and it won't help you with the pymongo issue - you still have to rewrite pymongo!\nMy suggestion is - learn txmongo! If you can't and want to rewrite it, use twisted.web to write it instead of aiohttp since then you can continue using scrapy!","Q_Score":0,"Tags":"python,mongodb,asynchronous,server,scrapy","A_Id":51525888,"CreationDate":"2018-07-25T18:37:00.000","Title":"Async HTTP server with scrapy and mongodb in python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am trying to plot an array of temperatures for different locations during one day in python, and I want it to be graphed in the format (time, temperature_array). 
I am using matplotlib and currently only know how to graph 1 y value for an x value.\nThe temperature code looks like this:\nTemperatures = [[Temp_array0] [Temp_array1] [Temp_array2]...], where each numbered array corresponds to that time and the temperature values in the array are at different latitudes and longitudes.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1722,"Q_Id":51527766,"Users Score":0,"Answer":"You can simply repeat the x value that is common to all the y values. Suppose:\n[x,x,x,x],[y1,y2,y3,y4]\nFor example, plt.plot([x, x, x, x], [y1, y2, y3, y4], 'o') plots four y values at the same x.","Q_Score":0,"Tags":"python,matplotlib,graph","A_Id":51527863,"CreationDate":"2018-07-25T21:15:00.000","Title":"Python: How to plot an array of y values for one x value in python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"Does anyone have experience with triggering an email from Spotfire based on a condition? Say, a sales figure falls below a certain threshold and an email gets sent to the appropriate distribution list. I want to know how involved this would be to do. I know that it can be done using an IronPython script, but I'm curious if it can be done based on conditions rather than me hitting \"run\"?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":712,"Q_Id":51547675,"Users Score":1,"Answer":"we actually have a product that does exactly this called the Spotfire Alerting Tool. it functions off of Automation Services and allows you to configure various thresholds for any metrics in the analysis, and then can notify users via email or even SMS.\nof course there is the possibility of coding this yourself (the tool is simply an extension developed using the Spotfire SDK) but I can't comment on how to code it.\nthe best way to get this tool is probably to check with your TIBCO sales rep. if you'd like I can try to reach him on your behalf, but I'll need a bit more info from you. please contact me at nmaresco@tibco.com.\nI hope this kind of answer is okay on SO. I don't have a way to reach you privately and this is the best answer I know how to give :)","Q_Score":2,"Tags":"automation,ironpython,spotfire","A_Id":51674710,"CreationDate":"2018-07-26T21:21:00.000","Title":"Triggering email out of Spotfire based on conditions","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am using the 2d interpolation function in scipy to smooth a 2d image. As I understand it, interpolate will return z = f(x,y). What I want to do is find x with known values of y and z. I tried something like this:\nf = interp2d(x,y,z)\nindex = (np.abs(f(:,y) - z)).argmin()\nHowever the interp2d object does not work that way. Any ideas on how to do this?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":102,"Q_Id":51549293,"Users Score":0,"Answer":"I was able to figure this out. yvalue, zvalue, xmin, and xmax are known values. 
By creating a linspace out of the possible values x can take on, a list can be created with all of the corresponding function values. Then using argmin() we can find the closest value in the list to the known z value. \nf = interp2d(x,y,z)\nxnew = numpy.linspace(xmin, xmax)\nfnew = f(xnew, yvalue)\nxindex = (numpy.abs(fnew - zvalue)).argmin()\nxvalue = xnew[xindex]","Q_Score":1,"Tags":"python,numpy,scipy","A_Id":51565999,"CreationDate":"2018-07-27T00:49:00.000","Title":"Scipy interp2d function produces z = f(x,y), I would like to solve for x","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have a minimization problem that is modeled to be solved in Gurobi, via python.\nBesides, I can calculate a \"good\" initial solution for the problem separately, which can be used as an upper bound for the problem.\nWhat I want to do is to make Gurobi use this upper bound, to enhance its efficiency. I mean, if this upper bound can help Gurobi in its search. The point is that I just have the objective value, but not a complete solution.\nCan anybody help me with how to set this upper bound in Gurobi?\nThanks.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":438,"Q_Id":51550870,"Users Score":0,"Answer":"I think that if you can calculate a good solution, you can also know some bound for your variables, even if you don't have the exact solution?","Q_Score":0,"Tags":"python,initialization,gurobi,upperbound","A_Id":51712765,"CreationDate":"2018-07-27T04:42:00.000","Title":"How to set an start solution in Gurobi, when only objective function is known?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"Hello. It seems to me that I just don't understand something quite obvious in databases. \nSo, we have an author that writes books, and the books themselves. One author can write many books, and one book could be written by many authors. \nThus, we have two tables, 'Books' and 'Authors'.\nIn 'Authors' I have an 'ID' (primary key) and 'Name', for example:\n1 - L.Carrol\n2 - D.Brown \nIn 'Books' - 'ID' (pr. key), 'Name' and 'Authors' (and this column is a foreign key to the 'Authors' table ID)\n1 - Some_name - 2 (L.Carol)\n2 - Another_name - 2,1 (D.Brown, L.Carol)\nAnd here is my stumbling block, because I don't understand how to provide the possibility to choose several values from the 'Authors' table for one column in the 'Books' table. But this must be so simple, mustn't it?\nI've read about the many-to-many relationship, and saw many examples with an added extra table to implement it, but I still don't understand how to store multiple values from one table in the other table's column. Please explain the logic; how should I do something like that? I use SQLiteStudio but clear sql is appropriate too. 
Help ^(","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":343,"Q_Id":51572979,"Users Score":3,"Answer":"You should have a third, intermediate table which will have the following columns:\n\nid (primary)\nauthor id (from the Authors table)\nbook id (from the Books table)\n\nThis way you will be able to create a record which maps 1 author to 1 book. So you can have the following records:\n\n1 ... Author1ID ... Book1ID\n2 ... Author1ID ... Book2ID\n3 ... Author2ID ... Book2ID\n\nAuthorXID and BookXID - foreign keys from the corresponding tables.\nSo Book2 has 2 authors, Author1 has 2 books.\nAlso, the separate tables for Books and Authors don't need to contain any info about anything except themselves.\nAuthors .. 1---Many .. BOOKSFORAUTHORS .. Many---1 .. Books","Q_Score":0,"Tags":"python,sql,database,many-to-many,sqlitestudio","A_Id":51573120,"CreationDate":"2018-07-28T15:56:00.000","Title":"Many to many relationship SQLite (studio or sql)","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":1,"ERRORS":0,"REVIEW":0},{"Question":"I might sound like a noob asking this question, but I really want to know how I can get the time for which my screen has been on. Not the system uptime, but the screen uptime. I want to use this time in a python app. So please tell me if there is any way to get that. Thanks in advance.\nEdit - I want to get the time measured from when the display was black due to no activity and we moved the mouse or pressed a key so the screen came up: the display is up, the user is able to read and\/or edit a document or play games. \nThe OS is Windows.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":42,"Q_Id":51576081,"Users Score":0,"Answer":"In Mac OS, ioreg might have the information you're looking for.\nioreg -n IODisplayWrangler -r IODisplayWrangler -w 0 | grep IOPowerManagement","Q_Score":1,"Tags":"python,windows,operating-system,kernel","A_Id":51576697,"CreationDate":"2018-07-28T23:43:00.000","Title":"Screen up time in desktop","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},
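The junction-table design from the books-and-authors answer above, spelled out with the standard-library sqlite3 module; the table and column names follow that answer, and the sample rows mirror the question.

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.executescript("""
        CREATE TABLE Authors (id INTEGER PRIMARY KEY, name TEXT);
        CREATE TABLE Books   (id INTEGER PRIMARY KEY, name TEXT);
        CREATE TABLE BooksForAuthors (
            id        INTEGER PRIMARY KEY,
            author_id INTEGER REFERENCES Authors(id),
            book_id   INTEGER REFERENCES Books(id)
        );
        INSERT INTO Authors VALUES (1, 'L.Carrol'), (2, 'D.Brown');
        INSERT INTO Books   VALUES (1, 'Some_name'), (2, 'Another_name');
        INSERT INTO BooksForAuthors (author_id, book_id)
            VALUES (2, 1), (1, 2), (2, 2);   -- Book 2 has two authors
    """)
    for name, authors in con.execute("""
            SELECT b.name, group_concat(a.name, ', ')
            FROM Books b
            JOIN BooksForAuthors ba ON ba.book_id = b.id
            JOIN Authors a ON a.id = ba.author_id
            GROUP BY b.id"""):
        print(name, "-", authors)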
|\n+---------------+---------------+---------------+---------------+------------+------------+\n\nif I use queryset with filter like start=2018-01-01 result will 07:00\nbut how to get result 12:00 if I Input 2018-01-10 ?...\nthank you!","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":113,"Q_Id":51579732,"Users Score":1,"Answer":"Question isnt too clear, but maybe you're after something like \nstart__lte=2018-01-10, end__gte=2018-01-10?","Q_Score":0,"Tags":"python,django","A_Id":51579769,"CreationDate":"2018-07-29T11:14:00.000","Title":"Django Queryset find data between date","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"After installing Anaconda to C:\\ I cannot open jupyter notebook. Both in the Anaconda Prompt with jupyter notebook and inside the navigator. I just can't make it to work. It doesn't appear any line when I type jupyter notebook iniside the prompt. Neither does the navigator work. Then after that I reinstall Anaconda, didn't work either. \nBut then I try to reinstall jupyter notebook dependently using python -m install jupyter and then run python -m jupyter. It works and connect to the localhost:8888. So my question is that how can I make Jupyter works from Anaconda\nAlso note that my anaconda is not in the environment variable( or %PATH% ) and I have tried reinstalling pyzmq and it didn't solve the problem. I'm using Python 3.7 and 3.6.5 in Anaconda\nMoreover, the spyder works perfectly","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":2335,"Q_Id":51617979,"Users Score":1,"Answer":"You need to activate the anaconda environment first.\nIn terminal: source activate environment_name, (or activate environment_name on windows?)\nthen jupyter notebook\nIf you don't know the env name, do conda list\nto restore the default python environment: source deactivate","Q_Score":1,"Tags":"python,anaconda,jupyter-notebook,jupyter","A_Id":51624703,"CreationDate":"2018-07-31T16:24:00.000","Title":"cannot run jupyter notebook from anaconda but able to run it from python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I've got a content-based recommender that works... fine. I was fairly certain it was the right approach to take for this problem (matching established \"users\" with \"items\" that are virtually always new, but contain known features similar to existing items).\nAs I was researching, I found that virtually all examples of content-based filtering use articles\/movies as an example and look exclusively at using encoded tf-idf features from blocks of text. That wasn't exactly what I was dealing with, but most of my features were boolean features, so making a similar vector and looking at cosine distance was not particularly difficult. I also had one continuous feature, which I scaled and included in the vector. 
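To make the junction-table design from the Books/Authors answer above concrete, here is a minimal sqlite3 sketch; table and column names are illustrative.

# Junction-table (many-to-many) sketch using Python's built-in sqlite3.
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.executescript("""
    CREATE TABLE authors (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE books   (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE books_authors (
        id        INTEGER PRIMARY KEY,
        author_id INTEGER REFERENCES authors(id),
        book_id   INTEGER REFERENCES books(id)
    );
""")
cur.executemany("INSERT INTO authors(id, name) VALUES (?, ?)",
                [(1, "L.Carrol"), (2, "D.Brown")])
cur.executemany("INSERT INTO books(id, name) VALUES (?, ?)",
                [(1, "Some_name"), (2, "Another_name")])
# Book 2 has two authors; author 1 has two books.
cur.executemany("INSERT INTO books_authors(author_id, book_id) VALUES (?, ?)",
                [(1, 1), (1, 2), (2, 2)])
# All authors of book 2:
cur.execute("""SELECT a.name FROM authors a
               JOIN books_authors ba ON ba.author_id = a.id
               WHERE ba.book_id = ?""", (2,))
print(cur.fetchall())   # -> [('L.Carrol',), ('D.Brown',)]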
As I said, it seemed to work, but was pretty iffy, and I think I know part of the reason why...\nThe continuous feature that I'm using is a rating (let's call this \"deliciousness\"), where, in virtually all cases, a better score would indicate an item more favorable for the user. It's continuous, but it also has a clear \"direction\" (not sure if this is the correct terminology). Error in one direction is not the same as error in another.\nI have cases where some users have given high ratings to items with mediocre \"deliciousness\" scores, but logically they would still prefer something that was more delicious. That user's vector might have an average deliciousness of 2.3. My understanding of cosine distance is that in my model, if that user encountered two new items that were exactly the same except that one had a deliciousness of 1.0 and the other had a deliciousness of 4.5, it would actually favor the former because it's a shorter distance between vectors.\nHow do I modify or incorporate some other kind of distance measure here that takes into account that deliciousness error\/distance in one direction is not the same as error\/distance in the other direction?\n(As a secondary question, how do I decide how to best scale this continuous feature next to my boolean features?)","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":98,"Q_Id":51618103,"Users Score":1,"Answer":"There are two basic approaches to solve this:\n(1) Write your own distance function. The obvious approach is to remove the deliciousness element from each vector, evaluating that difference independently. Use cosine similarity on the rest of the vector. Combine that figure with the taste differential as desired.\n(2) Transform your deliciousness data such that the resulting metric is linear. This will allow a \"normal\" distance metric to do its job as expected.","Q_Score":1,"Tags":"python,machine-learning,cosine-similarity,recommender-systems","A_Id":51618254,"CreationDate":"2018-07-31T16:30:00.000","Title":"Handling Error for Continuous Features in a Content-Based Filtering Recommender System","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":1,"REVIEW":0},{"Question":"I'm trying to learn python practically.\nI installed PIP via easy_install and then I wanted to play with some mp3 files so I installed eyed3 via pip while in the project directory. Issue is that it installed the module into python 2.7 which comes standard with mac. I found this out as it keeps telling me that when a script does not run due to missing libraries like libmagic and no matter what I do, it keeps putting any libraries I install into 2.7 thus not being found when running python3. \nMy question is how to I get my system to pretty much ignore the 2.7 install and use the 3.7 install which I have.\nI keep thinking I am doing something wrong as heaps of tutorials breeze over it and only one has so far mentioned that you get clashes between the versions. 
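A minimal numpy sketch of approach (1) above: cosine similarity on the boolean part of the vector plus a one-sided penalty on the deliciousness element, so error in the "less delicious" direction costs more than error in the other. The split index and penalty weight are assumptions to tune.

import numpy as np

def similarity(user_vec, item_vec, delic_idx=-1, weight=0.5):
    # cosine similarity on everything except the deliciousness element
    u_rest = np.delete(user_vec, delic_idx)
    i_rest = np.delete(item_vec, delic_idx)
    cos = u_rest @ i_rest / (np.linalg.norm(u_rest) * np.linalg.norm(i_rest))
    # only penalize items *less* delicious than the user's average
    shortfall = max(0.0, user_vec[delic_idx] - item_vec[delic_idx])
    return cos - weight * shortfall

user   = np.array([1, 0, 1, 1, 2.3])   # last element: avg deliciousness
item_a = np.array([1, 0, 1, 1, 1.0])   # identical features, less delicious
item_b = np.array([1, 0, 1, 1, 4.5])   # identical features, more delicious
print(similarity(user, item_a))        # penalized
print(similarity(user, item_b))        # not penalized -> ranked higher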
I really want to learn python and would appreciate some help getting past this blockage.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":68,"Q_Id":51622876,"Users Score":0,"Answer":"Have you tried pip3 install [module-name]?\nThen you should be able to check which modules you've installed using pip3 freeze.","Q_Score":0,"Tags":"python,eyed3,libmagic","A_Id":51622986,"CreationDate":"2018-07-31T22:16:00.000","Title":"How do i get Mac 10.13 to install modules into a 3.x install instead of 2.7","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I'm doing some work on the data in an excel sheet using python pandas. When I write and save the data it seems that pandas only saves and cares about the raw data on the import. Meaning a lot of stuff I really want to keep such as cell colouring, font size, borders, etc get lost. Does anyone know of a way to make pandas save such things?\nFrom what I've read so far it doesn't appear to be possible. The best solution I've found so far is to use the xlsxwriter to format the file in my code before exporting. This seems like a very tedious task that will involve a lot of testing to figure out how to achieve the various formats and aesthetic changes I need. I haven't found anything but would said writer happen to in any way be able to save the sheet format upon import? \nAlternatively, what would you suggest I do to solve the problem that I have described?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":468,"Q_Id":51626502,"Users Score":0,"Answer":"Separate data from formatting. 
Have a sheet that contains only the data \u2013 that's the one you will be reading\/writing to \u2013 and another that has formatting and reads the data from the first sheet.","Q_Score":1,"Tags":"python,excel,pandas,xlsxwriter","A_Id":51628570,"CreationDate":"2018-08-01T06:16:00.000","Title":"Any way to save format when importing an excel file in Python?","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"qcow2 is an image for qemu and it's good to emulate.\nI know how to write data for qcow2 format, but I don't know how backing files in qcow2 work?\nI found nothing tutorial said this.\nCan anyone give me tips?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":398,"Q_Id":51631247,"Users Score":1,"Answer":"Backing file is external snapshot for qcow2 and the qemu will write COW data in the new image.\nFor example:\nYou have image A and B, and A is backing file of B.\nWhen you mount B to \/dev\/nbd and check its data, you'll find you can saw data of A.\nThat's because if there's no data in the range of B, qemu will read the same range of A.\nAn important notes: If qemu doesn't find A, you won't be able to mount B on \/dev\/nbd.","Q_Score":0,"Tags":"python,qemu","A_Id":51924013,"CreationDate":"2018-08-01T10:39:00.000","Title":"How backing file works in qcow2?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I want to download approximately 50 pdf files from the Internet using a python script. Can Google APIs help me anyhow?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":59,"Q_Id":51654956,"Users Score":0,"Answer":"I am going to assume that you are downloading from Google drive. You can only download one file at a time. You cant batch download of the actual file itself.\nYOu could look into some kind of multi threading system and download the files at the same time that way but you man run into quota issues.","Q_Score":0,"Tags":"python,python-3.x,web-scraping,google-api","A_Id":51663714,"CreationDate":"2018-08-02T13:30:00.000","Title":"how to download many pdf files from google at once using python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have certain files in a directory named benchmarks and I want to get code coverage by running these source files.\nI have tried using source flag in the following ways but it doesn't work.\ncoverage3 run --source=benchmarks\ncoverage3 run --source=benchmarks\/\nOn running, I always get Nothing to do.\nThanks","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1447,"Q_Id":51673082,"Users Score":0,"Answer":"coverage run is like python. 
If you would run a file with python myprog.py, then you can use coverage run myprog.py.","Q_Score":0,"Tags":"ubuntu-14.04,python-3.6,coverage.py","A_Id":51679505,"CreationDate":"2018-08-03T12:50:00.000","Title":"how to use coverage run --source = {dir_name}","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"How can I get the embed of a message to a variable with the ID of the message in discord.py?\nI get the message with uzenet = await client.get_message(channel, id), but I don't know how to get it's embed.","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":5406,"Q_Id":51688392,"Users Score":5,"Answer":"To get the first Embed of your message, as you said that would be a dict():\nembedFromMessage = uzenet.embeds[0]\nTo transfer the dict() into an discord.Embed object:\nembed = discord.Embed.from_data(embedFromMessage)","Q_Score":1,"Tags":"python,python-3.x,discord,discord.py","A_Id":55555605,"CreationDate":"2018-08-04T18:15:00.000","Title":"Discord.py get message embed","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"In my Python script I want to connect to remote server every time. So how can I use my windows credentials to connect to server without typing user ID and password. \nBy default it should read the userid\/password from local system and will connect to remote server.\nI tried with getuser() and getpass() but I have to enter the password everytime. I don't want to enter the password it should take automatically from local system password.\nAny suggestions..","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":183,"Q_Id":51690162,"Users Score":0,"Answer":"I am sorry this is not exactly an answer but I have looked on the web and I do not think you can write a code to automatically open Remote desktop without you having to enter the credentials but can you please edit the question so that I can see the code?","Q_Score":0,"Tags":"python,python-3.x,login,python-requests","A_Id":51690444,"CreationDate":"2018-08-04T22:59:00.000","Title":"How to use Windows credentials to connect remote desktop","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"By default subscribers get email messages once the new task in a project is created. How it can be tailored so that unless the projects has checkbox \"Send e-mail on new task\" checked it will not send e-mails on new task?\nI know how to add a custom field to project.project model. 
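Returning to the Excel-formatting question above: a hedged sketch of the xlsxwriter route, writing the data with pandas and applying formats through the xlsxwriter engine. The file name, sheet name and format choices are illustrative.

import pandas as pd

df = pd.DataFrame({"item": ["a", "b"], "value": [1.5, 2.5]})

with pd.ExcelWriter("out.xlsx", engine="xlsxwriter") as writer:
    df.to_excel(writer, sheet_name="data", index=False)
    book, sheet = writer.book, writer.sheets["data"]
    # formats are defined once on the workbook, then applied to ranges
    bold_red = book.add_format({"bold": True, "font_color": "red", "border": 1})
    sheet.set_column("B:B", 12, bold_red)   # column width 12, formatted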
But don't know the next step.\nWhat action to override to not send the email when a new task is created and \"Send e-mail on new task\" is not checked for project?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":24,"Q_Id":51719138,"Users Score":0,"Answer":"I found that if project has notifications option \"\nVisible by following customers\" enabled then one can configure subscription for each follower. \nTo not receive e-mails when new task is added to the project: unmark the checkbox \"Task opened\" in the \"Edit subscription of User\" form.","Q_Score":0,"Tags":"python,email,task,project,odoo-10","A_Id":52330973,"CreationDate":"2018-08-07T05:02:00.000","Title":"On project task created do not send email","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am making a program that will call python. I would like to add python in my project so users don't have to download python in order to use it, also it will be better to use the python that my program has so users don't have to download any dependency.\nMy program it's going to be writing in C++ (but can be any language) and I guess I have to call the python that is in the same path of my project?\nLet's say that the system where the user is running already has python and he\/she calls 'pip' i want the program to call pip provided by the python give it by my program and install it in the program directory instead of the system's python?\nIt's that possible? If it is how can I do it?\nReal examples:\nThere are programs that offer a terminal where you can execute python to do things in the program like:\n\nMaya by Autodesk\nNuke by The foundry\nHoudini by Side Effects\n\nNote: It has to be Cross-platform solution","AnswerCount":3,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":73,"Q_Id":51738921,"Users Score":1,"Answer":"In order to run python code, the runtime is sufficient. Under Windows, you can use py2exe to pack your program code together with the python runtime and all recessary dependencies. But pip cannot be used and it makes no sense, as you don't want to develop, but only use the python part.\nTo distribute the complete python installation, like Panda3D does, you'll have to include it in the chosen installer software.","Q_Score":0,"Tags":"python,pythonpath,python-packaging,python-config","A_Id":51739051,"CreationDate":"2018-08-08T05:01:00.000","Title":"How can I pack python into my project?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"Please give me a hint about how is it better to code a Python application which helps to organise ideas by tags. \nAdd a new idea:\nInput 1: the idea\nInput 2: corresponding tags\nSearch for the idea:\nInput 1: one or multiple tags\nAs far as I understood, it's necessary to create an array with ideas and an array with tags. But how to connect them? For example, idea number 3 corresponds to tags number 1 and 2. 
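A minimal setup.py sketch for the py2exe bundling approach mentioned above (Windows only, classic distutils usage); "myprog.py" and the options are placeholders.

# Hypothetical setup.py; build with:  python setup.py py2exe
from distutils.core import setup
import py2exe  # registers the "py2exe" command

setup(
    console=["myprog.py"],          # or windows=[...] for GUI apps
    options={"py2exe": {"bundle_files": 1, "compressed": True}},
)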
So the question is: how to link these two arrays in the most simple and elegant way?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":159,"Q_Id":51739732,"Users Score":0,"Answer":"Have two dictionaries:\n\nIdea -> Set of Tags\nTag -> Set of Ideas\n\nWhen you add a new idea, add it to the first dictionary, and then update all the sets of the tags it uses in the second dictionary. This way you get easy lookup by both tag and idea.","Q_Score":0,"Tags":"python,logic,tagging","A_Id":51760473,"CreationDate":"2018-08-08T06:15:00.000","Title":"Python app to organise ideas by tags","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":1},{"Question":"I'm trying to do image classification with the Inception V3 model. Does ImageDataGenerator from Keras create new images which are added onto my dataset? If I have 1000 images, will using this function double it to 2000 images which are used for training? Is there a way to know how many images were created and now fed into the model?","AnswerCount":7,"Available Count":2,"Score":0.057080742,"is_accepted":false,"ViewCount":20435,"Q_Id":51748514,"Users Score":2,"Answer":"Also note that: These augmented images are not stored in the memory, they are generated on the fly while training and lost after training. You can't read again those augmented images. \nNot storing those images is a good idea because we'd run out of memory very soon storing huge no of images","Q_Score":39,"Tags":"python,tensorflow,machine-learning,keras,computer-vision","A_Id":51754972,"CreationDate":"2018-08-08T13:54:00.000","Title":"Does ImageDataGenerator add more images to my dataset?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I'm trying to do image classification with the Inception V3 model. Does ImageDataGenerator from Keras create new images which are added onto my dataset? If I have 1000 images, will using this function double it to 2000 images which are used for training? 
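A minimal sketch of the two-dictionary index described in the answer above, using defaultdict(set) so both lookups stay in sync when an idea is added.

from collections import defaultdict

tags_by_idea = defaultdict(set)   # idea -> set of tags
ideas_by_tag = defaultdict(set)   # tag  -> set of ideas

def add_idea(idea, tags):
    tags_by_idea[idea].update(tags)
    for tag in tags:
        ideas_by_tag[tag].add(idea)

def search(*tags):
    # ideas that carry *all* of the given tags
    sets = [ideas_by_tag[t] for t in tags]
    return set.intersection(*sets) if sets else set()

add_idea("idea3", ["tag1", "tag2"])
add_idea("idea4", ["tag2"])
print(search("tag1", "tag2"))   # {'idea3'}
print(search("tag2"))           # {'idea3', 'idea4'}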
Is there a way to know how many images were created and now fed into the model?","AnswerCount":7,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":20435,"Q_Id":51748514,"Users Score":0,"Answer":"Let me try to tell you in the easiest way possible, with the help of an example:\n\nYou have a set of 500 images.\nYou applied the ImageDataGenerator to the dataset with batch_size = 25.\nNow you run your model for, let's say, 5 epochs with\nsteps_per_epoch = total_samples\/batch_size\nSo steps_per_epoch will be equal to 20.\nNow your model will run on all 500 images (randomly transformed according to the instructions provided to ImageDataGenerator) in each epoch.","Q_Score":39,"Tags":"python,tensorflow,machine-learning,keras,computer-vision","A_Id":68196196,"CreationDate":"2018-08-08T13:54:00.000","Title":"Does ImageDataGenerator add more images to my dataset?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am developing a small web application with Flask. This application needs a DSL that can express the content of .pdf files.\nI have developed a DSL with JetBrains MPS, but now I'm not sure how to use it in my web application. Is it possible? Or should I consider switching to another DSL, or making my DSL directly in Python?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1223,"Q_Id":51762908,"Users Score":4,"Answer":"If you want to use MPS in the web frontend, the simple answer is: no. \nSince MPS is a projectional editor, it needs a projection engine so that the user can interact with the program\/model. The projection engine of MPS is built in Java for desktop applications. There have been some efforts to put MPS on the web and build a JavaScript\/HTML projection engine, but none of the work is complete. So unless you build something like that, there is no way to use MPS in the frontend. \nIf your DSL is textual anyway and doesn't leverage the projectional nature of MPS, I would go down the text DSL road with specialised tooling for that, e.g. Python, as you suggested, or Xtext.","Q_Score":6,"Tags":"python,flask,dsl,jetbrains-ide,mps","A_Id":51779632,"CreationDate":"2018-08-09T09:03:00.000","Title":"Can I use JetBrains MPS in a web application?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I'm using Odoo version 9 and I've created a module to customize the purchase order reports. Among the fields that I want displayed in the reports is the supplier reference for the article, but when I add the code that displays this field, I get an error when I try to print the report:\nQWebException: \"Expected singleton: purchase.order.line(57, 58, 59, 60, 61, 62, 63, 64)\" while evaluating\n\"', '.join([str(x.product_code) for x in o.order_line.product_id.product_tmpl_id.seller_ids])\"\nPS: I didn't change anything in the purchase module. 
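A hedged Keras sketch of the arithmetic in the ImageDataGenerator answer above: 500 images with batch_size = 25 gives steps_per_epoch = 20, and the generator transforms images on the fly without storing them. The directory path and augmentation parameters are placeholders.

from tensorflow.keras.preprocessing.image import ImageDataGenerator

total_samples, batch_size = 500, 25
datagen = ImageDataGenerator(rescale=1.0 / 255, rotation_range=20,
                             horizontal_flip=True)
train_gen = datagen.flow_from_directory("data/train",        # placeholder path
                                        target_size=(150, 150),
                                        batch_size=batch_size,
                                        class_mode="binary")
steps = total_samples // batch_size   # 20: each epoch sees every image once
# model.fit(train_gen, steps_per_epoch=steps, epochs=5)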
\nAny idea how to fix this, please?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":125,"Q_Id":51764170,"Users Score":1,"Answer":"It is because your purchase order has several order lines, while the expression assumes the order has only one.\no.order_line.product_id.product_tmpl_id.seller_ids \nwill work only if there is one order line; otherwise you have to loop through each order line. Here o.order_line holds multiple order lines, and product_id comes from each of them. If you try o.order_line[0].product_id.product_tmpl_id.seller_ids, it will work, but you will get only the first order line's details. In order to get all the order line details, you need to loop through them.","Q_Score":0,"Tags":"python-2.7,odoo-9","A_Id":51797438,"CreationDate":"2018-08-09T10:03:00.000","Title":"How to solve error Expected singleton: purchase.order.line (57, 58, 59, 60, 61, 62, 63, 64)","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":1,"REVIEW":0},{"Question":"As we know, when using TensorFlow to save a checkpoint, we get 3 files, e.g.:\nmodel.ckpt.data-00000-of-00001\nmodel.ckpt.index\nmodel.ckpt.meta\nI checked the Faster R-CNN repo and found that it has an evaluation.py script which helps evaluate the pre-trained model, but the script only accepts a .ckpt file (as they provided some pre-trained models above).\nI have run some finetuning from their pre-trained model.\nI wonder if there's a way to convert the .data-00000-of-00001, .index and .meta files into one single .ckpt file to run the evaluate.py script on the checkpoint?\n(I also notice that the pre-trained models they provided in the repo have only 1 .ckpt file; how can they do that when the save-checkpoint function generates 3 files?)","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":658,"Q_Id":51782871,"Users Score":0,"Answer":"These \n{\nmodel.ckpt.data-00000-of-00001\nmodel.ckpt.index\nmodel.ckpt.meta\n}\nare the more recent checkpoint format,\nwhile \n{model.ckpt} \nis a previous checkpoint format.\nIt is the same concept as converting a Nintendo Switch to an NES... or a 3-piece CD bundle to a single ROM cartridge...","Q_Score":0,"Tags":"python,tensorflow","A_Id":59889359,"CreationDate":"2018-08-10T09:07:00.000","Title":"how to convert tensorflow .meta .data .index to .ckpt file?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I need help writing a script that configures an application's (VLC) settings to my needs, without my having to do it manually. The reason for this is that I will eventually need to start this application on boot with the correct settings already configured. 
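A sketch of the loop fix described in the singleton answer above, as it might appear in the QWeb template expression; o is the report's order record, as in the original traceback, so this fragment only runs inside that report context.

# Iterate over every order line instead of treating o.order_line as a
# single record; field names are taken from the question's expression.
product_codes = ', '.join(
    str(seller.product_code)
    for line in o.order_line                                   # each order line
    for seller in line.product_id.product_tmpl_id.seller_ids   # each seller
)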
\nSteps I need done in the script.\n1) I need to open the application.\n2) Open the \u201cOpen Network Stream\u2026\u201d tab (Can be done with Ctrl+N).\n3) Type a string of characters \u201cString of characters\u201d\n4) Push \u201cEnter\u201d twice on the keyboard.\nI\u2019ve checked various websites across the internet and could not find any information regarding this. I am sure it\u2019s possible but I am new to writing scripts and not too experienced. Are commands like the steps above possible to be completed in a script?\nNote: Using Linux based OS (Raspbian).\nThank you.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":20,"Q_Id":51791754,"Users Score":0,"Answer":"Do whichever changes you want manually once on an arbitrary system, then make a copy of the application's configuration files (in this case ~\/.config\/vlc)\nWhen you want to replicate the settings on a different machine, simply copy the settings to the same location.","Q_Score":0,"Tags":"python,shell,vlc","A_Id":51791818,"CreationDate":"2018-08-10T17:54:00.000","Title":"How do I write a script that configures an applications settings for me?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":1,"ERRORS":0,"REVIEW":0},{"Question":"Since Text(Tk(), image=\"somepicture.png\") is not an option on text boxes, I was wondering how I could make bg= a .png image. Or any other method of allowing a text box to stay a text box, with an image in the background so it can blend into a its surroundings.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":325,"Q_Id":51794733,"Users Score":1,"Answer":"You cannot use an image as a background in a text widget.\nThe best you can do is to create a canvas, place an image on the canvas, and then create a text item on top of that. Text items are editable, but you would have to write a lot of bindings, and you wouldn't have nearly as many features as the text widget. 
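A small sketch of the copy-the-config approach from the VLC answer above, using shutil; the paths assume a Linux/Raspbian home directory and are placeholders.

import shutil
from pathlib import Path

config = Path.home() / ".config" / "vlc"        # VLC settings directory
backup = Path.home() / "vlc-config-backup"

# 1) after configuring VLC manually once, take a snapshot:
shutil.copytree(config, backup, dirs_exist_ok=True)

# 2) on the target machine, restore it before first launch:
shutil.copytree(backup, config, dirs_exist_ok=True)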
In short, it would be a lot of work.","Q_Score":1,"Tags":"python,image,tkinter,textbox","A_Id":51795473,"CreationDate":"2018-08-10T22:27:00.000","Title":"Python\/Tkinter - Making The Background of a Textbox an Image?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I used to install pyenv by homebrew to manage versions of python, but now, I want to use anaconda.But I don't know how to uninstall pyenv.Please tell me.","AnswerCount":3,"Available Count":2,"Score":0.1973753202,"is_accepted":false,"ViewCount":69156,"Q_Id":51797189,"Users Score":3,"Answer":"Try removing it using the following command:\nbrew remove pyenv","Q_Score":31,"Tags":"python,macos,homebrew,pyenv","A_Id":51797223,"CreationDate":"2018-08-11T06:44:00.000","Title":"how to uninstall pyenv(installed by homebrew) on Mac","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I used to install pyenv by homebrew to manage versions of python, but now, I want to use anaconda.But I don't know how to uninstall pyenv.Please tell me.","AnswerCount":3,"Available Count":2,"Score":0.1973753202,"is_accepted":false,"ViewCount":69156,"Q_Id":51797189,"Users Score":3,"Answer":"None work for me (under brew) under Mac Cataline. \nThey have a warning about file missing under .pyenv. \n(After I removed the bash_profile lines and also rm -rf ~\/.pyenv,\nI just install Mac OS version of python under python.org and seems ok.\nSeems get my IDLE work and ...","Q_Score":31,"Tags":"python,macos,homebrew,pyenv","A_Id":58908094,"CreationDate":"2018-08-11T06:44:00.000","Title":"how to uninstall pyenv(installed by homebrew) on Mac","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I cannot find the way to install pandas for sublimetext. Do you might know how?\nThere is something called pandas theme in the package control, but that was not the one I needed; I need the pandas for python for sublimetext.","AnswerCount":2,"Available Count":2,"Score":-0.0996679946,"is_accepted":false,"ViewCount":3063,"Q_Id":51798041,"Users Score":-1,"Answer":"You can install this awesome theme through the Package Control.\n\nPress cmd\/ctrl + shift + p to open the command palette.\nType \u201cinstall package\u201d and press enter. Then search for \u201cPanda Syntax Sublime\u201d\n\nManual installation\n\nDownload the latest release, extract and rename the directory to \u201cPanda Syntax\u201d.\nMove the directory inside your sublime Packages directory. 
(Preferences > Browse packages\u2026)\n\nActivate the theme\nOpen you preferences (Preferences > Setting - User) and add this lines:\n\"color_scheme\": \"Packages\/Panda Syntax Sublime\/Panda\/panda-syntax.tmTheme\"\nNOTE: Restart Sublime Text after activating the theme.","Q_Score":0,"Tags":"python,pandas,sublimetext3","A_Id":51798479,"CreationDate":"2018-08-11T08:48:00.000","Title":"How to install pandas for sublimetext?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I cannot find the way to install pandas for sublimetext. Do you might know how?\nThere is something called pandas theme in the package control, but that was not the one I needed; I need the pandas for python for sublimetext.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":3063,"Q_Id":51798041,"Users Score":0,"Answer":"For me, \"pip install pandas\" was not working, so I used pip3 install pandas which worked nicely.\nI would advise using either pip install pandas or pip3 install pandas for sublime text","Q_Score":0,"Tags":"python,pandas,sublimetext3","A_Id":62584783,"CreationDate":"2018-08-11T08:48:00.000","Title":"How to install pandas for sublimetext?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have a site www.domain.com and wanted to get all of the urls from my entire website and how many times they have been clicked on, from the Google Analytics API. \nI am especially interested in some of my external links (the ones that don't have www.mydomain.com). I will then match this against all of the links on my site (I somehow need to get these from somewhere so may scrape my own site).\nI am using Python and wanted to do this programmatically. Does anyone know how to do this?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":346,"Q_Id":51800600,"Users Score":1,"Answer":"I have a site www.domain.com and wanted to get all of the urls from my\n entire website and how many times they have been clicked on\n\nI guess you need parameter Page and metric Pageviews\n\nI am especially interested in some of my external links\n\nYou can get list of external links if you track they as events. \nTry to use some crawler, for example Screaming Frog. It allows to get internal and external links. Free use up to 500 pages.","Q_Score":0,"Tags":"python,google-analytics,web-crawler,google-analytics-api","A_Id":51801255,"CreationDate":"2018-08-11T14:25:00.000","Title":"Can I get a list of all urls on my site from the Google Analytics API?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have a wrf output netcdf file.File have variables temp abd prec.Dimensions keys are time, south-north and west-east. 
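Returning to the Google Analytics answer above: a hedged sketch of pulling the Page dimension and Pageviews metric with the Reporting API v4; VIEW_ID and key.json are placeholders for your own view and service-account key.

from google.oauth2 import service_account
from googleapiclient.discovery import build

creds = service_account.Credentials.from_service_account_file(
    "key.json", scopes=["https://www.googleapis.com/auth/analytics.readonly"])
analytics = build("analyticsreporting", "v4", credentials=creds)

report = analytics.reports().batchGet(body={
    "reportRequests": [{
        "viewId": "VIEW_ID",
        "dateRanges": [{"startDate": "30daysAgo", "endDate": "today"}],
        "metrics": [{"expression": "ga:pageviews"}],
        "dimensions": [{"name": "ga:pagePath"}],
    }]
}).execute()

for row in report["reports"][0]["data"].get("rows", []):
    print(row["dimensions"][0], row["metrics"][0]["values"][0])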
So how I select different lat long value in region. The problem is south-north and west-east are not variable. I have to find index value of four lat long value","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":26,"Q_Id":51807792,"Users Score":0,"Answer":"1) Change your Registry files (I think it is Registry.EM_COMMON) so that you print latitude and longitude in your wrfout_d01_time.nc files.\n2) Go to your WRFV3 map.\n3) Clean, configure and recompile.\n4) Run your model again the way you are used to.","Q_Score":0,"Tags":"python,netcdf4","A_Id":51878303,"CreationDate":"2018-08-12T10:05:00.000","Title":"Data extraction from wef output file","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"Python developers\nI am working on spectroscopy in a university. My experimental 1-D data sometimes shows \"cosmic ray\", 3-pixel ultra-high intensity, which is not what I want to analyze. So I want to remove this kind of weird peaks.\nDoes anybody know how to fix this issue in Python 3?\nThanks in advance!!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":455,"Q_Id":51812233,"Users Score":0,"Answer":"The answer depends a on what your data looks like: If you have access to two-dimensional CCD readouts that the one-dimensional spectra were created from, then you can use the lacosmic module to get rid of the cosmic rays there. If you have only one-dimensional spectra, but multiple spectra from the same source, then a quick ad-hoc fix is to make a rough normalisation of the spectra and remove those pixels that are several times brighter than the corresponding pixels in the other spectra. If you have only one one-dimensional spectrum from each source, then a less reliable option is to remove all pixels that are much brighter than their neighbours. (Depending on the shape of your cosmics, you may even want to remove the nearest 5 pixels or something, to catch the wings of the cosmic ray peak as well).","Q_Score":0,"Tags":"python-3.x","A_Id":57378910,"CreationDate":"2018-08-12T19:39:00.000","Title":"Cosmic ray removal in spectra","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I initially started learning Python in Spyder, but decided to switch to PyCharm recently, hence I'm learning PyCharm with a Spyder-like mentality. \nI'm interested in running a file in the Python console, but every time I rerun this file, it will run under a newly opened Python console. This can become annoying after a while, as there will be multiple Python consoles open which basically all do the same thing but with slight variations. \nI would prefer to just have one single Python console and run an entire file within that single console. Would anybody know how to change this? Perhaps the mindset I'm using isn't very PyCharmic?","AnswerCount":6,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":13127,"Q_Id":51831152,"Users Score":0,"Answer":"One console is one instance of Python being run on your system. 
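Returning to the cosmic-ray answer above: a hedged numpy/scipy sketch of the "much brighter than its neighbours" heuristic for a single 1-D spectrum; the window size and threshold are assumptions to tune for your data.

import numpy as np
from scipy.ndimage import median_filter

def despike(spectrum, window=7, threshold=5.0):
    smooth = median_filter(spectrum, size=window)   # local median baseline
    residual = spectrum - smooth
    noise = np.median(np.abs(residual)) + 1e-12     # robust noise estimate
    spikes = residual > threshold * noise           # one-sided: bright spikes only
    cleaned = spectrum.copy()
    cleaned[spikes] = smooth[spikes]                # replace spikes with baseline
    return cleaned, spikes

rng = np.random.default_rng(0)
spec = np.sin(np.linspace(0, 6, 500)) + 0.05 * rng.standard_normal(500)
spec[200:203] += 50.0                               # a 3-pixel "cosmic ray"
cleaned, spikes = despike(spec)
print(spikes.sum())                                 # ~3 flagged pixels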
If you want to run different variations of code within the same Python kernel, you can highlight the code you want to run and then choose the run option (Alt+Shift+F10 default).","Q_Score":15,"Tags":"python,console,pycharm","A_Id":51831201,"CreationDate":"2018-08-13T21:59:00.000","Title":"PyCharm running Python file always opens a new console","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I initially started learning Python in Spyder, but decided to switch to PyCharm recently, hence I'm learning PyCharm with a Spyder-like mentality. \nI'm interested in running a file in the Python console, but every time I rerun this file, it will run under a newly opened Python console. This can become annoying after a while, as there will be multiple Python consoles open which basically all do the same thing but with slight variations. \nI would prefer to just have one single Python console and run an entire file within that single console. Would anybody know how to change this? Perhaps the mindset I'm using isn't very PyCharmic?","AnswerCount":6,"Available Count":3,"Score":0.0333209931,"is_accepted":false,"ViewCount":13127,"Q_Id":51831152,"Users Score":1,"Answer":"To allow only one instance to run, go to \"Run\" in the top bar, then \"Edit Configurations...\". Finally, check \"Single instance only\" at the right side. This will run only one instance and restart every time you run.","Q_Score":15,"Tags":"python,console,pycharm","A_Id":51831540,"CreationDate":"2018-08-13T21:59:00.000","Title":"PyCharm running Python file always opens a new console","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I initially started learning Python in Spyder, but decided to switch to PyCharm recently, hence I'm learning PyCharm with a Spyder-like mentality. \nI'm interested in running a file in the Python console, but every time I rerun this file, it will run under a newly opened Python console. This can become annoying after a while, as there will be multiple Python consoles open which basically all do the same thing but with slight variations. \nI would prefer to just have one single Python console and run an entire file within that single console. Would anybody know how to change this? 
Perhaps the mindset I'm using isn't very PyCharmic?","AnswerCount":6,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":13127,"Q_Id":51831152,"Users Score":0,"Answer":"You have an option to Rerun the program.\nSimply open and navigate to currently running app with:\n\nAlt+4 (Windows)\n\u2318+4 (Mac)\n\nAnd then rerun it with:\n\nCtrl+R (Windows)\n\u2318+R (Mac)\n\nAnother option:\nShow actions popup:\n\nCtrl+Shift+A (Windows)\n\u21e7+\u2318+A (Mac)\n\nAnd type Rerun ..., IDE then hint you with desired action, and call it.","Q_Score":15,"Tags":"python,console,pycharm","A_Id":51831272,"CreationDate":"2018-08-13T21:59:00.000","Title":"PyCharm running Python file always opens a new console","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have a simple Python code for a machine learning project. I have a relatively big database of spontaneous speech. I started to train my speech model. Since it's a huge database I let it work overnight. In the morning I woke up and saw a mysterious\nKilled: 9\nline in my Terminal. Nothing else. There is no other error message or something to work with. The code run well for about 6 hours which is 75% of the whole process so I really don't understand whats went wrong.\nWhat is Killed:9 and how to fix it? It's very frustrating to lose hours of computing time...\nI'm on macOS Mojave beta if it's matter. Thank you in advance!","AnswerCount":2,"Available Count":1,"Score":-0.0996679946,"is_accepted":false,"ViewCount":16683,"Q_Id":51833310,"Users Score":-1,"Answer":"Try to change the node version.\nIn my case, that helps.","Q_Score":12,"Tags":"python,macos,machine-learning,terminal","A_Id":72140478,"CreationDate":"2018-08-14T03:28:00.000","Title":"What is Killed:9 and how to fix in macOS Terminal?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":1,"REVIEW":0},{"Question":"I am fairly proficient in Python and have started exploring the requests library to formulate simple HTTP requests. I have also taken a look at Sessions objects that allow me to login to a website and -using the session key- continue to interact with the website through my account.\nHere comes my problem: I am trying to build a simple API in Python to perform certain actions that I would be able to do via the website. 
However, I do not know how certain HTTP requests need to look like in order to implement them via the requests library.\nIn general, when I know how to perform a task via the website, how can I identify:\n\nthe type of HTTP request (GET or POST will suffice in my case)\nthe URL, i.e where the resource is located on the server\nthe body parameters that I need to specify for the request to be successful","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":119,"Q_Id":51863463,"Users Score":0,"Answer":"This has nothing to do with python, but you can use a network proxy to examine your requests.\n\nDownload a network proxy like Burpsuite\nSetup your browser to route all traffic through Burpsuite (default is localhost:8080)\nDeactivate packet interception (in the Proxy tab)\nBrowse to your target website normally\nExamine the request history in Burpsuite. You will find every information you need","Q_Score":0,"Tags":"python,api,http,python-requests","A_Id":51863598,"CreationDate":"2018-08-15T17:19:00.000","Title":"Identifying parameters in HTTP request","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I read the data from oracle database to panda dataframe, then, there are some columns with type 'object', then I write the dataframe to hive table, these 'object' types are converted to 'binary' type, does any one know how to solve the problem?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":42,"Q_Id":51869125,"Users Score":0,"Answer":"When you read data from oracle to dataframe it's created columns with object datatypes.\nYou can ask pandas dataframe try to infer better datatypes (before saving to Hive) if it can:\ndataframe.infer_objects()","Q_Score":0,"Tags":"python-2.7,hive","A_Id":51877123,"CreationDate":"2018-08-16T03:16:00.000","Title":"Why there is binary type after writing to hive table","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"Can you tell me what is the use of jupyter cluster. I created jupyter cluster,and established its connection.But still I'm confused,how to use this cluster effectively?\nThank you","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":9087,"Q_Id":51869574,"Users Score":4,"Answer":"With Jupyter Notebook cluster, you can run notebook on the local machine and connect to the notebook on the cluster by setting the appropriate port number. Example code:\n\nGo to Server using ssh username@ip_address to server.\nSet up the port number for running notebook. On remote terminal run jupyter notebook --no-browser --port=7800\nOn your local terminal run ssh -N -f -L localhost:8001:localhost:7800 username@ip_address of server. 
\nOpen web browser on local machine and go to http:\/\/localhost:8001\/","Q_Score":3,"Tags":"python,python-3.x,jupyter-notebook,cluster-computing,jupyter","A_Id":51869653,"CreationDate":"2018-08-16T04:22:00.000","Title":"What is the use of Jupyter Notebook cluster","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have a series of points in two 3D systems. With them, I use np.linalg.lstsq to calculate the affine transformation matrix (4x4) between both. However, due to my project, I have to \"disable\" the shear in the transform. Is there a way to decompose the matrix into the base transformations? I have found out how to do so for Translation and Scaling but I don't know how to separate Rotation and Shear. \nIf not, is there a way to calculate a transformation matrix from the points that doesn't include shear?\nI can only use numpy or tensorflow to solve this problem btw.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1921,"Q_Id":51876622,"Users Score":1,"Answer":"I'm not sure I understand what you're asking.\nAnyway If you have two sets of 3D points P and Q, you can use Kabsch algorithm to find out a rotation matrix R and a translation vector T such that the sum of square distances between (RP+T) and Q is minimized.\nYou can of course combine R and T into a 4x4 matrix (of rotation and translation only. without shear or scale).","Q_Score":1,"Tags":"python-3.x,numpy,transformation,affinetransform,matrix-decomposition","A_Id":51876979,"CreationDate":"2018-08-16T12:03:00.000","Title":"How to decompose affine matrix?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I was running a cell in a Jupyter Notebook for a while and decided to interrupt. However, it still continues to run and I don't know how to proceed to have the thing interrupted...\nThanks for help","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":6847,"Q_Id":51877700,"Users Score":3,"Answer":"Sometimes this happens, when you are on a GPU accelerated machine, where the Kernel is waiting for some GPU operation to be finished. I noticed this even on AWS instances.\nThe best thing you can do is just to wait. In the most cases it will recover and finish at some point. If it does not, at least it will tell you the kernel died after some minutes and you don\u00b4t have to copy paste your notebook, to back up your work. In rare cases, you have to kill your python process manually.","Q_Score":3,"Tags":"python,jupyter-notebook","A_Id":51897922,"CreationDate":"2018-08-16T13:00:00.000","Title":"Jupyter notebook kernel does not want to interrupt","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"i am trying to recognise discord emotes. 
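A hedged numpy sketch of the Kabsch algorithm mentioned in the answer above: it recovers a rotation R and translation T (no shear or scale) minimizing the squared distance between R*P + T and Q, where P and Q are (N, 3) arrays of corresponding points.

import numpy as np

def kabsch(P, Q):
    cp, cq = P.mean(axis=0), Q.mean(axis=0)       # centroids
    H = (P - cp).T @ (Q - cq)                     # covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # avoid reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    T = cq - R @ cp
    return R, T

# quick self-check with a known rigid transform
rng = np.random.default_rng(1)
P = rng.random((10, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
Q = P @ R_true.T + np.array([1.0, 2.0, 3.0])
R, T = kabsch(P, Q)
print(np.allclose(P @ R.T + T, Q))                # True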
\nThey are always between two : and don't contain space. e.g.\n:smile:\nI know how to split strings at delimiters, but how do i only split tokens that are within exactly two : and contain no space? \nThanks in advance!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":76,"Q_Id":51887465,"Users Score":0,"Answer":"Thanks to @G_M i found the following solution:\n\n regex = re.compile(r':[A-Za-z0-9]+:')\n result = regex.findall(message.content)\n\nWill give me a list with all the emotes within a message, independent of where they are within the message.","Q_Score":0,"Tags":"python,python-3.x,discord","A_Id":51902676,"CreationDate":"2018-08-17T02:02:00.000","Title":"find token between two delimiters - discord emotes","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have an Apache server A set up that currently hosts a webpage of a bar chart (using Chart.js). This data is currently pulled from a local SQLite database every couple seconds, and the web chart is updated.\nI now want to use a separate server B on a Raspberry Pi to send data to the server to be used for the chart, rather than using the database on server A.\nSo one server sends a file to another server, which somehow realises this and accepts it and processes it.\nThe data can either be sent and placed into the current SQLite database, or bypass the database and have the chart update directly from the Pi's sent information.\nI have come across HTTP Post requests, but not sure if that's what I need or quite how to implement it.\nI have managed to get the Pi to simply host a json file (viewable from the external ip address) and pull the data from that with a simple requests.get('ip_address\/json_file') in Python, but this doesn't seem like the most robust or secure solution.\nAny help with what I should be using much appreciated, thanks!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":32,"Q_Id":51897917,"Users Score":0,"Answer":"Maybe I didn't quite understand your request but this is the solution I imagined:\n\nYou create a Frontend with WebSocket support that connects to Server A\nServer B (the one running on the raspberry) sends a POST request\nwith the JSON to Server A\nServer A accepts the JSON and sends it to all clients connected with the WebSocket protocol\n\nServer B ----> Server A <----> Frontend\nThis way you do not expose your Raspberry directly and every request made by the Frontend goes only to Server A.\nTo provide a better user experience you could also create a GET endpoint on Server A to retrieve the latest received JSON, so that when the user loads the Frontend for the first time it calls that endpoint and even if the Raspberry has yet to update the data at least the user can have an insight of the latest available data.","Q_Score":1,"Tags":"javascript,php,python,html,apache","A_Id":51898164,"CreationDate":"2018-08-17T14:49:00.000","Title":"Post file from one server to another","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web 
Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have a variable target_test (for machine learning) and I'd like to display just one element of target_test.\ntype(target_test) print the following statement on the terminal : \nclass 'pandas.core.series.Series'\nIf I do print(target_test) then I get the entire 2 vectors that are displayed. \nBut I'd like to print just the second element of the first column for example.\nSo do you have an idea how I could do that ?\nI convert target_test to frame or to xarray but it didn't change the error I get.\nWhen I write something like : print(targets_test[0][0]) \nI got the following output : \nTypeError: 'instancemethod' object has no attribute '__getitem__'","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1594,"Q_Id":51898845,"Users Score":1,"Answer":"For the first column, you can use targets_test.keys()[i], for the second one targets_test.values[i] where i is the row starting from 0.","Q_Score":0,"Tags":"python,pandas","A_Id":51898927,"CreationDate":"2018-08-17T15:42:00.000","Title":"How to display a pandas Series in Python?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":1,"REVIEW":0},{"Question":"I'm trying to get the generated URL of a file in a test model I've created,\nand I'm trying to get the correct url of the file by: modelobject.file.url which does give me the correct url if the file is public, however if the file is private it does not automatically generate a signed url for me, how is this normally done with django-storages? \nIs the API supposed to automatically generate a signed url for private files? I am getting the expected Access Denied Page for 'none' signed urls currently, and need to get the signed 'volatile' link to the file.\nThanks in advance","AnswerCount":1,"Available Count":1,"Score":1.0,"is_accepted":false,"ViewCount":941,"Q_Id":51913046,"Users Score":6,"Answer":"I've figured out what I needed to do,\nin the Private Storage class, I forgot to put custom_domain = False originally left this line off, because I did not think I needed it however you absolutely do in order to generate signed urls automatically.","Q_Score":2,"Tags":"python,django,boto3","A_Id":51913176,"CreationDate":"2018-08-18T22:38:00.000","Title":"django-storages boto3 accessing file url of a private file","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":1},{"Question":"I'm using the DRF and ReactJS and I am trying to login with Patreon using \ndjango-rest-framework-social-oauth2. \nIn React, I send a request to the back-end auth\/login\/patreon\/ and I reach the Patreon OAuth screen where I say I want to login with PAtreon. Patreon then returns with a request to the back-end at accounts\/profile. At this point a python-social-oauth user has also been created.\nAt this point I'm confused. 
How do I make a request to Patreon to login, create a user in the back-end, and return the session information to the react front-end so that I can include the session information in all following requests from the front-end? I don't want the returned request to be at the backend\/accounts\/profile, do I?\nUpdate\nI now realize I can set the redirect url with LOGIN_REDIRECT_URL but still, how do I now retrieve the session id, pass it to the front-end, and include it with all requests?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":691,"Q_Id":51922459,"Users Score":0,"Answer":"Once you receive the user profile email, unique id, and other details from Patreon then create a user at the Database level.\nNow after creating a user at the Database level you have to log in the user using the Django login function or any other login mechanism before redirecting the user to the frontend with a session. The redirect URL for the home\/ landing page is provided by the Frontend side where they want to land the user after being successfully logged with session-id being set in cookies. Onward Frontend side can use session id in cookies for other requests.\nHere is the flow:\n\nReact JS -> auth\/login\/patreon\/ -> redirected to Patreon -> Redirected back to the Backend with user information -> Create User (DB level) -> Login user -> Redirect back to Frontend (React JS on a specific URL provided by Front end)","Q_Score":9,"Tags":"django,reactjs,django-rest-framework,python-social-auth","A_Id":70682638,"CreationDate":"2018-08-19T22:55:00.000","Title":"Django - DRF (django-rest-framework-social-oauth2) and React creating a user","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am teaching a class that uses VScode.\nI am used to teaching using IDLE, and it is very nice for the students to be able to call their defined functions and run snippets of code in a python terminal, for debugging purposes.\nIn VScode, they I have been unable to do the same in a satisfactory way.\nOption1: I can select all code, right click and run selection\/line on terminal. This works for small snippets, but I cannot do it for the whole file (even after selecting the whole file with ctrl-A). On linux, this works, but on windows, it does not, unfortunately (and my students use windows)\nOption2: I can use the debug console. This requires adding a breakpoint in one of the last lines of the file, and does not offer tab completion. It works, but is less convenient than IDLE.\nOption 3: I can also add the commands to run to the bottom of the file (which is a least preferred alternative, given that is forgoes the interativity of the read-print-eval loop).\nIs there any better solution? Installing a VScode extension would not be a problem.","AnswerCount":1,"Available Count":1,"Score":0.537049567,"is_accepted":false,"ViewCount":254,"Q_Id":51934119,"Users Score":3,"Answer":"Visual Code is just a text editor like your traditional notepad. to run and debug any kind program you need to install the particular extension for the programming language.\nIn your case you are using python so you need to install the extension of it. the best one is the \"Python\" which is developed by microsoft itself. 
go to your extensions manager and install this extension. right click and click \"run python file in terminal\" and you are all set.\nthis will run exactly as they run from the idle(which is default IDE provided by python itself) you can enter the arguments from the console itself. according to me this is the best way to run and debug python programs in VScode.\nanother way is that VScode shows which python version is installed on your computer on the left bottom side, click on it and the programs will use this interpreter.\nout of all the ways listed here and many others, the best method is to run the program in the terminal which is the recommend by python itself and many other programmers.\nthis method is very simple. what you have to do is open up your command prompt and type the path where python.exe is installed and the type the path of the your program as the argument and press enter. you are done !\nex : C:\\Python27\\python.exe C:\\Users\\Username\\Desktop\\my_python_script.py\nYou can also pass your arguments of your program in the command prompt itself.\nif you do not want to type all this and then just use the solution mentioned above.\nhope that your query is solved.\nregards","Q_Score":2,"Tags":"python,visual-studio-code","A_Id":51940891,"CreationDate":"2018-08-20T15:34:00.000","Title":"In Visual Studio Code, how do I load my python code to a read-print-eval loop?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I\u2019m practicing Pyspark (standalone) in the Pyspark shell at work and it\u2019s pretty new to me. Is there a rule of thumb regarding max file size and the RAM (or any other spec) on my machine? What about when using a cluster? \nThe file I\u2019m practicing with is about 1200 lines. But I\u2019m curious to know how large of a file size can be read into an RDD in regards to machine specifications or cluster specifications.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":593,"Q_Id":51939204,"Users Score":0,"Answer":"There is no hard limit on the Data size you can process, however when your RDD (Resilient Distributed Dataset) size exceeds the size of your RAM then the data will be moved to Disk. Even after the data is moved to the Disk spark will be equally capable of processing it. For example if your data is 12GB and available memory is 8GB spark will distribute the leftover data to disk and takes care of all transformations \/ actions seamlessly. Having said that you can process the data appropriately equal to size of disk.\nThere are of-course size limitation on size of single RDD which is 2GB. 
In other words the maximum size of a block will not exceed 2GB.","Q_Score":0,"Tags":"python,linux,apache-spark,pyspark","A_Id":51939403,"CreationDate":"2018-08-20T22:16:00.000","Title":"Maximum files size for Pyspark RDD","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"i want to do something as a parametric study in Abaqus, where the parameter i am changing is a part of the assembly\/geometry. \nImagine the following:\nA cube is hanging on 8 ropes. Each two of the 8 ropes line up in one corner of a room. the other ends of the ropes merge with the room diagonal of the cube. It's something like a cable-driven parallel robot\/rope robot.\nNow, i want to calculate the forces in the ropes in different positions of the cube, while only 7 of the 8 ropes are actually used. That means i have 8 simulations for each position of my cube. \nI wrote a matlab script to generate the nodes and wires of the cube in different positions and angle of rotations so i can copy them into an input file for Abaqus. \nSince I'm new to Abaqus scripting etc, i wonder which is the best way to make this work.\nwould you guys generate 8 input files for one position of the cube and calculate \nthem manually or is there a way to let abaqus somehow iterate different assemblys?\nI guess i should wright a python script, but i don't know how to make the ropes the parameter that is changing.\nAny help is appreciated!\nThanks, Tobi","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":854,"Q_Id":51966721,"Users Score":0,"Answer":"In case someon is interested, i was able to do it the following way:\nI created a model in abaqus till the point, i could have started the job. Then i took the .jnl file (which is created automaticaly by abaqus) and saved it as a .py file. Then i modified this script by defining every single point as a variable and every wire for the parts as tuples, consisting out of the variables. Than i made for loops and for every 9 cases unique wire definitions, which i called during the loop. During the loop also the constraints were changed and the job were started. I also made a field output request for the endnodes of the ropes (representing motors) for there coordinates and reaction force (the same nodes are the bc pinned)\nThen i saved the fieldoutput in a certain simple txt file which i was able to analyse via matlab.\nThen i wrote a matlab script which created the points, attached them to the python script, copied it to a unique directory and even started the job. 
\nThis way, i was able to do geometric parametric studies in abaqus using matlab and python.\nCode will be uploaded soon","Q_Score":0,"Tags":"python,abaqus","A_Id":52174746,"CreationDate":"2018-08-22T12:17:00.000","Title":"Abaqus: parametric geometry\/assembly in Inputfile or Python script?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":1},{"Question":"I want to make my display tables bigger so users can see the tables better when that are used in conjunction with Jupyter RISE (slide shows). \nHow do I do that? \nI don't need to show more columns, but rather I want the table to fill up the whole width of the Jupyter RISE slide.\nAny idea on how to do that?\nThanks","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":509,"Q_Id":51967445,"Users Score":0,"Answer":"If df is a pandas.DataFrame object.\nYou can do: \ndf.style.set_properties(**{'max-width': '200px', 'font-size': '15pt'})","Q_Score":2,"Tags":"python,pandas,jupyter-notebook","A_Id":56978954,"CreationDate":"2018-08-22T12:57:00.000","Title":"Pandas DataFrame Display in Jupyter Notebook","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have a locally made Django website and I hosted it on Heroku, at the same time I push changes to anathor github repo. I am using built in Database to store data. Will other users be able to get the data that has been entered in the database from my repo (like user details) ?\nIf so how to prevent it from happening ? Solutions like adding files to .gitignore will also prevent pushing to Heroku.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":152,"Q_Id":51968194,"Users Score":0,"Answer":"The code itself wouldn't be enough to get access to the database. For that you need the db name and password, which shouldn't be in your git repo at all.\nOn Heroku you use environment variables - which are set automatically by the postgres add-on - along with the dj_database_url library which turns that into the relevant values in the Django DATABASES setting.","Q_Score":0,"Tags":"python,django,github,hosting","A_Id":51968425,"CreationDate":"2018-08-22T13:38:00.000","Title":"Will making a Django website public on github let others get the data in its database ? 
If so how to prevent it?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I'm building my first web application and I've got a question around process and best practice, I'm hoping the expertise on this website might be give me a bit of direction.\nEssentially, all the MVP is doing is going to be writing an overlay onto an image and presenting this back to the user, as follows;\n\nUser uploads picture via web form (into AWS S3) - to do\nPython script executes (in lambda) and creates image overlay, saves new image back into S3 - complete\nUser is presented back with new image to download - to do\n\nI've been running this locally as sort of a proof of concept and was planning on linking up with S3 today but then suddenly realised, what happens when there are two concurrent users and two images being uploaded with different filenames with two separate lambda functions working?\nThe only solution I could think of is having the image renamed upon upload with a record inserted into an RDS, then the lambda function to run upon record insertion against the new image, which would resolve half of it, but then how would I get the correct image relayed back to the user?\nI'll be clear, I have next to no experience in web development, I want the front end to be as dumb as possible and run everything in Python (I'm a data scientist, I can write Python for data analysis but no experience as a software dev!)","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":377,"Q_Id":51970168,"Users Score":0,"Answer":"You don't really need an RDS, just invoke your lambda synchronously from the browser.\nSo \n\nUpload file to S3, using a randomized file name\nInvoke your lambda synchronously, passing it the file name\nHave your lambda read the file, convert it, and respond with either the file itself (binary responses aren't trivial), or a path to the converted file in S3.","Q_Score":0,"Tags":"python,amazon-web-services,amazon-s3,architecture,software-design","A_Id":51976456,"CreationDate":"2018-08-22T15:24:00.000","Title":"Uploading an image to S3 and manipulating with Python in Lambda - best practice","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"how to install twilio via pip?\nI tried to install twilio python module\nbut i can't install it \ni get following error \nno Module named twilio\nWhen trying to install twilio\npip install twilio\nI get the following error.\npyopenssl 18.0.0 has requirement six>=1.5.2, but you'll have six 1.4.1 which is incompatible.\nCannot uninstall 'pyOpenSSL'. 
It is a distutils installed project and thus we cannot accurately determine which files belong to it which would lead to only a partial uninstall.\ni got the answer and installed\npip install --ignore-installed twilio\nbut i get following error\n\nCould not install packages due to an EnvironmentError: [Errno 13] Permission denied: '\/Library\/Python\/2.7\/site-packages\/pytz-2018.5.dist-info'\nConsider using the `--user` option or check the permissions.\n\ni have anaconda installed \nis this a problem?","AnswerCount":2,"Available Count":1,"Score":-0.0996679946,"is_accepted":false,"ViewCount":2822,"Q_Id":51985401,"Users Score":-1,"Answer":"step1:download python-2.7.15.msi\nstep 2:install and If your system does not have Python added to your PATH while installing\n\"add python exe to path\"\nstep 3:go C:\\Python27\\Scripts of your system\nstep4:in command prompt C:\\Python27\\Scripts>pip install twilio\nstep 5:after installation is done >python command line\n import twilio\nprint(twilio.version)\nstep 6:if u get the version ...you are done","Q_Score":2,"Tags":"python,module,installation,twilio","A_Id":53578904,"CreationDate":"2018-08-23T12:03:00.000","Title":"How to install twilio via pip","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":1,"REVIEW":0},{"Question":"I'm trying to retrieve the objects\/items (server name, host name, domain name, location, etc...) that are stored under the saved quote for a particular Softlayer account. Can someone help how to retrieve the objects within a quote? I could find a REST API (Python) to retrieve quote details (quote ID, status, etc..) but couldn't find a way to fetch objects within a quote.\nThanks!\nBest regards,\nKhelan Patel","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":71,"Q_Id":51988650,"Users Score":0,"Answer":"Thanks Albert getRecalculatedOrderContainer is the thing I was looking for.","Q_Score":0,"Tags":"python,rest,ibm-cloud,ibm-cloud-infrastructure","A_Id":52010150,"CreationDate":"2018-08-23T14:53:00.000","Title":"How to retrieve objects from the sotlayer saved quote using Python API","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I know how to debug a flask application in Pycharm. The question is whether this is also possible in IntelliJ.\nI have my flask application debugging in Pycharm but one thing I could do in IntelliJ was evaluate expressions inline by pressing the alt + left mouse click. This isn't available in Pycharm so I wanted to run my Flask application in IntelliJ but there isn't a Flask template. \nIs it possible to add a Flask template to the Run\/Debug configuration? I tried looking for a plugin but couldn't find that either.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1566,"Q_Id":51995696,"Users Score":0,"Answer":"Yes, you can. Just setup the proper parameters for Run script into PyCharm IDE. After that you can debug it as usual py script. 
In PyCharm you can evaluate any line in debug mode too.","Q_Score":1,"Tags":"python,intellij-idea,flask","A_Id":51996045,"CreationDate":"2018-08-23T23:45:00.000","Title":"Can I debug Flask applications in IntelliJ?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"After the training is finished and I did the prediction on my network, I want to calculate \"precision\" and \"recall\" of my model, and then send it to log file of \"tensorboard\" to show the plot.\nwhile training, I send \"tensorboard\" function as a callback to keras. but after training is finished, I dont know how to add some more data to tensorboard to be plotted.\nI use keras for coding and tensorflow as its backend.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":289,"Q_Id":52006745,"Users Score":0,"Answer":"I believe that you've already done that work: it's the same process as the validation (prediction and check) step you do after training. You simply tally the results of the four categories (true\/false pos\/neg) and plug those counts into the equations (ratios) for precision and recall.","Q_Score":1,"Tags":"python,tensorflow,keras,tensorboard,precision-recall","A_Id":52006897,"CreationDate":"2018-08-24T14:36:00.000","Title":"how to add the overall \"precision\" and \"recall\" metrics to \"tensorboard\" log file, after training is finished?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am very new to image recognition with CNNs and currently using several standard (pre-trained) architectures available within Keras (VGG and ResNet) for image classification tasks. I am wondering how one can generalise the number of input channels to more than 3 (instead of standard RGB). For example, I have an image which was taken through 5 different (optic) filters and I am thinking about passing these 5 images to the network.\nSo, conceptually, I need to pass as an input (Height, Width, Depth) = (28, 28, 5), where 28x28 is the image size and 5 - the number of channels.\nAny easy way to do it with ResNet or VGG please?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":315,"Q_Id":52047126,"Users Score":2,"Answer":"If you retrain the models, that's not a problem. 
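(As a sketch of the precision/recall tallying described in the TensorBoard answer above, written against the TF1-style API the question uses; the tag names and log directory are arbitrary.)

```python
import tensorflow as tf  # TF 1.x, matching the question

def log_precision_recall(tp, fp, fn, logdir="./logs", step=0):
    # Plain ratio definitions computed from the four tallied categories.
    precision = tp / float(tp + fp)
    recall = tp / float(tp + fn)

    writer = tf.summary.FileWriter(logdir)
    for tag, value in [("precision", precision), ("recall", recall)]:
        summary = tf.Summary(value=[tf.Summary.Value(tag=tag, simple_value=value)])
        writer.add_summary(summary, global_step=step)
    writer.close()
```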
Only if you want to use a trained model, you have to keep the input the same.","Q_Score":7,"Tags":"python,tensorflow,image-processing,keras,conv-neural-network","A_Id":54873735,"CreationDate":"2018-08-27T21:20:00.000","Title":"Convolutional neural network architectures with an arbitrary number of input channels (more than RGB)","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I want to run python in PyCharm by using a Docker image, but also with a Conda environment that is set up in the Docker image. I've been able to set up Docker and (locally) set up Conda in PyCharm independently, but I'm stumped as to how to make all three work together. \nThe problem comes when I try to create a new project interpreter for the Conda environment inside the Docker image. When I try to enter the python interpreter path, it throws an error saying that the directory\/path doesn't exist.\nIn short, the question is the same as the title: how can I set up PyCharm to run on a Conda environment inside a Docker image?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1524,"Q_Id":52049202,"Users Score":3,"Answer":"I'm not sure if this is the most eloquent solution, but I do have a solution to this now!\n\nStart up a container from the your base image and attach to it\nInstall the Conda env yaml file inside the docker container\nFrom outside the Docker container stream (i.e. a new terminal window), commit the existing container (and its changes) to a new image: docker commit SOURCE_CONTAINER NEW_IMAGE\n\nNote: see docker commit --help for more options here\n\nRun the new image and start a container for it\nFrom PyCharm, in preferences, go to Project > Project Interpreter\nAdd a new Docker project interpreter, choosing your new image as the image name, and set the path to wherever you installed your Conda environment on the Docker image (ex: \/usr\/local\/conda3\/envs\/my_env\/bin\/python)\n\nAnd just like that, you're good to go!","Q_Score":5,"Tags":"python,docker,pycharm,conda","A_Id":52231615,"CreationDate":"2018-08-28T02:21:00.000","Title":"How to use Docker AND Conda in PyCharm","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":1,"REVIEW":0},{"Question":"I would like to detect upright and upside-down faces, however faces weren't recognized in upside-down images.\nI used the dlib library in Python with shape_predictor_68_face_landmarks.dat. \nIs there a library that can recognize upright and upside-down faces?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":855,"Q_Id":52059560,"Users Score":8,"Answer":"You could use the same library to detect upside down faces. If the library is unable to detect the face initially, transform it 180\u00b0 and check again. 
If it is recognized in this condition, you know it was an upside down face.","Q_Score":0,"Tags":"python,opencv,computer-vision,face-detection,dlib","A_Id":52059673,"CreationDate":"2018-08-28T13:52:00.000","Title":"how to detect upside down face?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I'm using the Geany IDE and I've wrote a python code that makes a GUI. Im new to python and i'm better with C. I've done research on the web and its too complicated because theres so much jargon involved. Behind each button I want C to be the backbone of it (So c to execute when clicked). So, how can i make a c file and link it to my code?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":51,"Q_Id":52075229,"Users Score":2,"Answer":"I too had a question like this and I found a website that described how to do it step by step but I can\u2019t seem to find it. If you think about it, all these \u2018import\u2019 files are just code thats been made separately and thats why you import them. So, in order to import your \u2018C File\u2019 do the following.\n\nCreate the file you want to put in c (e.g bloop.c)\nThen open the terminal and assuming you saved your file to the desktop, type \u2018cd Desktop\u2019. If you put it somewhere else other than the desktop, then type cd (insert the directory).\nNow, type in gcc -shared -Wl,-soname,adder -o adder.so -fPIC bloop.c into the terminal.\nAfter that, go into you python code and right at the very top of your code, type \u2018import ctypes\u2019 or \u2018from ctypes import *\u2019 to import the ctypes library.\nBelow that type adder = CDLL(\u2018.\/adder.so\u2019).\nif you want to add a instance for the class you need to type (letter or word)=adder.main(). For example, ctest = adder.main()\nNow lets say you have a method you want to use from your c program you can type your charater or word (dot) method you created in c. For example \u2018ctest.beans()\u2019 (assuming you have a method in your code called beans).","Q_Score":1,"Tags":"python","A_Id":52075312,"CreationDate":"2018-08-29T10:28:00.000","Title":"How to have cfiles in python code","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I would like to know how should i could manage to change the static files use by the saelor framework. I've tried to change the logo.svg but failed to do so.\nI'm still learning python program while using the saleor framework for e-commerce.\nThank you.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":183,"Q_Id":52079211,"Users Score":1,"Answer":"Here is how it should be done. 
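(A compact sketch of the ctypes route from the C-in-Python answer above; bloop.c and the adder names mirror the hypothetical ones in that answer, and the C function shown is an assumption.)

```python
# Assumes bloop.c was compiled first, e.g.:
#   gcc -shared -Wl,-soname,adder -o adder.so -fPIC bloop.c
# and that bloop.c defines:  int add(int a, int b) { return a + b; }
from ctypes import CDLL

adder = CDLL("./adder.so")   # load the shared library
print(adder.add(2, 3))       # calls the C function; prints 5
```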
You must put your logo in the saleor\/static\/images folder then change it in base.html file in footer and navbar section.","Q_Score":1,"Tags":"django,python-3.x,saleor","A_Id":52221713,"CreationDate":"2018-08-29T13:57:00.000","Title":"Cannot update svg file(s) for saleor framework + python + django","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am using tkinter and the PIL to make a basic photo viewer (mostly for learning purposes). I have the bg color of all of my widgets set to the default which is \"systemfacebutton\", whatever that means.\nI am using the PIL.Image module to view and rotate my images. When an image is rotated you have to choose a fillcolor for the area behind the image. I want this fill color to be the same as the default system color but I have no idea how to get a the rgb value or a supported color name for this. It has to be calculated by python at run time so that it is consistent on anyone's OS.\nDoes anyone know how I can do this?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":43,"Q_Id":52085525,"Users Score":2,"Answer":"You can use w.winfo_rgb(\"systembuttonface\") to turn any color name to a tuple of R, G, B. (w is any Tkinter widget, the root window perhaps. Note that you had the color name scrambled.) The values returned are 16-bit for some unknown reason, you'll likely need to shift them right by 8 bits to get the 0-255 values commonly used for specifying colors.","Q_Score":1,"Tags":"python,python-3.x,tkinter,python-imaging-library","A_Id":52085742,"CreationDate":"2018-08-29T20:22:00.000","Title":"Determining \"SystemFaceButton\" RBG Value At RunTime","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"One more question:\nIf they are tied biases, how can I implement untied biases?\nI am using tensorflow 1.10.0 in python.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":213,"Q_Id":52088059,"Users Score":0,"Answer":"tied biases is used in tf.layers.conv2d.\nIf you want united biases, just turn off use_bias and create bias variable manually with tf.Variable or tf.get_variable same shape with following feature map, finally sum them up.","Q_Score":0,"Tags":"python,tensorflow","A_Id":52089011,"CreationDate":"2018-08-30T01:29:00.000","Title":"In tf.layers.conv2d, with use_bias=True, are the biases tied or untied?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am trying to create a picture slideshow which will show all the png and jpg files of a folder using django.\nProblem is how do I open windows explorer through django and prompt user to choose a folder name to load images from. Once this is done, how do I read all image files from this folder? 
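(A sketch of the manual-bias approach from the conv2d answer above; for untied biases the bias variable spans the whole feature map rather than one scalar per channel. Shapes are illustrative.)

```python
import tensorflow as tf  # TF 1.x, matching the question

x = tf.placeholder(tf.float32, [None, 28, 28, 1])
conv = tf.layers.conv2d(x, filters=32, kernel_size=3,
                        padding="same", use_bias=False)  # turn off the tied bias

# Untied: one bias per spatial position *and* channel, not one per channel.
untied_bias = tf.get_variable("untied_bias", shape=[28, 28, 32],
                              initializer=tf.zeros_initializer())
out = conv + untied_bias  # broadcasts over the batch dimension
```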
Can I store all image files from this folder inside a list and pass this list in template views through context?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":747,"Q_Id":52104316,"Users Score":0,"Answer":"This link \u201chttps:\/\/github.com\/csev\/dj4e-samples\/tree\/master\/pics\u201d\nshows how to store data into to database(sqlite is the database used here) using Django forms. But you cannot upload an entire folder at once, so you have to create a one to many model between display_id(This is just a field name in models you can name it anything you want) and pics. Now you can individually upload all pics in the folder to the same display _id and access all of them using this display_id. Also make sure to pass content_type for jpg and png separately while retrieving the pics.","Q_Score":0,"Tags":"python,django,file-read","A_Id":64512939,"CreationDate":"2018-08-30T19:43:00.000","Title":"Reading all the image files in a folder in Django","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I'm writing a Python script to do some web automation stuff. In order to log in the website, I have to give it my phone number and the website will send out an SMS verification code. Is there a way to get this code so that I can use it in my Python program? Right now what I can think of is that I can write an Android APP and it will be triggered once there are new SMS and it will get the code and invoke an API so that the code will be stored somewhere. Then I can grab the stored code from within my Python program. This is doable but a little bit hard for me as I don't know how to develop a mobile APP. I want to know is there any other methods so that I can get this code? Thanks.\nBTW, I have to use my own phone number and can't use other phone to receive the verification code. So it may not possible to use some services.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1347,"Q_Id":52106861,"Users Score":0,"Answer":"Answer my own question. I use IFTTT to forward the message to Slack and use Slack API to access the message.","Q_Score":0,"Tags":"python,automation,sms,sms-verification","A_Id":52124085,"CreationDate":"2018-08-31T00:05:00.000","Title":"How can I get SMS verification code in my Python program?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am using python and boto to assume an AWS IAM role. I want to see what policies are attached to the role so i can loop through them and determine what actions are available for the role. I want to do this so I can know if some actions are available instead of doing this by calling them and checking if i get an error. 
However I cannot find a way to list the policies for the role after assuming it as the role is not authorised to perform IAM actions.\nIs there anyone who knows how this is done or is this perhaps something i should not be doing.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":209,"Q_Id":52119306,"Users Score":1,"Answer":"To obtain policies, your AWS credentials require permissions to retrieve the policies.\nIf such permissions are not associated with the assumed role, you could use another set of credentials to retrieve the permissions (but those credentials would need appropriate IAM permissions).\nThere is no way to ask \"What policies do I have?\" without having the necessary permissions. This is an intentional part of AWS security because seeing policies can reveal some security information (eg \"Oh, why am I specifically denied access to the Top-Secret-XYZ S3 bucket?\").","Q_Score":0,"Tags":"python,amazon-web-services,aws-sdk,boto3,amazon-iam","A_Id":52123513,"CreationDate":"2018-08-31T16:13:00.000","Title":"How to list available policies for an assumed AWS IAM role","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":1,"REVIEW":0},{"Question":"I want to add alembic to an existing ,sqlalchemy using, project, with a working production db. I fail to find what's the standard way to do a \"zero\" migration == the migration setting up the db as it is now (For new developers setting up their environment) \nCurrently I've added import the declarative base class and all the models using it to the env.py , but first time alembic -c alembic.dev.ini revision --autogenerate does create the existing tables. \nAnd I need to \"fake\" the migration on existing installations - using code. For django ORM I know how to make this work, but I fail to find what's the right way to do this with sqlalchemy\/alembic","AnswerCount":2,"Available Count":1,"Score":1.0,"is_accepted":false,"ViewCount":9086,"Q_Id":52121596,"Users Score":34,"Answer":"alembic revision --autogenerate inspects the state of the connected database and the state of the target metadata and then creates a migration that brings the database in line with metadata.\nIf you are introducing alembic\/sqlalchemy to an existing database, and you want a migration file that given an empty, fresh database would reproduce the current state- follow these steps.\n\nEnsure that your metadata is truly in line with your current database(i.e. ensure that running alembic revision --autogenerate creates a migration with zero operations).\n\nCreate a new temp_db that is empty and point your sqlalchemy.url in alembic.ini to this new temp_db.\n\nRun alembic revision --autogenerate. This will create your desired bulk migration that brings a fresh db in line with the current one.\n\nRemove temp_db and re-point sqlalchemy.url to your existing database.\n\nRun alembic stamp head. 
This tells sqlalchemy that the current migration represents the state of the database- so next time you run alembic upgrade head it will begin from this migration.","Q_Score":23,"Tags":"python,sqlalchemy,alembic","A_Id":56651578,"CreationDate":"2018-08-31T19:23:00.000","Title":"Creating \"zero state\" migration for existing db with sqlalchemy\/alembic and \"faking\" zero migration for that existing db","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am working on an application in Django where there is a feature which lets the user share a download link to a public file. The server downloads the file and processes the information within. This can be a time taking task therefore I want to send periodic feedbacks to the user before operations has completed. For instances, I would like to inform the user that file has downloaded successfully or if some information was missing from one of the record e.t.c.\nI was thinking that after the client app has sent the upload request, I could get client app to periodically ask the server about the status. But I don't know how can I track the progress a different request.How can I implement this?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":201,"Q_Id":52138904,"Users Score":0,"Answer":"At first the progress task information can be saved in rdb or redis\u3002\nYou can return the id of the task when uses submit the request to start task and the task can be executed in the background context\u3002\nThe background task can save the task progress info in the db which you selected.\nThe app client get the progress info by the task id which the backend returned and the backend get the progress info from the db and push it in the response.\nThe interval of the request can be defined by yourself.","Q_Score":0,"Tags":"python,django,sockets,http","A_Id":52143574,"CreationDate":"2018-09-02T16:24:00.000","Title":"Django send progress back to client before request has ended","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"Is it possible that a flat numpy 1d array's size (nbytes) is 16568 (~16.5kb) but when saved to disk, has a size of >2 mbs? \nI am saving the array using numpy's numpy.save method. Dtype of array is 'O' (or object).\nAlso, how do I save that flat array to disk such that I get approx similar size to nbytes when saved on disk? 
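(A minimal sketch of the polling pattern from the Django progress answer above, assuming a hypothetical Task model with status/detail fields; the client calls the status endpoint on an interval it chooses.)

```python
# views.py -- illustrative names, not a drop-in implementation
import uuid

from django.http import JsonResponse

from .models import Task  # hypothetical model with id, status, detail fields

def start_processing(request):
    task = Task.objects.create(id=uuid.uuid4(), status="downloading", detail="")
    # ...kick off the background download/processing here, tagged with task.id...
    return JsonResponse({"task_id": str(task.id)})

def task_status(request, task_id):
    # The client polls this endpoint periodically and shows task.detail to the user.
    task = Task.objects.get(id=task_id)
    return JsonResponse({"status": task.status, "detail": task.detail})
```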
Thanks","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":138,"Q_Id":52142524,"Users Score":0,"Answer":"For others references, From numpy documentation:\n\nnumpy.ndarray.nbytes attribute\nndarray.nbytes Total bytes consumed by the elements of the array.\nNotes\nDoes not include memory consumed by non-element attributes of the\narray object.\n\nSo, the nbytes just considers elements of the array.","Q_Score":0,"Tags":"python,numpy","A_Id":71867816,"CreationDate":"2018-09-03T02:29:00.000","Title":"Numpy array size different when saved to disk (compared to nbytes)","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I would like to write a RE to match all lowercase characters and words (special characters and symbols should not match), so like [a-z]+ EXCEPT the two words true and false.\nI'm going to use it with Python.\nI've written (?!true|false\\b)\\b[a-z]+, it works but it does not recognise lowercase characters following an uppercase one (e.g. with \"This\" it doesn't match \"his\"). I don't know how to include also this kind of match.\nFor instance:\n\ntrue & G(asymbol) & false should match only asymbol\ntrue & G(asymbol) & anothersymbol should match only [asymbol, anothersymbol]\nasymbolUbsymbol | false should match only [asymbol, bsymbol]\n\nThanks","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":621,"Q_Id":52182897,"Users Score":0,"Answer":"I would create two regexes (you want to mix word boundary matching with optionally splitting words apart, which is, AFAIK not straighforward mixable, you would have to re-phrase your regex either without word boundaries or without splitting):\n\nfirst regex: [a-z]+\nsecond regex: \\b(?!true|false)[a-z]+","Q_Score":3,"Tags":"regex,python-3.x","A_Id":52183159,"CreationDate":"2018-09-05T10:27:00.000","Title":"Regex to match all lowercase character except some words","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I want all the floating numbers in my PyTorch code double type by default, how can I do that?","AnswerCount":2,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":5260,"Q_Id":52199728,"Users Score":2,"Answer":"You should use for that torch.set_default_dtype. 
\nIt is true that using torch.set_default_tensor_type will also have a similar effect, but torch.set_default_tensor_type not only sets the default data type, but also sets the default values for the device where the tensor is allocated, and the layout of the tensor.","Q_Score":4,"Tags":"python,image-processing,machine-learning,computer-vision,pytorch","A_Id":54257615,"CreationDate":"2018-09-06T08:27:00.000","Title":"How to use double as the default type for floating numbers in PyTorch","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"When I created directory under the python env, it has single quote like (D:\\'Test Directory'). How do I change to this directory in Jupyter notebook?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":65,"Q_Id":52211800,"Users Score":0,"Answer":"I could able to change the directory using escape sequence like this.. os.chdir('C:\\\\'Test Directory\\')","Q_Score":0,"Tags":"python-3.x","A_Id":52211897,"CreationDate":"2018-09-06T20:34:00.000","Title":"how to change directory in Jupyter Notebook with Special characters?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"Let\u2019s say you have a set\/list\/collection of numbers: [1,3,7,13,21,19] (the order does not matter). Let\u2019s say for reasons that are not important, you run them through a function and receive the following pairs:\n(1, 13), (1, 19), (1, 21), (3,19), (7, 3), (7,13), (7,19), (21, 13), (21,19). Again order does not matter. My question involves the next part: how do I find out the minimum amount of numbers that can be part of a pair without being repeated? For this particular sequence it is all six. For [1,4,2] the pairs are (1,4), (1,2), (2,4). In this case any one of the numbers could be excluded as they are all in pairs, but they each repeat, therefore it would be 2 (which 2 do not matter). \nAt first glance this seems like a graph traversal problem - the numbers are nodes, the pairs edges. Is there some part of mathematics that deals with this? I have no problem writing up a traversal algorithm, I was just wondering if there was a solution with a lower time complexity. Thanks!","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":81,"Q_Id":52231442,"Users Score":0,"Answer":"If you really intended to find the minimum amount, the answer is 0, because you don't have to use any number at all.\nI guess you meant to write \"maximal amount of numbers\". \nIf I understand your problem correctly, it sounds like we can translated it to the following problem:\nGiven a set of n numbers (1,..,n), what is the maximal amount of numbers I can use to divide the set into pairs, where each number can appear only once. 
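(A quick check of the behavior the PyTorch answer above describes: set_default_dtype changes only the default floating-point dtype, while set_default_tensor_type also pins the device and layout.)

```python
import torch

torch.set_default_dtype(torch.float64)

x = torch.tensor([1.5, 2.5])  # no dtype given -> uses the new default
print(x.dtype)                # torch.float64
```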
\nThe answer to this question is:\n\nwhen n = 2k f(n) = 2k for k>=0\nwhen n = 2k+1 f(n) = 2k for k>=0\n\nI'll explain, using induction.\n\nif n = 0 then we can use at most 0 numbers to create pairs.\nif n = 2 (the set can be [1,2]) then we can use both numbers to\ncreate one pair (1,2)\nAssumption: if n=2k lets assume we can use all 2k numbers to create 2k pairs and prove using induction that we can use 2k+2 numbers for n = 2k+2.\nProof: if n = 2k+2, [1,2,..,k,..,2k,2k+1,2k+2], we can create k pairs using 2k numbers (from our assomption). without loss of generality, lets assume out pairs are (1,2),(3,4),..,(2k-1,2k). we can see that we still have two numbers [2k+1, 2k+2] that we didn't use, and therefor we can create a pair out of two of them, which means that we used 2k+2 numbers.\n\nYou can prove on your own the case when n is odd.","Q_Score":0,"Tags":"python,algorithm,set,graph-algorithm,graph-traversal","A_Id":52236009,"CreationDate":"2018-09-08T02:24:00.000","Title":"Graph traversal, maybe another type of mathematics?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"Let\u2019s say you have a set\/list\/collection of numbers: [1,3,7,13,21,19] (the order does not matter). Let\u2019s say for reasons that are not important, you run them through a function and receive the following pairs:\n(1, 13), (1, 19), (1, 21), (3,19), (7, 3), (7,13), (7,19), (21, 13), (21,19). Again order does not matter. My question involves the next part: how do I find out the minimum amount of numbers that can be part of a pair without being repeated? For this particular sequence it is all six. For [1,4,2] the pairs are (1,4), (1,2), (2,4). In this case any one of the numbers could be excluded as they are all in pairs, but they each repeat, therefore it would be 2 (which 2 do not matter). \nAt first glance this seems like a graph traversal problem - the numbers are nodes, the pairs edges. Is there some part of mathematics that deals with this? I have no problem writing up a traversal algorithm, I was just wondering if there was a solution with a lower time complexity. Thanks!","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":81,"Q_Id":52231442,"Users Score":0,"Answer":"In case anyone cares in the future, the solution is called a blossom algorithm.","Q_Score":0,"Tags":"python,algorithm,set,graph-algorithm,graph-traversal","A_Id":52249740,"CreationDate":"2018-09-08T02:24:00.000","Title":"Graph traversal, maybe another type of mathematics?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"Getting error in missingno module import in Jupyter Notebook . It works fine in IDLE . But showing \"No missingno module exist\" in Jupyter Notebook . 
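(Since the second answer to this question points at the blossom algorithm, here is a hedged sketch using networkx, whose max_weight_matching is blossom-based, run on the question's own examples.)

```python
import networkx as nx

def matched_numbers(pairs):
    G = nx.Graph(pairs)  # nodes are the numbers, pairs are the edges
    # maxcardinality=True asks for a maximum matching (blossom algorithm).
    matching = nx.max_weight_matching(G, maxcardinality=True)
    return 2 * len(matching)  # two distinct numbers per matched pair

print(matched_numbers([(1, 13), (1, 19), (1, 21), (3, 19), (7, 3),
                       (7, 13), (7, 19), (21, 13), (21, 19)]))  # 6
print(matched_numbers([(1, 4), (1, 2), (2, 4)]))                # 2
```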
Can anybody tell me how to resolve this ?","AnswerCount":2,"Available Count":2,"Score":0.1973753202,"is_accepted":false,"ViewCount":13775,"Q_Id":52235421,"Users Score":2,"Answer":"This command helped me:\nconda install -c conda-forge\/label\/gcc7 missingno\n You have to make sure that you run Anaconda prompt as Administrator.","Q_Score":4,"Tags":"python-3.x,jupyter-notebook,missing-data","A_Id":59066388,"CreationDate":"2018-09-08T12:37:00.000","Title":"error in missingno module import in Jupyter Notebook","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":1,"REVIEW":0},{"Question":"Getting error in missingno module import in Jupyter Notebook . It works fine in IDLE . But showing \"No missingno module exist\" in Jupyter Notebook . Can anybody tell me how to resolve this ?","AnswerCount":2,"Available Count":2,"Score":0.2913126125,"is_accepted":false,"ViewCount":13775,"Q_Id":52235421,"Users Score":3,"Answer":"Installing missingno through anaconda solved the problem for me","Q_Score":4,"Tags":"python-3.x,jupyter-notebook,missing-data","A_Id":52235801,"CreationDate":"2018-09-08T12:37:00.000","Title":"error in missingno module import in Jupyter Notebook","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":1,"REVIEW":0},{"Question":"I\u2019ve build a web based data dashboard that shows 4 graphs - each containing a large amount of data points.\nWhen the URL endpoint is visited Flask calls my python script that grabs the data from a sql server and then starts manipulating it and finally outputs the bokeh graphs. \nHowever, as these graphs get larger\/there becomes more graphs on the screen the website takes long to load - since the entire function has to run before something is displayed.\nHow would I go about lazy loading these? I.e. it loads the first (most important graph) and displays it while running the function for the other graphs, showing them as and when they finish running (showing a sort of loading bar where each of the graphs are or something).\nWould love some advice on how to implement this or similar.\nThanks!","AnswerCount":1,"Available Count":1,"Score":0.537049567,"is_accepted":false,"ViewCount":14269,"Q_Id":52238200,"Users Score":3,"Answer":"I had the same problem as you. The problem with any kind of flask render is that all data is processed and passed to the page (i.e. client) simultaneously, often at large time cost. Not only that, but the the server web process is quite heavily loaded.\nThe solution I was forced to implement as the comment suggested was to load the page with blank charts and then upon mounting them access a flask api (via JS ajax) that returns chart json data to the client. 
This permits lazy loading of charts, as well as allowing the data manipulation to possibly be performed on a worker and not web server.","Q_Score":9,"Tags":"python,flask,bokeh","A_Id":52238332,"CreationDate":"2018-09-08T18:25:00.000","Title":"Lazy loading with python and flask","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I've been trying for a few days now, to be able to import the library tkinter in pycharm. But, I am unable to do so.\n,I tried to import it or to install some packages but still nothing, I reinstalled python and pycharm again nothing. Does anyone know how to fix this?\nI am using pycharm community edition 2018 2.3 and python 3.7 .\nEDIT:So , I uninstalled python 3.7 and I installed python 3.6 x64 ,I tried changing my interpreter to the new path to python and still not working...\nEDIT 2 : I installed pycharm pro(free trial 30 days) and it's actually works and I tried to open my project in pycharm community and it's not working... \nEDIT 3 : I installed python 3.6 x64 and now it's working.\nThanks for the help.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1222,"Q_Id":52242756,"Users Score":0,"Answer":"Thanks to vsDeus for asking this question. I had the same problem running Linux Mint Mate 19.1 and nothing got tkinter and some other modules working in Pycharm CE. In Eclipse with Pydev all worked just fine but for some reason I would rather work in Pycharm when coding than Eclipse.\nThe steps outlined here did not work for me but the steps he took handed me the solution. Basically I had to uninstall Pycharm, remove all its configuration files, then reinstall pip3, tkinter and then reinstall Pycharm CE. Finally I reopened previously saved projects and then set the correct interpreter.\nWhen I tried to change the python interpreter before no alternatives appeared. After all these steps the choice became available. Most importantly now tkinter, matplotlib and other modules I wanted to use are available in Pycharm.","Q_Score":1,"Tags":"python,tkinter,pycharm","A_Id":54604896,"CreationDate":"2018-09-09T08:36:00.000","Title":"I can't import tkinter in pycharm community edition","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am trying to decode one character (represented as c-dimensional one hot vectors) at a time with tensorflow seq2seq model implementations. I am not using any embedding in my case. \nNow I am stuck with tf.contrib.seq2seq.GreedyEmbeddingHelper. It requires \"embedding: A callable that takes a vector tensor of ids (argmax ids), or the params argument for embedding_lookup. The returned tensor will be passed to the decoder input.\"\nHow I will define callable? What are inputs (vector tensor if ids(argmax ids)) and outputs of this callable function? 
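(A bare-bones sketch of the ajax-endpoint approach from the Flask lazy-loading answer above; the route names and the data-building function are placeholders.)

```python
from flask import Flask, jsonify, render_template

app = Flask(__name__)

def build_chart_json(name):
    # Stub: in the real app this runs the slow SQL query and shapes the chart JSON.
    return {"name": name, "points": []}

@app.route("/")
def dashboard():
    # Render the page immediately with empty chart placeholders.
    return render_template("dashboard.html")

@app.route("/api/chart/<name>")
def chart_data(name):
    # Called from the page via JS fetch/ajax after load, one request per chart.
    return jsonify(build_chart_json(name))
```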
Please explain using examples.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":267,"Q_Id":52256809,"Users Score":0,"Answer":"embedding = tf.Variable(tf.random_uniform([c-dimensional ,\nEMBEDDING_DIM]))\nhere you can create the embedding for you own model.\nand this will be trained during your training process to give a vector for your own input.\nif you don't want to use it you just can create a matrix where is every column of it is one hot vector represents the character and pass it as embedding.\nit will be some thing like that:\n[[1,0,0],[0,1,0],[0,0,1]]\nhere if you have vocabsize of 3 .","Q_Score":0,"Tags":"python,tensorflow","A_Id":55653445,"CreationDate":"2018-09-10T11:25:00.000","Title":"how to use Tensorflow seq2seq.GreedyEmbeddingHelper first parameter Embedding in case of using normal one hot vector instead of embedding?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I want advice on how to do the following:\nOn the same server, I want to have two apps. One WordPress app and one Python app. At the same time, I want the root of my domain to be a static landing page.\nUrl structure I want to achieve:\n\nexample.com\/ => static landing page\nexample.com\/tickets => wordpress\nexample.com\/pythonapp => python app\n\nI have never done something like this before and searching for solutions didn't help.\nIs it even possible? \nIs it better to use subdomains?\nIs it better to use different servers?\nHow should I approach this?\nThanks in advance!","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":112,"Q_Id":52257055,"Users Score":1,"Answer":"It depends on the webserver you want to use. Let's go with apache as it is one of the most used web servers on the internet.\n\nYou install your wordpress installation into the \/tickets subdirectory and install word-press as you normally would. This should install wordpress into the subdirectory.\nConfigure your Python-WSGI App with this configuration:\n\nWSGIScriptAlias \/pythonapp \/var\/www\/path\/to\/my\/wsgi.py","Q_Score":0,"Tags":"python,wordpress,url,routes,url-routing","A_Id":52257207,"CreationDate":"2018-09-10T11:43:00.000","Title":"one server, same domain, different apps (example.com\/ & example.com\/tickets )?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I know how to save model results to .txt files and saving plots to .png. I also found some post which shows how to save multiple plots on a single pdf file. What I am looking for is generating a single pdf file which can contain both model results\/summary and it's related plots. So at the end I can have something like auto generated model report. Can someone suggest me how I can do this?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":109,"Q_Id":52286538,"Users Score":0,"Answer":"I\u2019ve had good results with the fpdf module. It should do everything you need it to do and the learning curve isn\u2019t bad. 
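(Following the answer above, a sketch of passing an identity one-hot matrix as the embedding params to GreedyEmbeddingHelper, so the "embedding" looked up for id i is just one-hot row i. The vocab size, batch size, and token ids are illustrative.)

```python
import numpy as np
import tensorflow as tf  # TF 1.x, where tf.contrib.seq2seq lives

vocab_size = 3
batch_size, start_id, end_id = 32, 0, 2   # illustrative values

# Identity matrix: embedding_lookup(one_hot_matrix, i) returns the one-hot row i.
one_hot_matrix = tf.constant(np.eye(vocab_size, dtype=np.float32))

helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(
    embedding=one_hot_matrix,                     # params for embedding_lookup
    start_tokens=tf.fill([batch_size], start_id),
    end_token=end_id,
)
```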
You can install with pip install fpdf.","Q_Score":0,"Tags":"python,plot,model,report","A_Id":52286569,"CreationDate":"2018-09-12T02:15:00.000","Title":"How to saving plots and model results to pdf in python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I want to add InPadding to my LabelFrame; I'm using the appJar GUI. I tried this:\nself.app.setLabelFrameInPadding(self.name(\"_content\"), [20, 20])\nBut I get this error:\n\nappJar:WARNING [Line 12->3063\/configureWidget]: Error configuring _content: unknown option \"-ipadx\"\n\nAny ideas how to fix it?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":112,"Q_Id":52289125,"Users Score":0,"Answer":"Because of the way containers are implemented in appJar, padding works slightly differently for labelFrames.\nTry calling: app.setLabelFramePadding('name', [20,20])","Q_Score":0,"Tags":"python-3.x,user-interface","A_Id":52907153,"CreationDate":"2018-09-12T06:55:00.000","Title":"Error configuring: unknown option \"-ipadx\"","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":1,"REVIEW":0},{"Question":"I want to deploy the same Flask application as two different instances, let's say a sandbox instance and a testing instance, on the same IIS server and same machine, having two folders with different configurations (one for testing and one for sandbox). IIS runs whichever is requested first. For example, I want to deploy one under www.example.com\/test and the other under www.example.com\/sandbox. If I request www.example.com\/test first, then that app keeps working correctly, but whenever I request www.example.com\/sandbox it returns 404, and vice versa! \nQuestion bottom line: \n\nHow can I make both apps run under the same domain with such URLs?\nWould using the app factory pattern solve this issue?\nWhat blocks both apps from running side by side as I am trying to do?\n\nThanks a lot in advance","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":218,"Q_Id":52295946,"Users Score":0,"Answer":"I had been stuck for a week before asking this question; the neatest way I found was to assign each app a different application pool, and now they are working together side by side happily ever after.","Q_Score":0,"Tags":"flask,iis-7.5,python-3.6","A_Id":52297680,"CreationDate":"2018-09-12T13:04:00.000","Title":"Two flask Apps same domain IIS","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I initially started a small Python project (Python, Tkinter and PonyORM) which became larger, so I decided to divide the code (it used to be a single file) into several modules (e.g. main, form1, entity, database).
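A minimal sketch of combining a matplotlib plot and a text summary into one PDF with fpdf, as the answer above suggests; the file names and summary text are hypothetical:

    import matplotlib
    matplotlib.use("Agg")  # render without a display
    import matplotlib.pyplot as plt
    from fpdf import FPDF  # pip install fpdf

    # Save a plot to an image file first.
    plt.plot([1, 2, 3], [4, 1, 7])
    plt.savefig("plot.png")

    # Then lay out the text summary and the image in a single PDF.
    pdf = FPDF()
    pdf.add_page()
    pdf.set_font("Arial", size=12)
    pdf.multi_cell(0, 10, "Model summary:\nR^2 = 0.87 (hypothetical result)")
    pdf.image("plot.png", w=180)
    pdf.output("report.pdf")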
Main acts as the main controller; form1, as an example, can contain a tkinter Frame used as an interface where the user can input data; entity contains the db.Entity mappings; and database holds the pony.Database instance along with its connection details. The problem is that during import I'm getting this error: \"pony.orm.core.ERDiagramError: Cannot define entity 'EmpInfo': database mapping has already been generated\". Can you point me to any existing code showing how this should be done?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":588,"Q_Id":52308096,"Users Score":0,"Answer":"Probably you import your modules in the wrong order. Any module which contains entity definitions should be imported before the db.generate_mapping() call.\nI think you should call db.generate_mapping() right before entering tk.mainloop(), when all imports are already done.","Q_Score":2,"Tags":"python,ponyorm","A_Id":52330119,"CreationDate":"2018-09-13T06:51:00.000","Title":"Sharing PonyORM's db session across different python module","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":1,"REVIEW":0},{"Question":"Building TensorFlow and other such packages from source, especially against GPUs, is a fairly long task and often encounters errors, so once they are built and installed I really don't want to mess with them. \nI regularly use virtualenvs, but I am always worried about installing certain packages, as sometimes their dependencies will overwrite my own packages I have built from source... \nI know I can remove, and then rebuild from my .wheels, but sometimes this is a time-consuming task. Is there a way that if I attempt to pip install a package, it first checks against current package versions and doesn't continue before I agree to those changes? \nEven current packages' dependencies don't show versions with pip show","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":24,"Q_Id":52310090,"Users Score":0,"Answer":"\n\nIs there a way that if I attempt to pip install a package, it first checks against current package versions and doesn't continue before I agree to those changes?\n\nNo. But pip install doesn't touch installed dependencies until you explicitly run pip install -U. So don't use the -U\/--upgrade option, and upgrade dependencies only when pip fails with unmet dependencies.","Q_Score":0,"Tags":"python,pip,package-managers","A_Id":52334042,"CreationDate":"2018-09-13T08:55:00.000","Title":"Python3 - How do I stop current versions of packages being over-ridden by other packages dependencies","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":1,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I must use \"q\" (which is a degree measure) from the command line and then convert \"q\" to radians and have it write out the value of sin(5q) + sin(6q).
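A minimal sketch of the import order the answer above recommends; the module layout and entity names are hypothetical:

    # database.py
    from pony.orm import Database
    db = Database()

    # entity.py -- defines entities against the shared db object
    from pony.orm import Required
    from database import db

    class EmpInfo(db.Entity):
        name = Required(str)

    # main.py -- import every entity module first, then map exactly once
    import entity
    from database import db

    db.bind(provider="sqlite", filename=":memory:")
    db.generate_mapping(create_tables=True)
    # ... only now start the UI, e.g. root.mainloop()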
Considering that I believe I have to use sys.argv for this, I have no clue where to even begin","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":153,"Q_Id":52324220,"Users Score":0,"Answer":"You can use the following:\nq = sys.argv[1] # you can give a decimal value too on your command line\nNow q will be a string, e.g. \"1.345\", so you have to convert it to a float using q = float(q).","Q_Score":0,"Tags":"python,python-3.x,command-line-arguments","A_Id":52325190,"CreationDate":"2018-09-14T02:32:00.000","Title":"how do I connect sys.argv into my float value?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I've been working with scrapy for 3 months. For extracting selectors I use simple response.css or response.xpath.\nI'm asked to switch to ItemLoaders and use add_xpath, add_css, etc.\nI know how ItemLoaders work and how convenient they are, but can anyone compare these two w.r.t. efficiency? Which way is more efficient and why?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":191,"Q_Id":52330140,"Users Score":0,"Answer":"Item loaders do exactly the same thing underneath that you do when you don't use them. So for every loader.add_css\/add_xpath call there will be a response.css\/xpath executed. It won't be any faster, and the little amount of additional work they do won't really make things any slower (especially in comparison to xml parsing and network\/io load).","Q_Score":1,"Tags":"python,python-3.x,scrapy,css-selectors","A_Id":52332084,"CreationDate":"2018-09-14T10:30:00.000","Title":"Scrapy: Difference between simple spider and the one with ItemLoader","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"From a Python script, I want to feed some small string data to a subprocess, but said subprocess non-negotiably accepts only a filename as an argument, which it will open and read. I non-negotiably do not want to write this data to disk - it should reside only in memory.\nMy first instinct was to use StringIO, but I realize that StringIO has no fileno(). mmap(-1, ...) also doesn't seem to create a file descriptor. With those off the table, I'm at a loss as to how to do this. Is this even achievable? The fd would be OS-level visible, but (I would expect) only to the process's children. \ntl;dr how to create a private file descriptor to a Python string\/memory that only a child process can see?\nP.S.
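Putting the answer above together, a complete version of the script might look like this:

    import math
    import sys

    q_deg = float(sys.argv[1])   # e.g. run: python script.py 30
    q = math.radians(q_deg)      # degrees -> radians
    print(math.sin(5 * q) + math.sin(6 * q))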
This is all on Linux and doesn't have to be portable in any way.","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":876,"Q_Id":52340974,"Users Score":2,"Answer":"Reifying @user4815162342's comment as an answer:\nThe direct way to do this is:\n\npass \/dev\/stdin as the file argument to the process;\nuse stdin=subprocess.PIPE;\nfinally, use Popen.communicate() to feed the desired contents","Q_Score":3,"Tags":"python","A_Id":52350691,"CreationDate":"2018-09-15T01:56:00.000","Title":"Possible to get a file descriptor for Python's StringIO?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am trying to make a Python program that creates and writes to a txt file.\nThe program works, but I want it to tick the \"hidden\" checkbox in the txt file's properties, so that the txt can't be seen without using the Python program I made. I have no clue how to do that; please understand I am a beginner in Python.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":165,"Q_Id":52371454,"Users Score":0,"Answer":"I'm not 100% sure, but I don't think you can do this in Python. I'd suggest finding a simple Visual Basic script and running it from your Python file.","Q_Score":0,"Tags":"python,python-3.x","A_Id":52371677,"CreationDate":"2018-09-17T15:44:00.000","Title":"how to modify txt file properties with python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I want to play a sound (from a wav file) using winsound's winsound.PlaySound function. I know that winsound.Beep allows me to specify the time in milliseconds, but how can I implement that behavior with winsound.PlaySound? \nI tried to use the time.sleep function, but that only delays the call; it does not limit how long the sound plays. \nAny help would be appreciated.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":48,"Q_Id":52389545,"Users Score":1,"Answer":"Create a thread to play the sound, and start it. Create a second thread that sleeps the right amount of time and has a handle to the first thread. Have the second thread terminate the first thread when the sleep is over.","Q_Score":1,"Tags":"python,python-3.x,time,sleep","A_Id":52389699,"CreationDate":"2018-09-18T15:03:00.000","Title":"How can I run code for a certain amount of time?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am building a web-app. One part of the app calls a function that starts a tweepy StreamListener on a certain track. That function processes a tweet and then writes a json object to a file or mongodb.\nOn the other hand, I need a process that is reading the file or mongodb and paginates the tweet if some property is in it.
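A minimal sketch of the \/dev\/stdin trick from the answer above (Linux only, as the question allows); wc -c just stands in for the real child program that insists on a filename:

    import subprocess

    # The child opens /dev/stdin "as a file"; we feed it from memory.
    proc = subprocess.Popen(["wc", "-c", "/dev/stdin"], stdin=subprocess.PIPE)
    proc.communicate(b"string data that never touches the disk")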
The thing is that I don't know how to do that second part. Do I need different threads?\nWhat solutions could there be?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":29,"Q_Id":52391071,"Users Score":0,"Answer":"You can certainly do it with a thread or by spinning up a new process that will perform the pagination.\nAlternatively you can look into a task queue service (Redis Queue or Celery, as examples). Your web-app can add a task to this queue, and your other program can listen to this queue and perform the pagination tasks as they come in.","Q_Score":0,"Tags":"python,flask,tweepy","A_Id":52396308,"CreationDate":"2018-09-18T16:35:00.000","Title":"Do I need two instances of python-flask?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"Say I am running Celery on a bank of 50 machines, all using a distributed RabbitMQ cluster.\nIf I have a task that is running and I know the task id, how in the world can Celery figure out which machine it's running on in order to terminate it?\nThanks.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":85,"Q_Id":52415048,"Users Score":0,"Answer":"I am not sure if you can actually do it. When you spawn a task you will have a worker, somewhere in your 50 boxes, that executes it, and you technically have no control over it, as it is a separate process; the only things you can control are the AsyncResult and the AMQP message on the queue.","Q_Score":0,"Tags":"python,rabbitmq,celery","A_Id":52451214,"CreationDate":"2018-09-19T22:34:00.000","Title":"Celery - how to stop running task when using distributed RabbitMQ backend?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I want my flask APP to pull updates from a local txt file every 200ms. Is it possible to do that?\nP.S. I've considered BackgroundScheduler() from apscheduler, but its granularity is 1s.","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":610,"Q_Id":52415384,"Users Score":1,"Answer":"Couldn't you just start a loop in a thread that sleeps for 200 ms before the next iteration?","Q_Score":0,"Tags":"python,python-3.x,flask,apscheduler","A_Id":52419167,"CreationDate":"2018-09-19T23:16:00.000","Title":"how to run periodic task in high frequency in flask?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"Suppose I have multiple MongoDBs like mongodb_1, mongodb_2, mongodb_3 with the same kind of data, like employee details of different organizations.\nWhen a user triggers a GET request to get, from all the above 3 MongoDBs, the employee details whose designation is \"TechnicalLead\",
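A minimal sketch of the loop-in-a-thread idea from the answer above; the file name and the shared-state handling are hypothetical:

    import threading
    import time

    def poll_file(path, interval=0.2):
        while True:
            with open(path) as f:
                data = f.read()
            # ... push `data` into state the Flask app reads ...
            time.sleep(interval)  # 200 ms between polls

    threading.Thread(target=poll_file, args=("updates.txt",), daemon=True).start()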
then first we need to connect to mongodb_1 and search, then disconnect from mongodb_1, connect to mongodb_2 and search, and repeat the same for all dbs.\nCan anyone suggest how we can achieve the above using the Python Eve REST API framework?\nBest Regards,\nNarendra","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":50,"Q_Id":52418721,"Users Score":0,"Answer":"First of all, it is not a recommended way to run multiple instances (especially when the servers might be running at the same time), as it will lead to usage of the same config parameters, for example logpath and pidfilepath, which in most cases is not what you want.\nSecondly, for getting the data from multiple MongoDB instances you have to create separate GET requests for fetching the data. There are two approaches that can be used:\n\nQuery the individual databases for data, then assemble the results for viewing on the screen.\nQuery a central database that the other databases continuously update.","Q_Score":0,"Tags":"python,mongodb,eve","A_Id":52872010,"CreationDate":"2018-09-20T06:14:00.000","Title":"How to search for all existing mongodbs for single GET request","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":1,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I've been studying Python for 4\/5 months and this is my third project built from scratch, but I'm not able to solve this problem on my own.\nThis script downloads 1 image for each URL given.\nI'm not able to find a solution on how to implement ThreadPoolExecutor or async in this script. I cannot figure out how to link the URL with the image number in the save-image part. \nI built a dict of all the URLs that I need to download, but how do I actually save the image with the correct name?\nAny other advice?\nPS.
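The first approach from the answer above (query each database in turn and assemble the results) could be sketched with plain pymongo like this; the URIs and the database\/collection layout are hypothetical:

    from pymongo import MongoClient

    uris = ["mongodb://host1:27017", "mongodb://host2:27017",
            "mongodb://host3:27017"]  # hypothetical instances

    results = []
    for uri in uris:
        client = MongoClient(uri)
        # hypothetical db/collection names
        cursor = client["employees"]["details"].find({"designation": "TechnicalLead"})
        results.extend(cursor)
        client.close()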
The urls present at the moment are only fake one.\nSynchronous version:\n\n\n import requests\n import argparse\n import re\n import os\n import logging\n\n from bs4 import BeautifulSoup\n\n\n parser = argparse.ArgumentParser()\n parser.add_argument(\"-n\", \"--num\", help=\"Book number\", type=int, required=True) \n parser.add_argument(\"-p\", dest=r\"path_name\", default=r\"F:\\Users\\123\", help=\"Save to dir\", )\n args = parser.parse_args()\n\n\n\n logging.basicConfig(format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',\n level=logging.ERROR)\n logger = logging.getLogger(__name__) \n\n\n def get_parser(url_c): \n url = f'https:\/\/test.net\/g\/{url_c}\/1'\n logger.info(f'Main url: {url_c}')\n responce = requests.get(url, timeout=5) # timeout will raise an exeption\n if responce.status_code == 200:\n page = requests.get(url, timeout=5).content\n soup = BeautifulSoup(page, 'html.parser')\n return soup\n else:\n responce.raise_for_status()\n\n\n def get_locators(soup): # take get_parser\n # Extract first\/last page num\n first = int(soup.select_one('span.current').string)\n logger.info(f'First page: {first}')\n last = int(soup.select_one('span.num-pages').string) + 1\n\n # Extract img_code and extension\n link = soup.find('img', {'class': 'fit-horizontal'}).attrs[\"src\"]\n logger.info(f'Locator code: {link}')\n code = re.search('galleries.([0-9]+)\\\/.\\.(\\w{3})', link)\n book_code = code.group(1) # internal code \n extension = code.group(2) # png or jpg\n\n # extract Dir book name\n pattern = re.compile('pretty\":\"(.*)\"')\n found = soup.find('script', text=pattern)\n string = pattern.search(found.text).group(1)\n dir_name = string.split('\"')[0]\n logger.info(f'Dir name: {dir_name}')\n\n logger.info(f'Hidden code: {book_code}')\n print(f'Extension: {extension}')\n print(f'Tot pages: {last}')\n print(f'')\n\n return {'first_p': first, \n 'last_p': last, \n 'book_code': book_code, \n 'ext': extension, \n 'dir': dir_name\n }\n\n\n def setup_download_dir(path, dir): # (args.path_name, locator['dir'])\n # Make folder if it not exist\n filepath = os.path.join(f'{path}\\{dir}')\n if not os.path.exists(filepath):\n try:\n os.makedirs(filepath)\n print(f'Directory created at: {filepath}')\n except OSError as err:\n print(f\"Can't create {filepath}: {err}\") \n return filepath \n\n\n def main(locator, filepath):\n for image_n in range(locator['first_p'], locator['last_p']):\n url = f\"https:\/\/i.test.net\/galleries\/{locator['book_code']}\/{image_n}.{locator['ext']}\"\n logger.info(f'Url Img: {url}')\n responce = requests.get(url, timeout=3)\n if responce.status_code == 200:\n img_data = requests.get(url, timeout=3).content \n else: \n responce.raise_for_status() # raise exepetion \n\n with open((os.path.join(filepath, f\"{image_n}.{locator['ext']}\")), 'wb') as handler:\n handler.write(img_data) # write image\n print(f'Img {image_n} - DONE')\n\n\n if __name__ == '__main__':\n try:\n locator = get_locators(get_parser(args.num)) # args.num ex. 241461\n main(locator, setup_download_dir(args.path_name, locator['dir'])) \n except KeyboardInterrupt:\n print(f'Program aborted...' 
+ '\\n')\n\n\nUrls list:\n\n\n def img_links(locator):\n image_url = []\n for num in range(locator['first_p'], locator['last_p']):\n url = f\"https:\/\/i.test.net\/galleries\/{locator['book_code']}\/{num}.{locator['ext']}\"\n image_url.append(url)\n logger.info(f'Url List: {image_url}') \n return image_url","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":353,"Q_Id":52430038,"Users Score":0,"Answer":"I found the solution in the book fluent python. Here the snippet:\n\n def download_many(cc_list, base_url, verbose, concur_req):\n counter = collections.Counter()\n with futures.ThreadPoolExecutor(max_workers=concur_req) as executor:\n to_do_map = {}\n for cc in sorted(cc_list):\n future = executor.submit(download_one, cc, base_url, verbose)\n to_do_map[future] = cc\n done_iter = futures.as_completed(to_do_map)\n if not verbose:\n done_iter = tqdm.tqdm(done_iter, total=len(cc_list))\n for future in done_iter:\n try:\n res = future.result()\n except requests.exceptions.HTTPError as exc:\n error_msg = 'HTTP {res.status_code} - {res.reason}'\n error_msg = error_msg.format(res=exc.response)\n except requests.exceptions.ConnectionError as exc:\n error_msg = 'Connection error'\n else:\n error_msg = ''\n status = res.status\n if error_msg:\n status = HTTPStatus.error\n counter[status] += 1\n if verbose and error_msg:\n cc = to_do_map[future]\n print('*** Error for {}: {}'.format(cc, error_msg))\n return counter","Q_Score":2,"Tags":"python,python-3.x,asynchronous,python-multithreading,imagedownload","A_Id":52735044,"CreationDate":"2018-09-20T17:05:00.000","Title":"python asyncronous images download (multiple urls)","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"Let's say I have a list of 887123, 123, 128821, 9, 233, 9190902. I want to put those strings on screen using pygame (line drawing), and I want to do so proportionally, so that they fit the screen. If the screen is 1280x720, how do I scale the numbers down so that they keep their proportions to each other but fit the screen? \nI did try with techniques such as dividing every number by two until they are all smaller than 720, but that is skewed. Is there an algorithm for this sort of mathematical scaling?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":29,"Q_Id":52465795,"Users Score":1,"Answer":"I used this algorithm: x = (x \/ (maximum value)) * (720 - 1)","Q_Score":0,"Tags":"python,math,pygame","A_Id":52466333,"CreationDate":"2018-09-23T11:46:00.000","Title":"How to put a list of arbitrary integers on screen (from lowest to highest) in pygame proportionally?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have several unit-tests (only python3.6 and higher) which are importing a helper class to setup some things (eg. pulling some Docker images) on the system before starting the tests.\nThe class is doing everything while it get instantiate. 
It needs to stay alive because it holds some information which is evaluated during runtime and needed for the different tests.\nInstantiating the helper class is very expensive, and I want to speed up my tests by instantiating it only once. My approach here would be to use a singleton, but I was told that in most cases a singleton is not needed. Are there other options for me, or is a singleton here actually a good solution?\nThe option should allow executing all tests together as well as every test on its own.\nI also have some theoretical questions.\nIf I use a singleton here, how does Python execute this in parallel? Does Python wait for the first instance to finish, or can there be a race condition? And if yes, how do I avoid it?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":40,"Q_Id":52468137,"Users Score":0,"Answer":"I can only give an answer on the \"are there other options for me\" part of your question...\nThe use of such a complex setup for unit-tests (pulling docker images etc.) makes me suspicious:\nIt can mean that your tests are in fact integration tests rather than unit-tests. Which could be perfectly fine if your goal is to find the bugs in the interactions between the involved components or in the interactions between your code and its system environment. (The fact that your setup involves Docker images gives the impression that you intend to test your system-under-test against the system environment.) If this is the case I wish you luck to get the other aspects of your question answered (parallelization of tests, singletons and thread safety). Maybe it makes sense to tag your question \"integration-testing\" rather than \"unit-testing\" then, in order to attract the proper experts.\nOn the other hand your complex setup could be an indication that your unit-tests are not yet designed properly and\/or the system under test is not yet designed to be easily testable with unit-tests: Unit-tests focus on the system-under-test in isolation - isolation from depended-on-components, but also isolation from the specifics of the system environment. For such tests of a properly isolated system-under-test a complex setup using Docker would not be needed.\nIf the latter is true you could benefit from making yourself familiar with topics like \"mocking\", \"dependency injection\" or \"inversion of control\", which will help you to design your system-under-test and your unit test cases such that they are independent of the system environment. Then, your complex setup would no longer be necessary and the other aspects of your question (singleton, parallelization etc.) may no longer be relevant.","Q_Score":0,"Tags":"python,python-3.x,unit-testing,singleton","A_Id":55788176,"CreationDate":"2018-09-23T16:36:00.000","Title":"Python3.6 and singletons - use case and parallel execution","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":1,"ERRORS":0,"REVIEW":0},{"Question":"As mentioned above I would like to know how I can increase the number of errors shown in flake8 and pylint. I have installed both and they work fine when I am working with small files.
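If the tests run under pytest, one common alternative to a hand-rolled singleton is a session-scoped fixture, created once and shared by all tests in the session; this is a sketch, not the answer's own proposal, and ExpensiveHelper is a hypothetical stand-in for the real setup class:

    import pytest

    class ExpensiveHelper:               # hypothetical stand-in
        def __init__(self):
            print("expensive setup runs once")
        def teardown(self):
            print("cleanup runs once")

    @pytest.fixture(scope="session")
    def helper():
        h = ExpensiveHelper()
        yield h          # every test in the session shares this one instance
        h.teardown()

    def test_something(helper):
        assert helper is not None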
I am currently working with a very large file (>18k lines) and there is no error highlighting done at the bottom part of the file. I believe the current limit is set to 100 and would like to increase it.\nIf this isn't possible, is there any way I can just do linting for my part of the code? I am just adding a function in this large file and would like to monitor it.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":384,"Q_Id":52476494,"Users Score":0,"Answer":"You can use \"python.linting.maxNumberOfProblems\": 2000 to increase the number of problems being displayed, but the limit seems to be capped at 1001, so more than 1001 problems can't be displayed.","Q_Score":0,"Tags":"python,visual-studio-code,vscode-settings,vscode-debugger","A_Id":52476867,"CreationDate":"2018-09-24T09:39:00.000","Title":"How to increase the error limit in flake8 and pylint VS Code?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":1,"REVIEW":0},{"Question":"How do I build a knowledge graph in Python from structured texts? Do I need to know any graph databases? Any resources would be of great help.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":484,"Q_Id":52478308,"Users Score":0,"Answer":"A Knowledge Graph (KG) is just a virtual representation and not an actual graph stored as-is.\nTo store the data you can use any of the existing databases like SQL, MongoDB, etc. But to benefit from the fact that we are storing graphs here, I'd suggest using a graph database such as Neo4j.","Q_Score":1,"Tags":"python,nlp","A_Id":58796431,"CreationDate":"2018-09-24T11:25:00.000","Title":"Knowledge graph in python for NLP","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I'm exploring ODL and Mininet; I am able to run both, populate the network nodes in ODL, and view the topology via ODL's default web GUI.\nI'm planning to create my own web GUI, starting with a simple topology view. I need advice and guidelines on how I can achieve the topology view in my own web GUI. I plan to use Python and HTML: just a simple single-page HTML and Python script. Hopefully someone could lead the way. Please assist, and thank you.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":162,"Q_Id":52493515,"Users Score":0,"Answer":"If a web GUI for ODL would provide value for you, please consider working to contribute that upstream.
The previous GUI (DLUX) has recently been deprecated because no one was supporting it, although it seems many people were using it.","Q_Score":0,"Tags":"python,html,api,web,opendaylight","A_Id":52532208,"CreationDate":"2018-09-25T08:12:00.000","Title":"How to view Opendaylight topology on external webgui","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have created a custom exception in Python 3 and the overall code works just fine. But there is one thing I am not able to wrap my head around: why do I need to send my message to the Exception class's __init__(), and how does it convert the custom exception into that string message when I try to print the exception, since the code in Exception or even BaseException does not do much?\nI am not quite able to understand why to call super().__init__() from a custom exception.","AnswerCount":2,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":739,"Q_Id":52509910,"Users Score":2,"Answer":"This is so that your custom exceptions can start off with the same instance attributes as a BaseException object does, including the value attribute, which stores the exception message, which is needed by certain other methods such as __str__, which allows the exception object to be converted to a string directly. You can skip calling super().__init__ in your subclass's __init__ and instead initialize all the necessary attributes on your own if you want, but then you would not be taking advantage of one of the key benefits of class inheritance. Always call super().__init__ unless you have very specific reasons not to reuse any of the parent class's instance attributes.","Q_Score":2,"Tags":"python,python-3.x,exception,super","A_Id":52510057,"CreationDate":"2018-09-26T04:22:00.000","Title":"Python3, calling super's __init__ from a custom exception","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":1,"REVIEW":0},{"Question":"I recently upgraded PyCharm (community version). If it matters, I am running on a Mac OSX machine. After the upgrade, I have one project in which PyCharm cannot find any python modules. It can't find numpy, matplotlib, anything ... I have checked a couple of other projects and they seem to be fine. I noticed that somehow the interpreter for the project in question was not the same as for the others. So I changed it to match the others. But PyCharm still can't find the modules. Any ideas what else I can do?\nMore generally, something like this happens every time I upgrade to a new PyCharm version. The fix each time is a little different. Any ideas on how I can prevent this in the first place?\nEDIT: FWIW, I just now tried to create a new dummy project. It has the same problem. I notice that my two problem projects are created with a \"venv\" sub-directory. My \"good\" projects don't have this thing.
Is this a clue to what is going on?\nEDIT 2: OK, just realized that when creating a new project, I can select \"New environment\" or \"Existing interpreter\", and I want \"Existing interpreter\". However, I would still like to know how one project that was working fine before is now hosed, and how I can fix it. Thanks.","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":90,"Q_Id":52526167,"Users Score":1,"Answer":"Your project is most likely pointing to the wrong interpreter. E.G. Using a virtual environment when you want to use a global one.\nYou must point PyCharm to the correct interpreter that you want to use.\n\"File\/Settings(Preferences On Mac)\/Project: ... \/Project Interpreter\" takes you to the settings associated with the interpreters.\nThis window shows all of the modules within the interpreter.\nFrom here you can click the settings wheel in the top right and configure your interpreters. (add virtual environments and what not)\nor you can select an existing interpreter from the drop down to use with your project.","Q_Score":1,"Tags":"python,pycharm","A_Id":52526325,"CreationDate":"2018-09-26T21:19:00.000","Title":"Interpreter problem (apparently) with a project in PyCharm","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":1,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I recently upgraded PyCharm (community version). If it matters, I am running on a Mac OSX machine. After the upgrade, I have one project in which PyCharm cannot find any python modules. It can't find numpy, matplotlib, anything ... I have checked a couple of other projects and they seem to be fine. I noticed that somehow the interpreter for the project in question was not the same as for the others. So I changed it to match the others. But PyCharm still can't find the modules. Any ideas what else I can do?\nMore generally, something like this happens every time I upgrade to a new PyCharm version. The fix each time is a little different. Any ideas on how I can prevent this in the first place?\nEDIT: FWIW, I just now tried to create a new dummy project. It has the same problem. I notice that my two problem projects are created with a \"venv\" sub-directory. My \"good\" projects don't have this thing. Is this a clue to what is going on?\nEDIT 2: OK, just realized that when creating a new project, I can select \"New environment\" or \"Existing interpreter\", and I want \"Existing interpreter\". However, I would still like to know how one project that was working fine before is now hosed, and how I can fix it. Thanks.","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":90,"Q_Id":52526167,"Users Score":1,"Answer":"It seems, when you are creating a new project, you also opt to create a new virtual environment, which then is created (default) in that venv sub-directory.\nBut that would only apply to new projects, what is going on with your old projects, changing their project interpreter environment i do not understand.\nSo what i would say is you have some corrupt settings (e.g. 
in ~\/Library\/Preferences\/PyCharm2018.2 ), which are copied upon PyCharm upgrade.\nYou might try configuring PyCharm from scratch by moving those PyCharm preferences away, so you can put them back later.\nThe project configuration, on the other hand, especially the project interpreter, is stored inside $PROJECT_ROOT\/.idea and thus should not change.","Q_Score":1,"Tags":"python,pycharm","A_Id":52526644,"CreationDate":"2018-09-26T21:19:00.000","Title":"Interpreter problem (apparently) with a project in PyCharm","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":1,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"if all(data_Window['CI']!=np.nan):\nI have used the all() function with if so that if column CI has no NA values, then it will do some operation. But I got a syntax error.","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":77,"Q_Id":52529669,"Users Score":0,"Answer":"This gives you all columns and how many null values each has.\ndf = pd.DataFrame({0:[1,2,None],1:[2,3,None]})\ndf.isnull().sum()","Q_Score":0,"Tags":"python,pandas","A_Id":52529791,"CreationDate":"2018-09-27T04:54:00.000","Title":"how can i check all the values of dataframe whether have null values in them without a loop","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":1,"REVIEW":0},{"Question":"Consider the following situation: you work with audio files and soon there are different contexts of what \"an audio\" actually is in the same solution. \nOn one side this would be more obvious with typing; while Python has classes and typing, it is less explicit in the code than in Java. I think this occurs in any untyped language.\nMy question is how to have less ambiguous variable names, and whether there is something like an official and widely accepted guideline, or even a standard like a PEP\/RFC, for that or comparable.\nExamples for variables:\n\nA string type to designate the path\/filename of the actual audio file\nA file handle for the above to do the I\/O\nThen, in the package pydub, you deal with the type AudioSegment\nWhile in the package moviepy, you deal with the type AudioFileClip\n\nUsing all four together requires, in my eyes, a clever naming strategy, but maybe I am just overlooking something. \nMaybe this is a quite exotic example, but if you think of other media types, this should provide a broader view. Likewise, is a Document a handle, a path, or an abstract object?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":22,"Q_Id":52533609,"Users Score":0,"Answer":"There is no definitive standard\/RFC for naming your variables. One option is to prefix\/suffix your variables with a (possibly short form) type.
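A small sketch illustrating the isnull() counting above, plus a row-wise variant; note that a comparison like x != np.nan is always True, because NaN compares unequal to everything, which is why isnull() is needed in the first place:

    import numpy as np
    import pandas as pd

    df = pd.DataFrame({0: [1, 2, None], 1: [2, 3, None]})
    print(df.isnull().sum())               # null count per column
    print(df.isnull().any(axis=1).sum())   # rows containing at least one null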
For example, you can name a variable foo_Foo, where the variable foo_Foo is of type Foo.","Q_Score":0,"Tags":"python,variables,naming-conventions,typing","A_Id":52533696,"CreationDate":"2018-09-27T09:20:00.000","Title":"Choosing best semantics for related variables in an untyped language like Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am using holoviews and bokeh with python 3 to create an interactive network graph from NetworkX. I can't manage to set the edge color to blank. It seems that the edge_color option does not exist. Do you have any idea how I could do that?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":284,"Q_Id":52539639,"Users Score":1,"Answer":"Problem solved: the option to change edge color is edge_line_color, not edge_color.","Q_Score":0,"Tags":"python,networkx,bokeh,holoviews","A_Id":52539914,"CreationDate":"2018-09-27T14:44:00.000","Title":"Holoviews - network graph - change edge color","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I want Pipenv to make the virtual environment in the same folder as my project (Django).\nI searched and found the PIPENV_VENV_IN_PROJECT option, but I don't know where and how to use this.","AnswerCount":5,"Available Count":2,"Score":1.0,"is_accepted":false,"ViewCount":36278,"Q_Id":52540121,"Users Score":58,"Answer":"This may help someone else.
I found another easy way to solve this!\nJust make an empty folder inside your project and name it .venv,\nand pipenv will use this folder.","Q_Score":58,"Tags":"python,pipenv","A_Id":60234402,"CreationDate":"2018-09-27T15:09:00.000","Title":"Make Pipenv create the virtualenv in the same folder","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I want Pipenv to make the virtual environment in the same folder as my project (Django).\nI searched and found the PIPENV_VENV_IN_PROJECT option, but I don't know where and how to use this.","AnswerCount":5,"Available Count":2,"Score":0.0798297691,"is_accepted":false,"ViewCount":36278,"Q_Id":52540121,"Users Score":2,"Answer":"For posterity's sake, if you find pipenv is not creating a virtual environment in the proper location, you may have an erroneous Pipfile somewhere, confusing the pipenv shell call - in which case I would delete it from path locations that are not explicitly linked to a repository.","Q_Score":58,"Tags":"python,pipenv","A_Id":68628989,"CreationDate":"2018-09-27T15:09:00.000","Title":"Make Pipenv create the virtualenv in the same folder","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have a tkinter listbox; when I select an item it performs a few actions and then returns the results. While that is happening, the item I selected does not show as selected. Is there a way to force it to show as selected immediately, so it's obvious to the user that they selected the correct one while waiting on the returned results? I'm using python 3.4 and I'm on a windows 7 machine.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":79,"Q_Id":52540413,"Users Score":0,"Answer":"The item does not show as selected right away because the time-consuming actions are executed before the GUI is updated. You can force the GUI to update before executing the actions by using window.update_idletasks().","Q_Score":0,"Tags":"python,tkinter,listbox","A_Id":52550908,"CreationDate":"2018-09-27T15:26:00.000","Title":"Force tkinter listbox to highlight item when selected before task is started","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I want to find the number of rows that have certain values such as None or \"\" or NaN (basically empty values) in all columns of a DataFrame object.
How can I do this?","AnswerCount":3,"Available Count":1,"Score":0.1325487884,"is_accepted":false,"ViewCount":875,"Q_Id":52544340,"Users Score":2,"Answer":"Use df.isnull().sum() to get the number of None\/NaN values per column.\nUse df.eq(value).sum() for any other kind of value, including the empty string \"\".","Q_Score":1,"Tags":"python,pandas,dataframe,sklearn-pandas","A_Id":52544615,"CreationDate":"2018-09-27T20:02:00.000","Title":"In Python DataFrame how to find out number of rows that have valid values of columns","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have a charge object with information in charge['metadata']['distinct_id']. It could be the case that it's not set; therefore I tried it this way, which doesn't work: charge.get(['metadata']['distinct_id'], None)\nDo you know how to do it the right way?","AnswerCount":3,"Available Count":2,"Score":0.0665680765,"is_accepted":false,"ViewCount":76,"Q_Id":52552848,"Users Score":1,"Answer":"As @blue_note mentioned, you cannot use two consecutive levels directly. However, you can try something like \ncharge.get('metadata', {}).get('distinct_id')\nHere, it tries to get 'metadata' from charge, and if it is not found it falls back to an empty dictionary and tries to get 'distinct_id' from there (which technically does not exist). In this scenario, you need not worry about whether metadata exists or not. If it exists, it will look up distinct_id in metadata; otherwise it returns None.\nHope this solves your problem.\nCheers..!","Q_Score":0,"Tags":"python,django","A_Id":52553493,"CreationDate":"2018-09-28T10:00:00.000","Title":".get + dict variable","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have a charge object with information in charge['metadata']['distinct_id']. It could be the case that it's not set; therefore I tried it this way, which doesn't work: charge.get(['metadata']['distinct_id'], None)\nDo you know how to do it the right way?","AnswerCount":3,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":76,"Q_Id":52552848,"Users Score":5,"Answer":"You don't say what the error is, but two things are possibly wrong:\n\nIt should be charge.get('metadata', None).\nYou can't directly do it on two consecutive levels. If the metadata key returns None, you can't go on and ask for the distinct_id key. You could return an empty dict and apply get to that, e.g. something like charge.get('metadata', {}).get('distinct_id', None)","Q_Score":0,"Tags":"python,django","A_Id":52552937,"CreationDate":"2018-09-28T10:00:00.000","Title":".get + dict variable","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"In the MongoDB console, I know that you can use $last and $natural.
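A tiny sketch of the chained .get() pattern from the answers above, with a hypothetical charge dict:

    charge = {"metadata": {}}  # hypothetical: distinct_id not set

    # Falls back to {} if 'metadata' is missing, then to None for 'distinct_id'.
    distinct_id = charge.get("metadata", {}).get("distinct_id")
    print(distinct_id)  # None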
In PyMongo I could not use them; maybe I was doing something wrong?","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":3599,"Q_Id":52559576,"Users Score":1,"Answer":"Another way is:\ndb.collection.find().limit(1).sort([('$natural',-1)])\nThis seemed to work best for me.","Q_Score":1,"Tags":"python,pymongo","A_Id":65854155,"CreationDate":"2018-09-28T16:43:00.000","Title":"PyMongo how to get the last item in the collection?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I intend to implement image captioning. Would it be possible to use transfer learning for the LSTM? I have used a pretrained VGG16 (transfer learning) to extract features as input for the LSTM.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":289,"Q_Id":52568209,"Users Score":-1,"Answer":"As I have discovered, we can't use transfer learning on the LSTM weights. I think the cause is the structure of LSTM networks.","Q_Score":0,"Tags":"python-3.x,conv-neural-network,lstm","A_Id":52700217,"CreationDate":"2018-09-29T12:08:00.000","Title":"how can I use Transfer Learning for LSTM?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"In a Linux directory, I have several numbered files, such as \"day1\" and \"day2\". My goal is to write code that retrieves the numbers from the files, adds 1 to the biggest number, and creates a new file. So, for example, if there are files 'day1', 'day2' and 'day3', the code should read the list of files and add 'day4'. To do so, I at least need to know how to retrieve the numbers in the file names.","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":49,"Q_Id":52572007,"Users Score":0,"Answer":"Get all files with the os module (I don't have the exact command handy) and then use the re package (regex) to get the numbers. If you don't want to look into regex, you could remove the letters from your string with replace() and convert the remaining string with int().","Q_Score":0,"Tags":"python,linux,python-3.x,file,ubuntu","A_Id":52572081,"CreationDate":"2018-09-29T19:52:00.000","Title":"Is there any way to retrieve file name using Python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"The following is what I did in the Python shell.
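A minimal sketch of the $natural sort from the answer above; the database and collection names are hypothetical:

    from pymongo import MongoClient

    collection = MongoClient()["mydb"]["mycoll"]  # hypothetical names
    for doc in collection.find().limit(1).sort([("$natural", -1)]):
        print(doc)  # the most recently inserted document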
Can anyone explain the difference?\n\n>>> datetime.datetime.now()\ndatetime.datetime(2018, 9, 29, 21, 34, 10, 847635)\n>>> print(datetime.datetime.now())\n2018-09-29 21:34:26.900063","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":356,"Q_Id":52574958,"Users Score":1,"Answer":"The first is the result of calling repr on the datetime value; the second is the result of calling str on a datetime.\nThe Python shell calls repr on values other than None before printing them, while print tries str before calling repr (if str fails).\nThis is not dependent on the Python version.","Q_Score":0,"Tags":"python,python-3.x,printing","A_Id":52574995,"CreationDate":"2018-09-30T05:33:00.000","Title":"python 3, how print function changes output?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am using python's CLI module, which takes any do_* method and sets it as a command, so a do_show() method will be executed if the user types \"show\".\nHow can I execute the do_show() method using any variation of capitalization from user input, e.g. SHOW, Show, sHoW, and so on, without giving a Command Not Found error?\nI think the answer has something to do with overriding the Cmd class and forcing it to take the user's input.lower(), but I don't know how to do that.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":108,"Q_Id":52580345,"Users Score":1,"Answer":"You should override onecmd to achieve the desired functionality.","Q_Score":1,"Tags":"python","A_Id":52580602,"CreationDate":"2018-09-30T17:25:00.000","Title":"Python's cmd.Cmd case insensitive commands","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":1,"REVIEW":0},{"Question":"I need to present my data in various graphs. Usually what I do is to take a screenshot of my graph (I almost exclusively make them with matplotlib) and paste it into my PowerPoint.\nUnfortunately my direct superior seems not to be happy with the way I present them. Sometimes he wants certain things in log scale and sometimes he dislikes my color palette. The data is all there, but because it's an image there's no way I can change that in the meeting. \nMy superior seems to really care about those things and spends quite a lot of time telling me how to make plots in every single meeting. He (usually) will not comment on my data before I make a plot the way he wants.\nThat's where my question becomes relevant. Right now what I have in my mind is to have an interactive canvas embedded in my PowerPoint such that I can change the range of the axis, color of my data point, etc in real time. I have been searching online for such a thing but it comes up empty. I wonder if that could be done and how it can be done?\nFor some simple graphs an Excel plot may work, but usually I have to present things in 1D or 2D histograms\/density plots with millions of entries. Sometimes I have to fit points with complicated mathematical formulas, and that's something Excel is incapable of doing and I must use scipy and pandas.
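A short sketch of the repr\/str distinction described in the answer above:

    import datetime

    now = datetime.datetime.now()
    print(repr(now))  # what the shell echoes: datetime.datetime(2018, 9, 29, ...)
    print(str(now))   # what print() shows:    2018-09-29 21:34:26.900063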
\nThe closest thing to this I found online is RISE with Jupyter, which converts a Jupyter notebook into a slide show. I think that is a good start which allows me to run Python code in real time inside the presentation, but I would like to use PowerPoint-related solutions if possible, mostly because I am familiar with how PowerPoint works and I still find certain PowerPoint features useful. \nThank you for all your help. While I do prefer PowerPoint, any other product that allows me to modify plots in my presentation in real time, or alternatives to RISE, are welcome.","AnswerCount":3,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":8150,"Q_Id":52586506,"Users Score":3,"Answer":"When putting a picture in PowerPoint you can decide whether you want to embed it or link to it. If you decide to link to the picture, you would be free to change it outside of PowerPoint. This opens up the possibility for the following workflow:\nNext to your presentation you have a Python IDE or Jupyter notebook open with the scripts that generate the figures. They all have a savefig command in them to save to exactly the location on disk from where you link the images in PowerPoint. If you need to change the figure, you make the changes in the Python code, run the script (or cell) and switch back to PowerPoint, where the newly created image is updated.\nNote that I would not recommend putting too much effort into finding a better solution to this, but rather spend the time thinking about good visual representations of the data, due to the following reasons: 1. If your instructor's demands are completely unreasonable (\"I like blue better than green, so you need to use blue.\") then it's not worth spending effort on satisfying their demands at all. 2. If your instructor's demands are based on the fact that the current representation does not allow the data to be interpreted correctly, this can be prevented by spending more thought on good plots prior to the presentation. This is a learning process, which I guess your instructor wants you to internalize. After all, you won't get a degree in computer science for writing a PowerPoint backend to matplotlib, but rather for being able to present your research in a way suited for your subject.","Q_Score":2,"Tags":"python,matplotlib,powerpoint,jupyter,rise","A_Id":52589887,"CreationDate":"2018-10-01T07:38:00.000","Title":"Possible ways to embed python matplotlib into my presentation interactively","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I don't know if the problem is between me and Pyomo.DAE or between me and IPOPT. I am doing this all from the command-line interface in Bash on Ubuntu on Windows (WSL).
When I run:\n\nJAMPchip@DESKTOP-BOB968S:~\/examples\/dae$ python3 run_disease.py\n\nI receive the following output:\n\nWARNING: Could not locate the 'ipopt' executable, which is required\n for solver\n ipopt Traceback (most recent call last): File \"run_disease.py\", line 15, in \n results = solver.solve(instance,tee=True) File \"\/usr\/lib\/python3.6\/site-packages\/pyomo\/opt\/base\/solvers.py\", line\n 541, in solve\n self.available(exception_flag=True) File \"\/usr\/lib\/python3.6\/site-packages\/pyomo\/opt\/solver\/shellcmd.py\", line\n 122, in available\n raise ApplicationError(msg % self.name) pyutilib.common._exceptions.ApplicationError: No executable found for\n solver 'ipopt'\n\nWhen I run \"make test\" in the IPOPT build folder, I received:\n\nTesting AMPL Solver Executable...\n Test passed! Testing C++ Example...\n Test passed! Testing C Example...\n Test passed! Testing Fortran Example...\n Test passed!\n\nBut my one major concern is that in the \"configure\" output was the following:\n\nchecking for COIN-OR package HSL... not given: No package 'coinhsl'\n found\n\nThere were also a few warnings when I ran \"make\". I am not at all sure where the issue lies. How do I make python3 find IPOPT, and how do I tell if I have IPOPT on the system for pyomo.dae to find? I am pretty confident that I have \"coinhsl\" in the HSL folder, so how do I make sure that it is found by IPOPT?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1060,"Q_Id":52596464,"Users Score":0,"Answer":"As sascha states, you need to make sure that the directory containing your IPOPT executable (likely the build folder) is in your system PATH. That way, if you were to open a terminal and call ipopt from an arbitrary directory, it would be detected as a valid command.
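For instance, a quick sanity check from Python that Pyomo can now see the solver (a minimal sketch using Pyomo's SolverFactory; the exception_flag parameter is the one visible in the traceback above):

from pyomo.environ import SolverFactory

solver = SolverFactory('ipopt')
print(solver.available(exception_flag=False))  # True once the ipopt binary is on PATH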
This is distinct from being able to call make test from within the IPOPT build folder.","Q_Score":0,"Tags":"python,pyomo,ipopt","A_Id":52612398,"CreationDate":"2018-10-01T18:01:00.000","Title":"\"No package 'coinhsl' found\": IPOPT compiles and passes test, but pyomo cannot find it?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"Is there any way to disable the print screen key when running a python application?\nMaybe editing the windows registry is the way?\nThanks!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":426,"Q_Id":52609180,"Users Score":0,"Answer":"Print Screen is OS functionality.\nThere is no ASCII code for Print Screen.\nThere are also many different ways to take a screenshot.\n\nThus, you can disable the keyboard, but it is difficult to stop the user from taking a screenshot.","Q_Score":0,"Tags":"python,windows,screenshot,printscreen","A_Id":57932453,"CreationDate":"2018-10-02T13:17:00.000","Title":"how to disable printscreen key on windows using python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am running multiple scrapers using the command line which is an automated process.\nPython : 2.7.12\nScrapy : 1.4.0\nOS : Ubuntu 16.04.4 LTS\nI want to know how scrapy handles the case when \n\nThere is not enough memory\/cpu bandwidth to start a scraper.\nThere is not enough memory\/cpu bandwidth during a scraper run.\n\nI have gone through the documentation but couldn't find anything.\nAnyone answering this, you don't have to know the right answer, if you can point me to the general direction of any resource you know which would be helpful, that would also be appreciated","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":95,"Q_Id":52643398,"Users Score":1,"Answer":"The operating system kills any process that tries to access more memory than the limit.\nApplies to python programs too.
and scrapy is no different.\nMore often than not, bandwidth is the bottleneck in scraping \/ crawling applications.\nMemory would only be a bottleneck if there is a serious memory leak in your application.\nYour application would just be very slow if the CPU is being shared by many processes on the same machine.","Q_Score":0,"Tags":"python,python-2.7,memory-management,scrapy","A_Id":52643555,"CreationDate":"2018-10-04T09:28:00.000","Title":"How does scrapy behave when enough resources are not available","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have a raspberry pi 3b+ and I'm showing an IP camera stream using OpenCV in python.\nMy default IP on the raspberry is in the 169.254.210.x range and I have to put the camera in the same range.\nHow can I change my raspberry IP?\nSupposing I run the program on a web service such as Flask, can I change the raspberry pi server IP every time?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":154,"Q_Id":52652775,"Users Score":0,"Answer":"You can statically change the IP of your raspberry pi by editing \/etc\/network\/interfaces.\nTry editing the line of the file which contains address.","Q_Score":0,"Tags":"python,networking,flask,raspberry-pi3","A_Id":52653195,"CreationDate":"2018-10-04T17:55:00.000","Title":"how to change raspberry pi ip in flask web service","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"From what I've read, it sounds like the issue might be that the module isn't in the same directory as my script. Is that the case? If so, how do I find the module and move it to the correct location? \nEdit\nIn case it's relevant - I installed docx using easy_install, not pip.","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":3543,"Q_Id":52654136,"Users Score":0,"Answer":"pip show docx \nThis will show you where it is installed. However, if you're using python3 then\n pip install python-docx \nmight be the one you need.","Q_Score":1,"Tags":"python,import,docx","A_Id":52857960,"CreationDate":"2018-10-04T19:48:00.000","Title":"\"No module named 'docx'\" error but \"requirement already satisfied\" when I try to install","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":1,"REVIEW":0},{"Question":"From what I've read, it sounds like the issue might be that the module isn't in the same directory as my script. Is that the case? If so, how do I find the module and move it to the correct location?
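A hedged example of what such a static entry in \/etc\/network\/interfaces might look like on Raspbian (the address is just an illustration in the question's 169.254.210.x range, and the interface may be eth0 or wlan0 depending on the setup):

auto eth0
iface eth0 inet static
    address 169.254.210.50
    netmask 255.255.255.0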
\nEdit\nIn case it's relevant - I installed docx using easy_install, not pip.","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":3543,"Q_Id":52654136,"Users Score":0,"Answer":"Please install python-docx.\nThen you import docx (not python-docx).","Q_Score":1,"Tags":"python,import,docx","A_Id":71628381,"CreationDate":"2018-10-04T19:48:00.000","Title":"\"No module named 'docx'\" error but \"requirement already satisfied\" when I try to install","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":1,"REVIEW":0},{"Question":"I know that installing python packages using sudo pip install is a bad security risk. Unfortunately, I found this out after installing quite a few packages using sudo. \nIs there a way to find out what python packages I installed using sudo pip install? The end goal is uninstalling them and correctly re-installing them within a virtual environment.\nI tried pip list to get information about the packages, but it only gave me their version. pip show gave me more information about an individual package such as where it is installed, but I don't know how to make use of that information.","AnswerCount":3,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":1059,"Q_Id":52669951,"Users Score":1,"Answer":"Any modules you installed with sudo will be owned by root, so you can open your shell\/terminal, cd to the site-packages directory and check the directories' owners with ls -la; any that has root in the owner column is one you want to uninstall.","Q_Score":1,"Tags":"python,pip,sudo","A_Id":52670311,"CreationDate":"2018-10-05T16:36:00.000","Title":"How can I see what packages were installed using `sudo pip install`?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":1,"ERRORS":0,"REVIEW":0},{"Question":"I know that installing python packages using sudo pip install is a bad security risk. Unfortunately, I found this out after installing quite a few packages using sudo. \nIs there a way to find out what python packages I installed using sudo pip install? The end goal is uninstalling them and correctly re-installing them within a virtual environment.\nI tried pip list to get information about the packages, but it only gave me their version.
pip show gave me more information about an individual package such as where it is installed, but I don't know how to make use of that information.","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":1059,"Q_Id":52669951,"Users Score":0,"Answer":"Try the following command: pip freeze.","Q_Score":1,"Tags":"python,pip,sudo","A_Id":52670065,"CreationDate":"2018-10-05T16:36:00.000","Title":"How can I see what packages were installed using `sudo pip install`?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":1,"ERRORS":0,"REVIEW":0},{"Question":"I am working on a machine learning project and I am wondering whether it is possible to change the loss function while the network is training. I'm not sure how to do it exactly in code.\nFor example, start training with cross entropy loss and then halfway through training, switch to 0-1 loss.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":783,"Q_Id":52682979,"Users Score":0,"Answer":"You have to implement the switch yourself, but it is possible with TensorFlow: for example, stop training, recompile the model with the new loss, and continue training from the current weights.","Q_Score":0,"Tags":"python,tensorflow,machine-learning","A_Id":52695712,"CreationDate":"2018-10-06T20:14:00.000","Title":"Is it possible to change the loss function dynamically during training?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"but I have been trying to play around with it for a while. I've seen a lot of guides on how Keras is used to build LSTM models and how people feed in the inputs and get expected outputs. But what I have never seen yet is, for example with stock data, how we can make the LSTM model understand patterns between different dimensions, say close price is much higher than normal because volume is low.\nPoint of this is that I want to do a test with stock prediction, but make it so that each dimension is not only reliant on previous time steps, but also reliant on the other dimensions it has as well.\nSorry if I am not asking the question correctly, please ask more questions if I am not explaining it clearly.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":51,"Q_Id":52706996,"Users Score":0,"Answer":"First: Regressors will simply replicate a feature that gives direct intuition about the predicted value, because that minimizes the error, rather than actually learning to predict it. Try to focus on binary classification or multiclass classification instead, e.g. whether the closing price goes up\/down, or by how much. \nSecond: Always engineer the raw features to give more explicit patterns to the ML algorithm. Think of inputs such as Volume(t) - Volume(t-1), close(t)^2 - close(t-1)^2, technical indicators (RSI, CCI, OBV etc.). Create your own features.
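One concrete way to realize that loss switch (a hedged Keras sketch, not the only approach: recompiling swaps the loss while keeping the learned weights; since the 0-1 loss has no useful gradient, hinge loss stands in for it here):

import numpy as np
from tensorflow import keras

x = np.random.rand(64, 10)
y = np.random.randint(0, 2, size=(64, 1))

model = keras.Sequential([keras.layers.Dense(1, activation='sigmoid', input_shape=(10,))])

# first half of training: cross entropy
model.compile(optimizer='sgd', loss='binary_crossentropy')
model.fit(x, y, epochs=5, verbose=0)

# switch the loss and keep training the same weights
model.compile(optimizer='sgd', loss='hinge')
model.fit(x, y, epochs=5, verbose=0)

Note that recompiling with a fresh optimizer string resets optimizer state; reuse an optimizer object if that matters.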
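As a concrete illustration of that feature engineering (a minimal pandas sketch; the close and volume values are invented):

import pandas as pd

df = pd.DataFrame({'close': [10.0, 10.5, 10.2, 11.0],
                   'volume': [100, 80, 120, 90]})
df['volume_delta'] = df['volume'].diff()                        # Volume(t) - Volume(t-1)
df['close_sq_delta'] = df['close']**2 - df['close'].shift()**2  # close(t)^2 - close(t-1)^2
print(df.dropna())  # the first row has no previous value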
You can use the pyti library for technical indicators.","Q_Score":0,"Tags":"python,machine-learning,keras,lstm,rnn","A_Id":52786576,"CreationDate":"2018-10-08T17:02:00.000","Title":"Keras LSTM Input Dimension understanding each other","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":1},{"Question":"We are trying to order a 128 subnet. But it looks like it doesn't work; I get an error saying Invalid combination specified for ordering a subnet. The same code works to order a 64 subnet. Any thoughts on how to order a 128 subnet? \n\nnetwork_mgr = SoftLayer.managers.network.NetworkManager(client)\nnetwork_mgr.add_subnet('private', 128, vlan_id, test_order=True)\n\n\nTraceback (most recent call last):\n File \"subnet.py\", line 11, in \n result = nwmgr.add_subnet('private', 128, vlan_id, test_order=True)\n File \"\/usr\/local\/lib\/python2.7\/site-packages\/SoftLayer\/managers\/network.py\", line 154, in add_subnet\n raise TypeError('Invalid combination specified for ordering a'\nTypeError: Invalid combination specified for ordering a subnet.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":52,"Q_Id":52714654,"Users Score":0,"Answer":"Currently it does not seem possible to order a 128-IP subnet: the package used by the manager to order subnets only allows capacities of 64, 32, 16, 8 and 4.\nThe package does not contain any item for a 128-IP-address subnet, which is why you are getting the exception you provided.\nYou may also verify this through the Portal UI; if you can see a 128 IP address option in your account, please update this forum with a screenshot.","Q_Score":0,"Tags":"python-2.7,ibm-cloud-infrastructure","A_Id":52730451,"CreationDate":"2018-10-09T06:31:00.000","Title":"SoftLayer API: order a 128 subnet","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":1,"REVIEW":0},{"Question":"If I forget to add Python to the path while installing it, how can I add it to my Windows path?\nWithout adding it to the path I am unable to use it.
Also, how do I set Python 3 as the default?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":164,"Q_Id":52718655,"Users Score":1,"Answer":"Edit Path in Environment Variables\nAdd Python's path to the end of the list (these are separated by ';').\nFor example:\n\nC:\\Users\\AppData\\Local\\Programs\\Python\\Python36;\nC:\\Users\\AppData\\Local\\Programs\\Python\\Python36\\Scripts\n\nand if you want to make it the default\nyou have to edit the system environment variables\nedit the following from the Path\n\nC:\\Windows;C:\\Windows\\System32;C:\\Python27\n\nNow Python 3 will have become the default python in your system\nYou can check it with python --version","Q_Score":1,"Tags":"python,windows,path","A_Id":52718656,"CreationDate":"2018-10-09T10:19:00.000","Title":"Add Python to the Windows path","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"We've built a large python repo that uses lots of libraries (numpy, scipy, tensorflow, ...) And have managed these dependencies through a conda environment. Basically we have lots of developers contributing and anytime someone needs a new library for something they are working on they 'conda install' it. \nFast forward to today and now we need to deploy some applications that use our repo. We are deploying using docker, but are finding that these images are really large and causing some issues, e.g. 10+ GB. However each individual application only uses a subset of all the dependencies in the environment.yml. \nIs there some easy strategy for dealing with this problem? In a sense I need to know the dependencies for each application, but I'm not sure how to do this in an automated way. \nAny help here would be great. I'm new to this whole AWS, Docker, and python deployment thing... We're really a bunch of engineers and scientists who need to scale up our software. We have something that works, it just seems like there has to be a better way.","AnswerCount":3,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":2376,"Q_Id":52719729,"Users Score":3,"Answer":"First see if there are easy wins to shrink the image, like using Alpine Linux and being very careful about what gets installed with the OS package manager, and ensuring you only allow installing dependencies or recommended items when truly required, and that you clean up and delete artifacts like package lists, big things you may not need like Java, etc.\nThe base Anaconda\/Ubuntu image is ~ 3.5GB in size, so it's not crazy that with a lot of extra installations of heavy third-party packages, you could get up to 10GB.
In production image processing applications, I routinely worked with Docker images in the range of 3GB to 6GB, and those sizes were after we had heavily optimized the container.\nTo your question about splitting dependencies, you should provide each different application with its own package definition, basically a setup.py script and some other details, including dependencies listed in some mix of requirements.txt for pip and\/or environment.yaml for conda.\nIf you have Project A in some folder \/ repo and Project B in another, you want people to easily be able to do something like pip install or conda env create -f ProjectB_environment.yml or something, and voila, that application is installed.\nThen when you deploy a specific application, have some CI tool like Jenkins build the container for that application using a FROM line to start from your thin Alpine \/ whatever container, and only perform conda install or pip install for the dependency file for that project, and not all the others.\nThis also has the benefit that multiple different projects can declare different version dependencies even among the same set of libraries. Maybe Project A is ready to upgrade to the latest and greatest pandas version, but Project B needs some refactoring before the team wants to test that upgrade. This way, when CI builds the container for Project B, it will have a Python dependency file with one set of versions, while in Project A's folder or repo of source code, it might have something different.","Q_Score":5,"Tags":"python,amazon-web-services,docker","A_Id":52719901,"CreationDate":"2018-10-09T11:15:00.000","Title":"Deploying python with docker, images too big","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"Could you recomend me best way how to do it: i have a list phrases, for example [\"free flower delivery\",\"flower delivery Moscow\",\"color + home delivery\",\"flower delivery + delivery\",\"order flowers + with delivery\",\"color delivery\"] and pattern - \"flower delivery\". I need to get list with phrases as close as possible to pattern. \nCould you give some advice to how to do it?","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":345,"Q_Id":52724518,"Users Score":1,"Answer":"Answer given by nflacco is correct.. In addition to that, have you tried edit distance? Try fuzzywuzzy (pip install fuzzywuzzy).. it uses Edit distance to give you a score, how near two sentences are","Q_Score":0,"Tags":"python-3.x,machine-learning,nlp","A_Id":52733825,"CreationDate":"2018-10-09T15:27:00.000","Title":"Text classification by pattern","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am currently using TensorFlow tutorial's first_steps_with_tensor_flow.ipynb notebook to learn TF for implementing ML models. In the notebook, they have used Stochastic Gradient Descent (SGD) to optimize the loss function. 
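For the phrase-matching answer above, a minimal fuzzywuzzy sketch using the question's own phrases (the scores printed are illustrative; higher means closer to the pattern):

from fuzzywuzzy import process

phrases = ['free flower delivery', 'flower delivery Moscow',
           'color + home delivery', 'flower delivery + delivery',
           'order flowers + with delivery', 'color delivery']

# rank every phrase by edit-distance similarity to the pattern
for phrase, score in process.extract('flower delivery', phrases, limit=len(phrases)):
    print(score, phrase)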
Below is the snippet of my_input_fn:\ndef my_input_fn(features, targets, batch_size=1, shuffle=True, num_epochs=None):\nHere, it can be seen that the batch_size is 1. The notebook uses a housing data set containing 17000 labeled examples for training. This means for SGD, I will have 17000 batches.\nLRmodel = linear_regressor.train(input_fn = lambda:my_input_fn(my_feature, \n targets), steps=100)\nI have three questions - \n\nWhy is steps=100 in the linear_regressor.train method above? Since we have 17000 batches, and a step in ML means processing one batch, shouldn't steps = 17000 be set in the linear_regressor.train method?\nIs the number of batches equal to the number of steps\/iterations in ML?\nWith my 17000 examples, if I keep my batch_size=100, steps=500, and num_epochs=5, what does this initialization mean and how does it correlate to 170 batches?","AnswerCount":1,"Available Count":1,"Score":-0.1973753202,"is_accepted":false,"ViewCount":185,"Q_Id":52738335,"Users Score":-1,"Answer":"A step means one parameter update computed from one batch; so linear_regressor.train with steps=100 and batch_size=1 performs 100 updates, each on a single example.\nAn epoch means one full pass over the whole data set, which is 17,000 examples in your case.","Q_Score":0,"Tags":"python,tensorflow,machine-learning","A_Id":52904530,"CreationDate":"2018-10-10T10:39:00.000","Title":"TensorFlow: Correct way of using steps in Stochastic Gradient Descent","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":1,"ERRORS":0,"REVIEW":1},{"Question":"I have a task in which I have a csv file containing some sample data. The task is to convert the data inside the csv file into other formats like JSON, HTML, YAML etc. after applying some data validation rules.\nNow I am also supposed to write some unit tests for this in pytest or the unittest module in Python.\nMy question is how do I actually write the unit tests for this, since I am converting them to different JSON\/HTML files? Should I prepare some sample files and then do a comparison with them in my unit tests?\nI think only the data validation part in the task can be tested using unittest and not the creation of files in different formats, right?\nAny ideas would be immensely helpful.\nThanks in advance.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":511,"Q_Id":52763725,"Users Score":0,"Answer":"You should do functional tests, so testing the whole pipeline from a csv file to the end result, but unit tests are about checking that individual steps work.\nSo for instance, can you read a csv file properly? Does it fail as expected when you don't provide a csv file? Are you able to check each validation unit? Are they failing when they should? Are they passing valid data? \nAnd of course, the result must be tested as well. Starting from a known internal representation, is the resulting json valid? Does it contain all the required data? Same for yaml, HTML. You should not test the formatting, but really what was output and whether it's correct.
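To make the batch arithmetic in the TensorFlow question above concrete (a minimal sketch; the numbers come straight from the question):

num_examples = 17000
batch_size = 100
steps = 500

batches_per_epoch = num_examples // batch_size   # 170 batches per epoch
examples_seen = steps * batch_size               # 50,000 examples processed in 500 steps
epochs_covered = examples_seen / num_examples    # about 2.94 passes over the data
print(batches_per_epoch, examples_seen, round(epochs_covered, 2))

So steps=500 with batch_size=100 covers just under 3 of the requested 5 epochs; num_epochs only caps how many times the input function may repeat the data.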
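To illustrate that testing style (a minimal pytest sketch; validate_row and to_json are hypothetical stand-ins for the converter's own functions):

import json
import pytest

def validate_row(row):           # hypothetical validation unit
    if not row.get('name'):
        raise ValueError('name is required')
    return row

def to_json(rows):               # hypothetical conversion unit
    return json.dumps(rows)

def test_valid_row_passes():
    assert validate_row({'name': 'Alice'}) == {'name': 'Alice'}

def test_invalid_row_fails():
    with pytest.raises(ValueError):
        validate_row({'name': ''})

def test_json_output_content():
    # check the content, not the formatting
    assert json.loads(to_json([{'name': 'Alice'}])) == [{'name': 'Alice'}]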
\nYou should always test that valid data passes and that incorrect data doesn't at each step of your workflow.","Q_Score":1,"Tags":"python,pytest,python-unittest","A_Id":52763853,"CreationDate":"2018-10-11T15:24:00.000","Title":"Writing unit tests in Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I use a QSortFilterProxyModel to filter a QSqlTableModel's data, and want to get the filtered rowCount.\nBut when I call the QSortFilterProxyModel.rowCount method, the QSqlTableModel's rowCount is returned.\nSo how can I get the filtered rowCount?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":993,"Q_Id":52779569,"Users Score":0,"Answer":"Set the filter on the QSortFilterProxyModel first, then call proxymodel.rowCount(); it will then return the filtered row count.","Q_Score":1,"Tags":"python,pyqt,pyqt5,qsortfilterproxymodel","A_Id":62380679,"CreationDate":"2018-10-12T12:28:00.000","Title":"How to get filtered rowCount in a QSortFilterProxyModel","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I installed Anaconda 3 and wanted to execute python from the shell. It returned that it's either written wrong or does not exist. Apparently, I have to add a path to the environment variable.\nCan someone tell me how to do this?\nEnvironment: Windows 10, 64 bit and python 3.7\nPs: I know the web is full of that but I am notoriously afraid to make a mistake. And I did not find an exact entry for my environment. Thanks in advance.\nBest Daniel","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":4417,"Q_Id":52792102,"Users Score":0,"Answer":"Windows:\n\nsearch for -->Edit the system environment variables\nIn the Advanced tab, click Environment Variables\nIn System variables, select Path and click Edit. Now click New and add your Python path.\nClick Apply and close.\n\nNow, check in the command prompt","Q_Score":0,"Tags":"python-3.x,path,environment-variables","A_Id":52792181,"CreationDate":"2018-10-13T10:45:00.000","Title":"python 3.7 setting environment variable path","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"Say I have two different lists of ints.\na = [1, 4, 11, 20, 25] and b = [3, 10, 20]\nI want to return a list of length len(b) that stores the closest number in a for each int in b.\nSo, this should return [4, 11, 20].\nI can do this by brute force, but what is a more efficient way to do this?
\nEDIT: It would be great if I could do this with the standard library only, if possible.","AnswerCount":6,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1190,"Q_Id":52798977,"Users Score":0,"Answer":"Use binary search, assuming the list you search in is sorted.\nBrute force is O(len(a)*len(b)) (roughly O(n^2)), since for every element of b you scan all of a.\nIf you sort a first (Timsort takes O(n log n)), each lookup becomes a binary search at O(log n), so the whole job is O((len(a)+len(b)) * log(len(a))), which beats the brute force for anything but tiny lists.","Q_Score":3,"Tags":"python","A_Id":52799033,"CreationDate":"2018-10-14T02:29:00.000","Title":"Given two lists of ints, how can we find the closest number in one list from the other one?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"Say that most of my DAGs and tasks in AirFlow are supposed to run Python code on the same machine as the AirFlow server.\nCan I have different DAGs use different conda environments? If so, how should I do it? For example, can I use the Python Operator for that? Or would that restrict me to using the same conda environment that I used to install AirFlow?\nMore generally, where\/how should I ideally activate the desired conda environment for each DAG or task?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1210,"Q_Id":52805712,"Users Score":4,"Answer":"The Python that is running the Airflow Worker code is the one whose environment will be used to execute the code.\nWhat you can do is have separate named queues for separate execution environments for different workers, so that only a specific machine or group of machines will execute a certain DAG.","Q_Score":6,"Tags":"python,python-3.x,anaconda,conda,airflow","A_Id":52834527,"CreationDate":"2018-10-14T18:17:00.000","Title":"Python tasks and DAGs with different conda environments","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"For example in python if I\u2019m sending data through sockets could I make my own encryption algorithm to encrypt that data? Would it be unbreakable since only I know how it works?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":42,"Q_Id":52806028,"Users Score":1,"Answer":"Yes you can. Would it be unbreakable? No. This is called security through obscurity. You're relying on the fact that nobody knows how it works. But can you really rely on that?\nSomeone is going to receive the data, and they'll have to decrypt it. The code must run on their machine for that to happen. If they have the code, they know how it works. Well, at least anyone with a lot of spare time and nothing else to do can easily reverse engineer it, and there goes your obscurity.\nIs it feasible to make your own algorithm? Sure. A bit of XOR here, a bit of shuffling there... eventually you'll have an encryption algorithm.
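A standard-library sketch of that approach, matching the question's EDIT (bisect performs the binary search; ties go to the lower neighbour here):

from bisect import bisect_left

def closest(sorted_a, x):
    i = bisect_left(sorted_a, x)
    if i == 0:
        return sorted_a[0]
    if i == len(sorted_a):
        return sorted_a[-1]
    before, after = sorted_a[i - 1], sorted_a[i]
    return before if x - before <= after - x else after

a = sorted([1, 4, 11, 20, 25])
b = [3, 10, 20]
print([closest(a, x) for x in b])  # [4, 11, 20]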
It probably wouldn't be a good one but it would do the job, at least until someone tries to break it, then it probably wouldn't last a day.\nDoes Python care? Do sockets care? No. You can do whatever you want with the data. It's just bits after all, what they mean is up to you.\nAre you a cryptographer? No, otherwise you wouldn't be here asking this. So should you do it? No.","Q_Score":0,"Tags":"python,encryption","A_Id":52806168,"CreationDate":"2018-10-14T18:54:00.000","Title":"Is it possible to make my own encryption when sending data through sockets?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"Is there any workaround to use cv2.imshow() with a specific framerate? I'm capturing the video via VideoCapture and doing some easy postprocessing on the frames (both in a separated thread, so it loads all frames into a Queue and the main thread isn't slowed by the computation). I tried to fix the framerate by calculating the time used for \"reading\" the image from the queue and then subtracting that value from the number of milliseconds available for one frame:\nif I have an input video with 50 FPS and I want to play it back in real time, I do 1000\/50 => 20ms per frame. \nAnd then wait that time using cv2.WaitKey().\nBut I still get some laggy output, which is slower than the source video","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":2282,"Q_Id":52806175,"Users Score":1,"Answer":"I don't believe there is such a function in opencv but maybe you could improve your method by adding a dynamic wait time using timers? timeit.default_timer()\ncalculate the time taken to process and subtract that from the expected framerate and maybe add a few ms buffer.\ne.g. cv2.WaitKey((1000\/50) - (time processing finished - time read started) - 10)\nor you could have a more rigid timing e.g. script start time + frame# * 20ms - time processing finished\nI haven't tried this personally so I'm not sure if it will actually work, also it might be worth having a check so the number isn't below 1","Q_Score":0,"Tags":"python,opencv","A_Id":52806830,"CreationDate":"2018-10-14T19:10:00.000","Title":"imshow() with desired framerate with opencv","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am executing a python script in Azure machine learning studio. I am including other python scripts and the python library Theano. I can see that Theano gets loaded and I got the proper result after the script executed. But I saw the error message:\n\nWARNING (theano.configdefaults): g++ not detected ! Theano will be unable to execute optimized C-implementations (for both CPU and GPU) and will default to Python implementations. Performance will be severely degraded. To remove this warning, set Theano flags cxx to an empty string.\n\nDoes anyone know how to solve this problem?
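A sketch of that dynamic-wait idea for the framerate question above (the 50 FPS budget comes from the question; the video path is a placeholder):

import time
import cv2

cap = cv2.VideoCapture('input.mp4')  # placeholder source
frame_budget_ms = 1000 / 50          # 20 ms per frame at 50 FPS

while True:
    start = time.perf_counter()
    ok, frame = cap.read()           # stands in for reading from the queue
    if not ok:
        break
    # ... postprocessing would happen here ...
    cv2.imshow('video', frame)
    elapsed_ms = (time.perf_counter() - start) * 1000
    delay = max(1, int(frame_budget_ms - elapsed_ms))  # never wait less than 1 ms
    if cv2.waitKey(delay) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()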
Thanks!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":96,"Q_Id":52844431,"Users Score":1,"Answer":"I don't think you can fix that - the Python script environment in Azure ML Studio is rather locked down, you can't really configure it (except for choosing from a small selection of Anaconda\/Python versions). \nYou might be better off using the new Azure ML service, which allows you considerably more configuration options (including using GPUs and the like).","Q_Score":1,"Tags":"python,theano,azure-machine-learning-studio","A_Id":52850290,"CreationDate":"2018-10-16T21:43:00.000","Title":"Azure Machine Learning Studio execute python script, Theano unable to execute optimized C-implementations (for both CPU and GPU)","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":1,"REVIEW":0},{"Question":"I have two same versions of python on windows. Both are 3.6.4. I installed one of them, and the other one comes with Anaconda.\nMy question is how do I use pip to install a package for one of them? It looks like the common method will not work since the two python versions are the same.","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":85,"Q_Id":52856872,"Users Score":1,"Answer":"pip points to only one installation because pip is a script from one python.\nIf you have one Python in your PATH, then it's that python and that pip that will be used.","Q_Score":0,"Tags":"python,windows,pip","A_Id":52856982,"CreationDate":"2018-10-17T14:07:00.000","Title":"how to use pip install a package if there are two same version of python on windows","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have two same versions of python on windows. Both are 3.6.4. I installed one of them, and the other one comes with Anaconda.\nMy question is how do I use pip to install a package for one of them? 
It looks like the common method will not work since the two python versions are the same.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":85,"Q_Id":52856872,"Users Score":0,"Answer":"Use virtualenv, conda environments or pipenv; it will help with managing packages for different projects.","Q_Score":0,"Tags":"python,windows,pip","A_Id":52857053,"CreationDate":"2018-10-17T14:07:00.000","Title":"how to use pip install a package if there are two same version of python on windows","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have a problem with installing numpy with python 3.6 and I have windows 10 64 bit \nPython 3.6.6 \nBut when I typed python on cmd this appeared \nPython is not recognized as an internal or external command \nI typed py and it solves the problem, but how can I install numpy? \nI tried to type the command set path=c:\/python36 \nand to copy-paste the actual path on cmd but it doesn't work \nI also tried to edit the environment path by typing a ; and c:\/python36 and restarting, but that didn't help \nI used pip install numpy and downloaded pip but it doesn't work","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":63,"Q_Id":52866467,"Users Score":0,"Answer":"Try pip3 install numpy. To install python 3 packages you should use pip3.","Q_Score":1,"Tags":"python,numpy,scipy,installation","A_Id":52866580,"CreationDate":"2018-10-18T03:14:00.000","Title":"How can I make the computer read a python file instead of py?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have a problem with installing numpy with python 3.6 and I have windows 10 64 bit \nPython 3.6.6 \nBut when I typed python on cmd this appeared \nPython is not recognized as an internal or external command \nI typed py and it solves the problem, but how can I install numpy? \nI tried to type the command set path=c:\/python36 \nand to copy-paste the actual path on cmd but it doesn't work \nI also tried to edit the environment path by typing a ; and c:\/python36 and restarting, but that didn't help \nI used pip install numpy and downloaded pip but it doesn't work","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":63,"Q_Id":52866467,"Users Score":0,"Answer":"On Windows, the py command should be able to launch any Python version you have installed. Each Python installation has its own pip. To be sure you get the right one, use py -3.6 -m pip instead of just pip.\n\nYou can use where pip and where pip3 to see which Python's pip they mean.
Windows just finds the first one on your path.\n\nIf you activate a virtualenv, then you should get the right one for the virtualenv while the virtualenv is active.","Q_Score":1,"Tags":"python,numpy,scipy,installation","A_Id":52866587,"CreationDate":"2018-10-18T03:14:00.000","Title":"How can I make the computer read a python file instead of py?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I know how to import and manipulate data from csv, but I always need to save to xlsx or so to see the changes. Is there a way to see 'live changes' as if I am already using Excel? \nPS using pandas\nThanks!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":47,"Q_Id":52871447,"Users Score":0,"Answer":"This is not possible using pandas. This lib creates a copy of your .csv \/ .xls file and stores it in RAM. So all changes are applied to the file stored in your memory, not on disk.","Q_Score":0,"Tags":"python,pandas","A_Id":52871555,"CreationDate":"2018-10-18T09:53:00.000","Title":"Is it possible to manipulate data from csv without the need for producing a new csv file?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have a data frame with an object column, let's say col1, which has values like:\n1.00,\n1,\n0.50,\n1.54\nI want to have the output like the below:\n1,\n1,\n0.5,\n1.54\nBasically, remove trailing zeros after the decimal point if there is no nonzero digit after them. Please note that I need an answer for a dataframe. pd.set_option and round don't work for me.","AnswerCount":5,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":5418,"Q_Id":52889130,"Users Score":0,"Answer":"A quick-and-dirty solution is to use \"%g\" % value, which will convert floats 1.5 to 1.5 but 1.0 to 1 and so on. The negative side-effect is that large numbers will be represented in scientific notation like 4.44e+07.","Q_Score":2,"Tags":"python,pandas","A_Id":52889192,"CreationDate":"2018-10-19T09:04:00.000","Title":"how to remove zeros after the decimal from a string (remove all zeros after the dot)","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"This is related to new features Visual Studio has introduced - Python support and Machine Learning project support.\nI have installed support and found that I can create a python project and can run it.
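Applied to a pandas column, that quick-and-dirty trick might look like this (a minimal sketch; col1 mirrors the question's object column of numeric strings):

import pandas as pd

df = pd.DataFrame({'col1': ['1.00', '1', '0.50', '1.54']})
df['col1'] = df['col1'].map(lambda v: '%g' % float(v))
print(df['col1'].tolist())  # ['1', '1', '0.5', '1.54']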
However, I could not find how to call a python function from another C# file.\nExample, I created a classifier.py from given project samples, Now I want to run the classifier and get results from another C# class.\nIf there is no such portability, then how is it different from creating a C# Process class object and running the Python.exe with our py file as a parameter.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":632,"Q_Id":52890639,"Users Score":0,"Answer":"As per the comments, python support has come in visual studio. Visual studio is supporting running python scripts and debugging. \nHowever, calling one python function from c# function and vice versa is not supported yet.\nClosing the thread. Thanks for suggestions.","Q_Score":1,"Tags":"c#,python,visual-studio","A_Id":52965599,"CreationDate":"2018-10-19T10:34:00.000","Title":"Call Python functions from C# in Visual Studio Python support VS 2017","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have a multibranch pipeline set up in Jenkins that runs a Jenkinsfile, which uses pytest for testing scripts, and outputs the results using Cobertura plug-in and checks code quality with Pylint and Warnings plug-in.\nI would like to test the code with Python 2 and Python 3 using virtualenv, but I do not know how to perform this in the Jenkinsfile, and Shining Panda plug-in will not work for multibranch pipelines (as far as I know). Any help would be appreciated.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":1383,"Q_Id":52890995,"Users Score":1,"Answer":"You can do it even using vanilla Jenkins (without any plugins). 'Biggest' problem will be with proper parametrization. But let's start from the beginning.\n2 versions of Python\nWhen you install 2 versions of python on a single machine you will have 2 different exec files. For python2 you will have python and for python3 you will have python3. Even when you create virtualenv (use venv) you will have both of them. So you are able to run unittests agains both versions of python. It's just a matter of executing proper command from batch\/bash script.\nJenkins\nThere are many ways of performing it:\n\nyou can prepare separate jobs for both python 2 and 3 versions of tests and run them from jenkins file\nyou can define the whole pipeline in a single jenkins file where each python test is a different stage (they can be run one after another or concurrently)","Q_Score":2,"Tags":"python,jenkins,github,continuous-integration,jenkins-pipeline","A_Id":52892540,"CreationDate":"2018-10-19T10:55:00.000","Title":"Running Jenkinsfile with multiple Python versions","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I need to track a moving deformable object in a video (but only 2D space). How do I find the paths (subpaths) revisited by the object in the span of its whole trajectory? 
For instance, if the object traced a path, p0-p1-p2-...-p10, I want to find the number of cases the object traced either p0-...-p10 or a sub-path like p3-p4-p5. Here, p0,p1,...,p10 represent object positions (in (x,y) pixel coordinates at the respective instants). Also, how do I know at which frame(s) these paths (subpaths) are being revisited?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":77,"Q_Id":52901800,"Users Score":0,"Answer":"I would first create a detection procedure that outputs a list of points visited along with their video frame number. Then use list exploration functions to find how many repeated sequences occur and where.\nAs you see, I won't write your code for you. If you need any more advice, please ask!","Q_Score":1,"Tags":"python,python-2.7,video-tracking","A_Id":52943459,"CreationDate":"2018-10-20T01:58:00.000","Title":"How to find redundant paths (subpaths) in the trajectory of a moving object?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I got a requirement to parse message files in .txt format in real time as and when they arrive in an incoming windows directory. The directory is in my local Windows Virtual Machine, something like D:\/MessageFiles\/\nI wrote a Python script to parse the message files because it's a fixed width file, and it parses all the files in the directory and generates the output. Once the files are successfully parsed, they will be moved to an archive directory. Now, I would like to make this script run continuously so that it looks for the incoming message files in the directory D:\/MessageFiles\/ and performs the processing as and when it sees new files in the path. \nCan someone please let me know how to do this?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":2288,"Q_Id":52906106,"Users Score":3,"Answer":"There are a few ways to do this, it depends on how fast you need it to archive the files.\nIf the frequency is low, for example every hour, you can try to use windows task scheduler to run the python script.\nIf we are talking high frequency, or you really want a python script running 24\/7, you could put it in a while loop and at the end of the loop do time.sleep()\nIf you go with this, I would recommend not blindly parsing the entire directory on every run, but instead finding a way to check whether new files have been added to the directory (such as the amount of files perhaps, or the total size). And then if there is a fluctuation you can archive.","Q_Score":2,"Tags":"python,python-3.x","A_Id":52906144,"CreationDate":"2018-10-20T13:20:00.000","Title":"Python - How to run script continuously to look for files in Windows directory","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I'm confused on how the PyOpenGL camera works or how to implement it.
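A minimal sketch of the while-loop option described in the answer above (directory names follow the question; parse_file is a hypothetical stand-in for the existing fixed-width parser):

import shutil
import time
from pathlib import Path

incoming = Path('D:/MessageFiles')
archive = Path('D:/MessageFiles/archive')
archive.mkdir(exist_ok=True)

def parse_file(path):
    # hypothetical: the existing fixed-width parsing logic goes here
    print('parsing', path)

while True:
    for txt in incoming.glob('*.txt'):
        parse_file(txt)
        shutil.move(str(txt), str(archive / txt.name))  # archive after success
    time.sleep(10)  # poll every 10 seconds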
Am I meant to rotate and move the whole world around the camera or is there a different way?\nI couldn't find anything that can help me and I don't know how to translate C to python.\nI just need a way to transform the camera that can help me understand how it works.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":362,"Q_Id":52907020,"Users Score":3,"Answer":"To say it bluntly: There is no such thing as a \"camera\" in OpenGL (nor is there in DirectX, Vulkan, or any of the legacy 3D graphics APIs). The effect of a camera is understood as some parameter that contributes to the ultimate placement of geometry inside the viewport volume.\nThe sooner you understand that all that current GPUs do is offer massively accelerated computational resources to set the values of pixels in a 2D grid, where the regions of pixels changed are mere points, lines or triangles on a 2D plane onto which they are projected from an arbitrarily dimensioned, abstract space, the better.\nYou're not even moving the world around the camera. Setting up transformations is actually erecting the stage on which \"the world\" will appear in the first place. Any notion of a \"camera\" is an abstraction created by a higher level framework, like a third party 3D engine or your own creation.\nSo instead of thinking in terms of a camera, which constrains your thinking, you should think about it this way:\nWhat kind of transformations do I have to chain up, to give a tuple of numbers that are called \"position\" an actual meaning, by letting this position turn up at a certain place on the visible screen?\nYou really ought to think that way, because that is what's actually happening.","Q_Score":0,"Tags":"python,opengl,pyopengl","A_Id":52907433,"CreationDate":"2018-10-20T15:04:00.000","Title":"PyOpenGL camera system","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I install my python modules via pip for my Azure Web Apps. But some of the python libraries that I need are only available in conda. I have been trying to install anaconda on Azure Web Apps (windows\/linux), no success so far. Any suggestions\/examples on how to use conda env on azure web apps?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":923,"Q_Id":52915651,"Users Score":1,"Answer":"Currently, Azure App Service only supports installing the official Python as an extension. Instead of using the normal App Service, I would suggest you use a Web App for Containers so that you can deploy your web app as a docker container.
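In legacy PyOpenGL terms, the usual trick for the camera question above is exactly that: apply the inverse of the camera's own transform to the modelview matrix before drawing (a minimal sketch; cam_x/cam_y/cam_z and yaw are illustrative state, and an active GL context is assumed):

from OpenGL.GL import (GL_MODELVIEW, glLoadIdentity, glMatrixMode,
                       glRotatef, glTranslatef)

cam_x, cam_y, cam_z, yaw = 0.0, 1.0, 5.0, 30.0  # illustrative camera state

def apply_camera():
    glMatrixMode(GL_MODELVIEW)
    glLoadIdentity()
    # inverse of the camera transform: rotate the opposite way, then translate the opposite way
    glRotatef(-yaw, 0.0, 1.0, 0.0)
    glTranslatef(-cam_x, -cam_y, -cam_z)
    # ... draw the scene in world coordinates afterwards ...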
I suppose this is the only solution until Microsoft supports Anaconda on App Service.","Q_Score":1,"Tags":"python,azure,anaconda,virtualenv,azure-web-app-service","A_Id":53827502,"CreationDate":"2018-10-21T13:11:00.000","Title":"Anaconda Installation on Azure Web App Services","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am currently working on a Python tweet analyser and part of this will be to count common words. I have seen a number of tutorials on how to do this, and most tokenize the strings of text before further analysis.\nSurely it would be easier to avoid this stage of preprocessing and count the words directly from the string - so why do this?","AnswerCount":4,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":120,"Q_Id":52916729,"Users Score":1,"Answer":"Perhaps I'm being overly correct, but doesn't tokenization simply refer to splitting up the input stream (of characters, in this case) based on delimiters to receive whatever is regarded as a \"token\"?\nYour tokens can be arbitrary: you can perform analysis on the word level where your tokens are words and the delimiter is any space or punctuation character. It's just as likely that you analyse n-grams, where your tokens correspond to a group of words and delimiting is done e.g. by sliding a window.\nSo in short, in order to analyse words in a stream of text, you need to tokenize to receive \"raw\" words to operate on.\nTokenization however is often followed by stemming and lemmatization to reduce noise. This becomes quite clear when thinking about sentiment analysis: if you see the tokens happy, happily and happiness, do you want to treat them each separately, or wouldn't you rather combine them to three instances of happy to better convey a stronger notion of \"being happy\"?","Q_Score":2,"Tags":"python,nltk,tweepy,analysis","A_Id":52917230,"CreationDate":"2018-10-21T15:08:00.000","Title":"Why tokenize\/preprocess words for language analysis?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am currently working on a Python tweet analyser and part of this will be to count common words. I have seen a number of tutorials on how to do this, and most tokenize the strings of text before further analysis.\nSurely it would be easier to avoid this stage of preprocessing and count the words directly from the string - so why do this?","AnswerCount":4,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":120,"Q_Id":52916729,"Users Score":0,"Answer":"Tokenization is an easy way of understanding the lexicon\/vocabulary in text processing.\nA basic first step in analyzing language or patterns in text is to remove symbols\/punctuations and stop words. 
With tokenization you are able to split the large chunks of text to identify and remove text which might not add value; in many cases, stop words like 'the', 'a', 'and', etc. do not add much value in identifying words of interest.\nWord frequencies are also very common in understanding the usage of words in text; Google's Ngram allows for language analysis and plots out the popularity\/frequency of a word over the years. If you do not tokenize or split the strings, you will not have a basis to count the words that appear in a text.\nTokenization also allows you to run a more advanced analysis, for example tagging the parts of speech or assigning sentiments to certain words. Also for machine learning, texts are mostly preprocessed to convert them to arrays which are used in the different layers of neural networks. Without tokenizing, the inputs will all be too distinct to run any analysis on.","Q_Score":2,"Tags":"python,nltk,tweepy,analysis","A_Id":52917506,"CreationDate":"2018-10-21T15:08:00.000","Title":"Why tokenize\/preprocess words for language analysis?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am currently working on a school project. We need to be able to shut down (and maybe restart) a Python script that is running on another raspberry pi using a button.\nI thought that the easiest thing might just be to shut down the pi from the other pi. But I have no experience on this subject.\nI don't need an exact guide (I appreciate all the help I can get) but does anyone know how one might do this?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":531,"Q_Id":52949880,"Users Score":0,"Answer":"Well, first we should ask if the PI you are trying to shut down is connected to a network? (LAN or the internet, doesn't matter).\nIf the answer is yes, you can simply connect to your PI through SSH, and call shutdown.sh. \nI don't know why you want another PI; you can do it through any device connected to the same network as your first PI (Wi-Fi or ethernet if LAN, or simply from anywhere if it's open to the internet). \nYou could make a smartphone app, or any kind of code that can connect to SSH (all of them).","Q_Score":0,"Tags":"python,button,ssh,raspberry-pi","A_Id":52950146,"CreationDate":"2018-10-23T13:07:00.000","Title":"Shutdown (a script) one raspberry pi with another raspberry pi","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":1,"ERRORS":0,"REVIEW":0},{"Question":"I'm running into an issue with volume mounting, combined with the creation of directories in Python. \nEssentially inside my container, I'm writing to some path \/opt\/\u2026, and I may have to make the path (which I'm using os.makedirs for).\nIf I mount a host file path like -v \/opt:\/opt, with bad \"permissions\" such that the docker container does not seem to be able to write to it, the creation of the path inside the container DOES NOT FAIL. The makedirs(P) works, because inside the container, it can make the dir just fine, because it has sudo permissions. 
However, nothing gets written, silently, on the host at \/opt\/\u2026. The data just isn't there, but no exception is ever raised.\nIf I mount a path with proper\/open permissions, like -v \/tmp:\/opt, then the data shows up on the host machine at \/tmp\/\u2026 as expected.\nSo, how do I not silently fail if there are no write permissions on the host on the left side of the -v argument?\nEDIT: my question is \"how do I detect this bad deployment scenario, crash, and fail fast inside the container, if the person who deploys the container does it wrong\"? Just silently not writing data isn't acceptable.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":82,"Q_Id":52952692,"Users Score":0,"Answer":"The bad mount is root on the host, right, and the good mount is the user in the Docker group on the host? Can you check the user\/group of the mounted \/opt? It should be different than that of \/tmp.","Q_Score":1,"Tags":"python,docker","A_Id":52952936,"CreationDate":"2018-10-23T15:25:00.000","Title":"python+docker: docker volume mounting with bad perms, data silently missing","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have a project I\u2019m exploring where I want to scrape the real estate broker websites in my country (30-40 websites of listings) and keep the information about each property in a database. \nI have experimented a bit with scraping in python using both BeautifulSoup and Scrapy. \nWhat I would ideally like to achieve is a daily updated database that will find new properties and remove properties when they are sold.\nAny pointers as to how to achieve this? \nI am relatively new to programming and open to learning different languages and resources if python isn\u2019t suitable.\nSorry if this forum isn\u2019t intended for this kind of vague question :-)","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":129,"Q_Id":52962131,"Users Score":0,"Answer":"Build a scraper and schedule a daily run. You can use scrapy and the daily run will update the database daily.","Q_Score":0,"Tags":"python,database,web-scraping,automation,scrapy","A_Id":52967603,"CreationDate":"2018-10-24T06:17:00.000","Title":"Building comprehensive scraping program\/database for real estate websites","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I want to use the first three convolution layers of vgg-16 to generate feature maps.\nBut I want to use it with variable image size, i.e. not the ImageNet size of 224x224 or 256x256, such as 480x640 or any other random image dimension. 
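For reference, a hedged Keras sketch of the idea discussed in the answer below: only the dense head of VGG16 fixes the input size, so with include_top=False the convolutional base can be loaded with an unspecified spatial size (a sub-model of just the first three conv layers could then be built from base.layers):

```python
import numpy as np
from keras.applications import VGG16

# Height and width left as None: the pre-trained conv weights are reused as-is.
base = VGG16(weights="imagenet", include_top=False, input_shape=(None, None, 3))
batch = np.random.rand(1, 480, 640, 3)   # a non-ImageNet spatial size
features = base.predict(batch)
print(features.shape)                    # spatial dims scale with the input
```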
\nAs convolution layers are independent of image spatial size, how can I use the weights for varying image sizes?\nSo how do we use the pre-trained weights of vgg-16 up to the first three convolution layers?\nKindly let me know if that is possible.","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":129,"Q_Id":52965773,"Users Score":0,"Answer":"As convolution layers are independent of image size\nActually it's more complicated than that. The kernel itself is independent of the image size because we apply it on each pixel. And indeed, the training of these kernels can be reused.\nBut this means that the output size is dependent on the image size, because this is the number of nodes that are fed out of the layer for each input pixel. So the dense layer is not adapted to your image, even if the feature extractors are independent.\nSo you need to preprocess your image to fit into the size of the first layer, or you retrain your dense layers from scratch.\nWhat people call \"transfer-learning\" is what people have done in segmentation for decades. You reuse the best feature extractors and then you train a dedicated model with these features.","Q_Score":0,"Tags":"python,tensorflow,deep-learning","A_Id":52965846,"CreationDate":"2018-10-24T09:41:00.000","Title":"Using convolution layer trained weights for different image size","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I know complex math and the necessary operations (either \"native\" Python, or through NumPy). My question has to do with how to display complex numbers in a UI using wxPython. All the questions I found dealing with Python and complex numbers have to do with manipulating complex data.\nMy original thought was to subclass wx.TextCtrl and override the set and get methods to apply and strip some formatting as needed, and concatenating an i (or j) to the imaginary part.\nAm I going down the wrong path? I feel like displaying complex numbers is something that should already be done somewhere.\nWhat would be the recommended pattern for this even when using another UI toolkit, as the problem is similar. Also read my comment below on why I would like to do this.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":66,"Q_Id":52975402,"Users Score":0,"Answer":"As Brian considered my first comment good advice, and he got no more answers, I am posting it as an answer. Please refer also to the other question comments discussing the issue.\n\nIn any UI you display strings and you read strings from the user. Why\n would you mix the type to string or string to type translation with\n widgets functionality? 
Get them, convert and use, or \"print\" them to\n string and show the string in the ui.","Q_Score":1,"Tags":"python,user-interface,tkinter,wxpython","A_Id":55748532,"CreationDate":"2018-10-24T18:05:00.000","Title":"Display complex numbers in UI when using wxPython","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"Can you change the file metadata on a cloud database using Apache Beam? From what I understand, Beam is used to set up dataflow pipelines for Google Dataflow. But is it possible to use Beam to change the metadata if you have the necessary changes in a CSV file without setting up and running an entire new pipeline? If it is possible, how do you do it?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":294,"Q_Id":52978251,"Users Score":0,"Answer":"You could code Cloud Dataflow to handle this, but I would not. A simple GCE instance would make it easier to develop and run the job. An even better choice might be a UDF (see below).\nThere are some guidelines for when Cloud Dataflow is appropriate:\n\nYour data is not tabular and you cannot use SQL to do the analysis.\nLarge portions of the job are parallel -- in other words, you can process different subsets of the data on different machines.\nYour logic involves custom functions, iterations, etc...\nThe distribution of the work varies across your data subsets.\n\nSince your task involves modifying a database (I am assuming a SQL database), it would be much easier and faster to write a UDF to process and modify the database.","Q_Score":1,"Tags":"java,python,google-cloud-platform,apache-beam,database-metadata","A_Id":52980803,"CreationDate":"2018-10-24T21:37:00.000","Title":"Change file metadata using Apache Beam on a cloud database?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"Well, I started learning Tensorflow but I noticed there's so much confusion about how to use this thing.\nFirst, some tutorials present models using low level APIs (tf.Variable, scopes, etc.), but other tutorials use Keras instead and, for example, use TensorBoard by invoking callbacks.\nSecond, what's the purpose of having a ton of duplicate APIs? Really, what's the purpose behind using a high level API like Keras when you have a low level one to build models like Lego blocks?\nFinally, what's the true purpose of using eager execution?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":241,"Q_Id":52980583,"Users Score":0,"Answer":"You can use these APIs all together. E.g. if you have a regular dense network but with a special layer, you can use a higher level API for the dense layers (tf.layers and tf.keras) and the low level API for your special layer. Furthermore, complex graphs are easier to define in low level APIs, e.g. if you want to share variables, etc. 
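A minimal sketch of mixing the two levels as described, assuming TensorFlow 1.x graph mode:

```python
import tensorflow as tf

x = tf.placeholder(tf.float32, shape=(None, 10))
h = tf.layers.dense(x, 32, activation=tf.nn.relu)  # high-level layer
w = tf.Variable(tf.random_normal([32, 1]))         # low-level "special" part
y = tf.matmul(h, w)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(y, feed_dict={x: [[0.0] * 10]}))
```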
\nEager execution helps with fast debugging; it evaluates tensors directly without the need to invoke a session.","Q_Score":0,"Tags":"python,python-3.x,tensorflow,tensorboard","A_Id":52981417,"CreationDate":"2018-10-25T02:44:00.000","Title":"How to use Tensorflow Keras API","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am using keras's data generator with flow_from_dataframe. For training it works just fine, but when using model.predict_generator on the test set, I discovered that the ordering of the generated results is different than the ordering of the \"id\" column in my dataframe.\nshuffle=False does make the ordering of the generator consistent, but it is a different ordering than the dataframe. I also tried different batch sizes and the corresponding correct steps for the predict_generator function. (for example: batch_Size=1, steps=len(data))\nHow can I make sure the labels predicted for my test set are ordered in the same way as my dataframe's \"id\" column?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":403,"Q_Id":52987835,"Users Score":0,"Answer":"While I haven't found a way to decide the order in which the generator produces data, the order can be obtained with the generator.filenames property.","Q_Score":0,"Tags":"python,keras","A_Id":53030878,"CreationDate":"2018-10-25T11:08:00.000","Title":"Keras flow_from_dataframe wrong data ordering","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I've been struggling with this problem in various guises for a long time, and never managed to find a good solution.\nBasically if I want to write a function that performs an operation over a given, but arbitrary axis of an arbitrary rank array, in the style of (for example) np.mean(A,axis=some_axis), I have no idea in general how to do this.\nThe issue always seems to come down to the inflexibility of the slicing syntax; if I want to access the ith slice on the 3rd index, I can use A[:,:,i], but I can't generalise this to the nth index.","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":68,"Q_Id":52992767,"Users Score":1,"Answer":"numpy functions use several approaches to do this:\n\ntranspose axes to move the target axis to a known position, usually first or last; and if needed transpose the result\nreshape (along with transpose) to reduce the problem to simpler dimensions. If your focus is on the n'th dimension, it might not matter whether the (:n) dimensions are flattened or not. They are just 'going along for the ride'.\nconstruct an indexing tuple. idx = (slice(None), slice(None), j); A[idx] is the equivalent of A[:,:,j]. Start with a list or array of the right size, fill with slices, fiddle with it, and then convert to a tuple (tuples are immutable).\nConstruct indices with indexing_tricks tools like np.r_, np.s_ etc. \n\nStudy code that provides for axes. 
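A sketch of the indexing-tuple trick from the third point, generalizing A[:,:,j] to any rank and axis:

```python
import numpy as np

def take_slice(A, j, axis):
    idx = [slice(None)] * A.ndim   # start with an all-":" index list
    idx[axis] = j                  # put the concrete index at the target axis
    return A[tuple(idx)]

A = np.arange(24).reshape(2, 3, 4)
print(np.array_equal(take_slice(A, 1, 2), A[:, :, 1]))  # True
```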
Compiled ufuncs won't help, but functions like tensordot, take_along_axis, apply_along_axis, np.cross are written in Python, and use one or more of these tricks.","Q_Score":0,"Tags":"python,numpy,multidimensional-array,indexing","A_Id":52993813,"CreationDate":"2018-10-25T15:16:00.000","Title":"Write python functions to operate over arbitrary axes","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have a performance measurement issue while executing a migration to Cython from C-compiled functions (through scipy.weave) called from a Python engine.\nThe new cython functions profiled end-to-end with cProfile (if not necessary, I won't dig down into cython profiling) record highly variable cumulative measurement times.\nE.g. the cumulative time of a cython function executed 9 times per 5 repetitions (after a warm-up of 5 executions - not taken into consideration by the profiling function) is taking:\n\nin a first round 215,627339 seconds\nin a second round 235,336131 seconds\n\nEach execution calls the functions many times with different, but fixed parameters.\nMaybe this variability could depend on CPU loads of the test machine (a cloud-hosted dedicated one), but I wonder if such a variability (almost 10%) could depend somehow on cython or a lack of optimization (I already use hints on division, bounds check, wrap-around, ...).\nAny idea on how to take reliable metrics?","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":168,"Q_Id":52992993,"Users Score":0,"Answer":"I'm not a performance expert, but from my understanding the thing you should be measuring would be the average time it takes per execution, not the cumulative time? Other than that, is your function doing anything like reading from disk and\/or making network requests?","Q_Score":1,"Tags":"python,performance,cython","A_Id":52993380,"CreationDate":"2018-10-25T15:26:00.000","Title":"Highly variable execution times in Cython functions","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have a performance measurement issue while executing a migration to Cython from C-compiled functions (through scipy.weave) called from a Python engine.\nThe new cython functions profiled end-to-end with cProfile (if not necessary, I won't dig down into cython profiling) record highly variable cumulative measurement times.\nE.g. 
the cumulative time of a cython function executed 9 times per 5 repetitions (after a warm-up of 5 executions - not taken into consideration by the profiling function) is taking:\n\nin a first round 215,627339 seconds\nin a second round 235,336131 seconds\n\nEach execution calls the functions many times with different, but fixed parameters.\nMaybe this variability could depend on CPU loads of the test machine (a cloud-hosted dedicated one), but I wonder if such a variability (almost 10%) could depend somehow on cython or a lack of optimization (I already use hints on division, bounds check, wrap-around, ...).\nAny idea on how to take reliable metrics?","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":168,"Q_Id":52992993,"Users Score":1,"Answer":"First of all, you need to ensure that your measurement device is capable of measuring what you need: specifically, only the system resources you consume. UNIX's utime is one such command, although even that one still includes swap time. Check the documentation of your profiler: it should have capabilities to measure only the CPU time consumed by the function. If so, then your figures are due to something else.\nOnce you've controlled the external variations, you need to examine the internal. You've said nothing about the complexion of your function. Some (many?) functions have available short-cuts for data-driven trivialities, such as multiplication by 0 or 1. Some are dependent on an overt or covert iteration that varies with the data. You need to analyze the input data with respect to the algorithm.\nOne tool you can use is a line-oriented profiler to detail where the variations originate; seeing which lines take the extra time should help determine where the \"noise\" comes from.","Q_Score":1,"Tags":"python,performance,cython","A_Id":52994620,"CreationDate":"2018-10-25T15:26:00.000","Title":"Highly variable execution times in Cython functions","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have been working on creating a convolutional neural network from scratch, and am a little confused on how to treat kernel size for hidden convolutional layers. For example, say I have an MNIST image as input (28 x 28) and put it through the following layers.\nConvolutional layer with kernel_size = (5,5) with 32 output channels\n\nnew dimension of throughput = (32, 28, 28)\n\nMax Pooling layer with pool_size (2,2) and step (2,2)\n\nnew dimension of throughput = (32, 14, 14)\n\nIf I now want to create a second convolutional layer with kernel size = (5x5) and 64 output channels, how do I proceed? Does this mean that I only need two new filters (2 x 32 existing channels) or does the kernel size change to be (32 x 5 x 5) since there are already 32 input channels? \nSince the initial input was a 2D image, I do not know how to conduct convolution for the hidden layer since the input is now 3 dimensional (32 x 14 x 14).","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1046,"Q_Id":52997810,"Users Score":0,"Answer":"You need 64 kernels, each with the size of (32,5,5). 
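A hedged Keras sketch confirming the arithmetic; Keras stores the kernel as (height, width, in_channels, out_channels), so the count matches the (64 x 32 x 5 x 5) stated below, just in a different axis order:

```python
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D

m = Sequential([
    Conv2D(32, (5, 5), padding="same", input_shape=(28, 28, 1)),
    MaxPooling2D((2, 2)),
    Conv2D(64, (5, 5), padding="same"),
])
print(m.layers[2].get_weights()[0].shape)  # (5, 5, 32, 64)
```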
\nThe depth (#channels) of the kernels, 32 in this case (or 3 for an RGB image, 1 for grayscale, etc.), should always match the input depth.\ne.g. if you have a 3x3 kernel like this: [-1 0 1; -2 0 2; -1 0 1] and now you want to convolve it with an input of depth N (i.e. N channels), you just copy this 3x3 kernel N times along the 3rd dimension. The following math is just like the 1-channel case: after multiplying the kernel values with the input values under the kernel window, you sum over all values in all N channels and get the value of just 1 entry or pixel. So what you get as output in the end is a matrix with 1 channel :) How much depth do you want your matrix for the next layer to have? That's the number of kernels you should apply. Hence in your case it would be a kernel with this size (64 x 32 x 5 x 5), which is actually 64 kernels with 32 channels each and, in this hand-built example, the same 5x5 values in all channels.","Q_Score":0,"Tags":"python,tensorflow,neural-network,conv-neural-network,convolution","A_Id":52998511,"CreationDate":"2018-10-25T20:43:00.000","Title":"Kernel size change in convolutional neural networks","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I'm trying to run a program with pynput. I tried installing it through the terminal on Mac with pip. However, it still says it's unresolved in my IDE, PyCharm. Does anyone have any idea of how to install this?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":219,"Q_Id":52998495,"Users Score":0,"Answer":"I have three theories, but first: make sure it is installed by running python -c \"import pynput\" \n\nJetBrains' IDEs typically do not scan for package updates, so try restarting the IDE.\nJetBrains' IDEs might configure a python environment for you; this might cause you to have to manually import it in your run configuration.\nYou have two python versions installed and you installed the package on a different version than the one you run the script on.\n\nI think either 1 or 3 is the most likely.","Q_Score":0,"Tags":"python,pycharm","A_Id":52998994,"CreationDate":"2018-10-25T21:40:00.000","Title":"Python: I can not get pynput to install","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I can get the dimensions of tensors at graph construction time via manually printing shapes of tensors (tf.shape()), but how do I get the shape of these tensors at session runtime? \nThe reason that I want the shape of tensors at runtime is that at graph construction time the shape of some tensors comes out as (?,8), and I cannot deduce the first dimension then.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":885,"Q_Id":53003231,"Users Score":0,"Answer":"You have to make the tensors an output of the graph. 
For example, if showme_tensor is the tensor you want to print, just run the graph like that : \n_showme_tensor = sess.run(showme_tensor) \nand then you can just print the output as you print a list. If you have different tensors to print, you can just add them like that :\n_showme_tensor_1, _showme_tensor_2 = sess.run([showme_tensor_1, showme_tensor_2])","Q_Score":0,"Tags":"python,tensorflow","A_Id":53003804,"CreationDate":"2018-10-26T07:04:00.000","Title":"How to get the dimension of tensors at runtime?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have a pandas dataframe result which stores a result obtained from a sql query. I want to paste this result onto the chart backend of a specified chart in the selected presentation. Any idea how to do this?\nP.S. The presentation is loaded using the module python-pptx","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1226,"Q_Id":53021158,"Users Score":0,"Answer":"you will need to read a bit about python-pptx.\nYou need chart's index and slide index of the chart. Once you know them\nget your chart object like this->\nchart = presentation.slides[slide_index].shapes[shape_index].chart\nreplacing data\nchart.replace_data(new_chart_data)\nreset_chart_data_labels(chart)\nthen when you save your presentation it will have updated the data.\nusually, I uniquely name all my slides and charts in a template and then I have a function that will get me the chart's index and slide's index. (basically, I iterate through all slides, all shapes, and find a match for my named chart).\nHere is a screenshot where I name a chart->[![screenshot][1]][1]. Naming slides is a bit more tricky and I will not delve into that but all you need is slide_index just count the slides 0 based and then you have the slide's index.\n[1]: https:\/\/i.stack.imgur.com\/aFQwb.png","Q_Score":2,"Tags":"python,python-3.x,pandas,powerpoint","A_Id":69335685,"CreationDate":"2018-10-27T10:53:00.000","Title":"python - pandas dataframe to powerpoint chart backend","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"What is the recommended way to run Flask app (e.g. via Gunicorn?) and how to make it up and running automatically after linux server (redhat) restart?\nThanks","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":636,"Q_Id":53092810,"Users Score":0,"Answer":"have you looked at supervisord? 
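For illustration, a minimal supervisord program entry for a gunicorn-served Flask app might look like this sketch (all paths and names are hypothetical):

```ini
[program:flaskapp]
command=/opt/venv/bin/gunicorn -w 4 -b 127.0.0.1:8000 app:app
directory=/opt/flaskapp
autostart=true
autorestart=true
stderr_logfile=/var/log/flaskapp.err.log
```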
it works reasonably well and handles restarting processes automatically if they fail, as well as looking after error logs nicely","Q_Score":0,"Tags":"python,linux,flask,redhat","A_Id":53092907,"CreationDate":"2018-10-31T22:26:00.000","Title":"How to make Flask app up and running after server restart?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am currently working on a real time face detection project.\nWhat I have done is that I capture the frame using cv2, do detection and then show the result using cv2.imshow(), which results in a low fps. \nI want a high fps video showing on the screen without lag and a low fps detection bounding box overlay.\nIs there a solution to show the real time video stream (with the last detection result bounding box), and once a new detection is finished, show the new bounding box, without the background being delayed by the detection function?\nAny help is appreciated!\nThanks!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":413,"Q_Id":53094695,"Users Score":0,"Answer":"A common approach would be to create a flag that allows the detection algorithm to only run once every couple of frames and save the predicted regions of interest to a list, whilst creating bounding boxes for every frame.\nSo for example you have a face detection algorithm: process every 15th frame to detect faces, but in every frame create a bounding box from the predictions, even though the predictions only get updated every 15 frames.\nAnother approach could be to add an object tracking layer. Run your heavy algorithm to find the ROIs and then use an object tracking library to hold on to them till the next time it runs the detection algorithm.\nHope this made sense.","Q_Score":0,"Tags":"python,cv2","A_Id":53094747,"CreationDate":"2018-11-01T03:08:00.000","Title":"cv2 show video stream & add overlay after another function finishes","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have a script that uploads files to Google Drive. I want to upload python files. I can do it manually and have it keep the file as .py correctly (and it's previewable), but no matter what mimetypes I try, I can't get my program to upload it correctly. It can upload the file as a .txt or as something GDrive can't recognize, but not as a .py file. 
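For reference, a hedged sketch of such an upload with the Drive v3 Python client; 'text/x-python' is a commonly used (not officially documented) mimetype for .py files, and `service` is assumed to come from googleapiclient.discovery.build("drive", "v3", credentials=...):

```python
from googleapiclient.http import MediaFileUpload

media = MediaFileUpload("script.py", mimetype="text/x-python")
created = service.files().create(
    body={"name": "script.py"},
    media_body=media,
    fields="id",
).execute()
print(created["id"])
```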
I can't find an explicit mimetype for it (I found a reference for text\/x-script.python but it doesn't work as an out mimetype).\nDoes anyone know how to correctly upload a .py file to Google Drive using REST?","AnswerCount":2,"Available Count":1,"Score":-0.0996679946,"is_accepted":false,"ViewCount":690,"Q_Id":53096788,"Users Score":-1,"Answer":"Also this is a valid Python mimetype: text\/x-script.python","Q_Score":2,"Tags":"python,rest,google-drive-api,mime-types","A_Id":71426330,"CreationDate":"2018-11-01T07:22:00.000","Title":"What Is the Correct Mimetype (in and out) for a .Py File for Google Drive?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"So I am running Python 3.6.5 on a school computer where most things are heavily restricted, and I can only use Python on drive D. I cannot use batch either. I had Python 2.7 on it last year until I deleted all the files and installed Python 3.6.5; after that I couldn't double click on a .py file to open it, as it said to continue using E:\\Python27\\python(2.7).exe. I had the old Python off a USB, which is why it asks this, but now I would like to change that path to the new Python file, so how would I do that in Windows?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":39,"Q_Id":53098459,"Users Score":0,"Answer":"Just open your Python IDE and open the file manually.","Q_Score":0,"Tags":"python,windows,directory","A_Id":53098745,"CreationDate":"2018-11-01T09:31:00.000","Title":"Running a python file in windows after removing old python files","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I want to do a groupby of my MODELS by CITYS while keeping all the columns, where I can print the percentage of each MODELS IN THIS CITY. \nI put my dataframe in the PHOTO below.\nAnd I have written this code but I don't know how to proceed:\nfor name,group in d_copy.groupby(['CITYS'])['MODELS']:","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":60,"Q_Id":53110240,"Users Score":0,"Answer":"Did you try this: d_copy.groupby(['CITYS','MODELS']).mean() to get the average percentage of a model by city?\nThen if you want to catch the percentages you have to convert it into a DF and select the column: pd.DataFrame(d_copy.groupby(['CITYS','MODELS']).mean())['PERCENTAGE']","Q_Score":0,"Tags":"python,pandas,dataframe,group-by","A_Id":53112148,"CreationDate":"2018-11-01T22:25:00.000","Title":"GROUPBY with showing all the columns","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am working on a project for a client in which I need to load a lot of data into data studio. I am having trouble getting the deployment to work with my REST API. 
\nThe API has been tested with code locally but I need to know how to make it compatible with the code base in App Scripts. Has anyone else had experience with working around this? The endpoint is a Python Flask application. \nAlso, is there a limit on the amount of data that you can dump in a single response to the Data Studio? As a solution to my needs(needing to be able to load data for 300+ accounts) I have created a program that caches the data needed from each account and returns the whole payload at once. There are a lot of entries, so I was wondering if they had a limit to what can be uploaded at once. \nThank you in advance","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":146,"Q_Id":53128705,"Users Score":1,"Answer":"I found the issue, it was a simple case of forgetting to add the url to the whitelist.","Q_Score":0,"Tags":"javascript,python,rest,google-apps-script,google-data-studio","A_Id":53149427,"CreationDate":"2018-11-03T05:34:00.000","Title":"Google Data Studio Connector and App Scripts","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"Is it possible to have a multi-line text entry field with drop down options?\nI currently have a GUI with a multi-line Text widget where the user writes some comments, but I would like to have some pre-set options for these comments that the user can hit a drop-down button to select from.\nAs far as I can tell, the Combobox widget does not allow changing the height of the text-entry field, so it is effectively limited to one line (expanding the width arbitrarily is not an option). Therefore, what I think I need to do is sub-class the Text widget and somehow add functionality for a drop down to show these (potentially truncated) pre-set options. \nI foresee a number of challenges with this route, and wanted to make sure I'm not missing anything obvious with the existing built-in widgets that could do what I need.","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":900,"Q_Id":53132987,"Users Score":1,"Answer":"I don't think you are missing anything. Note that ttk.Combobox is a composite widget. It subclasses ttk.Entry and has ttk.Listbox attached.\nTo make multiline equivalent, subclass Text. as you suggested. Perhaps call it ComboText. Attach either a frame with multiple read-only Texts, or a Text with multiple entries, each with a separate tag. Pick a method to open the combotext and methods to close it, with or without copying a selection into the main text. 
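A bare-bones sketch of that ComboText idea, assuming plain tkinter (all names are my own):

```python
import tkinter as tk

class ComboText(tk.Frame):
    """A multi-line Text with a drop-down of preset comments."""
    def __init__(self, master, presets):
        super().__init__(master)
        self.presets = presets
        self.text = tk.Text(self, height=4, width=40)
        self.text.pack(side="left", fill="both", expand=True)
        tk.Button(self, text="v", command=self.open_list).pack(side="right", anchor="n")

    def open_list(self):
        top = tk.Toplevel(self)
        lb = tk.Listbox(top, width=50)
        for p in self.presets:
            lb.insert("end", p)
        lb.pack()

        def pick(event):
            # Copy the selected preset into the main text, then close.
            self.text.insert("end", lb.get(lb.curselection()[0]) + "\n")
            top.destroy()

        lb.bind("<<ListboxSelect>>", pick)

root = tk.Tk()
ComboText(root, ["Looks good.", "Needs rework.", "See comments inline."]).pack()
root.mainloop()
```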
Write up an initial doc describing how to operate the thing.","Q_Score":1,"Tags":"python,tkinter","A_Id":53133245,"CreationDate":"2018-11-03T15:56:00.000","Title":"Multi-Line Combobox in Tkinter","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"How can I get this to happen in Apache (with python, on Debian if it matters)?\n\nUser submits a form\nBased on the form entries I calculate which html file to serve them (say 0101.html)\nIf 0101.html exists, redirect them directly to 0101.html\nOtherwise, run a script to create 0101.html, then redirect them to it.\n\nThanks!\nEdit: I see there was a vote to close as too broad (though no comment or suggestion). I am just looking for a minimum working example of the Apache configuration files I would need. If you want the concrete way I think it will be done, I think apache just needs to check if 0101.html exists, if so serve it, otherwise run cgi\/myprogram.py with input argument 0101.html. Hope this helps. If not, please suggest how I can make it more specific. Thank you.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":100,"Q_Id":53142553,"Users Score":0,"Answer":"Apache shouldn't care. Just serve a program that looks for the file. If it finds it, it will read it and return the results; if it doesn't find it, it will create the file and then return the result. All can be done with a simple Python file.","Q_Score":1,"Tags":"python,apache,cgi","A_Id":53142803,"CreationDate":"2018-11-04T15:50:00.000","Title":"Apache - if file does not exist, run script to create it, then serve it","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am trying to push some files up to s3 with the AWS CLI and I am running into an error:\nupload failed: ... An HTTP Client raised and unhandled exception: unknown encoding: idna\nI believe this is a Python specific problem but I am not sure how to enable this type of encoding for my python interpreter. I just freshly installed Python 3.6 and have verified that it is being used by powershell and cmd. \n$> python --version\n Python 3.6.7\nIf this isn't a Python specific problem, it might be helpful to know that I also just freshly installed the AWS CLI and have it properly configured. Let me know if there is anything else I am missing to help solve this problem. Thanks.","AnswerCount":5,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":6262,"Q_Id":53144254,"Users Score":8,"Answer":"I had the same problem in Windows.\nAfter investigating the problem, I realized that the problem is in the aws-cli installed using the MSI installer (x64). 
After removing \"AWS Command Line Interface\" from the list of installed programs and installing aws-cli using pip, the problem was solved.\nI also tried installing the x32 MSI installer and the problem did not occur.","Q_Score":12,"Tags":"python-3.x,amazon-s3,aws-cli","A_Id":54337242,"CreationDate":"2018-11-04T18:53:00.000","Title":"AWS CLI upload failed: unknown encoding: idna","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am trying to push some files up to s3 with the AWS CLI and I am running into an error:\nupload failed: ... An HTTP Client raised and unhandled exception: unknown encoding: idna\nI believe this is a Python specific problem but I am not sure how to enable this type of encoding for my python interpreter. I just freshly installed Python 3.6 and have verified that it is being used by powershell and cmd. \n$> python --version\n Python 3.6.7\nIf this isn't a Python specific problem, it might be helpful to know that I also just freshly installed the AWS CLI and have it properly configured. Let me know if there is anything else I am missing to help solve this problem. Thanks.","AnswerCount":5,"Available Count":2,"Score":-0.0798297691,"is_accepted":false,"ViewCount":6262,"Q_Id":53144254,"Users Score":-2,"Answer":"I was facing the same issue. I was running it on Windows Server 2008 R2, trying to upload around 500 files to s3 using the command below.\n\naws s3 cp sourcedir s3bucket --recursive --acl\n bucket-owner-full-control --profile profilename\n\nIt works well and uploads almost all files, but for the initial 2 or 3 files, it used to fail with the error: An HTTP Client raised and unhandled exception: unknown encoding: idna\nThis error was not consistent. A file whose upload failed might succeed if I tried to run it again. It was quite weird.\nI tried things on a trial-and-error basis and it started working well.\nSolution:\n\nUninstalled Python 3 and AWS CLI.\nInstalled Python 2.7.15\nAdded the Python install path to the environment variable PATH. Also added pythoninstalledpath\\scripts to the PATH variable. \nAWS CLI doesn't work well with the MS Installer on Windows Server 2008; instead I used PIP. \n\nCommand: \n\npip install awscli\n\nNote: for pip to work, do not forget to add pythoninstalledpath\\scripts to the PATH variable.\nYou should have the following version:\nCommand: \n\naws --version\n\nOutput: aws-cli\/1.16.72 Python\/2.7.15 Windows\/2008ServerR2 botocore\/1.12.62\nVoila! The error is gone!","Q_Score":12,"Tags":"python-3.x,amazon-s3,aws-cli","A_Id":53693183,"CreationDate":"2018-11-04T18:53:00.000","Title":"AWS CLI upload failed: unknown encoding: idna","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I'm writing a web application where I'm trying to display the connected USB devices. 
I found a Python function that does exactly what I want, but I can't really figure out how to call the function from my HTML code, preferably on the click of a button.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":76,"Q_Id":53152383,"Users Score":0,"Answer":"Simple answer: you can't. The code would have to be run client-side, and no browser would execute potentially malicious code automatically (and not every system has a Python interpreter installed). \nThe only thing you can execute client-side (without the user taking action, e.g. downloading a program or browser add-on) is JavaScript.","Q_Score":0,"Tags":"python,html,python-2.7","A_Id":53152455,"CreationDate":"2018-11-05T10:20:00.000","Title":"Calling a Python function from HTML","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"We have several microservices in Golang and Python; in Golang we are writing finance operations and in Python the online store logic. We want to create one API for our front-end and we don't know how to do it.\nI have read about API gateways, and would it be right if Golang created its own GraphQL server, Python created another one, and they both communicated with a third GraphQL server which would generate the API for our front-end?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":832,"Q_Id":53159897,"Users Score":2,"Answer":"I do not know many details about your services, but a great pattern I successfully used on different projects is, as you mentioned, a GraphQL gateway. \nYou will create one service, which I prefer to create in Node.js, that all requests from the frontend will come through. Then from the GraphQL gateway you will request your microservices. This will be basically your only entry point into the backend system. Requests will be authenticated, and you are able to unify access to your data and also perform some performance optimizations, like implementing data loader's caching and batching to mitigate the N+1 problem. In addition you will reduce the complexity of having multiple APIs and leverage all the GraphQL benefits. \nOn my last project we had 7 different frontends and each was using the same GraphQL gateway, and I was really happy with our approach. There are definitely some downsides, as you need to keep all your frontends and the GraphQL gateway in sync; therefore you need to be more aware of your breaking changes, but it is solvable, for example with the deprecated directive and by performing blue\/green deployments with a Kubernetes cluster. 
The other option is to create the so-called backend for frontend in GraphQL. Right now I do not have enough information about which solution would be best for you. You need to decide based on your frontend needs and business domain, but usually I prefer a GraphQL gateway, as GraphQL has great flexibility and the need to tailor your API to the frontend is covered by GraphQL capabilities. 
Hope it helps David","Q_Score":2,"Tags":"python,api,go,graphql","A_Id":53173290,"CreationDate":"2018-11-05T18:11:00.000","Title":"How to create Graphql server for microservices?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have a hyperspectral image having dimension S * S * L where S*S is the spatial size and L denotes the number of spectral bands.\nNow the shape of my X (image array) is: (1, 145, 145, 200) where 1 is the number of examples, 145 is the length and width of the image and 200 is no. of channels of the image.\nI want to input this small windows of this image (having dimension like W * W * L; W < S) into a 3D CNN, but for that, I need to have 5 dimensions in the following format: (batch, length, width, depth, channels).\nIt seems to me I am missing one of the spatial dimensions, how do I convert my image array into a 5-dimensional array without losing any information?\nI am using python and Keras for the above.","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":138,"Q_Id":53159930,"Users Score":1,"Answer":"If you want to convolve along the dimension of your channels, you should add a singleton dimension in the position of channel. If you don't want to convolve along the dimension of your channels, you should use a 2D CNN.","Q_Score":1,"Tags":"python,keras,conv-neural-network","A_Id":53159976,"CreationDate":"2018-11-05T18:14:00.000","Title":"What should be the 5th dimension for the input to 3D-CNN while working with hyper-spectral images?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have a hyperspectral image having dimension S * S * L where S*S is the spatial size and L denotes the number of spectral bands.\nNow the shape of my X (image array) is: (1, 145, 145, 200) where 1 is the number of examples, 145 is the length and width of the image and 200 is no. of channels of the image.\nI want to input this small windows of this image (having dimension like W * W * L; W < S) into a 3D CNN, but for that, I need to have 5 dimensions in the following format: (batch, length, width, depth, channels).\nIt seems to me I am missing one of the spatial dimensions, how do I convert my image array into a 5-dimensional array without losing any information?\nI am using python and Keras for the above.","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":138,"Q_Id":53159930,"Users Score":1,"Answer":"What you want is a 2D CNN, not a 3D one. 
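For concreteness, a numpy sketch of the two options given in these answers: keep the array as-is for a 2D CNN with 200 channels, or add a singleton channel dimension to feed a 3D CNN:

```python
import numpy as np

X = np.zeros((1, 145, 145, 200))  # (batch, length, width, bands)

# Option A (2D CNN): X is already (batch, height, width, channels).
# Option B (3D CNN): add a singleton channel axis so the bands become "depth".
X5 = np.expand_dims(X, axis=-1)
print(X5.shape)  # (1, 145, 145, 200, 1) = (batch, length, width, depth, channels)
```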
A 2D CNN already supports multiple channels, so you should have no problem using it with a hyperspectral image.","Q_Score":1,"Tags":"python,keras,conv-neural-network","A_Id":53163769,"CreationDate":"2018-11-05T18:14:00.000","Title":"What should be the 5th dimension for the input to 3D-CNN while working with hyper-spectral images?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I need to model a four generational family tree starting with a couple. After that if I input a name of a person and a relation like 'brother' or 'sister' or 'parent' my code should output the person's brothers or sisters or parents. I have a fair bit of knowledge of python and self taught in DSA. I think I should model the data as a dictionary and code for a tree DS with two root nodes(i.e, the first couple). But I am not sure how to start. I just need to know how to start modelling the family tree and the direction of how to proceed to code. Thank you in advance!","AnswerCount":1,"Available Count":1,"Score":0.761594156,"is_accepted":false,"ViewCount":6151,"Q_Id":53166322,"Users Score":5,"Answer":"There's plenty of ways to skin a cat, but I'd suggest to create:\n\nA Person class which holds relevant data about the individual (gender) and direct relationship data (parents, spouse, children).\nA dictionary mapping names to Person elements.\n\nThat should allow you to answer all of the necessary questions, and it's flexible enough to handle all kinds of family trees (including non-tree-shaped ones).","Q_Score":0,"Tags":"python,algorithm,family-tree","A_Id":53166406,"CreationDate":"2018-11-06T05:41:00.000","Title":"Family tree in Python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am really new to Tensorflow as well as gaussian mixture model.\nI have recently used tensorflow.contrib.distribution.MixtureSameFamily class for predicting probability density function which is derived from gaussian mixture of 4 components.\nWhen I plotted the predicted density function using \"prob()\" function as Tensorflow tutorial explains, I found the plotted pdf with only one mode. I expected to see 4 modes as the mixture components are 4.\nI would like to ask whether Tensorflow uses any global mode predicting algorithm in their MixtureSameFamily class. If not, I would also like to know how MixtureSameFamily class forms the pdf with statistical values. \nThank you very much.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":339,"Q_Id":53167161,"Users Score":0,"Answer":"I found an answer for above question thanks to my collegue. 
\nThe 4 components of the gaussian mixture had very similar means, so the mixture looks as if it has only one mode.\nIf I pass four clearly different values as means to the MixtureSameFamily class, I can get a plot of a gaussian mixture with 4 different modes.\nThank you very much for reading this.","Q_Score":0,"Tags":"python,tensorflow,gmm","A_Id":53184428,"CreationDate":"2018-11-06T07:03:00.000","Title":"Tensorflow MixtureSameFamily and gaussian mixture model","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":1,"ERRORS":0,"REVIEW":0},{"Question":"I have installed the pylint plugin and restarted IntelliJ IDEA. It is NOT an external tool (so please avoid providing answers on running it as an external tool, as I know how to).\nHowever I have no 'pylint' in the tool menu or the code menu. \nIs it invoked by running 'Analyze'? Or is there a way to run the pylint plugin on .py files?","AnswerCount":1,"Available Count":1,"Score":0.761594156,"is_accepted":false,"ViewCount":4470,"Q_Id":53183669,"Users Score":5,"Answer":"This is for the latest IntelliJ IDEA version 2018.3.5 (Community Edition):\n\nType \"Command ,\" or click \"IntelliJ IDEA -> Preferences...\"\nFrom the list on the left of the popped up window select \"Plugins\"\nMake sure that on the right top the first tab \"Marketplace\" is picked if it's not\nSearch for \"Pylint\" and when the item is found, click the green button \"Install\" associated with the found item\n\nThe plugin should then be installed properly.\nOne can then turn on\/off the real-time Pylint scan via the same window by navigating in the list on the left: \"Editor -> Inspections\", then in the list on the right unfolding \"Pylint\" and finally checking\/unchecking the corresponding checkbox on the right of the unfolded item.\nOne can also in the same window go to the very last top-level item within the list on the left named \"Other Settings\" and unfold it.\nWithin it there's an item called \"Pylint\", click on it.\nOn the top right there should be a button \"Test\", click on it.\nIf in a few seconds to the left of the \"Test\" text there appears a green checkmark, then Pylint is installed correctly.\nFinally, to access the actual Pylint window, click \"View\"->\"Tool Windows\"->\"Pylint\"!\nEnjoy!","Q_Score":4,"Tags":"python,intellij-idea,pylint","A_Id":54933295,"CreationDate":"2018-11-07T04:43:00.000","Title":"How to run pylint plugin in Intellij IDEA?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am working on NLP using python and nltk. \nI was wondering whether there is any dataset which has bags of words showing keywords relating to emotions such as happy, joy, anger, sadness, etc.\nFrom what I dug up in the nltk corpus, I see there are some sentiment analysis corpora which contain positive and negative reviews, which don't exactly relate to keywords showing emotions.\nIs there any way I could build my own dictionary containing words which show emotion for this purpose? 
If so, how do I do it, and is there any collection of such words?\nAny help would be greatly appreciated.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":822,"Q_Id":53200934,"Users Score":0,"Answer":"I'm not aware of any dataset that associates sentiments with keywords, but you can easily build one starting from a generic sentiment analysis dataset.\n1) Clean the datasets of the stopwords and all the terms that you don't want to associate with a sentiment. \n2) Compute the count of each word in the two sentiment classes and normalize it. In this way you will associate with each word a probability of belonging to a class. Let's suppose that the word \"love\" appears 300 times in the positive sentences and 150 times in the negative sentences. Normalizing, the word \"love\" belongs to the positive class with a probability of 66% (300\/(150+300)) and to the negative one with 33%.\n3) In order to make the dictionary more robust to borderline terms, you can set a threshold and consider neutral all the words whose max probability is lower than the threshold. \nThis is an easy approach to building the dictionary that you are looking for. You could use a more sophisticated approach such as Term Frequency-Inverse Document Frequency.","Q_Score":0,"Tags":"python,nlp,nltk","A_Id":53211031,"CreationDate":"2018-11-08T02:59:00.000","Title":"nltk bags of words showing emotions","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am working on a project using AWS ECS. I want to use Celery as a distributed task queue. The Celery Worker can be built as the EC2 type, but because of the large amount of time that the instance is in the idle state, I think it would be more cost-effective for AWS Fargate to run the job and quit immediately.\nDo you have suggestions on how to use the Celery Worker efficiently in the AWS cloud?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":5029,"Q_Id":53218715,"Users Score":6,"Answer":"Fargate launch type is going to take longer to spin up than EC2 launch type, because AWS is doing all the \"host things\" for you when you start the task, including the notoriously slow attaching of an ENI, and likely downloading the image from a Docker repo. Right now there's no contest: EC2 launch type is faster every time.\nSo it really depends on the type of work you want the workers to do. You can expect a new Fargate task to take a few minutes to enter a RUNNING state for the aforementioned reasons. EC2 launch, on the other hand, because the ENI is already in place on your host and the image is already downloaded (at best) or mostly downloaded (likely worst), will move from PENDING to RUNNING very quickly.\n\nUse EC2 launch type for steady workloads, use Fargate launch type for burst capacity\nThis is the current prevailing wisdom, often discussed as a cost factor because Fargate can't take advantage of the typical EC2 cost savings mechanisms like reserved instances and spot pricing. 
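A minimal sketch of the count-and-normalise recipe from the sentiment-dictionary answer above, assuming you already have tokenised positive and negative sentences; the 0.6 threshold is an arbitrary choice to tune:

from collections import Counter

def build_emotion_lexicon(pos_tokens, neg_tokens, threshold=0.6):
    pos_counts, neg_counts = Counter(pos_tokens), Counter(neg_tokens)
    lexicon = {}
    for word in set(pos_counts) | set(neg_counts):
        p, n = pos_counts[word], neg_counts[word]
        p_pos = p / (p + n)              # e.g. 300/(300+150) = 0.66 for "love"
        if p_pos >= threshold:
            lexicon[word] = "positive"
        elif (1 - p_pos) >= threshold:
            lexicon[word] = "negative"
        else:
            lexicon[word] = "neutral"    # borderline terms below the threshold
    return lexicon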
It's expensive to run Fargate all the time, compared to EC2.\nTo be clear, it's perfectly fine to run 100% in Fargate (we do), but you have to be willing to accept the downsides of doing that - slower scaling and cost.\nNote you can run both launch types in the same cluster. Clusters are logical anyway, just a way to organize your resources.\n\nExample cluster\nThis example shows a static EC2 launch type service running 4 celery tasks. The number of tasks, specs, instance size and all that doesn't really matter; do it up however you like. The important thing is: the EC2 launch type service doesn't need to scale; the Fargate launch type service is able to scale from nothing running (during periods where there's little or no work to do) to as many workers as you can handle, based on your scaling rules.\nEC2 launch type Celery service\nRunning 1 EC2 launch type t3.medium (2vcpu\/4GB).\nMin tasks: 2, Desired: 4, Max tasks: 4\nRunning 4 celery tasks at 512\/1024 in this EC2 launch type.\nNo scaling policies\nFargate launch type Celery service\nMin tasks: 0, Desired: (x), Max tasks: 32\nRunning (x) celery tasks (same task def as EC2 launch type) at 512\/1024\nAdd scaling policies to this service","Q_Score":5,"Tags":"python,amazon-web-services,celery,amazon-ecs,aws-fargate","A_Id":53268305,"CreationDate":"2018-11-09T01:48:00.000","Title":"Operating the Celery Worker in the ECS Fargate","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"My remote MySQL database and local MySQL database have the same table structure, and both the remote and local MySQL databases use the utf-8 charset.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":254,"Q_Id":53221361,"Users Score":0,"Answer":"You'd better merge the values into the SQL template string and print it, to make sure the SQL is correct.","Q_Score":0,"Tags":"python,mysql,sql,pymysql","A_Id":53221802,"CreationDate":"2018-11-09T07:20:00.000","Title":"how do I insert some rows that I select from remote MySQL database to my local MySQL database","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have a Python application that simulates the behaviour of a system, let's say a car.\nThe application defines quite a large set of variables, some corresponding to real-world parameters (the remaining fuel volume, the car speed, etc.) and others related to the simulator's internal mechanics which are of no interest to the user.\nEverything works fine, but currently the user can have no interaction with the simulation whatsoever during its execution: she just sets simulation parameters, launches the simulation, and waits for its termination.\nI'd like the user (i.e. 
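Expanding on the MySQL answer above, here is a hedged sketch of the remote-select/local-insert flow with pymysql; the hosts, credentials and table are placeholders, and the parameterised queries sidestep the string-merging mistakes the answer warns about:

import pymysql

remote = pymysql.connect(host="remote.example.com", user="user", password="pw",
                         db="mydb", charset="utf8")
local = pymysql.connect(host="localhost", user="user", password="pw",
                        db="mydb", charset="utf8")

with remote.cursor() as rc, local.cursor() as lc:
    # select the rows to copy from the remote database
    rc.execute("SELECT id, name FROM items WHERE created > %s", ("2018-11-01",))
    rows = rc.fetchall()
    # parameterised bulk insert; pymysql handles quoting and encoding
    lc.executemany("INSERT INTO items (id, name) VALUES (%s, %s)", rows)

local.commit()
remote.close()
local.close()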
not the creator of the application) to be able to write Python scripts, outside of the app, that could read\/write the variables associated with the real-world parameters (and only these variables).\nFor instance, at t=23s (this condition I know how to check for), I'd like to execute user script gasLeak.py, which reads the remaining fuel value and sets it to half its current value.\nTo sum up, how is it possible, from a main Python app, to execute user-written Python scripts that can access and modify only a pre-defined subset of the main script's variables? In a perfect world, I'd also like modifications applied to user scripts while the app is running to be taken into account without having to restart said app (something along the lines of reloading a module).","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":21,"Q_Id":53229900,"Users Score":0,"Answer":"Make the user-written scripts read command-line arguments and print to stdout. Then you can call them with the subprocess module with the variables they need to know about as arguments and read their responses with subprocess.check_output.","Q_Score":0,"Tags":"python,namespaces,interpreter","A_Id":53229974,"CreationDate":"2018-11-09T16:42:00.000","Title":"Run external Python script that could read\/write only a subset of main app variables","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I'm using the pytest-xdist plugin to run some tests, using @pytest.mark.parametrize to run the same test with different parameters.\nAs part of these tests, I need to open\/close web servers, and the ports are generated at collection time.\nxdist does the test collection on the slaves and they are not synchronised, so how can I guarantee uniqueness for the port generation?\nI can use the same port for each slave but I don't know how to achieve this.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":304,"Q_Id":53234370,"Users Score":0,"Answer":"I figured that I did not give enough information regarding my issue.\nWhat I did was create one parameterized test using @pytest.mark.parametrize; before the test, I collect the list of parameters: the collection queries a web server and receives a list of \"jobs\" to process.\nEach test contains information on a port that it needs to bind to, do some work on and exit. Because the tests are running in parallel, I need to make sure that the ports will be different.\nEventually, I made sure that the job ids will be in the range of 1024-65000 and used those for the ports.","Q_Score":0,"Tags":"python,pytest,xdist,pytest-xdist","A_Id":53282799,"CreationDate":"2018-11-09T23:03:00.000","Title":"pytest-xdist generate random & unique ports for each test","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I think I am looking for something simpler than detecting document boundaries in a photo. I am only trying to flag photos which are mostly of documents rather than just normal scene photos. 
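A small sketch of the subprocess suggestion above; gasLeak.py and the fuel variable come from the question, while the argument/stdout protocol is my own assumption (text=True requires Python 3.7+):

import subprocess

state = {"fuel": 40.0, "speed": 90.0}  # the user-visible subset of variables

def run_user_script(script, state):
    # pass only the exposed variables as command-line arguments
    args = [str(state["fuel"]), str(state["speed"])]
    out = subprocess.check_output(["python", script] + args, text=True)
    state["fuel"] = float(out.strip())  # the script prints the new fuel value

# inside the simulation loop, at t == 23s:
# run_user_script("gasLeak.py", state)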
Is this an easier problem to solve?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":76,"Q_Id":53244536,"Users Score":0,"Answer":"Are the documents mostly white? If so, you could analyse the images for white content above a certain percentage. Generally text documents only have about 10% printed content on them in total.","Q_Score":0,"Tags":"python,image,opencv,image-processing","A_Id":53244565,"CreationDate":"2018-11-10T23:45:00.000","Title":"how to detect if photo is mostly a document?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I\u2019m currently working on a Raspberry Pi\/Django project slightly more complex than I\u2019m used to. (I either do local Raspberry Pi projects or simple Django websites; never the two combined!)\nThe idea is to have two Raspberry Pi\u2019s collecting information, each running a local Python script that takes input from one HDMI feed (I\u2019ve got all that part figured out - I THINK) using image processing. Now I want these two Raspberry Pi\u2019s (that don\u2019t talk to each other) to connect to a backend server that would combine, store (and process) the information gathered by my two Pis.\nI\u2019m expecting each Pi to be working on one frame per second, comparing it to the frame a second earlier (there are only a few different things it is looking out for), isolating any new event, and sending it to the server. I\u2019m therefore expecting no more than a dozen binary timestamped data points per second.\nNow what is the smart way to do it here?\n\nDo I contact the backend every second? Every 10 seconds?\nHow do I make these bulk HttpRequests? Through a POST request? Through a simple text file that I send for the Django backend to process? (I have found some info about \u201cbulk updates\u201d for Django but I\u2019m not sure that covers it entirely)\nHow do I make it robust? How do I make sure that all data was successfully transmitted before deleting the log locally? (If one call fails for a reason, or gets delayed, how do I make sure that the next one compensates for lost info?)\n\nBasically, I\u2019m asking for advice on making an IoT-based project, where a sensor gathers bulk information and wants to send it to a backend server for processing, and on how that archiving process should be designed.\nPS: I expect the image processing part (at one fps) to be fast enough on my Pi Zero (as it is VERY simple); backlog at that level shouldn\u2019t be an issue.\nPPS: I\u2019m using a Django backend (even if it seems a little overkill) \n a\/ because I already know the framework pretty well\n b\/ because I\u2019m expecting to build real-time performance indicators from the combined data points gathered, using Django, and displaying them in (almost) real-time on a webpage.\nThank you very much!","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":1234,"Q_Id":53249599,"Users Score":2,"Answer":"This partly depends on just how resilient you need it to be. 
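A rough sketch of the white-content heuristic from the document-detection answer above; the 200 grey level and 0.7 ratio are assumptions to tune on real photos:

import cv2

def looks_like_document(path, white_ratio=0.7):
    # load as greyscale and measure the fraction of near-white pixels
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    white = (gray > 200).mean()
    return white >= white_ratio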
If you really can't afford for a single update to be lost, I would consider using a message queue such as RabbitMQ - the clients would add things directly to the queue and the server would pop them off in turn, with no need to involve HTTP requests at all.\nOtherwise it would be much simpler to just POST each frame's data in some serialized format (i.e. JSON) and Django would simply deserialize and iterate through the list, saving each entry to the db. This should be fast enough for the rate you describe - I'd expect saving a dozen db entries to take significantly less than half a second - but this still leaves the problem of what to do if things get hung up for some reason. Setting a super-short timeout on the server will help, as would keeping the data to be posted until you have confirmation that it has been saved - and creating unique IDs in the client to ensure that the request is idempotent.","Q_Score":1,"Tags":"python,django,raspberry-pi,iot,bulk-load","A_Id":53249964,"CreationDate":"2018-11-11T14:15:00.000","Title":"Sending data to Django backend from RaspberryPi Sensor (frequency, bulk-update, robustness)","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I know I can access a Dynamics instance from a Python script by using the OData API, but what about the other way around? Is it possible to somehow call a Python script from within Dynamics and possibly even pass arguments?\nWould this require me to use custom js\/c#\/other code within Dynamics?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":337,"Q_Id":53258692,"Users Score":1,"Answer":"You won't be able to natively execute a Python script within Dynamics. \nI would approach this by placing the Python script in a service that can be called via a web service call from Dynamics. You could make the call from form JavaScript or a plugin using C#.","Q_Score":0,"Tags":"python,dynamics-crm","A_Id":53266129,"CreationDate":"2018-11-12T08:56:00.000","Title":"run python from Microsoft Dynamics","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":1},{"Question":"I'm pretty much stuck right now.\nI wrote a parser in python3 using the python-docx library to extract all tables found in an existing .docx and store them in a Python data structure.\nSo far so good. Works as it should.\nNow I have the problem that there are hyperlinks in these tables which I definitely need! Due to the structure (xml underneath) the docx library doesn't catch these. Neither the URL nor the display text is provided. 
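To illustrate the client side of the answer above (JSON batches, short timeouts, unique IDs kept until confirmed), here is a hedged sketch; the URL and payload shape are placeholders:

import uuid
import requests

pending = []  # events kept locally until the server confirms the save

def queue_event(event):
    event["id"] = str(uuid.uuid4())  # unique id makes retried requests idempotent
    pending.append(event)

def flush(url="http://backend.example/api/events/"):
    global pending
    if not pending:
        return
    try:
        resp = requests.post(url, json=pending, timeout=2)  # short timeout
        if resp.status_code == 200:
            pending = []  # only delete the local log after confirmation
    except requests.RequestException:
        pass  # keep the events; the next flush() compensates for the lost call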
I found many people having similar concerns about this, but most didn't seem to have 'just that' dilemma.\nI thought about unpacking the .docx, scanning the _ref document for the corresponding 'rid' and filling the actual data I have with the links found in the _ref xml.\nEither way it seems seriously tedious to do it that way, so I was wondering if there is a more pythonic way to do it, or if somebody has good advice on how to tackle this problem?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":438,"Q_Id":53269311,"Users Score":0,"Answer":"You can extract the links by parsing the xml of the docx file. \nYou can extract all text from the document by using document.element.getiterator().\nIterate over all the xml tags and extract their text. You will get all the missing data which python-docx failed to extract.","Q_Score":0,"Tags":"python,xml,hyperlink,docx","A_Id":57556934,"CreationDate":"2018-11-12T20:04:00.000","Title":"Extracting URL from inside docx tables","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"Using openpyxl, I'm able to read 2 numbers on a sheet, and also able to read their sum by loading the sheet with data_only=True. \nHowever, when I alter the 2 numbers using openpyxl and then try to read the answer using data_only=True, it returns no output. How do I do this?","AnswerCount":3,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":1946,"Q_Id":53271690,"Users Score":1,"Answer":"You can have either the value or the formula in openpyxl. It is precisely to avoid the confusion that this kind of edit could introduce that the library works like this. To evaluate the changed formulae you'll need to load the file in an app like MS Excel or LibreOffice that can evaluate the formulae and store the results.","Q_Score":2,"Tags":"python,excel,openpyxl","A_Id":53276640,"CreationDate":"2018-11-12T23:39:00.000","Title":"openpyxl how to read formula result after editing input data on the sheet? data_only=True gives me a \"None\" result","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I'm using a bit of code that is derived from inception v3 as distributed by the Google folks, but it's now complaining that the queue runners used to read the data are deprecated (tf.train.string_input_producer in image_processing.py, and similar). Apparently I'm supposed to switch to tf.data for this kind of stuff.\nUnfortunately, the documentation on tf.data isn't doing much to relieve my concern that I've got too much data to fit in memory, especially given that I want to batch it in a reusable way, etc. I'm confident that the tf.data stuff can do this; I just don't know how to do it. Can anyone point me to a full example of code that uses tf.data to deal with batches of data that won't all fit in memory? Ideally, it would simply be an updated version of the inception-v3 code, but I'd be happy to try and work with anything. 
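As a complement to the answer above, here is a hedged sketch that pulls the hyperlink targets out of the document part's relationships; doc.part.rels is python-docx internals rather than documented public API, so treat this as an assumption to verify against your library version:

from docx import Document
from docx.opc.constants import RELATIONSHIP_TYPE as RT

doc = Document("tables.docx")
links = {}
for rel_id, rel in doc.part.rels.items():
    if rel.reltype == RT.HYPERLINK:
        # rId -> URL; match these ids against the w:hyperlink r:id attributes
        links[rel_id] = rel.target_ref
print(links)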
Thanks!","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":58,"Q_Id":53272508,"Users Score":1,"Answer":"Well, I eventually got this working. The various documents referenced in the comment on my question had what I needed, and I gradually figured out which parameters passed to queuerunners corresponded to which parameters in the tf.data stuff.\nThere was one gotcha that took a while for me to sort out. In the inception implementation, the number of examples used for validation is rounded up to be a multiple of the batch size; presumably the validation set is reshuffled and some examples are used more than once. (This does not strike me as great practice, but generally the number of validation instances is way larger than the batch size, so only a relative few are double counted.)\nIn the tf.data stuff, enabling shuffling and reuse is a separate thing and I didn't do it on the validation data. Then things broke because there weren't enough unique validation instances, and I had to track that down.\nI hope this helps the next person with this issue. Unfortunately, my code has drifted quite far from Inception v3 and I doubt that it would be helpful for me to post my modification. Thanks!","Q_Score":1,"Tags":"python,tensorflow","A_Id":53452436,"CreationDate":"2018-11-13T01:35:00.000","Title":"inception v3 using tf.data?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":1,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"Hi I was wondering how I could format a large text file by adding line breaks after certain characters or words. For instance, everytime a comma was in the paragraph could I use python to make this output an extra linebreak.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":35,"Q_Id":53289151,"Users Score":0,"Answer":"you can use the ''.replace() method like so:\n'roses can be blue, red, white'.replace(',' , ',\\n') gives \n'roses can be blue,\\n red,\\n white' efectively inserting '\\n' after every ,","Q_Score":0,"Tags":"python,string,text","A_Id":53289189,"CreationDate":"2018-11-13T20:39:00.000","Title":"how to reformat a text paragrath using python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"How do i make python listen for changes to a folder on my desktop, and every time a file was added, the program would read the file name and categorize it it based on the extension?\nThis is a part of a more detailed program but I don't know how to get started on this part. This part of the program detects when the user drags a file into a folder on his\/her desktop and then moves that file to a different location based on the file extension.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":123,"Q_Id":53310469,"Users Score":0,"Answer":"Periodically read the files in the folder and compare to a set of files remaining after the last execution of your script. Use os.listdir() and isfile().\nRead the extension of new files and copy them to a directory based on internal rules. 
This is a simple string slice, e.g., filename[-3:] for 3-character extensions.\nRemove moved files from your set of last results. Use os.rename() or shutil.move().\nSleep until next execution is scheduled.","Q_Score":0,"Tags":"python,python-2.7","A_Id":53310614,"CreationDate":"2018-11-14T23:48:00.000","Title":"Python detecting different extensions on files","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"This app is working fine on heroku but how do i configure it on godaddy using custom domain.\nWhen i navigate to custom domain, it redirects to mcc.godaddy.com.\nWhat all settings need to be changed.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":121,"Q_Id":53311456,"Users Score":0,"Answer":"The solution is to add a correct CNAME record and wait till the value you entered has propagated.\nGo to DNS management and make following changes:\nIn the 'Host' field enter 'www' and in 'Points to' field add 'yourappname.herokuapp.com'","Q_Score":0,"Tags":"python,heroku,dns,hosting,custom-domain","A_Id":53688750,"CreationDate":"2018-11-15T02:12:00.000","Title":"How do I configure settings for my Python Flask app on GoDaddy","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am using Python to try and do some macroeconomic analysis of different stock markets. I was wondering about how to properly compare indices of varying sizes. For instance, the Dow Jones is around 25,000 on the y-axis, while the Russel 2000 is only around 1,500. I know that the website tradingview makes it possible to compare these two in their online charter. What it does is shrink\/enlarge a background chart so that it matches the other on a new y-axis. Is there some statistical method where I can do this same thing in Python?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":318,"Q_Id":53312182,"Users Score":0,"Answer":"I know that the website tradingview makes it possible to compare these two in their online charter. What it does is shrink\/enlarge a background chart so that it matches the other on a new y-axis. \n\nThese websites rescale them by fixing the initial starting points for both indices at, say, 100. I.e. if Dow is 25000 points and S&P is 2500, then Dow is divided by 250 to get to 100 initially and S&P by 25. Then you have two indices that start at 100 and you then can compare them side by side. 
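A Python 3 sketch of the poll-compare-move loop described in the folder-watching answer above; the folder locations and extension routes are illustrative:

import os
import shutil
import time

WATCHED = os.path.expanduser("~/Desktop/inbox")
ROUTES = {".pdf": "~/Documents/pdfs", ".jpg": "~/Pictures"}

seen = set(os.listdir(WATCHED))
while True:
    current = set(os.listdir(WATCHED))
    for name in current - seen:  # files added since the last pass
        ext = os.path.splitext(name)[1].lower()  # safer than filename[-3:]
        dest = os.path.expanduser(ROUTES.get(ext, "~/Desktop/other"))
        os.makedirs(dest, exist_ok=True)
        shutil.move(os.path.join(WATCHED, name), dest)
    seen = set(os.listdir(WATCHED))
    time.sleep(5)  # sleep until the next scheduled pass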
\nThe other method (works good only if you have two series) - is to set y-axis on the right hand side for one series, and on the left hand side for the other one.","Q_Score":0,"Tags":"python,plot,statistics,correlation","A_Id":53312488,"CreationDate":"2018-11-15T03:51:00.000","Title":"Compare stock indices of different sizes Python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have data with the shape of (3000, 4), the features are (product, store, week, quantity). Quantity is the target. \nSo I want to reconstruct this matrix to a tensor, without blending the corresponding quantities.\nFor example, if there are 30 product, 20 stores and 5 weeks, the shape of the tensor should be (5, 20, 30), with the corresponding quantity. Because there won't be an entry like (store A, product X, week 3) twice in entire data, so every store x product x week pair should have one corresponding quantity.\nAny suggestions about how to achieve this, or there is any logical error? Thanks.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":626,"Q_Id":53313913,"Users Score":0,"Answer":"You can first go through each of your first three columns and count the number of different products, stores and weeks that you have. This will give you the shape of your new array, which you can create using numpy. Importantly now, you need to create a conversion matrix for each category. For example, if product is 'XXX', then you want to know to which row of the first dimension (as product is the first dimension of your array) 'XXX' corresponds; same idea for store and week. Once you have all of this, you can simply iterate through all lines of your existing array and assign the value of quantity to the correct location inside your new array based on the indices stored in your conversion matrices for each value of product, store and week. As you said, it makes sense because there is a one-to-one correspondence.","Q_Score":1,"Tags":"python,tensorflow","A_Id":53314237,"CreationDate":"2018-11-15T06:53:00.000","Title":"How to convert 2D matrix to 3D tensor without blending corresponding entries?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I've been having an issue with Anaconda, on two separate Windows machines.\nI've downloaded and installed Anaconda. I know the commands, how to install libraries, I've even installed tensorflow-gpu (which works). I also use Jupyter notebook and I'm quite familiar with it by this point.\nThe issue:\nFor some reason, when I create new environments and install libraries to that environment... it ALWAYS installs them to (base). Whenever I try to run code in a jupyter notebook that is located in an environment other than (base), it can't find any of the libraries I need... because it's installing them to (base) by default.\nI always ensure that I've activated the correct environment before installing any libraries. But it doesn't seem to make a difference.\nCan anyone help me with this... 
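A numpy sketch of the conversion-matrix idea in the tensor answer above, assuming rows is a (3000, 4) array-like of (product, store, week, quantity) entries:

import numpy as np

def to_tensor(rows):
    products = sorted({r[0] for r in rows})
    stores = sorted({r[1] for r in rows})
    weeks = sorted({r[2] for r in rows})
    # the "conversion matrices": value -> index along each axis
    p_ix = {p: i for i, p in enumerate(products)}
    s_ix = {s: i for i, s in enumerate(stores)}
    w_ix = {w: i for i, w in enumerate(weeks)}
    tensor = np.zeros((len(weeks), len(stores), len(products)))
    for prod, store, week, qty in rows:
        tensor[w_ix[week], s_ix[store], p_ix[prod]] = qty  # one-to-one mapping
    return tensor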
am I doing something wrong?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":225,"Q_Id":53318001,"Users Score":0,"Answer":"Kind of fixed my problem. It has to do with launching Jupyter notebook. \nAfter switching environments via the command prompt... the command 'jupyter notebook' runs Jupyter notebook via the default Python environment, regardless.\nHowever, if I switch environments via Anaconda Navigator and launch Jupyter notebook from there, it works perfectly. \nMaybe I'm missing a command via the prompt?","Q_Score":0,"Tags":"python,anaconda","A_Id":53320989,"CreationDate":"2018-11-15T11:02:00.000","Title":"Installing packages to Anaconda Environments","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I downloaded some PDFs and stored them in a directory. I need to insert them into a MongoDB database with Python code, so how could I do this? I need to store them with three columns (pdf_name, pdf_ganerateDate, FlagOfWork).","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":1972,"Q_Id":53318410,"Users Score":1,"Answer":"You can use GridFS. Please check this url: http:\/\/api.mongodb.com\/python\/current\/examples\/gridfs.html.\nIt will help you store any file in MongoDB and retrieve it. You can save the file metadata in another collection.","Q_Score":0,"Tags":"python,mongodb,insert,store","A_Id":53320112,"CreationDate":"2018-11-15T11:25:00.000","Title":"How Do I store downloaded pdf files to Mongo DB","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"Using pipenv to create a virtual environment in a folder. 
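A hedged sketch of the GridFS suggestion above with pymongo; the database name and folder are placeholders, and the metadata fields mirror the three columns from the question:

import os
import datetime
import gridfs
from pymongo import MongoClient

db = MongoClient()["pdf_store"]
fs = gridfs.GridFS(db)

for fname in os.listdir("downloads"):
    if fname.lower().endswith(".pdf"):
        with open(os.path.join("downloads", fname), "rb") as f:
            # the PDF bytes go into GridFS; the extra fields ride along as metadata
            fs.put(f, filename=fname,
                   metadata={"pdf_ganerateDate": datetime.datetime.utcnow(),
                             "FlagOfWork": False})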
\nHowever, the environment seems to be in the path: \n\n\/Users\/......\/.local\/share\/virtualenvs\/......\n\nAnd when I run the command pipenv run python train.py, I get the error:\n\ncan't open file 'train.py': [Errno 2] No such file or directory\n\nHow do I run a file in the folder where I created the virtual environment?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":5743,"Q_Id":53322703,"Users Score":7,"Answer":"You need to be in the same directory as the file you want to run, then use:\npipenv run python train.py\nNote:\n\nYou may be in the project's main directory while the file you need to run is inside a directory within your project directory\nIf you use Django to create your project, it will create two folders inside each other with the same name, so as a best practice change the top directory's name to 'yourname-project'; then inside the directory 'yourname' run the pipenv run python train.py command","Q_Score":6,"Tags":"python,path,pipenv,virtual-environment","A_Id":53470726,"CreationDate":"2018-11-15T15:28:00.000","Title":"how to use pipenv to run file in current folder","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":1,"REVIEW":0},{"Question":"I am using XGBClassifier to train in Python and there are a handful of categorical variables in my training dataset. Originally, I planned to convert each of them into a few dummies before I throw in my data, but then the feature importance will be calculated for each dummy, not the original categorical ones. Since I also need to order all of my original variables (including numerical + categorical) by importance, I am wondering how to get the importance of my original variables. Is it simply adding up?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1226,"Q_Id":53327334,"Users Score":0,"Answer":"You could probably get by with summing the individual categories' importances into their original, parent category. But, unless these features are high-cardinality, my two cents would be to report them individually. I tend to err on the side of being more explicit with reporting model performance\/importance measures.","Q_Score":0,"Tags":"python,xgboost,categorical-data","A_Id":53327378,"CreationDate":"2018-11-15T20:21:00.000","Title":"xgboost feature importance of categorical variable","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"In the past, I've been using WebJobs to schedule small recurrent tasks that perform a specific background task, e.g., generating a daily summary of user activities. For each task, I've written a console application in C# that was published as an Azure Webjob.\nNow I'd like to execute some Python code daily that is already working in a Docker container. I think I figured out how to get a container running in Azure. Right now, I want to minimize the operation cost since the container will only run for a duration of 5 minutes. 
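A sketch of the summing idea from the XGBoost answer above, assuming the dummies were produced by pd.get_dummies so each column is named like 'originalcol_value' (the separator is an assumption):

import pandas as pd

def grouped_importance(model, feature_names, sep="_"):
    imp = pd.Series(model.feature_importances_, index=feature_names)
    parents = [name.split(sep)[0] for name in imp.index]  # dummy -> parent column
    return imp.groupby(parents).sum().sort_values(ascending=False)

This inherits the caveat in the answer: for high-cardinality categoricals, reporting the dummies individually may be the more honest choice.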
Therefore, I'd like to somehow schedule that my container starts once per day (at 1am) and shuts down after completion. How can I achieve this setup in Azure?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":534,"Q_Id":53327355,"Users Score":4,"Answer":"I'd probably write a scheduled build job on vsts\\whatever to run at 1am daily to launch a container on Azure Container Instances. Container should shutdown on its own when the program exists (so your program has to do that without help from outside).","Q_Score":0,"Tags":"python,azure,docker,scheduled-tasks","A_Id":53327397,"CreationDate":"2018-11-15T20:22:00.000","Title":"How to run a briefly running Docker container on Azure on a daily basis?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I'm fairly new to MongoDB. I need my Python script to query new entries from my Database in real time, but the only way to do this seems to be replica sets, but my Database is not a replica set, or with a Tailable cursor, which is only for capped collections.\nFrom what i understood, a capped collection has a limit, but since i don't know how big my Database is gonna be and for when i'm gonna need to send data there, i am thinking of putting the limit to 3-4 million documents. Would this be possible?.\nHow can i do that?.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":203,"Q_Id":53342146,"Users Score":1,"Answer":"so do you want to increase the size of capped collection ? \nif yes then if you know average document size then you may define size like: \ndb.createCollection(\"sample\", { capped : true, size : 10000000, max : 5000000 } ) here 5000000 is max documents with size limit of 10000000 bytes","Q_Score":0,"Tags":"python,mongodb","A_Id":53372398,"CreationDate":"2018-11-16T16:47:00.000","Title":"MongoDB - how can i set a documents limit to my capped collection?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have wrote an Android library and build an aar file. And I want to write a python program to use the aar library. Is it possible to do that? If so, how to do that? Thanks","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":305,"Q_Id":53347795,"Users Score":0,"Answer":"There is no way to include all dependencies to your aar file. So According to the open source licenses you can add their sources to your project.","Q_Score":3,"Tags":"android,python,aar","A_Id":53348357,"CreationDate":"2018-11-17T02:57:00.000","Title":"Import aar of Android library in Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I just installed Graphene on my Django project and would like to use it also for the back-end, templating. 
So far, I have found only tutorials on how to use it for the front-end, with no mention of the back-end. \n\nShould I suppose that it is not a good idea to use it instead of a SQL database? If yes, then why? Is there a downside in speed in comparison to SQL databases like MySQL?\nWhat's the best option for retrieving the data for templates in Python? I mean, best for performance.\n\nThanks.","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":229,"Q_Id":53351156,"Users Score":2,"Answer":"GraphQL is an API specification. It doesn't specify how data is stored, so it is not a replacement for a database.\nIf you're using GraphQL, you don't use Django templates to specify the GraphQL output, because GraphQL specifies the entire HTTP response from the web service, so this question doesn't make sense.","Q_Score":2,"Tags":"python,django,django-templates,graphql,graphene-python","A_Id":53380130,"CreationDate":"2018-11-17T12:15:00.000","Title":"GraphQL\/Graphene for backend calls in Django's templates","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":1,"ERRORS":0,"REVIEW":1},{"Question":"I am pretty new to neural networks. I am training a network in tensorflow, but the number of positive examples is much, much smaller than the number of negative examples in my dataset (it is a medical dataset). \nSo, I know that the F-score calculated from precision and recall is a good measure of how well the model is trained. \nI have used error functions like cross-entropy loss or MSE before, but they are all based on accuracy calculation (if I am not wrong). But how do I use this F-score as an error function? Is there a tensorflow function for that? Or do I have to create a new one?\nThanks in advance.","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":8429,"Q_Id":53354176,"Users Score":0,"Answer":"The loss value and accuracy are different concepts. The loss value is used for training the NN. However, accuracy and other metrics are used to evaluate the training result.","Q_Score":6,"Tags":"python,tensorflow,loss-function,precision-recall","A_Id":61325048,"CreationDate":"2018-11-17T18:20:00.000","Title":"How to use F-score as error function to train neural networks?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":1,"REVIEW":0},{"Question":"I mounted my drive using this: \nfrom google.colab import drive\ndrive.mount('\/content\/drive\/')\nI have a file inside a folder that I want the path of; how do I determine the path? 
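One common workaround, not claimed by the answer above, is to optimise a differentiable "soft" F1 computed from predicted probabilities; here is a TF 2.x sketch for binary labels:

import tensorflow as tf

def soft_f1_loss(y_true, y_pred, eps=1e-7):
    # y_pred are probabilities in [0, 1]; the counts become "soft" sums
    y_true = tf.cast(y_true, tf.float32)
    tp = tf.reduce_sum(y_true * y_pred)
    fp = tf.reduce_sum((1.0 - y_true) * y_pred)
    fn = tf.reduce_sum(y_true * (1.0 - y_pred))
    f1 = 2.0 * tp / (2.0 * tp + fp + fn + eps)
    return 1.0 - f1  # minimising the loss pushes F1 up

# usable as a custom Keras loss, e.g. model.compile(optimizer="adam", loss=soft_f1_loss)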
\nSay the folder that contains the file is named 'x' inside my drive","AnswerCount":4,"Available Count":2,"Score":1.0,"is_accepted":false,"ViewCount":18640,"Q_Id":53355510,"Users Score":6,"Answer":"The path as a parameter for a function will be \/content\/drive\/My Drive\/x\/the_file, so without a backslash inside My Drive","Q_Score":7,"Tags":"python,google-colaboratory","A_Id":56513444,"CreationDate":"2018-11-17T20:57:00.000","Title":"How to determine file path in Google colab?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I mounted my drive using this: \nfrom google.colab import drive\ndrive.mount('\/content\/drive\/')\nI have a file inside a folder that I want the path of; how do I determine the path? \nSay the folder that contains the file is named 'x' inside my drive","AnswerCount":4,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":18640,"Q_Id":53355510,"Users Score":9,"Answer":"The path will be \/content\/drive\/My\\ Drive\/x\/the_file.","Q_Score":7,"Tags":"python,google-colaboratory","A_Id":53357067,"CreationDate":"2018-11-17T20:57:00.000","Title":"How to determine file path in Google colab?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"Using Windows.\nLearning about virtualenv. Here is my understanding of it and a few questions that I have. Please correct me if my understanding is incorrect. \nvirtualenvs are environments where your pip dependencies and their selected versions are stored for a particular project. A folder is made for your project and inside there are the dependencies. \n\nI was told you would not want to save your .py scripts inside of the virtual ENV; if that's the case, how do I access the virtual env when I want to run that project? Open it up in the command line under source ENV\/bin\/activate then cd my way to where my script is stored?\nDoes running pip freeze create a requirements.txt file in that project folder that is just a .txt copy of the dependencies of that virtual env? \nIf I'm in a second virtualenv, how do I import another virtualenv's requirements? I've been to the documentation but I still don't get it.\n$ env1\/bin\/pip freeze > requirements.txt\n$ env2\/bin\/pip install -r requirements.txt\n\nGuess I'm confused by the \"requirements\" description. Isn't it best practice to always call our requirements file requirements.txt? If that's the case, how does env2 know I want env1's requirements? \nThank you for any info or suggestions. Really appreciate the assistance.\nI created a virtualenv C:\\Users\\admin\\Documents\\Enviorments>virtualenv django_1 \nUsing base prefix 'c:\\\\users\\\\admin\\\\appdata\\\\local\\\\programs\\\\python\\\\python37-32' \nNew python executable in C:\\Users\\admin\\Documents\\Enviorments\\django_1\\Scripts\\python.exe Installing setuptools, pip, wheel...done.\nHow do I activate it? 
source django_1\/bin\/activate doesn't work?\nI've tried: source C:\\Users\\admin\\Documents\\Enviorments\\django_1\/bin\/activate Every time I get: 'source' is not recognized as an internal or external command, operable program or batch file.","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":104,"Q_Id":53356442,"Users Score":0,"Answer":"* disclaimer * I mainly use conda environments instead of virtualenv, but I believe that most of this is the same across both of them and is true in your case.\n\nYou should be able to access your scripts from any environment you are in. If you have virtenvA and virtenvB then you can access your script from inside either of your environments. All you would do is activate one of them and then run python \/path\/to\/my\/script.py, but you need to make sure any dependent libraries are installed. \nCorrect, but for clarity: the requirements file contains a list of the dependencies by name only. It doesn't contain any actual code or packages. You can print out a requirements file but it should just be a list which gives package names and their version numbers, like pandas 1.0.1, numpy 1.0.1, scipy 1.0.1, etc.\nIn the lines of code you have here you would export the dependencies list of env1 and then you would install these dependencies in env2. If env2 was empty then it will now just be a copy of env1; otherwise it will be the same but with all the packages of env1 added, and if it had a different version number of some of the same packages then this would be overwritten.","Q_Score":1,"Tags":"python,python-3.x,virtualenv","A_Id":53356590,"CreationDate":"2018-11-17T23:12:00.000","Title":"virtualenv - Birds Eye View of Understanding","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"Using Windows.\nLearning about virtualenv. Here is my understanding of it and a few questions that I have. Please correct me if my understanding is incorrect. \nvirtualenvs are environments where your pip dependencies and their selected versions are stored for a particular project. A folder is made for your project and inside there are the dependencies. \n\nI was told you would not want to save your .py scripts inside of the virtual ENV; if that's the case, how do I access the virtual env when I want to run that project? Open it up in the command line under source ENV\/bin\/activate then cd my way to where my script is stored?\nDoes running pip freeze create a requirements.txt file in that project folder that is just a .txt copy of the dependencies of that virtual env? \nIf I'm in a second virtualenv, how do I import another virtualenv's requirements? I've been to the documentation but I still don't get it.\n$ env1\/bin\/pip freeze > requirements.txt\n$ env2\/bin\/pip install -r requirements.txt\n\nGuess I'm confused by the \"requirements\" description. Isn't it best practice to always call our requirements file requirements.txt? If that's the case, how does env2 know I want env1's requirements? \nThank you for any info or suggestions. 
Really appreciate the assistance.\nI created a virtualenv C:\\Users\\admin\\Documents\\Enviorments>virtualenv django_1 \nUsing base prefix 'c:\\\\users\\\\admin\\\\appdata\\\\local\\\\programs\\\\python\\\\python37-32' \nNew python executable in C:\\Users\\admin\\Documents\\Enviorments\\django_1\\Scripts\\python.exe Installing setuptools, pip, wheel...done.\nHow do I activate it? source django_1\/bin\/activate doesn't work?\nI've tried: source C:\\Users\\admin\\Documents\\Enviorments\\django_1\/bin\/activate Every time I get: 'source' is not recognized as an internal or external command, operable program or batch file.","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":104,"Q_Id":53356442,"Users Score":0,"Answer":"virtualenv simply creates a new Python environment for your project. Think of it as another copy of Python that you have in your system. A virtual environment is helpful for development, especially if you will need different versions of the same libraries.\nThe answer to your first question is: yes, for each project that you use virtualenv with, you need to activate it first. After activating, when you run a Python script (not just your project's scripts, but any Python script), it will use the dependencies and configuration of the active Python environment.\nThe answer to the second question: pip freeze > requirements.txt will create the requirements file in the active folder, not in your project folder. So, let's say in your cmd\/terminal you are in C:\\Desktop; then the requirements file will be created there. If you're in the C:\\Desktop\\myproject folder, the file will be created there. The requirements file will contain the packages installed in the active virtualenv.\nThe answer to the 3rd question is related to the second. Simply, you need to write the full path of the second requirements file. So if you are in the first project and want to install packages from the second virtualenv, you run it like env2\/bin\/pip install -r \/path\/to\/my\/first\/requirements.txt. If in your terminal you are in an active folder that does not have a requirements.txt file, then running pip install will give you an error: the command alone does not know which requirements file you want to use, so you specify it. \nI created a virtualenv \nC:\\Users\\admin\\Documents\\Enviorments>virtualenv django_1 Using base prefix 'c:\\\\users\\\\admin\\\\appdata\\\\local\\\\programs\\\\python\\\\python37-32' New python executable in C:\\Users\\admin\\Documents\\Enviorments\\django_1\\Scripts\\python.exe Installing setuptools, pip, wheel...done. \nHow do I activate it? source django_1\/bin\/activate doesn't work? \nI've tried: source C:\\Users\\admin\\Documents\\Enviorments\\django_1\/bin\/activate Every time I get : 'source' is not recognized as an internal or external command, operable program or batch file.","Q_Score":1,"Tags":"python,python-3.x,virtualenv","A_Id":53356656,"CreationDate":"2018-11-17T23:12:00.000","Title":"virtualenv - Birds Eye View of Understanding","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have the problem that for a project I need to work with a framework (Python) that has poor documentation. I know what it does since it is the back end of a running application. 
I also know that no framework is good if the documentation is bad and that I should probably code it myself. But I have a time constraint. Therefore my question is: is there a cooking recipe for how to understand a poorly documented framework? \nWhat I have tried until now is checking some functions and identifying the organizational units in the framework, but I am lacking a system to do it more effectively.","AnswerCount":2,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":34,"Q_Id":53370709,"Users Score":2,"Answer":"If I were you, with time constraints, and bound to use a specific framework, I'd go about it in the following manner: \n\nList the use cases I desire to implement using the framework \nIdentify the APIs provided by the framework that help me implement the use cases\nPrototype the use cases based on the available documentation and reading\n\nThe prototyping is not implementing the entire use case, but identifying the building blocks around the case and implementing them. e.g., If my use case is to fetch the Students, along with their courses, and if I were using Hibernate to implement it, I would prototype the database access, validating how easily I am able to access the database using Hibernate, or how easily I am able to get the relational data by means of joining\/aggregation etc. \nThe prototyping will help me figure out the possible limitations\/bugs in the framework. If the limitations are show-stoppers, I will implement the supporting APIs myself; or I can take a call to scrap the entire framework and write one myself; whichever makes more sense.","Q_Score":0,"Tags":"python,oop,frameworks","A_Id":53370875,"CreationDate":"2018-11-19T08:19:00.000","Title":"How do I efficiently understand a framework with sparse documentation?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have been searching for a long time on the net, but to no avail. Please help or try to give me some ideas about how to achieve this.\nWhen I use the Python module concurrent.futures.ThreadPoolExecutor(max_workers=None), I want to know what number is suitable for max_workers.\nI've read the official document.\nI still don't know what number is suitable when I'm coding.\n\nChanged in version 3.5: If max_workers is None or not given, it will default to the number of processors on the machine, multiplied by 5, assuming that ThreadPoolExecutor is often used to overlap I\/O instead of CPU work and the number of workers should be higher than the number of workers for ProcessPoolExecutor.\n\nHow can I understand \"max_workers\" better? \nThis is my first time asking a question; thank you very much.","AnswerCount":1,"Available Count":1,"Score":0.761594156,"is_accepted":false,"ViewCount":9046,"Q_Id":53385479,"Users Score":5,"Answer":"max_workers: you can take it as the number of threads.\nIf you want to make the best use of the CPUs, you should keep them running (instead of sleeping).\nIdeally, if you set it to None, there will be (CPU count * 5) threads at most. On average, each CPU has 5 threads to schedule. 
Then if one of them goes to sleep, another thread will be scheduled.","Q_Score":2,"Tags":"python,threadpoolexecutor","A_Id":53385557,"CreationDate":"2018-11-20T02:45:00.000","Title":"Python concurrent.futures.ThreadPoolExecutor max_workers","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I'm working on a domain fronting project. Basically I'm trying to use the subprocess.call() function to interpret the following command:\nwget -O - https:\/\/fronteddomain.example --header 'Host: targetdomain.example'\nWith the proper domains, I know how to domain front; that is not the problem. I just need some help with using the Python subprocess.call() function with wget.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":292,"Q_Id":53400968,"Users Score":1,"Answer":"I figured it out using curl: \ncall([\"curl\", \"-s\", \"-H\", \"Host: targetdomain.example\", \"-H\", \"Connection: close\", \"frontdomain.example\"])","Q_Score":0,"Tags":"python,subprocess,wget","A_Id":53402135,"CreationDate":"2018-11-20T20:23:00.000","Title":"wget with subprocess.call()","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have to run pdf2image in my Python Lambda Function in AWS, but it requires poppler and poppler-utils to be installed on the machine. \nI have tried to search in many different places how to do that but could not find anything or anyone that has done that using Lambda functions.\nWould any of you know how to generate the poppler binaries, put them in my Lambda package and tell Lambda to use them?\nThank you all.","AnswerCount":5,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":6645,"Q_Id":53403399,"Users Score":0,"Answer":"Hi @Alex Albracht, thanks for the compiled easy instructions! They helped a lot. But I really struggled with getting the lambda function to find the poppler path. So, I'll try to add to that with an effort to make it clear.\nThe binary files should go in a zip folder with a structure like:\npoppler.zip -> bin\/poppler\nwhere the poppler folder contains the binary files. This zip folder can then be uploaded as a layer in AWS Lambda.\nFor pdf2image to work, it needs the poppler path. 
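A small self-contained illustration of the max_workers answer above: I/O-bound work with the documented 3.5+ default of processor count * 5 workers (the URL list is a placeholder):

import os
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

urls = ["http://example.com"] * 20

def fetch(url):
    with urlopen(url, timeout=5) as resp:
        return len(resp.read())

# mirrors the default; while one thread waits on the network, another runs
workers = (os.cpu_count() or 1) * 5
with ThreadPoolExecutor(max_workers=workers) as pool:
    sizes = list(pool.map(fetch, urls))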
This should be included in the lambda function in the format - \"\/opt\/bin\/poppler\".\nFor example,\npoppler_path = \"\/opt\/bin\/poppler\"\npages = convert_from_path(PDF_file, 500, poppler_path=poppler_path)","Q_Score":8,"Tags":"python,aws-lambda,poppler","A_Id":62793956,"CreationDate":"2018-11-20T23:58:00.000","Title":"How to install Poppler to be used on AWS Lambda","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am using python with clpex, when I finished my model I run the program and it throws me the following error:\nCplexSolverError: CPLEX Error 1016: Promotional version. Problem size limits exceeded.\nI have the IBM Academic CPLEX installed, how can I make python recognize this and not the promotional version?","AnswerCount":3,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1190,"Q_Id":53413153,"Users Score":1,"Answer":"you can go to the direction you install CPLEX. For Example, D:\\Cplex\nAfter that you will see a foler name cplex, then you click on that, --> python --> choose the version of your python ( Ex: 3.6 ), then choose the folder x64_win64, you will see another file name cplex. \nYou copy this file into your python site packakges ^^ and then you will not be restricted","Q_Score":0,"Tags":"python,cplex","A_Id":55095092,"CreationDate":"2018-11-21T13:30:00.000","Title":"CPLEX Error 1016: Promotional version , use academic version CPLEX","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":1,"REVIEW":0},{"Question":"I want to add a real-time chart to my Flask webapp. This chart, other than current updated data, should contain historical data too.\nAt the moment i can create the chart and i can make it real time but i have no idea how to make the data 'persistent', so i can't see what the chart looked like days or weeks ago.\nI'm using a Javascript charting library, while Data is being sent from my Flask script, but what it's not really clear is how i can \"store\" my data on Javascript. At the moment, indeed, the chart will reset each time the page is loaded.\nHow would it be possible to accomplish that? 
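Building on the poppler answer above, a sketch of what the surrounding Lambda handler might look like; the /opt/bin/poppler layout comes from the layer zip described in the answer, while the handler name and the /tmp/input.pdf path are illustrative assumptions.

```python
from pdf2image import convert_from_path

# Assumed layer layout: the zip contained bin/poppler, which Lambda
# mounts under /opt, so the binaries end up in /opt/bin/poppler.
POPPLER_PATH = "/opt/bin/poppler"

def handler(event, context):
    # Hypothetical input: the PDF was already downloaded to /tmp,
    # the only writable path inside a Lambda container.
    pdf_file = "/tmp/input.pdf"
    pages = convert_from_path(pdf_file, dpi=500, poppler_path=POPPLER_PATH)
    for i, page in enumerate(pages):
        page.save(f"/tmp/page_{i}.png", "PNG")
    return {"page_count": len(pages)}
```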
Is there an example for it?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":89,"Q_Id":53453643,"Users Score":0,"Answer":"You can try to store the data in a database and or in a file and extract from there .\nYou can also try to use dash or you can make on the right side a menu with dates like 21 september and see the chart from that day .\nFor dash you can look on YouTube at Sentdex","Q_Score":0,"Tags":"javascript,python,flask","A_Id":53487472,"CreationDate":"2018-11-23T22:49:00.000","Title":"How can i create a persistent data chart with Flask and Javascript?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"as you can tell I\u2019m fairly new to using Pyspark Python my RDD is set out as follows:\n(ID, First name, Last name, Address)\n(ID, First name, Last name, Address)\n(ID, First name, Last name, Address)\n(ID, First name, Last name, Address)\n(ID, First name, Last name, Address)\n Is there anyway I can count how many of these records I have stored within my RDD such as count all the IDs in the RDD. So that the output would tell me I have 5 of them. \nI have tried using RDD.count() but that just seems to return how many items I have in my dataset in total.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":458,"Q_Id":53468203,"Users Score":0,"Answer":"If you have RDD of tuples like RDD[(ID, First name, Last name, Address)] then you can perform below operation to do different types of counting.\n\nCount the total number of elements\/Rows in your RDD.\nrdd.count()\nCount Distinct IDs from your above RDD. Select the ID element and then do a distinct on top of it.\nrdd.map(lambda x : x[0]).distinct().count()\n\nHope it helps to do the different sort of counting.\nLet me know if you need any further help here.\nRegards,\nNeeraj","Q_Score":0,"Tags":"python,scala,pyspark","A_Id":53471694,"CreationDate":"2018-11-25T13:55:00.000","Title":"How do I count how many items are in a specific row in my RDD","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I created a web app with Flask where I'll be showing data, so I need charts for it.\nThe problem is that I don't really know how to do that, so I'm trying to find the best way to do that. I tried to use a Javascript charting library on my frontend and send the data to the chart using SocketIO, but the problem is that I need to send that data frequently and at a certain point I'll be having a lot of data, so sending each time a huge load of data through AJAX\/SocketIO would not be the best thing to do.\nTo solve this, I had this idea: could I generate the chart from my backend, instead of sending data to the frontend? 
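For the RDD-counting answer above, a runnable sketch with a toy RDD shaped like the question's (ID, first name, last name, address) tuples, assuming a local Spark context:

```python
from pyspark import SparkContext

sc = SparkContext("local[*]", "rdd-count-example")

# Toy RDD shaped like the question: (ID, first name, last name, address).
rdd = sc.parallelize([
    (1, "Ann", "Lee", "1 Main St"),
    (2, "Bob", "Kim", "2 Oak Ave"),
    (2, "Bob", "Kim", "2 Oak Ave"),  # duplicate ID on purpose
])

print(rdd.count())                                  # 3: total rows
print(rdd.map(lambda x: x[0]).distinct().count())   # 2: distinct IDs
```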
I think it would be the better thing to do, since I won't have to send the data to the frontend each time and there won't be a need to generate a ton of data each time the page is loaded, since the chart will be processed on the frontend.\nSo would it be possible to generate a chart from my Flask code in Python and visualize it on my webpage? Is there a good library do that?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":139,"Q_Id":53471032,"Users Score":2,"Answer":"Try to use dash is a python library for web charts","Q_Score":1,"Tags":"python,flask,data-visualization","A_Id":53472495,"CreationDate":"2018-11-25T19:22:00.000","Title":"Adding charts to a Flask webapp","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I got this binary number 101111111111000\nI need to strip off the 8 most significant bits and have 11111000 at the end.\nI tried to make 101111111111000 << 8, but this results in 10111111111100000000000, it hasn't the same effect as >> which strips the lower bits. So how can this be done? The final result MUST BE binary type.","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":406,"Q_Id":53472702,"Users Score":0,"Answer":"To achieve this for a number x with n digits, one can use this\nx&(2**(len(bin(x))-2-8)-1)\n-2 to strip 0b, -8 to strip leftmost\nSimply said it ands your number with just enough 1s that the 8 leftmost bits are set to 0.","Q_Score":1,"Tags":"python,binary","A_Id":53472900,"CreationDate":"2018-11-25T22:35:00.000","Title":"How to strip off left side of binary number in Python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I need to clear a printed line, but so far I have found no good answers for using python 3.7, IDLE on windows 10. I am trying to make a simple code that prints a changing variable. But I don't want tons of new lines being printed. I want to try and get it all on one line.\nIs it possible to print a variable that has been updated later on in the code?\nDo remember I am doing this in IDLE, not kali or something like that.\nThanks for all your help in advance.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":440,"Q_Id":53475654,"Users Score":1,"Answer":"The Python language definition defines when bytes will be sent to a file, such as sys.stdout, the default file for print. It does not define what the connected device does with the bytes.\nWhen running code from IDLE, sys.stdout is initially connected to IDLE's Shell window. Shell is not a terminal and does not interpret terminal control codes other than '\\n'. 
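Spelling out the masking answer above for the question's 15-bit example (stripping the 8 most significant of 15 bits leaves the lowest 7, i.e. 0b1111000):

```python
x = 0b101111111111000               # 15 bits

width = len(bin(x)) - 2             # 15; bin() prepends the '0b' prefix
mask = 2 ** (width - 8) - 1         # seven 1-bits: 0b1111111
print(bin(x & mask))                # 0b1111000

# Equivalent, without string formatting:
print(bin(x & ((1 << (x.bit_length() - 8)) - 1)))   # 0b1111000
```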
The reasons are a) IDLE is aimed at program development, by programmers, rather than program running by users, and developers sometimes need to see all the output from a program; and b) IDLE is cross-platform, while terminal behaviors are various, depending on the system, settings, and current modes (such as insert versus overwrite).\nHowever, I am planning to add an option to run code in an IDLE editor with sys.stdout directed to the local system terminal\/console.","Q_Score":3,"Tags":"python-3.x,python-idle","A_Id":53485761,"CreationDate":"2018-11-26T06:17:00.000","Title":"how do I clear a printed line and replace it with updated variable IDLE","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"pre:\n\nI installed both python2.7 and python 3.70\neclipse installed pydev, and configured two interpreters for each py version\nI have a project with some py scripts\n\nquestion:\nI choose one py file, I want run it in py2, then i want it run in py3(manually).\nI know that each file cound has it's run configuration, but it could only choose one interpreter a time. \nI also know that py.exe could help you get the right version of python.\nI tried to add an interpreter with py.exe, but pydev keeps telling me that \"python stdlibs\" is necessary for a interpreter while only python3's lib shows up.\nso, is there a way just like right click the file and choose \"run use interpreter xxx\"?\nor, does pydev has the ability to choose interpreters by \"#! python2\"\/\"#! python3\" at file head?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":324,"Q_Id":53496898,"Users Score":1,"Answer":"I didn't understand what's the actual workflow you want...\nDo you want to run each file on a different interpreter (say you have mod1.py and want to run it always on py2 and then mod2.py should be run always on py3) or do you want to run the same file on multiple interpreters (i.e.: you have mod1.py and want to run it both on py2 and py3) or something else? 
\nSo, please give more information on what's your actual problem and what you want to achieve...\n\nOptions to run a single file in multiple interpreters:\n\nAlways run with the default interpreter (so, make a regular run -- F9 to run the current editor -- change the default interpreter -- using Ctrl+shift+Alt+I -- and then rerun with Ctrl+F11).\nCreate a .sh\/.bat which will always do 2 launches (initially configure it to just be a wrapper to launch with one python, then, after properly configuring it inside of PyDev that way change it to launch python 2 times, one with py2 and another with py3 -- note that I haven't tested, but it should work in theory).","Q_Score":0,"Tags":"python,eclipse,pydev","A_Id":53516276,"CreationDate":"2018-11-27T09:51:00.000","Title":"how to run python in eclipse with both py2 and py3?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am looking for a way to extract words from text if they match the following conditions:\n1) are capitalised\nand\n2) appear on a new line on their own (i.e. no other text on the same line).\nI am able to extract all capitalised words with this code:\n caps=re.findall(r\"\\b[A-Z]+\\b\", mytext)\nbut can't figure out how to implement the second condition. Any help will be greatly appreciated.","AnswerCount":2,"Available Count":1,"Score":-0.0996679946,"is_accepted":false,"ViewCount":38,"Q_Id":53509867,"Users Score":-1,"Answer":"please try following statements \\r\\n at the begining of your regex expression","Q_Score":2,"Tags":"python,regex,python-3.x","A_Id":53509933,"CreationDate":"2018-11-27T23:32:00.000","Title":"Python regex to identify capitalised single word lines in a text abstract","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"Can i do these two things:\n\nIs there any library in dart for Sentiment Analysis? \nCan I use Python (for Sentiment Analysis) in dart? \n\nMy main motive for these questions is that I'm working on an application in a flutter and I use sentiment analysis and I have no idea that how I do that. \nCan anyone please help me to solve this Problem.?\nOr is there any way that I can do text sentiment analysis in the flutter app?","AnswerCount":5,"Available Count":1,"Score":1.0,"is_accepted":false,"ViewCount":36968,"Q_Id":53519266,"Users Score":10,"Answer":"You can create an api using Python then serve it your mobile app (FLUTTER) using http requests.\nI","Q_Score":21,"Tags":"python,dart,python-requests,flutter,dart-pub","A_Id":56864092,"CreationDate":"2018-11-28T12:15:00.000","Title":"Python and Dart Integration in Flutter Mobile Application","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I'm running the Set_Attitude_Target example on an Intel Aero with Ardupilot. 
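For the capitalised-single-word-line question above, both conditions can go into one pattern by anchoring with ^ and $ under re.MULTILINE; a sketch with made-up text:

```python
import re

mytext = """INTRODUCTION
This line mentions METHODS but is not alone.
METHODS
RESULTS"""

# re.MULTILINE makes ^ and $ match at line boundaries, so the whole
# line must consist of capital letters (add \r? before $ for CRLF text).
caps = re.findall(r"^[A-Z]+$", mytext, flags=re.MULTILINE)
print(caps)   # ['INTRODUCTION', 'METHODS', 'RESULTS']
```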
The code is working as intended but on top of a clear sensor error, that becomes more evident the longer I run the experiment.\nIn short, the altitude report from the example is reporting that in LocationLocal there is a relative altitude of -0.01, which gets smaller and smaller the longer the drone stays on. \nIf the drone takes off, say, 1 meter, then the relative altitude is less than that, so the difference is being taken out.\nI ran the same example with the throttle set to a low value so the drone would stay stationary while \"trying to take off\" with insufficient thrust. For the 5 seconds that the drone was trying to take off, as well as after it gave up, disarmed and continued to run the code, the console read incremental losses to altitude, until I stopped it at -1 meter. \nWhere is this sensor error coming from and how do I remedy it?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":51,"Q_Id":53522828,"Users Score":0,"Answer":"As per Agustinus Baskara's comment on the original post, it would appear the built-in sensor is simply that bad - it can't be improved upon with software.","Q_Score":0,"Tags":"dronekit-python","A_Id":53557092,"CreationDate":"2018-11-28T15:25:00.000","Title":"Why is LocationLocal: Relative Alt dropping into negative values on a stationary drone?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":1,"REVIEW":0},{"Question":"I am confused now about the loss functions used in XGBoost. Here is how I feel confused:\n\nwe have objective, which is the loss function needs to be minimized; eval_metric: the metric used to represent the learning result. These two are totally unrelated (if we don't consider such as for classification only logloss and mlogloss can be used as eval_metric). Is this correct? If I am, then for a classification problem, how you can use rmse as a performance metric?\ntake two options for objective as an example, reg:logistic and binary:logistic. For 0\/1 classifications, usually binary logistic loss, or cross entropy should be considered as the loss function, right? So which of the two options is for this loss function, and what's the value of the other one? Say, if binary:logistic represents the cross entropy loss function, then what does reg:logistic do?\nwhat's the difference between multi:softmax and multi:softprob? Do they use the same loss function and just differ in the output format? If so, that should be the same for reg:logistic and binary:logistic as well, right?\n\nsupplement for the 2nd problem\nsay, the loss function for 0\/1 classification problem should be\nL = sum(y_i*log(P_i)+(1-y_i)*log(P_i)). So if I need to choose binary:logistic here, or reg:logistic to let xgboost classifier to use L loss function. If it is binary:logistic, then what loss function reg:logistic uses?","AnswerCount":2,"Available Count":1,"Score":1.0,"is_accepted":false,"ViewCount":19483,"Q_Id":53530189,"Users Score":11,"Answer":"'binary:logistic' uses -(y*log(y_pred) + (1-y)*(log(1-y_pred)))\n'reg:logistic' uses (y - y_pred)^2\nTo get a total estimation of error we sum all errors and divide by number of samples.\n\nYou can find this in the basics. 
When looking on Linear regression VS Logistic regression.\nLinear regression uses (y - y_pred)^2 as the Cost Function\nLogistic regression uses -(y*log(y_pred) + (y-1)*(log(1-y_pred))) as the Cost function\n\nEvaluation metrics are completely different thing. They design to evaluate your model. You can be confused by them because it is logical to use some evaluation metrics that are the same as the loss function, like MSE in regression problems. However, in binary problems it is not always wise to look at the logloss. My experience have thought me (in classification problems) to generally look on AUC ROC.\nEDIT\n\naccording to xgboost documentation:\n\nreg:linear: linear regression\n\n\nreg:logistic: logistic regression\n\n\nbinary:logistic: logistic regression for binary classification, output\nprobability\n\nSo I'm guessing:\nreg:linear: is as we said, (y - y_pred)^2\nreg:logistic is -(y*log(y_pred) + (y-1)*(log(1-y_pred))) and rounding predictions with 0.5 threshhold\nbinary:logistic is plain -(y*log(y_pred) + (1-y)*(log(1-y_pred))) (returns the probability)\nYou can test it out and see if it do as I've edited. If so, I will update the answer, otherwise, I'll just delete it :<","Q_Score":15,"Tags":"python,machine-learning,xgboost,xgbclassifier","A_Id":53535742,"CreationDate":"2018-11-29T00:38:00.000","Title":"The loss function and evaluation metric of XGBoost","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":1},{"Question":"Perhaps it is a basic question but I am really not a profession in Portainer.\nI have a local Portainer, a Pycharm to manage the Python code. What should I do after I modified my code and deploy this change to the local Portainer?\nThx","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":77,"Q_Id":53535445,"Users Score":0,"Answer":"If you have mounted the folder where your code resides directly in the container the changes will be also be applied in your container so no further action is required.\nIf you have not mounted the folder to your container (for example if you copy the code when you build the image), you would have to rebuild the image. Of course this is a lot more work so I would recommend using the mounted volumes.","Q_Score":1,"Tags":"linux,python-3.x,portainer","A_Id":53535718,"CreationDate":"2018-11-29T09:16:00.000","Title":"After I modified my Python code in Pycharm, how to deploy the change to my Portainer?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am trying to obtain an application variable (app user id) in before_execute(conn, clauseelement, multiparam, param) method. The app user id is stored in python http request object which I do not have any access to in the db event.\nIs there any way to associate a piece of sqlalchemy external data somewhere to fetch it in before_execute event later? 
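A small sketch of the objective/eval_metric separation discussed in the XGBoost answer above, assuming a recent xgboost where eval_metric is accepted by the XGBClassifier constructor (older releases take it in fit()); the toy data is illustrative.

```python
import numpy as np
from xgboost import XGBClassifier

rng = np.random.RandomState(0)
X = rng.rand(200, 5)
y = (X[:, 0] > 0.5).astype(int)     # toy binary target

# objective = the loss minimised during boosting;
# eval_metric = the number reported on evaluation sets. They are independent.
model = XGBClassifier(objective="binary:logistic", eval_metric="logloss")
model.fit(X, y)

print(model.predict_proba(X)[:3, 1])  # probabilities, because binary:logistic
print(model.predict(X)[:3])           # thresholded 0/1 labels
```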
\nAppreciate your time and help.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":98,"Q_Id":53551209,"Users Score":0,"Answer":"Answering my own question here with a possible solution :)\n\nFrom http request copied the piece of data to session object\nSince the session binding was at engine level, copied the data from session to connection object in SessionEvent.after_begin(session, transaction, connection). [Had it been Connection level binding, we could have directly set the objects from session object to connection object.]\n\nNow the data is available in connection object and in before_execute() too.","Q_Score":0,"Tags":"python,sqlalchemy","A_Id":53667953,"CreationDate":"2018-11-30T04:23:00.000","Title":"Sqlalchemy before_execute event - how to pass some external variable, say app user id?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I'm trying to load certain data using sessions (locally) and it has been working for some time but, now I get the following warning and my data that was loaded through sessions is no longer being loaded.\n\nThe \"b'session'\" cookie is too large: the value was 13083 bytes but\n the header required 44 extra bytes. The final size was 13127 bytes but\n the limitis 4093 bytes. Browsers may silently ignore cookies larger\n than this.\n\nI have tried using session.clear(). I also opened up chrome developer tools and tried deleting the cookies associated with 127.0.0.1:5000. I have also tried using a different secret key to use with the session.\nIt would be greatly appreciated if I could get some help on this, since I have been searching for a solution for many hours.\nEdit:\nI am not looking to increase my limit by switching to server-side sessions. Instead, I would like to know how I could clear my client-side session data so I can reuse it.\nEdit #2:\nI figured it out. I forgot that I pushed way more data to my database, so every time a query was performed, the session would fill up immediately.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":6403,"Q_Id":53551637,"Users Score":6,"Answer":"It looks like you are using the client-side type of session that is set by default with Flask which has a limited capacity of 4KB. You can use a server-side type session that will not have this limit, for example, by using a back-end file system (you save the session data in a file system in the server, not in the browser). To do so, set the configuration variable 'SESSION_TYPE' to 'filesystem'. \nYou can check other alternatives for the 'SESSION_TYPE' variable in the Flask documentation.","Q_Score":6,"Tags":"python,session,flask","A_Id":53554226,"CreationDate":"2018-11-30T05:17:00.000","Title":"Session cookie is too large flask application","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"Currently in R, once you load a dataset (for example with read.csv), Rstudio saves it as a variable in the global environment. 
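A sketch of the session-to-connection handoff described in the SQLAlchemy answer above, using the 1.x event signatures from the question; the app_user_id key and the in-memory engine are illustrative assumptions.

```python
from sqlalchemy import create_engine, event, text
from sqlalchemy.orm import Session

engine = create_engine("sqlite://")   # placeholder in-memory engine

@event.listens_for(Session, "after_begin")
def copy_user_to_connection(session, transaction, connection):
    # Hypothetical key: the web layer put the id into session.info earlier.
    connection.info["app_user_id"] = session.info.get("app_user_id")

@event.listens_for(engine, "before_execute")
def before_execute(conn, clauseelement, multiparams, params):
    # The request-scoped value is now reachable without the HTTP request.
    print("executing as app user:", conn.info.get("app_user_id"))

session = Session(bind=engine, info={"app_user_id": 42})
session.execute(text("SELECT 1"))
```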
This ensures you don't have to load the dataset every single time you do a particular test or change. \nWith Python, I do not know which text editor\/IDE will allow me to do this. E.g., I want to load a dataset once, and then subsequently do all sorts of things with it, instead of having to load it every time I run the script. \nAny pointers as to how to do this would be very useful.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":119,"Q_Id":53557674,"Users Score":0,"Answer":"It depends on how large your data set is. \nFor relatively smaller datasets you could look at installing Anaconda Python Jupyter notebooks, which are really great for working with data and visualisation once the dataset is loaded. For larger datasets you can write some functions \/ generators to iterate efficiently through the dataset.","Q_Score":0,"Tags":"python,global-variables,spyder","A_Id":53557733,"CreationDate":"2018-11-30T12:32:00.000","Title":"not having to load a dataset over and over","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am executing a query using pymysql in Python: \n\nselect (sum(acc_Value)) from accInfo where acc_Name = 'ABC'\n\nThe purpose of the query is to get the sum of all the values in the acc_Value column for all the rows matching acc_Name = 'ABC'.\nThe output I am getting when using cur.fetchone() is \n\n(Decimal('256830696'),)\n\nNow, how do I get the value \"256830696\" alone in Python? \nThanks in advance.","AnswerCount":1,"Available Count":1,"Score":-0.1973753202,"is_accepted":false,"ViewCount":283,"Q_Id":53559241,"Users Score":-1,"Answer":"It's a tuple, just take the 0th index","Q_Score":0,"Tags":"python,pymysql","A_Id":53559488,"CreationDate":"2018-11-30T14:16:00.000","Title":"pymysql - Get value from a query","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I'm trying to make a save function in a program I am writing for bubbling\/ballooning drawings. The only thing I can't get to work is saving a \"work copy\". That way, if a drawing gets revision changes, you don't need to redo all the work. Just load the work copy, and add\/remove\/re-arrange bubbles.\nI'm using tkinter and canvas, and I create ovals and text for the bubbles. But I can't figure out any good way to save the info from the oval\/text objects.\nI tried to pickle the whole canvas, but that seems like it won't work, after some googling.\nPickling every object when created seems to only save the object id: 1, 2, etc. That also won't work, since some bubbles will be moved and receive new coordinates.
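In code, the tuple indexing suggested in the pymysql answer above looks like this; the connection credentials are placeholders:

```python
import pymysql

conn = pymysql.connect(host="localhost", user="user",
                       password="secret", database="mydb")  # placeholder credentials
with conn.cursor() as cur:
    cur.execute("SELECT SUM(acc_Value) FROM accInfo WHERE acc_Name = %s", ("ABC",))
    row = cur.fetchone()     # e.g. (Decimal('256830696'),)
    total = row[0]           # unpack the one-element tuple
    print(total)             # Decimal('256830696'); wrap in int() if needed
```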
They might also have a different color, size etc.\nIn my next approach I'm thinking of saving the whole \"can.create_oval( x1, y1, x2, y2, fill = fillC, outli....\" as a string to a txt and make the function to recreate a with eval()\nAny one have any good suggestion on how to approach this?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":597,"Q_Id":53571619,"Users Score":1,"Answer":"There is no built-in way to save and restore the canvas. However, the canvas has methods you can use to get all of the information about the items on the canvas. You can use these methods to save this information to a file and then read this file back and recreate the objects. \n\nfind_all - will return an ordered list of object ids for all objects on the canvas\ntype - will return the type of the object as a string (\"rectangle\", \"circle\", \"text\", etc)\nitemconfig - returns a dictionary with all of the configuration values for the object. The values in the dictionary are a list of values which includes the default value of the option at index 3 and the current value at index 4. You can use this to save only the option values that have been explicitly changed from the default.\ngettags - returns a list of tags associated with the object","Q_Score":1,"Tags":"python,python-3.x,canvas,tkinter,pickle","A_Id":53572078,"CreationDate":"2018-12-01T14:09:00.000","Title":"Saving objects from tk canvas","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am trying to generate three different sized output vectors namely 25d, 50d and 75d. I am trying to do so by training the same dataset using the word2vec model. I am not sure how I can get three vectors of different sizes using the same training dataset. Can someone please help me get started on this? I am very new to machine learning and word2vec. Thanks","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":25,"Q_Id":53586254,"Users Score":1,"Answer":"You run the code for one model three times, each time supplying a different vector_size parameter to the model initialization.","Q_Score":0,"Tags":"python,vector,word2vec","A_Id":53586551,"CreationDate":"2018-12-03T01:15:00.000","Title":"Different sized vectors in word2vec","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"So instead of having data-item-url=\"https:\/\/miglopes.pythonanywhere.com\/ra%C3%A7%C3%A3o-de-c%C3%A3o-purina-junior-10kg\/\"\nit keeps on appearing \ndata-item-url=\"http:\/\/localhost\/ra%C3%A7%C3%A3o-de-c%C3%A3o-purina-junior-10kg\/\"\nhow do i remove the localhost so my snipcart can work on checkout?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":88,"Q_Id":53587049,"Users Score":0,"Answer":"Without more details of where this tag is coming from it's hard to know for sure... 
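A sketch of the save half of the tkinter canvas answer above: walk find_all() and, for each item, record its type, coordinates, tags, and any option whose current value (index 4 of itemconfig's value tuples) differs from its default (index 3). Writing JSON rather than pickle is an assumption here, chosen so the file stays readable and editable.

```python
import json
import tkinter as tk

def save_canvas(canvas, path):
    items = []
    for item_id in canvas.find_all():              # ids in stacking order
        options = {}
        for key, value in canvas.itemconfig(item_id).items():
            # Each value is a 5-tuple; [3] is the default, [4] the current
            # setting, so only explicitly changed options get saved.
            if len(value) == 5 and value[3] != value[4]:
                options[key] = value[4]
        items.append({"type": canvas.type(item_id),
                      "coords": canvas.coords(item_id),
                      "tags": canvas.gettags(item_id),
                      "options": options})
    with open(path, "w") as f:
        json.dump(items, f, indent=2)

root = tk.Tk()
can = tk.Canvas(root)
can.create_oval(10, 10, 60, 60, fill="red")
can.create_text(35, 35, text="1")
save_canvas(can, "drawing.json")
```

Restoring would be the reverse: dispatch on the saved type, e.g. getattr(canvas, "create_" + item["type"])(*item["coords"], **item["options"]).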
but most likely you need to update your site's hostname in the Wagtail admin, under Settings -> Sites.","Q_Score":0,"Tags":"localhost,wagtail,pythonanywhere,snipcart","A_Id":53600830,"CreationDate":"2018-12-03T03:23:00.000","Title":"data-item-url is on localhost instead of pythonanywhere (wagtail + snipcart project)","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I'm currently using the Fourier transformation in conjunction with Keras for voice recogition (speaker identification). I have heard MFCC is a better option for voice recognition, but I am not sure how to use it.\nI am using librosa in python (3) to extract 20 MFCC features. My question is: which MFCC features should I use for speaker identification?\nIn addition to this I am unsure on how to implement these features. What I would do is to get the necessary features and make one long vector input for a neural network. However, it is also possible to display colors, so could image recognition also be possible, or is this more aimed at speech, and not speaker recognition?\nIn short, I am unsure where I should start, as I am not very experienced with image recognition and have no idea where to start.\nThanks in advance!!","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1526,"Q_Id":53601892,"Users Score":0,"Answer":"You can use MFCCs with dense layers \/ multilayer perceptron, but probably a Convolutional Neural Network on the mel-spectrogram will perform better, assuming that you have enough training data.","Q_Score":0,"Tags":"python,keras,neural-network,voice-recognition,mfcc","A_Id":53806861,"CreationDate":"2018-12-03T21:09:00.000","Title":"Using MFCC's for voice recognition","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"Let's say i have the following file,\ndummy_file.txt(contents below)\nfirst line\nthird line\nhow can i add a line to that file right in the middle so the end result is:\nfirst line\nsecond line\nthird line\nI have looked into opening the file with the append option, however that adds the line to the end of the file.","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":53,"Q_Id":53619223,"Users Score":0,"Answer":"The standard file methods don't support inserting into the middle of a file. 
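For the MFCC question above, a minimal librosa sketch; the filename is hypothetical, and averaging over time is just the simplest way to turn the (20, n_frames) array into one fixed-length speaker vector for the dense-layer route mentioned in the answer.

```python
import numpy as np
import librosa

# Hypothetical utterance file; sr=None keeps the native sample rate.
y, sr = librosa.load("speaker01.wav", sr=None)

mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)   # shape: (20, n_frames)

# Simplest fixed-length speaker vector: statistics over time.
features = np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])
print(features.shape)   # (40,) -- one input vector per utterance
```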
You need to read the file, add your new data to the data that you read in, and then re-write the whole file.","Q_Score":0,"Tags":"python-3.x","A_Id":53619407,"CreationDate":"2018-12-04T18:22:00.000","Title":"How to add text to a file in python3","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I create a pyspark dataframe and i want to see it in the SciView tab in PyCharm when i debug my code (like I used to do when i have worked with pandas).\nIt says \"Nothing to show\" (the dataframe exists, I can see it when I use the show() command).\nsomeone knows how to do it or maybe there is no integration between pycharm and pyspark dataframe in this case?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1310,"Q_Id":53627818,"Users Score":5,"Answer":"Pycharm does not support spark dataframes, you should call the toPandas() method on the dataframe. As @abhiieor mentioned in a comment, be aware that you can potentially collect a lot of data, you should first limit() the number of rows returned.","Q_Score":5,"Tags":"python,pyspark,pycharm","A_Id":53633844,"CreationDate":"2018-12-05T08:13:00.000","Title":"DataFrame view in PyCharm when using pyspark","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"It doesn't have to be exactly a trigger inside the database. I just want to know how I should design this, so that when changes are made inside MySQL or SQL server, some script could be triggered.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":146,"Q_Id":53678628,"Users Score":0,"Answer":"One Way would be to keep a counter on the last updated row in the database, and then you need to keep polling(Checking) the database through python for new records in short intervals.\nIf the value in the counter is increased then you could use the subprocess module to call another Python script.","Q_Score":0,"Tags":"python,mysql,sql-server","A_Id":53678958,"CreationDate":"2018-12-08T01:12:00.000","Title":"Is it possible to trigger a script or program if any data is updated in a database, like MySQL?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"the version of python is 3.6\nI tried to execute my code but, there are still some errors as below:\nTraceback (most recent call last):\n\nFile\n \"C:\\Users\\tmdgu\\Desktop\\NLP-master1\\NLP-master\\Ontology_Construction.py\",\n line 55, in \n , binary=True)\nFile \"E:\\Program\n Files\\Python\\Python35-32\\lib\\site-packages\\gensim\\models\\word2vec.py\",\n line 1282, in load_word2vec_format\n raise DeprecationWarning(\"Deprecated. Use gensim.models.KeyedVectors.load_word2vec_format instead.\")\nDeprecationWarning: Deprecated. 
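The read-modify-rewrite approach from the answer above, sketched for the question's example file; the index 1 insertion point is specific to this toy case.

```python
path = "dummy_file.txt"    # contains "first line\nthird line\n"

with open(path) as f:
    lines = f.readlines()

lines.insert(1, "second line\n")   # position 1 = between the two lines

with open(path, "w") as f:
    f.writelines(lines)
```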
Use\n gensim.models.KeyedVectors.load_word2vec_format instead.\n\nhow to fix the code? or is the path to data wrong?","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":1129,"Q_Id":53697450,"Users Score":2,"Answer":"This is just a warning, not a fatal error. Your code likely still works.\n\"Deprecation\" means a function's use has been marked by the authors as no longer encouraged. \nThe function typically still works, but may not for much longer \u2013 becoming unreliable or unavailable in some future library release. Often, there's a newer, more-preferred way to do the same thing, so you don't trigger the warning message. \nYour warning message points you at the now-preferred way to load word-vectors of that format: use KeyedVectors.load_word2vec_format() instead. \nDid you try using that, instead of whatever line of code (not shown in your question) that you were trying before seeing the warning?","Q_Score":0,"Tags":"python,gensim,word2vec","A_Id":53714251,"CreationDate":"2018-12-09T22:47:00.000","Title":"Error for word2vec with GoogleNews-vectors-negative300.bin","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am new to python and am unsure of how the breakpoint method works. Does it open the debugger for the IDE or some built-in debugger?\nAdditionally, I was wondering how that debugger would be able to be operated.\nFor example, I use Spyder, does that mean that if I use the breakpoint() method, Spyder's debugger will open, through which I could the Debugger dropdown menu, or would some other debugger open?\nI would also like to know how this function works in conjunction with the breakpointhook() method.","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1616,"Q_Id":53715811,"Users Score":4,"Answer":"No, debugger will not open itself automatically as a consequence of setting a breakpoint.\nSo you have first set a breakpoint (or more of them), and then manually launch a debugger.\nAfter this, the debugger will perform your code as usually, but will stop performing instructions when it reaches a breakpoint - the instruction at the breakpoint itself it will not perform. It will pause just before it, given you an opportunity to perform some debug tasks, as\n\ninspect variable values,\nset variables manually to other values,\ncontinue performing instructions step by step (i. e. 
only the next instruction),\ncontinue performing instructions to the next breakpoint,\nprematurely stop debugging your program.\n\nThis is the common scenario for all debuggers of all programming languages (and their IDEs).\nFor IDEs, launching a debugger will \n\nenable or reveal debugging instructions in their menu system,\nshow a toolbar for them and will,\nenable hot keys for them.\n\nWithout setting at least one breakpoint, most debuggers perform the whole program without a pause (as launching it without a debugger), so you will have no opportunity to perform any debugging task.\n(Some IDEs have an option to launch a debugger in the \"first instruction, then a pause\" mode, so you need not set breakpoints in advance in this case.)\n\nYes, the breakpoint() built-in function (introduced in Python 3.7) stops executing your program, enters it in the debugging mode, and you may use Spyder's debugger drop-down menu.\n(It isn't a Spyders' debugger, only its drop-down menu; the used debugger will be still the pdb, i. e. the default Python DeBugger.)\nThe connection between the breakpoint() built-in function and the breakpointhook() function (from the sys built-in module) is very straightforward - the first one directly calls the second one. \nThe natural question is why we need two functions with the exactly same behavior?\nThe answer is in the design - the breakpoint() function may be changed indirectly, by changing the behavior of the breakpointhook() function.\nFor example, IDE creators may change the behavior of the breakpointhook() function so that it will launch their own debugger, not the pdb one.","Q_Score":5,"Tags":"python,python-3.x,methods,built-in","A_Id":53715996,"CreationDate":"2018-12-11T00:40:00.000","Title":"Use of Breakpoint Method","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I'm new to programming and I just downloaded Anaconda a few days ago for Windows 64-bit. I came across the Invent with Python book and decided I wanted to work through it so I downloaded that too. I ended up running into a couple issues with it not working (somehow I ended up with Spyder (Python 2.7) and end=' ' wasn't doing what it was supposed to so I uninstalled and reinstalled Anaconda -- though originally I did download the 3.7 version). It looked as if I had the 2.7 version of Pygame. I'm looking around and I don't see a Pygame version for Python 3.7 that is compatible with Anaconda. The only ones I saw were for Mac or not meant to work with Anaconda. This is all pretty new to me so I'm not sure what my options are. 
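A sketch of the breakpoint()/sys.breakpointhook() relationship described above: swapping the hook changes what breakpoint() does, which is exactly the customisation point an IDE can use. The PYTHONBREAKPOINT environment variable (e.g. PYTHONBREAKPOINT=0 to disable) offers the same control without code changes.

```python
import sys

def my_hook(*args, **kwargs):
    # Hypothetical replacement hook: report instead of entering pdb.
    print("breakpoint() was called here")

sys.breakpointhook = my_hook   # the hook IDEs/debuggers can override

x = 41
breakpoint()   # delegates to sys.breakpointhook() -> prints, no pause
x += 1
print(x)       # 42; execution continued normally
```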
Thanks in advance.\nAlso, how do I delete the incorrect Pygame version?","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":1501,"Q_Id":53716005,"Users Score":2,"Answer":"just use pip install pygame & python will look for a version compatible with your installation.\nIf you're using Anaconda and pip doesn't work on CMD prompt, try using the Anaconda prompt from start menu.","Q_Score":0,"Tags":"python,python-3.x,pygame,anaconda,spyder","A_Id":53718434,"CreationDate":"2018-12-11T01:14:00.000","Title":"Is there an appropriate version of Pygame for Python 3.7 installed with Anaconda?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"Is it possible to retrieve or reformat the falsifying example after a test failure? The point is to show the example data in a different format - data generated by the strategy is easy to work with in the code but not really user friendly, so I'm looking at how to display it in a different form. Even a post-mortem tool working with the example database would be enough, but there does not seem to be any API allowing that, or am I missing something?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":89,"Q_Id":53729754,"Users Score":0,"Answer":"Even a post-mortem tool working with the example database would be enough, but there does not seem to be any API allowing that, or am I missing something?\n\nThe example database uses a private format and only records the choices a strategy made to generate the falsifying example, so there's no way to extract the data of the example short of re-running the test.\nStuart's recommendation of hypothesis.note(...) is a good one.","Q_Score":1,"Tags":"python,python-hypothesis","A_Id":53777577,"CreationDate":"2018-12-11T17:54:00.000","Title":"python-hypothesis: Retrieving or reformatting a falsifying example","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"In my view.py I obtain a date from my MSSQL database in this format 2018-12-06 00:00:00.000 so I pass that value as context like datedb and in my html page I render it like this {{datedb|date:\"c\"}} but it shows the date with one day less like this:\n\n2018-12-05T18:00:00-06:00\n\nIs the 06 not the 05 day.\nwhy is this happening? 
how can I show the right date?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":19,"Q_Id":53731263,"Users Score":0,"Answer":"One way of solve the problem was chage to USE_TZ = False has Willem said in the comments, but that gives another error so I found the way to do it just adding in the template this {% load tz %} and using the flter |utc on the date variables like datedb|utc|date:'Y-m-d'.","Q_Score":0,"Tags":"python,django,django-templates,django-template-filters","A_Id":53749903,"CreationDate":"2018-12-11T19:43:00.000","Title":"Template rest one day from the date","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I know how to add single packages and I know that the conda create command supports adding a new environment with all anaconda packages installed.\nBut how can I add all anaconda packages to an existing environment?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":711,"Q_Id":53742827,"Users Score":1,"Answer":"I was able to solve the problem as following:\n\nCreate a helper env with anaconda: conda create -n env_name anaconda\nActivate that env conda activate env_name\nExport packages into specification file: conda list --explicit > spec-file.txt\nActivate the target environment: activate target_env_name\nImport that specification file: conda install --file spec-file.txt","Q_Score":1,"Tags":"python,python-3.x,anaconda,conda","A_Id":53743588,"CreationDate":"2018-12-12T12:15:00.000","Title":"Add full anaconda package list to existing conda environment","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have two different text which I want to compare using tfidf vectorization.\nWhat I am doing is:\n\ntokenizing each document\nvectorizing using TFIDFVectorizer.fit_transform(tokens_list)\n\nNow the vectors that I get after step 2 are of different shape.\nBut as per the concept, we should have the same shape for both the vectors. Only then the vectors can be compared.\nWhat am I doing wrong? Please help.\nThanks in advance.","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":2848,"Q_Id":53748236,"Users Score":3,"Answer":"As G. Anderson already pointed out, and to help the future guys on this, when we use the fit function of TFIDFVectorizer on document D1, it means that for the D1, the bag of words are constructed.\nThe transform() function computes the tfidf frequency of each word in the bag of word. \nNow our aim is to compare the document D2 with D1. It means we want to see how many words of D1 match up with D2. Thats why we perform fit_transform() on D1 and then only the transform() function on D2 would apply the bag of words of D1 and count the inverse frequency of tokens in D2. 
\nThis would give the relative comparison of D1 against D2.","Q_Score":4,"Tags":"python,nltk,cosine-similarity,tfidfvectorizer","A_Id":59012601,"CreationDate":"2018-12-12T17:20:00.000","Title":"how to compare two text document with tfidf vectorizer?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"So basically I have a dictionary with x and y values and I want to be able to get only the x value of the first coordinate and only the y value of the first coordinate and then the same with the second coordinate and so on, so that I can use it in an if-statement.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":69,"Q_Id":53763243,"Users Score":1,"Answer":"if the values are ordered in columns just use \n\nx=your_variable[:,0] y=your_variable[:,1]\n\ni think","Q_Score":0,"Tags":"python","A_Id":53763397,"CreationDate":"2018-12-13T13:43:00.000","Title":"python, dictionaries how to get the first value of the first key","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I used sudo apt-get install python3.6-tk and it works fine. Tkinter works if I open python in terminal, but I cannot get it installed on my Pycharm project. pip install command says it cannot find Tkinter. I cannot find python-tk in the list of possible installs either. \nIs there a way to get Tkinter just standard into every virtualenv when I make a new project in Pycharm?\nEdit: on Linux Mint\nEdit2: It is a clear problem of Pycharm not getting tkinter guys. If I run my local python file from terminal it works fine. Just that for some reason Pycharm cannot find anything tkinter related.","AnswerCount":6,"Available Count":1,"Score":-0.0333209931,"is_accepted":false,"ViewCount":58399,"Q_Id":53797598,"Users Score":-1,"Answer":"Python already has tkinter installed. It is a base module, like random or time, therefore you don't need to install it.","Q_Score":14,"Tags":"python,tkinter,pycharm","A_Id":53797731,"CreationDate":"2018-12-15T21:55:00.000","Title":"how to install tkinter with Pycharm?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am using python in linux and tried to use command line to print out the output log while redirecting the output and error to a txt.file. However, after I searched and tried the methods such as \npython [program] 2>&1 | tee output.log\nBut it just redirected the output the the output.log and the print content disappeared. I wonder how I could print the output to console while save\/redirect them to output.log ? 
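The fit-on-D1, transform-D2 recipe from the tfidf answer above, sketched with scikit-learn's TfidfVectorizer; both vectors then share D1's vocabulary, so their shapes match and cosine similarity is well defined.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

d1 = "the cat sat on the mat"
d2 = "the dog sat on the log"

vec = TfidfVectorizer()
v1 = vec.fit_transform([d1])   # fit learns D1's vocabulary
v2 = vec.transform([d2])       # transform reuses it -> same column count

print(v1.shape, v2.shape)               # equal widths, so comparable
print(cosine_similarity(v1, v2)[0, 0])  # similarity of D2 relative to D1
```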
This would be useful when we hope to tune parameters while keeping an eye on the reported loss and parameter values.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":125,"Q_Id":53825380,"Users Score":0,"Answer":"You can create a screen like this: screen -L, and then run the Python script in this screen, which will print the output to the console and also save it to the file screenlog.0. You can leave the screen by pressing Ctrl+A+D while the script is running, and check the script output by reattaching to the screen with screen -r. Also, in the screen, you won't be able to scroll past the current screen view.","Q_Score":0,"Tags":"python,linux","A_Id":53825451,"CreationDate":"2018-12-18T01:57:00.000","Title":"Print output to console while redirect the output to a file in linux","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":1,"REVIEW":0},{"Question":"I have one more query. \nHere are some example lines:\n\n[1,12:12] call basic_while1() Error Code: 1046. No database selected \n[1,12:12] call add() Asdfjgg Error Code: 1046. No database selected\n[1,12:12] call add()\n[1,12:12]\nError Code: 1046. No database selected\nNow I want to get output like this:\n['1','12:12',\"call basic_while1\"] , ['1','12:12', 'call add() Asdfjgg'],['1','12:12', 'call add()'],['1','12:12'],['','','',' Error Code: 1046. No database selected']\n\nI used r'^\\[(\\d+),(\\s[0-9:]+)\\]\\s+(.+)' as my main regex, then modified it as I saw fit, but it didn't help.\nI want to cut off everything from \"Error Code\" onwards.\nHow do I do that?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":233,"Q_Id":53830824,"Users Score":0,"Answer":"Basically, you asked to get everything before the \"Error Code\".\n\nI want to cut off everything from \"Error Code\" onwards.\n\nSo it is simple. Try find = re.search(r'(.*?)(?:\\sError Code|$)', s); find.group(1) will give you '[1,12:12] call add() Asdfjgg', which is what you wanted. (The lazy (.*?) plus the alternation stops the match at the first \" Error Code\", or at the end of the string when there is none.)\nIf, after you get that string, you want the list that you requested: \ndesired_list = find.group(1).replace('[','').replace(']','').replace(',',' ').split()","Q_Score":0,"Tags":"python,regex","A_Id":53831362,"CreationDate":"2018-12-18T10:17:00.000","Title":"Regex for Sentences in python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I wanted to install the numpy package for python 3.5 on my Mac OS High Sierra, but I can't seem to make it work.\nI have it on python2.7, but I would also like to install it for the next versions. 
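Besides screen, a pure-Python alternative for the console-plus-logfile question above is to duplicate sys.stdout inside the script (note also that with pipes, stdout buffering is often why output seems to vanish; python -u disables it). A minimal sketch:

```python
import sys

class Tee:
    """Duplicate writes to the real stdout and to a log file."""
    def __init__(self, path):
        self.stdout = sys.stdout
        self.log = open(path, "a")
    def write(self, text):
        self.stdout.write(text)
        self.log.write(text)
    def flush(self):
        self.stdout.flush()
        self.log.flush()

sys.stdout = Tee("output.log")
print("epoch 1: loss=0.53")   # shows on the console and lands in output.log
```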
\nCurrently, I have installed python 2.7, python 3.5, and python 3.7.\nI tried to install numpy using:\n\nbrew install numpy --with-python3 (no error)\nsudo port install py35-numpy@1.15.4 (no error)\nsudo port install py37-numpy@1.15.4 (no error)\npip3.5 install numpy (gives \"Could not find a version that satisfies the requirement numpy (from versions: )\nNo matching distribution found for numpy\" )\n\nI can tell that it is not installed because when I type python3 and then import numpy as np gives \"ModuleNotFoundError: No module named 'numpy'\"\nAny ideas on how to make it work?\nThanks in advance.","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":3167,"Q_Id":53842426,"Users Score":1,"Answer":"First, you need to activate the virtual environment for the version of python you wish to run. After you have done that then just run \"pip install numpy\" or \"pip3 install numpy\".\nIf you used Anaconda to install python then, after activating your environment, type conda install numpy.","Q_Score":2,"Tags":"python,python-3.x,macos,numpy","A_Id":53844061,"CreationDate":"2018-12-18T23:09:00.000","Title":"install numpy on python 3.5 Mac OS High sierra","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I wanted to install the numpy package for python 3.5 on my Mac OS High Sierra, but I can't seem to make it work.\nI have it on python2.7, but I would also like to install it for the next versions. \nCurrently, I have installed python 2.7, python 3.5, and python 3.7.\nI tried to install numpy using:\n\nbrew install numpy --with-python3 (no error)\nsudo port install py35-numpy@1.15.4 (no error)\nsudo port install py37-numpy@1.15.4 (no error)\npip3.5 install numpy (gives \"Could not find a version that satisfies the requirement numpy (from versions: )\nNo matching distribution found for numpy\" )\n\nI can tell that it is not installed because when I type python3 and then import numpy as np gives \"ModuleNotFoundError: No module named 'numpy'\"\nAny ideas on how to make it work?\nThanks in advance.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":3167,"Q_Id":53842426,"Users Score":0,"Answer":"If running pip3.5 --version or pip3 --version works, what is the output when you run pip3 freeze? If there is no output, it indicates that there are no packages installed for the Python 3 environment and you should be able to install numpy with pip3 install numpy.","Q_Score":2,"Tags":"python,python-3.x,macos,numpy","A_Id":53928674,"CreationDate":"2018-12-18T23:09:00.000","Title":"install numpy on python 3.5 Mac OS High sierra","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I've got the updated Python VSCode extension installed and it works great. I'm able to use the URL with the token to connect to a remote Jupyter notebook. I just cannot seem to figure out how to change the kernel on the remote notebook for use in VSCode. 
\nIf I connect to the remote notebook through a web browser, I can see my two environments through the GUI and change kernels. Is there a similar option in the VSCode extension?","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":2677,"Q_Id":53854464,"Users Score":0,"Answer":"Run the following command in vscode: \nPython: Select interpreter to start Jupyter server\nIt will allow you to choose the kernel that you want.","Q_Score":3,"Tags":"python,visual-studio-code,jupyter","A_Id":60310074,"CreationDate":"2018-12-19T15:33:00.000","Title":"Python Vscode extension - can't change remote jupyter notebook kernel","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I've got the updated Python VSCode extension installed and it works great. I'm able to use the URL with the token to connect to a remote Jupyter notebook. I just cannot seem to figure out how to change the kernel on the remote notebook for use in VSCode. \nIf I connect to the remote notebook through a web browser, I can see my two environments through the GUI and change kernels. Is there a similar option in the VSCode extension?","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":2677,"Q_Id":53854464,"Users Score":0,"Answer":"The command that worked for me in vscode:\nNotebook: Select Notebook Kernel","Q_Score":3,"Tags":"python,visual-studio-code,jupyter","A_Id":69253401,"CreationDate":"2018-12-19T15:33:00.000","Title":"Python Vscode extension - can't change remote jupyter notebook kernel","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I would like to develop a trend following strategy via back-testing a universe of stocks; lets just say all NYSE or S&P500 equities. I am asking this question today because I am unsure how to handle the storage\/organization of the massive amounts of historical price data. \nAfter multiple hours of research I am here, asking for your experience and awareness. I would be extremely grateful for any information\/awareness you can share on this topic\n\nPersonal Experience background:\n-I know how to code. Was a Electrical Engineering major, not a CS major.\n-I know how to pull in stock data for individual tickers into excel. \nFamiliar with using filtering and custom studies on ThinkOrSwim.\nApplied Context: \nFrom 1995 to today lets evaluate the best performing equities on a relative strength\/momentum basis. We will look to compare many technical characteristics to develop a strategy. The key to this is having data for a universe of stocks that we can run backtests on using python, C#, R, or any other coding language. We can then determine possible strategies by assesing the returns, the omega ratio, median excess returns, and Jensen's alpha (measured weekly) of entries and exits that are technical driven. \n\nHere's where I am having trouble figuring out what the next step is:\n-Loading data for all S&P500 companies into a single excel workbook is just not gonna work. 
It's too much data for Excel to handle, I feel like. Each ticker is going to have multiple MB of price data. \n-What is the best way to get and then store the price data for each ticker in the universe? Are we looking at something like SQL or Microsoft Access here? I don't know; I don't have enough awareness on the subject of handling lots of data like this. What are your thoughts? \n\nI have used ToS to filter stocks based off of true\/false parameters over a period of time in the past; however the capabilities of ToS are limited. \nI would like a more flexible backtesting engine, like code written in python or C#. Not sure if Rscript is of any use. - Maybe there are libraries out there that I do not have awareness of that would make this all possible? If there are, let me know.\nI am aware that Quantopia and other web based Quant platforms are around. Are these my best bets for backtesting? Any thoughts on them? \n\nAm I making this too complicated? \nBacktesting a strategy on a single equity or several equities isn't a problem in Excel, ToS, or even Tradingview. But with lots of data I'm not sure what the best option is for storing that data and then using a python script or something to perform the back test. \n\nRandom final thought: Ultimately I would like to explore some AI assistance with optimizing strategies that were created based off parameters. I know this is a thing but not sure where to learn more about this. If you do, please let me know.\n\nThank you guys. I hope this wasn't too much. If you can share any knowledge to increase my awareness on the topic I would really appreciate it.\nTwitter:@b_gumm","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":603,"Q_Id":53878551,"Users Score":0,"Answer":"The amount of data is too much for EXCEL or CALC. Even if you want to screen only 500 stocks from the S&P 500, you will get 2.2 million rows (approx. 220 days\/year * 20 years * 500 stocks). For this amount of data, you should use a SQL database like MySQL. It is performant enough to handle this amount of data. But you have to find a way to update it. If you fetch the complete time series daily and store it in your database, this process can take approx. 1 hour. You could also use delta downloads, but be aware of corporate actions (e.g. splits). \nI don't know Quantopia, but I know a similar backtesting service for which I created a python backtesting script last year. The outcome was quite different from what I had expected. The research result was that the backtesting service was calculating wrong results because of wrong data. So be cautious about the results.","Q_Score":0,"Tags":"python,excel,stocks,universe,back-testing","A_Id":53965459,"CreationDate":"2018-12-21T02:43:00.000","Title":"Backtesting a Universe of Stocks","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":1},{"Question":"I am working on a tool for my company created to get data from our Facebook publications. It has not been working for a while, so I have to get all the historical data from June to November 2018. 
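To make the storage suggestion in the backtesting answer above concrete, here is a minimal sketch of a daily-price table; it uses the stdlib sqlite3 module purely to stay self-contained (the answer recommends MySQL, and the schema and sample row are assumptions):

```python
import sqlite3

conn = sqlite3.connect("prices.db")
conn.execute(
    """CREATE TABLE IF NOT EXISTS daily_price (
           ticker     TEXT NOT NULL,
           trade_date TEXT NOT NULL,  -- ISO date, e.g. '2018-12-21'
           open REAL, high REAL, low REAL, close REAL,
           volume INTEGER,
           PRIMARY KEY (ticker, trade_date)
       )"""
)
# Approx. 220 trading days/year * 20 years * 500 tickers = 2.2 million rows,
# which a SQL database handles comfortably.
conn.execute(
    "INSERT OR REPLACE INTO daily_price VALUES (?, ?, ?, ?, ?, ?, ?)",
    ("AAPL", "2018-12-21", 156.9, 158.2, 149.6, 150.7, 95744600),  # illustrative values
)
conn.commit()
conn.close()
```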
\nMy two scripts (one that get title and type of publication, and the other that get the number of link clicks) are working well to get data from last pushes, but when I try to add a date range in my Graph API request, I have some issues:\n\nthe regular query is [page_id]\/posts?fields=id,created_time,link,type,name\nthe query for historical data is [page_id]\/posts?fields=id,created_time,link,type,name,since=1529280000&until=1529712000, as the API is supposed to work with unixtime\nI get perfect results for regular use, but the results for historical data only shows video publications in Graph API Explorer, with a debug message saying:\n\n\nThe since field does not exist on the PagePost object.\n\nSame for \"until\" field when not using \"since\". I tried to replace \"posts\/\" with \"feed\/\" but it returned the exact same result...\nDo you have any idea of how to get all the publications from a Page I own on a certain date range?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":356,"Q_Id":53883849,"Users Score":0,"Answer":"So it seems that it is not possible to request this kind of data unfortunately, third party services must be used...","Q_Score":0,"Tags":"python,facebook,facebook-graph-api,unix-timestamp","A_Id":53889179,"CreationDate":"2018-12-21T11:15:00.000","Title":"Date Range for Facebook Graph API request on posts level","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I'm writing a script for automatizing some tasks at my job. However, I need to make my script portable and try it on different screen resolution. \nSo far right now I've tried to multiply my coordinate with the ratio between the old and new resolutions, but this doesn't work properly.\nDo you know how I can convert my X, Y coordinates for mouse's clicks make it works on different resolution?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":1023,"Q_Id":53900943,"Users Score":1,"Answer":"Quick question: Are you trying to get it to click on certain buttons? (i.e. buttons that look the same on every computer you plug it into) And by portable, do you mean on a thumb drive (usb)?\nYou may be able to take an image of the button (i.e. cropping a screenshot), pass it on to the opencv module, one of the modules has an Image within Image searching ability. you can pass that image along with a screenshot (using pyautogui.screenshot()) and it will return the (x,y) coordinates of the button, pass that on to pyautogui.moveto(x,y) and pyautogui.click(), it might be able to work. you might have to describe the action you are trying to get Pyautogui to do a little better.","Q_Score":2,"Tags":"python-3.x,resolution,pyautogui","A_Id":54048053,"CreationDate":"2018-12-23T03:14:00.000","Title":"Pyautogui mouse click on different resolution","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I need to extract the text just after strong tag from html page given below? how can i do it using beautiful soup. 
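For reference against the Graph API question above, this is one way to produce the unix timestamps it mentions; the accepted answer's conclusion, that posts could not be filtered by date range this way, still stands, and this only shows the time conversion:

```python
from datetime import datetime, timezone

# Midnight UTC on the boundary dates, as unixtime.
since = int(datetime(2018, 6, 18, tzinfo=timezone.utc).timestamp())
until = int(datetime(2018, 6, 23, tzinfo=timezone.utc).timestamp())
print(since, until)  # 1529280000 1529712000 (the values used in the question)
```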
It is causing me a problem, as it doesn't have any class or id, so the only way to select this tag is using text.\n{strong}Name:{\/strong} Sam smith{br}\nRequired result:\nSam smith","AnswerCount":1,"Available Count":1,"Score":-0.1973753202,"is_accepted":false,"ViewCount":77,"Q_Id":53914377,"Users Score":-1,"Answer":"Thanks for all your answers, but I was able to do this with the following:\nb_el = soup.find('strong',text='Name:')\nprint b_el.next_sibling\nThis works fine for me. This prints just the next sibling; how can I print the next 2 siblings, is there any way?","Q_Score":0,"Tags":"python,web-scraping,beautifulsoup","A_Id":53914744,"CreationDate":"2018-12-24T13:58:00.000","Title":"extracting text just after a particular tag using beautifulsoup?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I just started with AWS and I want to train my own model with my own dataset. I have my model as a Keras model with a TensorFlow backend in Python. I read some documentation, and it says I need a Docker image to load my model. So, how do I convert a Keras model into a Docker image? I searched through the internet but found nothing that explained the process clearly. How do I make a Docker image of a Keras model, and how do I load it into SageMaker? And also, how do I load my data from an h5 file into an S3 bucket for training? Can anyone please help me get a clear explanation?","AnswerCount":4,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":2089,"Q_Id":53921454,"Users Score":0,"Answer":"You can convert your Keras model to a tf.estimator and train using the TensorFlow framework estimators in Sagemaker.\nThis conversion is pretty basic though. I reimplemented my models in TensorFlow using the tf.keras API, which makes the model nearly identical, and trained with the Sagemaker TF estimator in script mode.\nMy initial approach using pure Keras models was based on bring-your-own-algo containers similar to the answer by Matthew Arthur.","Q_Score":0,"Tags":"python,amazon-web-services,tensorflow,keras,amazon-sagemaker","A_Id":54089983,"CreationDate":"2018-12-25T10:26:00.000","Title":"How to train your own model in AWS Sagemaker?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am coming from NodeJS and learning Python, and I was wondering how to properly install the packages in the requirements.txt file locally in the project.\nFor node, this is done by managing and installing the packages in package.json via npm install. However, the convention for a Python project seems to be to add packages to a directory called lib. When I do pip install -r requirements.txt I think this does a global install on my computer, similar to node's npm install -g global install. 
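A self-contained version of the BeautifulSoup approach above, including the follow-up about reading more than one following sibling; the HTML is the question's snippet with real tags substituted for the {strong}/{br} placeholders:

```python
from bs4 import BeautifulSoup

html = "<strong>Name:</strong> Sam smith<br/>extra text"
soup = BeautifulSoup(html, "html.parser")

b_el = soup.find("strong", text="Name:")
print(b_el.next_sibling)  # " Sam smith"

# For the next two (or more) siblings, iterate instead of chaining attributes:
for sib in list(b_el.next_siblings)[:2]:
    print(sib)
```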
How can I install the dependencies of my requirements.txt file in a folder called lib?","AnswerCount":3,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":20109,"Q_Id":53925660,"Users Score":27,"Answer":"use this command\npip install -r requirements.txt -t ","Q_Score":23,"Tags":"python,python-2.7,pip,requirements.txt","A_Id":53925671,"CreationDate":"2018-12-25T21:14:00.000","Title":"Installing Python Dependencies locally in project","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I need to check-in the file which is in client workspace. Before check-in i need to verify if the file has been changed. Please tell me how to check this.","AnswerCount":3,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":596,"Q_Id":53931642,"Users Score":2,"Answer":"Use the p4 diff -sr command. This will do a diff of opened files and return the names of ones that are unchanged.","Q_Score":1,"Tags":"python-2.7,perforce,p4python","A_Id":53935076,"CreationDate":"2018-12-26T11:44:00.000","Title":"P4Python check if file is modified after check-out","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have a problem with using the rqt_image_view package in ROS. Each time when I type rqt_image_view or rosrun rqt_image_view rqt_image_view in terminal, it will return: \n\nTraceback (most recent call last):\n File \"\/opt\/ros\/kinetic\/bin\/rqt_image_view\", line 16, in \n plugin_argument_provider=add_arguments))\n File \"\/opt\/ros\/kinetic\/lib\/python2.7\/dist-packages\/rqt_gui\/main.py\", line 59, in main\n return super(Main, self).main(argv, standalone=standalone, plugin_argument_provider=plugin_argument_provider, plugin_manager_settings_prefix=str(hash(os.environ['ROS_PACKAGE_PATH'])))\n File \"\/opt\/ros\/kinetic\/lib\/python2.7\/dist-packages\/qt_gui\/main.py\", line 338, in main\n from python_qt_binding import QT_BINDING\n ImportError: cannot import name QT_BINDING\n\nIn the \/.bashrc file, I have source :\n\nsource \/opt\/ros\/kinetic\/setup.bash\n source \/home\/kelu\/Dropbox\/GET_Lab\/leap_ws\/devel\/setup.bash --extend\n source \/eda\/gazebo\/setup.bash --extend\n\nThey are the default path of ROS, my own working space, the robot simulator of our university. I must use all of them. I have already finished many projects with this environmental variable setting. 
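One detail the accepted requirements.txt answer above leaves implicit: after `pip install -r requirements.txt -t lib`, Python still has to be told where that folder is at runtime. A minimal sketch (the `lib` name comes from the question; the `requests` import is just a hypothetical dependency):

```python
import os
import sys

# Put the locally vendored packages ahead of the global site-packages.
sys.path.insert(0, os.path.join(os.path.dirname(os.path.abspath(__file__)), "lib"))

import requests  # assuming requests was listed in requirements.txt
print(requests.__file__)  # should point inside ./lib
```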
However, when I want to use the package rqt_image_view today, it returns the above error info.\nWhen I run echo $ROS_PACKAGE_PATH, I get the return:\n\n\/eda\/gazebo\/ros\/kinetic\/share:\/home\/kelu\/Dropbox\/GET_Lab\/leap_ws\/src:\/opt\/ros\/kinetic\/share\n\nAnd echo $PATH\n\n\/usr\/local\/cuda\/bin:\/opt\/ros\/kinetic\/bin:\/usr\/local\/cuda\/bin:\/usr\/local\/cuda\/bin:\/home\/kelu\/bin:\/home\/kelu\/.local\/bin:\/usr\/local\/sbin:\/usr\/local\/bin:\/usr\/sbin:\/usr\/bin:\/sbin:\/bin:\/usr\/games:\/usr\/local\/games:\/snap\/bin\n\nThen I only source the \/opt\/ros\/kinetic\/setup.bash ,the rqt_image_view package runs!!\nIt seems that, if I want to use rqt_image_view, then I can not source both \/opt\/ros\/kinetic\/setup.bash and \/home\/kelu\/Dropbox\/GET_Lab\/leap_ws\/devel\/setup.bash at the same time.\nCould someone tell me how to fix this problem? I have already search 5 hours in google and haven't find a solution.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":547,"Q_Id":53937342,"Users Score":0,"Answer":"Different solutions to try:\n\nIt sounds like the first path \/eda\/gazebo\/ros\/kinetic\/share or \/home\/kelu\/Dropbox\/GET_Lab\/leap_ws\/src has an rqt_image_view package that is being used. Try to remove that dependency.\nHave you tried switching the source files being sourced? This depends on how the rqt_image_view package was built, such as by source or through a package manager. \n\nInitially, it sounds like there is a problem with the paths being searched or wrong package being run since the package works with the default ROS environment setup.","Q_Score":0,"Tags":"python,linux,bash,ubuntu,ros","A_Id":54034106,"CreationDate":"2018-12-26T21:26:00.000","Title":"How can I source two paths for the ROS environmental variable at the same time?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":1,"REVIEW":0},{"Question":"I'm using scipy curve_fit to curve a line for retention. however, I found the result line may produce negative number. how can i add some constrain?\nthe 'bounds' only constrain parameters not the results y","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":149,"Q_Id":53942983,"Users Score":0,"Answer":"One of the simpler ways to handle negative value in y, is to make a log transformation. Get the best fit for log transformed y, then do exponential transformation for actual error in the fit or for any new value prediction.","Q_Score":0,"Tags":"python,scipy","A_Id":53944352,"CreationDate":"2018-12-27T09:49:00.000","Title":"how to constrain scipy curve_fit in positive result","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am using vpython library in spyder. After importing the library when I call simple function like print('x') or carry out any assignment operation and execute the program, immediately a browser tab named localhost and port address opens up and I get the output in console {if I used print function}. 
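A minimal sketch of the log-transform idea from the curve_fit answer above; the linear-in-log model (i.e. exponential decay) and the sample retention values are assumptions:

```python
import numpy as np
from scipy.optimize import curve_fit

def log_model(x, a, b):
    # log(y) = a - b*x, i.e. y = exp(a) * exp(-b*x)
    return a - b * x

x = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
y = np.array([0.80, 0.55, 0.35, 0.20, 0.10])  # illustrative retention values

# Fit in log space, then transform back: exp(...) is strictly positive.
params, _ = curve_fit(log_model, x, np.log(y))
y_pred = np.exp(log_model(x, *params))
print(y_pred)
```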
\nI would like to know if there is any option to prevent the tab from opening and is it possible to make the tab open only when it is required.\nPS : I am using windows 10, chrome as browser, python 3.5 and spyder 3.1.4.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":238,"Q_Id":53943901,"Users Score":0,"Answer":"There is work in progress to prevent the opening of a browser tab when there are no 3D objects or graph to display. I don't know when this will be released.","Q_Score":1,"Tags":"python,spyder,vpython","A_Id":53982995,"CreationDate":"2018-12-27T10:57:00.000","Title":"Vpython using Spyder : how to prevent browser tab from opening?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I get this error after already having installed autofocus when I try to run a .py file from the command line that contains the line:\nfrom autofocus import Autofocus2D\nOutput:\nImportError: cannot import name 'AFAVSignature'\nIs anyne familiar with this package and how to import it?\nThanks","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":44,"Q_Id":53948358,"Users Score":0,"Answer":"It doesn't look like the library is supported for python 3. I was getting the same error, but removed that line from init.py and found that there was another error with of something like 'print e' not working, so I put the line back in and imported with python2 and it worked.","Q_Score":0,"Tags":"python-3.x","A_Id":54234661,"CreationDate":"2018-12-27T16:54:00.000","Title":"ImportError: cannot import name 'AFAVSignature'","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":1,"REVIEW":0},{"Question":"I have several virtual environment in my computer and sometimes I am in doubt about which python virtual environment I am using. Is there an easy way to find out which virtual environment I am connected to?","AnswerCount":3,"Available Count":2,"Score":1.0,"is_accepted":false,"ViewCount":33051,"Q_Id":53952214,"Users Score":8,"Answer":"From a shell prompt, you can just do echo $VIRTUAL_ENV (or in Windows cmd.exe, echo %VIRTUAL_ENV%).\nFrom within Python, sys.prefix provides the root of your Python installation (the virtual environment if active), and sys.executable tells you which Python executable is running your script.","Q_Score":27,"Tags":"python,python-2.7,python-venv","A_Id":53952369,"CreationDate":"2018-12-28T00:04:00.000","Title":"how can I find out which python virtual environment I am using?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have several virtual environment in my computer and sometimes I am in doubt about which python virtual environment I am using. 
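The checks from the virtualenv answer above, collected into one snippet that works from inside any interpreter:

```python
import os
import sys

print(sys.executable)                 # the python binary that is running
print(sys.prefix)                     # root of the active (virtual) environment
print(os.environ.get("VIRTUAL_ENV"))  # set by venv/virtualenv activate scripts

# In Python 3, a venv is active exactly when prefix and base_prefix differ.
print("venv active:", sys.prefix != getattr(sys, "base_prefix", sys.prefix))
```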
Is there an easy way to find out which virtual environment I am connected to?","AnswerCount":3,"Available Count":2,"Score":1.0,"is_accepted":false,"ViewCount":33051,"Q_Id":53952214,"Users Score":10,"Answer":"Usually it's set to display in your prompt. You can also try typing which python or which pip in your terminal to see if it points to your venv location, and which one. (Use where instead of which on Windows.)","Q_Score":27,"Tags":"python,python-2.7,python-venv","A_Id":53952232,"CreationDate":"2018-12-28T00:04:00.000","Title":"how can I find out which python virtual environment I am using?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I've made a mistake with my Django app and messed up my model.\nI want to delete it & then recreate it - how do I do that?\nI get this when I try to migrate - I just want to drop it:\nrelation \"netshock_todo\" already exists\nThanks in advance","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":173,"Q_Id":53978491,"Users Score":1,"Answer":"Delete all of your migration files except __init__.py.\nThen go to the database, find the django_migrations table, and delete all rows in it. Then run the makemigrations and migrate commands.","Q_Score":0,"Tags":"django,python-3.x","A_Id":53978508,"CreationDate":"2018-12-30T14:34:00.000","Title":"how to delete django relation and rebuild model","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am trying to use Scrapy shell to try and figure out the selectors for zone-h.org. I run scrapy shell 'webpage'; afterwards I tried to view the content to be sure that it is downloaded. But all I can see is a dash icon (-). It doesn't download the page. I tried to enter the website to check if my connection to the website is somehow blocked, but it was reachable. I tried setting the user agent to something more generic like Chrome, but no luck there either. The website is blocking me somehow but I don't know how I can bypass it. I dug through the website to see if they block crawling and it doesn't say it is forbidden to crawl it. 
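A sketch of the file-deletion half of the accepted Django answer above. It deletes files, so review it before running; it assumes the standard layout where each app has its own migrations/ folder under the project root:

```python
from pathlib import Path

project_root = Path(".")  # run from the Django project root

for migrations_dir in project_root.glob("*/migrations"):
    for f in migrations_dir.glob("*.py"):
        if f.name != "__init__.py":
            print("deleting", f)
            f.unlink()

# Afterwards, clear the django_migrations table in the database, then run:
#   python manage.py makemigrations
#   python manage.py migrate
```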
Can anyone help out?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":621,"Q_Id":53988585,"Users Score":0,"Answer":"Can you use scrapy shell \"webpage\" on another webpage that you know works\/doesn't block scraping?\nHave you tried using the view(response) command to open up what scrapy sees in a web browser?\nWhen you go to the webpage using a normal browser, are you redirected to another, final homepage?\n- if so, try using the final homepage's URL in your scrapy shell command\nDo you have firewalls that could interfere with a Python\/commandline app from connecting to the internet?","Q_Score":1,"Tags":"python,scrapy,web-crawler","A_Id":53989874,"CreationDate":"2018-12-31T14:33:00.000","Title":"Scrapy shell doesn't crawl web page","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"On my windows machine I created a virtual environement in conda where I run python 3.6. I want to permanently add a folder to the virtual python path environment. If I append something to sys.path it is lost on exiting python. \nOutside of my virtual enviroment I can just add to user variables by going to advanced system settings. I have no idea how to do this within my virtual enviroment.\nAny help is much appreciated.","AnswerCount":2,"Available Count":1,"Score":-0.0996679946,"is_accepted":false,"ViewCount":4348,"Q_Id":54031169,"Users Score":-1,"Answer":"If you are on Windows 10+, this should work:\n1) Click on the Windows button on the screen or on the keyboard, both in the bottom left section. \n2) Type \"Environment Variables\" (without the quotation marks, of course).\n3) Click on the option that says something like \"Edit the System Environment Variables\"\n4) Click on the \"Advanced Tab,\" and then click \"Environment Variables\" (Near the bottom)\n5) Click \"Path\" in the top box - it should be the 3rd option - and then click \"Edit\" (the top one)\n6) Click \"New\" at the top, and then add the path to the folder you want to create. \n7) Click \"Ok\" at the bottom of all the pages that were opened as a result of the above-described actions to save. \nThat should work, please let me know in the comments if it doesn't.","Q_Score":3,"Tags":"python,virtualenv","A_Id":54031279,"CreationDate":"2019-01-03T23:22:00.000","Title":"How to add to pythonpath in virtualenvironment","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I'm wondering about how a dash app works in terms of loading data, parsing and doing initial calcs when serving to a client who logs onto the website.\nFor instance, my app initially loads a bunch of static local csv data, parses a bunch of dates and loads them into a few pandas data frames. This data is then displayed on a map for the client.\nDoes the app have to reload\/parse all of this data every time a client logs onto the website? 
Or does the dash server load all the data only the first time it is instantiated and then just dish it out every time a client logs on?\nIf the data reloads every time, I would then use quick parsers like udatetime, but if not, id prefer to use a convenient parser like pendulum which isn't as efficient (but wouldn't matter if it only parses once).\nI hope that question makes sense. Thanks in advance!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":50,"Q_Id":54035114,"Users Score":2,"Answer":"The only thing that is called on every page load is the function you can assign to app.layout. This is useful if you want to display dynamic content like the current date on your page.\nEverything else is just executed once when the app is starting. \nThis means if you load your data outside the app.layout (which I assume is the case) everything is loaded just once.","Q_Score":0,"Tags":"python,performance,plotly-dash","A_Id":54173014,"CreationDate":"2019-01-04T08:03:00.000","Title":"Do Dash apps reload all data upon client log in?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"So I have an assignment to build a web interface for a smart sensor, \nI've already written the python code to read the data from the sensor and write it into sqlite3, control the sensor etc. \nI've built the HTML, CSS template and implemented it into Django.\nMy goal is to run the sensor reading script pararel to the Django interface on the same server, so the server will do all the communication with the sensor and the user will be able to read and configure the sensor from the web interface. (Same logic as modern routers - control and configure from a web interface)\nQ: Where do I put my sensor_ctl.py script in my Django project and how I make it to run independent on the server. (To read sensor data 24\/7)\nQ: Where in my Django project I use my classes and method from sensor_ctl.py to write\/read data to my djangos database instead of the local sqlite3 database (That I've used to test sensor_ctl.py)","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":39,"Q_Id":54057396,"Users Score":1,"Answer":"Place your code in app\/appname\/management\/commands folder. Use Official guide for management commands. Then you will be able to use your custom command like this:\n.\/manage getsensorinfo\nSo when you will have this command registered, you can just put in in cron and it will be executed every minute.\nSecondly you need to rewrite your code to use django ORM models like this:\nStat.objects.create(temp1=60,temp2=70) instead of INSERT into....","Q_Score":1,"Tags":"django,python-3.x","A_Id":54057526,"CreationDate":"2019-01-05T23:50:00.000","Title":"How do i implement Logic to Django?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have a Flask app that uses selenium to get data from a website. I have spent 10+ hours trying to get heroku to work with it, but no success. My main problem is selenium. 
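A sketch of the distinction drawn in the Dash answer above: module-level work runs once at server start, while a callable assigned to app.layout runs on every page load (the CSV name and column are assumptions):

```python
import dash
from dash import html
import pandas as pd

app = dash.Dash(__name__)

# Runs once, at startup: load and parse the static data here,
# so an expensive date parser only pays its cost a single time.
df = pd.read_csv("stations.csv", parse_dates=["timestamp"])

def serve_layout():
    # Runs on every page load; keep only cheap per-request work here.
    return html.Div(f"{len(df)} rows loaded")

app.layout = serve_layout  # assign the function itself, not its result

if __name__ == "__main__":
    app.run_server(debug=True)
```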
with heroku, there is a \"buildpack\" that you use to get selenium working with it, but with all the other hosting services, I have found no information. I just would like to know how to get selenium to work with any other recommended service than heroku. Thank you.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":777,"Q_Id":54058237,"Users Score":0,"Answer":"You need hosting service that able to install Chrome, chromedriver and other dependencies. Find for Virtual Private hosting (VPS), or Dedicated Server or Cloud Hosting but not Shared hosting.","Q_Score":1,"Tags":"python,selenium,heroku,hosting","A_Id":54059218,"CreationDate":"2019-01-06T02:49:00.000","Title":"How does selenium work with hosting services?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I'm trying to make a calculator in python, so when you type x (root) y it will give you the x root of y, e.g. 4 (root) 625 = 5. \nI'm aware of how to do math.sqrt() but is there a way to do other roots?","AnswerCount":2,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":2857,"Q_Id":54060609,"Users Score":4,"Answer":"If you want to 625^(1\/4){which is the same as 4th root of 625}\nthen you type 625**(1\/4)\n** is the operator for exponents in python.\nprint(625**(1\/4))\nOutput:\n5.0\nTo generalize:\nif you want to find the xth root of y, you do:\ny**(1\/x)","Q_Score":4,"Tags":"python,math","A_Id":54060651,"CreationDate":"2019-01-06T10:28:00.000","Title":"How do I root in python (other than square root)?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have a dataset of ~10,000 rows of vehicles sold on a portal similar to Craigslist. The columns include price, mileage, no. of previous owners, how soon the car gets sold (in days), and most importantly a body of text that describes the vehicle (e.g. \"accident free, serviced regularly\"). \nI would like to find out which keywords, when included, will result in the car getting sold sooner. However I understand how soon a car gets sold also depends on the other factors especially price and mileage. \nRunning a TfidfVectorizer in scikit-learn resulted in very poor prediction accuracy. Not sure if I should try including price, mileage, etc. in the regression model as well, as it seems pretty complicated. Currently am considering repeating the TF-IDF regression on a particular segment of the data that is sufficiently huge (perhaps Toyotas priced at $10k-$20k).\nThe last resort is to plot two histograms, one of vehicle listings containing a specific word\/phrase and another for those that do not. The limitation here would be that the words that I choose to plot will be based on my subjective opinion.\nAre there other ways to find out which keywords could potentially be important? 
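The exponent rule from the root answer above, wrapped as a tiny helper; negative bases are deliberately left out, since fractional powers of negative numbers are complex:

```python
def root(x, y):
    """Return the x-th root of y, e.g. root(4, 625) == 5.0."""
    return y ** (1.0 / x)

print(root(4, 625))  # 5.0
print(root(2, 9))    # 3.0
```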
Thanks in advance.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":406,"Q_Id":54097067,"Users Score":1,"Answer":"As you mentioned, you can only do so much with the body of text, which limits how much influence the text has on selling the cars. \nEven though the model gives very poor prediction accuracy, you could go ahead and inspect the feature importances to understand which words drive the sales. \nInclude phrases in your tfidf vectorizer by setting the ngram_range parameter to (1,2).\nThis might give you a small indication of which phrases influence the sale of a car. \nI would also suggest setting the norm parameter of tfidf to None, to check if it has influence. By default, it applies the l2 norm. \nThe results will also depend on the model you are using. Try changing the model as a last option.","Q_Score":1,"Tags":"python,scikit-learn,nlp,regression,prediction","A_Id":54102609,"CreationDate":"2019-01-08T17:44:00.000","Title":"TF-IDF + Multiple Regression Prediction Problem","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am using PYNQ Linux on a Zedboard, and when I tried to run code in a Jupyter Notebook to load a model.h5 I got an error message:\n\"The kernel appears to have died. It will restart automatically\"\nI tried to upgrade keras and Jupyter but I still have the same error.\nI don't know how to fix this problem.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":491,"Q_Id":54113143,"Users Score":0,"Answer":"The model is too large to be loaded into memory, so the kernel has died.","Q_Score":2,"Tags":"python,jupyter-notebook","A_Id":59205501,"CreationDate":"2019-01-09T15:12:00.000","Title":"Linux Jupyter Notebook : \"The kernel appears to have died. It will restart automatically\"","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":1,"REVIEW":0},{"Question":"For homework in my basic python class, we have to start the python interpreter in interactive mode and type a statement. Then, we have to open IDLE and type a statement. I understand how to write statements in both, but I can't quite tell them apart. I see that there are two different desktop apps for python, one being python 3.7 (32-bit), and the other being IDLE. Which one is the interpreter, and how do I get it in interactive mode? Also, when I do open IDLE, do I put my statement directly in IDLE or do I open a 'new file' and do it like that? I'm just a bit confused about the differences between them all. But I do really want to learn this language! Please help!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1698,"Q_Id":54119661,"Users Score":2,"Answer":"Python, unlike some languages, can be written one line at a time, with you getting feedback after every line. This is called interactive mode. You will know you are in interactive mode if you see \">>>\" on the far left side of the window. 
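The two TfidfVectorizer settings suggested in the TF-IDF answer above, shown together (the listing texts are illustrative):

```python
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "accident free, serviced regularly",
    "serviced regularly, one previous owner",
    "minor accident, high mileage",
]  # illustrative vehicle descriptions

# Unigrams *and* bigrams, and no l2 normalisation, as suggested above.
vec = TfidfVectorizer(ngram_range=(1, 2), norm=None)
X = vec.fit_transform(docs)
print(X.shape)
print(sorted(vec.vocabulary_)[:5])  # a few of the learned terms and phrases
```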
This mode is really only useful for doing small tasks you don't think will come up again.\nMost developers write a whole program at once then save it with a name that ends in \".py\" and run it in an interpreter to get the results.","Q_Score":2,"Tags":"python-3.x","A_Id":54119897,"CreationDate":"2019-01-09T22:59:00.000","Title":"Difference between Python Interpreter and IDLE?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I received a data dump of the SQL database.\nThe data is formatted in an .sql file and is quite large (3.3 GB). I have no idea where to go from here. I have no access to the actual database and I don't know how to handle this .sql file in Python.\nCan anyone help me? I am looking for specific steps to take so I can use this SQL file in Python and analyze the data. \nTLDR; Received an .sql file and no clue how to process\/analyze the data that's in the file in Python. Need help in necessary steps to make the .sql usable in Python.","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":3296,"Q_Id":54131953,"Users Score":1,"Answer":"It would be an extraordinarily difficult process to try to construct any sort of Python program that would be capable of parsing the SQL syntax of any such of a dump-file and to try to do anything whatsoever useful with it.\n\"No. Absolutely not. Absolute nonsense.\" (And I have over 30 years of experience, including senior management.) You need to go back to your team, and\/or to your manager, and look for a credible way to achieve your business objective ... because, \"this isn't it.\"\nThe only credible thing that you can do with this file is to load it into another mySQL database ... and, well, \"couldn't you have just accessed the database from which this dump came?\" Maybe so, maybe not, but \"one wonders.\"\nAnyhow \u2013 your team and its management need to \"circle the wagons\" and talk about your credible options. Because, the task that you've been given, in my professional opinion, \"isn't one.\" Don't waste time \u2013 yours, or theirs.","Q_Score":3,"Tags":"python,mysql,sql","A_Id":54132286,"CreationDate":"2019-01-10T15:30:00.000","Title":"How to handle SQL dump with Python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I received a data dump of the SQL database.\nThe data is formatted in an .sql file and is quite large (3.3 GB). I have no idea where to go from here. I have no access to the actual database and I don't know how to handle this .sql file in Python.\nCan anyone help me? I am looking for specific steps to take so I can use this SQL file in Python and analyze the data. \nTLDR; Received an .sql file and no clue how to process\/analyze the data that's in the file in Python. 
Need help in necessary steps to make the .sql usable in Python.","AnswerCount":2,"Available Count":2,"Score":0.1973753202,"is_accepted":false,"ViewCount":3296,"Q_Id":54131953,"Users Score":2,"Answer":"Eventually I had to install MAMP to create a local mysql server. I imported the SQL dump with a program like SQLyog that let's you edit SQL databases. \nThis made it possible to import the SQL database in Python using SQLAlchemy, MySQLconnector and Pandas.","Q_Score":3,"Tags":"python,mysql,sql","A_Id":54251454,"CreationDate":"2019-01-10T15:30:00.000","Title":"How to handle SQL dump with Python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I'm coming to you with the following issue:\nI have a bunch of physical boxes onto which I still stick QR codes generated using a python module named qrcode. In a nutshell, what I would like to do is everytime someone wants to take the object contained in a box, he scans the qr code with his phone, then takes it and put it back when he is done, not forgetting to scan the QR code again.\nPretty simple, isn't it?\nI already have a django table containing all my objects.\nNow my question is related to the design. I suspect the easiest way to achieve that is to have a POST request link in the QR code which will create a new entry in a table with the name of the object that has been picked or put back, the time (I would like to store this information).\nIf that's the correct way to do, how would you approach it? I'm not too sure I see how to make a POST request with a QR code. Would you have any idea?\nThanks.\nPS: Another alternative I can think of would be to a link in the QR code to a form with a dummy button the user would click on. Once clicked the button would update the database. But I would fine a solution without any button more convenient...","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":4065,"Q_Id":54135030,"Users Score":3,"Answer":"The question boils down to a few choices: (a) what data do you want to encode into the QR code; (b) what app will you use to scan the QR code; and (c) how do you want the app to use \/ respond to the encoded data.\nIf you want your users to use off-the-shelf QR code readers (like free smartphone apps), then encoding a full URL to the appropriate API on your backend makes sense. Whether this should be a GET or POST depends on the QR code reader. I'd expect most to use GET, but you should verify that for your choice of app. That should be functionally fine, if you don't have any concerns about who should be able to scan the code.\nIf you want more control, e.g. you'd like to keep track of who scanned the code or other info not available to the server side just from a static URL request, you need a different approach. Something like, store the item ID (not URL) in the QR code; create your own simple QR code scanner app (many good examples exist) and add a little extra logic to that client, like requiring the user to log in with an ID + password, and build the URL dynamically from the item ID and the user ID. Many security variations possible (like JWT token) -- how you do that won't be dictated by the contents of the QR code. 
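The pipeline from the SQL-dump answer above (local MySQL server, then SQLAlchemy plus pandas), as a sketch; the connection string, credentials and table name are placeholders:

```python
import pandas as pd
from sqlalchemy import create_engine

# Local MySQL server (e.g. the one MAMP provides); credentials are placeholders.
engine = create_engine("mysql+mysqlconnector://user:password@localhost:3306/dumpdb")

df = pd.read_sql("SELECT * FROM some_table LIMIT 1000", engine)
print(df.head())
```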
You could do a lot of other things in that QR code scanner \/ client, like add GPS location, ask the user to indicate why or where they're taking the item, etc. \nSo you can choose between a simple way with no controls, and a more complex way that would allow you to layer in whatever other controls and extra data you need.","Q_Score":0,"Tags":"python,django,qr-code","A_Id":54136664,"CreationDate":"2019-01-10T18:42:00.000","Title":"Interfacing a QR code recognition to a django database","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"my data looks like this.\n0 199 1028 251 1449 847 1483 1314 23 1066 604 398 225 552 1512 1598\n1 1214 910 631 422 503 183 887 342 794 590 392 874 1223 314 276 1411\n2 1199 700 1717 450 1043 540 552 101 359 219 64 781 953\n10 1707 1019 463 827 675 874 470 943 667 237 1440 892 677 631 425\nHow can I read this file structure in python? I want to extract a specific column from rows. For example, If I want to extract value in the second row, second column, how can I do that? I've tried 'loadtxt' using data type string. But it requires string index slicing, so that I could not proceed because each column has different digits. Moreover, each row has a different number of columns. Can you guys help me?\nThanks in advance.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":55,"Q_Id":54142589,"Users Score":0,"Answer":"Use something like this to split it\nsplit2=[]\nsplit1=txt.split(\"\\n\")\nfor item in split1:\n split2.append(item.split(\" \"))","Q_Score":0,"Tags":"python,jupyter-notebook","A_Id":54142820,"CreationDate":"2019-01-11T08:09:00.000","Title":"How can I read a file having different column for each rows?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have a data set from telecom company having lots of categorical features. I used the pandas.get_dummies method to convert them into one hot encoded format with drop_first=True option. Now how can I use the predict function, test input data needs to be encoded in the same way, as the drop_first=True option also dropped some columns, how can I ensure that encoding takes place in similar fashion. \nData set shape before encoding : (7043, 21)\nData set shape after encoding : (7043, 31)","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":1013,"Q_Id":54145226,"Users Score":1,"Answer":"When not using drop_first=True you have two options:\n\nPerform the one-hot encoding before splitting the data in training and test set. (Or combine the data sets, perform the one-hot encoding, and split the data sets again).\nAlign the data sets after one-hot encoding: an inner join removes the features that are not present in one of the sets (they would be useless anyway). train, test = train.align(test, join='inner', axis=1)\n\nYou noted (correctly) that method 2 may not do what you expect because you are using drop_first=True. 
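A fleshed-out version of the split-based answer above for the ragged whitespace file (the file name is an assumption; indexing is 0-based):

```python
rows = []
with open("data.txt") as fh:
    for line in fh:
        parts = line.split()  # split() tolerates runs of spaces and ragged row widths
        if parts:
            rows.append([int(p) for p in parts])

# e.g. the value in the second row, second column:
print(rows[1][1])
```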
So you are left with method 1.","Q_Score":0,"Tags":"python,machine-learning,sklearn-pandas,one-hot-encoding","A_Id":54145335,"CreationDate":"2019-01-11T11:02:00.000","Title":"How to align training and test set when using pandas `get_dummies` with `drop_first=True`?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am working in python 3.7.0 through a 5.6.0 jupyter notebook inside Anaconda Navigator 1.9.2 running in a windows 7 environment. It seems like I am assuming a lot of overhead, and from the jupyter notebook, python doesn\u2019t see the anytree application module that I\u2019ve installed. (Anytree is working fine with python from my command prompt.)\nI would appreciate either 1) IDE recommendations or 2) advise as to how to make my Anaconda installation better integrated.\n\u200b","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":279,"Q_Id":54152910,"Users Score":0,"Answer":"The core problem with my python IDE environment was that I could not utilize the functions in the anytree module. The anytree functions worked fine from the command prompt python, but I only saw error messages from any of the Anaconda IDE portals.\nSolution: \n1) From the windows start menu, I opened Anaconda Navigator, \"run as administrator.\"\n2) Select Environments. My application only has the single environment, \u201cbase\u201d,\n3.) Open selection \u201cterminal\u201d, and you then have a command terminal window in that environment.\n4.) Execute [ conda install -c techtron anytree ] and the anytree module functions are now available.\n5.) Execute [ conda update \u2013n base \u2013all ] and all the modules are updated to be current.","Q_Score":0,"Tags":"python,anaconda,jupyter-notebook,anytree","A_Id":54173625,"CreationDate":"2019-01-11T19:30:00.000","Title":"Python anytree application challenges with my jupyter notebook \u200b","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I set up a virtual environment in python 3.7.2 using \"python -m venv foldername\". I installed PIL in that folder. Importing PIL works from the terminal, but when I try to import it in VS code, I get an ImportError. Does anyone know how to get VS code to recognize that module?\nI've tried switching interpreters, but the problem persists.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":949,"Q_Id":54156432,"Users Score":0,"Answer":"I ended up changing the python.venvpath setting to a different folder, and then moving the virtual env folder(The one with my project in it) to that folder. 
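A toy demonstration of "method 1" from the get_dummies answer above: encode before splitting, so train and test end up with identical columns even with drop_first=True (the data is invented):

```python
import pandas as pd

train = pd.DataFrame({"plan": ["a", "b", "c"], "churned": [0, 1, 0]})
test = pd.DataFrame({"plan": ["a", "a"], "churned": [1, 0]})

# Encode the concatenation, then split back along the keys.
both = pd.get_dummies(
    pd.concat([train, test], keys=["train", "test"]),
    columns=["plan"], drop_first=True,
)
train_enc, test_enc = both.xs("train"), both.xs("test")
print(train_enc.columns.equals(test_enc.columns))  # True: identical encodings
```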
After restarting VS code, it worked.","Q_Score":0,"Tags":"python-3.x,visual-studio-code,virtualenv","A_Id":54171724,"CreationDate":"2019-01-12T03:01:00.000","Title":"How do I get VS Code to recognize modules in virtual environment?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am using the yolov3 model running on several surveillance cameras. Besides this I also run tensorflow models on these surveillaince streams. I feel a little lost when it comes to using anything but opencv for rtsp streaming.\nSo far I haven't seen people use anything but opencv in python. Are there any places I should be looking into. Please feel free to chime in.\nSorry if the question is a bit vague, but I really don't know how to put this better. Feel free to edit mods.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":51,"Q_Id":54193968,"Users Score":1,"Answer":"Of course are the alternatives to OpenCV in python if it comes to video capture but in my experience none of them preformed better","Q_Score":0,"Tags":"python,opencv,video,video-streaming,video-processing","A_Id":54197292,"CreationDate":"2019-01-15T06:52:00.000","Title":"Good resources for video processing in Python?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"In s3 bucket daily new JSON files are dumping , i have to create solution which pick the latest file when it arrives PARSE the JSON and load it to Snowflake Datawarehouse. may someone please share your thoughts how can we achieve","AnswerCount":4,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1762,"Q_Id":54193979,"Users Score":0,"Answer":"There are some aspects to be considered such as is it a batch or streaming data , do you want retry loading the file in case there is wrong data or format or do you want to make it a generic process to be able to handle different file formats\/ file types(csv\/json) and stages. \nIn our case we have built a generic s3 to Snowflake load using Python and Luigi and also implemented the same using SSIS but for csv\/txt file only.","Q_Score":2,"Tags":"python,amazon-s3,snowflake-cloud-data-platform","A_Id":54209309,"CreationDate":"2019-01-15T06:54:00.000","Title":"Automate File loading from s3 to snowflake","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have installed pythonnet to use clr package for a specific API, which only works with clr in python. Although in my python script (using command or regular .py files) it works without any issues, in jupyter notebook, import clr gives this error, ModuleNotFoundError: No module named 'clr'. 
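Since the video-capture answer above settles on OpenCV, here is the basic RTSP reading loop it implies; the stream URL is a placeholder for one of the surveillance cameras:

```python
import cv2

cap = cv2.VideoCapture("rtsp://user:pass@192.168.1.10:554/stream1")  # placeholder URL

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break  # stream dropped; reconnect logic would go here
    # ... run the YOLO / TensorFlow models on `frame` here ...
    cv2.imshow("stream", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```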
Any idea how to address this issue?","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":1254,"Q_Id":54206227,"Users Score":2,"Answer":"Here is simple suggestion: compare sys.path in both cases and see the differences. Your ipython kernel in jupyter is probably searching in different directories than in normal python process.","Q_Score":3,"Tags":"python,.net,interop,clr,python.net","A_Id":54227259,"CreationDate":"2019-01-15T20:16:00.000","Title":"pythonnet clr is not recognized in jupyter notebook","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":1,"REVIEW":0},{"Question":"I have installed pythonnet to use clr package for a specific API, which only works with clr in python. Although in my python script (using command or regular .py files) it works without any issues, in jupyter notebook, import clr gives this error, ModuleNotFoundError: No module named 'clr'. Any idea how to address this issue?","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":1254,"Q_Id":54206227,"Users Score":0,"Answer":"since you are intended to use clr in jupyter, in jupyter cell, you could also \n!pip install pythonnet for the first time and every later time if the vm is frequently nuked","Q_Score":3,"Tags":"python,.net,interop,clr,python.net","A_Id":61789169,"CreationDate":"2019-01-15T20:16:00.000","Title":"pythonnet clr is not recognized in jupyter notebook","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":1,"REVIEW":0},{"Question":"Ok here's my basic information before I go on:\nMacBook Pro: OS X 10.14.2\nPython Version: 3.6.7\nJava JDK: V8.u201\nI'm trying to install the Apache Spark Python API (PySpark) on my computer. I did a conda installation: conda install -c conda-forge pyspark\nIt appeared that the module itself was properly downloaded because I can import it and call methods from it. However, opening the interactive shell with myuser$ pyspark gives the error: \nNo Java runtime present, requesting install.\nOk that's fine. I went to Java's download page to get the current JDK, in order to have it run, and downloaded it on Safari. Chrome apparently doesn't support certain plugins for it to work (although initially I did try to install it with Chrome). Still didn't work.\nOk, I just decided to start trying to use it.\nfrom pyspark.sql import SparkSession It seemed to import the module correctly because it was auto recognizing SparkSession's methods. However,\nspark = SparkSession.builder.getOrCreate() gave the error:\nException: Java gateway process exited before sending its port number\nReinstalling the JDK doesn't seem to fix the issue, and now I'm stuck with a module that doesn't seem to work because of an issue with Java that I'm not seeing. Any ideas of how to fix this problem? Any and all help is appreciated.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":244,"Q_Id":54206569,"Users Score":0,"Answer":"This problem is coming with spark 2.4. 
Please try Spark 2.3.","Q_Score":0,"Tags":"java,python,apache-spark,java-8,pyspark","A_Id":54281008,"CreationDate":"2019-01-15T20:47:00.000","Title":"Tried importing Java 8 JDK for PySpark, but PySpark still won't let me start a session","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":1,"REVIEW":0},{"Question":"I want to install some packages on a server which does not have access to the internet, so I have to take the packages and send them to the server. But I do not know how I can install them.","AnswerCount":4,"Available Count":1,"Score":-0.0996679946,"is_accepted":false,"ViewCount":17437,"Q_Id":54213323,"Users Score":-2,"Answer":"Download the package from its website and extract the tarball.\nThen run python setup.py install.","Q_Score":0,"Tags":"python,pip,setup.py,installation-package","A_Id":54213344,"CreationDate":"2019-01-16T08:53:00.000","Title":"Install python packages offline on server","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have a few basic questions on Dask:\n\nIs it correct that I have to use Futures when I want to use dask for distributed computations (i.e. on a cluster)?\nIn that case, i.e. when working with futures, are task graphs still the way to reason about computations? If yes, how do I create them?\nHow can I generally, i.e. no matter if working with a future or with a delayed, get the dictionary associated with a task graph?\n\nAs an edit:\nMy application is that I want to parallelize a for loop either on my local machine or on a cluster (i.e. it should work on a cluster).\nAs a second edit:\nI think I am also somewhat unclear regarding the relation between Futures and delayed computations.\nThx","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1864,"Q_Id":54232080,"Users Score":16,"Answer":"1) Yup. If you're sending the data through a network, you have to have some way of asking the computer doing the computing for you how that number-crunching is coming along, and Futures represent more or less exactly that.\n2) No. With Futures, you're executing the functions eagerly - spinning up the computations as soon as you can, then waiting for the results to come back (from another thread\/process locally, or from some remote you've offloaded the job onto). The relevant abstraction here would be a Queue (a Priority Queue, specifically).\n3) For a Delayed instance, for example, you could do some_delayed.dask, or for an Array, Array.dask; optionally wrap the whole thing in either dict() or vars().
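For the offline-installation question above, a commonly used alternative to building each tarball by hand is pip's own offline workflow; a sketch, assuming pip is available on both machines (shell commands):

```
# On a machine WITH internet access: download the package plus dependencies.
pip download -d ./pkgs pandas

# Copy the ./pkgs directory to the offline server, then install from it:
pip install --no-index --find-links=./pkgs pandas
```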
I don't know for sure if it's reliably set up this way for every single API, though (I would assume so, but you know what they say about what assuming makes of the two of us...).\n4) The simplest analogy would probably be: Delayed is essentially a fancy Python yield wrapper over a function; Future is essentially a fancy async\/await wrapper over a function.","Q_Score":12,"Tags":"python,distributed-computing,dask","A_Id":54235046,"CreationDate":"2019-01-17T08:51:00.000","Title":"Dask: delayed vs futures and task graph generation","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"If i've an networkx graph from a python dataframe and i've generated the adjacency matrix from it. \nSo basically, how to get labels of that adjacency matrix ?","AnswerCount":2,"Available Count":2,"Score":-0.0996679946,"is_accepted":false,"ViewCount":910,"Q_Id":54262904,"Users Score":-1,"Answer":"If the adjacency matrix is generated without passing the nodeList, then you can call G.nodes to obtain the default NodeList, which should correspond to the rows of the adjacency matrix.","Q_Score":0,"Tags":"python-3.x,networkx,adjacency-matrix","A_Id":66045559,"CreationDate":"2019-01-19T00:00:00.000","Title":"Python how to get labels of a generated adjacency matrix from networkx graph?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"If i've an networkx graph from a python dataframe and i've generated the adjacency matrix from it. \nSo basically, how to get labels of that adjacency matrix ?","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":910,"Q_Id":54262904,"Users Score":0,"Answer":"Assuming you refer to nodes' labels, networkx only keeps the the indices when extracting a graph's adjacency matrix. Networkx represents each node as an index, and you can add more attributes if you wish. All node's attributes except for the index are kept in a dictionary. When generating graph's adjacency matrix only the indices are kept, so if you only wish to keep a single string per node, consider indexing nodes by that string when generating your graph.","Q_Score":0,"Tags":"python-3.x,networkx,adjacency-matrix","A_Id":54274980,"CreationDate":"2019-01-19T00:00:00.000","Title":"Python how to get labels of a generated adjacency matrix from networkx graph?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am making a GUI program where the user can draw on a canvas in Tkinter. What I want to do is that I want the user to be able to draw on the canvas and when the user releases the Mouse-1, the program should wait for 1 second and clear the canvas. If the user starts drawing within that 1 second, the canvas should stay as it is. \nI am able to get the user input fine. 
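For the networkx question above, a small sketch of the index/label correspondence the answers describe:

```python
import networkx as nx

G = nx.Graph()
G.add_edges_from([("a", "b"), ("b", "c")])

A = nx.to_numpy_array(G)   # adjacency matrix; rows/columns follow G.nodes
labels = list(G.nodes)     # the node labels, in the same order as A's rows
print(labels)              # ['a', 'b', 'c']
```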
The draw function in my program is bound to B1-Motion.\nI have tried things like inducing a time delay but I don't know how to check whether the user has started to draw again.\nHow do I check whether the user has started to draw again?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":129,"Q_Id":54276610,"Users Score":0,"Answer":"You can bind the mouse click event to a function that sets a bool to True or False, then using after to call a function after 1 second which depending on that bool clears the screen.","Q_Score":1,"Tags":"python,python-3.x,tkinter,tkinter-canvas","A_Id":54276767,"CreationDate":"2019-01-20T12:48:00.000","Title":"How to wait for some time between user inputs in tkinter?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have a super basic machine learning question. I've been working through various tutorials and online classes on machine learning and the various techniques to learning how to use it, but what I'm not seeing is the persistent application piece.\nSo, for example, I train a network to recognize what a garden gnome looks like, but, after I run the training set and validate with test data, how do I persist the network so that I can feed it an individual picture and have it tell me whether the picture is of a garden gnome or not? Every tutorial seems to have you run through the training\/validation sets without any notion as of how to host the network in a meaningful way for future use.\nThanks!","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":309,"Q_Id":54297846,"Users Score":0,"Answer":"Use python pickle library to dump your trained model on your hard drive and load model and test for persistent results.","Q_Score":0,"Tags":"python,machine-learning","A_Id":54303388,"CreationDate":"2019-01-21T21:13:00.000","Title":"Persistent Machine Learning","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"We currently are receiving reports via email (I believe they are SSRS reports) which are embedded in the email body rather than attached. The reports look like images or snapshots; however, when I copy and paste the \"image\" of a report into Excel, the column\/row format is retained and it pastes into Excel perfectly, with the columns and rows getting pasted into distinct columns and rows accordingly. So it isn't truly an image, as there is a structure to the embedded report.\nRight now, someone has to manually copy and paste each report into excel (step 1), then import the report into a table in SQL Server (step 2). There are 8 such reports every day, so the manual copy\/pasting from the email into excel is very time consuming. \nThe question is: is there a way - any way - to automate step 1 so that we don't have to manually copy and paste each report into excel? Is there some way to use python or some other language to detect the format of the reports in the emails, and extract them into .csv or excel files? 
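A minimal sketch of the flag-plus-after() approach described in the tkinter answer above; the widget setup and the 1-second delay mirror the question:

```python
import tkinter as tk

root = tk.Tk()
canvas = tk.Canvas(root, width=400, height=300, bg="white")
canvas.pack()

drawing = False

def draw(event):
    global drawing
    drawing = True
    canvas.create_oval(event.x, event.y, event.x + 2, event.y + 2)

def maybe_clear():
    if not drawing:            # clear only if drawing has not resumed
        canvas.delete("all")

def on_release(event):
    global drawing
    drawing = False
    canvas.after(1000, maybe_clear)   # wait 1 second, then decide

canvas.bind("<B1-Motion>", draw)
canvas.bind("<ButtonRelease-1>", on_release)
root.mainloop()
```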
\nI have no code to show, as this is more a question of: is this even possible? And if so, any hints as to how to accomplish it would be greatly appreciated.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":63,"Q_Id":54299206,"Users Score":0,"Answer":"The most efficient solution is to have the SSRS administrator (or you, if you have permissions) set the subscription to send as CSV. To change this in SSRS, right click the report and then click Manage. Select \"Subscriptions\" on the left and then click Edit next to the subscription you want to change. Scroll down to Delivery Options and select CSV in the Render Format dropdown. Voil\u00e0, you receive your report in the correct format and don't have to do any weird extraction.","Q_Score":0,"Tags":"python,html,csv,email","A_Id":54312283,"CreationDate":"2019-01-21T23:31:00.000","Title":"Is it possible to extract an SSRS report embedded in the body of an email and export to csv?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am a beginner to python and I want to do symbolic computations. I came to know that with a sympy installation on our PC we can do symbolic computation. I have installed python 3.6 and I am using Anaconda Navigator, through which I am using Spyder as an editor. Now I want to install the symbolic package sympy; how do I do that?\nI checked some post which says use 'conda install sympy'. But where do I type this? I typed this in the Spyder editor and I am getting a syntax error. Thank you","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":41528,"Q_Id":54301925,"Users Score":0,"Answer":"To use conda install, open the Anaconda Prompt and enter the conda install sympy command.\nAlternatively, navigate to the scripts sub-directory in the Anaconda directory, and run pip install sympy.","Q_Score":4,"Tags":"python-3.x,anaconda,sympy,spyder","A_Id":54302012,"CreationDate":"2019-01-22T05:44:00.000","Title":"How to install sympy package in python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":1,"REVIEW":0},{"Question":"I am a beginner to python and I want to do symbolic computations. I came to know that with a sympy installation on our PC we can do symbolic computation. I have installed python 3.6 and I am using Anaconda Navigator, through which I am using Spyder as an editor. Now I want to install the symbolic package sympy; how do I do that?\nI checked some post which says use 'conda install sympy'. But where do I type this? I typed this in the Spyder editor and I am getting a syntax error. Thank you","AnswerCount":3,"Available Count":2,"Score":0.0665680765,"is_accepted":false,"ViewCount":41528,"Q_Id":54301925,"Users Score":1,"Answer":"In Anaconda Navigator:\n\nClick Environments (on the left)\nChoose your environment (if you have more than one)\nIn the middle, pick \"All\" from the dropdown (\"Installed\" by default)\nType sympy in the search box on the right\nCheck the package that shows up\nClick Apply","Q_Score":4,"Tags":"python-3.x,anaconda,sympy,spyder","A_Id":54302272,"CreationDate":"2019-01-22T05:44:00.000","Title":"How to install sympy package in python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":1,"REVIEW":0},{"Question":"I found this rather annoying bug and I couldn\u2019t find anything other than an unanswered question on the opencv website; hopefully someone with more knowledge about the two libraries will be able to point me in the right direction.\nI won\u2019t provide code because that would be beside the point of learning what causes the crash.\nIf I draw a tkinter window and then root.destroy() it, trying to draw a cv2.imshow window will result in an X Window System error as soon as the cv2.waitKey delay is over. I\u2019ve tried to replicate it in different ways and it always gets to the error (error_code 3 request_code 15 minor_code 0).\nIt is worth noting that a root.quit() command won\u2019t cause the same issue (as it is my understanding this method will simply exit the main loop rather than destroying the widgets). Also, while any cv2.imshow call will fail, trying to draw a new tkinter window will work just fine.\nWhat resources are being shared between the two libraries? What does root.destroy() cause in the X environment that prevents any cv2 window from being drawn?\nDebian Jessie - Python 3.4 - OpenCV 3.2.0","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":269,"Q_Id":54314386,"Users Score":0,"Answer":"When you destroy the root window, it destroys all child windows as well.
If cv2 uses a tkinter window or child window of the root window, it will fail if you destroy the root window.","Q_Score":0,"Tags":"python-3.x,opencv,tkinter,cv2","A_Id":54315569,"CreationDate":"2019-01-22T18:26:00.000","Title":"tkinter.root.destroy and cv2.imshow - X Windows system error","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":1,"REVIEW":0},{"Question":"I am on Windows and I am trying to figure how to use Pyinstaller to make a file (on Windows) for a Mac.\nI have no trouble with Windows I am just not sure how I would make a file for another OS on it.\nWhat I tried in cmd was: pyinstaller -F myfile.py and I am not sure what to change to make a Mac compatible file.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":178,"Q_Id":54317780,"Users Score":0,"Answer":"Not Possible without using a Virtual Machine","Q_Score":1,"Tags":"python,python-3.x,pyinstaller","A_Id":54318656,"CreationDate":"2019-01-22T23:09:00.000","Title":"How do I use Pyinstaller to make a Mac file on Windows?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have a large text file of URLs (>1 million URLs). The URLs represent product pages across several different domains.\nI'm trying to parse out the SKU and product name from each URL, such as:\n\nwww.amazon.com\/totes-Mens-Mike-Duck-Boot\/dp\/B01HQR3ODE\/\n\n\ntotes-Mens-Mike-Duck-Boot\nB01HQR3ODE\n\nwww.bestbuy.com\/site\/apple-airpods-white\/5577872.p?skuId=5577872\n\n\napple-airpods-white\n5577872\n\n\nI already have the individual regex patterns figured out for parsing out the two components of the URL (product name and SKU) for all of the domains in my list. This is nearly 100 different patterns.\nWhile I've figured out how to test this one URL\/pattern at a time, I'm having trouble figuring out how to architect a script which will read in my entire list, then go through and parse each line based on the relevant regex pattern. Any suggestions how to best tackle this?\nIf my input is one column (URL), my desired output is 4 columns (URL, domain, product_name, SKU).","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":176,"Q_Id":54319452,"Users Score":1,"Answer":"While it is possible to roll this all into one massive regex, that might not be the easiest approach. Instead, I would use a two-pass strategy. Make a dict of domain names to the regex pattern that works for that domain. In the first pass, detect the domain for the line using a single regex that works for all URLs. 
Then use the discovered domain to lookup the appropriate regex in your dict to extract the fields for that domain.","Q_Score":0,"Tags":"python,regex,python-3.x","A_Id":54319613,"CreationDate":"2019-01-23T03:02:00.000","Title":"Parsing list of URLs with regex patterns","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am trying to make a pipeline using Data Factory In MS Azure of processing data in blob storage and then running a python processing code\/algorithm on the data and then sending it to another source.\nMy question here is, how can I do the same in Azure function apps? Or is there a better way to do it?\nThanks in advance.\nShyam","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":757,"Q_Id":54343289,"Users Score":0,"Answer":"I created a Flask API and called my python code through that. And then put it in Azure as a web app and called the blob.","Q_Score":0,"Tags":"python,azure,azure-functions","A_Id":54685275,"CreationDate":"2019-01-24T09:30:00.000","Title":"Python Azure function processing blob storage","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":1},{"Question":"I have an old project running (Django 1.6.5, Python 2.7) live for several years. I have to make some changes and have set up a working development environment with all the right django and python requirements (packages, versions, etc.)\nEverything is running fine, except when I am trying to make changes inside the admin panel. I can log on fine and looking at the database (sqlite3) I see my user has superuser privileges. However django says \"You have no permissions to change anything\" and thus not even displaying any of the models registered for the admin interface.\nI am using the same database that is running on the live server. There I have no issues at all (Live server also running in development mode with DEBUG=True has no issues) -> I can only see the history (My Change Log) - Nothing else\nI have also created a new superuser - but same problem here.\nI'd appreciate any pointers (Maybe how to debug this?)","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":84,"Q_Id":54345919,"Users Score":0,"Answer":"Finally, I found the issue:\nadmin.autodiscover()\nwas commented out in the project's urls.py for some reason. (I may have done that trying to get the project to work in a more recent version of django) - So admin.site.register was never called and the app_dict never filled. 
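A sketch of the two-pass strategy from the URL-parsing answer above; the two domain patterns are hypothetical stand-ins for the ~100 real ones:

```python
import re

# Hypothetical per-domain patterns; the real table would hold ~100 entries.
DOMAIN_PATTERNS = {
    "www.amazon.com": re.compile(r"/(?P<name>[^/]+)/dp/(?P<sku>[^/?]+)"),
    "www.bestbuy.com": re.compile(r"/site/(?P<name>[^/]+)/.*skuId=(?P<sku>\d+)"),
}
# First pass: one regex that pulls the domain out of any URL.
DOMAIN_RE = re.compile(r"^(?:https?://)?(?P<domain>[^/]+)")

def parse(url):
    domain = DOMAIN_RE.match(url).group("domain")
    pattern = DOMAIN_PATTERNS.get(domain)
    m = pattern.search(url) if pattern else None
    if m is None:
        return url, domain, None, None       # unknown domain or no match
    return url, domain, m.group("name"), m.group("sku")

print(parse("www.amazon.com/totes-Mens-Mike-Duck-Boot/dp/B01HQR3ODE/"))
# -> (..., 'www.amazon.com', 'totes-Mens-Mike-Duck-Boot', 'B01HQR3ODE')
```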
The index.html template of django.contrib.admin then returns \n\nYou don't have permission to edit anything.\n\nor its equivalent translation (which I find confusing, given that the permissions are correct; it is only that no models were added to the admin dictionary).\nI hope this may help anyone running into a similar problem.","Q_Score":0,"Tags":"python,django,database,admin,privileges","A_Id":54363022,"CreationDate":"2019-01-24T11:46:00.000","Title":"Django Admin Interface - Privileges On Development Server","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am new to Selenium. The web interface of our product pops up a EULA agreement which the user has to scroll down and accept before proceeding. This happens ONLY on initial login using that browser for that user. \nI looked at the Selenium API but I am unable to figure out which one to use and how to use it.\nWould much appreciate any suggestions in this regard.\nI have played around with the IDE for Chrome but even there I don't see anything that I can use for this. I am aware there is an 'if' command but I don't know how to use it to do something like:\nif EULA-pops-up:\n Scroll down and click 'accept'\nproceed with rest of test.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":348,"Q_Id":54354067,"Users Score":0,"Answer":"You may disable the EULA if that is an option for you; I am sure there is a way to do it in the registry as well:\nIn C:\\Program Files (x86)\\Google\\Chrome\\Application there should be a file called master_preferences.\nOpen the file and set require_eula to false.","Q_Score":0,"Tags":"python,selenium","A_Id":54354319,"CreationDate":"2019-01-24T19:31:00.000","Title":"How to handle EULA pop-up window that appears only on first login?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am trying to train a model for sentiment analysis and below is my trained Multinomial Naive Bayes Classifier returning an accuracy of 84%.\nI have been unable to figure out how to use the trained model to predict the sentiment of a sentence. For example, I now want to use the trained model to predict the sentiment of the phrase \"I hate you\".\nI am new to this area and any help is highly appreciated.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":119,"Q_Id":54362232,"Users Score":1,"Answer":"I don't know the dataset or the semantics of the individual dictionaries, but you are training your model on a dataset which has the following form:\n[[{\"word\":True, \"word2\": False}, 'neg'], [{\"word\":True, \"word2\": False}, 'pos']]\n\nThat means your input is in the form of a dictionary, and the output in the form of a 'neg' label.
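For the Django admin answer above, this is roughly what a Django 1.6-era urls.py looks like with the autodiscover call restored (a sketch, not the asker's actual file):

```python
# urls.py (Django 1.6-era sketch)
from django.conf.urls import patterns, include, url
from django.contrib import admin

# Without this call, no app's admin.py is imported, no models get
# registered, and the admin index reports that you have no permission
# to edit anything.
admin.autodiscover()

urlpatterns = patterns('',
    url(r'^admin/', include(admin.site.urls)),
)
```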
If you want to predict, you need to input a dictionary of the form:\n\n{\"I\": True, \"Hate\": False, \"you\": True}\n\nThen:\n\nMNB_classifier.classify({\"love\": True})\n>> 'neg'\nor \nMNB_classifier.classify_many([{\"love\": True}])\n>> ['neg']","Q_Score":0,"Tags":"python,python-3.x,classification,sentiment-analysis","A_Id":54363252,"CreationDate":"2019-01-25T09:21:00.000","Title":"Predicting values using trained MNB Classifier","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I want to use my script that uses the pandas library on another linux machine where there is no internet access or pip installed. \nIs there a way to deliver the script with all its dependencies?\nThanks","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":82,"Q_Id":54364440,"Users Score":0,"Answer":"Or set the needed dependencies in the script manually by appending to sys.modules, and pack all the needed files together.","Q_Score":0,"Tags":"python","A_Id":54365941,"CreationDate":"2019-01-25T11:29:00.000","Title":"Deliver python external libraries with script","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have a folder of .py files (a package made by me) which I have uploaded into my google drive.\nI have mounted my google drive in colab but I still cannot import the folder in my notebook as I do on my pc.\nI know how to upload a single .py file into google colab and import it into my code, but I have no idea how to upload a folder of .py files and import it in the notebook, and this is what I need to do.\nThis is the code I used to mount drive:\n\nfrom google.colab import drive\ndrive.mount('\/content\/drive')\n!ls 'drive\/My Drive'","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":3290,"Q_Id":54379184,"Users Score":2,"Answer":"I found out how to do it. \nAfter uploading all modules and packages into the directory which my notebook file is in, I changed Colab's working directory from \"\/content\" to this directory, and then I simply imported the modules and packages (the folder of .py files) into my code.","Q_Score":2,"Tags":"python,upload,google-colaboratory","A_Id":54387414,"CreationDate":"2019-01-26T14:14:00.000","Title":"importing an entire folder of .py files into google colab","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I'm writing my own wrapper for ffmpeg on Python 3.7.2 now and want to use its \"-progress\" option to read the current progress, since it's highly machine-readable. The problem is that the \"-progress\" option of ffmpeg accepts only file names and URLs as its parameter.
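A sketch of the Colab answer above; the package directory under My Drive and the module name are hypothetical:

```python
import sys
from google.colab import drive

drive.mount('/content/drive')

# Hypothetical folder containing the package's .py files.
pkg_dir = '/content/drive/My Drive/my_package_dir'

# Either change the working directory (what the answer did) or, less
# invasively, put the folder on the import path:
sys.path.append(pkg_dir)

import my_module  # a .py file living inside that folder (hypothetical name)
```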
But I don't want to create additional files, nor set up a whole web server for this purpose.\nI've googled a lot about it, but all the \"progress bars for ffmpeg\" projects rely on the generic stderr output of ffmpeg only. Other answers here on Stackoverflow and on Superuser are satisfied with just \"-v quiet -stats\", since \"progress\" is not a very convenient parameter name for googling exactly these cases.\nThe best solution would be to force ffmpeg to write its \"-progress\" output to a separate pipe, since there is some useful data in stderr as well regarding the file being encoded, and I don't want to throw it away with \"-v quiet\". Though if there is a way to redirect the \"-progress\" output to stderr, that would be cool as well! Any pipe would be OK actually, I just can't figure out how to make ffmpeg write its \"-progress\" output not to a file on Windows. I tried \"ffmpeg -progress stderr ...\", but it just creates a file with this name.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":5064,"Q_Id":54385690,"Users Score":12,"Answer":"-progress pipe:1 will write out to stdout, pipe:2 to stderr. If you aren't streaming from ffmpeg, use stdout.","Q_Score":9,"Tags":"python,windows,ffmpeg","A_Id":54386052,"CreationDate":"2019-01-27T06:38:00.000","Title":"How to redirect -progress option output of ffmpeg to stderr?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have a python list A and a python list B with words as list elements. I need to check how often the list elements from list B are contained in list A. Is there a python method, or how can I implement this efficiently?\nThe python intersection method only tells me that a list element from list B occurs in list A, but not how often.","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":55,"Q_Id":54404263,"Users Score":0,"Answer":"You could convert list B to a set, so that checking if an element is in B is faster.\nThen create a dictionary to count the number of times an element is in A, if the element is also in the set of B.\nAs mentioned in the comments, collections.Counter does the \"heavy lifting\" for you.","Q_Score":0,"Tags":"python,performance","A_Id":54404522,"CreationDate":"2019-01-28T14:38:00.000","Title":"How can I check how often all list elements from a list B occur in a list A?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I'm trying to install some packages globally on my Mac. But I'm not able to install them via npm or pip, because I always get the message that the package does not exist. For Python, I solved this by always using a virtualenv. But now I'm trying to install the @vue\/cli via npm, but I'm not able to access it. The commands are working fine, but I'm just not able to access it. I think it has something to do with my $PATH, but I don't know how to fix that.\nIf I look in my Finder, I can find the @vue folder in \/users\/...\/node_modules\/.
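Putting the accepted ffmpeg answer above to work from Python, a sketch using subprocess; the file names are placeholders:

```python
import subprocess

# -progress pipe:1 sends key=value progress records to stdout, while
# -nostats keeps the usual log output on stderr uncluttered.
cmd = ["ffmpeg", "-i", "input.mp4", "-progress", "pipe:1", "-nostats", "output.mkv"]
proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, universal_newlines=True)

for line in proc.stdout:
    key, _, value = line.strip().partition("=")
    if key == "out_time_ms":   # one of the fields -progress emits
        print("encoded so far:", value)
proc.wait()
```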
Does someone know how I can access this folder with the vue command in Terminal?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":197,"Q_Id":54416057,"Users Score":1,"Answer":"If it's a PATH problem:\n1) Open up Terminal.\n2) Run the following command:\nsudo nano \/etc\/paths\n3) Enter your password, when prompted.\n4) Check if the correct paths exist in the file or not.\n5) Fix, if needed\n6) Hit Control-X to quit.\n7) Enter \u201cY\u201d to save the modified buffer.\nEverything, should work fine now. If it doesn't try re-installing NPM\/PIP.","Q_Score":1,"Tags":"python,node.js,npm,pip","A_Id":54418733,"CreationDate":"2019-01-29T07:42:00.000","Title":"Can't install packages via pip or npm","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"Is there a way I get can the following disk statistics in Python without using PSUtil?\n\nTotal disk space\nUsed disk space\nFree disk space\n\nAll the examples I have found seem to use PSUtil which I am unable to use for this application.\nMy device is a Raspberry PI with a single SD card. I would like to get the total size of the storage, how much has been used and how much is remaining.\nPlease note I am using Python 2.7.","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":6715,"Q_Id":54458308,"Users Score":1,"Answer":"You can do this with the os.statvfs function.","Q_Score":0,"Tags":"python,python-2.7,raspberry-pi","A_Id":54458486,"CreationDate":"2019-01-31T10:19:00.000","Title":"How to get disk space total, used and free using Python 2.7 without PSUtil","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"In the below operation, we are using a as an object as well as an argument.\na = \"Hello, World!\"\n\nprint(a.lower()) -> a as an object\nprint(len(a)) -> a as a parameter\n\nMay I know how exactly each operations differs in the way they are accessing a?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":15,"Q_Id":54481186,"Users Score":1,"Answer":"Everything in python (everything that can go on the rhs of an assignment) is an object, so what you can pass as an argument to a function IS an object, always. Actually, those are totally orthogonal concepts: you don't \"use\" something \"as an object\" - it IS an object - but you can indeed \"use it\" (pass it) as an argument to a function \/ method \/ whatever callable.\n\nMay I know how exactly each operations differs in the way they are accessing a?\n\nNot by much actually (except for the fact they do different things with a)... 
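A sketch of the os.statvfs suggestion above, computing the three numbers the question asks for:

```python
import os

st = os.statvfs("/")                  # the SD card's root filesystem
total = st.f_blocks * st.f_frsize     # total size in bytes
free = st.f_bavail * st.f_frsize      # free space available to non-root users
used = total - free

print("total: %d MB" % (total // 2**20))
print("used:  %d MB" % (used // 2**20))
print("free:  %d MB" % (free // 2**20))
```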
\na.lower() is only syntactic sugar for str.lower(a) (obj.method() is syntactic sugar for type(obj).method(obj), so in both cases you are \"using a as an argument\".","Q_Score":0,"Tags":"python-3.x","A_Id":54481340,"CreationDate":"2019-02-01T14:09:00.000","Title":"How can a same entity function as a parameter as well as an object?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I trained a model in TensorFlow using the tf.estimator API, more specifically using tf.estimator.train_and_evaluate. I have the output directory of the training. How do I load my model from this and then use it?\nI have tried using the tf.train.Saver class by loading the most recent ckpt file and restoring the session. However, then to call sess.run() I need to know what the name of the output node of the graph is so I can pass this to the fetches argument. What is the name\/how can I access this output node? Is there a better way to load and use the trained model?\nNote that I have already trained and saved the model in a ckpt file, so please do not suggest that I use the simple_save function.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":215,"Q_Id":54489497,"Users Score":0,"Answer":"(Answering my own question) I realized that the easiest way to do this was to use the tf.estimator API. By initializing an estimator that warm starts from the model directory, it's possible to just call estimator.predict and pass the correct args (predict_fn) and get the predictions immediately. It's not required to deal with the graph variables in any way.","Q_Score":0,"Tags":"python,tensorflow","A_Id":54505574,"CreationDate":"2019-02-02T02:41:00.000","Title":"Loading and using a trained TensorFlow model in Python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have a pile of ngrams of variable spelling, and I want to map each ngram to it's best match word out of a list of known desired outputs.\nFor example, ['mob', 'MOB', 'mobi', 'MOBIL', 'Mobile] maps to a desired output of 'mobile'.\nEach input from ['desk', 'Desk+Tab', 'Tab+Desk', 'Desktop', 'dsk'] maps to a desired output of 'desktop'\nI have about 30 of these 'output' words, and a pile of about a few million ngrams (much fewer unique).\nMy current best idea was to get all unique ngrams, copy and paste that into Excel and manually build a mapping table, took too long and isn't extensible. \nSecond idea was something with fuzzy (fuzzy-wuzzy) matching but it didn't match well. \nI'm not experienced in Natural Language terminology or libraries at all so I can't find an answer to how this might be done better, faster and more extensibly when the number of unique ngrams increases or 'output' words change. \nAny advice?","AnswerCount":3,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":217,"Q_Id":54491245,"Users Score":1,"Answer":"The classical approach would be, to build a \"Feature Matrix\" for each ngram. 
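The equivalence described in the answer above can be seen directly:

```python
a = "Hello, World!"

# a.lower() is resolved to str.lower(a): `a` is the argument either way.
print(a.lower())      # 'hello, world!'
print(str.lower(a))   # 'hello, world!'

# len() is an ordinary function that simply receives `a` as its argument.
print(len(a))         # 13
```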
Each word maps to an output which is a categorical value between 0 and 29 (one for each class).\nFeatures can, for example, be the cosine similarity given by fuzzywuzzy, but typically you need many more. Then you train a classification model based on the created features. This model can be almost anything: a neural network, a boosted tree, etc.","Q_Score":2,"Tags":"python,nltk,natural-language-processing","A_Id":54491318,"CreationDate":"2019-02-02T08:14:00.000","Title":"Best way to map words with multiple spellings to a list of key words?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I'm programming a 2D game with Python and Pygame and now I want to use my internal graphics memory to load images into.\nI have an Intel HD graphics card (2GB VRAM) and an Nvidia GeForce (4GB VRAM).\nI want to use one of them to load images from the hard drive into it (to use the images from there).\nI thought it might be a good idea as I (almost) don't need the VRAM otherwise.\nCan you tell me if and how it is possible? I do not need GPU-Acceleration.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":2376,"Q_Id":54524347,"Users Score":0,"Answer":"You have to create your window with the FULLSCREEN, DOUBLEBUF and HWSURFACE flags.\nThen you can create and use a hardware surface by creating it with the HWSURFACE flag.\nYou'll also have to use pygame.display.flip() instead of pygame.display.update().\nBut even pygame itself discourages using hardware surfaces, since they have a bunch of disadvantages, like\n- no mouse cursor\n- only working in fullscreen (at least that's what pygame's documentation says)\n- you can't easily manipulate the surfaces\n- they may not work on all platforms \n(and I never got transparency to work with them).\nAnd it's not even clear if you really get a notable performance boost.\nMaybe they'll work better in a future pygame release when pygame switches to SDL 2 and uses SDL_TEXTURE instead of SDL_HWSURFACE, who knows....","Q_Score":0,"Tags":"python,pygame","A_Id":54532945,"CreationDate":"2019-02-04T21:09:00.000","Title":"Use VRAM (graphics card memory) in pygame for images","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"New to coding; I just downloaded the full Anaconda package for Python 3.7 onto my Mac. However, I can't successfully import Pandas into my program on SublimeText when running my Python 3.7 build. It DOES work, though, when I change the build to Python 2.7. Any idea how I can get it to properly import when running 3.7 on SublimeText? I'd just like to be able to execute the code within Sublime.\nThanks!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":24,"Q_Id":54527093,"Users Score":0,"Answer":"Uninstall python 2.7. Unless you use it, it's better to uninstall it.","Q_Score":0,"Tags":"python,pandas,macos,import,sublimetext3","A_Id":54527171,"CreationDate":"2019-02-05T02:42:00.000","Title":"Installed Anaconda to macOS that has Python2.7 and 3.7.
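For the ngram-mapping answer above, a reduced sketch: it builds the fuzzywuzzy similarity features the answer mentions, but uses the simplest possible decision rule (nearest match) instead of a trained classifier:

```python
from fuzzywuzzy import fuzz

OUTPUTS = ["mobile", "desktop"]   # ... the real list has ~30 output words

def features(ngram):
    # One similarity score per known output word.
    return [fuzz.ratio(ngram.lower(), w) for w in OUTPUTS]

def best_match(ngram):
    scores = features(ngram)
    return OUTPUTS[scores.index(max(scores))]

print(best_match("MOBIL"))   # 'mobile'
```

With a labeled sample of ngrams, the same feature vectors could instead be fed to any scikit-learn classifier, as the answer suggests.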
Pandas only importing to 2.7; how can I import to 3.7?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have a simple binary classification problem, and I want to assess the learning feasibility using Hoeffding's Inequality and, if possible, the VC dimension. \nI understand the theory but I am still stuck on how to implement it in Python.\nI understand that the In-sample Error (Ein) is the training error. The Out-of-sample Error (Eout) is the error on the test subsample, I guess.\nBut how do I plot the difference between these two errors against the Hoeffding bound?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":172,"Q_Id":54534664,"Users Score":0,"Answer":"Well, here is how I handled it: I generate multiple train\/test samples, run the algorithm on them, calculate Ein as the train set error and Eout as estimated by the test set error, and calculate how many times their difference exceeds the value of epsilon (for a range of epsilons). Then I plot the curve of these rates of exceeding epsilon together with the curve of the right-hand side of the Hoeffding\/VC inequality, so I can see whether the differences curve is always under the Hoeffding\/VC bound curve; this informs me about the learning feasibility.","Q_Score":0,"Tags":"python,machine-learning","A_Id":54809376,"CreationDate":"2019-02-05T12:40:00.000","Title":"How to check learning feasibility on a binary classification problem with Hoeffding's inequality\/VC dimension with Python?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":1,"REVIEW":0},{"Question":"Please help me with this. I'd really appreciate it. I have tried a lot of things but nothing is working. Please suggest any ideas you have.\nThis is what it keeps saying:\n name = imput('hello')\nNameError: name 'imput' is not defined","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":42,"Q_Id":54561923,"Users Score":1,"Answer":"You misspelled input as imput. imput() is not a function that python recognizes - thus, it assumes it's the name of some variable, searches for wherever that variable was declared, and finds nothing. So it says \"this is undefined\" and raises an error.","Q_Score":0,"Tags":"python-3.x","A_Id":54561947,"CreationDate":"2019-02-06T20:20:00.000","Title":"python keeps saying that 'imput is undefined. how do I fix this?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"Here is a scenario for a system where I am trying to understand what is what:\nI'm Joe, a novice programmer and I'm broke.
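A sketch of the bound side of the comparison described in the Hoeffding answer above, using the single-hypothesis form P(|Ein - Eout| > eps) <= 2*exp(-2*eps^2*N); the empirical exceed rates would come from your own train/test repetitions:

```python
import numpy as np

N = 500                                   # hypothetical sample size used for Ein
eps = np.linspace(0.01, 0.5, 50)          # range of epsilons to test
bound = 2 * np.exp(-2 * eps**2 * N)       # Hoeffding: P(|Ein-Eout| > eps) <= bound

# exceed_rate[i] comes from your experiments: the fraction of train/test
# repetitions in which |Ein - Eout| > eps[i]. Learning looks feasible when
# that empirical curve stays below `bound` everywhere.
print(bound[:5])
```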
Since I'm broke, I cannot afford another machine for each piece of my system, thus the web server, application and database all live on my one machine.\nI've never deployed an app before, but I know that a server can refer to a machine or software. From here on, let's call the physical machine the Rack. I've loaded an instance of MongoDB on my machine and I know that is the Database Server. In order to handle API requests, I need something on the rack that will handle HTTP\/S requests, so I install and run an instance of NGINX on it and I know that this is the Web Server. However, my web server doesn't know how to run the app, so I do some research and learn about WSGI and come to find out I need another component. So I install and run an instance of Gunicorn and I know that this is the WSGI Server. \nAt this point I have a rack that is home to a web server to handle API calls (really just acts as a reverse proxy and pushes requests to the WSGI server), a WSGI server that serves up dynamic content from my app and a database server that stores client information used by the app.\nI think I've got my head on straight, then my friend asks \"Where is your Application Server?\"\nIs there an application server in this configuration? Do I need one?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":59,"Q_Id":54565501,"Users Score":0,"Answer":"Any basic server architecture has three layers. On one end is the web server, which fulfills requests from clients. The other end is the database server, where the data resides.\nIn between these two is the application server. It consists of the business logic required to interact with the web server to receive the request, and then with the database server to perform operations. \nIn your configuration, the WSGI server\/Flask app is the application server.\nMost application servers can double up as web servers.","Q_Score":0,"Tags":"python,web-applications,webserver,terminology,application-server","A_Id":54565677,"CreationDate":"2019-02-07T02:36:00.000","Title":"Understanding each component of a web application architecture","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"After training, the trained model will be saved in H5 format. But I don't know how that H5 file can be used as a classifier to classify new data. How does an H5 model work in theory when classifying new data?","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":1209,"Q_Id":54566249,"Users Score":1,"Answer":"When you save your model as an h5 file, you save the model structure, all its parameters and further information like the state of your optimizer and so on. It is just an efficient way to save huge amounts of information. You could use json or xml file formats to do this as well. \nYou can't classify anything using only this file (it is not executable). You have to rebuild the graph as a tensorflow graph from this file. To do so you simply use the load_model() function from keras, which returns a keras.models.Model object. Then you can use this object to classify new data, with the keras predict() function.","Q_Score":2,"Tags":"python,python-3.x,keras,deep-learning,conv-neural-network","A_Id":54573011,"CreationDate":"2019-02-07T04:21:00.000","Title":"How keras model H5 works in theory","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I'm trying to use the pyautogui module for python to automate mouse clicks and movements. However, it doesn't seem to be able to recognise any monitor other than my main one, which means I'm not able to input any actions on any of my other screens, and that is a huge problem for the project I am working on.\nI've searched google for 2 hours but I can't find any straight answers on whether or not it's actually possible to work around. If anyone could either tell me that it is or isn't possible, tell me how to do it if it is, or suggest an equally effective alternative (for python), I would be extremely grateful.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":5548,"Q_Id":54581006,"Users Score":0,"Answer":"Not sure if this is clear, but I subtracted an extended monitor's horizontal resolution from 0, because my 2nd monitor is on the left of my primary display. That allowed me to avoid the out-of-bounds warning. My answer probably isn't the clearest, but I figured I would chime in to let folks know it actually can work.","Q_Score":4,"Tags":"python,automation","A_Id":67995225,"CreationDate":"2019-02-07T19:36:00.000","Title":"Using pyautogui with multiple monitors","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":1},{"Question":"So, a bit of a strange question, but let's say that I have a document (jupyter notebook) and I want to be able to prove to someone that it was made before a certain date, or that it was created on a certain date - does anyone have any ideas as to how I'd achieve that?\nIt would need to be a solution that couldn't be technically re-engineered after the fact (faking the creation date).\nKeen to hear your thoughts :) !","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":38,"Q_Id":54582391,"Users Score":0,"Answer":"Email it to yourself or a trusted party \u2013 dandavis yesterday\nGood solution.\nThanks!","Q_Score":0,"Tags":"python,validation,authentication,encryption,jupyter-notebook","A_Id":54601557,"CreationDate":"2019-02-07T21:14:00.000","Title":"How to encrypt(?) a document to prove it was made at a certain time?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I'm running my Jupyter Notebook using Pytorch on Google Colab. After I received the 'Cuda assert fails: device-side assert triggered' error, I am unable to run any other code that uses my pytorch module.
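A sketch of the load-and-predict flow from the Keras answer above; the file name and input shape are hypothetical:

```python
import numpy as np
from keras.models import load_model

model = load_model("model.h5")            # rebuilds the graph saved after training

x_new = np.random.rand(1, 224, 224, 3)    # hypothetical input shape
probs = model.predict(x_new)              # scores for the new sample
print(probs.argmax(axis=-1))              # predicted class index
```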
Does anyone know how to reset my code so that my Pytorch functions that were working before can still run?\nI've already tried implementing CUDA_LAUNCH_BLOCKING=1 but my code still doesn't work, as the Assert is still triggered!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":11545,"Q_Id":54585685,"Users Score":9,"Answer":"You need to reset the Colab notebook. To run existing Pytorch modules that used to work before, you have to do the following:\n\nGo to 'Runtime' in the tool bar\nClick 'Restart and Run all'\n\nThis will reset your CUDA assert and flush out the module so that you can have another shot at avoiding the error!","Q_Score":3,"Tags":"python,pytorch,google-colaboratory,tensor","A_Id":54585712,"CreationDate":"2019-02-08T03:38:00.000","Title":"How to reset Colab after the following CUDA error 'Cuda assert fails: device-side assert triggered'?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":1,"REVIEW":0},{"Question":"I use the toolchain from Kivy to compile a Python + Kivy project on macOS, but by default the toolchain uses python2 recipes, and I need to change to python3.\nI'm googling but I can't find how I can do this.\nAny idea?\nThanks","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":593,"Q_Id":54587891,"Users Score":0,"Answer":"For example, the \"ios\" and \"pyobjc\" recipes' dependency is changed from depends = [\"python\"] to depends = [\"python3\"] (in __init__.py in each package in the recipe folder of the kivy-ios package).\nThese recipes are loaded from your request, implicitly or explicitly.\nThe problem recipes are those that require hostpython2\/python2, which then conflicts with python3.\nThe dependency of each recipe can be traced from the output of kivy-ios: \"hostpython\" or \"python\" in the console output correspond to hostpython2 or python2 (in the current version).","Q_Score":0,"Tags":"python,xcode,macos,kivy","A_Id":60445492,"CreationDate":"2019-02-08T07:38:00.000","Title":"How change hostpython for use python3 on MacOS for compile Python+Kivy project for Xcode","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I use the toolchain from Kivy to compile a Python + Kivy project on macOS, but by default the toolchain uses python2 recipes, and I need to change to python3.\nI'm googling but I can't find how I can do this.\nAny idea?\nThanks","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":593,"Q_Id":54587891,"Users Score":0,"Answer":"Your kivy installation is likely fine already. Your kivy-ios installation is not. Completely remove your kivy-ios folder on your computer, then do git clone git:\/\/github.com\/kivy\/kivy-ios to reinstall kivy-ios. Then try using toolchain.py to build python3 instead of python 2.\nThis solution worked for me.
Thanks very much, Erik.","Q_Score":0,"Tags":"python,xcode,macos,kivy","A_Id":54769487,"CreationDate":"2019-02-08T07:38:00.000","Title":"How change hostpython for use python3 on MacOS for compile Python+Kivy project for Xcode","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"As the title says, I know there are some models supporting streaming learning, like classification models, and such a model has the function partial_fit().\nNow I'm studying regression models like SVR and the RF regressor, etc. in scikit.\nBut most regression models don't support partial_fit.\nSo I want to achieve the same effect in a neural network. How do I do that in tensorflow? Is there any keyword?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":29,"Q_Id":54607881,"Users Score":0,"Answer":"There is no special function for it in TensorFlow. You make a single training pass over a new chunk of data. And then another training pass over another new chunk of data, etc., till you reach the end of the data stream (which, hopefully, will never happen).","Q_Score":0,"Tags":"python,tensorflow,machine-learning","A_Id":54608871,"CreationDate":"2019-02-09T15:50:00.000","Title":"How to reach streaming learning in Neural network?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I've been coding a text editor, and it has the function to change the default font displayed in the wx.stc.StyledTextCtrl. \nI would like to be able to save the font as a user preference, and I have so far been unable to save it. \nThe exact object type is .\nWould anyone know how to pickle\/save this?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":103,"Q_Id":54615125,"Users Score":1,"Answer":"Probably due to its nature, you cannot pickle a wx.Font.\nYour remaining option is to store its constituent parts.\nPersonally, I store facename, point size, weight, slant, underline, text colour and background colour.\nHow you store them is your own decision.\nI use 2 different options depending on the code. \n\nStore the entries in an sqlite3 database, which allows for multiple\nindexed entries.\nStore the entries in an .ini file using\nconfigobj \n\nsqlite3 is available in the python standard library; configobj is a separate package installable with pip.","Q_Score":0,"Tags":"python,fonts,wxpython,pickle,wxstyledtextctrl","A_Id":54687573,"CreationDate":"2019-02-10T09:38:00.000","Title":"How to pickle or save a WxPython FontData Object","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have one Django app and in the view of that I am using the gzip_str(str) method to compress data and send it back to the browser. Now I want to get the original string back in the browser. How can I decode the string in JS?\nP.S.
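A sketch of the store-the-parts approach from the wx.Font answer above (colours are kept separately, since they are not part of the font object):

```python
import wx

def font_to_dict(font):
    # wx.Font itself cannot be pickled, so store its constituent parts.
    return {
        "faceName": font.GetFaceName(),
        "pointSize": font.GetPointSize(),
        "family": font.GetFamily(),
        "style": font.GetStyle(),       # slant lives here (normal/italic/slant)
        "weight": font.GetWeight(),
        "underline": font.GetUnderlined(),
    }

def font_from_dict(d):
    return wx.Font(d["pointSize"], d["family"], d["style"],
                   d["weight"], d["underline"], d["faceName"])
```

The resulting dict can then be written to sqlite3, a configobj .ini file, or plain json, as the answer describes.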
I have found a few questions here related to decoding a gzip string in JavaScript, but I could not figure out how to use those answers. Please tell me how I can decode and get the original string.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":2530,"Q_Id":54615204,"Users Score":0,"Answer":"Serve the string with an appropriate Content-Encoding, then the browser will decode it for you.","Q_Score":0,"Tags":"javascript,django,python-3.x,gzip","A_Id":60143329,"CreationDate":"2019-02-10T09:51:00.000","Title":"how to decode gzip string in JS","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am using Anaconda. I would like to know how to remove or uninstall unwanted packages from the base environment. I am using another environment for my coding purposes. \nI tried to update my environment by using a yml file (not the base environment). Unexpectedly, some packages from the yml were installed into the base environment. So now it has 200 python packages, which the other environment also has. I want to clear the unwanted packages in the base environment; I am not using any packages there. Also, my disk is full because of this. \nPlease give me a solution to remove unwanted packages from the base environment in anaconda. \nIt is very hard to remove each package one by one, therefore I am looking for a better solution.","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":19414,"Q_Id":54617690,"Users Score":0,"Answer":"Please use the below code:\nconda uninstall -n base ","Q_Score":5,"Tags":"python,anaconda","A_Id":54617754,"CreationDate":"2019-02-10T15:03:00.000","Title":"How to remove unwanted python packages from the Base environment in Anaconda","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"Say, there is a module a which, among all other stuff, exposes some submodule a.b.\nAFAICS, it is desirable to maintain modules in such a fashion that one types import a, import a.b and then invokes something b-specific in the following way: a.b.b_specific_function() or a.a_specific_function(). \nThe question I'd like to ask is how to achieve such an effect?\nThere is a directory a and there is a source-code file a.py inside of it. That seems to be the logical choice, though it would look like import a.a then, rather than import a. The only way I see is to put a.py's code into the __init__.py in the a directory, though that feels definitely wrong...\nSo how do I keep my namespaces clean?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":53,"Q_Id":54622423,"Users Score":1,"Answer":"You can put the code into __init__.py. There is nothing wrong with this for a small subpackage. 
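For the gzip-in-the-browser answer above, a sketch of what "serve with an appropriate Content-Encoding" could look like in a Django view (view name and payload are hypothetical):

```python
import gzip
from django.http import HttpResponse

def gzipped_view(request):
    payload = gzip.compress(b"the original string")
    response = HttpResponse(payload, content_type="text/plain")
    # The browser sees this header and decompresses transparently,
    # so the JS side receives the original string with no extra work.
    response["Content-Encoding"] = "gzip"
    response["Content-Length"] = str(len(payload))
    return response
```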
If the code grows large it is also common to have a submodule with a repeated name like a\/a.py and then inside __init__.py import it using from .a import *.","Q_Score":1,"Tags":"python","A_Id":54622455,"CreationDate":"2019-02-11T00:05:00.000","Title":"Pythonic way to split project into modules?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"[question body lost in extraction]","Answer":"In Configure IDLE => Settings => Highlights there is a highlight setting for builtin names (default purple), including a few non-functions like Ellipsis. There is another setting for the names in def (function) and class statements (default blue). You can make def (and class) names be purple also.\nThis will not make function names purple when used, because the colorizer does not know what the name will be bound to when the code is run.","Q_Score":0,"Tags":"function,python-3.5,python-idle","A_Id":54731071,"CreationDate":"2019-02-16T07:10:00.000","Title":"How to make functions appear purple","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I've been working for a while with some cheap PIR modules and a raspberry pi 3. My aim is to use 4 of these guys to understand if a room is empty, and turn off some lights in that case.\nNow, these lovely sensors aren't really precise. They false-trigger from time to time, they don't trigger right after their status has changed, and this makes things much harder.\nI thought I could solve the problem by measuring a sort of \"density of triggers\", meaning how many triggers occurred during the last 60 seconds or so.\nMy question is how I could implement this solution effectively. I thought of building a sort of container and filling it with elements using a timer or something, but I'm not really sure this would do the trick.\nThank you!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":24,"Q_Id":54733751,"Users Score":0,"Answer":"How are you powering the PIR sensors? They should be powered with 5V. I had a similar problem with false triggers when I powered the PIR sensor with only 3.3V.","Q_Score":0,"Tags":"python,raspberry-pi3","A_Id":54741625,"CreationDate":"2019-02-17T13:30:00.000","Title":"Count number of Triggers in a given Span of Time","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":1,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have a for loop in Python in Pycharm IDE. I have 20 iterations of the for loop. However, the bug seems to be coming from the dataset looped during the 18th iteration. Is it possible to skip the first 17 values of the for loop, and solely jump to debug the 18th iteration?\nCurrently, I have been going through all 17 iterations to reach the 18th. The logic encompassed in the for loop is quite intricate and long. 
Hence, every cycle of debugging through each iteration takes a very long time. \nIs there some way to skip to the desired iteration in Pycharm without going into in-depth debugging of the previous iterations?","AnswerCount":3,"Available Count":1,"Score":-0.0665680765,"is_accepted":false,"ViewCount":1359,"Q_Id":54739728,"Users Score":-1,"Answer":"You can set a break point with a condition (i == 17 [right click on the breakpoint to put it]) at the start of the loop.","Q_Score":1,"Tags":"python,for-loop,debugging,pycharm","A_Id":68756853,"CreationDate":"2019-02-18T02:33:00.000","Title":"While debugging in pycharm, how to debug only through a certain iteration of the for loop?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"Note: I am not simply asking how to execute a Python script within Jupyter, but how to evaluate a python variable which would then result in the full path of the Python script I want to execute.\nIn my particular scenario, some previous cell on my notebook generates a path based on some condition.\nExample of two possible cases:\n\nscript_path = \/project_A\/load.py \nscript_path = \/project_B\/load.py\n\nThen some time later, I have a cell where I just want to execute the script. Usually, I would just do:\n%run -i \/project_A\/load.py \nbut I want to keep the cell's code generic by doing something like:\n%run -i script_path\nwhere script_path is a Python variable whose value is based on the conditions that are evaluated earlier in my Jupyter notebook.\nThe above would not work because Jupyter would then complain that it cannot find script_path.py. \nAny clues how I can have a Python variable passed to the %run magic?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":51,"Q_Id":54752228,"Users Score":0,"Answer":"One hacky way would be to change the directory via %cd path \nand then run the script with %run -i file.py\nE: I know that this is not exactly what you were asking but maybe it helps with your problem.","Q_Score":0,"Tags":"python,jupyter-notebook","A_Id":54752417,"CreationDate":"2019-02-18T17:11:00.000","Title":"How to evaluate the path to a python script to be executed within Jupyter Notebook","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am trying to implement doc2vec, but I am not sure what the input for the model should look like if I have pretrained word2vec vectors.\nThe problem is that I am not sure how to theoretically use pretrained word2vec vectors for doc2vec. 
I imagine that I could prefill the hidden layer with the vectors and fill the rest of the hidden layer with random numbers.\nAnother idea is to use the vector as input for a word instead of a one-hot encoding, but I am not sure if the output vectors for docs would make sense.\nThank you for your answer!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1101,"Q_Id":54762478,"Users Score":4,"Answer":"You might think that Doc2Vec (aka the 'Paragraph Vector' algorithm of Mikolov\/Le) requires word-vectors as a 1st step. That's a common belief, and perhaps somewhat intuitive, by analogy to how humans learn a new language: understand the smaller units before the larger, then compose the meaning of the larger from the smaller. \nBut that's a common misconception, and Doc2Vec doesn't do that. \nOne mode, pure PV-DBOW (dm=0 in gensim), doesn't use conventional per-word input vectors at all. And, this mode is often one of the fastest-training and best-performing options. \nThe other mode, PV-DM (dm=1 in gensim, the default) does make use of neighboring word-vectors, in combination with doc-vectors in a manner analogous to word2vec's CBOW mode \u2013 but any word-vectors it needs will be trained-up simultaneously with doc-vectors. They are not trained 1st in a separate step, so there's no easy splice-in point where you could provide word-vectors from elsewhere. \n(You can mix skip-gram word-training into the PV-DBOW, with dbow_words=1 in gensim, but that will train word-vectors from scratch in an interleaved, shared-model process.)\nTo the extent you could pre-seed a model with word-vectors from elsewhere, it wouldn't necessarily improve results: it could easily send their quality sideways or worse. It might in some lucky well-managed cases speed model convergence, or be a way to enforce vector-space-compatibility with an earlier vector-set, but not without extra gotchas and caveats that aren't a part of the original algorithms, or well-described practices.","Q_Score":0,"Tags":"python,machine-learning,nlp,word2vec,doc2vec","A_Id":54777057,"CreationDate":"2019-02-19T09:11:00.000","Title":"How to use pretrained word2vec vectors in doc2vec model?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I know how to convert characters to ascii and stuff, and I'm making my first encryption algorithm just as a little fun project, nothing serious. I was wondering if there was a way to convert every other character in a string to ascii. I know this is similar to some other questions, but I don't think it's a duplicate. Also, P.S. I'm fairly new to Python :)","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":80,"Q_Id":54798259,"Users Score":0,"Answer":"Use the ord() function to get the ascii value of a character. 
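To make the gensim modes named in the doc2vec answer concrete, a toy sketch of pure PV-DBOW (dm=0); the corpus and parameters are placeholders:

```python
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

# Toy corpus; a real one needs far more documents
docs = [TaggedDocument(words=["machine", "learning", "is", "fun"], tags=[0]),
        TaggedDocument(words=["deep", "learning", "with", "vectors"], tags=[1])]

# dm=0 selects pure PV-DBOW, which trains no conventional word-vectors;
# adding dbow_words=1 would interleave skip-gram word training instead
model = Doc2Vec(docs, dm=0, vector_size=50, min_count=1, epochs=40)

vec = model.dv[0]  # gensim 4.x attribute; older releases use model.docvecs
```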
You can then do a chr() of that value to get the character.","Q_Score":0,"Tags":"python,string,ascii","A_Id":54799855,"CreationDate":"2019-02-21T02:24:00.000","Title":"How to convert every other character in a string to ascii in Python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am making APIs.\nI'm using CentOS for the web server, and another Windows Server 2016 machine for the API server.\nI'm trying to make things work between the web server and the Windows server.\nMy logic follows this flow:\n1) Fill in the data form and click a button on the web server\n2) Send the data to the Windows server\n3) A Python script runs and generates more data\n4) The generated data must be sent back to the web server\n5) The web server receives the generated data\n6) BAMM! The data appears in the browser!\nI have already written the Python scripts,\nbut I can't decide how to move the data between the two servers.\nShould I use ajax or curl on the web server?\nI was planning to send a POST request with curl from the web server to the Windows server.\nBut I don't know how to receive that data on the Windows server.\nPlease help! Thank you in advance.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":213,"Q_Id":54799927,"Users Score":1,"Answer":"First option: (Recommended)\nYou can create the python side as an API endpoint and from the PHP server, you need to call the python API.\nSecond option:\nYou can create the python side just like a normal webpage, and whenever you call that page from the PHP server you pass the params along with the HTTP request; after receiving the data in python you print the data in JSON format.","Q_Score":0,"Tags":"php,python,curl","A_Id":54800059,"CreationDate":"2019-02-21T05:36:00.000","Title":"Run python script by PHP from another server","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"Every example I've found thus far for development with Kivy with regard to switching screens is always done using a button, although the user experience doesn't feel very \"native\" or \"smooth\" for the kind of app I would like to develop.\nI was hoping to incorporate swiping the screen to change the active screen.\nI can sort of imagine how to do this by tracking the user's on_touch_down() and on_touch_up() coords (spos) and, if the difference is great enough, switching over to the next screen in a list of screens, although I can't envision how this could be implemented within the kv language.\nPerhaps some examples could help me wrap my head around this better?\nP.S.\nI want to keep as much UI code within the kv language file as possible to prevent my project from producing a spaghetti-code sort of feel to it. 
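A tiny sketch of the ord()/chr() answer applied to the original every-other-character question:

```python
def every_other_to_ascii(s):
    # Characters at even indices become their ASCII codes; odd ones stay
    return "".join(str(ord(c)) if i % 2 == 0 else c
                   for i, c in enumerate(s))

print(every_other_to_ascii("secret"))  # -> 115e99r101t
print(chr(115))                        # -> s (chr reverses ord)
```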
I'm also rather new to Kivy development altogether, so I apologize if this question has an official answer somewhere and I just missed it.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":237,"Q_Id":54805487,"Users Score":0,"Answer":"You might want to use a Carousel instead of ScreenManager, but if you want that logic while using the ScreenManager, you'll certainly have to write some python code to manage that in a subclass of it, then use it in kv as a normal ScreenManager. Use the previous and next properties to get the right screen to switch to depending on the action. This kind of logic is better done in python, and that doesn't prevent using the widgets in kv after.","Q_Score":0,"Tags":"android,python,kivy,screen,swipe","A_Id":54843815,"CreationDate":"2019-02-21T11:00:00.000","Title":"Kivy Android App - Switching screens with a swipe","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"is it possible to code in python inside android studio? \nHow can I do it?\nI have an android app that I am trying to develop, and I want to code some part in python.\nThanks for the help","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":10022,"Q_Id":54809592,"Users Score":4,"Answer":"If you mean coding part of your Android application in python (and another part, for example, in Java), it's not possible for now. However, you can write a Python script and include it in your project, then write a part in your application that will invoke it somehow. Also, you can use Android Studio as a text editor for Python scripts. To develop apps for Android in Python you have to use a proper library for it.","Q_Score":7,"Tags":"android,python-3.x,android-studio","A_Id":54809908,"CreationDate":"2019-02-21T14:37:00.000","Title":"is it possible to code in python inside android studio?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I'm working on a project with a few scripts in the same directory; a __pycache__ folder has been created within that directory, and it contains compiled versions of two of my scripts. This happened by accident; I do not know how I did it. One thing I do know is that I have imported functions between the two scripts that have been compiled. \nI would like a third compiled python script for a separate file, however I do not want to import any modules (if importing is even the cause). Does anyone know how I can manually create a .cpython-37 file? Any help is appreciated.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1413,"Q_Id":54823553,"Users Score":1,"Answer":"There is really no reason to worry about __pycache__ or *.pyc files - these are created and managed by the Python interpreter when it needs them and you cannot \/ should not worry about manually creating them. They contain a cached version of the compiled Python bytecode. 
Creating them manually makes no sense (and I am not aware of a way to do that), and you should probably let the interpreter decide when it makes sense to cache the bytecode and when it doesn't. \nIn Python 3.x, __pycache__ directories are created for modules when they are imported by a different module. AFAIK Python will not create a __pycache__ entry when a script is run directly (e.g. a \"main\" script), only when it is imported as a module.","Q_Score":0,"Tags":"python","A_Id":54824065,"CreationDate":"2019-02-22T09:08:00.000","Title":"How to create .cpython-37 file, within __pycache__","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I need to install python packages in a windows server 2016 sandbox for running a developed python model in production. This machine doesn't have an internet connection. \nMy laptop is Windows 10, the model is now running on my machine, and I need to push it to the server.\nMy question is how can I install all the required packages on my server which has no internet connection.\nThanks\nMithun","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1269,"Q_Id":54824614,"Users Score":0,"Answer":"A simple way is to install the same python version on another machine having internet access, and use pip normally on that machine. This will download a bunch of files and install them cleanly under Lib\\site-packages of your Python installation.\nYou can then copy that folder to the server Python installation. If you want to be able to add packages later, you should keep both installations in sync: do not add or remove any package on the laptop without syncing with the server.","Q_Score":0,"Tags":"python,windows,windows-server-2016","A_Id":54824942,"CreationDate":"2019-02-22T10:05:00.000","Title":"Install python packages in windows server 2016 which has no internet connection","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"For background, I am somewhat of a self-taught Python developer with only some formal training with a few CS courses in school. \nIn my job right now, I am working on a Python program that will automatically parse information from a very large text file (thousands of lines) that's the output of simulation software. I would like to be doing test driven development (TDD) but I am having a hard time understanding how to write proper unit tests. \nMy trouble is, the outputs of some of my functions (units) are massive data structures that are parsed versions of the text file. I could go through and create those outputs manually and then test, but it would take a lot of time. The whole point of a parser is to save time and create structured outputs. The only testing I've been doing so far is manual trial and error, which is also cumbersome. \nSo my question is, are there more intuitive ways to create tests for parsers? 
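One addition to the __pycache__ answer above: the standard library's py_compile module can in fact produce the cached bytecode on demand, so manual creation is possible even if rarely useful (the script name is hypothetical):

```python
import py_compile

# Writes __pycache__/myscript.cpython-XY.pyc next to the source;
# the XY suffix matches the interpreter running this call.
py_compile.compile("myscript.py")
```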
\nThank you in advance for any help!","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":364,"Q_Id":54833354,"Users Score":2,"Answer":"Usually parsers are tested using a regression testing system. You create sample input sets and verify that the output is correct. Then you put the input and output in libraries. Each time you modify the code, you run the regression test system over the library to see if anything changes.","Q_Score":1,"Tags":"python,unit-testing,parsing","A_Id":54833388,"CreationDate":"2019-02-22T18:47:00.000","Title":"How to write unit tests for text parser?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":1,"REVIEW":0},{"Question":"I use miniconda as my default python installation. What is the current (2019) wisdom regarding when to install something with conda vs. pip?\nMy usual behavior is to install everything with pip, and only using conda if a package is not available through pip or the pip version doesn't work correctly.\nAre there advantages to always favoring conda install? Are there issues associated with mixing the two installers? What factors should I be considering?\n\nOBJECTIVITY: This is not an opinion-based question! My question is when I have the option to install a python package with pip or conda, how do I make an informed decision? Not \"tell me which is better, but \"Why would I use one over the other, and will oscillating back & forth cause problems \/ inefficiencies?\"","AnswerCount":6,"Available Count":1,"Score":0.1651404129,"is_accepted":false,"ViewCount":17147,"Q_Id":54834579,"Users Score":5,"Answer":"This is what I do: \n\nActivate your conda virutal env\nUse pip to install into your virtual env\nIf you face any compatibility issues, use conda\n\nI recently ran into this when numpy \/ matplotlib freaked out and I used the conda build to resolve the issue.","Q_Score":35,"Tags":"python,python-3.x,pip,conda,miniconda","A_Id":54834690,"CreationDate":"2019-02-22T20:17:00.000","Title":"Specific reasons to favor pip vs. conda when installing Python packages","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I'm trying to make a discord bot, and I read that I need to have an older version of Python so my code will work. I've tried using \"import discord\" on IDLE but an error message keeps on coming up. 
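A sketch of the regression-testing idea from the parser answer above, using unittest and a committed golden file; all file and module names are hypothetical:

```python
import json
import unittest

from myparser import prepare_text  # hypothetical parser under test

class TestParserRegression(unittest.TestCase):
    def test_known_sample(self):
        # Small, hand-verified input stored next to the tests
        result = prepare_text("tests/sample_input.txt")
        with open("tests/expected_output.json") as f:
            expected = json.load(f)
        # Any code change that alters parsing shows up as a diff here
        self.assertEqual(result, expected)

if __name__ == "__main__":
    unittest.main()
```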
How can I use Python 3.6 and keep Python 3.7 on my Windows 10 computer?","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":323,"Q_Id":54852821,"Users Score":0,"Answer":"Install Python 3.6 in a different folder than your existing Python 3.7, then update your path\nUse Virtualenv and\/or Pyenv\nUse Docker\n\nHope it helps!","Q_Score":2,"Tags":"python,discord","A_Id":54853052,"CreationDate":"2019-02-24T14:21:00.000","Title":"how can I use python 3.6 if I have python 3.7?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":1,"REVIEW":0},{"Question":"I'm trying to make a discord bot, and I read that I need to have an older version of Python so my code will work. I've tried using \"import discord\" on IDLE but an error message keeps on coming up. How can I use Python 3.6 and keep Python 3.7 on my Windows 10 computer?","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":323,"Q_Id":54852821,"Users Score":0,"Answer":"Just install it in a different folder (e.g. if the current one is in C:\\Users\\noob\\AppData\\Local\\Programs\\Python\\Python37, install 3.6 to C:\\Users\\noob\\AppData\\Local\\Programs\\Python\\Python36).\nNow, when you want to run a script, right-click the file and under \"edit with IDLE\" there will be multiple versions to choose from. Works on my machine :)","Q_Score":2,"Tags":"python,discord","A_Id":54853010,"CreationDate":"2019-02-24T14:21:00.000","Title":"how can I use python 3.6 if I have python 3.7?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":1,"REVIEW":0},{"Question":"I'm currently in the process of learning how to use the Python Pyramid web framework, and have found the documentation to be quite excellent.\nI have, however, hit a stumbling block when it comes to distinguishing the idea of a \"model\" (i.e. a class defined under SQLAlchemy's declarative system) from the idea of a \"resource\" (i.e. a means of defining access control lists on views for use with Pyramid's auth system).\nI understand the above statements seem to show that I already understand the difference, but I'm having trouble understanding whether I should be making models resources (by adding the __acl__ attribute directly in the model class) or creating a separate resource class (which has the proper __parent__ and __name__ attributes) which represents the access to a view which uses the model.\nAny guidance is appreciated.","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":181,"Q_Id":54869002,"Users Score":1,"Answer":"I'm having trouble understanding whether I should be making models resources (by adding the acl attribute directly in the model class) or creating a separate resource class\n\nThe answer depends on what level of coupling you want to have. For a simple app, I would recommend making models resources just for simplicity's sake. 
But for a complex app with a high level of cohesion and low level of coupling it would be better to have models separated from resources.","Q_Score":5,"Tags":"python,python-3.x,pyramid","A_Id":54891000,"CreationDate":"2019-02-25T15:00:00.000","Title":"Is a Pyramid \"model\" also a Pyramid \"resource\"?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"Currently I am working to learn how to use Gtk3 with Python 3.6. So far I have been able to use a combination of resources to piece together a project I am working on, some old 2.0 references, some 3.0 shallow reference guides, and using the python3 interpreters help function.\nHowever I am stuck at how I could customise the statusbar to display a progressbar. Would I have to modify the contents of the statusbar to add it to the end(so it shows up at the right side), or is it better to build my own statusbar?\nAlso how could I modify the progressbars color? Nothing in the materials list a method\/property for it.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":217,"Q_Id":54875813,"Users Score":2,"Answer":"GtkStatusbar is a subclass of GtkBox. You can use any GtkBox method including pack_start and pack_end or even add, which is a method of GtkContainer. \nThus you can simply add you progressbar to statusbar.","Q_Score":0,"Tags":"python-3.x,progress-bar,gtk3,statusbar","A_Id":54889989,"CreationDate":"2019-02-25T22:42:00.000","Title":"Python Gtk3 - Custom Statusbar w\/ Progressbar","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":1},{"Question":"Instead of creating many topics I'm creating a partition for each consumer and store data using a key. So is there a way to make a consumer in a consumer group read from partition that stores data of a specific key. If so can you suggest how it can done using kafka-python (or any other library).","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":205,"Q_Id":54878692,"Users Score":0,"Answer":"Instead of using the subscription and the related consumer group logic, you can use the \"assign\" logic (it's provided by the Kafka consumer Java client for example).\nWhile with subscription to a topic and being part of a consumer group, the partitions are automatically assigned to consumers and re-balanced when a new consumer joins or leaves, it's different using assign.\nWith assign, the consumer asks to be assigned to a specific partition. It's not part of any consumer group. It's also mean that you are in charge of handling rebalancing if a consumer dies: for example, if consumer 1 get assigned partition 1 but at some point it crashes, the partition 1 won't be reassigned automatically to another consumer. 
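A minimal PyGObject sketch of the Gtk3 statusbar answer above (pack_end puts the bar on the right); note that in GTK3 the progressbar's colour would be changed via CSS with Gtk.CssProvider rather than a widget method:

```python
import gi
gi.require_version("Gtk", "3.0")
from gi.repository import Gtk

statusbar = Gtk.Statusbar()
progress = Gtk.ProgressBar()

# Statusbar inherits from Gtk.Box, so the usual packing methods work
statusbar.pack_end(progress, False, False, 0)
progress.set_fraction(0.4)  # 40% done
```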
It's up to you to write and handle the logic for restarting the consumer (or another one) to get messages from partition 1.","Q_Score":0,"Tags":"python,apache-kafka,kafka-consumer-api,kafka-python","A_Id":54879760,"CreationDate":"2019-02-26T04:59:00.000","Title":"Can a consumer read records from a partition that stores data of particular key value?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I'm getting low fps for real-time object detection on my Raspberry Pi.\nI trained the yolo-darkflow object detection on my own data set using my Windows 10 laptop. When I tested the model for real-time detection on my laptop with a webcam it worked fine with high fps.\nHowever, when trying to test it on my Raspberry Pi, which runs Raspbian OS, it gives a very low fps rate of about 0.3, but when I only use the webcam without the yolo it works fine with fast frames. Also, when I use the Tensorflow API for object detection with the webcam on the Pi it also works fine with high fps.\nCan someone suggest something please? Is the reason related to the yolo models, opencv or python? How can I make the fps rate higher and faster for object detection with the webcam?","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":1551,"Q_Id":54881654,"Users Score":0,"Answer":"The Raspberry Pi does not have GPU processors, and because of that it is very hard for it to do image recognition at a high fps.","Q_Score":0,"Tags":"python,opencv,raspberry-pi,object-detection,yolo","A_Id":55558787,"CreationDate":"2019-02-26T08:57:00.000","Title":"how to increase fps for raspberry pi for object detection","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I'm getting low fps for real-time object detection on my Raspberry Pi.\nI trained the yolo-darkflow object detection on my own data set using my Windows 10 laptop. When I tested the model for real-time detection on my laptop with a webcam it worked fine with high fps.\nHowever, when trying to test it on my Raspberry Pi, which runs Raspbian OS, it gives a very low fps rate of about 0.3, but when I only use the webcam without the yolo it works fine with fast frames. Also, when I use the Tensorflow API for object detection with the webcam on the Pi it also works fine with high fps.\nCan someone suggest something please? Is the reason related to the yolo models, opencv or python? 
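For the Kafka answer above, a sketch of the assign logic with kafka-python; the broker address, topic name and partition number are placeholders:

```python
from kafka import KafkaConsumer, TopicPartition

# No subscribe()/group_id: we take manual control of one partition
consumer = KafkaConsumer(bootstrap_servers="localhost:9092")
consumer.assign([TopicPartition("my-topic", 3)])  # partition holding the key

for message in consumer:
    print(message.partition, message.key, message.value)
```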
How can I make the fps rate higher and faster for object detection with the webcam?","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":1551,"Q_Id":54881654,"Users Score":0,"Answer":"My detector on a Raspberry Pi without any accelerator can reach 5 FPS.\nI used SSD MobileNet and quantized it after training.\nTensorflow Lite supplies an object detection demo that can reach about 8 FPS on a Raspberry Pi 4.","Q_Score":0,"Tags":"python,opencv,raspberry-pi,object-detection,yolo","A_Id":66489611,"CreationDate":"2019-02-26T08:57:00.000","Title":"how to increase fps for raspberry pi for object detection","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I'm currently working with Python3 in a Jupyter Notebook. I'm trying to load a text file which is in the exact same directory as my python notebook, but it still doesn't find it. My line of code is:\ntext_data = prepare_text('train.txt')\nand the error is a typical\nFileNotFoundError: [Errno 2] No such file or directory: 'train.txt'\nI've already tried to enter the full path to my text file, but then I still get the same error. \nDoes anyone know how to solve this?","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":3104,"Q_Id":54883612,"Users Score":1,"Answer":"I found the answer. Windows put a second .txt at the end of the file name, so I should have used train.txt.txt instead.","Q_Score":0,"Tags":"python,file,jupyter-notebook","A_Id":54885096,"CreationDate":"2019-02-26T10:41:00.000","Title":"Python3: FileNotFoundError: [Errno 2] No such file or directory: 'train.txt', even with complete path","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":1,"REVIEW":0},{"Question":"I have a dataset of 27 files, each containing opcodes. I want to use stemming to map all versions of similar opcodes into the same opcode. For example: push, pusha, pushb, etc. would all be mapped to push.\nMy dictionary contains 27 keys and each key has a list of opcodes as a value. Since the values contain opcodes and not normal English words, I cannot use the regular stemmer module. I need to write my own stemmer code. Also, I cannot hard-code a custom dictionary that maps different versions of the opcodes to the root opcode because I have a huge dataset. \nI think a regex would be a good idea but I do not know how to use it. Can anyone help me with this or any other idea to write my own stemmer code?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":212,"Q_Id":54890373,"Users Score":0,"Answer":"I would recommend looking at the Levenshtein distance metric - it measures the distance between two words in terms of character insertions, deletions, and replacements (so push and pusha would be distance 1 apart if you do the ~most normal thing of weighing insertions = deletions = replacements = 1 each). Based on the example you wrote, you could try just setting up categories that are all distance 1 from each other. 
However, I don't know if all of your equivalent opcodes will be so similar - if they're not leven might not work.","Q_Score":1,"Tags":"python,regex,nlp,nltk,stemming","A_Id":54891820,"CreationDate":"2019-02-26T16:48:00.000","Title":"Write own stemmer for stemming","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":1,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am able to parse from file using this method: \nfor event, elem in ET.iterparse(file_path, events=(\"start\", \"end\")):\nBut, how can I do the same with fromstring function? Instead of from file, xml content is stored in a variable now. But, I still want to have the events as before.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":319,"Q_Id":54891949,"Users Score":0,"Answer":"From the documentation for the iterparse method:\n\n...Parses an XML section into an element tree incrementally, and\n reports what\u2019s going on to the user. source is a filename or file\n object containing XML data...\n\nI've never used the etree python module, but \"or file object\" says to me that this method accepts an open file-like object as well as a file name. It's an easy thing to construct a file-like object around a string to pass as input to a method like this.\nTake a look at the StringIO module.","Q_Score":2,"Tags":"python","A_Id":54892048,"CreationDate":"2019-02-26T18:30:00.000","Title":"Elementree Fromstring and iterparse in Python 3.x","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I'm learning tensorflow, and the tf.data API confuses me. It is apparently better when dealing with large datasets, but when using the dataset, it has to be converted back into a tensor. But why not just use a tensor in the first place? Why and when should we use tf.data?\nWhy isn't it possible to have tf.data return the entire dataset, instead of processing it through a for loop? When just minimizing a function of the dataset (using something like tf.losses.mean_squared_error), I usually input the data through a tensor or a numpy array, and I don't know how to input data through a for loop. How would I do this?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1075,"Q_Id":54894799,"Users Score":3,"Answer":"The tf.data module has specific tools which help in building a input pipeline for your ML model. A input pipeline takes in the raw data, processes it and then feeds it to the model.\n\n\nWhen should I use tf.data module?\n\nThe tf.data module is useful when you have a large dataset in the form of a file such as .csv or .tfrecord. tf.data.Dataset can perform shuffling and batching of samples efficiently. Useful for large datasets as well as small datasets. It could combine train and test datasets.\n\nHow can I create batches and iterate through them for training?\n\nI think you can efficiently do this with NumPy and np.reshape method. Pandas can read data files for you. Then, you just need a for ... in ... 
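The StringIO suggestion from the iterparse answer above, spelled out:

```python
import io
import xml.etree.ElementTree as ET

xml_data = "<root><child>text</child></root>"

# iterparse wants a filename or file object, so wrap the string
for event, elem in ET.iterparse(io.StringIO(xml_data),
                                events=("start", "end")):
    print(event, elem.tag)
```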
loop to get each batch and pass it to your model.\n\nHow can I feed NumPy data to a TensorFlow model?\n\nThere are two options: use tf.placeholder() or tf.data.Dataset.\n\nThe tf.data.Dataset is a much easier implementation. I recommend using it. It also has a good set of methods.\nThe tf.placeholder creates a placeholder tensor which feeds the data to a TensorFlow graph. This process would consume more time feeding in the data.","Q_Score":3,"Tags":"python,numpy,tensorflow,machine-learning","A_Id":54897167,"CreationDate":"2019-02-26T21:57:00.000","Title":"Why should I use tf.data?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"Right now I'm using virtualenv and just switching over to Pipenv. Today in virtualenv I load in different environment variables and settings depending on whether I'm in development, production, or testing, by setting DJANGO_SETTINGS_MODULE to myproject.settings.development, myproject.settings.production, and myproject.settings.testing. \nI'm aware that I can set an .env file, but how can I have multiple versions of that .env file?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":5010,"Q_Id":54896106,"Users Score":4,"Answer":"You should create different .env files with different prefixes depending on the environment, such as production.env or testing.env. With pipenv, you can use the PIPENV_DONT_LOAD_ENV=1 environment variable to prevent pipenv shell from automatically exporting the .env file and combine this with export $(cat .env | xargs).\nexport $(cat production.env | xargs) && PIPENV_DONT_LOAD_ENV=1 pipenv shell would configure your environment variables for production and then start a shell in the virtual environment.","Q_Score":12,"Tags":"python,virtualenv,pipenv","A_Id":54896172,"CreationDate":"2019-02-27T00:06:00.000","Title":"Pipenv: Multiple Environments","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I'm writing an application in PyQt5 which will be used for calibration and test of a product. The important details:\n\nThe product under test uses an old-school UART\/serial communication link at 9600 baud.\n...and the test \/ calibration operation involves communicating with another device which has a UART\/serial communication link at 300 baud(!)\nIn both cases, the communication protocol is ASCII text with messages terminated by a newline \\r\\n.\n\nDuring the test\/calibration cycle the GUI needs to communicate with the devices, take readings, and log those readings to various boxes in the screen. 
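A compact sketch of the batching-with-tf.data flow described in the answer above (random arrays stand in for real data; the iteration style assumes TF 2.x eager mode):

```python
import numpy as np
import tensorflow as tf

features = np.random.rand(1000, 8).astype("float32")
labels = np.random.rand(1000, 1).astype("float32")

dataset = (tf.data.Dataset.from_tensor_slices((features, labels))
           .shuffle(1000)
           .batch(32)
           .prefetch(1))

for x_batch, y_batch in dataset:  # directly iterable in TF 2.x
    pass  # feed each batch to the model here
```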
The trouble is, with the slow UART communications (and the long time-outs if there is a comms drop-out) how do I keep the GUI responsive?\nThe Minimally Acceptable solution (already working) is to create a GUI which communicates over the serial port, but the user interface becomes decidedly sluggish and herky-jerky while the GUI is waiting for calls to serial.read() to either complete or time out.\nThe Desired solution is a GUI which has a nice smooth responsive feel to it, even while it is transmitting and receiving serial data.\nThe Stretch Goal solution is a GUI which will log every single character of the serial communications to a text display used for debugging, while still providing some nice \"message-level\" abstraction for the actual logic of the application.\nMy present \"minimally acceptable\" implementation uses a state machine where I run a series of short functions, typically including the serial.write() and serial.read() commands, with pauses to allow the GUI to update. But the state machine makes the GUI logic somewhat tricky to follow; the code would be much easier to understand if the program flow for communicating to the device was written in a simple linear fashion.\nI'm really hesitant to sprinkle a bunch of processEvents() calls throughout the code. And even those don't help when waiting for serial.read(). So the correct solution probably involves threading, signals, and slots, but I'm guessing that \"threading\" has the same two Golden Rules as \"optimization\": Rule 1: Don't do it. Rule 2 (experts only): Don't do it yet.\nAre there any existing architectures or design patterns to use as a starting point for this type of application?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":397,"Q_Id":54898967,"Users Score":0,"Answer":"Okay for the past few days I've been digging, and figured out how to do this. Since there haven't been any responses, and I do think this question could apply to others, I'll go ahead and post my solution. Briefly:\n\nYes, the best way to solve this is with with PyQt Threads, and using Signals and Slots to communicate between the threads.\nFor basic function (the \"Desired\" solution above) just follow the existing basic design pattern for PyQt multithreaded GUI applications:\n\n\nA GUI thread whose only job is to display data and relay user inputs \/ commands, and,\nA worker thread that does everything else (in this case, including the serial comms).\n\nOne stumbling point along the way: I'd have loved to write the worker thread as one linear flow of code, but unfortunately that's not possible because the worker thread needs to get info from the GUI at times.\n\n\nThe only way to get data back and forth between the two threads is via Signals and Slots, and the Slots (i.e. the receiving end) must be a callable, so there was no way for me to implement some type of getdata() operation in the middle of a function. 
Instead, the worker thread had to be constructed as a bunch of individual functions, each one of which gets kicked off after it receives the appropriate Signal from the GUI.\n\nGetting the serial data monitoring function (the \"Stretch Goal\" above) was actually pretty easy -- just have the low-level serial transmit and receive routines already in my code emit Signals for that data, and the GUI thread receives and logs those Signals.\n\nAll in all it ended up being a pretty straightforward application of existing principles, but I'm writing it down so hopefully the next guy doesn't have to go down so many blind alleys like I did along the way.","Q_Score":0,"Tags":"python,pyqt5","A_Id":54964121,"CreationDate":"2019-02-27T05:59:00.000","Title":"How to architect a GUI application with UART comms which stays responsive to the user","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I'm new to django, I want to register users using different tables for different users like students, teaching staff, non teaching staff, 3 tables.\nHow can i do it instead of using default auth_users table for registration","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":125,"Q_Id":54906708,"Users Score":0,"Answer":"cf Sam's answer for the proper solutions from a technical POV. From a design POV, \"student\", \"teaching staff\" etc are not entities but different roles a user can have. \nOne curious things with living persons and real-life things in general is that they tend to evolve over time without any respect for our well-defined specifications and classifications - for example it's not uncommon for a student to also have teaching duties at some points, for a teacher to also be studying some other topic, or for a teacher to stop teaching and switch to more administrative tasks. If you design your model with distinct entities instead of one single entitie and distinct roles, it won't properly accomodate those kind of situations (and no, having one account as student and one as teacher is not a proper solution either). 
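A skeleton of the worker-thread pattern the PyQt answer above describes, with one signal per abstraction level; the port attribute is assumed to be an already-open pyserial handle with a timeout:

```python
from PyQt5.QtCore import QThread, pyqtSignal

class SerialWorker(QThread):
    line_received = pyqtSignal(str)   # message-level events for the GUI logic
    raw_traffic = pyqtSignal(bytes)   # every byte, for the debug log pane

    def __init__(self, port):
        super().__init__()
        self.port = port  # assumed: open pyserial Serial with a timeout

    def run(self):
        while not self.isInterruptionRequested():
            data = self.port.readline()  # blocking read, off the GUI thread
            if data:
                self.raw_traffic.emit(data)
                self.line_received.emit(data.decode(errors="replace").strip())
```

The GUI thread connects slots to both signals, so all display updates happen through the event loop rather than from the worker.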
\nThat's why the default user model in Django is based on one single entity (the User model) and features allowing roles definitions (groups and permissions) in such a way that one user can have many roles, whether at the same time or in succession.","Q_Score":1,"Tags":"python,django","A_Id":54907387,"CreationDate":"2019-02-27T13:33:00.000","Title":"how to register users of different kinds using different tables in django?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I'm new to django, I want to register users using different tables for different users like students, teaching staff, non teaching staff, 3 tables.\nHow can i do it instead of using default auth_users table for registration","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":125,"Q_Id":54906708,"Users Score":0,"Answer":"In Django authentication, there is Group model available which have many to many relationship with User model. You can add students, teaching staff and non teaching staff to Group model for separating users by their type.","Q_Score":1,"Tags":"python,django","A_Id":54906887,"CreationDate":"2019-02-27T13:33:00.000","Title":"how to register users of different kinds using different tables in django?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I've been given a simple file-conversion task: whenever an MP4 file is in a certain directory, I do some magic to it and move it to a different directory. Nice and straightforward, and easy to automate.\nHowever, if a user is copying some huge file into the directory, I worry that my script might catch it mid-copy, and only have half of the file to work with.\nIs there a way, using Python 3 on Windows, to check whether a file is done copying (in other words, no process is currently writing to it)?\nEDIT: To clarify, I have no idea how the files are getting there: my script just needs to watch a shared network folder and process files that are put there. They might be copied from a local folder I don't have access to, or placed through SCP, or downloaded from the web; all I know is the destination.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":715,"Q_Id":54917025,"Users Score":0,"Answer":"you could try first comparing the size of the file initially, or alternatively see if there are new files in the folder, capture the name of the new file and see if its size increases in x time, if you have a script, you could show the code....","Q_Score":0,"Tags":"python,windows,file,copy","A_Id":54917942,"CreationDate":"2019-02-28T01:29:00.000","Title":"How do I know if a file has finished copying?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I just have a graph.pbtxt file. 
I want to view the graph in tensorboard. But I am not aware of how to do that. Do I have to write a python script or can I do it from the terminal itself? Kindly help me to know the steps involved.","AnswerCount":2,"Available Count":1,"Score":0.4621171573,"is_accepted":false,"ViewCount":5032,"Q_Id":54917785,"Users Score":5,"Answer":"Open TensorBoard and use the \"Upload\" button on the left; uploading the pbtxt file will directly open the graph in TensorBoard.","Q_Score":5,"Tags":"python,tensorflow,tensorboard","A_Id":60148794,"CreationDate":"2019-02-28T03:04:00.000","Title":"Viewing Graph from saved .pbtxt file on Tensorboard","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have 2 cumulative distributions that I want to find the intersection of. To get an underlying function, I used the scipy interp1d function. What I\u2019m trying to figure out now is how to calculate their intersection. Not sure how I can do it. I tried fsolve, but I can\u2019t find how to restrict the range in which to search for a solution (the domain is limited).","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":58,"Q_Id":54930134,"Users Score":0,"Answer":"Use scipy.optimize.brentq for bracketed root-finding:\nbrentq(lambda x: interp1d(xx, yy)(x) - interp1d(xxx, yyy)(x), -1, 1)","Q_Score":0,"Tags":"python,python-3.x,scipy","A_Id":54950068,"CreationDate":"2019-02-28T16:24:00.000","Title":"Intersection of interp1d objects","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have some nii images, each having the same height and width but a different depth. I need to make the depth of each image equal; how can I do that? I haven't found any Python code that can help me.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":112,"Q_Id":54932433,"Users Score":0,"Answer":"Once you have defined the depth you want for all volumes, let it be D, you can instantiate an image (called a volume when D > 1) of dimensions W x H x D, for every volume you have.\nThen you can fill every such volume, pixel by pixel, by mapping the pixel position onto the original volume and retrieving the value of the pixel by interpolating the values in neighboring pixels.\nFor example, a pixel (i_x, i_y, i_z) in the new volume will be mapped to a point (i_x, i_y, i_z') of the old volume. 
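The brentq one-liner from the intersection answer above, expanded into a runnable sketch with toy cumulative curves:

```python
import numpy as np
from scipy.interpolate import interp1d
from scipy.optimize import brentq

x = np.linspace(0.0, 1.0, 50)        # shared domain (toy data)
f = interp1d(x, x ** 2)              # first cumulative curve
g = interp1d(x, 0.5 * np.sqrt(x))    # second cumulative curve

# brentq needs a bracket [a, b] where f - g changes sign, which also
# confines the search to the interpolators' limited domain
root = brentq(lambda t: f(t) - g(t), 0.01, 1.0)
print(root)  # ~0.63
```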
One of the simplest interpolation methods is the linear interpolation: the value of (i_x, i_y, i_z) is a weighted average of the values (i_x, i_y, floor(i_z')) and (i_x, i_y, floor(i_z') + 1).","Q_Score":1,"Tags":"python,image-processing,medical","A_Id":54934401,"CreationDate":"2019-02-28T18:54:00.000","Title":"How to make depth of nii images equal?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I've built a data pipeline. Pseudo code is as follows:\n\ndataset -> \ndataset = augment(dataset)\ndataset = dataset.batch(35).prefetch(1)\ndataset = set_from_generator(to_feed_dict(dataset)) # expensive op\ndataset = Cache('\/tmp', dataset)\ndataset = dataset.unbatch()\ndataset = dataset.shuffle(64).batch(256).prefetch(1)\nto_feed_dict(dataset)\n\n1 to 5 actions are required to generate the pretrained model outputs. I cache them as they do not change throughout epochs (pretrained model weights are not updated). 5 to 8 actions prepare the dataset for training.\nDifferent batch sizes have to be used, as the pretrained model inputs are of a much bigger dimensionality than the outputs.\nThe first epoch is slow, as it has to evaluate the pretrained model on every input item to generate templates and save them to the disk. Later epochs are faster, yet they're still quite slow - I suspect the bottleneck is reading the disk cache.\nWhat could be improved in this data pipeline to reduce the issue?\nThank you!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":257,"Q_Id":54934207,"Users Score":0,"Answer":"prefetch(1) means that there will be only one element prefetched, I think you may want to have it as big as the batch size or larger. \nAfter first cache you may try to put it second time but without providing a path, so it would cache some in the memory.\nMaybe your HDD is just slow? ;)\nAnother idea is you could just manually write to compressed TFRecord after steps 1-4 and then read it with another dataset. Compressed file has lower I\/O but causes higher CPU usage.","Q_Score":0,"Tags":"python,tensorflow,machine-learning,tensorflow-datasets","A_Id":54935427,"CreationDate":"2019-02-28T21:02:00.000","Title":"Tensorflow data pipeline: Slow with caching to disk - how to improve evaluation performance?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":1},{"Question":"My girlfriend has been given the task of getting all the data from a webpage. The web page belongs to an adult education centre. To get to the webpage, you must first log in. The url is a .asp file. \nShe has to put the data in an Excel sheet. The entries are student names, numbers, ID card number, telephone, etc. There are thousands of entries. HR students alone has 70 pages of entries. This all shows up on the webpage as a table. 
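Not the answer's own code, but `scipy.ndimage.zoom` performs exactly this kind of linear interpolation along the depth axis; the target depth is an assumption:

```python
import numpy as np
from scipy.ndimage import zoom

def resample_depth(volume, target_depth):
    """volume has shape (W, H, D_old); returns shape (W, H, target_depth)."""
    factor = target_depth / volume.shape[2]
    # order=1 -> linear interpolation, matching the weighted-average description
    return zoom(volume, (1.0, 1.0, factor), order=1)

vol = np.random.rand(64, 64, 30)
print(resample_depth(vol, 40).shape)  # (64, 64, 40)
```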
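A sketch of two of the tf.data suggestions above (a second, in-memory cache and a larger prefetch), assuming a TF 2.4+ pipeline; the `Dataset.range` call is only a placeholder for the question's steps 1-4:

```python
import tensorflow as tf

dataset = tf.data.Dataset.range(1000)          # placeholder for the real pipeline
dataset = dataset.cache("/tmp/cache")          # first epoch: materialize to disk
dataset = dataset.cache()                      # no path -> keep a copy in RAM
dataset = dataset.shuffle(64).batch(256)
dataset = dataset.prefetch(tf.data.AUTOTUNE)   # let TF pick the prefetch depth
```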
It is possible to copy and paste.\nI can handle Python openpyxl reasonably and I have heard of web-scraping, which I believe Python can do.\nI don't know what .asp is.\nCould you please give me some tips, pointers, about how to get the data with Python? \nCan I automate this task? \nIs this a case for MySQL? (About which I know nothing.)","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":60,"Q_Id":54943792,"Users Score":1,"Answer":"This is a really broad question and not really in the style of Stack Overflow. To give you some pointers anyway. In the end .asp files, as far as I know, behave like normal websites. Normal websites are interpreted in the browser like HTML, CSS etc. This can be parsed with Python. There are two approaches to this that I have used in the past that work. One is to use a library like requests to get the HTML of a page and then read it using the BeautifulSoup library. This gets more complex if you need to visit authenticated pages. The other option is to use Selenium for python. This module is more a tool to automate browsing itself. You can use this to automate visiting the website and entering login credentials and then read content on the page. There are probably more options which is why this question is too broad. Good luck with your project though! \nEDIT: You do not need MySql for this. Especially not if the required output is an Excel file, which I would generate as a CSV instead because standard Python works better with CSV files than Excel.","Q_Score":0,"Tags":"python","A_Id":54944088,"CreationDate":"2019-03-01T11:32:00.000","Title":"Get data from an .asp file","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"My girlfriend has been given the task of getting all the data from a webpage. The web page belongs to an adult education centre. To get to the webpage, you must first log in. The url is a .asp file. \nShe has to put the data in an Excel sheet. The entries are student names, numbers, ID card number, telephone, etc. There are thousands of entries. HR students alone has 70 pages of entries. This all shows up on the webpage as a table. It is possible to copy and paste.\nI can handle Python openpyxl reasonably and I have heard of web-scraping, which I believe Python can do.\nI don't know what .asp is.\nCould you please give me some tips, pointers, about how to get the data with Python? \nCan I automate this task? \nIs this a case for MySQL? (About which I know nothing.)","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":60,"Q_Id":54943792,"Users Score":1,"Answer":"Try using the tool called Octoparse.\nDisclaimer: I've never used it myself, but only came close to using it. 
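A hedged sketch of the requests + BeautifulSoup route from the answer above; the login URL, form field names, page count, and table layout are all assumptions that would have to be read off the real .asp pages:

```python
import csv
import requests
from bs4 import BeautifulSoup

session = requests.Session()  # keeps the login cookie across requests
session.post("https://example.edu/login.asp",
             data={"username": "user", "password": "pass"})

with open("students.csv", "w", newline="") as out:
    writer = csv.writer(out)
    for page in range(1, 71):  # e.g. the 70 pages of HR students
        html = session.get(f"https://example.edu/students.asp?page={page}").text
        soup = BeautifulSoup(html, "html.parser")
        for row in soup.select("table tr"):
            cells = [td.get_text(strip=True) for td in row.find_all("td")]
            if cells:
                writer.writerow(cells)
```

A CSV like this opens directly in Excel, which matches the answer's suggestion to skip MySQL entirely.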
So, from my knowledge of its features, I think it would be useful for your need.","Q_Score":0,"Tags":"python","A_Id":54945063,"CreationDate":"2019-03-01T11:32:00.000","Title":"Get data from an .asp file","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I'm a beginner, I have really hit a brick wall, and would greatly appreciate any advice someone more advanced can offer.\nI have been having a number of extremely frustrating issues the past few days, which I have been round and round google trying to solve, tried all sorts of things to no avail.\nProblem 1)\nI can't import pygame in Idle with the error:\nModuleNotFoundError: No module named 'pygame' - even though it is definitely installed, as in terminal, if I ask pip3 to install pygame it says:\nRequirement already satisfied: pygame in \/usr\/local\/lib\/python3.7\/site-packages (1.9.4)\nI think there may be a problem with several conflicting versions of python on my computer, as when i type sys.path in Idle (which by the way displays Python 3.7.2 ) the following are listed:\n'\/Users\/myname\/Documents', '\/Library\/Frameworks\/Python.framework\/Versions\/3.7\/lib\/python37.zip', '\/Library\/Frameworks\/Python.framework\/Versions\/3.7\/lib\/python3.7', '\/Library\/Frameworks\/Python.framework\/Versions\/3.7\/lib\/python3.7\/lib-dynload', '\/Users\/myname\/Library\/Python\/3.7\/lib\/python\/site-packages', '\/Library\/Frameworks\/Python.framework\/Versions\/3.7\/lib\/python3.7\/site-packages'\nSo am I right in thinking pygame is in the python3.7\/sitepackages version, and this is why idle won't import it? I don't know I'm just trying to make sense of this. I have absoloutely no clue how to solve this,\"re-set the path\" or whatever. I don't even know how to find all of these versions of python as only one appears in my applications folder, the rest are elsewhere?\nProblem 2)\nApparently there should be a python 2.7 system version installed on every mac system which is vital to the running of python regardless of the developing environment you use. Yet all of my versions of python seem to be in the library\/downloaded versions. Does this mean my system version of python is gone? I have put the computer in recovery mode today and done a reinstall of the macOS mojave system today, so shouldn't any possible lost version of python 2.7 be back on the system now?\nProblem 3)\nWhen I go to terminal, frequently every command I type is 'not found'.\nI have sometimes found a temporary solution is typing:\nexport PATH=\"\/usr\/local\/bin:\/usr\/bin:\/bin:\/usr\/sbin:\/sbin\"\nbut the problems always return!\nAs I say I also did a system reinstall today but that has helped none!\nCan anybody please help me with these queries? I am really at the end of my tether and quite lost, forgive my programming ignorance please. Many thanks.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":43,"Q_Id":54953355,"Users Score":0,"Answer":"You should actually add the export PATH=\"\/usr\/local\/bin:\/usr\/bin:\/bin:\/usr\/sbin:\/sbin\" to your .bash_profile (if you are using bash). Do this by opening your terminal, verifying that it says \"bash\" at the top. If it doesn't, you may have a .zprofile instead. Type ls -al and it will list all the invisible files. 
If you have .bash_profile listed, use that one. If you have .zprofile, use that.\nType nano .bash_profile to open and edit the profile and add the command to the end of it. This will permanently add the path to your profile after you restart the terminal.\nUse ^X to exit nano and type Y to save your changes. Then you can check that it works when you try to run the program from IDLE.","Q_Score":0,"Tags":"python-3.x,macos,command-line,terminal,pygame","A_Id":72088835,"CreationDate":"2019-03-01T22:45:00.000","Title":"Pygame\/Python\/Terminal\/Mac related","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am using a screen on my server. When I ask which python inside the screen I see it is using the default \/opt\/anaconda2\/bin\/python version which is on my server, but outside the screen when I ask which python I get ~\/anaconda2\/bin\/python. I want to use the same python inside the screen but I don't know how can I set it. Both path are available in $PATH","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":213,"Q_Id":54971247,"Users Score":1,"Answer":"You could do either one of the following:\n\nUse a virtual environment (install virtualenv). You can specify\nthe version of Python you want to use when creating the virtual\nenvironment with -p \/opt\/anaconda2\/bin\/python.\nUse an alias:\nalias python=\/opt\/anaconda2\/bin\/python.","Q_Score":0,"Tags":"python,anaconda,gnu-screen","A_Id":54980660,"CreationDate":"2019-03-03T16:50:00.000","Title":"Force screen session to use specific version of python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"i'm trying to create a chess simulator.\nconsider this scenario:\nthere is a black rook (instance object of Rook class) in square 2B called rook1.\nthere is a white rook in square 2C called rook2.\nwhen the player moves rook1 to square 2C , the i should remove rook2 object from memory completely.\nhow can i do it? \nP.S. i'v already tried del rook2 , but i don't know why it doesn't work.","AnswerCount":1,"Available Count":1,"Score":0.6640367703,"is_accepted":false,"ViewCount":54,"Q_Id":54988852,"Users Score":4,"Answer":"Trying to remove objects from memory is the wrong way to go. Python offers no option to do that manually, and it would be the wrong operation to perform anyway.\nYou need to alter whatever data structure represents your chess board so that it represents a game state where there is a black rook at c2 and no piece at b2, rather than a game state where there is a black rook at b2 and a white rook at c2. In a reasonable Python beginner-project implementation of a chess board, this probably means assigning to cells in a list of lists. 
No objects need to be manually removed from memory to do this.\nHaving rook1 and rook2 variables referring to your rooks is unnecessary and probably counterproductive.","Q_Score":0,"Tags":"python,python-3.x,oop,python-3.7","A_Id":54988924,"CreationDate":"2019-03-04T17:51:00.000","Title":"How can i remove an object in python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am trying to develop a text classifier that will classify a piece of text as Private or Public. Take medical or health information as an example domain. A typical classifier that I can think of considers keywords as the main distinguisher, right? What about a scenario like bellow? What if both of the pieces of text contains similar keywords but carry a different meaning. \nFollowing piece of text is revealing someone's private (health) situation (the patient has cancer): \nI've been to two clinics and my pcp. I've had an ultrasound only to be told it's a resolving cyst or a hematoma, but it's getting larger and starting to make my leg ache. The PCP said it can't be a cyst because it started out way too big and I swear I have NEVER injured my leg, not even a bump. I am now scared and afraid of cancer. I noticed a slightly uncomfortable sensation only when squatting down about 9 months ago. 3 months ago I went to squat down to put away laundry and it kinda hurt. The pain prompted me to examine my leg and that is when I noticed a lump at the bottom of my calf muscle and flexing only made it more noticeable. Eventually after four clinic visits, an ultrasound and one pcp the result seems to be positive and the mass is getting larger.\n[Private] (Correct Classification)\nFollowing piece of text is a comment from a doctor which is definitely not revealing is health situation. It introduces the weaknesses of a typical classifier model: \nDon\u2019t be scared and do not assume anything bad as cancer. I have gone through several cases in my clinic and it seems familiar to me. As you mentioned it might be a cyst or a hematoma and it's getting larger, it must need some additional diagnosis such as biopsy. Having an ache in that area or the size of the lump does not really tells anything bad. You should visit specialized clinics few more times and go under some specific tests such as biopsy, CT scan, pcp and ultrasound before that lump become more larger.\n[Private] (Which is the Wrong Classification. It should be [Public]) \nThe second paragraph was classified as private by all of my current classifiers, for obvious reason. Similar keywords, valid word sequences, the presence of subjects seemed to make the classifier very confused. Even, both of the content contains subjects like I, You (Noun, Pronouns) etc. I thought about from Word2Vec to Doc2Vec, from Inferring meaning to semantic embeddings but can't think about a solution approach that best suits this problem.\nAny idea, which way I should handle the classification problem? Thanks in advance. \nProgress so Far:\nThe data, I have collected from a public source where patients\/victims usually post their own situation and doctors\/well-wishers reply to those. I assumed while crawling is that - posts belongs to my private class and comments belongs to public class. 
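A minimal sketch of "alter the board, don't delete objects": moving a piece overwrites the captured piece's cell, and the old rook simply becomes unreferenced, so Python reclaims it automatically:

```python
class Rook:
    def __init__(self, color):
        self.color = color

board = [[None] * 8 for _ in range(8)]
board[1][1] = Rook("black")   # b2
board[1][2] = Rook("white")   # c2

def move(board, src, dst):
    board[dst[0]][dst[1]] = board[src[0]][src[1]]  # capture by overwrite
    board[src[0]][src[1]] = None                   # vacate the source square

move(board, (1, 1), (1, 2))
print(board[1][2].color)  # black; the white rook is now garbage-collected
```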
All to gether I started with 5K+5K posts\/comments and got around 60% with a naive bayes classifier without any major preprocessing. I will try Neural Network soon. But before feeding into any classifier, I just want to know how I can preprocess better to put reasonable weights to either class for better distinction.","AnswerCount":3,"Available Count":1,"Score":-0.1325487884,"is_accepted":false,"ViewCount":452,"Q_Id":54992220,"Users Score":-2,"Answer":"(1) Bayes is indeed a weak classifier - I'd try SVM. If you see improvement than further improvement can be achieved using Neural Network (and perhaps Deep Learning)\n(2) Feature engineering - use TFiDF , and try other things (many people suggest Word2Vec, although I personally tried and it did not improve). Also you can remove stop words.\nOne thing to consider, because you give two anecdotes is to measure objectively the level of agreement between human beings on the task. It is sometime overlooked that two people given the same text can disagree on labels (some might say that a specific document is private although it is public). Just a point to notice - because if e.g. the level of agreement is 65%, then it will be very difficult to build an algorithm that is more accurate.","Q_Score":9,"Tags":"python,nlp,text-classification,natural-language-processing","A_Id":55115795,"CreationDate":"2019-03-04T22:00:00.000","Title":"Text classification beyond the keyword dependency and inferring the actual meaning","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":1},{"Question":"I have a Python script that I want to profile using vmprof to figure out what parts of the code are slow. Since PyPy is generally faster, I also want to profile the script while it is using the PyPy JIT. If the script is named myscript.py, how do you structure the command on the command line to do this?\nI have already installed vmprof using \n\npip install vmprof","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":69,"Q_Id":54994853,"Users Score":0,"Answer":"I would be suprised if it works, but the command is pypy -m vmprof myscript.py . I would expect it to crash saying vmprof is not supported on windows.","Q_Score":0,"Tags":"python,command-line,profiling,pypy","A_Id":55011625,"CreationDate":"2019-03-05T03:08:00.000","Title":"How do you profile a Python script from Windows command line using PyPy and vmprof?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"OK\nI was afraid to use the terminal, so I installed the python-3.7.2-macosx10.9 package downloaded from python.org\nRan the certificate and shell profile scripts, everything seems fine.\nNow the \"which python3\" has changed the path from 3.6 to the new 3.7.2\nSo everything seems fine, correct?\nMy question (of 2) is what's going on with the old python3.6 folder still in the applications folder. Can you just delete it safely? 
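Suggestions (1) and (2) from the answer above can be combined in a few lines of scikit-learn; the two toy strings stand in for the 5K+5K posts and comments:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

texts = ["i am scared it is cancer", "do not assume anything bad"]  # toy data
labels = ["private", "public"]

# TF-IDF features (with stop words removed) feeding a linear SVM
clf = make_pipeline(TfidfVectorizer(stop_words="english"), LinearSVC())
clf.fit(texts, labels)
print(clf.predict(["the lump is getting larger"]))
```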
Why when you install a new version does it not at least ask you if you want to update or install and keep both versions?\nSecond question, how would you do this from the terminal?\nI see the first step is to sudo to the root.\nI've forgotten the rest.\nBut from the terminal, would this simply add the new version and leave\nthe older one like the package installer?\nIt's pretty simple to use the package installer and then delete a folder.\nSo, thanks in advance. I'm new to python and have not much confidence \nusing the terminal and all the powerful shell commands.\nAnd yeah I see all the Brew enthusiasts. I DON'T want to use Brew for the moment.\nThe python snakes nest of pathways is a little confusing, for the moment.\nI don't want to get lost with a zillion pathways from Brew because it's\nconfusing for the moment.\nI love Brew, leave me alone.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":27436,"Q_Id":55013809,"Users Score":0,"Answer":"Each version of the Python installation is independent of each other. So its safe to delete the version you don't want, but be cautious of this because it can lead to broken dependencies :-).\nYou can run any version by adding the specific version i.e $python3.6 or $python3.7\nThe best approach is to use virtual environments for your projects to enhance consistency. see pipenv","Q_Score":10,"Tags":"python,macos,installation,homebrew","A_Id":66958012,"CreationDate":"2019-03-06T00:43:00.000","Title":"How to update python 3.6 to 3.7 using Mac terminal","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0,"API_CHANGE":1,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I'm using Django and Python 3.7. I want to write a script to help me easily migrate my application from my local machien (a Mac High Sierra) to a CentOS Linux instance. I'm using a virtual environment in both places. There are many things that need to be done here, but to keep the question specific, how do I determine on my remote machine (where I'm deploying my project to), what dependencies are lacking? I'm using rsync to copy the files (minus the virtual environment)","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":63,"Q_Id":55035229,"Users Score":2,"Answer":"On the source system execute pip freeze > requirements.txt, then copy the requiremnts.txt to the target system and then on the target system install all the dependencies with pip install -r requirements.txt. 
Of course you will need to activate the virtual environments on both systems before execute the pip commands.\nIf you are using a source code management system like git it is a good idea to keep the requirements.txt up to date in your source code repository.","Q_Score":1,"Tags":"django,python-3.x,centos,dependencies","A_Id":55035280,"CreationDate":"2019-03-07T02:42:00.000","Title":"How do I figure out what dependencies to install when I copy my Django app from one system to another?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I'm new to both angular and flask framework so plz be patient with me.\nI'm trying to build a web app with flask as a backend server and Angular for the frontend (I didn't start it yet), and while gathering infos and looking at tutorials and some documentation (a little bit) I'm wondering: \nDoes Angular server and flask server need both to be running at the same time or will just flask be enough? Knowing that I want to send data from the server to the frontend to display and collecting data from users and sending it to the backend. \nI noticed some guys building the angular app and using the dist files but I don't exactly know how that works.\nSo can you guys suggest what should I have to do or how to proceed with this?\nThank you ^^","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1198,"Q_Id":55040986,"Users Score":1,"Answer":"Angular does not need a server. It's a client-side framework so it can be served by any server like Flask. Probably in most tutorials, the backend is served by nodejs, not Flask.","Q_Score":0,"Tags":"angular,python-3.x,flask","A_Id":55041112,"CreationDate":"2019-03-07T10:03:00.000","Title":"Does angular server and flask server have both to be running at the same?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":1,"ERRORS":0,"REVIEW":0},{"Question":"I would like to change the font color of a single word in a Tkinter label widget.\nI understand that something similar to what I would like to be done can be achieved with a Text widget.. for example making the word \"YELLOW\" show in yellow: \nself.text.tag_config(\"tag_yel\", fg=clr_yellow)\nself.text.highligh_pattern(\"YELLOW\", \"tag_yel\")\nBut my text is static and all I want is to change the word \"YELLOW\" to show as yellow font and \"RED\" in red font and I cannot seem to figure out how to change text color without changing it all with label.config(fg=clr).\nAny help would be appreciated","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":533,"Q_Id":55069724,"Users Score":1,"Answer":"You cannot do what you want. A label supports only a single foreground color and a single background color. 
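One way to let Flask serve a built Angular app, as the Angular/Flask answer suggests; the "dist" folder comes from `ng build`, the route layout is an assumption, and returning a dict from a view assumes Flask 1.1+:

```python
from flask import Flask, send_from_directory

app = Flask(__name__, static_folder="dist", static_url_path="")

@app.route("/")
def index():
    # hand the compiled Angular entry point to the browser
    return send_from_directory(app.static_folder, "index.html")

@app.route("/api/data")
def data():
    # JSON endpoints live alongside the static frontend
    return {"message": "hello from Flask"}

if __name__ == "__main__":
    app.run()
```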
The solution is to use a text or canvas widget., or to use two separate labels.","Q_Score":1,"Tags":"python,python-2.7,tkinter","A_Id":55069875,"CreationDate":"2019-03-08T19:25:00.000","Title":"Change color of single word in Tk label widget","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"Is it possible to execute short python expressions in one line in terminal, without passing a file? \ne.g. (borrowing from how I would write an awk expression)\npython 'print(\"hello world\")'","AnswerCount":4,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":215,"Q_Id":55101575,"Users Score":3,"Answer":"python3 -c \"print('Hello')\"\nUse the -c flag as above.","Q_Score":0,"Tags":"python,terminal","A_Id":55101619,"CreationDate":"2019-03-11T12:10:00.000","Title":"Running python directly in terminal","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"Is it possible to execute short python expressions in one line in terminal, without passing a file? \ne.g. (borrowing from how I would write an awk expression)\npython 'print(\"hello world\")'","AnswerCount":4,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":215,"Q_Id":55101575,"Users Score":0,"Answer":"For completeness, I found you can also feed a here-string to python. \npython <<< 'print(\"hello world\")'","Q_Score":0,"Tags":"python,terminal","A_Id":55103516,"CreationDate":"2019-03-11T12:10:00.000","Title":"Running python directly in terminal","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have trained a single layer neural network model in python (a simple model without keras and tensorflow).\nHow canI save it after training along with weights in python, and how to load it later?","AnswerCount":3,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":10072,"Q_Id":55102781,"Users Score":1,"Answer":"So you write it down yourself. You need some simple steps:\n\nIn your code for neural network, store weights in a variable. It could be simply done by using self.weights.weights are numpy ndarrays. for example if weights are between layer with 10 neurons to layer with 100 neurons, it is a 10 * 100(or 100* 10) nd array.\nUse numpy.save to save the ndarray.\nFor next use of your network, use numpy.load to load weights\nIn the first initialization of your network, use weights you've loaded.\nDon't forget, if your network is trained, weights should be frozen. 
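A sketch of the text-widget alternative mentioned above: tags give per-word colors, which a Label cannot do, and disabling the widget makes it behave like static text:

```python
import tkinter as tk

root = tk.Tk()
text = tk.Text(root, height=1, width=30)
text.insert("end", "YELLOW and RED")
text.tag_add("tag_yel", "1.0", "1.6")    # characters 0-5 -> "YELLOW"
text.tag_config("tag_yel", foreground="yellow")
text.tag_add("tag_red", "1.11", "1.14")  # characters 11-13 -> "RED"
text.tag_config("tag_red", foreground="red")
text.config(state="disabled")            # read-only, label-like
text.pack()
root.mainloop()
```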
It can be done by zeroing learning rate.","Q_Score":6,"Tags":"python,scikit-learn,pickle","A_Id":55103222,"CreationDate":"2019-03-11T13:21:00.000","Title":"How to save and load my neural network model after training along with weights in python?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I try to calculate noise for input data using the gradient of the loss function from the input-data:\nmy_grad = tf.gradients(loss, input)\nloss is an array of size (n x 1) where n is the number of datasets, m is the size of the dataset, input is an array of (n x m) where m is the size of a single dataset.\nI need my_grad to be of size (n x m) - so for each dataset the gradient is calulated. But by definition the gradients where i!=j are zero - but tf.gradients allocates huge amount of memory and runs for prettymuch ever...\nA version, which calulates the gradients only where i=j would be great - any Idea how to get there?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":132,"Q_Id":55121377,"Users Score":0,"Answer":"I suppose I have found a solution:\nmy_grad = tf.gradients(tf.reduce_sum(loss), input)\nensures, that the cross dependencies i!=j are ignored - that works really nicely and fast..","Q_Score":1,"Tags":"python,tensorflow,diagonal,gradient","A_Id":55122132,"CreationDate":"2019-03-12T12:23:00.000","Title":"tf.gradient acting like tfp.math.diag_jacobian","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"Everything was working perfectly until today, when for some reason my python.exe file disappeared from the Project Interpreter in Pycharm.\nIt was located in C:\\users\\my_name\\Anaconda3\\python.exe, and for some reason I can't find it anywhere!\nYet, all the packages are here (in the site-packages folder), and only the C:\\users\\my_name\\Anaconda3\\pythonw.exe is available.\nWith the latest however, some packages I installed on top of those available in Anaconda3 won't be recognized.\nTherefore, how to get back the python.exe file?","AnswerCount":3,"Available Count":3,"Score":1.2,"is_accepted":true,"ViewCount":3410,"Q_Id":55124386,"Users Score":1,"Answer":"The answer repeats the comment to the question.\nI had the same issue once after Anaconda update - python.exe was missing. 
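A minimal sketch of the numpy.save / numpy.load steps in the answer above; `SimpleNet` is a stand-in for the question's single-layer network:

```python
import numpy as np

class SimpleNet:
    def __init__(self, n_in=10, n_out=100):
        # weights between a 10-neuron and a 100-neuron layer: a 10 x 100 ndarray
        self.weights = np.random.randn(n_in, n_out)

net = SimpleNet()
np.save("weights.npy", net.weights)        # step 2: persist after training

restored = SimpleNet()
restored.weights = np.load("weights.npy")  # steps 3-4: reuse on the next run
assert np.array_equal(net.weights, restored.weights)
```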
It was Anaconda 3 installed to Program Files folder by MS Visual Studio (Python 3.6 on Windows10 x64).\nTo solve the problem I manually copied python.exe file from the most fresh python package available (folder pkgs then folder like python-3.6.8-h9f7ef89_7).","Q_Score":3,"Tags":"python,pycharm,anaconda,exe,pythoninterpreter","A_Id":55125807,"CreationDate":"2019-03-12T14:50:00.000","Title":"Lost my python.exe in Pycharm with Anaconda3","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"Everything was working perfectly until today, when for some reason my python.exe file disappeared from the Project Interpreter in Pycharm.\nIt was located in C:\\users\\my_name\\Anaconda3\\python.exe, and for some reason I can't find it anywhere!\nYet, all the packages are here (in the site-packages folder), and only the C:\\users\\my_name\\Anaconda3\\pythonw.exe is available.\nWith the latest however, some packages I installed on top of those available in Anaconda3 won't be recognized.\nTherefore, how to get back the python.exe file?","AnswerCount":3,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":3410,"Q_Id":55124386,"Users Score":0,"Answer":"My Python.exe was missing today in my existing environment in anaconda, so I clone my environment with anaconda to recreate Python.exe and use it again in Spyder.","Q_Score":3,"Tags":"python,pycharm,anaconda,exe,pythoninterpreter","A_Id":63613700,"CreationDate":"2019-03-12T14:50:00.000","Title":"Lost my python.exe in Pycharm with Anaconda3","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"Everything was working perfectly until today, when for some reason my python.exe file disappeared from the Project Interpreter in Pycharm.\nIt was located in C:\\users\\my_name\\Anaconda3\\python.exe, and for some reason I can't find it anywhere!\nYet, all the packages are here (in the site-packages folder), and only the C:\\users\\my_name\\Anaconda3\\pythonw.exe is available.\nWith the latest however, some packages I installed on top of those available in Anaconda3 won't be recognized.\nTherefore, how to get back the python.exe file?","AnswerCount":3,"Available Count":3,"Score":0.1973753202,"is_accepted":false,"ViewCount":3410,"Q_Id":55124386,"Users Score":3,"Answer":"I just had the same issue and found out that Avast removed it because it thought it was a threat. I found it in Avast -> Protection -> Virus Chest. 
And from there, you have the option to restore it.","Q_Score":3,"Tags":"python,pycharm,anaconda,exe,pythoninterpreter","A_Id":64929387,"CreationDate":"2019-03-12T14:50:00.000","Title":"Lost my python.exe in Pycharm with Anaconda3","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"the code is supposed to give 3 questions with 2 attempts. if the answer is correct the first try, 3 points. second try gives 1 point. if second try is incorrect, the game will end.\nhowever, the scores are not adding up to create a final score after the 3 rounds. how do i make it so that it does that?","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":62,"Q_Id":55128175,"Users Score":2,"Answer":"First move import random to the top of the script because you're importing it every time in the loop and the score is calculated just in the last spin of the program since you empty scoreList[] every time","Q_Score":0,"Tags":"python","A_Id":55128549,"CreationDate":"2019-03-12T18:13:00.000","Title":"trouble with appending scores in python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I\u2019ve been using the Luigi visualizer for pipelining my python code.\nNow I\u2019ve started using an aws instance, and want to access the visualizer from my own machine.\nAny ideas on how I could do that?","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":163,"Q_Id":55134741,"Users Score":1,"Answer":"We had the very same problem today on GCP, and solved with the following steps:\n\nsetting firewall rules for incoming TCP connections on port used by the service (which by default is 8082);\ninstalling apache2 server on the instance with a site.conf configuration that resolve incoming requests on ip-of-instance:8082.\n\nThat's it. Hope this can help.","Q_Score":1,"Tags":"python,amazon-web-services,luigi","A_Id":55143381,"CreationDate":"2019-03-13T05:05:00.000","Title":"Accessing Luigi visualizer on AWS","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"We have to refactor scraping algorithm. To speed it up we came up to conclusion to multi-thread processes (and limit them to max 3). Generally speaking scraping consists of following aspects:\n\nScraping (async request, takes approx 2 sec)\nImage processing (async per image, approx 500ms per image)\nChanging source item in DB (async request, approx 2 sec)\n\nWhat I am aiming to do is to create batch of scraping requests and while looping through them, create a stack of consequent async operations: Process images and as soon as images are processed -> change source item.\nIn other words - scraping goes. 
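A sketch of the scoring fix described above: import once at module level and keep a single score list alive across all three rounds instead of re-creating it inside the loop:

```python
import random  # imported once, at the top of the script

scores = []                      # NOT reset inside the loop
for round_no in range(3):
    answer = random.randint(1, 9)
    for attempt, points in ((1, 3), (2, 1)):
        guess = int(input(f"Round {round_no + 1}, attempt {attempt}: "))
        if guess == answer:
            scores.append(points)   # 3 points first try, 1 point second try
            break
    else:
        break                    # second try wrong -> game ends

print("Final score:", sum(scores))
```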
but image processing and changing source items must be run in separate limited async threads.\nOnly think I don't know how to stack the batch and limit threads.\nHas anyone came across the same task and what approach have you used?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":61,"Q_Id":55138277,"Users Score":1,"Answer":"What you're looking for is consumer-producer pattern. Just create 3 different queues and when you process the item in one of them, queue new work in another. Then you can 3 different threads each of them processing one queue.","Q_Score":1,"Tags":"python,multithreading","A_Id":55138456,"CreationDate":"2019-03-13T09:24:00.000","Title":"Async, multithreaded scraping in Python with limited threads","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"Pymongo inserts _id in original array after insert_many .how to avoid insertion of _id ? And why original array is updated with _id? Please explain with example, if anybody knows? Thanks in advance.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":21,"Q_Id":55150525,"Users Score":0,"Answer":"Pymongo driver explicitly inserts _id of type ObjectId into the original array and hence original array gets updated before inserting into mongo. This is the expected behaviour of pymongo for insertmany query as per my previous experiences. Hope this answers your question.","Q_Score":0,"Tags":"python,pymongo","A_Id":55150655,"CreationDate":"2019-03-13T20:16:00.000","Title":"Pymongo inserts _id in original array after insert_many .how to avoid insertion of _id?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"is there any way to prevent the user from closing the cmd window of a python script on windows or maybe just disable the (X) close button ?? 
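A sketch of the consumer-producer pattern from the answer above: three queues, each drained by its own worker thread, where finishing one stage enqueues the next; the three handler functions are placeholders for the real scraping, image-processing, and DB steps:

```python
import queue
import threading

scrape_q, image_q, db_q = queue.Queue(), queue.Queue(), queue.Queue()

def worker(in_q, handle, out_q=None):
    while True:
        item = in_q.get()
        result = handle(item)
        if out_q is not None:
            out_q.put(result)    # hand off to the next stage
        in_q.task_done()

def scrape(url):      return f"scraped:{url}"
def process(item):    return f"images:{item}"
def update_db(item):  print("saved", item)

for in_q, fn, out_q in ((scrape_q, scrape, image_q),
                        (image_q, process, db_q),
                        (db_q, update_db, None)):
    threading.Thread(target=worker, args=(in_q, fn, out_q), daemon=True).start()

for url in ("a", "b", "c"):
    scrape_q.put(url)
for q in (scrape_q, image_q, db_q):
    q.join()                     # wait until every stage has drained
```

Limiting concurrency per stage is then just a matter of starting more (or fewer) worker threads on each queue.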
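Since insert_many always writes `_id` into the documents it is handed, the usual workaround is to insert shallow copies so the original list stays untouched; this sketch assumes a local mongod:

```python
from pymongo import MongoClient

docs = [{"name": "a"}, {"name": "b"}]
collection = MongoClient()["testdb"]["items"]

collection.insert_many([dict(d) for d in docs])  # the copies receive the _id
print(docs)  # the original dicts still have no _id keys
```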
I have looked for answers already but i couldn't find anything that would help me","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":306,"Q_Id":55151495,"Users Score":0,"Answer":"I dont think its possible, what you can do instead is to not display the cmd window (backgroundworker) and make it into a hidden process with system rights so that it cant be shutdown until it finishes.","Q_Score":2,"Tags":"python,python-3.x,windows,python-2.7,cmd","A_Id":55151598,"CreationDate":"2019-03-13T21:29:00.000","Title":"how can i prevent the user from closing my cmd window in a python script on windows","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"how can i search for patterns in texts that cover multiple lines and have fixed positions relating each other, for example a pattern consisting of 3 letters of x directly below each other and I want to find them at any position in the line, not just at the beginning for example. \nThank you in advance for the answer!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":94,"Q_Id":55153197,"Users Score":0,"Answer":"I believe the problem you are asking about is \"Find patterns that appear at the same offset in a series of lines.\"\nI do not think this describes a regular language, so you would need to draw on Python's extended regex features to have a chance at a regex-based solution. But I do not believe Python supports sufficiently extended features to accomplish this task [1].\nIf it is acceptable that they occur at a particular offset (rather than \"any offset, so long as the offset is consistent\"), then something like this should work:\n\/^.{OFFSET}PATTERN.*\\n^.{OFFSET}PATTERN.*\\n^.{OFFSET}PATTERN\/, using the MULTILINE flag so that ^ matches the beginning of a series of lines instead of just the beginning of the entire text.\n[1] In particular, you could use a backreference to capture the text preceding the desired pattern on one line, but I do not think you can query the length of the captured content \"inline\". You could search for the same leading text again on the next line, but that does not sound like what you want.","Q_Score":0,"Tags":"python,regex,multiline","A_Id":55153266,"CreationDate":"2019-03-14T00:37:00.000","Title":"regex python multiline","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have the following problem, I have many files of 3D volumes that I open to extract a bunch of numpy arrays.\nI want to get those arrays randomly, i.e. 
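A runnable version of the fixed-offset regex idea from the answer above, with three lines of "xxx" at column 4 (OFFSET = 4) and re.MULTILINE so that ^ matches at every line start:

```python
import re

text = "abcdxxxq\nefghxxxr\nijklxxxs\n"
pattern = r"^.{4}xxx.*\n^.{4}xxx.*\n^.{4}xxx"
print(bool(re.search(pattern, text, flags=re.MULTILINE)))  # True
```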
in the worst case I open as many 3D volumes as numpy arrays I want to get, if all those arrays are in separate files.\nThe IO here isn't great, I open a big file only to get a small numpy array from it.\nAny idea how I can store all these arrays so that the IO is better?\nI can't pre-read all the arrays and save them all in one file because then that file would be too big to open for RAM.\nI looked up LMDB but it all seems to be about Caffe.\nAny idea how I can achieve this?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1082,"Q_Id":55166874,"Users Score":0,"Answer":"One trivial solution can be pre-processing your dataset and saving multiple smaller crops of the original 3D volumes separately. This way you sacrifice some disk space for more efficient IO.\nNote that you can make a trade-off with the crop size here: saving bigger crops than you need for input allows you to still do random crop augmentation on the fly. If you save overlapping crops in the pre-processing step, then you can ensure that still all possible random crops of the original dataset can be produced.\nAlternatively you may try using a custom data loader that retains the full volumes for a few batch. Be careful, this might create some correlation between batches. Since many machine learning algorithms relies on i.i.d samples (e.g. Stochastic Gradient Descent), correlated batches can easily cause some serious mess.","Q_Score":0,"Tags":"python,machine-learning,dataset,pytorch,lmdb","A_Id":55169008,"CreationDate":"2019-03-14T15:52:00.000","Title":"Faster pytorch dataset file","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am new at this part of web developing and was trying to figure out a way of creating a web app with the basic specifications as the example bellow:\n\nA user1 opens a page with a textbox (something where he can add text or so), and it will be modified as it decides to do it.\n\nIf the user1 has problems he can invite other user2 to help with the typing.\n\n\nThe user2 (when logged to the Channel\/Socket) will be able to modify that field and the modifications made will be show to the user1 in real time and vice versa.\n\n\nOr another example is a room on CodeAcademy:\n\nImagine that I am learning a new coding language, however, at middle of it I jeopardize it and had to ask for help.\n\nSo I go forward and ask help to another user. This user access the page through a WebSocket (or something related to that).\n\n\nThe user helps me changing my code and adding some comments at it in real time, and I also will be able to ask questions through it (real time communication)\n\n\nMy questions is: will I be able to developed certain app using Django Channels 2 and multiplexing? or better move to use NodeJS or something related to that?\nObs: I do have more experience working with python\/django, so it will more productive for me right know if could find a way working with this combo.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":238,"Q_Id":55170669,"Users Score":1,"Answer":"This is definitely possible. They will be lots of possibilities, but I would recommend the following.\n\nHave a page with code on. 
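A sketch of the pre-processing idea above: slice each big volume into overlapping depth crops saved as individual .npy files, trading disk space for IO; the crop depth and stride are assumptions:

```python
import numpy as np

def save_crops(volume, prefix, depth=16, stride=8):
    # overlapping crops (stride < depth) so every random crop of the
    # original volume can still be produced on the fly
    for i, z in enumerate(range(0, volume.shape[2] - depth + 1, stride)):
        np.save(f"{prefix}_crop{i}.npy", volume[:, :, z:z + depth])

save_crops(np.random.rand(64, 64, 40), "vol0")  # -> vol0_crop0.npy ... vol0_crop3.npy
```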
The page has some websocket JS code that can connect to a Channels Consumer.\nThe JS does 2 simple things. When code is updated code on the screen, send a message to the Consumer, with the new text (you can optimize this later). When the socket receives a message, then replace the code on screen with the new code.\nIn your consumer, add your consumer to a channel group when connecting (the group will contain all of the consumers that are accessing the page)\nWhen a message is received, use group_send to send it to all the other consumers\nWhen your consumer callback function gets called, then send a message to your websocket","Q_Score":0,"Tags":"python,django,websocket,multiplexing","A_Id":55243910,"CreationDate":"2019-03-14T19:33:00.000","Title":"How does multiplexing in Django sockets work?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I installed Python 3.7.2 and VSCode 1.32.1 on Mac OS X 10.10.2. In VSCode I installed the Pyhton extension and got a message saying: \n\"Operating system does not meet the minimum requirements of the language server. Reverting to alternative, Jedi\". \nWhen clicking the \"More\" option under the message I got information indicating that I need OS X 10.12, at least. \nI tried to install an older version of the extension, did some reading here and asked Google, but I'm having a hard time since I don\u00b4t really know what vocabulary to use. \nMy questions are: \nWill the extension work despite the error message?\nDo I need to solve this, and how do I do that?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1138,"Q_Id":55171449,"Users Score":1,"Answer":"The extension will work without the language server, but some thing won't work quite as well (e.g. auto-complete and some refactoring options). Basically if you remove the \"python.jediEnabled\" setting -- or set it to false -- and the extension works fine for you then that's the important thing. :)","Q_Score":0,"Tags":"python,macos,visual-studio-code,error-messaging","A_Id":55172116,"CreationDate":"2019-03-14T20:28:00.000","Title":"Operating system does not meet the minimum requirements of the language server","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":1,"REVIEW":0},{"Question":"How should developers indicate how users should cite the package, other than on the documentation?\nR packages return the preferred citation using citation(\"pkg\").\nI can think of pkg.CITATION, pkg.citation and pkg.__citation__. Are there others? If there is no preferred way (which seems to be the case to me as I did not find anything on python.org), what are the pros and cons of each?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":271,"Q_Id":55211780,"Users Score":0,"Answer":"Finally I opted for the dunder option. Only the dunder option (__citation__) makes clear, that this is not a normal variable needed for runtime.\nYes, dunder strings should not be used inflationary because python might use them at a later time. 
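A hedged sketch of the consumer side of those steps, using the Channels 2 generic consumer; the group name and message shape are assumptions, and a channel layer (e.g. channels_redis) must be configured for group_send to work:

```python
import json
from channels.generic.websocket import AsyncWebsocketConsumer

class CodeConsumer(AsyncWebsocketConsumer):
    async def connect(self):
        self.group_name = "code_room"  # assumed: one shared editing room
        await self.channel_layer.group_add(self.group_name, self.channel_name)
        await self.accept()

    async def disconnect(self, close_code):
        await self.channel_layer.group_discard(self.group_name, self.channel_name)

    async def receive(self, text_data):
        # fan the new text out to every consumer in the group
        await self.channel_layer.group_send(
            self.group_name,
            {"type": "code_update", "code": json.loads(text_data)["code"]},
        )

    async def code_update(self, event):
        # called on each consumer when group_send dispatches the message
        await self.send(text_data=json.dumps({"code": event["code"]}))
```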
But if python is going to use __citation__, then it will be for a similar purpose. Also, I deem the relative costs higher with the other options.","Q_Score":4,"Tags":"python,citations","A_Id":55306548,"CreationDate":"2019-03-17T20:51:00.000","Title":"What is the preferred way to a add a citation suggestion to python packages?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I was wondering how I could see the history in the Pycharm Python console using a shortcut. I can see the history using the upper arrow key, but If I want to go further back in history I have to go to each individual line if more lines are ran at the time. Is it possible that each time I press a button the full previous commands that are ran are shown?\nI don't want to search in history, I want to go back in history similar using arrow up key but each time I enter arrow up I want to see the previous full code that was ran.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1662,"Q_Id":55223233,"Users Score":0,"Answer":"Go to preferences -> Appereance & Behaviour -> Keymap. You can search for \"Browse Console History\" and add a keyboard shortcut with right click -> Add Keyboard shortcut.","Q_Score":1,"Tags":"python,console,pycharm,keyboard-shortcuts","A_Id":55223390,"CreationDate":"2019-03-18T14:05:00.000","Title":"How to see the full previous command in Pycharm Python console using a shortcut","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"in C# we have to get\/set to make rules, but I don't know how to do this in Python.\nExample:\nOrcs can only equip Axes, other weapons are not eligible\nHumans can only equip swords, other weapons are eligible.\nHow can I tell Python that an Orc cannot do something like in the example above?\nThanks for answers in advance, hope this made any sense to you guys.","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1846,"Q_Id":55226942,"Users Score":0,"Answer":"Python language doesn't have an effective mechanism for restricting access to an instance or method. There is a convention though, to prefix the name of a field\/method with an underscore to simulate \"protected\" or \"private\" behavior. \nBut, all members in a Python class are public by default.","Q_Score":4,"Tags":"python,pygame","A_Id":55227059,"CreationDate":"2019-03-18T17:28:00.000","Title":"Python how to to make set of rules for each class in a game","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"My input text looks like this:\n\nPut in 3 extenders but by the 4th floor it is weak on signal these don't piggy back of each other. ST -99. 
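The Pythonic counterpart to C#-style get/set rules is the property decorator plus explicit validation; the class and weapon names below come from the question, the rest is a sketch:

```python
class Character:
    allowed_weapons = ()

    def __init__(self):
        self._weapon = None  # leading underscore: "private" by convention only

    @property
    def weapon(self):
        return self._weapon

    @weapon.setter
    def weapon(self, value):
        if value not in self.allowed_weapons:
            raise ValueError(f"{type(self).__name__} cannot equip {value}")
        self._weapon = value

class Orc(Character):
    allowed_weapons = ("axe",)

class Human(Character):
    allowed_weapons = ("sword",)

orc = Orc()
orc.weapon = "axe"      # fine
orc.weapon = "sword"    # raises ValueError
```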
5G DL 624.26 UP 168.20 4g DL 2\n Up .44\n\nI am having difficulty writing a regex that will match any instances of 4G\/5G\/4g\/5g and give me all the corresponding measurements after the instances of these codes, which are numbers with decimals. \nThe output should be:\n\n5G 624.26 168.20 4g 2 .44\n\nAny thoughts how to achieve this? I am trying to do this analysis in Python.","AnswerCount":3,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":78,"Q_Id":55228336,"Users Score":1,"Answer":"I would separate it in different capture group like this:\n(?i)(?P5?4?G)\\sDL\\s(?P[^\\s]*)\\sUP\\s(?P[^\\s]*)\n(?i) makes the whole regex case insensitive\n(?P5?4?G) is the first group matching on either 4g, 5g, 4G or 5G.\n(?P[^\\s]*) is the second and third group matching on everything that is not a space.\nThen in Python you can do:\nmatch = re.match('(?i)(?P5?4?G)\\sDL\\s(?P[^\\s]*)\\sUP\\s(?P[^\\s]*)', input)\nAnd access each group like so:\nmatch.group('g1') etc.","Q_Score":0,"Tags":"python,regex","A_Id":55228523,"CreationDate":"2019-03-18T18:58:00.000","Title":"Regex to get key words, all digits and periods","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"In Zapier, I have a \"Run Python\" action triggered by a \"Twitter\" event. One of the fields passed to me by the Twitter event is called \"Entities URLs Display URL\". It's the list of anchor texts of all of the links in the tweet being processed.\nZapier is passing this value into my Python code as a single comma-separated string. I know I can use .split(',') to get a list, but this results in ambiguity if the original strings contained commas.\nIs there some way to get Zapier to pass this sequence of strings into my code as a sequence of strings rather than as a single joined-together string?","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":124,"Q_Id":55233255,"Users Score":2,"Answer":"David here, from the Zapier Platform team. \nAt this time, all inputs to a code step are coerced into strings due to the way data is passed between zap steps. This is a great request though and I'll make a note of it internally.","Q_Score":1,"Tags":"python,zapier","A_Id":55244199,"CreationDate":"2019-03-19T03:35:00.000","Title":"In Zapier, how do I get the inputs to my Python \"Run Code\" action to be passed in as lists and not joined strings?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have made a code using pytesseract and whenever I run it, I get this error:\nTesseractNotFoundError: tesseract is not installed or it's not in your path\nI have installed tesseract using HomeBrew and also pip installed it.","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":2063,"Q_Id":55235369,"Users Score":2,"Answer":"If installed with Homebrew, it will be located in \/usr\/local\/bin\/tesseract by default. To verify this, run which tesseract in the terminal as Dmitrrii Z. 
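The regex in the answer above appears to have lost its angle brackets in transit (the `(?P...)` groups are missing their names). A reconstructed, runnable version with named groups, slightly tightened to `[45]g` for the generation token:

```python
import re

text = "5G DL 624.26 UP 168.20 4g DL 2 Up .44"
pattern = r"(?i)(?P<g>[45]g)\sDL\s(?P<dl>\S+)\sUP\s(?P<up>\S+)"
for m in re.finditer(pattern, text):
    print(m.group("g"), m.group("dl"), m.group("up"))
# 5G 624.26 168.20
# 4g 2 .44
```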
mentioned.\nIf it's there, you can set it up in your python environment by adding the following line to your python script, after importing the library:\npytesseract.pytesseract.tesseract_cmd = r'\/usr\/local\/bin\/tesseract'","Q_Score":0,"Tags":"python,python-3.x,macos,tesseract,python-tesseract","A_Id":58385393,"CreationDate":"2019-03-19T07:09:00.000","Title":"Where is the tesseract executable file located on MacOS, and how to define it in Python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":1,"REVIEW":0},{"Question":"If I have the files frame.py and bindings.py both with classes Frame and Bindings respectively inside of them, I import the bindings.py file into frame.py by using from bindings import Bindings but how do I go about importing the frame.py file into my bindings.py file. If I use import frame or from frame import Frame I get the error ImportError: cannot import name 'Bindings' from 'bindings'. Is there any way around this without restructuring my code?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":83,"Q_Id":55237179,"Users Score":0,"Answer":"Instead of using from bindings import Bindings try import bindings.","Q_Score":0,"Tags":"python","A_Id":55237563,"CreationDate":"2019-03-19T09:10:00.000","Title":"Call function from file that has already imported the current file","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":1,"REVIEW":0},{"Question":"I'm creating a web applcation in Python and I only want the user to be able to enter a weekday that is older than today's date. I've had a look at isoweekday() for example but don't know how to integrate it into a flask form. The form currently looks like this:\nappointment_date = DateField('Appointment Date', format='%Y-%m-%d', validators=[DataRequired()])\nThanks","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":126,"Q_Id":55258105,"Users Score":0,"Answer":"If you just want a weekday, you should put a select or a textbox, not a date picker.\nIf you put a select, you can disable the days before today so you don't even need a validation","Q_Score":0,"Tags":"python,forms,flask","A_Id":55258169,"CreationDate":"2019-03-20T10:03:00.000","Title":"How to only enter a date that is a weekday in Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I need suggestions on how to speed up access to python programs when called from Golang. I really need fast access time (very low latency).\nApproach 1:\nfunc main() {\n...\n...\n cmd = exec.Command(\"python\", \"test.py\")\n o, err = cmd.CombinedOutput()\n...\n}\nIf my test.py file is a basic print \"HelloWorld\" program, the execution time is over 50ms. 
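Combining the two steps of this answer into one sketch (locate the binary the way `which tesseract` would, then point pytesseract at it; the fallback path is the Homebrew default quoted above):

```python
import shutil
import pytesseract

# shutil.which mirrors `which tesseract`; fall back to the default
# Homebrew location if the binary is not on PATH.
path = shutil.which("tesseract") or "/usr/local/bin/tesseract"
pytesseract.pytesseract.tesseract_cmd = path
```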
I assume most of the time is for loading the shell and python in memory.\nApproach 2:\nThe above approach can be sped up substantially by having python start an HTTP server and then having the Go code POST an HTTP request and get the response from the HTTP server (python). This speeds up response times to less than 5ms.\nI guess the main reason for this is probably because the python interpreter is already loaded and warm in memory.\nAre there other approaches I can use similar to approach 2 (shared memory, etc.) which could speed up the response from my python code? Our application requires extremely low latency, and the 50 ms I am currently seeing from using Golang's exec package is not going to cut it.\nthanks,","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":235,"Q_Id":55271734,"Users Score":0,"Answer":"Approach 1: Simple HTTP server and client\nApproach 2: Local socket or pipe\nApproach 3: Shared memory\nApproach 4: gRPC server and client\nIn fact, I prefer the gRPC method with streaming: it holds the connection open (thanks to HTTP\/2), and it's easy, fast and secure. It also makes it easy to move the python node to another machine.","Q_Score":0,"Tags":"python,go,exec,low-latency","A_Id":55272973,"CreationDate":"2019-03-20T23:43:00.000","Title":"Speed up access to python programs from Golang's exec package","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I'm a newbie in image processing and python in general. For an image recognition project, I want to compare every pixel with one another. For that, I need to create a program that iterates through every pixel, takes its value (for example \"[28, 78, 72]\") and creates some kind of values through comparing it to every other pixel. I did manage to access one single number in an array element \/pixel (output: 28) through a bunch of for loops, but I just couldn't figure out how to access every number in every pixel, in every row. Does anyone know a good algorithm to solve my problem? I use OpenCV for reading in the image, by the way.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":63,"Q_Id":55288421,"Users Score":0,"Answer":"Comparing every pixel with a \"pattern\" can be done with convolution. You should take a look at the Haar cascade algorithm.","Q_Score":0,"Tags":"python-3.x,algorithm,image-recognition","A_Id":55295961,"CreationDate":"2019-03-21T20:01:00.000","Title":"Python: Iterate through every pixel in an image for image recognition","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I'm using numpy.savetxt() to save an array, but it's rounding my values to the first decimal point, which is a problem. Does anyone have a clue how to change this?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":789,"Q_Id":55288883,"Users Score":1,"Answer":"You can set the precision by changing the fmt parameter.
For example np.savetxt('tmp.txt',a, fmt='%1.3f') would leave you with an output with the precision of first three decimal points","Q_Score":0,"Tags":"python,save","A_Id":55289028,"CreationDate":"2019-03-21T20:38:00.000","Title":"numpy.savetxt() rounding values","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have basic knowledge of SVM, but now I am working with images. I have images in 5 folders, each folder, for example, has images for letters a, b, c, d, e. The folder 'a' has images of handwriting letters for 'a, folder 'b' has images of handwriting letters for 'b' and so on.\nNow how can I use the images as my training data in SVM in Python.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":72,"Q_Id":55292341,"Users Score":0,"Answer":"as far i understood you want to train your svm to classify these images into the classes named a,b,c,d . For that you can use any of the good image processing techniques to extract features (such as HOG which is nicely implemented in opencv) from your image and then use these features , and the label as the input to your SVM training (the corresponding label for those would be the name of the folders i.e a,b,c,d) you can train your SVM using the features only and during the inference time , you can simply calculate the HOG feature of the image and feed it to your SVM and it will give you the desired output.","Q_Score":1,"Tags":"python,svm","A_Id":55294058,"CreationDate":"2019-03-22T03:06:00.000","Title":"Training SVM in Python with pictures","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have a contanarized flask app with external db, that logs users on other site using selenium. Everything work perfectly in localhost. I want to deploy this app using containers and found selenium container with google chrome within could make the job. And my question is: how to execute scripts\/methods from flask container in selenium container? I tried to find some helpful info, but I didn't find anything. \nShould I make an API call from selenium container to flask container? Is it the way or maybe something different?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":115,"Q_Id":55299697,"Users Score":1,"Answer":"As far as i understood, you are trying to take your local implementation, which runs on your pc and put it into two different docker containers. Then you want to make a call from the selenium container to your container containing the flask script which connects to your database.\nIn this case, you can think of your containers like two different computers. You can tell docker do create an internal network between these two containers and send the request via API call, like you suggested. 
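A quick demonstration of the fmt behaviour described in the numpy.savetxt answer above (file names are arbitrary):

```python
import numpy as np

a = np.array([[1.23456789, 9.87654321]])

np.savetxt("tmp.txt", a, fmt="%1.3f")    # writes "1.235 9.877"
np.savetxt("full.txt", a, fmt="%.18e")   # keeps full float64 precision
```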
But you are not limited to this approach, you can use any technique, that works for two computers to exchange commands.","Q_Score":0,"Tags":"python,selenium,docker,flask,google-chrome-headless","A_Id":55302692,"CreationDate":"2019-03-22T12:32:00.000","Title":"How to execute script from container within another container?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I downloaded VS2019 preview to try how it works with Python.\nI use Anaconda, and VS2019 sees the Anaconda virtual environment, terminal opens and works but when I try to launch 'import numpy', for example, I receive this:\n\nAn internal error has occurred in the Interactive window. Please\n restart Visual Studio. Intel MKL FATAL ERROR: Cannot load\n mkl_intel_thread.dll. The interactive Python process has exited.\n\nDoes anyone know how to fix it?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1421,"Q_Id":55307873,"Users Score":0,"Answer":"I had same issue, this worked for me:\nTry to add conda-env-root\/Library\/bin to the path in the run environment.","Q_Score":1,"Tags":"python,visual-studio,anaconda,visual-studio-2019","A_Id":58827105,"CreationDate":"2019-03-22T21:15:00.000","Title":"Visual Studio doesn't work with Anaconda environment","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":1,"REVIEW":0},{"Question":"I have some model where there are date field and CharField with choices New or Done, and I want to show some message for this model objects in my API views if 2 conditions are met, date is past and status is NEW, but I really don't know how I should resolve this.\nI was thinking that maybe there is option to make some field in model that have choices and set suitable choice if conditions are fulfilled but I didn't find any information if something like this is possible so maybe someone have idea how resolve this?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":32,"Q_Id":55326465,"Users Score":0,"Answer":"You need override the method save of your model. An overrided method must check the condition and show message\nYou may set the signal receiver on the post_save signal that does the same like (1).","Q_Score":0,"Tags":"python,django","A_Id":55328234,"CreationDate":"2019-03-24T17:23:00.000","Title":"Automatically filled field in model","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":1,"ERRORS":0,"REVIEW":0},{"Question":"I have a dataframe with 5632 columns, and I only want to keep 500 of them. I have the columns names (that I wanna keep) in a dataframe as well, with the names as the row index. 
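Alongside the save-override and post_save suggestions above, one hedged alternative is to expose the condition as a computed property, so no stored field can go stale; model and field names here are invented for illustration:

```python
from django.db import models
from django.utils import timezone

class Item(models.Model):
    STATUS_CHOICES = [("NEW", "New"), ("DONE", "Done")]
    date = models.DateField()
    status = models.CharField(max_length=4, choices=STATUS_CHOICES, default="NEW")

    @property
    def show_message(self):
        # True exactly when the API view should display the message:
        # the date is past and the status is still NEW.
        return self.status == "NEW" and self.date < timezone.localdate()
```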
Is there any way to do this?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":84,"Q_Id":55330844,"Users Score":0,"Answer":"Let us assume your DataFrame is named as df and you have a list cols of column indices you want to retain. Then you should use:\ndf1 = df.iloc[:, cols]\nThis statement will drop all the columns other than the ones whose indices have been specified in cols. Use df1 as your new DataFrame.","Q_Score":0,"Tags":"python-3.x,pandas,dataframe","A_Id":55332549,"CreationDate":"2019-03-25T03:15:00.000","Title":"how to drop multiple (~5000) columns in the pandas dataframe?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am using the PuLP library in Python to solve an MILP problem. I have run my problem successfully with the default solver (CBC). Now I would like to use PuLP with another solver (GLPK). How do I set up PuLP with GLPK?\nI have done some research online and found information on how to use GLPK (e.g. with lp_prob.solve(pulp.GLPK_CMD())) but haven't found information on how to actually set up PuLP with GLPK (or any other solver for that matter), so that it finds my GLPK installation. I have already installed GLPK seperately (but I didn't add it to my PATH environment variable).\nI ran the command pulp.pulpTestAll()\nand got:\nSolver unavailable\nI know that I should be getting a \"passed\" instead of an \"unavailable\" to be able to use it.","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":3898,"Q_Id":55362984,"Users Score":3,"Answer":"After reading in more detail the code and testing out some things, I finally found out how to use GLPK with PuLP, without changing anything in the PuLP package itself.\nYour need to pass the path as an argument to GLPK_CMD in solve as follows (replace with your glpsol path):\nlp_prob.solve(GLPK_CMD(path = 'C:\\\\Users\\\\username\\\\glpk-4.65\\\\w64\\\\glpsol.exe')\nYou can also pass options that way, e.g.\nlp_prob.solve(GLPK_CMD(path = 'C:\\\\Users\\\\username\\\\glpk-4.65\\\\w64\\\\glpsol.exe', options = [\"--mipgap\", \"0.01\",\"--tmlim\", \"1000\"])","Q_Score":3,"Tags":"python,pulp,glpk","A_Id":55382399,"CreationDate":"2019-03-26T17:26:00.000","Title":"How to configure PuLP to call GLPK solver","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am using the PuLP library in Python to solve an MILP problem. I have run my problem successfully with the default solver (CBC). Now I would like to use PuLP with another solver (GLPK). How do I set up PuLP with GLPK?\nI have done some research online and found information on how to use GLPK (e.g. with lp_prob.solve(pulp.GLPK_CMD())) but haven't found information on how to actually set up PuLP with GLPK (or any other solver for that matter), so that it finds my GLPK installation. 
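Complementing the .iloc answer above: if what you have are column names (for example, the row index of the second frame the question mentions) rather than integer positions, .loc does the same job. A small sketch with invented names:

```python
import pandas as pd
import numpy as np

df = pd.DataFrame(np.random.rand(3, 6), columns=list("abcdef"))  # stand-in

keep = ["b", "d", "f"]      # e.g. keep = list(names_df.index)
df1 = df.loc[:, keep]       # positional variant: df1 = df.iloc[:, cols]
```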
I have already installed GLPK separately (but I didn't add it to my PATH environment variable).\nI ran the command pulp.pulpTestAll()\nand got:\nSolver unavailable\nI know that I should be getting a \"passed\" instead of an \"unavailable\" to be able to use it.","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":3898,"Q_Id":55362984,"Users Score":1,"Answer":"I had the same problem, but it was not related to the GLPK installation; it was with creating the solution file, and the message is confusing. My problem was that I used numeric names for my variables, such as '0238' or '1342'. I added an 'x' before them, so they looked like 'x0238'.","Q_Score":3,"Tags":"python,pulp,glpk","A_Id":62127258,"CreationDate":"2019-03-26T17:26:00.000","Title":"How to configure PuLP to call GLPK solver","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"Consider a set of n cubes with colored facets (each one with a specific color\nout of 4 possible ones - red, blue, green and yellow). Form the highest possible tower of k cubes ( k \u2264 n ) properly rotated (12 positions of a cube), so the lateral faces of the tower will have the same color, using an evolutionary algorithm.\nWhat I did so far:\nI thought that the following representation would be suitable: an Individual could be an array of n integers, each number having a value between 1 and 12, indicating the current position of the cube (an input file contains n lines, each line shows information about the color of each face of the cube). \nThen, the Population consists of multiple Individuals.\nThe Crossover method should create a new child (Individual), containing information from its parents (approximately half from each parent).\nNow, my biggest issue is related to the Mutate and Fitness methods.\nIn the Mutate method, with some probability of mutation (say 0.01), I should change the position of a random cube to another random position (for example, the third cube can have its position (rotation) changed from 5 to 12).\nIn the Fitness method, I thought that I could compare, two by two, the cubes from an Individual, to see if they have common faces. If they have a common face, a \"count\" variable will be incremented with the number of common faces, and if all 4 lateral faces are the same for these 2 cubes, the count will increase by another number of points. After comparing all the adjacent cubes, the count variable is returned. Our goal is to obtain as many adjacent cubes having the same lateral faces as we can, i.e. to maximize the Fitness method.\nMy question is the following:\nHow can a rotation be implemented? I mean, if a cube changes its position (rotation) from 3 to 10, how do we know the new arrangement of the faces?
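Putting the accepted answer's GLPK_CMD call into a minimal end-to-end sketch (the glpsol path is the answer's example path; the non-numeric variable name also sidesteps the naming issue raised in the second answer):

```python
import pulp

prob = pulp.LpProblem("demo", pulp.LpMaximize)
x = pulp.LpVariable("x1", lowBound=0, upBound=10, cat="Integer")  # not '0238'
prob += x   # objective: maximize x

solver = pulp.GLPK_CMD(
    path=r"C:\Users\username\glpk-4.65\w64\glpsol.exe",
    options=["--mipgap", "0.01", "--tmlim", "1000"],
)
prob.solve(solver)
print(pulp.LpStatus[prob.status], pulp.value(x))
```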
Or, if I perform a mutation on a cube, what is the process of rotating this cube if a random rotation number is selected?\nI think that I should create a vector of 6 elements (the colors of each face) for each cube, but when the rotation value of a cube is modified, I don't know in what manner the elements of its vector of faces should be rearranged.\nShuffling them is not correct, because by doing this, two opposite faces could become adjacent, meaning that the vector doesn't represent that particular cube anymore (obviously, two opposite faces cannot be adjacent).","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":227,"Q_Id":55367429,"Users Score":0,"Answer":"First, I'm not sure how you get 12 rotations; I get 24: 4 orientations with each of the 6 faces on the bottom. Use a standard D6 (6-sided die) and see how many different layouts you get.\nApparently, the first thing you need to build is a something (a class?) that accurately represents a cube in any of the available orientations. I suggest that you use a simple structure that can return the four faces in order -- say, front-right-back-left -- given a cube and the rotation number.\nI think you can effectively represent a cube as three pairs of opposing sides. Once you've represented that opposition, the remaining organization is arbitrary numbering: any valid choice is isomorphic to any other. Each rotation will produce an interleaved sequence of two opposing pairs. For instance, a standard D6 has opposing pairs [(1, 6), (2, 5), (3, 4)]. The first 8 rotations would put 1 and 6 on the hidden faces (top and bottom), giving you the sequence 2354 in each of its 4 rotations and their reverses.\nThat class is one large subsystem of your problem; the other, the genetic algorithm, you seem to have well in hand. Stack all of your cubes randomly; \"fitness\" is a count of the most prevalent 4-show (sequence of 4 sides) in the stack. At the start, this will generally be 1, as nothing will match.\nFrom there, you seem to have an appropriate handle on mutation. You might give a higher chance of mutating a non-matching cube, or perhaps see if some cube is a half-match: two opposite faces match the \"best fit\" 4-show, so you merely rotate it along that axis, preserving those two faces, and swapping the other pair for the top-bottom pair (note: two directions to do that).\nDoes that get you moving?","Q_Score":1,"Tags":"python,artificial-intelligence,evolutionary-algorithm","A_Id":55368146,"CreationDate":"2019-03-26T23:03:00.000","Title":"Tower of colored cubes","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have a DAG that imports data from a source to a server. From there, I am looking to download that file from the server to the Windows network. I would like to keep this part in Airflow for automation purposes. Does anyone know how to do this in Airflow? 
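A minimal sketch of the face-permutation bookkeeping this answer describes; the roll/turn decomposition is a standard way to enumerate the 24 orientations, and the U/D/F/B/L/R face keys are my own labels:

```python
def roll(c):  # rotate about the left-right axis
    return {"U": c["F"], "F": c["D"], "D": c["B"], "B": c["U"],
            "L": c["L"], "R": c["R"]}

def turn(c):  # rotate about the up-down axis
    return {"F": c["L"], "L": c["B"], "B": c["R"], "R": c["F"],
            "U": c["U"], "D": c["D"]}

def orientations(c):
    """Yield all 24 orientations of a cube given as a face->color dict."""
    for _ in range(2):
        for _ in range(3):
            c = roll(c)
            yield c
            for _ in range(3):
                c = turn(c)
                yield c
        c = roll(turn(roll(c)))  # jump to the remaining pair of bottom faces

cube = dict(zip("UDFBLR", range(6)))  # distinct labels so all 24 differ
print(len({tuple(sorted(o.items())) for o in orientations(cube)}))  # 24
```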
I am not sure whether to use the os package, the shutil package, or maybe there is a different approach.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1629,"Q_Id":55385808,"Users Score":0,"Answer":"I think you're saying you're looking for a way to get files from a cloud server to a windows shared drive or onto a computer in the windows network, these are some options I've seen used:\n\nUse a service like google drive, dropbox, box, or s3 to simulate a synced folder on the cloud machine and a machine in the windows network. \nCall a bash command to SCP the files to the windows server or a worker in the network. This could work in the opposite direction too. \nAdd the files to a git repository and have a worker in the windows network sync the repository to a shared location. This option is only good in very specific cases. It has the benefit that you can track changes and restore old states (if the data is in CSV or another text format), but it's not great for large files or binary files.\nUse rsync to transfer the files to a worker in the windows network which has the shared location mounted and move the files to the synced dir with python or bash.\nMount the network drive to the server and use python or bash to move the files there.\n\nAll of these should be possible with Airflow by either using python (shutil) or a bash script to transfer the files to the right directory for some other process to pick up or by calling a bash sub-process to perform the direct transfer by SCP or commit the data via git. You will have to find out what's possible with your firewall and network settings. Some of these would require coordinating tasks on the windows side (the git option for example would require some kind of cron job or task scheduler to pull the repository to keep the files up to date).","Q_Score":0,"Tags":"python,airflow,samba,smb","A_Id":55387555,"CreationDate":"2019-03-27T20:20:00.000","Title":"Airflow: How to download file from Linux to Windows via smbclient","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"basically I have this window with a bunch of buttons but I want the background of the window to be invisible\/transparent so the buttons are essentially floating. However, GTK seems to be pretty limited with CSS and I haven't found a way to do it yet. I've tried making the main window opacity 0 but that doesn't seem to work. Is this even possible and if so how can I do it? Thanks.\nEdit: Also, I'm using X11 forwarding.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":65,"Q_Id":55423219,"Users Score":0,"Answer":"For transparency Xorg requires a composite manager running on the X11 server. 
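As a concrete sketch of the SCP option from the Airflow answer above, using Airflow 1.x-style imports; the host, user and file paths are placeholders:

```python
from datetime import datetime
from airflow import DAG
from airflow.operators.bash_operator import BashOperator

# Hypothetical DAG fragment: push an exported file to a machine on the
# Windows network over SCP.
with DAG("export_to_windows", start_date=datetime(2019, 1, 1),
         schedule_interval=None) as dag:
    copy_to_share = BashOperator(
        task_id="scp_to_windows_worker",
        bash_command=(
            "scp /data/exports/report.csv "
            "svc_user@win-worker.example.com:/mnt/shared/reports/"
        ),
    )
```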
The compmgr program from Xorg is a minimal composite manager.","Q_Score":1,"Tags":"python,gtk3","A_Id":55425036,"CreationDate":"2019-03-29T18:04:00.000","Title":"Python GTK+ 3: Is it possible to make background window invisible?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I'm currently using an Android device (of Samsung), Pydroid 3.\nI tried to see any graphs, but it doesn't works.\nWhen I run the code, it just shows me a black-blank screen temporarily and then goes back to the source code editing window.\n(means that i can't see even terminal screen, which always showed me [Program Finished]) \nWell, even the basic sample code which Pydroid gives me doesn't show me the graph :(\nI've seen many tutorials which successfully showed graphs, but well, mine can't do that things.\nUnfortunately, cannot grab any errors.\nUsing same code which worked at Windows, so don't think the code has problem.\nOf course, matplotlib is installed, numpy is also installed.\nIf there's any possible problems, please let me know.","AnswerCount":8,"Available Count":3,"Score":0.049958375,"is_accepted":false,"ViewCount":8119,"Q_Id":55434255,"Users Score":2,"Answer":"I also had this problem a while back, and managed to fix it by using plt.show()\nat the end of your code. With matplotlib.pyplot as plt.","Q_Score":4,"Tags":"android,python,matplotlib,pydroid","A_Id":56324221,"CreationDate":"2019-03-30T18:02:00.000","Title":"Matplotlib with Pydroid 3 on Android: how to see graph?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I'm currently using an Android device (of Samsung), Pydroid 3.\nI tried to see any graphs, but it doesn't works.\nWhen I run the code, it just shows me a black-blank screen temporarily and then goes back to the source code editing window.\n(means that i can't see even terminal screen, which always showed me [Program Finished]) \nWell, even the basic sample code which Pydroid gives me doesn't show me the graph :(\nI've seen many tutorials which successfully showed graphs, but well, mine can't do that things.\nUnfortunately, cannot grab any errors.\nUsing same code which worked at Windows, so don't think the code has problem.\nOf course, matplotlib is installed, numpy is also installed.\nIf there's any possible problems, please let me know.","AnswerCount":8,"Available Count":3,"Score":1.2,"is_accepted":true,"ViewCount":8119,"Q_Id":55434255,"Users Score":0,"Answer":"After reinstalling it worked.\nThe problem was that I forced Pydroid to update matplotlib via Terminal, not the official PIP tab.\nThe version of matplotlib was too high for pydroid","Q_Score":4,"Tags":"android,python,matplotlib,pydroid","A_Id":60702515,"CreationDate":"2019-03-30T18:02:00.000","Title":"Matplotlib with Pydroid 3 on Android: how to see graph?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web 
Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I'm currently using an Android device (of Samsung), Pydroid 3.\nI tried to see any graphs, but it doesn't works.\nWhen I run the code, it just shows me a black-blank screen temporarily and then goes back to the source code editing window.\n(means that i can't see even terminal screen, which always showed me [Program Finished]) \nWell, even the basic sample code which Pydroid gives me doesn't show me the graph :(\nI've seen many tutorials which successfully showed graphs, but well, mine can't do that things.\nUnfortunately, cannot grab any errors.\nUsing same code which worked at Windows, so don't think the code has problem.\nOf course, matplotlib is installed, numpy is also installed.\nIf there's any possible problems, please let me know.","AnswerCount":8,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":8119,"Q_Id":55434255,"Users Score":0,"Answer":"You just need to add a line\nplt.show()\nThen it will work. You can also save the file before showing\nplt.savefig(\"*imageName*.png\")","Q_Score":4,"Tags":"android,python,matplotlib,pydroid","A_Id":66386763,"CreationDate":"2019-03-30T18:02:00.000","Title":"Matplotlib with Pydroid 3 on Android: how to see graph?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I was trying to install python 3 because I wanted to work on a project using python 3. Instructions I'd found were not working, so I boldly ran brew install python. Wrong move. Now when I run python -V I get \"Python 3.7.3\", and when I try to enter a virtualenv I get -bash: \/Users\/elliot\/Library\/Python\/2.7\/bin\/virtualenv: \/usr\/local\/opt\/python\/bin\/python2.7: bad interpreter: No such file or directory\nMy ~\/.bash_profile reads \nexport PATH=\"\/Users\/elliot\/Library\/Python\/2.7\/bin:\/usr\/local\/opt\/python\/libexec\/bin:\/Library\/PostgreSQL\/10\/bin:$PATH\" \nbut ls \/usr\/local\/Cellar\/python\/ gets me 3.7.3 so it seems like brew doesn't even know about my old 2.7 version anymore.\nI think what I want is to reset my system python to 2.7, and then add python 3 as a separate python running on my system. I've been googling, but haven't found any advice on how to specifically use brew to do this.\nEdit: I'd also be happy with keeping Python 3.7, if I knew how to make virtualenv work again. I remember hearing that upgrading your system python breaks everything, but I'd be super happy to know if that's outdated knowledge and I'm just being a luddite hanging on to 2.7.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":220,"Q_Id":55437402,"Users Score":0,"Answer":"So, I got through it by completely uninstalling Python, which I'd been reluctant to do, and then reinstalled Python 2. I had to update my path and open a new shell to get it to see the new python 2 installation, and things fell into place. 
I'm now using pyenv for my Python 3 project, and it's a dream.","Q_Score":1,"Tags":"python,python-3.x,python-2.7,homebrew","A_Id":55445585,"CreationDate":"2019-03-31T02:36:00.000","Title":"Accidentally used homebrew to change my default python to 3.7, how do I change it back to 2.7?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I created numerous python scripts on my pc laptop, and I want to run those scripts on my android phone. How can I do that? How can I move python scripts from my windows pc laptop, and use those python scripts on my samsung adroid phone? \nI have downloaded qpython from the google playstore, but I still don't know how to get my pc python programs onto my phone. I heard some people talk about \"ftp\" but I don't even know what that means. \nThanks","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":385,"Q_Id":55438484,"Users Score":0,"Answer":"Send them to yourself via email, then download the scripts onto your phone and run them through qpython.\nHowever you have to realize not all the modules on python work on qpython so your scripts may not work the same when you transfer them.","Q_Score":0,"Tags":"android,python,ftp,qpython","A_Id":56649272,"CreationDate":"2019-03-31T06:41:00.000","Title":"How does one transfer python code written in a windows laptop to a samsung android phone?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I created numerous python scripts on my pc laptop, and I want to run those scripts on my android phone. How can I do that? How can I move python scripts from my windows pc laptop, and use those python scripts on my samsung adroid phone? \nI have downloaded qpython from the google playstore, but I still don't know how to get my pc python programs onto my phone. I heard some people talk about \"ftp\" but I don't even know what that means. \nThanks","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":385,"Q_Id":55438484,"Users Score":0,"Answer":"you can use TeamViewer to control your android phone from your PC. 
And copy and paste the code easily.\nor\nYou can transfer your scripts on your phone memory in the qpython folder and open it using qpython for android.","Q_Score":0,"Tags":"android,python,ftp,qpython","A_Id":56236136,"CreationDate":"2019-03-31T06:41:00.000","Title":"How does one transfer python code written in a windows laptop to a samsung android phone?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have a text file like this:\n\n...\n NAME : name-1\n ...\n NAME : name-2\n ...\n ...\n ...\n NAME : name-n\n ...\n\nI want output text files like this:\n\nname_1.txt : NAME : name-1 ...\n name_2.txt : NAME : name-2 ...\n ...\n name_n.txt : NAME : name-n ...\n\nI have the basic knowledge of grep, sed, awk, shell scripting, python.","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":87,"Q_Id":55459973,"Users Score":0,"Answer":"With GNU sed:\nsed \"s\/\\(.*\\)\\(name-.*\\)\/echo '\\1 \\2' > \\2.txt\/;s\/-\/_\/2e\" input-file\n\nTurn line NAME : name-2 into echo \"NAME : name-2\" > name-2.txt\nThen replace the second - with _ yielding echo \"NAME : name-2\" > name_2.txt\ne have the shell run the command constructed in the pattern buffer.\n\nThis outputs blank lines to stdout, but creates a file for each matching line.\nThis depends on the file having nothing but lines matching this format... but you can expand the gist here to skip other lines with n.","Q_Score":0,"Tags":"python,awk,sed,grep","A_Id":55462346,"CreationDate":"2019-04-01T16:47:00.000","Title":"how to find text before and after given words and output into different text files?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have a task to compare data of two tables in two different oracle databases. We have access of views in both of db. Using SQLAlchemy ,am able to fetch rows from views but unable to parse it. \nIn one db the type of ID column is : Raw \nIn db where column type is \"Raw\", below is the row am getting from resultset . \n(b'\\x0b\\x975z\\x9d\\xdaF\\x0e\\x96>[Ig\\xe0\/', 1, datetime.datetime(2011, 6, 7, 12, 11, 1), None, datetime.datetime(2011, 6, 7, 12, 11, 1), b'\\xf2X\\x8b\\x86\\x03\\x00K|\\x99(\\xbc\\x81n\\xc6\\xd3', None, 'I', 'Inactive')\nID Column data: b'\\x0b\\x975z\\x9d\\xdaF\\x0e\\x96>[_Ig\\xe0\/'\nActual data in ID column in database: F2588B8603004B7C9928BC816EC65FD3\nThis data is not complete hexadecimal format as it has some speical symbols like >|[_ etc. 
I want to know that how can I parse the data in ID column and get it as a string.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":51,"Q_Id":55471457,"Users Score":0,"Answer":"bytes.hex() solved the problem","Q_Score":0,"Tags":"python,sqlalchemy","A_Id":55590150,"CreationDate":"2019-04-02T09:36:00.000","Title":"Unable to parse the rows in ResultSet returned by connection.execute(), Python and SQLAlchemy","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I'm on Windows and want to use the Python package apt_pkg in PyCharm.\nOn Linux I get the package by doing sudo apt-get install python3-apt but how to install apt_pkg on Windows?\nThere is no such package on PyPI.","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":1074,"Q_Id":55474845,"Users Score":1,"Answer":"There is no way to run apt-get in Windows; the package format and the supporting infrastructure is very explicitly Debian-specific.","Q_Score":1,"Tags":"python,pycharm,apt","A_Id":55474963,"CreationDate":"2019-04-02T12:30:00.000","Title":"How to install Python packages from python3-apt in PyCharm on Windows?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"When I done with my work, I try to close my jupyter notebook via 'Close and Halt' under the file menu. However it somehow do not functioning.\nI am running the notebook from Canopy, version: 2.1.9.3717, under macOs High Sierra.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":556,"Q_Id":55498271,"Users Score":1,"Answer":"If you are running Jupyter notebook from Canopy, then the Jupyter notebook interface is not controlling the kernel; rather, Canopy's built-in ipython Qtconsole is. You can restart the kernel from the Canopy run menu.","Q_Score":0,"Tags":"python,macos,jupyter-notebook,macos-high-sierra,canopy","A_Id":55539791,"CreationDate":"2019-04-03T14:59:00.000","Title":"\u201cClose and Halt\u201d feature does not functioning in jupyter notebook launched under Canopy on macOs High Sierra","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have a Python script which communicates with a Financial site through an API. I also have a Django site, i would like to create a basic form on my site where i input something and, according to that input, my Python script should perform some operations. \nHow can i do this? I'm not asking for any code, i just would like to understand how to accomplish this. How can i \"run\" a python script on a Django project? Should i make my Django project communicate with the script through a post request? 
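To make the bytes.hex() fix for the SQLAlchemy RAW-column question above concrete, a tiny sketch (the value is reconstructed from the hex string quoted in the question):

```python
raw = bytes.fromhex("F2588B8603004B7C9928BC816EC65FD3")
print(raw)                # b'\xf2X\x8b\x86\x03\x00K|\x99(\xbc\x81n\xc6_\xd3'
print(raw.hex().upper())  # F2588B8603004B7C9928BC816EC65FD3
```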
Or is there a simpler way?","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":1367,"Q_Id":55501346,"Users Score":0,"Answer":"I agree with @Daniel Roseman\nIf you are looking for your program to be faster, maybe multi-threading would be useful.","Q_Score":1,"Tags":"python,django","A_Id":55501698,"CreationDate":"2019-04-03T17:51:00.000","Title":"Running an external Python script on a Django site","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have a Python script which communicates with a Financial site through an API. I also have a Django site, i would like to create a basic form on my site where i input something and, according to that input, my Python script should perform some operations. \nHow can i do this? I'm not asking for any code, i just would like to understand how to accomplish this. How can i \"run\" a python script on a Django project? Should i make my Django project communicate with the script through a post request? Or is there a simpler way?","AnswerCount":2,"Available Count":2,"Score":0.1973753202,"is_accepted":false,"ViewCount":1367,"Q_Id":55501346,"Users Score":2,"Answer":"Since you don't want code, and you didn't get detailed on everything required required, here's my suggestion:\n\nMake sure your admin.py file has editable fields for the model you're using.\nMake an admin action,\nTake the selected row with the values you entered, and run that action with the data you entered.\n\nI would be more descriptive, but I'd need more details to do so.","Q_Score":1,"Tags":"python,django","A_Id":55501726,"CreationDate":"2019-04-03T17:51:00.000","Title":"Running an external Python script on a Django site","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"How are you today?\nI'm a newbie in Python. I'm working with SQL server 2014 and Python 3.7. So, my issue is: When any change occurs in a table on DB, I want to receive a message (or event, or something like that) on my server (Web API - if you like this name). \nI don't know how to do that with Python. \nI have an practice (an exp. maybe). I worked with C# and SQL Server, and in this case, I used \"SQL Dependency\" method in C# to solve that. It's really good!\nHave something like that in Python? Many thank for any idea, please!\nThank you so much.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":681,"Q_Id":55507064,"Users Score":0,"Answer":"I do not know many things about SQL. But I guess there are tools for SQL to detect those changes. And then you could create an everlasting loop thread using multithreading package to capture that change. (Remember to use time.sleep() to block your thread so that It wouldn't occupy the CPU for too long.) Once you capture the change, you could call the function that you want to use. (Actually, you could design a simple event engine to do that). I am a newbie in Computer Science and I hope my answer is correct and helpful. 
:)","Q_Score":0,"Tags":"python,sql-server,change-tracking","A_Id":55507101,"CreationDate":"2019-04-04T02:39:00.000","Title":"Tracking any change in an table on SQL Server With Python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am using ubuntu 16 version and running Odoo erp system 12.0 version.\nOn my application log file i see information says \"virtual real time limit (178\/120s) reached\".\nWhat exactly it means & what damage it can cause to my application?\nAlso how i can increase the virtual real time limit?","AnswerCount":2,"Available Count":1,"Score":0.4621171573,"is_accepted":false,"ViewCount":10514,"Q_Id":55510726,"Users Score":5,"Answer":"Open your config file and just add below parameter :\n--limit-time-real=100000","Q_Score":9,"Tags":"python,ubuntu,odoo","A_Id":59282499,"CreationDate":"2019-04-04T07:59:00.000","Title":"virtual real time limit (178\/120s) reached","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I'm wondering how to handle multiple major versions of a dependency library.\nI have an open source library, Foo, at an early release stage. The library is a wrapper around another open source library, Bar. Bar has just launched a new major version. Foo currently only supports the previous version. As I'm guessing that a lot of people will be very slow to convert from the previous major version of Bar to the new major version, I'm reluctant to switch to the new version myself. \nHow is this best handled? As I see it I have these options\n\nSwitch to the new major version, potentially denying people on the old version.\nKeep going with the old version, potentially denying people on the new version.\nHave two different branches, updating both branches for all new features. Not sure how this works with PyPi. Wouldn't I have to release at different version numbers each time?\nSeparate the repository into two parts. Don't really want to do this.\n\nThe ideal solution for me would be to have the same code base, where I could have some sort of C\/C++ macro-like thing where if the version is new, use new_bar_function, else use old_bar_function. When installing the library from PyPi, the already installed version of the major version dictates which version is used. If no version is installed, install the newest. \nWould much appreciate some pointers.","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":558,"Q_Id":55519636,"Users Score":1,"Answer":"Have two different branches, updating both branches for all new features. Not sure how this works with PyPI. Wouldn't I have to release at different version numbers each time?\n\nYes, you could have a 1.x release (that supports the old version) and a 2.x release (that supports the new version) and release both simultaneously. 
This is a common pattern for packages that want to introduce a breaking change, but still want to continue maintaining the previous release as well.","Q_Score":1,"Tags":"python,pip,dependencies,wrapper,pypi","A_Id":55519997,"CreationDate":"2019-04-04T15:23:00.000","Title":"How to handle multiple major versions of dependency","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":1,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have successfully used Q-learning to solve some classic reinforcement learning environments from OpenAI Gym (e.g. Taxi, CartPole). These environments allow for a single action to be taken at each time step. However I cannot find a way to solve problems where multiple actions are taken simultaneously at each time step. For example, in the Roboschool Reacher environment, 2 torque values - one for each axis - must be specified at each time step. The problem is that the Q matrix is built from (state, action) pairs. However, if more than one action is taken simultaneously, it is not straightforward to build the Q matrix.\nThe book \"Deep Reinforcement Learning Hands-On\" by Maxim Lapan mentions this but does not give a clear answer, see quotation below.\n\nOf course, we're not limited to a single action to perform, and the environment could have multiple actions, such as pushing multiple buttons simultaneously or steering the wheel and pressing two pedals (brake and accelerator). To support such cases, Gym defines a special container class that allows the nesting of several action spaces into one unified action.\n\nDoes anybody know how to deal with multiple actions in Q learning?\nPS: I'm not talking about the issue \"continuous vs discrete action space\", which can be tackled with DDPG.","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":937,"Q_Id":55539820,"Users Score":2,"Answer":"You can take one of two approaches, depending on the problem:\n\nThink of the set of actions you need to pass to the environment as independent and make the network output action values for each one (apply softmax separately) - so if you need to pass two actions, the network will have two heads, one for each axis.\nThink of them as dependent and look at the Cartesian product of the sets of actions, and then make the network output a value for each element of the product - so if you have two actions that you need to pass and 5 options for each, the size of the output layer will be 5*5=25, and you just use softmax on that.","Q_Score":4,"Tags":"python,reinforcement-learning,openai-gym,q-learning","A_Id":55540865,"CreationDate":"2019-04-05T16:28:00.000","Title":"How to apply Q-learning to an OpenAI-gym environment where multiple actions are taken at each time step?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"Setting up to start python for data analytics, I want to install python 3.6 in Ubuntu 18.0. Shall I run both versions in parallel or overwrite 2.7, and how?
I am getting ambiguous methods when searched up.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":67,"Q_Id":55552885,"Users Score":0,"Answer":"Try pyenv and\/or pipenv . Both are excellent tools to maintain local python installations.","Q_Score":0,"Tags":"python-3.x,python-2.7","A_Id":55552911,"CreationDate":"2019-04-06T19:50:00.000","Title":"How to install python3.6 in parallel with python 2.7 in Ubuntu 18","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"please, how do I display the month in the form? example:\n07\/04\/2019 i want to change it in 07 april, 2019 \nThank you in advance","AnswerCount":2,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":77,"Q_Id":55556852,"Users Score":2,"Answer":"Try with following steps:\n\nGo to Translations > Languages\nOpen record with your current language.\nEdit date format with %d %A, %Y","Q_Score":1,"Tags":"python,odoo","A_Id":55556884,"CreationDate":"2019-04-07T08:00:00.000","Title":"how to display the month in from view ? (Odoo11)","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"When migrating my project to Python 3 (2to3-3.7 -w -f print *), I observed that a lot of (but not all) print statements became print((...)), so these statements now print tuples instead of performing the expected behavior. I gather that if I'd used -p, I'd be in a better place right now because from __future__ import print_function is at the top of every affected module.\nI'm thinking about trying to use sed to fix this, but before I break my teeth on that, I thought I'd see if anyone else has dealt with this before. Is there a 2to3 feature to clean this up?\nI do use version control (git) and have commits immediately before and after (as well as the .bak files 2to3 creates), but I'm not sure how to isolate the changes I've made from the print situations.","AnswerCount":4,"Available Count":1,"Score":1.0,"is_accepted":false,"ViewCount":4214,"Q_Id":55559825,"Users Score":7,"Answer":"If your code already has print() functions you can use the -x print argument to 2to3 to skip the conversion.","Q_Score":15,"Tags":"python-3.x,sed,python-2to3","A_Id":57280672,"CreationDate":"2019-04-07T14:08:00.000","Title":"How to fix print((double parentheses)) after 2to3 conversion?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":1,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"In python how can I write subsets of an array to disk, without holding the entire array in memory?\nThe xarray input\/output docs note that xarray does not support incremental writes, only incremental reads except by streaming through dask.array. (Also that modifying a dataset only affects the in-memory copy, not the connected file.) 
The dask docs suggest it might be necessary to save the entire array after each manipulation?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":100,"Q_Id":55567542,"Users Score":0,"Answer":"This can be done using netCDF4 (the python library of low level NetCDF bindings). Simply assign to a slice of a dataset variable, and optionally call the dataset .sync() method afterward to ensure no delay before those changes are flushed to the file. \nNote this approach also provides the opportunity to progressively grow a dimension of the array (by calling createDimension with size None, making it the first dimension of a variable, and iteratively assigning to incrementally larger indices along that dimension of the variable).\nAlthough random-access window (i.e. subset) writes appear to require the lower level package, more systematic subset writes (eventually covering the entire array) can be done incrementally with xarray (by specifying a chunk size parameter to trigger use of the dask.array backend), and provided that your algorithm is refactored so that the main loop occurs in the dask\/xarray store-to-file call. This means you will not have explicit control over the sequence in which chunks are generated and written.","Q_Score":1,"Tags":"python,large-data,netcdf","A_Id":55602989,"CreationDate":"2019-04-08T06:39:00.000","Title":"Windowed writes in python, e.g. to NetCDF","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am trying to make a smart IOT device (capable of performing smart Computer Vision operations, on the edge device itself). A Deep Learning algorithm (written in python) is implemented on Raspberry Pi. Now, while shipping this product (software + hardware) to my customer, I want that no one should log in to the raspberry pi and get access to my code. The flow should be something like, whenever someone logs into pi, there should be some kind of key that needs to be input to get access to code. But in that case how OS will get access to code and run it (without key). Then I may have to store the key on local. But still there is a chance to get access to key and get access to the code. I have applied a patent for my work and want to protect it.\nI am thinking to encrypt my code (written in python) and just ship the executable version. I tried pyinstaller for it, but somehow there is a script available on the internet that can reverse engineer it. \nNow I am little afraid as it can leak all my effort of 6 months at one go. Please suggest a better way of doing this.\nThanks in Advance.","AnswerCount":1,"Available Count":1,"Score":-0.1973753202,"is_accepted":false,"ViewCount":330,"Q_Id":55575277,"Users Score":-1,"Answer":"Keeping the code on your server and using internet access is the only way to keep the code private (maybe). Any type of distributed program can be taken apart eventually. You can't (possibly shouldn't) try to keep people from getting inside devices they own and are in their physical possession. If you have your property under patent it shouldn't really matter if people are able to see the code as only you will be legally able to profit from it.\nAs a general piece of advice, code is really difficult to control access to. 
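A minimal sketch of the incremental netCDF4 write pattern this answer describes (file, dimension and variable names are invented):

```python
import numpy as np
from netCDF4 import Dataset

with Dataset("incremental.nc", "w") as ds:
    ds.createDimension("time", None)      # size None => unlimited, growable
    ds.createDimension("x", 100)
    var = ds.createVariable("data", "f4", ("time", "x"))

    for t in range(10):                   # never holds the full array
        var[t, :] = np.random.rand(100)   # assign to a slice of the variable
        ds.sync()                         # flush the change to the file
```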
Trying to encrypt software or apply software keys to it or something like that is at best a futile attempt and at worst can often cause issues with software performance and usability. The best solution is often to link a piece of software with some kind of custom hardware device which is necessary and only you sell. That might not be possible here since you're using generic hardware, but it is food for thought.","Q_Score":0,"Tags":"python,raspberry-pi,deep-learning,iot","A_Id":55578897,"CreationDate":"2019-04-08T14:03:00.000","Title":"Is there any way to hide or encrypt your python code for edge devices? Any way to prevent reverse engineering of python code?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am new to Machine Learning. I am trying to build a classifier that classifies the text as having a url or not having a url. The data is not labelled. I just have textual data. I don't know how to proceed with it. Any help or examples are appreciated.","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":1580,"Q_Id":55576378,"Users Score":1,"Answer":"Since it's text, you can use the bag-of-words technique to create vectors.\n\nYou can use cosine similarity to cluster texts of a common type.\nThen use a classifier, which would depend on the number of clusters.\nThis way you have a labeled training set. \n\nIf you have two clusters, a binary classifier like logistic regression would work. \nIf you have multiple classes, you need to train a model based on multinomial logistic regression,\nor train multiple logistic models using the One vs Rest technique.\n\nLastly, you can test your model using k-fold cross validation.","Q_Score":0,"Tags":"python,machine-learning,classification","A_Id":55576706,"CreationDate":"2019-04-08T15:02:00.000","Title":"How to classify unlabelled data?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"As a project grows, so do dependencies and event chains, especially in overridden save() methods and post_save and pre_save signals. \nExample:\nAn overridden A.save creates two related objects to A - B and C. When C is saved, the post_save signal is invoked that does something else, etc...\nHow can these event chains be made more clear? Is there a way to visualize (generate automatically) such chains\/flows? I'm not looking for ERD nor a Class diagram. I need to be sure that doing one thing in one place won't affect something on the other side of the project, so a simple visualization would be the best.\nEDIT\nTo be clear, I know that it would be almost impossible to check dynamically generated signals. 
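A hedged sketch of the bag-of-words-plus-clustering pipeline outlined in the unlabelled-text answer above; scikit-learn and the toy texts are assumptions, since the answer names no specific library:

```python
# Sketch: vectorize with bag of words, cluster to obtain pseudo-labels,
# then fit a logistic-regression classifier on those labels.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

texts = ["visit http://example.com now", "no link in this text"]  # toy data
X = CountVectorizer().fit_transform(texts)

pseudo_labels = KMeans(n_clusters=2, n_init=10).fit_predict(X)  # cluster step
clf = LogisticRegression().fit(X, pseudo_labels)                # classifier step
```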
I just want to check all (not dynamically generated) post_save, pre_save, and overridden save methods and visualize them so I can see immediately what is happening and where when I save something.","AnswerCount":5,"Available Count":1,"Score":1.0,"is_accepted":false,"ViewCount":1398,"Q_Id":55578230,"Users Score":6,"Answer":"(Too long to fit into a comment, lacking code to be a complete answer) \nI can't mock up a ton of code right now, but another interesting solution, inspired by Mario Orlandi's comment above, would be some sort of script that scans the whole project and searches for any overridden save methods and pre and post save signals, tracking the class\/object that creates them. It could be as simple as a series of regex expressions that look for class definitions followed by any overridden save methods inside. \nOnce you have scanned everything, you could use this collection of references to create a dependency tree (or set of trees) based on the class name and then topologically sort each one. Any connected components would illustrate the dependencies, and you could visualize or search these trees to see the dependencies in a very easy, natural way. I am relatively naive in django, but it seems you could statically track dependencies this way, unless it is common for these methods to be overridden in multiple places at different times.","Q_Score":20,"Tags":"python,django,architecture,software-design","A_Id":55660187,"CreationDate":"2019-04-08T16:54:00.000","Title":"Django - how to visualize signals and save overrides?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have a Python Flask application with an HTML form which accepts a few inputs from the user and uses those in a python program, which returns the processed values back to the flask application's return statement.\nI wanted to capture the time taken for the whole processing and rendering of output data in the browser, but I am not sure how to do that. At present I have captured the time taken by the python program to process the input values, but it doesn't account for the complete time between the \"submit\" action and rendering the output data.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":77,"Q_Id":55587410,"Users Score":0,"Answer":"Use an ajax request to submit the form. Record the time on clicking the button and again after getting the response, then calculate the difference.","Q_Score":0,"Tags":"python,html,python-3.x,flask,flask-wtforms","A_Id":55587577,"CreationDate":"2019-04-09T07:36:00.000","Title":"Capturing time between HTML form submit action and printing response","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I found there are some libraries for extracting images from PDF or Word, like docx2txt and pdfimages. But how can I get the content around the images (like there may be a title below the image)? Or get a page number for each image?\nSome other tools like PyPDF2 and minecart can extract images page by page. 
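A rough sketch of the project-scanning idea from the Django answer above, using regex heuristics rather than a real parser; the project path and the exact patterns are illustrative assumptions:

```python
# Sketch: walk the tree and report overridden save() methods and
# pre_save/post_save signal hookups, file:line by file:line.
import os
import re

PATTERNS = [
    re.compile(r"def\s+save\s*\("),                     # overridden save methods
    re.compile(r"@receiver\(\s*(pre_save|post_save)"),  # decorator-style signals
    re.compile(r"(pre_save|post_save)\.connect"),       # connect-style signals
]

for root, _, files in os.walk("myproject"):             # path is an assumption
    for name in files:
        if name.endswith(".py"):
            path = os.path.join(root, name)
            with open(path, encoding="utf-8") as fh:
                for lineno, line in enumerate(fh, 1):
                    if any(p.search(line) for p in PATTERNS):
                        print(f"{path}:{lineno}: {line.strip()}")
```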
However, I could not run that code successfully.\nIs there a good way to get such information about the images (from the images extracted by docx2txt or pdfimages, or via another way to extract images with their info)?","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":401,"Q_Id":55589189,"Users Score":0,"Answer":"docx2python pulls the images into a folder and leaves -----image1.png---- markers in the extracted text. This might get you close to where you'd like to go.","Q_Score":0,"Tags":"python,shell,pdf,ms-word,image-extraction","A_Id":56978698,"CreationDate":"2019-04-09T09:15:00.000","Title":"How to extract images from PDF or Word, together with the text around images?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am recording audio in a web browser and sending it to a flask backend. From there, I want to transcribe the audio using Watson Speech to Text. I cannot figure out what data format I'm receiving the audio in and how to convert it to a format that works for watson.\nI believe watson expects a bytestring like b'\\x0c\\xff\\x0c\\xffd. The data I receive from the browser looks like [ -4 -27 -34 -9 1 -8 -1 2 10 -28], which I can't directly convert to bytes because of the negative values (using bytes() gives me that error).\nI'm really at a loss for what kind of conversion I need to be making here. Watson doesn't return any errors for any kind of data I throw at it; it just doesn't respond.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":276,"Q_Id":55599385,"Users Score":1,"Answer":"Those values should be fine, but you have to define how you want them stored before getting the bytes representation of them.\nYou'd simply want to convert those values to signed 2-byte\/16-bit integers, then get the bytes representation of those.","Q_Score":1,"Tags":"python,html,audio,types,data-conversion","A_Id":55601491,"CreationDate":"2019-04-09T18:46:00.000","Title":"What is this audio datatype and how do I convert it to wav\/l16?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":1,"REVIEW":0},{"Question":"I need SSIM as a loss function in my network, but my network has 2 outputs. I need to use SSIM for the first output and cross-entropy for the next. The loss function is a combination of them. However, I need to have a higher SSIM and lower cross-entropy, so I think a plain sum of them isn't right. Another problem is that I could not find an implementation of SSIM in keras.\nTensorflow has tf.image.ssim, but it accepts images, and I do not think I can use it in a loss function, right? Could you please tell me what I should do? 
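A small sketch of the conversion described in the Watson answer above: packing the browser's sample values as signed 16-bit PCM bytes; little-endian byte order is an assumption to check against what Watson expects for audio\/l16:

```python
# Sketch: list of signed samples -> 16-bit little-endian PCM bytestring.
import struct

samples = [-4, -27, -34, -9, 1, -8, -1, 2, 10, -28]
pcm = struct.pack("<%dh" % len(samples), *samples)   # "h" = signed 16-bit int
# numpy alternative: np.array(samples, dtype="<i2").tobytes()
```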
I am a beginner in Keras and deep learning, and I do not know how to make SSIM a custom loss function in Keras.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":3145,"Q_Id":55600106,"Users Score":0,"Answer":"Another choice would be\nssim_loss = 1 - tf.reduce_mean(tf.image.ssim(target, output, max_val=self.max_val))\nthen\ncombine_loss = mae (or mse) + ssim_loss\nIn this way, you are minimizing both of them.","Q_Score":3,"Tags":"python,tensorflow,keras,loss-function","A_Id":70400820,"CreationDate":"2019-04-09T19:37:00.000","Title":"how do I implement ssim for loss function in keras?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":1},{"Question":"I have created a 4-cluster k-means customer segmentation in scikit learn (Python). The idea is that every month, the business gets an overview of the shifts in size of our customers in each cluster.\nMy question is how to make these clusters 'durable'. If I rerun my script with updated data, the 'boundaries' of the clusters may slightly shift, but I want to keep the old clusters (even though they fit the data slightly worse).\nMy guess is that there should be a way to extract the parameters that decide which case goes to its respective cluster, but I haven't found the solution yet.\nI would appreciate any help.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":454,"Q_Id":55631944,"Users Score":1,"Answer":"Got the answer in a different topic: \nJust record the cluster means. Then when new data comes in, compare it to each mean and put it in the one with the closest mean.","Q_Score":1,"Tags":"python,scikit-learn,k-means","A_Id":55648613,"CreationDate":"2019-04-11T11:57:00.000","Title":"KMeans: Extracting the parameters\/rules that fill up the clusters","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am setting up a script in Odoo that counts the remaining days as each day passes.\nHow can I count down the days, day by day, until the end of the month?\nFor example: I have set two dates to find the days between them. I need a function which compares the number of days with each passing day. When the remaining days reach 0, it will call a cron job.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":96,"Q_Id":55633617,"Users Score":0,"Answer":"Write a scheduled action that runs python code daily. 
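A hedged sketch of the SSIM-based loss suggested in the answer above, assuming batched image tensors with pixel values scaled to [0, 1]:

```python
# Sketch: 1 - mean SSIM as a Keras-compatible loss; minimizing the loss
# maximizes structural similarity.
import tensorflow as tf

def ssim_loss(y_true, y_pred, max_val=1.0):
    return 1.0 - tf.reduce_mean(tf.image.ssim(y_true, y_pred, max_val=max_val))

# usage sketch: model.compile(optimizer="adam", loss=ssim_loss)
```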
The first thing that this code should do is to check the number of days you talk about, and if it is 0, it should trigger whatever action is needed.","Q_Score":0,"Tags":"python,odoo","A_Id":55644031,"CreationDate":"2019-04-11T13:25:00.000","Title":"how to count number of days via cron job in odoo 10?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am using getstream.io to create feeds. The user can follow feeds and add reactions such as likes and comments. If a user adds a comment on a feed and another user wants to reply to that comment, how can I achieve this and also retrieve all replies to the comment?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":583,"Q_Id":55644656,"Users Score":0,"Answer":"You can add the child reaction by using reaction_id.","Q_Score":1,"Tags":"python,getstream-io","A_Id":55813711,"CreationDate":"2019-04-12T04:46:00.000","Title":"How to add reply(child comments) to comments on feed in getstream.io python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have tried using the similarity function of spacy to get the best matching sentence in a document. However it fails for bullet points because it considers each bullet as a sentence and the bullets are incomplete sentences (e.g. sentence 1 \"password should be min 8 characters long\", sentence 2 in the form of a bullet \"8 characters\"). It does not know it is referring to password and so my similarity comes very low.","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":57,"Q_Id":55651267,"Users Score":0,"Answer":"Bullets are considered, but the thing is it doesn't understand what \"8 characters\" is referring to, so I thought of finding the heading of the paragraph and replacing the bullets with it.\nI found the headings using python docs, but it doesn't read bullets while reading the document. Is there a way I can read them using python docs?\nIs there any way I can find the headings of a paragraph in spacy?\nIs there a better approach for it?","Q_Score":0,"Tags":"python-3.x,spacy","A_Id":55677993,"CreationDate":"2019-04-12T12:06:00.000","Title":"how to find the similarity between two documents","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":1,"ERRORS":0,"REVIEW":0},{"Question":"I have tried using the similarity function of spacy to get the best matching sentence in a document. However it fails for bullet points because it considers each bullet as a sentence and the bullets are incomplete sentences (e.g. sentence 1 \"password should be min 8 characters long\", sentence 2 in the form of a bullet \"8 characters\"). 
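A minimal sketch of the daily countdown check described in the Odoo answer above; plain datetime arithmetic is shown, and trigger_job() is a hypothetical placeholder for whatever the scheduled action should start:

```python
# Sketch: compare today against the end date; fire when nothing remains.
from datetime import date

def check_remaining_days(end_date, trigger_job):
    remaining = (end_date - date.today()).days
    if remaining <= 0:
        trigger_job()          # hypothetical callback, e.g. the cron job
    return remaining
```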
It does not know it is referring to password and so my similarity comes very low.","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":57,"Q_Id":55651267,"Users Score":0,"Answer":"Sounds to me like you need to do more text processing before attempting to use similarity. If you want bullet points to be considered part of a sentence, you need to modify your spacy pipeline to understand to do so.","Q_Score":0,"Tags":"python-3.x,spacy","A_Id":55655171,"CreationDate":"2019-04-12T12:06:00.000","Title":"how to find the similarity between two documents","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":1,"ERRORS":0,"REVIEW":0},{"Question":"I am trying to write some code in python to retrieve some data from Infoblox. To do this i need to Import the Infoblox Module.\nCan anyone tell me how to do this ?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":72,"Q_Id":55653169,"Users Score":0,"Answer":"Before you can import infoblox you need to install it:\n\nopen a command prompt (press windows button, then type cmd)\nif you are working in a virtual environment access it with activate yourenvname (otherwise skip this step)\nexecute pip install infoblox to install infoblox, then you should be fine\nto test it from the command prompt, execute python, and then try executing import infoblox\n\nThe same process works for basically every package.","Q_Score":0,"Tags":"python","A_Id":55653452,"CreationDate":"2019-04-12T13:48:00.000","Title":"Trying to Import Infoblox Module in Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"So I'm following a book that's teaching me how to make a Learning Log using Python and the Django web framework. I was asked to go to a terminal and create a directory called \"learning_log\" and change the working directory to \"learning_log\" (did that with no problems). However, when I try to create the virtual environment, I get an error (seen at the bottom of this post). Why am I getting this error and how can I fix this to move forward in the book?\nI already tried installing a virtualenv with pip and pip3 (as the book prescribed). I was then instructed to enter the command: \nlearning_log$ virtualenv ll_env\nAnd I get: \nbash: virtualenv: command not found\nSince I'm using Python3.6, I tried: \nlearning_log$ virtualenv ll_env --python=python3\nAnd I still get:\nbash: virtualenv: command not found\nBrandons-MacBook-Pro:learning_log brandondusch$ python -m venv ll_env\nError: Command '['\/Users\/brandondusch\/learning_log\/ll_env\/bin\/python', '-Im', 'ensurepip', '--upgrade', '-\n-default-pip']' returned non-zero exit status 1.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":538,"Q_Id":55659897,"Users Score":0,"Answer":"I had the same error. I restarted my computer and tried it again, but the error was still there. 
Then I tried python3 -m venv ll_env and it moved forward.","Q_Score":0,"Tags":"macos,terminal,virtualenv,python-3.6,python-venv","A_Id":60223146,"CreationDate":"2019-04-12T21:52:00.000","Title":"Why do I keep getting this error when trying to create a virtual environment with Python 3 on MacOS?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":1,"REVIEW":0},{"Question":"So I'm following a book that's teaching me how to make a Learning Log using Python and the Django web framework. I was asked to go to a terminal and create a directory called \"learning_log\" and change the working directory to \"learning_log\" (did that with no problems). However, when I try to create the virtual environment, I get an error (seen at the bottom of this post). Why am I getting this error and how can I fix this to move forward in the book?\nI already tried installing a virtualenv with pip and pip3 (as the book prescribed). I was then instructed to enter the command: \nlearning_log$ virtualenv ll_env\nAnd I get: \nbash: virtualenv: command not found\nSince I'm using Python3.6, I tried: \nlearning_log$ virtualenv ll_env --python=python3\nAnd I still get:\nbash: virtualenv: command not found\nBrandons-MacBook-Pro:learning_log brandondusch$ python -m venv ll_env\nError: Command '['\/Users\/brandondusch\/learning_log\/ll_env\/bin\/python', '-Im', 'ensurepip', '--upgrade', '-\n-default-pip']' returned non-zero exit status 1.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":538,"Q_Id":55659897,"Users Score":0,"Answer":"For Ubuntu:\nThe simple is if virtualenv --version returns something like virtualenv: command not found and which virtualenv prints nothing on the console, then virtualenv is not installed on your system. Please try to install using pip3 install virtualenv or sudo apt-get install virtualenv but this one might install a bit older one.\nEDIT\nFor Mac:\nFor Mac, you need to install that using sudo pip install virtualenv after you have installed Python3 on your Mac.","Q_Score":0,"Tags":"macos,terminal,virtualenv,python-3.6,python-venv","A_Id":55660257,"CreationDate":"2019-04-12T21:52:00.000","Title":"Why do I keep getting this error when trying to create a virtual environment with Python 3 on MacOS?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":1,"REVIEW":0},{"Question":"In gensim I have a trained doc2vec model, if I have a document and either a single word or two-three words, what would be the best way to calculate the similarity of the words to the document? \nDo I just do the standard cosine similarity between them as if they were 2 documents? 
Or is there a better approach for comparing small strings to documents?\nOn first thought I could get the cosine similarity from each word in the 1-3 word string and every word in the document taking the averages, but I dont know how effective this would be.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":343,"Q_Id":55665180,"Users Score":3,"Answer":"There's a number of possible approaches, and what's best will likely depend on the kind\/quality of your training data and ultimate goals. \nWith any Doc2Vec model, you can infer a vector for a new text that contains known words \u2013 even a single-word text \u2013 via the infer_vector() method. However, like Doc2Vec in general, this tends to work better with documents of at least dozens, and preferably hundreds, of words. (Tiny 1-3 word documents seem especially likely to get somewhat peculiar\/extreme inferred-vectors, especially if the model\/training-data was underpowered to begin with.) \nBeware that unknown words are ignored by infer_vector(), so if you feed it a 3-word documents for which two words are unknown, it's really just inferring based on the one known word. And if you feed it only unknown words, it will return a random, mild initialization vector that's undergone no inference tuning. (All inference\/training always starts with such a random vector, and if there are no known words, you just get that back.)\nStill, this may be worth trying, and you can directly compare via cosine-similarity the inferred vectors from tiny and giant documents alike. \nMany Doc2Vec modes train both doc-vectors and compatible word-vectors. The default PV-DM mode (dm=1) does this, or PV-DBOW (dm=0) if you add the optional interleaved word-vector training (dbow_words=1). (If you use dm=0, dbow_words=0, you'll get fast training, and often quite-good doc-vectors, but the word-vectors won't have been trained at all - so you wouldn't want to look up such a model's word-vectors directly for any purposes.)\nWith such a Doc2Vec model that includes valid word-vectors, you could also analyze your short 1-3 word docs via their individual words' vectors. You might check each word individually against a full document's vector, or use the average of the short document's words against a full document's vector. \nAgain, which is best will likely depend on other particulars of your need. For example, if the short doc is a query, and you're listing multiple results, it may be the case that query result variety \u2013 via showing some hits that are really close to single words in the query, even when not close to the full query \u2013 is as valuable to users as documents close to the full query. \nAnother measure worth looking at is \"Word Mover's Distance\", which works just with the word-vectors for a text's words, as if they were \"piles of meaning\" for longer texts. It's a bit like the word-against-every-word approach you entertained \u2013 but working to match words with their nearest analogues in a comparison text. 
It can be quite expensive to calculate (especially on longer texts) \u2013 but can sometimes give impressive results in correlating alternate texts that use varied words to similar effect.","Q_Score":1,"Tags":"python,gensim,doc2vec","A_Id":55670156,"CreationDate":"2019-04-13T11:57:00.000","Title":"How do I calculate the similarity of a word or couple of words compared to a document using a doc2vec model?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":1},{"Question":"I have just now started learning python from Learn Python 3 The Hard Way by Zed Shaw. In exercise 3 of the book, there was a problem to get the value of 100 - 25 * 3 % 4. The solution to this problem is already mentioned in the archives, in which the order preference is given to * and %(from left to right).\nI made a problem on my own to get the value of 100 - 25 % 3 + 4. The answer in the output is 103.\nI just wrote: print (\"the value of\", 100 - 25 % 3 + 4), which gave the output value 103.\nIf the % is given the preference 25 % 3 will give 3\/4. Then how the answer is coming 103. Do I need to mention any float command or something?\nI would like to know how can I use these operations. Is there any pre-defined rule to solve these kinds of problems?","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":58,"Q_Id":55676780,"Users Score":0,"Answer":"The % operator is used to find the remainder of a quotient. So 25 % 3 = 1 not 3\/4.","Q_Score":0,"Tags":"python,python-3.x","A_Id":55676820,"CreationDate":"2019-04-14T15:12:00.000","Title":"operations order in Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have just now started learning python from Learn Python 3 The Hard Way by Zed Shaw. In exercise 3 of the book, there was a problem to get the value of 100 - 25 * 3 % 4. The solution to this problem is already mentioned in the archives, in which the order preference is given to * and %(from left to right).\nI made a problem on my own to get the value of 100 - 25 % 3 + 4. The answer in the output is 103.\nI just wrote: print (\"the value of\", 100 - 25 % 3 + 4), which gave the output value 103.\nIf the % is given the preference 25 % 3 will give 3\/4. Then how the answer is coming 103. Do I need to mention any float command or something?\nI would like to know how can I use these operations. 
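A hedged sketch of the infer_vector comparison discussed in the doc2vec answer above; the model path and texts are placeholder assumptions, and the model must already be trained:

```python
# Sketch: infer vectors for a tiny query and a full document, then take
# cosine similarity (mind the caveats above about 1-3 word inferences).
from gensim.models import Doc2Vec
import numpy as np

model = Doc2Vec.load("my_doc2vec.model")          # placeholder path
v_query = model.infer_vector("password length".split())
v_doc = model.infer_vector("the full document text goes here".split())

cosine = np.dot(v_query, v_doc) / (
    np.linalg.norm(v_query) * np.linalg.norm(v_doc))
```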
Is there any pre-defined rule to solve these kinds of problems?","AnswerCount":3,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":58,"Q_Id":55676780,"Users Score":1,"Answer":"Actually, the % operator gives you the REMAINDER of the operation.\nTherefore, 25 % 3 returns 1, because 25 \/ 3 = 8 and the remainder of this operation is 1.\nThis way, your operation 100 - 25 % 3 + 4 is the same as 100 - 1 + 4 = 103","Q_Score":0,"Tags":"python,python-3.x","A_Id":55676836,"CreationDate":"2019-04-14T15:12:00.000","Title":"operations order in Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I'd like to compare ORB, SIFT, BRISK, AKAZE, etc. to find which works best for my specific image set. I'm interested in the final alignment of images.\nIs there a standard way to do it?\nI'm considering this solution: take each algorithm, extract the features, compute the homography and transform the image.\nNow I need to check which transformed image is closer to the target template.\nMaybe I can repeat the process with the target template and the transformed image and look for the homography matrix closest to the identity, but I'm not sure how to compute this closeness exactly. And I'm not sure which algorithm I should use for this check, I suppose a fixed one.\nOr I could do some pixel level comparison between the images using a perceptual difference hash (dHash). But I suspect the resulting Hamming distance may not be very good for images that will be nearly identical.\nI could blur them and do a simple subtraction, but that sounds quite weak.\nThanks for any suggestions.\nEDIT: I have thousands of images to test. These are real world pictures. Images are of documents of different kinds, some with a lot of graphics, others mostly geometrical. I have about 30 different templates. I suspect different templates work best with different algorithms (I know in advance the template so I could pick the best one).\nRight now I use cv2.matchTemplate to find some reference patches in the transformed images and I compare their locations to the reference ones. It works but I'd like to improve over this.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":89,"Q_Id":55679644,"Users Score":0,"Answer":"From your question, it seems like the task is not to compare the feature extractors themselves, but rather to find which type of feature extractor leads to the best alignment.\nFor this, you need two things:\n\na way to perform the alignment using the features from different extractors\na way to check the accuracy of the alignment\n\nThe algorithm you suggested is a good approach for doing the alignment. To check its accuracy, you need to know what a good alignment is.\nYou may start with an alignment you already know. And the easiest way to know the alignment between two images is if you made the inverse operation yourself. For example, starting with one image, you rotate it some amount, you translate\/crop\/scale or combine all these operations. 
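A quick worked check of the precedence rules from the modulo answers above, runnable as-is:

```python
# % binds like * and /; + and - come after; same-level operators go left to right.
print(25 % 3)            # 1   (25 = 8*3 + 1)
print(100 - 25 % 3 + 4)  # 103 (100 - 1 + 4)
print(100 - 25 * 3 % 4)  # 97  (25*3 = 75; 75 % 4 = 3; 100 - 3)
```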
Knowing how you obtained the image, you can obtain your ideal alignment (the one that undoes your operations).\nThen, having the ideal alignment and the alignment generated by your algorithm, you can use one metric to evaluate its accuracy, depending on your definition of \"good alignment\".","Q_Score":0,"Tags":"python,opencv,computer-vision","A_Id":55680593,"CreationDate":"2019-04-14T20:15:00.000","Title":"Comparing feature extractors (or comparing aligned images)","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have django project in which I can display records from raspberry pi device. I had mysql database and i have send records from raspberry there. I can display it via my api, but I want to work on this records.I want to change this to django database but I don't know how I can get access to django database which is on VPS server from raspberry pi device.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":66,"Q_Id":55692096,"Users Score":0,"Answer":"ALERT: THIS CAN LEAD TO SECURITY ISSUES\nA Django database is no different from any other database. In this case a MySQL.\nThe VPS server where the MySQL is must have a public IP, the MySQL must be listening on that IP (if the VPS has a public IP but MySQL is not listening\/bind on that IP, it won't work) and the port of the MySQL open (default is 3306), then you can connect to that database from any program with the required configurations params (host, port, user, password,...).\nI'm not a sysadmin expert, but having a MySQL on a public IP is a security hole. So the best approach IMO is to expose the operations you want to do via API with Django.","Q_Score":0,"Tags":"python,mysql,django","A_Id":55693009,"CreationDate":"2019-04-15T15:05:00.000","Title":"How to get access to django database from other python program?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"What happened after entered \"flask run\" on a terminal under the project directory?\nHow the python interpreter gets the file of flask.__main__.py and starts running project's code?\nI know how Flask locates app. What I want to figure out is how command line instruction \"flask run\" get the flask\/__main__.py bootup","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":117,"Q_Id":55707275,"Users Score":2,"Answer":"flask is a Python script. Since you stated you are not a beginner, you should simply open the file (\/usr\/bin\/flask) in your favorite text editor and start from there. 
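A hedged sketch of reading the Django-managed MySQL database directly, as discussed in the answer above; host, credentials, and the table name are placeholders, and the answer's own advice is to prefer exposing an API over opening MySQL publicly:

```python
# Sketch: plain MySQL client access from the Raspberry Pi with pymysql.
import pymysql

conn = pymysql.connect(host="vps.example.com", port=3306,
                       user="app", password="secret", database="django_db")
try:
    with conn.cursor() as cur:
        cur.execute("SELECT * FROM myapp_record ORDER BY id DESC LIMIT 10")
        rows = cur.fetchall()
finally:
    conn.close()
```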
There is no magic under the hood.","Q_Score":0,"Tags":"python,flask","A_Id":55708668,"CreationDate":"2019-04-16T11:37:00.000","Title":"What happened after entered \" flask run\" on a terminal under the project directory?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I can't understand the difference between dag_concurrency and parallelism. documentation and some of the related posts here somehow contradicts my findings.\nThe understanding I had before was that the parallelism parameter allows you to set the MAX number of global(across all DAGs) TaskRuns possible in airflow and dag_concurrency to mean the MAX number of TaskRuns possible for a single Dag.\nSo I set the parallelism to 8 and dag_concurrency to 4 and ran a single Dag. And I found out that it was running 8 TIs at a time but I was expecting it to run 4 at a time. \n\nHow is that possible? \nAlso, if it helps, I have set the pool size to 10 or so for these tasks. But that shouldn't have mattered as \"config\" parameters are given higher priorities than the pool's, Right?","AnswerCount":2,"Available Count":1,"Score":0.4621171573,"is_accepted":false,"ViewCount":4743,"Q_Id":55722733,"Users Score":5,"Answer":"The other answer is only partially correct:\ndag_concurrency does not explicitly control tasks per worker. dag_concurrency is the number of tasks running simultaneously per dag_run. So if your DAG has a place where 10 tasks could be running simultaneously but you want to limit the traffic to the workers you would set dag_concurrency lower. \nThe queues and pools setting also have an effect on the number of tasks per worker. \nThese setting are very important as you start to build large libraries of simultaneously running DAGs. \nparallelism is the maximum number of tasks across all the workers and DAGs.","Q_Score":10,"Tags":"python,airflow","A_Id":55786029,"CreationDate":"2019-04-17T08:02:00.000","Title":"what's the difference between airflow's 'parallelism' and 'dag_concurrency'","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am trying to import win32api in python 2.7.9. i did the \"pip install pypiwin32\" and made sure all the files were intalled correctly (i have the win32api.pyd under ${PYTHON_HOME}\\Lib\\site-packages\\win32). i also tried coping the files from C:\\Python27\\Lib\\site-packages\\pywin32_system32 to C:\\Python27\\Lib\\site-packages\\win32. I also tried restarting my pc after each of these steps but nothing seems to work! 
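A short sketch of where the two Airflow settings discussed above apply, using the Airflow 1.x-era DAG arguments:

```python
# Sketch: per-DAG limits live on the DAG object; the global cap is the
# [core] parallelism option in airflow.cfg.
from datetime import datetime
from airflow import DAG

dag = DAG(
    "example_dag",
    start_date=datetime(2019, 1, 1),
    concurrency=4,       # max task instances running at once for this DAG
    max_active_runs=1,   # max simultaneous runs of this DAG
)
```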
I still get the error 'No module named 'win32api''","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":129,"Q_Id":55731387,"Users Score":0,"Answer":"Well, it turns out the answer is upgrading my Python to 3.6.\nPython 2.7 seems too old to work with outside imports (I'm just guessing here, because it's not the first time I'm having an import problem).\nHope it helps :)","Q_Score":0,"Tags":"python-2.7,pywin32","A_Id":55775086,"CreationDate":"2019-04-17T15:40:00.000","Title":"how do i fix \"No module named 'win32api'\" on python2.7","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":1,"REVIEW":0},{"Question":"I tried to paste a few lines of code from online sources with symbols like \">>>\". My question is how to paste without these symbols?\n(Line by line works, but it will be very annoying when pasting a big project.)\nCheers","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":38,"Q_Id":55731644,"Users Score":0,"Answer":"Go to Edit > Find and Replace, then find >>> and replace it with nothing. Enjoy :)","Q_Score":0,"Tags":"python,jupyter-notebook,copy-paste","A_Id":55901372,"CreationDate":"2019-04-17T15:55:00.000","Title":"paste code to Jupyter notebook without symbols","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"So I notice that we say in python that sets have no order or arrangement, although of course you can sort the list generated from a set.\nSo I was wondering how the iteration over a set is defined in python. Does it just follow the sorted list ordering, or is there some other footgun that might crop up at some point?\nThanks.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":281,"Q_Id":55734364,"Users Score":1,"Answer":"A temporary order is used to iterate over the set, but you can't reliably predict it (practically speaking, as it depends on the insertion and deletion history of the set). If you need a specific order, use a list.","Q_Score":3,"Tags":"python,set","A_Id":55734436,"CreationDate":"2019-04-17T19:02:00.000","Title":"How can python iterate over a set if no order is defined?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I'm trying to write a function that, when given two cameras' rotation and translation matrices, their focal points, and the coordinates of a point in each camera image, will be able to triangulate the point into 3D space. 
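A tiny illustration of the set-ordering point made in the answer above:

```python
# Sketch: set iteration order is arbitrary; sort when order matters.
s = {"b", "a", "c"}
print(list(s))     # some arbitrary order, not to be relied on
print(sorted(s))   # ['a', 'b', 'c'], deterministic
```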
Basically, I am given all the extrinsic\/intrinsic values needed.\nI'm familiar with the general idea: somehow create two rays and find the closest point that satisfies the least squares problem; however, I don't know exactly how to translate the given information into a series of equations for the coordinates of the point in 3D.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1933,"Q_Id":55740284,"Users Score":0,"Answer":"Assume you have two cameras -- camera 1 and camera 2. \nFor each camera j = 1, 2 you are given:\n\nThe distance hj between its center Oj, (is \"focal point\" the right term? Basically the point Oj from which the camera is looking at its screen) and the camera's screen. The camera's coordinate system is centered at Oj, the Oj--->x and Oj--->y axes are parallel to the screen, while the Oj--->z axis is perpendicular to the screen. \nThe 3 x 3 rotation matrix Uj and the 3 x 1 translation vector Tj which transforms the Cartesian 3D coordinates with respect to the system of camera j (see point 1) to the world-coordinates, i.e. the coordinates with respect to a third coordinate system from which all points in the 3D world are described.\nOn the screen of camera j, which is the plane parallel to the plane Oj-x-y and at a distance hj from the origin Oj, you have the 2D coordinates (let's say the x,y coordinates only) of point pj, where the two points p1 and p2 are in fact the projected images of the same point P, somewhere in 3D, onto the screens of camera 1 and 2 respectively. The projection is obtained by drawing the 3D line between point Oj and point P and defining point pj as the unique intersection point of this line with the screen of camera j. The equation of the screen in camera j's 3D coordinate system is z = hj, so the coordinates of point pj with respect to the 3D coordinate system of camera j look like pj = (xj, yj, hj) and so the 2D screen coordinates are simply pj = (xj, yj). \n\nInput: You are given the 2D points p1 = (x1, y1), p2 = (x2, y2), the two cameras' focal distances h1, h2, two 3 x 3 rotation matrices U1 and U2, and two translation 3 x 1 vector columns T1 and T2. \nOutput: The coordinates P = (x0, y0, z0) of point P in the world coordinate system. \nOne somewhat simple way to do this, avoiding homogeneous coordinates and projection matrices (which is fine too and more or less equivalent), is the following algorithm:\n\nForm Q1 = [x1; y1; h1] and Q2 = [x2; y2; h2], where they are interpreted as 3 x 1 vector columns;\nTransform P1 = U1*Q1 + T1 and P2 = U2*Q2 + T2, where * is matrix multiplication, here it is a 3 x 3 matrix multiplied by a 3 x 1 column, giving a 3 x 1 column;\nForm the lines X = T1 + t1*(P1 - T1) and X = T2 + t2*(P2 - T2);\nThe two lines from the preceding step 3 either intersect at a common point, which is the point P, or they are skew lines, i.e. they do not intersect but are not parallel (not coplanar). \nIf the lines are skew lines, find the unique point X1 on the first line and the unique point X2 on the second line such that the vector X2 - X1 is perpendicular to both lines, i.e. X2 - X1 is perpendicular to both vectors P1 - T1 and P2 - T2. These two points X1 and X2 are the closest points on the two lines. Then point P = (X1 + X2)\/2 can be taken as the midpoint of the segment X1 X2. 
\n\nIn general, the two lines should pass very close to each other, so the two points X1 and X2 should be very close to each other.","Q_Score":2,"Tags":"python,numpy,triangulation,vision","A_Id":56049754,"CreationDate":"2019-04-18T06:31:00.000","Title":"How to triangulate a point in 3D space, given coordinate points in 2 images and extrinsic values of the camera","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am running an airflow DAG and wanted to understand how the execution date gets set. This is the code I am running: \n{{ execution_date.replace(day=1).strftime(\"%Y-%m-%d\") }} \nThis always returns the first day of the month. This is the functionality that I want, but I just want to find a way to understand what is happening.","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":495,"Q_Id":55748657,"Users Score":1,"Answer":"The reason this always returns the first of the month is that you are using a Replace to ensure the day is forced to be the 1st of the month. Simply remove \".replace(day=1)\".","Q_Score":2,"Tags":"python,airflow","A_Id":55748730,"CreationDate":"2019-04-18T14:54:00.000","Title":"Understanding execution_date in Airflow","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am running an airflow DAG and wanted to understand how the execution date gets set. This is the code I am running: \n{{ execution_date.replace(day=1).strftime(\"%Y-%m-%d\") }} \nThis always returns the first day of the month. This is the functionality that I want, but I just want to find a way to understand what is happening.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":495,"Q_Id":55748657,"Users Score":0,"Answer":"execution_date returns a datetime object. You are using the replace method of that object to replace the \u201cday\u201d with the first. Then you output that to a string with the strftime method.","Q_Score":2,"Tags":"python,airflow","A_Id":55750885,"CreationDate":"2019-04-18T14:54:00.000","Title":"Understanding execution_date in Airflow","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I created a Python script (a .py file) that works.\nI would like to convert this file to .exe, to use it on a computer without Python installed.\nHow can I do that?\nI have Python from Anaconda3.\nWhat can I do?\nThank you!\nI followed some instructions found here on Stack Overflow.\nI modified the Path in the 'Environment variables' in the Windows settings, editing it to the Anaconda folder.\nI managed to install pip in the conda prompt (I guess).\nStill, nothing is working. 
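A NumPy sketch of steps 1 through 5 of the triangulation answer above (the midpoint of the common perpendicular between the two back-projected rays); all inputs are assumed given exactly as described there:

```python
# Sketch: triangulate P from pixel points p1, p2, focal distances h1, h2,
# rotations U1, U2 and translations T1, T2 (camera-to-world).
import numpy as np

def triangulate(p1, p2, h1, h2, U1, T1, U2, T2):
    Q1 = np.array([p1[0], p1[1], h1], dtype=float)   # step 1
    Q2 = np.array([p2[0], p2[1], h2], dtype=float)
    P1 = U1 @ Q1 + T1                                # step 2
    P2 = U2 @ Q2 + T2
    d1, d2 = P1 - T1, P2 - T2                        # step 3: ray directions
    # Steps 4-5: solve for t1, t2 so X2 - X1 is perpendicular to both rays.
    A = np.array([[d1 @ d1, -(d1 @ d2)],
                  [d1 @ d2, -(d2 @ d2)]])
    b = np.array([(T2 - T1) @ d1, (T2 - T1) @ d2])
    t1, t2 = np.linalg.solve(A, b)                   # fails if rays are parallel
    X1, X2 = T1 + t1 * d1, T2 + t2 * d2
    return (X1 + X2) / 2
```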
I don't know how to proceed and in general how to do things properly.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":2453,"Q_Id":55760356,"Users Score":2,"Answer":"I personally use PyInstaller; it's available from pip.\nBut it will not really compile, it will just bundle.\nThe difference is that compiling means translating to real machine code, while bundling means creating a big exe file with all your libs and your python interpreter.\nEven if PyInstaller creates a bigger file and is slower than Cython (at execution), I prefer it because it works all the time without extra work (except launching it).","Q_Score":0,"Tags":"python-3.x,windows,anaconda,executable","A_Id":55760631,"CreationDate":"2019-04-19T10:22:00.000","Title":"How to convert file .py to .exe, having Python from Anaconda Navigator? (in which command prompt should I write installation codes?)","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have a Nvidia Jetson tx2 with the orbitty shield on it. \nI got it from a friend who worked on it last year. It came with ubuntu 16.04. I updated everything on it and I installed the latest python3.7 and pip.\nI tried checking the version of opencv to see what I have, but when I do import cv2 it gives me:\nTraceback (most recent call last):\n File \"\", line 1, in \nModuleNotFoundError: No module named 'cv2'\nSomehow, besides Python 3.7, I have Python 2.7 and Python 3.5 installed. If I try to import cv2 on Python 2.7 and 3.5 it works, but in 3.7 it doesn't. \nCan you tell me how I can install the latest version of OpenCV for Python 3.7?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1498,"Q_Id":55764829,"Users Score":0,"Answer":"Does python-3.7 -m pip install opencv-python work? You may have to change the python-3.7 to whatever path\/alias you use to open your own python 3.7.","Q_Score":2,"Tags":"python,opencv,ubuntu","A_Id":55764936,"CreationDate":"2019-04-19T16:25:00.000","Title":"How can i install opencv in python3.7 on ubuntu?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am using CatBoost for a ranking task. I am using QueryRMSE as my loss function. I notice for some features, the feature importance values are negative and I don't know how to interpret them.\nIt says in the documentation, the i-th feature importance is calculated as the difference loss(model with i-th feature excluded) - loss(model).\nSo a negative feature importance value means that feature makes my loss go up?\nWhat does that suggest then?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":2295,"Q_Id":55777986,"Users Score":0,"Answer":"A negative feature importance value means that the feature makes the loss go up. This means that your model is not getting good use of this feature. 
This might mean that your model is underfit (not enough iteration and it has not used the feature enough) or that the feature is not good and you can try removing it to improve final quality.","Q_Score":0,"Tags":"python,machine-learning,catboost","A_Id":56173568,"CreationDate":"2019-04-20T21:40:00.000","Title":"Negative Feature Importance Value in CatBoost LossFunctionChange","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"Using python and OpenCV, is it possible to display the same image on multiple users?\nI am using cv2.imshow but it only displays the image for the user that runs the code.\nThanks","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":46,"Q_Id":55804120,"Users Score":0,"Answer":"I was able to display the images on another user\/host by setting the DISPLAY environment variable of the X server to match the desired user's DISPLAY.","Q_Score":0,"Tags":"python,cv2,multi-user","A_Id":55821709,"CreationDate":"2019-04-23T04:00:00.000","Title":"cv2 - multi-user image display","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I'm doing a homework and I want to know how can I move turtle to a random location a small step each time. Like can I use turtle.goto() in a slow motion?\nSomeone said I should use turtle.setheading() and turtle.forward() but I'm confused on how to use setheading() when the destination is random.\nI'm hoping the turtle could move half radius (which is 3.5) each time I update the program to that random spot.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":120,"Q_Id":55805378,"Users Score":0,"Answer":"Do you mean that you want to move a small step, stop, and repeat? If so, you can \u2018import time\u2019 and add \u2018time.sleep(0.1)\u2019 after each \u2018forward\u2019","Q_Score":1,"Tags":"python,python-3.x,turtle-graphics","A_Id":55810553,"CreationDate":"2019-04-23T06:19:00.000","Title":"Move turtle slightly closer to random coordinate on each update","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have a bytearray containing some bytes, it currently look like this (Converted to ASCII):\n\n['0b1100001', '0b1100010', '0b1100011', '0b10000000']\n\nI need to add a number of 0 bits to this, is that possible or would I have to add full bytes? If so, how do I do that?","AnswerCount":3,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":805,"Q_Id":55814420,"Users Score":1,"Answer":"Where do you need the bits added to? 
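A minimal sketch of the stepwise movement discussed in the turtle answer above: aim at a random target and advance half a radius (3.5) per update until close enough:

```python
# Sketch: setheading() via towards(), then small forward() steps.
import random
import turtle

t = turtle.Turtle()
target = (random.uniform(-200, 200), random.uniform(-200, 200))

while t.distance(target) > 3.5:
    t.setheading(t.towards(target))  # point at the random spot
    t.forward(3.5)                   # half-radius step per update
```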
Each element of your list or an additional element that contains all 0's?\nThe former:\nmyList[0] = myList[0] * 2 # arithmetic shift left: appends a 0 bit to this element\nThe latter:\nmyList.append(0b000000)","Q_Score":1,"Tags":"python,arrays,append","A_Id":55814499,"CreationDate":"2019-04-23T15:17:00.000","Title":"Python append single bit to bytearray","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I want to remove a folder, but I can\u2019t get into __pycache__ to delete the .pyc and .pyo files. I have done it before, but I don\u2019t know how I did it.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":96,"Q_Id":55818435,"Users Score":0,"Answer":"If you want to remove your python file artifacts, such as the .pyc and .pyo cache files, maybe you could try the following:\n\nMove into your project's root directory\ncd \nRemove python file artifacts\nfind . -name '*.pyc' -exec rm -f {} +\nfind . -name '*.pyo' -exec rm -f {} +\n\nHopefully that helps!","Q_Score":0,"Tags":"python,bash","A_Id":55825883,"CreationDate":"2019-04-23T19:50:00.000","Title":"How to access _pycache_ directory","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I want to extract numerical entities like temperature and duration mentioned in unstructured formats of texts using neural models like CRF using python. I would like to know how to proceed for numerical extraction, as most of the examples available on the internet are for the extraction of specific words or strings. \nInput: 'For 5 minutes there, I felt like baking in an oven at 350 degrees F'\nOutput: temperature: 350\n duration: 5 minutes","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":110,"Q_Id":55862614,"Users Score":0,"Answer":"So far my research shows that you can treat numbers as words.\nThis raises an issue: learning 5 will be OK, but 19684 will be too rare to be learned.\nOne proposal is to convert numbers into words, e.g. \"nineteen thousand six hundred eighty four\", embedding each word. The inconvenience is that you are now learning a (minimum) 6-dimensional vector (one dimension per word).\nBased on your usage, you can also embed 0 to 3000 with distinct ids, and say 3001 to 10000 will map to id 3001 in your dictionary, and then add one id in your dictionary for each 10x.","Q_Score":0,"Tags":"python-3.x,nlp,named-entity-recognition","A_Id":58901069,"CreationDate":"2019-04-26T07:18:00.000","Title":"numerical entity extraction from unstructured texts using python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have a Python WebJob living in Azure and I'm trying to pass parameters to it. \nI've found documentation saying that I should be able to post the URL and add:?arguments={'arg1', 'arg2'} after it. 
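A hedged sketch of bit-level appending for the bytearray question above, since a bytearray itself only grows in whole bytes; the string-of-bits approach is just one illustrative option:

```python
# Sketch: expand to a bit string, append the extra 0 bits, pad to a byte
# boundary, then rebuild the bytearray.
data = bytearray(b"abc\x80")

bits = []
for byte in data:
    bits.extend(format(byte, "08b"))   # eight '0'/'1' chars per byte
bits.extend("000")                     # the appended 0 bits
while len(bits) % 8:                   # pad the tail to a whole byte
    bits.append("0")

out = bytearray(int("".join(bits[i:i + 8]), 2) for i in range(0, len(bits), 8))
```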
\nHowever, when I do that and then try to print(sys.argv) in my code, it's only printing the name of the Python file and none of the arguments I pass to it. \nHow do I get the arguments to pass to my Python code? I am also using a run.cmd in my Azure directory to trigger my Python code if that makes a difference. \nUPDATE: So I tested it in another script without the run.cmd and that certainly is the problem. If I just do ?arguments=22 66 it works. So how do I pass parameters when I'm using a run.cmd file?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":184,"Q_Id":55870100,"Users Score":2,"Answer":"I figured it out: in the run.cmd file, you need to put \"%*\" after your script name and it will detect any arguments you passed in the URL.","Q_Score":1,"Tags":"python,azure,azure-webjobs","A_Id":55873033,"CreationDate":"2019-04-26T14:50:00.000","Title":"Python Azure webjob passing parameters","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am trying to turn my PC off and restart it over LAN. \nWhen I get one of the commands (turnoff or restart), I execute one of the following:\nsubprocess.call([\"shutdown\", \"-f\", \"-s\", \"-y\"]) # Turn off\nsubprocess.call([\"shutdown\", \"-f\", \"-r\", \"-t\", \"-c\", \"-y\"]) # Restart\nI'd like to inform the other side if the process was successfully initiated, and if the PC is in the desired state.\nI know that it is possible to implement a function which will check if the PC is alive (which is a pretty good idea) several seconds after executing the commands, but how can one know how many seconds are needed? And what if the PC shuts down a moment after sending a message stating that it is still alive?\nI'm curious to know: what really happens after those commands are executed? Will the script keep running until the task manager kills it? Will it stop running right after the command?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":35,"Q_Id":55882903,"Users Score":1,"Answer":"Programs like shutdown merely send a message to init (or whatever modern replacement) and exit immediately; it\u2019s up to it what happens next. Typical Unix behavior is to first shut down things like SSH servers (which probably doesn\u2019t kill your connection to the machine), then send SIGTERM to all processes, wait a few seconds (5 is typical) for signal handlers to run, and then send SIGKILL to any survivors. 
Finally, filesystems are unmounted and the hardware halt or reboot happens.\nWhile there\u2019s no guarantee that the first phase takes long enough for you to report successful shutdown execution, it generally will; if it\u2019s a concern, you can catch the SIGTERM to buy yourself those few extra seconds to get the message out.","Q_Score":0,"Tags":"python,multithreading,subprocess,task,shutdown","A_Id":55883196,"CreationDate":"2019-04-27T17:03:00.000","Title":"What happens after shutting down the PC via subprocess?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":1},{"Question":"I am trying to generate a vCard QR code with the pyqrcode library but I cannot figure out the way to do it.\nI have read their documentation 5 times and it doesn't say anything about vCard, only about URLs, and on the internet I could only find examples about wifi. Does anybody know how I can do it?\nI want to make a vCard QR code and afterwards display it on a django web page.","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":3595,"Q_Id":55884546,"Users Score":0,"Answer":"Let's say we have two libraries:\n\npyqrcode : QR code writer \nvobject : vCard serializer \/ deserializer\n\nFlow:\na. Generate a QR img from \"some\" web site:\nthe web site sends JSON info => get the info from the JSON and serialize it using vobject to obtain a vCard string => pyqrcode.create(vcard string) \nb. Show human readable info from a QR img:\nread a QR img (created in a.) => deserialize using vobject to obtain JSON => show the info by parsing the JSON in the web site.\nOR... after deserializing with vobject you can write a .vcard file","Q_Score":3,"Tags":"python,django,vcf-vcard","A_Id":55886326,"CreationDate":"2019-04-27T20:18:00.000","Title":"How can I create a vCard qrcode with pyqrcode?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"So, I am doing an ETL process in which I use Apache NiFi as an ETL tool along with a postgresql database from google cloud sql to read csv files from GCS. As a part of the process, I need to write a query to transform data read from the csv file and insert it into the table in the cloud sql database. So, based on NiFi, I need to write a python script to execute sql queries automatically on a daily basis. But the question here is: how can I write a python script to connect to the cloud sql database? What config should be done? I have read something about cloud sql proxy but can I just use a cloud sql instance's internal ip address, put it in some config file and create some dbconnector out of it? \nThank you\nEdit: I can connect to the cloud sql database from my vm using psql -h [CLOUD_SQL_PRIVATE_IP_ADDR] -U postgres but I need to run a python script for the ETL process and there's a part of the process that needs to execute sql. What I am trying to ask is: how can I write a python file that executes the sql? \ne.g. In python, query = 'select * from table ....' and then run\npostgres.run_sql(query) which will execute the query. 
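Illustrating the "catch the SIGTERM" suggestion from the shutdown answer above (Q_Id 55882903), a minimal, Unix-only sketch; the printed message stands in for a real network notification:

```python
import signal
import sys

def on_sigterm(signum, frame):
    # Last chance to tell the other side that the shutdown really started.
    print("shutdown in progress")  # stand-in for a real network notification
    sys.exit(0)

signal.signal(signal.SIGTERM, on_sigterm)
signal.pause()  # block until a signal arrives (Unix only)
```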
So how can I create this kind of executor?","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":526,"Q_Id":55891829,"Users Score":0,"Answer":"I don't understand why you need to write any code in Python. I've done a similar process where I used GetFile (locally) to read a CSV file, parse and transform it, and then used ExecuteSQLRecord to insert the rows into a SQL server (running on a cloud provider). The DBCPConnectionPool needs to reference your cloud provider as per their connection instructions. This means the URL likely references something.google.com and you may need to open firewall rules using your cloud provider administration.","Q_Score":0,"Tags":"python,google-cloud-storage,etl,google-cloud-sql,apache-nifi","A_Id":55894286,"CreationDate":"2019-04-28T15:35:00.000","Title":"Cloud SQL\/NiFi: Connect to cloud sql database with python and NiFi","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"These days i'm trying to teach myself machine learning and i'm going through some issues with my dataset.\nSome of my rows (i work with csv files that i create with some js script, i feel more confident doing that in js) are empty, which is normal as i'm trying to build a guessing model, but the issue is that it results in nan values in my training set.\nMy NN was not training so i added a piece of code to remove them from my set but now i have some issues where my model can't work with inputs of different sizes.\nSo my question is: how do i handle missing data? (i basically have 2 rows and can only have the value from 1 and can't merge them as it will not give good results)\ni can remove it from my set, which would reduce the accuracy of my model in the end.\nPS: if needed i'll post some code when i come back home.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":3144,"Q_Id":55903882,"Users Score":6,"Answer":"You need to have the same input size during training and inference. If you have a few missing values (a few %), you can always choose to replace the missing values by a 0 or by the average of the column. If you have more missing values (more than 50%) you are probably better off ignoring the column completely. Note that this is theoretical; the best way to make it work is to try different strategies on your data.","Q_Score":3,"Tags":"python,machine-learning,keras,neural-network","A_Id":55904338,"CreationDate":"2019-04-29T12:55:00.000","Title":"Keras \/ NN - Handling NaN, missing input","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am trying to train a doc2vec model using training data, then finding the similarity of every document in the test data for a specific document in the test data using the trained doc2vec model. However, I am unable to determine how to do this.\nI am currently using model.docvecs.most_similar(...). However, this function only finds the similarity of every document in the training data for a specific document in the test data. 
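A short pandas sketch of the imputation strategies from the Keras/NaN answer above (Q_Id 55903882), with a hypothetical toy frame:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"a": [1.0, np.nan, 3.0], "b": [np.nan, np.nan, 1.0]})

# Few missing values: impute with 0 or the column mean so the input size stays fixed.
df["a"] = df["a"].fillna(df["a"].mean())

# Mostly-missing column (>50% NaN): dropping it entirely may work better.
df = df.drop(columns=["b"])
```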
\nI have tried manually comparing the inferred vector of a specific document in the test data with the inferred vectors of every other document in the test data using model.docvecs.n_similarity(inferred_vector.tolist(), testvectors[i].tolist()) but this returns KeyError: \"tag '-0.3502606451511383' not seen in training corpus\/invalid\" as there are vectors not in the dictionary.","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":1786,"Q_Id":55924378,"Users Score":1,"Answer":"The act of training-up a Doc2Vec model leaves it with a record of the doc-vectors learned from the training data, and yes, most_similar() just looks among those vectors. \nGenerally, doing any operations on new documents that weren't part of training will require the use of infer_vector(). Note that such inference:\n\nignores any unknown words in the new document\nmay benefit from parameter tuning, especially for short documents\nis currently done just one document at time in a single thread \u2013 so, acquiring inferred-vectors for a large batch of N-thousand docs can actually be slower than training a fresh model on the same N-thousand docs\nisn't necessarily deterministic, unless you take extra steps, because the underlying algorithms use random initialization and randomized selection processes during training\/inference\njust gives you the vector, without loading it into any convenient storage-object for performing further most_similar()-like comparisons\n\nOn the other hand, such inference from a \"frozen\" model can be parallelized across processes or machines. \nThe n_similarity() method you mention isn't really appropriate for your needs: it's expecting lists of lookup-keys ('tags') for existing doc-vectors, not raw vectors like you're supplying.\nThe similarity_unseen_docs() method you mention in your answer is somewhat appropriate, but just takes a pair of docs, re-calculating their vectors each time \u2013 somewhat wasteful if a single new document's doc-vector needs to be compared against many other new documents' doc-vectors. \nYou may just want to train an all-new model, with both your \"training documents\" and your \"test documents\". Then all the \"test documents\" get their doc-vectors calculated, and stored inside the model, as part of the bulk training. This is an appropriate choice for many possible applications, and indeed could learn interesting relationships based on words that only appear in the \"test docs\" in a totally unsupervised way. 
And nothing in your question so far gives a reason why it couldn't be considered here.\nAlternatively, you could infer_vector() all the new \"test docs\", and put them into a structure like the various KeyedVectors utility classes in gensim - remembering all the vectors in one array, remembering the mapping from doc-key to vector-index, and providing an efficient bulk most_similar() over the set of vectors.","Q_Score":0,"Tags":"python,machine-learning,gensim,doc2vec","A_Id":55927699,"CreationDate":"2019-04-30T15:36:00.000","Title":"Doc2Vec - Finding document similarity in test data","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":1,"ERRORS":0,"REVIEW":0},{"Question":"I am trying to train a doc2vec model using training data, then finding the similarity of every document in the test data for a specific document in the test data using the trained doc2vec model. However, I am unable to determine how to do this.\nI am currently using model.docvecs.most_similar(...). However, this function only finds the similarity of every document in the training data for a specific document in the test data. \nI have tried manually comparing the inferred vector of a specific document in the test data with the inferred vectors of every other document in the test data using model.docvecs.n_similarity(inferred_vector.tolist(), testvectors[i].tolist()) but this returns KeyError: \"tag '-0.3502606451511383' not seen in training corpus\/invalid\" as there are vectors not in the dictionary.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":1786,"Q_Id":55924378,"Users Score":0,"Answer":"It turns out there is a function called similarity_unseen_docs(...) which can be used to find the similarity of 2 documents in the test data. \nHowever, I will leave the question unsolved for now as it is not very optimal, since I would need to manually compare the specific document with every other document in the test data. Also, it compares the words in the documents instead of the vectors, which could affect accuracy.","Q_Score":0,"Tags":"python,machine-learning,gensim,doc2vec","A_Id":55924682,"CreationDate":"2019-04-30T15:36:00.000","Title":"Doc2Vec - Finding document similarity in test data","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":1,"ERRORS":0,"REVIEW":0},{"Question":"I'm trying to run a python program in the online IDE SourceLair. I've written a line of code that simply prints hello, but I am embarrassed to say I can't figure out how to RUN the program. \nI have the console, web server, and terminal available on the IDE already pulled up. I just don't know how to start the program. I've tried it on Mac OSX and Chrome OS, and neither works. \nI don't know if anyone has experience with this IDE, but I can hope. 
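A hedged sketch of the infer_vector() route from the accepted doc2vec answer above (Q_Id 55924378); the model path and tokenized test documents are hypothetical:

```python
import numpy as np
from gensim.models.doc2vec import Doc2Vec

model = Doc2Vec.load("my_doc2vec.model")  # hypothetical trained model
test_docs = [["first", "test", "doc"], ["second", "test", "doc"]]

# Infer one vector per unseen document, then compare them to each other
# directly with cosine similarity instead of tag-based n_similarity().
vecs = np.array([model.infer_vector(words) for words in test_docs])
vecs /= np.linalg.norm(vecs, axis=1, keepdims=True)
sims = vecs @ vecs.T       # cosine similarity between every pair of test docs
print(sims[0])             # similarities of test doc 0 to all test docs
```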
Thanks!!","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":63,"Q_Id":55929577,"Users Score":1,"Answer":"Can I ask you why you are using SourceLair?\nWell, I just figured it out in about 2 mins... it's the same as using any other editor for python.\nAll you have to do is run it in the terminal: python (nameoffile).py","Q_Score":0,"Tags":"python,ide","A_Id":55929861,"CreationDate":"2019-04-30T22:47:00.000","Title":"How to run a python program using sourcelair?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am trying to implement the clipped PPO algorithm for classical control tasks like keeping room temperature, battery charge, etc. within certain limits. So far I've seen implementations in game environments only. My question is: are game environments and classical control problems different when it comes to implementing the clipped PPO algorithm? If they are, help and tips on how to implement the algorithm for my case are appreciated.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":878,"Q_Id":55943678,"Users Score":2,"Answer":"I'm answering your question from a general RL point of view; I don't think the particular algorithm (PPO) makes any difference in this question.\nI think there are no fundamental differences; both can be seen as discrete control problems. In a game you observe the state, then choose an action and act according to it, and receive a reward and the observation of the subsequent state.\nNow if you take a simple control problem, instead of a game you probably have a simulation (or just a very simple dynamic model) that describes the behavior of your problem. For example the equations of motion for an inverted pendulum (another classical control problem). In some cases you might directly interact with the real system, not a model of it, but this is rare as it can be really slow, and the typical sample complexities of RL algorithms make learning on a real (physical) system less practical.\nEssentially you interact with the model of your problem just the same way as you do with a game: you observe a state, take an action and act, and observe the next state. The only difference is that while in games reward is usually pre-defined (some score or goal state), you probably need to define the reward function for your problem. But again, in many cases you also need to define rewards for games, so this is not a major difference either.","Q_Score":1,"Tags":"python,keras,reinforcement-learning","A_Id":55948320,"CreationDate":"2019-05-01T22:51:00.000","Title":"How to implement Proximal Policy Optimization (PPO) Algorithm for classical control problems?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am wondering how to go about visualization of my frozen graph def. I need it to figure out my tensorflow network's input and output nodes. I have already tried several methods to no avail, like the summarize graph tool. 
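The PPO answer above (Q_Id 55943678) describes the observe/act/reward loop in prose; a minimal sketch of that loop, assuming the classic gym-style reset/step API, with a random policy standing in for PPO:

```python
import gym  # assumes the classic gym API (reset/step returning a 4-tuple)

env = gym.make("CartPole-v1")   # a classical control problem, handled like a game
state = env.reset()
done = False
while not done:
    action = env.action_space.sample()   # a PPO policy would pick the action here
    state, reward, done, info = env.step(action)  # reward is predefined by the env
env.close()
```

For a custom control problem (room temperature, battery charge), the environment's step() would wrap your simulation and you would define the reward function yourself, exactly as the answer notes.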
Does anyone have an answer for some things that I can try? I am open to clarifying questions, thanks in advance.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":221,"Q_Id":55977680,"Users Score":0,"Answer":"You can try to use TensorBoard. It is on the Tensorflow website...","Q_Score":0,"Tags":"python,tensorflow,tensorboard","A_Id":55977698,"CreationDate":"2019-05-03T22:13:00.000","Title":"Visualizing a frozen graph_def.pb","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am new to Python and to train myself, I would like to use Python to build a database that would store information about wine - bottle, date, rating etc. The idea is that:\n\nI could use the database to add new wine entries\nI could use the database to browse wines I have previously entered\nI could run some small analyses\n\nThe design I am thinking of is: \n\nDesign the database with the Python package sqlite3\nMake a GUI built on top of the database with the package Tkinter, so that I can both enter new data and query the database if I want.\n\nMy question is: would you recommend this design and these packages? Is it possible to build a GUI on top of a database? I know StackOverflow is more for specific questions rather than \"project design\" questions so I would appreciate if anyone could point me to forums that discuss project design ideas.\nThanks.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":2745,"Q_Id":55985778,"Users Score":0,"Answer":"If it's just for you, sure there is no problem with that stack.\nIf I were doing it, I would skip Tkinter, and build something using Flask (or Django). Doing a web page as a GUI yields faster results, is less fiddly, and is more applicable to the job market.","Q_Score":0,"Tags":"python,database,sqlite,user-interface","A_Id":55985903,"CreationDate":"2019-05-04T18:47:00.000","Title":"Python: how to create database and a GUI together?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have some trouble reading with pandas a csv file which includes the special character '\u0153'.\nI've done some research and it appears that this character has been added to the ISO 8859-15 encoding standard.\nI've tried to specify this encoding standard to the pandas read_csv method but it doesn't properly get this special character (I get a '\u2610' instead) in the result dataframe:\ndf= pd.read_csv(my_csv_path, \";\", header=None, encoding=\"ISO-8859-15\")\nDoes someone know how I could get the right '\u0153' character (or even better the string 'oe') instead?\nThanks a lot :)","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":167,"Q_Id":55987923,"Users Score":0,"Answer":"Anyone have a clue? 
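A minimal sqlite3 sketch for the wine-database question above (Q_Id 55985778); the file name, table layout and sample row are hypothetical:

```python
import sqlite3

conn = sqlite3.connect("wines.db")  # hypothetical database file
conn.execute("CREATE TABLE IF NOT EXISTS wines (bottle TEXT, date TEXT, rating REAL)")
conn.execute("INSERT INTO wines VALUES (?, ?, ?)", ("Chateau Example", "2019-05-04", 4.5))
conn.commit()
print(conn.execute("SELECT * FROM wines").fetchall())  # browse previous entries
conn.close()
```

Either a Tkinter window or a Flask view can sit on top of exactly these calls; the GUI choice does not change the database layer.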
I've managed the problem by manually rewriting this special character before reading my csv with pandas, but that doesn't answer my question :(","Q_Score":0,"Tags":"python-3.x,pandas,encoding","A_Id":55993149,"CreationDate":"2019-05-05T00:30:00.000","Title":"Pandas read_csv method can't get '\u0153' character properly while using encoding ISO 8859-15","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am trying to run a python script using OpenCV with PyPy, but all the documentation that I found didn't work.\nThe installation of PyPy went well, but when I try to run the script it says that it can't find OpenCV modules like 'cv2' for example, despite having cloned opencv for pypy directly from a github repository.\nI would need to know how to do it exactly.","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":3007,"Q_Id":56004743,"Users Score":1,"Answer":"pip install opencv-python worked well for me on python 2.7; I can import and use cv2.","Q_Score":3,"Tags":"python,opencv,pypy","A_Id":56004857,"CreationDate":"2019-05-06T11:57:00.000","Title":"Using OpenCV with PyPy","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have a SOAP url; while running the url through the browser I am getting a wsdl response. But when I am trying to call a method in the response using the required parameter list, it is showing \"ARERR [149] A user name must be supplied in the control record\". I tried using PHP as well as python but I am getting the same error.\nI searched this error and found information like this: \"The name field of the ARControlStruct parameter is empty. Supply the name of an AR System user in this field.\" But nowhere did I see how to supply the user name parameter.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1235,"Q_Id":56016625,"Users Score":0,"Answer":"I got the solution for this problem. Following are the steps I followed to solve the issue (I used \"zeep\", a 3rd party module, to solve this):\n\nRun the following command to understand the WSDL:\n\npython -mzeep wsdl_url\n\nSearch for the string \"Service:\". 
Below that we can see our operation name.\nFor my operation I found the following entry:\n\nMyOperation(parameters..., _soapheaders={parameters: ns0:AuthenticationInfo})\nwhich clearly communicates that I have to pass parameters and an auth param using the kwarg \"_soapheaders\"\nWith that I came to know that I have to pass my authentication element as the _soapheaders argument to the MyOperation function.\n\nCreated Auth Element:\n\nauth_ele = client.get_element('ns0:AuthenticationInfo')\nauth = auth_ele(userName='me', password='mypwd')\n\nPassed the auth to my Operation:\n\nclient.service.MyOperation('parameters..', _soapheaders=[auth])","Q_Score":0,"Tags":"php,python,web-services,soap,wsdl","A_Id":56127550,"CreationDate":"2019-05-07T06:21:00.000","Title":"Getting ARERR 149 A user name must be supplied in the control record","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":1,"ERRORS":1,"REVIEW":0},{"Question":"I'm trying to implement a BFS function that will print a list of nodes of a directed graph as visited using Breadth-First-Search traversal. The function has to be implemented non-recursively and it has to traverse through all the nodes in a graph, so if there are multiple trees it will print in the following way:\nTree 1: a, b\nTree 2: d, e, h\nTree 3: .....\nMy main difficulty is understanding how to make the BFS function traverse through all the nodes if the graph has several trees, without reprinting previously visited nodes.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":2212,"Q_Id":56030239,"Users Score":0,"Answer":"BFS is usually done with a queue. When you process a node, you push its children onto the queue. After processing the node, you process the next one in the queue.\nThis is by nature non-recursive.","Q_Score":2,"Tags":"python,breadth-first-search","A_Id":56030323,"CreationDate":"2019-05-07T20:45:00.000","Title":"How to implement Breadth-First-Search non-recursively for a directed graph on python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"sorry for the noob question, but how do I kill the TensorBoard PID?\nIt says:\nReusing TensorBoard on port 6006 (pid 5128), started 4 days, 18:03:12 ago. (Use '!kill 5128' to kill it.)\nBut I cannot find any PID 5128 in the Windows task manager. Using '!kill 5128' within jupyter returns an error that the command kill cannot be found. Using it in the Windows cmd or conda cmd does not work either. 
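Expanding the queue-based BFS answer above (Q_Id 56030239) into a runnable sketch that restarts from every unvisited node, so each tree gets its own line and no node is reprinted; the adjacency lists are hypothetical and match the question's example output:

```python
from collections import deque

def bfs_forest(graph):
    """Non-recursive BFS over a whole directed graph, one line per tree."""
    visited = set()
    tree = 0
    for start in graph:            # restart BFS from every still-unvisited node
        if start in visited:
            continue
        tree += 1
        order, queue = [], deque([start])
        visited.add(start)
        while queue:
            node = queue.popleft()
            order.append(node)
            for child in graph.get(node, ()):
                if child not in visited:
                    visited.add(child)
                    queue.append(child)
        print("Tree %d: %s" % (tree, ", ".join(order)))

bfs_forest({"a": ["b"], "b": [], "d": ["e", "h"], "e": [], "h": []})
# Tree 1: a, b
# Tree 2: d, e, h
```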
\nThanks for your help.","AnswerCount":2,"Available Count":1,"Score":1.0,"is_accepted":false,"ViewCount":10028,"Q_Id":56036922,"Users Score":11,"Answer":"If you clear the contents of AppData\/Local\/Temp\/.tensorboard-info, and delete your logs, you should be able to have a fresh start","Q_Score":7,"Tags":"python,tensorflow,tensorboard","A_Id":64094622,"CreationDate":"2019-05-08T08:51:00.000","Title":"How to kill tensorboard with Tensorflow2 (jupyter, Win)","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":1,"REVIEW":0},{"Question":"I have created a word2vec file and I want to extract only the line at position [0].\nThis is the word2vec file:\n`36 16\nActivity 0.013954502 0.009596351 -0.0002082094 -0.029975398 -0.0244055 -0.001624907 0.01995442 0.0050479663 -0.011549354 -0.020344704 -0.0113901375 -0.010574887 0.02007604 -0.008582828 0.030914625 -0.009170294\nDATABASED%GWC%5 0.022193532 0.011890317 -0.018219836 0.02621059 0.0029900416 0.01779779 -0.026217759 0.0070709535 -0.021979155 0.02609082 0.009237218 -0.0065825963 -0.019650755 0.024096865 -0.022521153 0.014374277\nDATABASED%GWC%7 0.021235622 -0.00062567473 -0.0045315344 0.028400827 0.016763352 0.02893731 -0.013499333 -0.0037113864 -0.016281538 0.004078895 0.015604254 -0.029257657 0.026601797 0.013721668 0.016954066 -0.026421601`","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":12,"Q_Id":56039771,"Users Score":0,"Answer":"glove_model[\"Activity\"] should get you its vector representation from the loaded model. This is because glove_model is an object of type KeyedVectors and you can use a key value to index into it.","Q_Score":0,"Tags":"python,pycharm","A_Id":56039845,"CreationDate":"2019-05-08T11:29:00.000","Title":"how to extract line from a word2vec file?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I'm starting to work with Django; I have already done some models, but always with a 'code-first' approach, so Django handled the table creation etc. Right now I'm integrating an already existing database with the ORM and I encountered some problems. \nThe database has a lot of many-to-many relationships, so there are quite a few tables linking two other tables. I ran the inspectdb command to let Django prepare some models for me. I revised them; it did a rather good job guessing the fields and relations, but the thing is, I think I don't need those link tables in my models, because Django handles many-to-many relationships with ManyToManyField fields, but I want Django to use those link tables under the hood.\nSo my question is: should I delete the models for the link tables and add ManyToManyFields to the corresponding models, or should I somehow use these models?\nI don't want to mess up the database structure; it's quite heavily populated.\nI'm using Postgres 9.5, Django 2.2.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":217,"Q_Id":56045891,"Users Score":0,"Answer":"In many cases it doesn't matter. 
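A hedged sketch of the KeyedVectors lookup from the answer above (Q_Id 56039771), assuming the gensim 4 API; "vectors.txt" is a hypothetical name for the word2vec-format file whose header is "36 16":

```python
from gensim.models import KeyedVectors

kv = KeyedVectors.load_word2vec_format("vectors.txt")
print(kv.index_to_key[0])   # first key in file order, here 'Activity'
print(kv["Activity"])       # its 16-dimensional vector
```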
If you would like to keep the code minimal then m2m fields are a good way to go. If you don't control the database structure it might be worth keeping the inspectdb schema in case you have to do it again after schema changes that you don't control. If the m2m link tables can grow properties of their own then you need to keep them as models.","Q_Score":0,"Tags":"python,django,many-to-many,django-orm","A_Id":56046079,"CreationDate":"2019-05-08T17:12:00.000","Title":"Handling many-to-many relationship from existing database using Django ORM","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am plotting several plots on one figure using matplotlib from csv files; however, I want the plots in order. I want to somehow use the read_csv method to read the csv files from a directory in the order they are listed in so that they are outputted in the same fashion.\nI want the plots listed under each other the same way the csv files are listed in the directory.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":47,"Q_Id":56048345,"Users Score":2,"Answer":"you could use os.listdir() to get all the files in the folder and then sort them in a certain way, for example by name (the python built-in sorted() would be enough). If instead you want fancier ordering, you could retrieve both the name and last-modified date, store them in a dictionary, order the keys and retrieve the values. So as @Fausto Morales said, it all only depends on which order you would like them to be sorted in.","Q_Score":1,"Tags":"python,pandas,csv,matplotlib,python-3.7","A_Id":56048584,"CreationDate":"2019-05-08T20:10:00.000","Title":"Is there a way to use the \"read_csv\" method to read the csv files in order they are listed in a directory?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have a python script that monitors a website, and I want it to send me a notification when some particular change happens to the website.\nMy question is: how can I make that Python script run forever somewhere else (not my machine, because I want it to send me a notification even when my machine is off)?\nI have thought about RDP, but I wanted to have your opinions also.\n(PS: a FREE service if possible, otherwise the lowest cost)\nThank you!","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":724,"Q_Id":56056794,"Users Score":1,"Answer":"I would suggest you set up an AWS EC2 instance with whatever OS you want. 
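A small sketch of the sorted-directory approach from the answer above (Q_Id 56048345); the folder name is hypothetical:

```python
import os

import matplotlib.pyplot as plt
import pandas as pd

folder = "csv_dir"                      # hypothetical directory of csv files
names = sorted(os.listdir(folder))      # listdir order is arbitrary, so sort explicitly
fig, axes = plt.subplots(len(names), 1, squeeze=False)
for ax, name in zip(axes[:, 0], names):
    pd.read_csv(os.path.join(folder, name)).plot(ax=ax, title=name)
plt.show()
```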
\nAs a beginner, you can get 750 hours of usage for free to run your script on.","Q_Score":0,"Tags":"python,web-services,monitoring,network-monitoring","A_Id":56057837,"CreationDate":"2019-05-09T09:52:00.000","Title":"How to make a python script run forever online?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I'm working on a tool that generates dummy binary files for a project. We have a spec that describes the real binary files, which are created from a stream of values with various bit lengths. I use input and spec files to create a list of values, and the bitstring library's BitArray class to convert the values and join them together.\nThe problem is that the values' lengths don't always add up to full bytes, and I need the file to contain the bits as-is. Normally I could use BitArray.tofile(), but that method automatically pads the file with zeroes at the end.\nIs there another way to write the bits to a file?","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":990,"Q_Id":56098610,"Users Score":0,"Answer":"You need to pad the, say, 7-bit value so it matches a whole number of bytes:\n1010101 (7 bits) --> 01010101\n1111 (4 bits) --> 00001111\nPadding the most significant digits does not affect the data taken from the file.","Q_Score":2,"Tags":"python,binaryfiles,binary-data,bitstring","A_Id":56099355,"CreationDate":"2019-05-12T11:06:00.000","Title":"How to write binary file with bit length not a multiple of 8 in Python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have two computers with an internet connection. They both have public IPs and they are NATed. What I want is to send a variable from PC A to PC B and close the connection. \nI have thought of two approaches for this:\n1) Using sockets. PC B will listen for a connection from PC A. Then, when the variable has been sent, the connection will be closed. The problem is that the sockets will not communicate, because I would have to forward the traffic from my public IP to PC B.\n2) An out-of-the-box idea is to have the variable published online somewhere. I mean making a public IP hold the variable in HTML, and then the PC would send a GET to that IP and read the variable. The problem is, how do I make that variable accessible over the internet?\nAny ideas would be much appreciated.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":233,"Q_Id":56129037,"Users Score":2,"Answer":"Figured a solution out. I made a dummy server using flask and hosted it at pythonanywhere.com for free. 
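A minimal bitstring sketch of the padding idea from the answer above (Q_Id 56098610); the 7-bit value and output file name are hypothetical:

```python
from bitstring import BitArray

bits = BitArray("0b1010101")              # 7 bits, not a whole number of bytes
padded = BitArray(8 - len(bits)) + bits   # BitArray(n) is n zero bits -> 01010101
with open("out.bin", "wb") as f:
    padded.tofile(f)                      # exactly one byte, no surprise padding
```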
The variables are posted to the server from PC A and then PC B uses the GET method to retrieve them locally.","Q_Score":1,"Tags":"python,python-3.x,sockets,networking","A_Id":56148950,"CreationDate":"2019-05-14T11:09:00.000","Title":"Send variable between PCs over the internet using python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"In airflow, the \"Gantt\" chart offers quite a good view of the performance of the tasks that ran. It offers stats like start\/end time, duration, etc.\nDo you guys know a way to programmatically pull these stats via the Airflow API? I would like to use these stats and generate periodic reports on the performance of my tasks and how it changes over time.\nMy airflow version is: 1.9\nPython: 3.6.3\nRunning on top of docker\nThanks!\nKelvin\nAirflow online documentation","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1250,"Q_Id":56156636,"Users Score":2,"Answer":"One easy approach could be to set up a SQLAlchemy connection; airflow stores\/sends all the data in there once the configuration is completed (dag info\/stat\/fail, task info\/stats etc.).\nEdit airflow.cfg and add:\nsql_alchemy_conn = mysql:\/\/------\/table_name","Q_Score":1,"Tags":"python-3.x,airflow","A_Id":56157305,"CreationDate":"2019-05-15T19:48:00.000","Title":"Pulling duration stats API in Airflow","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am using Eclipse on Linux to develop C applications, and the build system I have makes use of make and python. I have a custom virtualenv installed and managed by pyenv, and it works fine from the command line if I pre-select the virtualenv with, say pyenv shell myvenv.\nHowever I want Eclipse to make use of this virtualenv when building (via \"existing makefile\") from within Eclipse. Currently it runs my Makefile but uses the system python in \/usr\/bin\/python, which is missing all of the packages needed by the build system.\nIt isn't clear to me how to configure Eclipse to use a custom Python interpreter such as the one in my virtualenv. I have heard talk of setting PYTHONPATH however this seems to be for finding site-packages rather than the interpreter itself. My virtualenv is based on python 3.7 and my system python is 2.7, so setting this alone probably isn't going to work.\nI am not using PyDev (this is a C project, not a Python project) so there's no explicit support for Python in Eclipse. I'd prefer not to install PyDev if I can help it.\nI've noticed that pyenv adds its plugins, shims and bin directories to PATH when activated. I could explicitly add these to PATH in Eclipse, so that Eclipse uses pyenv to find an interpreter. 
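A hedged sketch of the dummy-server relay described in the accepted answer above (Q_Id 56129037); route name and storage are hypothetical:

```python
from flask import Flask, request

app = Flask(__name__)
store = {"value": None}

@app.route("/var", methods=["GET", "POST"])
def var():
    if request.method == "POST":                     # PC A posts the variable
        store["value"] = request.get_data(as_text=True)
        return "ok"
    return store["value"] or ""                      # PC B fetches it with GET
```

Hosted on a service with a public address (the answer used pythonanywhere.com), this sidesteps the NAT port-forwarding problem entirely.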
However I'd prefer to point directly at a specific virtualenv rather than use the pyenv machinery to find the current virtualenv.","AnswerCount":3,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":930,"Q_Id":56159051,"Users Score":0,"Answer":"I had the same trouble and after some digging, there are two solutions: project-wide and workspace-wide. I prefer the project-wide one as it will be saved in the git repository and the next person doesn't have to pull their hair out.\nFor the project-wide solution, add \/Users\/${USER}\/.pyenv\/shims: to the start of \"Project properties > C\/C++ Build > Environment > PATH\".\nI couldn't figure out the other method fully (mostly because I'm happy with the other one) but it should be possible to modify \"Eclipse preferences > C\/C++ > Build > Environment\". You should change the radio button and add a PATH variable.","Q_Score":2,"Tags":"python,eclipse,makefile,virtualenv,pyenv","A_Id":61663567,"CreationDate":"2019-05-15T23:51:00.000","Title":"How to use a Pyenv virtualenv from within Eclipse?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am using Eclipse on Linux to develop C applications, and the build system I have makes use of make and python. I have a custom virtualenv installed and managed by pyenv, and it works fine from the command line if I pre-select the virtualenv with, say pyenv shell myvenv.\nHowever I want Eclipse to make use of this virtualenv when building (via \"existing makefile\") from within Eclipse. Currently it runs my Makefile but uses the system python in \/usr\/bin\/python, which is missing all of the packages needed by the build system.\nIt isn't clear to me how to configure Eclipse to use a custom Python interpreter such as the one in my virtualenv. I have heard talk of setting PYTHONPATH however this seems to be for finding site-packages rather than the interpreter itself. My virtualenv is based on python 3.7 and my system python is 2.7, so setting this alone probably isn't going to work.\nI am not using PyDev (this is a C project, not a Python project) so there's no explicit support for Python in Eclipse. I'd prefer not to install PyDev if I can help it.\nI've noticed that pyenv adds its plugins, shims and bin directories to PATH when activated. I could explicitly add these to PATH in Eclipse, so that Eclipse uses pyenv to find an interpreter. However I'd prefer to point directly at a specific virtualenv rather than use the pyenv machinery to find the current virtualenv.","AnswerCount":3,"Available Count":3,"Score":-0.0665680765,"is_accepted":false,"ViewCount":930,"Q_Id":56159051,"Users Score":-1,"Answer":"Typing CMD+SHIFT+. 
will show you dotfiles & directories that begin with dot in any Mac finder dialog box...","Q_Score":2,"Tags":"python,eclipse,makefile,virtualenv,pyenv","A_Id":59180632,"CreationDate":"2019-05-15T23:51:00.000","Title":"How to use a Pyenv virtualenv from within Eclipse?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am using Eclipse on Linux to develop C applications, and the build system I have makes use of make and python. I have a custom virtualenv installed and managed by pyenv, and it works fine from the command line if I pre-select the virtualenv with, say pyenv shell myvenv.\nHowever I want Eclipse to make use of this virtualenv when building (via \"existing makefile\") from within Eclipse. Currently it runs my Makefile but uses the system python in \/usr\/bin\/python, which is missing all of the packages needed by the build system.\nIt isn't clear to me how to configure Eclipse to use a custom Python interpreter such as the one in my virtualenv. I have heard talk of setting PYTHONPATH however this seems to be for finding site-packages rather than the interpreter itself. My virtualenv is based on python 3.7 and my system python is 2.7, so setting this alone probably isn't going to work.\nI am not using PyDev (this is a C project, not a Python project) so there's no explicit support for Python in Eclipse. I'd prefer not to install PyDev if I can help it.\nI've noticed that pyenv adds its plugins, shims and bin directories to PATH when activated. I could explicitly add these to PATH in Eclipse, so that Eclipse uses pyenv to find an interpreter. However I'd prefer to point directly at a specific virtualenv rather than use the pyenv machinery to find the current virtualenv.","AnswerCount":3,"Available Count":3,"Score":-0.0665680765,"is_accepted":false,"ViewCount":930,"Q_Id":56159051,"Users Score":-1,"Answer":"For me, following steps worked ( mac os 10.12, eclipse photon version, with pydev plugin)\n\nProject -> properties\nPydev-Interpreter\/Grammar \nClick here to configure an interpreter not listed (under interpret combobox)\nopen interpreter preference page \nBrowse for python\/pypy exe -> my virtualenvdirectory\/bin\/python\nThen the chosen python interpreter path should show ( for me still, it was not pointing to my virtual env, but I typed my path explicitly here and it worked)\n\nIn the bottom libraries section, you should be able to see the site-packages from your virtual env\nExtra tip - In my mac os the virtual env was starting with .pyenv, since it's a hidden directory, I was not able to select this directory and I did not know how to view the hidden directory in eclipse file explorer. Therefore I created an softlink ( without any . 
in the name) to the hidden directory (.pyenv) and then I was able to select the softlink","Q_Score":2,"Tags":"python,eclipse,makefile,virtualenv,pyenv","A_Id":57618150,"CreationDate":"2019-05-15T23:51:00.000","Title":"How to use a Pyenv virtualenv from within Eclipse?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am trying to cluster a data set containing mixed data (nominal and ordinal) using k_prototype clustering based on Huang, Z.: Clustering large data sets with mixed numeric and categorical values.\nMy question is: how do I find the optimal number of clusters?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":121,"Q_Id":56166439,"Users Score":0,"Answer":"There is not one optimal number of clusters. But dozens. Every heuristic will suggest a different \"optimal\" number for another poorly defined notion of what is \"optimal\" that likely has no relevance for the problem that you are trying to solve in the first place.\nRather than being overly concerned with \"optimality\", explore and experiment more. Study what you are actually trying to achieve, and how to get this into mathematical form to be able to compute what is solving your problem, and what is solving someone else's...","Q_Score":0,"Tags":"python,cluster-analysis","A_Id":56202424,"CreationDate":"2019-05-16T10:29:00.000","Title":"the clustering of mixed data using python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I was training a neural network with images of an eye that are shaped 36x60. So I can only predict the result using a 36x60 image? But in my application I have a video stream, this stream is divided into frames, for each frame 68 points of landmarks are predicted. In the eye range, I can select the eye point, and using the 'boundingrect' function from OpenCV, it is very easy to get a cropped image. But this image does not have the shape 36x60. What is the correct way to get 36x60 data that can be used for forecasting? Or how can I use a neural network with data of another shape?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":81,"Q_Id":56181395,"Users Score":2,"Answer":"Neural networks (insofar as I've encountered) have a fixed input shape, freedom permitted only to batch size. This (probably) goes for every amazing neural network you've ever seen. Don't be too afraid of reshaping your image with off-the-shelf sampling to the network's expected input size. Robust computer-vision networks are generally trained on augmented data; randomly scaled, skewed, and otherwise transformed in order to---among other things---broaden the network's ability to handle this unavoidable scaling situation.\nThere are caveats, of course. An input for prediction should be as similar to the dataset it was trained on as possible, which is to say that a model should be applied to the data for which it was designed. For example, consider an object detection network made for satellite applications. 
If that same network is then applied to drone imagery, the relative size of objects may be substantially larger than the objects for which the network (specifically its anchor-box sizes) was designed. \nTl;dr: Assuming you're using the right network for the job, don't be afraid to scale your images\/frames to fit the network's inputs.","Q_Score":5,"Tags":"python-3.x,opencv,keras,neural-network,data-science","A_Id":56182355,"CreationDate":"2019-05-17T07:14:00.000","Title":"How to predict different data via neural network, which is trained on the data with 36x60 size?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I'm using odoo 11 and I want to know how I can configure odoo so that the HR manager and the employee receive an alert before a contract expires.\nIs it possible to do it? Any ideas, please?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":70,"Q_Id":56187289,"Users Score":0,"Answer":"This type of scenario can only be achieved by developing a custom addon.\nIn the custom addon you have to specify a cron job which will automatically fire some action on a regular basis and send an email notification to the HR Manager that some employees' contracts are going to expire.","Q_Score":0,"Tags":"python-3.x,odoo-11","A_Id":56493033,"CreationDate":"2019-05-17T13:19:00.000","Title":"How to configure alerts for employee contract expiration in odoo 11?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am attempting a rather large unsupervised learning project and am not sure how to properly utilize word2vec. We're trying to cluster groups of customers based on some stats about them and what actions they take on our website. Someone recommended I use word2vec and treat each action a user takes as a word in a \"sentence\". The reason this step is necessary is because a single customer can create multiple rows in the database (roughly same stats, but new row for each action on the website in chronological order). In order to perform kmeans on this data we need to get that down to one row per customer ID. Hence the previous idea to collapse down the actions as words in a sentence \"describing the user's actions\"\nMy question is: I've come across countless tutorials and resources online that show you how to use word2vec (combined with kmeans) to cluster words on their own, but none of them show how to use the word2vec output as part of a larger kmeans model. I need to be able to use the word2vec model alongside other values about the customer. How should I go about this? I'm using python for the clustering if you want to be specific with coding examples, but I could also just be missing something super obvious and high level. It seems word2vec outputs vectors, but kmeans needs straight numbers to work, no? 
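A minimal sketch of the crop-then-resize step from the eye-network answer above (Q_Id 56181395); the frame and boundingRect coordinates are hypothetical stand-ins:

```python
import cv2
import numpy as np

frame = np.zeros((480, 640, 3), np.uint8)   # stand-in for one video frame
x, y, w, h = 100, 100, 50, 30               # hypothetical cv2.boundingRect output
eye = frame[y:y + h, x:x + w]
eye = cv2.resize(eye, (60, 36))             # note: cv2.resize takes (width, height)
batch = eye[None, ...]                      # shape (1, 36, 60, 3) for model.predict
```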
Any guidance is appreciated.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":71,"Q_Id":56188976,"Users Score":0,"Answer":"There are two common approaches.\n\nTaking the average of all words. That is easy, but the resulting vectors tend to be, well, average. They are not similar to the keywords of the document, but rather similar to the most average and least informative words... My experiences with this approach are pretty disappointing, despite this being the most mentioned approach.\npar2vec\/doc2vec. You add a \"word\" for each user to all its contexts, in addition to the neighbor words, during training. This way you get a \"predictive\" vector for each paragraph\/document\/user the same way you get a word vector in the original word2vec. These are supposedly more informative but require much more effort to train - you can't download a pretrained model because they are computed during training.","Q_Score":0,"Tags":"python,cluster-analysis,k-means,word2vec,unsupervised-learning","A_Id":56202144,"CreationDate":"2019-05-17T14:56:00.000","Title":"User word2vec model output in larger kmeans project","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am using openCV to process an image and use houghcircles to detect the circles in the image under test, and also calculating the distance between their centers using euclidean distance.\nSince this would be in pixels, I need the absolute distances in mm or inches. Can anyone let me know how this can be done?\nThanks in advance.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1149,"Q_Id":56191574,"Users Score":0,"Answer":"The image formation process implies taking a 2D projection of the real, 3D world, through a lens. In this process, a lot of information is lost (e.g. the third dimension), and the transformation is dependent on lens properties (e.g. focal distance).\nThe transformation between the distance in pixels and the physical distance depends on the depth (distance between the camera and the object) and the lens. The complex, but more general way, is to estimate the depth (there are specialized algorithms which can do this under certain conditions, but require multiple cameras\/perspectives) or use a depth camera which can measure the depth. Once the depth is known, after taking into account the effects of the lens projection, an estimation can be made.\nYou do not give much information about your setup, but the transformation can be measured experimentally. You simply take a picture of an object of known dimensions and you determine the physical dimension of one pixel (e.g. if the object is 10x10 cm and in the picture it has 100x100px, then 1px is 1mm). This is strongly dependent on the distance to the camera from the object.\nAn approach a bit more automated is to use a certain pattern (e.g. checkerboard) of known dimensions. 
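Sketching the first (averaging) approach from the word2vec answer above (Q_Id 56188976), assuming the gensim 4 API; the customer "sentences" of action tokens are hypothetical:

```python
import numpy as np
from gensim.models import Word2Vec  # gensim 4 API assumed (vector_size kwarg)

# One "sentence" of action tokens per customer.
sessions = {"cust1": ["login", "search", "buy"], "cust2": ["login", "browse"]}
model = Word2Vec(list(sessions.values()), vector_size=16, min_count=1)

# Average the action vectors into one fixed-length row per customer,
# ready to concatenate with other numeric stats and feed to k-means.
user_vecs = {cid: np.mean([model.wv[a] for a in acts], axis=0)
             for cid, acts in sessions.items()}
```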
It can be automatically detected in the image and the same transformation can be performed.","Q_Score":0,"Tags":"python,opencv,image-processing","A_Id":56192030,"CreationDate":"2019-05-17T18:10:00.000","Title":"Conversion from pixel to general Metric(mm, in)","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have an airflow DAG that has over 100,000 tasks.\nI am able to run only up to 1000 tasks. Beyond that the scheduler hangs, the webserver cannot render tasks and is extremely slow on the UI.\nI have tried increasing min_file_process_interval and processor_poll_interval config params.\nI have set num_duration to 3600 so that the scheduler restarts every hour.\nAny limits I'm hitting on the webserver or scheduler? In general, how to deal with a large number of tasks in Airflow? Any config settings, etc would be very helpful.\nAlso, should I be using SubDagOperator at this scale or not? please advise.\nThanks,","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1319,"Q_Id":56203824,"Users Score":0,"Answer":"I was able to run more than 165,000 airflow tasks!\nBut there's a catch. Not all the tasks were scheduled and rendered in a single Airflow Dag.\nThe problems I faced when I tried to schedule more and more tasks were with the scheduler and webserver.\nThe memory and cpu consumption on the scheduler and webserver dramatically increased as more and more tasks were being scheduled (it is obvious and makes sense). It went to a point where the node couldn't handle it anymore (the scheduler was using over 80GB of memory for 16,000+ tasks)\nI split the single dag into 2 dags. One is a leader\/master. The second one being the worker dag.\nI have an airflow variable that says how many tasks to process at once (for example, num_tasks=10,000). Since I have over 165,000 tasks, the worker dag will process 10k tasks at a time in 17 batches.\nAll the leader dag does is trigger the same worker dag over and over with different sets of 10k tasks and monitor the worker dag run status. The first trigger operator triggers the worker dag for the first set of 10k tasks and keeps waiting until the worker dag completes. Once it's complete, it triggers the same worker dag with the next batch of 10k tasks and so on.\nThis way, the worker dag keeps being reused and never has to schedule more than num_tasks at once\nThe bottom line is, figure out the max number of tasks your Airflow setup can handle. And then launch the dags in leader\/worker fashion for max_tasks over and over again until all the tasks are done.\nHope this was helpful.","Q_Score":4,"Tags":"python,python-3.x,airflow,airflow-scheduler","A_Id":62958902,"CreationDate":"2019-05-19T00:17:00.000","Title":"How to run Airflow dag with more than 100 thousand tasks?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have a set of threads which can execute a synchronized method in python. 
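The experimental calibration from the pixel-to-mm answer above (Q_Id 56191574) as a worked sketch, with hypothetical numbers (a 10 cm object spanning 100 px, so 1 px corresponds to 1 mm at that camera distance):

```python
# Calibrate once from an object of known size at the working distance.
object_mm = 100.0                      # the object is 10 cm = 100 mm wide
object_px = 100.0                      # it spans 100 px in the image
mm_per_px = object_mm / object_px      # here: 1 mm per pixel

# Reuse the ratio for measurements at the same distance.
center_distance_px = 42.0              # e.g. distance between Hough circle centers
print(center_distance_px * mm_per_px, "mm")
```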
Currently, when a thread comes to the critical section, it enters if no other thread is executing it; otherwise it waits and enters once the lock is released (it works as synchronization is supposed to work). But I have a high-priority thread which should enter the critical section whether or not a low-priority thread is inside it. Is this possible? If so, how can I implement it?","AnswerCount":2,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":1241,"Q_Id":56204232,"Users Score":2,"Answer":"As another answer described very well, this is not possible; there is no way to do it.\nWhat you can, and often should, do is prevent another lower-priority thread from entering the critical section ahead of the high-priority thread. \nI.e. if the critical section is being held by some thread, that thread needs to exit it first. But by that time there might be multiple threads waiting for the critical section, some low and some high priority. You may want to ensure the higher-priority thread gets the critical section first in such a situation.","Q_Score":2,"Tags":"python,python-3.x,multithreading","A_Id":56204377,"CreationDate":"2019-05-19T02:12:00.000","Title":"Let a high priority python thread enter the critical section while a low priority thread is executing in the critical section","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"Is it possible to use os.popen() to achieve a result similar to os.system? I know that os.popen() is more secure, but I want to know how to be able to actually run the commands through this function. When using os.system(), things can get very insecure and I want to have a secure way of accessing terminal commands.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":495,"Q_Id":56212056,"Users Score":3,"Answer":"Anything that uses the shell to execute commands is insecure for obvious reasons (you don't want someone running rm -rf \/ in your shell :). Both os.system and os.popen use the shell.\nFor security, use the subprocess module with shell=False.\nEither way, os.popen has been documented as obsolete in favor of subprocess since Python 2.6 (os.system is not formally deprecated, but subprocess is the recommended replacement there too).","Q_Score":1,"Tags":"python,popen,python-os","A_Id":56212206,"CreationDate":"2019-05-19T21:14:00.000","Title":"Replace os.system with os.popen for security purposes","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"My Telegram bot code was working fine for weeks and I didn't change anything; today I suddenly got an [SSL: CERTIFICATE_VERIFY_FAILED] error and my bot code is no longer working on my PC.\nI use Ubuntu 18.04 and I'm using the telepot library.\nWhat is wrong and how do I fix it?\nEdit: I'm using the getMe method and I don't know where the certificate is or how to renew it, and I didn't import requests in my bot code. 
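A minimal sketch of the subprocess recommendation from the answer above:

    import subprocess

    # shell=False is the default when the command is given as a list:
    # the arguments go directly to the program, so nothing in them is
    # ever interpreted by a shell.
    result = subprocess.run(
        ["ls", "-l", "/tmp"],
        capture_output=True, text=True, check=True,  # Python 3.7+ keywords
    )
    print(result.stdout)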
I'm using telepot API by importing telepot in my code.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":1148,"Q_Id":56220056,"Users Score":1,"Answer":"Probably your certificate expired, that is why it worked fine earlier. Just renew it and all should be good. If you're using requests under the hood you can just pass verify=False to the post or get method but that is unwise.\nThe renew procedure depends on from where do you get your certificate. If your using letsencrypt for example with certbot. Issuing sudo certbot renew command from shell will suffice.","Q_Score":2,"Tags":"python,python-3.x,ssl,telegram-bot,telepot","A_Id":56220074,"CreationDate":"2019-05-20T11:26:00.000","Title":"\"SSL: CERTIFICATE_VERIFY_FAILED\" error in my telegram bot","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":1,"REVIEW":0},{"Question":"I have a driver which is written in C#, .NET 4.7.0 and build as DLL. I don't have sources from this driver. I want to use this driver in python application.\nI wrapped some functionality from driver into method of another C# project. Then I built it into DLL. I used RGiesecke.DllExport to make one method available in python. When i call this method from python using ctypes, I get WinError -532462766 Windows Error 0xe0434352.\nIf I exclude driver code and keep only wrapper code in exported method everything runs fine.\nCould you please give me some advice how to make this working or help me find better sollution? Moving from python to IronPython is no option here.\nThank you.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":306,"Q_Id":56224958,"Users Score":0,"Answer":"PROBLEM CAUSE:\nPython didn't run wrapper from directory where it was stored together with driver. That caused problem with loading driver.","Q_Score":1,"Tags":"c#,python,dll,wrapper,dllexport","A_Id":56241575,"CreationDate":"2019-05-20T16:35:00.000","Title":"Use C# DLL in Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have a bokeh plot with multiple y axes. I want to be able to zoom in one y axis while having the other one's displayed range stay the same. Is this possible in bokeh, and if it is, how can I accomplish that?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":27,"Q_Id":56225582,"Users Score":0,"Answer":"Bokeh does not support this, twin axes are always linked to maintain their original relative scale.","Q_Score":0,"Tags":"python,bokeh","A_Id":56226834,"CreationDate":"2019-05-20T17:25:00.000","Title":"How to make multiple y axes zoomable individually","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have a shell script that runs some java program on a remote server. 
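Relating to the C# DLL answer above, a hedged sketch of loading the wrapper from the directory that also holds the driver; the paths and the exported function name are hypothetical:

    import ctypes
    import os

    dll_dir = r"C:\drivers\mydevice"   # hypothetical folder with wrapper + driver DLLs
    os.chdir(dll_dir)                  # resolve the driver dependency from here
    # On Python 3.8+ (Windows), os.add_dll_directory(dll_dir) is the cleaner option.

    wrapper = ctypes.CDLL(os.path.join(dll_dir, "Wrapper.dll"))
    wrapper.ExportedMethod()           # hypothetical DllExport-ed entry point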
\nBut this shell script is to be executed by a python script which is on my local machine.\nHere's the flow : Python script executes the shell script (with paramiko), the shell script then executes a java class. \nI am getting an error : 'The java class could not be loaded. java.lang.UnsupportedClassVersionError: (Unsupported major.minor version 50.0)' whenever I run python code.\nLimitations: I cannot make any changes to the shell script.\nI believe this is java version issue. But I don't know how to explicitly have a python program to run in a specific java environment.\nPlease suggest how I can get rid of this error.\nThe java version of unix machine (where shell script executes) : 1.6.0\nJava version of my local machine (where python script executes): 1.7.0","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":49,"Q_Id":56232838,"Users Score":1,"Answer":"The shell script can stay the same, update java on the remote system to java 1.7 or later. Then it should work.\nAnother possibility could be to compile the java application for java 1.6 instead. The java compiler (javac) has the arguments -source and -target and adding -source 1.6 -target 1.6 when compiling the application should solve this issue, too (but limits the application to use java 1.6 features).\nAlso be aware: If you use a build system like gradle or maven, then you have a different way to set source and target version.","Q_Score":0,"Tags":"java,python-3.x,sh","A_Id":56232988,"CreationDate":"2019-05-21T07:05:00.000","Title":"Unsupported major.minor version when running a java program from shell script which is executed by a python program","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":1,"REVIEW":0},{"Question":"I had so many Python installations that it was getting frustrating, so I decided to do a full reinstall. I removed the \/Library\/Frameworks\/Python.Frameworks\/ folder, and meant to remove the \/usr\/local\/bin\/python folder too, but I accidentally removed the \/usr\/bin\/python instead. I don't see any difference, everything seems to be working fine for now, but I've read multiple articles online saying that I should never touch \/usr\/bin\/python as OS X uses it and things will break.\nI tried Time Machine but there are no viable recovery options. How can I manually \"restore\" what was deleted? Do I even need to, since everything seems to be working fine for now? I haven't restarted the Mac yet, in fear that things might break.\nI believe the exact command I ran was rm -rf \/usr\/bin\/python*, and I don't have anything python related in my \/usr\/bin\/ folder.\nI'm running on macOS Mojave 10.14.5","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":1220,"Q_Id":56233672,"Users Score":1,"Answer":"Items can't be recovered when you perform rm -rf. However, you can try the following:\ncp \/usr\/local\/bin\/python* \/usr\/bin \nThis would copy user local python to usr bin and most probably will bail you out.\nDon't worry, nothing will happen to your OS. 
It should work fine :)","Q_Score":0,"Tags":"python,macos,terminal","A_Id":56233846,"CreationDate":"2019-05-21T07:54:00.000","Title":"Accidentally deleted \/usr\/bin\/python instead of \/usr\/local\/bin\/python on OS X\/macOS, how to restore?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I would like go implement a hierarchical resnet architecture. However, I could not find any solution for this. For example, my data structure is like:\n\nclass A\n\n\nSubclass 1\nSubclass 2\n....\n\nclass B\n\n\nsubclass 6\n........\n\n\nSo i would like to train and predict the main class and then the subclass of the chosen\/predicted mainclass. Can someone provide a simple example how to do this with generators?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":286,"Q_Id":56235267,"Users Score":0,"Answer":"The easiest way to do so would be to train multiple classifiers and build a hierarchical system by yourself.\nOne classifier detecting class A, B etc. After that make a new prediction for subclasses.\nIf you want only one single classifier:\nWhat about just killing the first hierarchy of parent classes? Should be also quite easy. If you really want a model, where the hierarchy is learned take a look at Hierarchical Multi-Label Classification Networks.","Q_Score":0,"Tags":"python,keras,conv-neural-network,hierarchical,deep-residual-networks","A_Id":56241130,"CreationDate":"2019-05-21T09:29:00.000","Title":"How to build a resnet with Keras that trains and predicts the subclass from the main class?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"How do I create azure datafactory for incremental load using python?\nWhere should I mention file load option(Incremental Load:LastModifiedOn) while creating activity or pipeline??\nWe can do that using UI by selecting File Load Option. But how to do the same pragmatically using python?\nDoes python api for datafactory support this or not?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":273,"Q_Id":56265151,"Users Score":0,"Answer":"My investigations suggest that the Python SDK has not yet implemented this feature. I used the SDK to connect to my existing instance and fetched two example datasets. I did not find anything that looked like the 'last modified date'. I tried dataset.serialize() , dataset.__dict__ , dataset.properties.__dict__ . I also tried .__slots__ .\nTrying serialize() is significant because there ought to be parity between the JSON generated in the GUI and the JSON generated by the Python. 
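A minimal sketch of the first suggestion in the hierarchical-classification answer above: separate classifiers arranged in two stages. Compilation and training are omitted, the dense layers stand in for a resnet backbone, and the input shape and class counts are hypothetical:

    from tensorflow import keras

    def make_classifier(n_classes):
        # Stand-in for a resnet backbone; any feature extractor works here
        return keras.Sequential([
            keras.layers.Dense(128, activation="relu", input_shape=(256,)),
            keras.layers.Dense(n_classes, activation="softmax"),
        ])

    main_model = make_classifier(2)                        # class A vs class B
    sub_models = {0: make_classifier(4), 1: make_classifier(6)}

    def predict_hierarchical(x):
        main = int(main_model.predict(x).argmax(axis=1)[0])
        sub = int(sub_models[main].predict(x).argmax(axis=1)[0])
        return main, sub                                   # main class, then its subclass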
The lack of parity suggests the SDK version lags behind the GUI version.\nUPDATE: The SDK's are being updated.","Q_Score":0,"Tags":"python,azure-data-factory,incremental-load","A_Id":56279704,"CreationDate":"2019-05-22T21:18:00.000","Title":"AzureDataFactory Incremental Load using Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have a Watson voice assistant instance connected using SIP trunk to a Twilio API. I want to enable to the IBM Speech-To-Text add-on from the Twilio Marketplace which will allow me to obtain full transcriptions of phone calls made to the Watson Assistant bot. I want to store these transcriptions in a Cloudant Database I have created in IBM Cloud. Can I use the endpoint of my Cloudant Database as the callback URL for my Twilio add-on so that when the add-on is activated, the transcription will be added as a document in my Cloudant Database?\nIt seems that I should be able to somehow call a trancsription service through IBM Cloud's STT service in IBM Cloud, but since my assistant is connected through Twilio, this add-on seems like an easier option. I am new to IBM Cloud and chat-bot development so any information is greatly appreciated.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":129,"Q_Id":56265812,"Users Score":0,"Answer":"Twilio developer evangelist here.\nFirst up, I don't believe that you can enable add-ons for voice services that are served through Twilio SIP trunking.\nUnless I am mistaken and you are making a call through a SIP trunk to a Twilio number that is responding with TwiML. In this case, then you can add the STT add-on. I'm not sure it would be the best idea to set the webhook URL to your Cloudant DB URL as the webhook is not going to deliver the data in the format that Cloudant expects.\nInstead I would build out an application that can provide an endpoint to receive the webhook, transform the data into something Cloudant will understand and then send it on to the DB.\nDoes that help at all?","Q_Score":0,"Tags":"twilio,ibm-cloud,cloudant,python-cloudant,ibm-voice-gateway","A_Id":56334855,"CreationDate":"2019-05-22T22:30:00.000","Title":"Can I connect my IBM Cloudant Database as the callback URL for my Twilio IBM STT add-on service?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am connecting my slave via TCP\/IP, everything looks fine by using the Wireshark software I can validate that the CRC checksum always valid \u201cgood\u201d, but I am wondering how I can corrupt the CRC checksum so I can see like checksum \u201cInvalid\u201d. Any suggestion how can I get this done maybe python code or any other way if possible.\nThank you all \nTariq","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":406,"Q_Id":56268062,"Users Score":1,"Answer":"I think you use a library that computes CRC. 
You can form Modbus packet without it, if you want simulate bad CRC condition","Q_Score":1,"Tags":"python,tcp,checksum,crc,modbus-tcp","A_Id":56268121,"CreationDate":"2019-05-23T04:26:00.000","Title":"How corrupt checksum over TCP\/IP","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I noticed that the same transaction had a different transaction ID the second time I pulled it. Why is this the case? Is it because pending transactions have different transaction IDs than those same transactions once posted? Does anyone have recommendations for how I can identify unique transactions if the trx IDs are in fact changing?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":415,"Q_Id":56282490,"Users Score":0,"Answer":"Turns out that the transaction ID often does change. When a transaction is posted (stops pending), the original transaction ID becomes the pending transaction ID, and a new transaction ID is assigned.","Q_Score":4,"Tags":"javascript,python,api,banking,plaid","A_Id":57262467,"CreationDate":"2019-05-23T20:27:00.000","Title":"How to identify Plaid transactions if transaction ID's change","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I trained a model in keras and I'm thinking of pruning my fully connected network. I'm little bit lost on how to prune the layers. \nAuthor of 'Learning both Weights and Connections for Efficient\nNeural Networks', say that they add a mask to threshold weights of a layer. I can try to do the same and fine tune the trained model. But, how does it reduce model size and # of computations?","AnswerCount":2,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":2526,"Q_Id":56299034,"Users Score":4,"Answer":"If you add a mask, then only a subset of your weights will contribute to the computation, hence your model will be pruned. For instance, autoregressive models use a mask to mask out the weights that refer to future data so that the output at time step t only depends on time steps 0, 1, ..., t-1.\nIn your case, since you have a simple fully connected layer, it is better to use dropout. It randomly turns off some neurons at each iteration step so it reduces the computation complexity. However, the main reason dropout was invented is to tackle overfitting: by having some neurons turned off randomly, you reduce neurons' co-dependencies, i.e. you avoid that some neurons rely on others. 
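Expanding the CRC answer just above: the standard Modbus CRC-16 computed for a hand-built frame, with one bit flipped so the checksum shows up as invalid; the request bytes are a hypothetical read-holding-registers frame:

    def crc16_modbus(data: bytes) -> int:
        # Standard Modbus CRC-16: init 0xFFFF, reflected polynomial 0xA001
        crc = 0xFFFF
        for byte in data:
            crc ^= byte
            for _ in range(8):
                crc = (crc >> 1) ^ 0xA001 if crc & 1 else crc >> 1
        return crc

    frame = bytes([0x01, 0x03, 0x00, 0x00, 0x00, 0x02])   # hypothetical request
    good_crc = crc16_modbus(frame)
    bad_crc = good_crc ^ 0x0001                # corrupt one bit on purpose
    packet = frame + bad_crc.to_bytes(2, "little")  # Modbus appends CRC low byte first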
Moreover, at each iteration, your model will be different (a different number of active neurons and different connections between them), hence your final model can be interpreted as an ensemble (collection) of several different models, each specialized (we hope) in the understanding of a specific subset of the input space.","Q_Score":11,"Tags":"python,tensorflow,optimization,deep-learning,inference","A_Id":56299140,"CreationDate":"2019-05-24T20:15:00.000","Title":"How to implement neural network pruning?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am currently building a media website using node js. I would like to be able to control Kodi, which is installed on the server computer, remotely from the website browser. How would I go about doing this? My first idea was \n\nto simply see if I could somehow pipe the entire Kodi GUI into the\nbrowser such that the full program stays on the server\nand just the GUI is piped to the browser, sending commands back to\nthe server;\n\nhowever, I could find little documentation on how to do that.\nSecond, I thought of making a script (e.g. Python) that would be able to control Kodi and just interface node js with the Python script, but again, \nI could find little documentation on that.\n Any help would be much appreciated. \nThank You!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":93,"Q_Id":56306073,"Users Score":0,"Answer":"Can't you just go to settings -> services -> control and then the 'remote control via http' settings? I use this to log in to my local ip, e.g. 192.168.1.150:8080 (you can set the port on this page), from my browser and I can do anything from there","Q_Score":0,"Tags":"python,node.js,kodi","A_Id":57015259,"CreationDate":"2019-05-25T15:12:00.000","Title":"Controlling Kodi from Browser","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":1,"ERRORS":0,"REVIEW":0},{"Question":"I have a list of strings \nmy_list = ['1Jordan1', '2Michael2', '3Jesse3'].\nIf I want to delete the first and last character of each string, how would I do it in Python?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1843,"Q_Id":56308585,"Users Score":0,"Answer":"You would use slicing. I would use [1:-1].","Q_Score":1,"Tags":"python,python-3.x","A_Id":56308611,"CreationDate":"2019-05-25T20:48:00.000","Title":"How do I split a string by first and last character in python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am running an LSTM model on a multivariate time series data set with 24 features. I have run feature extraction using a few different methods (variance testing, random forest extraction, and an Extra Trees Classifier). Different methods have resulted in a slightly different subset of features. 
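The slicing answer above, written out:

    my_list = ['1Jordan1', '2Michael2', '3Jesse3']
    trimmed = [s[1:-1] for s in my_list]   # drop the first and last character
    print(trimmed)                          # ['Jordan', 'Michael', 'Jesse']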
I now want to test my LSTM model on all subsets to see which gives the best results.\nMy problem is that the test\/train RMSE scores for my 3 models are all very similar, and every time I run my model I get slightly different answers. This question is coming from a person who is naive and still learning the intricacies of neural nets, so please help me understand: in a case like this, how do you go about determining which model is best? Can you do seeding for neural nets? Or some type of averaging over a certain amount of trials?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":20,"Q_Id":56310122,"Users Score":2,"Answer":"Since you have mentioned that using the different feature extraction methods, you are only getting slightly different feature sets, so the results are also similar. Also since your LSTM model is then also getting almost similar RMSE values, the models are able to generalize well and learn similarly and extract important information from all the datasets.\nThe best model depends on your future data, the computation time and load of different methods and how well they will last in production. Setting a seed is not really a good idea in neural nets. The basic idea is that your model should be able to reach the optimal weights no matter how they start. If your models are always getting similar results, in most cases, it is a good thing.","Q_Score":1,"Tags":"python,neural-network,lstm,data-science,feature-extraction","A_Id":56315167,"CreationDate":"2019-05-26T02:09:00.000","Title":"Comparing results of neural net on two subsets of features","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am using openpyxl to create charts. For some reason, I do not want to insert row names when adding data. So, I want to edit the legend entries manually. I am wondering if anyone know how to do this.\nMore specifically \nclass openpyxl.chart.legend.Legend(legendPos='r', legendEntry=(), \n layout=None, overlay=None, spPr=None, txPr=None, extLst=None). I want to edit the legendEntry field","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":621,"Q_Id":56333244,"Users Score":0,"Answer":"You cannot do that. You need to set the rows when creating the plots. That will create the titles for your charts","Q_Score":0,"Tags":"python,excel,openpyxl","A_Id":56371107,"CreationDate":"2019-05-27T23:00:00.000","Title":"Setting legend entries manually","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"There's this event that my organization runs, and we have the ticket sales historic data from 2016, 2017, 2018. This data contains the quantity of tickets selled by day, considering all the sales period.\nTo the 2019 edition of this event, I was asked to make a prediction of the quantity of tickets selled by day, considering all the sales period, sort of to guide us through this period, meaning we would have the information if we are above or below the expected sales average. 
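A sketch of the averaging-over-trials idea from the question above, since single runs differ by random initialization; build_model and the data arrays are hypothetical placeholders:

    import numpy as np

    def mean_rmse(build_model, X_train, y_train, X_test, y_test, n_runs=5):
        # Train from scratch several times and average the test RMSE, so the
        # comparison between feature subsets is not dominated by one lucky
        # or unlucky weight initialization.
        scores = []
        for _ in range(n_runs):
            model = build_model()                     # fresh weights each run
            model.fit(X_train, y_train, epochs=50, verbose=0)
            pred = model.predict(X_test).ravel()
            scores.append(float(np.sqrt(np.mean((pred - y_test) ** 2))))
        return np.mean(scores), np.std(scores)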
\nThe problem is that the historical data has different sales period lengths in days:\nIn 2016, the total sales period was 46 days.\nIn 2017, 77 days.\nIn 2018, 113 days.\nIn 2019 we are planning 85 days. So how do I adjust this historical data, in a logical\/statistical way, so I can use it as input to a statistical predictive model (such as an ARIMA model)?\nAlso, I'm planning to do this in Python, so if you have any suggestions about that, I would love to hear them too!\nThank you!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":101,"Q_Id":56333832,"Users Score":0,"Answer":"Based on what I understand after reading your question, I would approach this problem in the following way.\n\nFor each day, find how far out the event is from that day. The max value for this number is 46 in 2016, 77 in 2017 etc. Scale this value by the max day.\nUse the above variable, along with day of the month, day of the week etc., as exogenous variables.\nAdditionally, use lag information from ticket sales. You can try a one-day lag, a one-week lag etc.\nYou would be able to generate all this data from the sale start until the end.\nUse the generated variables as predictors for each day, use ticket sales as the target variable, and fit a machine learning model instead of a pure forecasting model.\nUse the machine learning model along with the generated variables to predict future sales.","Q_Score":1,"Tags":"python,statistics,time-series,prediction","A_Id":56335423,"CreationDate":"2019-05-28T00:51:00.000","Title":"Time series prediction: need help using series with different periods of days","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I'm a new GitHub user, and this question may be a trivial newbie problem, so I apologize in advance.\nI'm using PyCharm for a Python project. I've set up a Git repository for the project and uploaded the files manually through the GitHub website. I also linked the repository to my PyCharm project. \nWhen I modify a file, PyCharm allows me to \"commit\" it, but when I try to \"push\" it, I get a PyCharm pop-up error message saying \"Push rejected.\" No further information is provided. How do I figure out what went wrong -- and how to fix it?\nThanks.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":2214,"Q_Id":56335672,"Users Score":1,"Answer":"If you manually uploaded files to GitHub by dropping them, it now likely has a different history than your local files. \nOne way you could get around this is to store all of your changes in a different folder, do a git pull in PyCharm, abandoning your changes so you are up to date with origin\/master, then commit the files and push as you have been doing.","Q_Score":0,"Tags":"python,git,pycharm","A_Id":56335967,"CreationDate":"2019-05-28T05:42:00.000","Title":"Why can't I push from PyCharm","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":1,"REVIEW":0},
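A sketch of the feature construction steps from the ticket-sales answer above; the column names are hypothetical:

    import pandas as pd

    def add_features(df):
        # df has hypothetical columns: date, sales, event_date
        df = df.copy()
        days_out = (df["event_date"] - df["date"]).dt.days
        df["days_out_scaled"] = days_out / days_out.max()  # comparable across years
        df["day_of_week"] = df["date"].dt.dayofweek
        df["day_of_month"] = df["date"].dt.day
        df["sales_lag_1"] = df["sales"].shift(1)           # one-day lag
        df["sales_lag_7"] = df["sales"].shift(7)           # one-week lag
        return df.dropna()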
{"Question":"I am currently implementing a decision tree algorithm. If I have continuous-valued features, how do I decide on a splitting point? I came across a few resources which say to choose midpoints between every two points, but considering I have 8000 rows of data this would be very time-consuming. The output\/feature label is categorical. Is there any approach where I can perform this operation more quickly?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":89,"Q_Id":56406338,"Users Score":0,"Answer":"A decision tree works by calculating entropy and information gain to determine the most informative feature, and 8000 rows is not too much for a decision tree. More generally, a random forest is similar to a decision tree but works as an ensemble; you can review and try it. Also, the slowness may be related to something else entirely.","Q_Score":0,"Tags":"python,machine-learning,artificial-intelligence,decision-tree,machine-learning-model","A_Id":56410719,"CreationDate":"2019-06-01T11:32:00.000","Title":"How to choose split variables for continuous features in a decision tree","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"This is my first question so I apologize if it's not the best quality.\nI have a use case: a user creates a monitoring task which sends an http request to a website every X hours. A user can have thousands of these tasks and can add\/modify and delete them. When a user creates a task, Django signals create a Celery periodic task which then runs periodically.\nI'm searching for a more scalable solution using AWS. I've read about using Lambda + CloudWatch Events. \nMy question is: how do I approach this to let my users create tens of thousands of these tasks in the cheapest \/ most scalable way?\nThank you for reading my question!\nPeter","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":209,"Q_Id":56413651,"Users Score":0,"Answer":"There is no straightforward solution to your problem. You have to proceed step by step, with some plumbing along the way.\nEvent management\n1 - Create a lambda function that creates a CloudWatch schedule.\n2 - Create a lambda function that deletes a CloudWatch schedule.\n3 - Persist any event created, using DynamoDB.\n4 - Create 2 API gateways that will invoke the 2 lambdas above.\n5 - Create another lambda function (used by CloudWatch) that will invoke the API gateway below.\n6 - Create an API gateway that will invoke the website via an http request.\nWhen the user creates an event from the app, the calls chain as follows:\n4 -> 1,3 -> 5 -> 6\nNow there are two other parameters to take into consideration:\nLambda concurrency: you can't run more than 1000 lambdas simultaneously in the same region.\nCloudWatch: you cannot create more than 100 rules per region. 
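Returning to the decision-tree answer above, a common speed-up is to sort each feature once and only evaluate midpoints where the class label changes, rather than between every pair of values; a minimal sketch:

    import numpy as np

    def candidate_splits(x, y):
        # Sort once (O(n log n)), then consider midpoints only where the
        # class label changes; splits inside a run of identical labels
        # can never improve information gain.
        order = np.argsort(x)
        xs, ys = x[order], y[order]
        change = ys[1:] != ys[:-1]
        return (xs[1:][change] + xs[:-1][change]) / 2.0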
Rule is where you define the schedule.","Q_Score":1,"Tags":"python,django,amazon-web-services,aws-lambda,amazon-cloudwatch","A_Id":56414951,"CreationDate":"2019-06-02T09:05:00.000","Title":"What is a scalable way of creating cron jobs on Amazon Web Services?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have a Jupyter notebook script that will be used to teach others how to use python. \nInstead of asking each participant to install the required packages, I would like to provide a folder with the environment ready from the start.\nHow can I do this?\nWhat is the easiest way to teach python without running into technical problems with packages\/environments etc.?","AnswerCount":4,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":1280,"Q_Id":56420000,"Users Score":2,"Answer":"The easiest way I have found to package python files is to use pyinstaller which packages your python file into an executable file. \nIf it's a single file I usually run pyinstaller main.py --onefile\nAnother option is to have a requirements file\nThis reduces installing all packages to one command pip install -r requirements.txt","Q_Score":0,"Tags":"python,jupyter-notebook,package,environment-variables,virtual-environment","A_Id":56420070,"CreationDate":"2019-06-03T00:39:00.000","Title":"Run python script from another computer without installing packages\/setting up environment?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have a Jupyter notebook script that will be used to teach others how to use python. \nInstead of asking each participant to install the required packages, I would like to provide a folder with the environment ready from the start.\nHow can I do this?\nWhat is the easiest way to teach python without running into technical problems with packages\/environments etc.?","AnswerCount":4,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":1280,"Q_Id":56420000,"Users Score":2,"Answer":"You would need to use a program such as py2exe, pyinstaller, or cx_freeze to package each the file, the modules, and a lightweight interpreter. 
The result will be an executable which does not require the user to have any modules or even python installed to access it; however, because of the built-in interpreter, it can get quite large (which is why Python is not commonly used to make executables).","Q_Score":0,"Tags":"python,jupyter-notebook,package,environment-variables,virtual-environment","A_Id":56420227,"CreationDate":"2019-06-03T00:39:00.000","Title":"Run python script from another computer without installing packages\/setting up environment?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I'm trying to use python-gitlab projects.files.create to upload a string content to gitlab.\nThe string contains '\\n' which I'd like it to be the real newline char in the gitlab file, but it'd just write '\\n' as a string to the file, so after uploading, the file just contains one line.\nI'm not sure how and at what point should I fix this, I'd like the file content to be as if I print the string using print() in python.\nThanks for your help.\nEDIT---\nSorry, I'm using python 3.7 and the string is actually a csv content, so it's basically like:\n',col1,col2\\n1,data1,data2\\n'\nSo when I upload it the gitlab file I want it to be:\n,col1,col2\n1,data1,data2","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":101,"Q_Id":56442415,"Users Score":0,"Answer":"I figured out by saving the string to a file and read it again, this way the \\n in the string will be translated to the actual newline char.\nI'm not sure if there's other of doing this but just for someone that encounters a similar situation.","Q_Score":0,"Tags":"python,gitlab","A_Id":56462585,"CreationDate":"2019-06-04T10:52:00.000","Title":"how to use python-gitlab to upload file with newline?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I need to install Django 2.2.2 on my MacBook pro (latest generation), and I am a user of python 3x. However, my default version of python is python 2x and I cannot pip install Django version 2x when I am using python 2x. Could anyone explain how to change the default version of python on MacBook I have looked at many other questions on this site and none have worked. 
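For the python-gitlab answer above, an alternative sketch assuming the content holds literal backslash-n sequences; project.files.create is the regular python-gitlab call, while the host, token, and paths are hypothetical:

    import gitlab

    gl = gitlab.Gitlab("https://gitlab.example.com", private_token="TOKEN")
    project = gl.projects.get("group/repo")

    csv_content = ",col1,col2\\n1,data1,data2\\n"   # literal backslash-n, one line
    fixed = csv_content.replace("\\n", "\n")        # convert to real newlines

    project.files.create({
        "file_path": "data.csv",
        "branch": "master",
        "content": fixed,
        "commit_message": "Upload csv with real newlines",
    })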
All help is appreciated thank you :)","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":39,"Q_Id":56448721,"Users Score":0,"Answer":"You can simply use pip3 instead of pip to install Python 3 packages.","Q_Score":0,"Tags":"python,django,bash,macos,pip","A_Id":56448819,"CreationDate":"2019-06-04T17:28:00.000","Title":"How do you install Django 2x on pip when python 2x is your default version of python but you use python 3x on Bash","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":1,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am looking for an example of using python multiprocessing (i.e. a process-pool\/threadpool, job queue etc.) with hylang.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":163,"Q_Id":56465109,"Users Score":0,"Answer":"Note that a straightforward translation runs into a problem on macOS (which is not officially supported, but mostly works anyway): Hy sets sys.executable to the Hy interpreter, and multiprocessing relies on that value to start up new processes. You can work around that particular problem by calling (multiprocessing.set_executable hy.sys_executable), but then it will fail to parse the file containing the Hy code itself, which it does again for some reason in the child process. So there doesn't seem to be a good solution for using multiprocessing with Hy running natively on a Mac.\nWhich is why we have Docker, I suppose.","Q_Score":0,"Tags":"multiprocessing,python-multiprocessing,hy","A_Id":68931008,"CreationDate":"2019-06-05T17:11:00.000","Title":"Example of using hylang with python multiprocessing","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I want to create a program that can read and store the data from a qr scanning device but i don't know how to get the input from the barcode scanner as an image or save it in a variable to read it after with openCV","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":601,"Q_Id":56469264,"Users Score":0,"Answer":"Typically a barcode scanner automatically outputs to the screen, just like a keyboard (except really quickly), and there is an end of line character at the end (like and enter). \nUsing a python script all you need to do is start the script, connect a scanner, scan something, and get the input (STDIN) of the script. If you built a script that was just always receiving input and storing or processing them, you could do whatever you please with the data. \nA QR code is read in the same way that a barcode scanner works, immediately outputting the encoded data as text. 
Just collect this using the STDIN of a python script and you're good to go!","Q_Score":0,"Tags":"python,input,qr-code,barcode-scanner","A_Id":56469292,"CreationDate":"2019-06-05T23:27:00.000","Title":"How to use python with qr scanner devices?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I want to know briefly about all the available estimators like logisticregression or multinomial regression or SVMs which can be used for classification problems.\nThese are the three I know. Are there any others like these? and relatively how long they run or how accurate can they get than these?","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":26,"Q_Id":56471908,"Users Score":1,"Answer":"The following can be used for classification problems:\n\nLogistic Regression\nSVM\nRandomForest Classifier\nNeural Networks","Q_Score":0,"Tags":"python-3.x,scikit-learn","A_Id":56472750,"CreationDate":"2019-06-06T06:24:00.000","Title":"What are the available estimators which we can use as estimator in onevsrest classifier?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I'm trying to create a form in Django using Django form. \nI need two types of forms.\n\nA form that collect data from user, do some calculations and show the results to user without saving the data to database. I want to show the result to user once he\/she press button (calculate) next to it not in different page. \nA form that collect data from user, look for it in a column in google sheet, and if it's unique, add it to the column otherwise inform the user a warning that the data is not unique.\n\nThanks","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":299,"Q_Id":56473174,"Users Score":0,"Answer":"You could use AJAX and javascript to achieve this, but I suggest doing this only via javascript. This means you will have to rewrite the math in JS and output it directly in the element.\nPlease let me know if you need any help :)\nJasper","Q_Score":0,"Tags":"html,django,python-3.x,django-forms","A_Id":56474809,"CreationDate":"2019-06-06T07:55:00.000","Title":"How to use data obtained from a form in Django form?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I'm trying to create a form in Django using Django form. \nI need two types of forms.\n\nA form that collect data from user, do some calculations and show the results to user without saving the data to database. I want to show the result to user once he\/she press button (calculate) next to it not in different page. 
\nA form that collect data from user, look for it in a column in google sheet, and if it's unique, add it to the column otherwise inform the user a warning that the data is not unique.\n\nThanks","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":299,"Q_Id":56473174,"Users Score":0,"Answer":"Start by writing it in a way that the user submits the form (like any normal django form), you process it in your view, do the calculation, and return the same page with the calculated values (render the template). That way you know everything is working as expected, using just Django\/python.\nThen once that works, refactor to make your form submit the data using AJAX and your view to just return the calculation results in JSON. Your AJAX success handler can then insert the results in the current page.\nThe reason I suggest you do this in 2 steps is that you're a beginner with javascript, so if you directly try to build this with AJAX, and you're not getting the results you expect, it's difficult to understand where things go wrong.","Q_Score":0,"Tags":"html,django,python-3.x,django-forms","A_Id":56479353,"CreationDate":"2019-06-06T07:55:00.000","Title":"How to use data obtained from a form in Django form?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have two clients (separate docker containers) both writing to a Cassandra cluster.\nThe first is writing real-time data, which is ingested at a rate that the cluster can handle, albeit with little spare capacity. This is regarded as high-priority data and we don't want to drop any. The ingestion rate varies quite a lot from minute to minute. Sometimes data backs up in the queue from which the client reads and at other times the client has cleared the queue and is (briefly) waiting for more data.\nThe second is a bulk data dump from an online store. We want to write it to Cassandra as fast as possible at a rate that soaks up whatever spare capacity there is after the real-time data is written, but without causing the cluster to start issuing timeouts.\nUsing the DataStax Python driver and keeping the two clients separate (i.e. they shouldn't have to know about or interact with each other), how can I throttle writes from the second client such that it maximises write throughput subject to the constraint of not impacting the write throughput of the first client?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":246,"Q_Id":56474650,"Users Score":0,"Answer":"The solution I came up with was to make both data producers write to the same queue.\nTo meet the requirement that the low-priority bulk data doesn't interfere with the high-priority live data, I made the producer of the low-priority data check the queue length and then add a record to the queue only if the queue length is below a suitable threshold (in my case 5 messages).\nThe result is that no live data message can have more than 5 bulk data messages in front of it in the queue. 
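A sketch of step two in the Django answer above: a view that returns only the computed result as JSON for the AJAX handler; the field name and the calculation are hypothetical:

    from django.http import JsonResponse

    def calculate(request):
        # Return only the result as JSON so the page's AJAX success
        # handler can insert it without reloading the page.
        value = float(request.POST.get("value", 0))
        return JsonResponse({"result": value * 2})   # stand-in calculation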
If messages start backing up on the queue then the bulk data producer stops queuing more data until the queue length falls below the threshold.\nI also split the bulk data into many small messages so that they are relatively quick to process by the consumer.\nThere are three disadvantages of this approach:\n\nThere is no visibility of how many queued messages are low priority and how many are high priority. However we know that there can't be more than 5 low priority messages.\nThe producer of low-priority messages has to poll the queue to get the current length, which generates a small extra load on the queue server.\nThe threshold isn't applied strictly because there is a race between the two producers from checking the queue length to queuing a message. It's not serious because the low-priority producer queues only a single message when it loses the race and next time it will know the queue is too long and wait.","Q_Score":0,"Tags":"python,cassandra,datastax-python-driver","A_Id":56577125,"CreationDate":"2019-06-06T09:28:00.000","Title":"Cassandra write throttling with multiple clients","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am trying to extract\/sync data through Pardot API v4 into a local DB. Most APIs were fine, just used the query method with created_after search criteria. But the Visit API does not seem to support neither a generic query of all visit data, nor a created_after search criteria to retrieve new items. \nAs far as I can see I can only query Visits in the context of a Visitor or a Prospect.\nAny ideas why, and how could I implement synchronisation? (sorry, no access to Pardot DB...)\nI have been using pypardot4 python wrapper for convenience but would be happy to use the API natively if it makes any difference.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":185,"Q_Id":56494946,"Users Score":2,"Answer":"I managed to get a response from Pardot support, and they have confirmed that such response filtering is not available on the Visits API. 
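A minimal sketch of the queue-length throttle described in the Cassandra answer above; note that Queue.qsize() is approximate, which matches the race condition the answer acknowledges:

    import queue

    work_queue = queue.Queue()
    LOW_PRIORITY_THRESHOLD = 5

    def try_enqueue_bulk(item):
        # Low-priority producer: only enqueue while the queue is short, so a
        # live message never has more than ~5 bulk messages ahead of it.
        if work_queue.qsize() < LOW_PRIORITY_THRESHOLD:
            work_queue.put(item)
            return True
        return False                    # back off and retry later

    def enqueue_live(item):
        work_queue.put(item)            # high-priority producer is never throttled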
I asked for a feature request, but hardly any chance to get enough up-votes to be considered :(","Q_Score":1,"Tags":"python-3.x,pardot","A_Id":58770292,"CreationDate":"2019-06-07T13:05:00.000","Title":"Pardot Visit query API - generic query not available","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I make a Graph (not Digraph) from a data frame (Huge network) with networkx.\nI used this code to creat my graph:\nnx.from_pandas_edgelist(R,source='A',target='B',create_using=nx.Graph())\nHowever, in the output when I check the edge list, my source node and the target node has been changed based on the sort and I don't know how to keep it as the way it was in the dataframe (Need the source and target node stay as the way it was in dataframe).","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":309,"Q_Id":56509345,"Users Score":0,"Answer":"If you mean the order has changed, check out nx.OrderedGraph","Q_Score":1,"Tags":"python,pandas,networkx","A_Id":56509709,"CreationDate":"2019-06-08T19:04:00.000","Title":"How can I stop networkx to change the source and the target node?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have a Control-M job that calls a python script. The python script contains a function that returns True or False. \nIs it possible to make the job to fail when the function returns False?\nI have to use a shell scrip for this? If yes how should i create it?\nThank you","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":799,"Q_Id":56509567,"Users Score":0,"Answer":"Return a non-zero value -- i.e. 
call sys.exit(1) when the function returns False, and sys.exit(0) otherwise.","Q_Score":1,"Tags":"python,shell,control-m","A_Id":57284287,"CreationDate":"2019-06-08T19:37:00.000","Title":"How to fail a Control-M job when running a python function","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"We are working on early action prediction but we are unable to understand the dataset itself. The NTU RGB+D dataset is 1.3 TB; my laptop hard disk is 931 GB.\nFirst problem: how to deal with such a big dataset?\nSecond problem: how to understand the dataset?\nThird problem: how to load the dataset?\nThanks for the help","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":128,"Q_Id":56535700,"Users Score":0,"Answer":"The overall size of the dataset is 1.3 TB, and this size will decrease after processing the data and converting it into numpy arrays or something else.\nBut I do not think you will work on the entire dataset; which part of the dataset do you want to work on?","Q_Score":0,"Tags":"python,machine-learning","A_Id":56695079,"CreationDate":"2019-06-11T02:31:00.000","Title":"How to load NTU rgbd dataset?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I need to install Pytorch on a computer with no internet connection.\nI've tried finding information about this online but can't find a single piece of documentation.\nDo you know how I can do this? Is it even possible?","AnswerCount":2,"Available Count":1,"Score":1.0,"is_accepted":false,"ViewCount":5814,"Q_Id":56539865,"Users Score":13,"Answer":"An easy way with pip:\n\nCreate an empty folder\npip download torch using the connected computer. You'll get the pytorch package and all its dependencies.\nCopy the folder to the offline computer. You must be using the same python setup on both computers (this goes for virtual environments as well)\npip install * on the offline computer, in the copied folder. This installs all the packages in the correct order. You can then use pytorch.\n\nNote that this works for (almost) any kind of python package.","Q_Score":3,"Tags":"python,pytorch","A_Id":56539996,"CreationDate":"2019-06-11T08:49:00.000","Title":"How do I install Pytorch offline?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":1,"ERRORS":0,"REVIEW":0},{"Question":"I have 5 csv files that I am trying to put into one graph in python. In the first column of each csv file, all of the numbers are the same, and I want to treat these as the x values for each csv file in the graph. However, there are two more columns in each csv file (to make 3 columns total), but I just want to graph the second column as the 'y-values' for each csv file on the same graph, and ideally get 5 different lines, one for each file. 
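The Control-M answer above as a runnable wrapper; check_something is a hypothetical stand-in for the existing function:

    import sys

    def check_something() -> bool:
        ...   # existing logic that returns True or False

    if __name__ == "__main__":
        # Control-M treats a non-zero exit status as a failed job
        sys.exit(0 if check_something() else 1)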
Does anyone have any ideas on how I could do this?\nI have already uploaded my files to the variable file_list","AnswerCount":4,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":932,"Q_Id":56547899,"Users Score":0,"Answer":"Read the first file and create a list of lists in which each list filled by two columns of this file. Then read the other files one by one and append y column of them to the correspond index of this list.","Q_Score":0,"Tags":"python,pandas,csv,matplotlib,graph","A_Id":56548032,"CreationDate":"2019-06-11T16:14:00.000","Title":"Graphing multiple csv lists into one graph in python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have downloaded postgresql as well as django and python but when I try running the command \"python manage.py runserver\" it gives me an error saying \"Fatal: password authentication failed for user\" . I am trying to run it locally but am unable to figure out how to get past this issue. \nI was able to connect to the server in pgAdmin but am still getting password authentication error message","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":227,"Q_Id":56548140,"Users Score":0,"Answer":"You need to change the password used to connect to your local Database, and this can be done, modifying your setting.py file in \"DATABASES\" object","Q_Score":0,"Tags":"python,django,postgresql","A_Id":63626428,"CreationDate":"2019-06-11T16:32:00.000","Title":"Password authentication failed when trying to run django application on server","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":1,"REVIEW":0},{"Question":"I have a list of points that represent a needle\/catheter in a 3D volume. This volume is voxalized. I want to get all the voxels that the line that connects the point intersects. The line needs to go through all the points. \nIdeally, since the round needle\/catheter has a width I would like to be able to get the voxels that intersect the actual three dimensional object that is the needle\/catheter. (I imagine this is much harder so if I could get an answer to the first problem I would be very happy!)\nI am using the latest version of Anaconda (Python 3.7). I have seen some similar problems, but the code is always in C++ and none of it seems to be what I'm looking for. I am fairly certain that I need to use raycasting or a 3D Bresenham algorithm, but I don't know how. \nI would appreciate your help!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":628,"Q_Id":56565073,"Users Score":0,"Answer":"I ended up solving this problem myself. For anyone who is wondering how, I'll explain it briefly. \nFirst, since all the catheters point in the general direction of the z-axis, I got the thickness of the slices along that axis. Both input points land on a slice. I then got the coordinates of every intersection between the line between the two input points and the z-slices. 
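A sketch for the five-csv question above, assuming file_list already holds the paths as stated:

    import pandas as pd
    import matplotlib.pyplot as plt

    for path in file_list:                    # file_list is assumed to exist
        df = pd.read_csv(path)
        x, y = df.iloc[:, 0], df.iloc[:, 1]   # first column = x, second = y
        plt.plot(x, y, label=path)
    plt.legend()
    plt.show()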
Next, since I know the radius of the catheter and I can calculate the angle between the two points, I was able to draw ellipse paths on each slice around the points I had previously found (when you cut a cone at an angle, the cross-section is an ellipse). Then I got the coordinates of all the voxels on every slice along the z-axis and checked which voxels were within my ellipse paths. Those voxels are the ones that describe the volume of the catheter. If you would like to see my code please let me know.","Q_Score":0,"Tags":"python,3d,line,voxel,bresenham","A_Id":56839312,"CreationDate":"2019-06-12T14:54:00.000","Title":"How to get a voxel array from a list of 3D points that make up a line in a voxalized volume?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have a matrix of data with three indexes: i, j and k\nI want to enter some of the data in this matrix into a dictionary, and be able to find them afterwards in the dictionary.\nThe data itself cannot be the key for the dict.\nI would like the i,j,k set of indexes to be the key.\nI think I need to \"hash\" (some sort of hash) in one number from which I can get back the i,j,k. I need the result key to be ordered so that:\n\nkey1 for 1,2,3 is greater than\nkey2 for 2,1,3 is greater than\nkey3 for 2,3,1\n\nDo you know any algorithms to get the keys from this set of indexes? Or is there a better structure in python to do what I want to do?\nI can't know before I store the data how much I will get, so I think I cannot just append the data with its indexes.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":42,"Q_Id":56570667,"Users Score":0,"Answer":"Only immutable elements can be used as dictionary keys\n\nThis means you can't use a list (mutable data type) but you can use a tuple as the key of your dictionary: dict_name[(i, j, k)] = data","Q_Score":0,"Tags":"python,dictionary,indexing","A_Id":56570699,"CreationDate":"2019-06-12T21:34:00.000","Title":"Hash a set of three indexes in a key for dictionary?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I'm wondering how sqlite3 works when working in something like repl.it? I've been working on learning chatterbot on my own computer through Jupyter notebook. I'm a pretty amateur coder, and I have never worked with databases or SQL. When working from my own computer, I pretty much get the concept that when setting up a new bot with chatterbot, it creates a sqlite3 file, and then saves conversations to it to improve the chatbot. However, if I create a chatbot the same way only through repl.it and give lots of people the link, is the sqlite3 file saved online somewhere? Is it big enough to save lots of conversations from many people to really improve the bot well?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":217,"Q_Id":56571509,"Users Score":0,"Answer":"I am not familiar with repl.it, but the answer to all the questions you asked is yes. 
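A short sketch of the tuple-as-key idea from the dictionary answer above; the values are illustrative:

```python
# Tuples are immutable and hashable, so (i, j, k) can be a dict key directly
data_by_index = {}
data_by_index[(1, 2, 3)] = "value A"
data_by_index[(2, 1, 3)] = "value B"
data_by_index[(2, 3, 1)] = "value C"

print(data_by_index[(2, 1, 3)])   # lookup by the original indexes -> value B

# Tuples also compare element by element, so the keys have a natural order;
# pass reverse=True if the desired ranking runs the other way
print(sorted(data_by_index))      # [(1, 2, 3), (2, 1, 3), (2, 3, 1)]
```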
For example, I have made a simple web page that uses the chatterbot library. Then I used my own computer as a server using ngrok and gathered training data from users.","Q_Score":0,"Tags":"python,sqlite,chatterbot,repl.it","A_Id":56579507,"CreationDate":"2019-06-12T23:20:00.000","Title":"Chatterbot sqlite store in repl.it","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am attempting to download modules in Python through pip. No matter how many times I edit the PATH to show the pip.exe, it shows the same error:\n'pip' is not recognized as an internal or external command,\noperable program or batch file.\nI have changed the PATH many different times and ways to make pip usable, but these changes go unnoticed by the command prompt terminal.\nHow should I fix this?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":88,"Q_Id":56591058,"Users Score":0,"Answer":"Are you using PyCharm? If yes, change the environment to your desired directory and desired interpreter, if you have multiple interpreters available","Q_Score":0,"Tags":"python,path,pip","A_Id":56592051,"CreationDate":"2019-06-14T03:35:00.000","Title":"Command prompt does not recognize changes in PATH. How do I fix this?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":1,"REVIEW":0},{"Question":"Learning python workflow on android tablet\nI have been using Qpython3 but find it unsatisfactory\nCan anybody tell me how best to learn the python workflow using an android tablet... that is, what IDE works best with android and any links to pertinent information. Thank you.","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":2770,"Q_Id":56605247,"Users Score":1,"Answer":"Try Pydroid3 instead of QPython. It has almost all scientific Python libraries like NumPy, scikit-learn, matplotlib, pandas, etc. All you have to do is download the scripting library. You can save your file with the extension '.py' and then upload it to Drive and then to Colab.\nHope this will help.","Q_Score":0,"Tags":"android,python","A_Id":59534068,"CreationDate":"2019-06-14T21:20:00.000","Title":"Using python on android tablet","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am running Wing IDE 5 with Python 2.4. Everything was fine until I tried to debug and set a breakpoint. Arriving at the breakpoint I get an error message: \n\"The debug server encountered an error in probing locals or globals...\"\nAnd the Stack Data display looks like:\n locals \n globals \nI am not, to my knowledge, using a server-client relationship or anything special, I am simply debugging a single-threaded program running directly under the IDE. 
Anybody seen this or know how to fix it?\nWing IDE 5.0.9-1","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":239,"Q_Id":56615332,"Users Score":0,"Answer":"That's a pretty old version of Wing and likely a bug that's been fixed since then, so trying a newer version of Wing may solve it.\nHowever, if you are stuck with Python 2.4 then that's the latest that supports it (except that unofficially Wing 6 may work with Python 2.4 on Windows). \nA work-around would be to inspect data from the Debug Probe and\/or Watch tools (both available in the Tools menu).\nAlso, Clear Stored Value Errors in the Debug menu may allow Wing to load the data in a later run if the problem doesn't reoccur.","Q_Score":1,"Tags":"python,debugging,wing-ide","A_Id":56631566,"CreationDate":"2019-06-16T01:10:00.000","Title":"Wing IDE The debug server encountered an error in probing locals","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":1,"REVIEW":0},{"Question":"I am having a model Employee with a OneToOneField relationship with Django USER model. Now for some reason, I want to change it to the ManyToOne(ForeignKey) relationship with the User model.\nBoth these tables have data filled. Without losing the data how can I change it?\nCan I simply change the relationship and migrate?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":151,"Q_Id":56648907,"Users Score":0,"Answer":"makemigrations in this case would only correspond to an sql of Alter field you can see the result of makemigrations, the same sql will be executed when you migrate the model so the data would not be affected","Q_Score":1,"Tags":"python,django","A_Id":56649220,"CreationDate":"2019-06-18T12:13:00.000","Title":"How to change a OneToOneField into ForeignKey in django model with data in both table?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"In a Python 3.7 shell I get some unexpected results when escaping strings, see examples below. Got the same results in the Python 2.7 shell.\nA quick read in the Python docs seems to say that escaping can be done in strings, but doesn't seem to say it can't be used in the shell. (Or I have missed it).\nCan someone explain why escaping doesn't seem to work as expected.\nExample one:\ninput:\n>>> \"I am 6'2\\\" tall\"\noutput:\n'I am 6\\'2\" tall'\nwhile >>> print(\"I am 6'2\\\" tall\")\nreturns (what I expected):\nI am 6'2\" tall\n(I also wonder how the backslash, in the unexpected result, ends up behind the 6?)\nAnother example:\ninput:\n>>> \"\\tI'm tabbed in.\"\noutput:\n\"\\tI'm tabbed in.\"\nWhen inside print() the tab is replaced with a proper tab. (Can't show it, because stackoverflow seems the remove the tab\/spaces in front of the line I use inside a code block).","AnswerCount":2,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":24,"Q_Id":56651066,"Users Score":2,"Answer":"The interactive shell will give you a representation of the return value of your last command. 
It gives you that value using the repr() method, which tries to give a valid source code representation of the value; i.e. something you could copy and paste into code as is.\nprint on the other hand prints the contents of the string to the console, without regards whether it would be valid source code or not.","Q_Score":0,"Tags":"python,escaping","A_Id":56651143,"CreationDate":"2019-06-18T14:09:00.000","Title":"Does escaping work differently in Python shell? (Compared to code in file)","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am working with an Altera DE1-SoC board where I am reading data from a sensor using a C program. The data is being read continually, in a while loop and written to a text file. I want to read this data using a python program and display the data.\nThe problem is that I am not sure how to avoid collision during the read\/write from the file as these need to happen simultaneously. I was thinking of creating a mutex, but I am not sure how to implement it so the two different program languages can work with it. \nIs there a simple way to do this? Thanks.","AnswerCount":3,"Available Count":2,"Score":0.0665680765,"is_accepted":false,"ViewCount":524,"Q_Id":56652022,"Users Score":1,"Answer":"You could load a C library into Python using cdll.LoadLibrary and call a function to get the status of the C mutex. Then in Python if the C mutex is locking then don't read, and if it is unlocked then it can read.","Q_Score":1,"Tags":"python,c,io,mutex","A_Id":56652310,"CreationDate":"2019-06-18T15:01:00.000","Title":"Writing to a file in C while reading from it in python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am working with an Altera DE1-SoC board where I am reading data from a sensor using a C program. The data is being read continually, in a while loop and written to a text file. I want to read this data using a python program and display the data.\nThe problem is that I am not sure how to avoid collision during the read\/write from the file as these need to happen simultaneously. I was thinking of creating a mutex, but I am not sure how to implement it so the two different program languages can work with it. \nIs there a simple way to do this? Thanks.","AnswerCount":3,"Available Count":2,"Score":0.0665680765,"is_accepted":false,"ViewCount":524,"Q_Id":56652022,"Users Score":1,"Answer":"Operating system will take care of this as long as you can open that file twice (one for read and one for write). 
Just remember to flush from C code to make sure your data are actually written to disk, instead of being kept in cache in memory.","Q_Score":1,"Tags":"python,c,io,mutex","A_Id":56652594,"CreationDate":"2019-06-18T15:01:00.000","Title":"Writing to a file in C while reading from it in python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have written some bunch of python files and i want to make a window application from that. \nThe structure looks like this:\nSay, a.py,b.py,c.py are there. a.by is the file which i want application to open and it is basically a GUI which has import commands for \"b.py\" and \"c.py\". \nI know this might be a very basic problem,but i have just started to packaging and deployment using python.Please tell me how to do that , or if is there any way to do it by py2exe and pyinstaller?\nI have tried to do it by py2exe and pyinstaller from the info available on internet , but that seems to create the app which is running only \"a.py\" .It is not able to then use \"b\" and \"c \" as well.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1980,"Q_Id":56680262,"Users Score":0,"Answer":"I am not sure on how you do this with py2exe. I have used py2app before which is very similar, but it is for Mac applications. For Mac there is a way to view the contents of the application. In here you can add the files you want into the resources folder (where you would put your 'b.py' and 'c.py').\nI hope there is something like this in Windows and hope it helps.","Q_Score":0,"Tags":"python,file,window","A_Id":56680443,"CreationDate":"2019-06-20T06:34:00.000","Title":"Building a window application from the bunch of python files","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am using Tensorflow Object detection api. What I understood reading the faster_rcnn_inception_v2_pets.config file is that num_steps mean the total number of steps and not the epochs. But then what is the point of specifying batch_size?? Lets say I have 500 images in my training data and I set batch size = 5 and num_steps = 20k. Does that mean number of epochs are equal to 200 ??\nWhen I run model_main.py it shows only the global_steps loss. 
So if these global steps are not the epochs then how should I change the code to display train loss and val loss after each step and also after each epoch.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":2207,"Q_Id":56686630,"Users Score":1,"Answer":"So you are right with your assumption, that you have 200 epochs.\nI had a similar problem with the not showing of loss.\nmy solution was to go to the model_main.py file and then insert\ntf.logging.set_verbosity(tf.logging.INFO)\nafter the import stuff.\nthen it shows you the loss after each 100 steps.\nyou could change the set_verbosity if you want to have it after every epoch ;)","Q_Score":1,"Tags":"python,tensorflow,object-detection-api","A_Id":57010899,"CreationDate":"2019-06-20T13:04:00.000","Title":"How to display number of epochs in tensorflow object detection api with Faster Rcnn?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I want to be able to search for any user using facebook API v3.3 in python 3. \nI have written a function that can only return my details and that's fine, but now I want to search for any user and I am not succeeding so far, it seems as if in V3.3 I can only search for places and not users\n\nThe following function search and return a place, how can I modify it so that I can able to search for any Facebook users?\n\ndef search_friend():\n graph = facebook.GraphAPI(token)\n find_user = graph.search(q='Durban north beach',type='place')\n print(json.dumps(find_user, indent=4))","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":116,"Q_Id":56687213,"Users Score":1,"Answer":"You can not search for users any more, that part of the search functionality has been removed a while ago.\nPlus you would not be able to get any user info in the first place, unless the user in question logged in to your app first, and granted it permission to access at least their basic profile info.","Q_Score":0,"Tags":"python-3.x,facebook-graph-api","A_Id":56687408,"CreationDate":"2019-06-20T13:36:00.000","Title":"how can i search for facebook users ,using facebook API(V3.3) in python 3","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I would like to use the microphone of peppers tablet to implement speech recognition.\nI already do speech recognition with the microphones in the head. \nBut the audio I get from the head microphones is noisy due to the fans in the head and peppers joints movement.\nDoes anybody know how to capture the audio from peppers tablet? \nI am using Pepper 2.5. 
and would like to solve this with python.\nThanks!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":87,"Q_Id":56699117,"Users Score":0,"Answer":"With NAOqi 2.5 on Pepper it is not possible to access the tablet's microphone.\nYou can either upgrade to 2.9.x and use the Android API for this, or stay in 2.5 and use Python to get the sound from Pepper's microphones.","Q_Score":0,"Tags":"python,speech-recognition,tablet,microphone,pepper","A_Id":62589810,"CreationDate":"2019-06-21T07:53:00.000","Title":"Record Audio from Peppers Tablet Microphone","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am new with zarr, HDF5 and LMDB. I have converted data from HDF5 to Zarr but i got many files with extension .n (n from 0 to 31). I want to have just one file with .zarr extension. I tried to use LMDB (zarr.LMDBStore function) but i don't understand how to create .mdb file ? Do you have an idea how to do that ? \nThank you !","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":194,"Q_Id":56701950,"Users Score":0,"Answer":"@kish When trying your solution i got this error:\nfrom comtypes.gen import Access\nImportError: cannot import name 'Access'","Q_Score":0,"Tags":"python,hdf5,lmdb,zarr","A_Id":56704233,"CreationDate":"2019-06-21T10:48:00.000","Title":"How to create .mdb file?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I've got a partially trained model in Keras, and before training it any further I'd like to change the parameters for the dropout, l2 regularizer, gaussian noise etc. I have the model saved as a .h5 file, but when I load it, I don't know how to remove these regularizing layers or change their parameters. Any clue as to how I can do this?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":356,"Q_Id":56706219,"Users Score":0,"Answer":"Create a model with your required hyper-parameters and load the parameters to the model using load_weight().","Q_Score":1,"Tags":"python,python-3.x,keras,regularized","A_Id":56706524,"CreationDate":"2019-06-21T15:20:00.000","Title":"How to remove regularisation from pre-trained model?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"how to display contact without company in odoo 11 , exemple : if mister X in Company Y, in odoo, display this mister and company : Y, X. But i want only X. 
thanks","AnswerCount":2,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":1041,"Q_Id":56707951,"Users Score":2,"Answer":"That name comes via name_get method written inside res.partner.py You need to extend that method in your custom module and remove company name as a prefix from the contact name.","Q_Score":1,"Tags":"python,xml,odoo","A_Id":56713226,"CreationDate":"2019-06-21T17:20:00.000","Title":"How to display contact without company in odoo?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have one Python3 script that exits without any traceback from time to time.\nSome said in another question that it was caused by calling sys.exit, but I am not pretty sure whether this is the case.\nSo how can I make Python3 script always exit with traceback, of course except when it is killed with signal 9?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":32,"Q_Id":56718917,"Users Score":0,"Answer":"It turns out that the script crashed when calling some function from underlying so, and crashed without any trackback. .","Q_Score":0,"Tags":"python-3.x,traceback","A_Id":57567523,"CreationDate":"2019-06-22T20:23:00.000","Title":"Python3 script exit with any traceback?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I get this error:\n\nCould not install packages due to an EnvironmentError: Could not find a suitable TLS CA certificate bundle, invalid path: \/home\/yosra\/Desktop\/CERT.RSA\n\nWhen I run: $ virtualenv venv\nSo I put a random CERT.RSA on the Desktop which worked and I created my virtual environment, but then when I run: pip install -r requirements.txt\nI got this one:\n\nCould not install packages due to an EnvironmentError: HTTPSConnectionPool(host='github.com', port=443): Max retries exceeded with url: \/KristianOellegaard\/django-hvad\/archive\/2.0.0-beta.tar.gz (Caused by SSLError(SSLError(0, 'unknown error (_ssl.c:3715)'),))\n\nI feel that these 2 errors are linked to each other, but I want to know how can I fix the first one?","AnswerCount":5,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":28651,"Q_Id":56738345,"Users Score":0,"Answer":"We get this all the time for various 'git' actions. We have our own CA + intermediary and we don't customize our software installations enough to accomodate that fact.\nOur general fix is update your ca-bundle.crt with the CA cert pems via either concatenation or replacement.\ne.g. 
cat my_cert_chain.pem >> $(python -c \"import certifi; print(certifi.where())\")\nThis works great if you have an \/etc\/pki\/tls\/certs directory, but with Python, python -c \"import certifi; print(certifi.where())\" tells you the location of python's ca-bundle.crt file.\nAlthough it's not a purist Python answer, since we're not adding a new file \/ path, it solves a lot of other certificate problems with other software when you understand the underlying issue.\nI recommended concatenating in this case as I don't know what else the file is used for vis-a-vis pypi.","Q_Score":14,"Tags":"python,ssl,pip","A_Id":71769428,"CreationDate":"2019-06-24T14:06:00.000","Title":"Could not install packages due to an EnvironmentError: Could not find a suitable TLS CA certificate bundle, invalid path","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":1,"REVIEW":0},{"Question":"I get this error:\n\nCould not install packages due to an EnvironmentError: Could not find a suitable TLS CA certificate bundle, invalid path: \/home\/yosra\/Desktop\/CERT.RSA\n\nWhen I run: $ virtualenv venv\nSo I put a random CERT.RSA on the Desktop which worked and I created my virtual environment, but then when I run: pip install -r requirements.txt\nI got this one:\n\nCould not install packages due to an EnvironmentError: HTTPSConnectionPool(host='github.com', port=443): Max retries exceeded with url: \/KristianOellegaard\/django-hvad\/archive\/2.0.0-beta.tar.gz (Caused by SSLError(SSLError(0, 'unknown error (_ssl.c:3715)'),))\n\nI feel that these 2 errors are linked to each other, but I want to know how can I fix the first one?","AnswerCount":5,"Available Count":2,"Score":0.0399786803,"is_accepted":false,"ViewCount":28651,"Q_Id":56738345,"Users Score":1,"Answer":"I received this error while running the command \"pip install flask\" in PyCharm.\nIf you look at the error, you will see that the error points to \"packages due to an EnvironmentError: Could not find a suitable TLS CA certificate bundle -- Invalid path\".\nI solved this by removing the environment variable \"REQUESTS_CA_BUNDLE\" OR you can just change the name of the environment variable \"REQUESTS_CA_BUNDLE\" to some other name.\nRestart PyCharm and this should be solved.\nThank you!","Q_Score":14,"Tags":"python,ssl,pip","A_Id":65888336,"CreationDate":"2019-06-24T14:06:00.000","Title":"Could not install packages due to an EnvironmentError: Could not find a suitable TLS CA certificate bundle, invalid path","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":1,"REVIEW":0},{"Question":"Is it possible to productionize Python code in a .NET\/C# environment without installing Python and without converting the Python code to C#, i.e. 
just deploy the code as is?\nI know installing the Python language would be the reasonable thing to do but my hesitation is that I just don't want to introduce a new language to my production environment and deal with its testing and maintenance complications, since I don't have enough manpower who know Python to take care of these issues.\nI know IronPython is built on CLR, but don't know how exactly it can be hosted and maintained inside .NET. Does it enable one to treat PYthon code as a \"package\" that can be imported into C# code, without actually installing Python as a standalone language? How can IronPython make my life easier in this situation? Can python.net give me more leverage?","AnswerCount":5,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":2227,"Q_Id":56743561,"Users Score":5,"Answer":"IronPython is limited compared to running Python with C based libraries needing the Python Interpreter, not the .NET DLR. I suppose it depends how you are using the Python code, if you want to use a lot of third party python libraries, i doubt that IronPython will fit your needs.\nWhat about building a full Python application but running it all from Docker? \nThat would require your environments to have Docker installed, but you could then also deploy your .NET applications using Docker too, and they would all be isolated and not dirty your 'environment'.\nThere are base docker images out there that are specifically for Building Python and .NET Project and also for running.","Q_Score":7,"Tags":"c#,python,.net,ironpython,python.net","A_Id":56885931,"CreationDate":"2019-06-24T20:31:00.000","Title":"Running Python Code in .NET Environment without Installing Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I'm using windows with Python 3.7.3, I installed NumPy via command prompt with \"pip install NumPy\", and it installed NumPy 1.16.4 perfectly. However, when I run \"import numpy as np\" in a program, it says \"ModuleNotFoundError: No module named 'numpy'\"\nI only have one version of python installed, and I don't know how I can fix this. How do I fix this?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":696,"Q_Id":56755112,"Users Score":0,"Answer":"python3 is not supported under NumPy 1.16.4. Try to install a more recent version of NumPy:\npip uninstall numpy\npip install numpy","Q_Score":0,"Tags":"python,numpy,import,package","A_Id":56755246,"CreationDate":"2019-06-25T13:23:00.000","Title":"No module named 'numpy' Even When Installed","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have a flask application installed on cpanel and it's giving me some error while the application is running. Application makes an ajax request from the server, but server returns the response with a 500 error. 
I have no idea how to get information about what causes this error.\nThere's no information in the cPanel error log; is it possible to create a log file, in the same application folder or something, that records errors when they occur?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":424,"Q_Id":56764314,"Users Score":1,"Answer":"When you log into cPanel go to the Errors menu and it will give a more detailed response to your errors there. You can also try and check: \/var\/log\/apache\/error.log or \/var\/log\/daemon.log","Q_Score":0,"Tags":"python,flask","A_Id":56767500,"CreationDate":"2019-06-26T02:23:00.000","Title":"How I get the error log generates from a flask app installed on CPanel?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":1,"REVIEW":0},{"Question":"I'm trying to use PyAV to output video to a V4l2 loopback device (\/dev\/video1), but I can't figure out how to do it. It uses the avformat_write_header() from libav* (ffmpeg bindings).\nI've been able to get ffmpeg to output to the v4l2 device from the CLI but not from Python.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":409,"Q_Id":56766273,"Users Score":0,"Answer":"Found the solution. The way to do this is:\n\nSet the container format to v4l2\nSet the stream format as \"rawvideo\"\nSet the framerate (if it's a live stream, set the framerate to 1 fps higher than the stream is so that you don't get an error)\nSet pixel format to either RGB24 or YUV420","Q_Score":0,"Tags":"python,ffmpeg,v4l2","A_Id":56783972,"CreationDate":"2019-06-26T06:12:00.000","Title":"How can I output to a v4l2 driver using FFMPEG's avformat_write_header?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"This is a question on a conceptual level. \nI'm building a piece of small-scale algorithmic trading software, and I am wondering how I should set up the data collection\/retrieval within that system. The system should be fully autonomous. \nCurrently the algorithm that I want to trade live runs at a very low frequency; however, I would like to be able to trade at a higher frequency in the future, and therefore I think that it would be a good idea to set up the data collection using a websocket to get real time trades straight away. I can aggregate these later if need be. \nMy first question is: considering the fact that the data will be real time, can I use a CSV-file for storage in the beginning, or would you recommend something more substantial?\nIn any case, the data collection would proceed as a daemon in my application. 
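Returning to the Flask-on-cPanel question above: a minimal sketch of writing errors to a log file inside the application folder with the standard logging module (the file name and route are illustrative):

```python
import logging
from flask import Flask

app = Flask(__name__)

# Send warnings and errors to a file next to the application code
handler = logging.FileHandler("app_errors.log")
handler.setLevel(logging.WARNING)
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s: %(message)s"))
app.logger.addHandler(handler)

@app.route("/fail")
def fail():
    app.logger.error("something went wrong")  # recorded in app_errors.log
    return "logged", 500
```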
\nMy second question is: are there any frameworks available to handle real-time incoming data and keep the database consistent while the rest of the software is querying it, to avoid conflicts?\nMy third and final question is: do you believe it is a wise approach to use a websocket in this case or would it be better to query every time data is needed for the application?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":88,"Q_Id":56769559,"Users Score":0,"Answer":"CSV is a nice exchange format, but as it is based on a text file, it is not good for real-time updates. Only my opinion, but I cannot imagine a reason to prefer that to a database.\nIn order to handle real-time conflicts, you will later need a professional-grade database. PostgreSQL has the reputation of being robust, MariaDB is probably a correct choice too. You could use a lighter database like SQLite in development mode, but beware of the slight differences: it is easy to write something that will work on one database and will break on another one. On the other hand, if portability across databases is important, you should use at least 2 databases: one at development time and a different one at integration time.\nA question to ask yourself immediately is whether you want a relational database or a noSQL one. The former ensures ACID (Atomicity, Consistency, Isolation, Durability) transactions, the latter offers greater scalability.","Q_Score":0,"Tags":"python,database,algorithmic-trading","A_Id":56769956,"CreationDate":"2019-06-26T09:33:00.000","Title":"How to set up data collection for small-scale algorithmic trading software","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":1},{"Question":"So when I run my python selenium script through Jenkins, how should I write the driver = webdriver.Chrome()\nHow should I put the chrome webdriver EXE in jenkins?\nWhere should I put it?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":44,"Q_Id":56784344,"Users Score":0,"Answer":"If you have added your repository path in Jenkins during job configuration, Jenkins will create a virtual copy of your workspace. So, as long as the webdriver file is somewhere in your project folder structure and as long as you are using a relative path to reference it in your code, there shouldn't be any issues with respect to driver invocation.\nYour question also depends on several parameters, such as:\n1. Whether you are using Maven to run the test\n2. Whether you are running tests on Jenkins locally or on a remote machine using Selenium Grid Architecture.","Q_Score":0,"Tags":"python,selenium,jenkins,webdriver","A_Id":56784544,"CreationDate":"2019-06-27T05:15:00.000","Title":"So when I run my python selenium script through jenkins, how should I write the 'driver = webdriver.Chrome()'?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I need to export my DataFrame to Excel. Everything is good but I need two \"rows\" of headers in my output file. That means I need two column headers. 
I don't know how to export it and make double headers in a DataFrame. My DataFrame is created from a dictionary, but I need to add an extra header above.\nI tried a few things but nothing gave me a good result. I want to have, on the first level, a header for every three columns and, on the second level, a header for each column. They must be different.\nI expect output with two headers above the columns.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1093,"Q_Id":56787460,"Users Score":0,"Answer":"Had a similar issue. Solved it by persisting cell-by-cell using worksheet.write(i, j, df.iloc[i,j]), with i starting after the header rows.","Q_Score":0,"Tags":"python,pandas,export-to-excel","A_Id":59754774,"CreationDate":"2019-06-27T08:57:00.000","Title":"Multiple header in Pandas DataFrame to_excel","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have a numpy array as:\ngroups=np.array([('Species1',), ('Species2', 'Species3')], dtype=object).\nWhen I ask np.where(groups == ('Species2', 'Species3')) or even np.where(groups == groups[1]) I get an empty reply: (array([], dtype=int64),)\nWhy is this and how can I get the indexes for such an element?","AnswerCount":4,"Available Count":1,"Score":0.049958375,"is_accepted":false,"ViewCount":1220,"Q_Id":56788435,"Users Score":1,"Answer":"np.where(groups == ('Species2', 'Species3')) does not search groups for the tuple ('Species2', 'Species3'); it searches for 'Species2' and 'Species3' separately, element-wise, as it would if you had a complete (rectangular) array like this:\ngroups=np.array([('Species1',''), ('Species2', 'Species3')], dtype=object)","Q_Score":1,"Tags":"python,numpy","A_Id":56789899,"CreationDate":"2019-06-27T09:51:00.000","Title":"How can I find the index of a tuple inside a numpy array?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have constructed 2 graphs and calculated the eigenvector centrality of each node. Each node can be considered as an individual project contributor. Consider 2 different rankings of project contributors. They are ranked based on the eigenvector of the node. \nRanking #1:\nRank 1 - A\nRank 2 - B\nRank 3 - C\nRanking #2:\nRank 1 - B\nRank 2 - C\nRank 3 - A\nThis is a very small example but in my case, I have almost 400 contributors and 4 different rankings. My question is how can I merge all the rankings and get an aggregate ranking. Now I can't just simply add the eigenvector centralities and divide by the number of rankings. I was thinking of using the Khatri-Rao product or Kronecker product to get the result. \nCan anyone suggest how I can achieve this?\nThanks in advance.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":83,"Q_Id":56790761,"Users Score":0,"Answer":"Rank both graphs separately, so each node gets a rank in both graphs, then do simple matrix addition. Now normalize the rank. This should keep relationships like rank1>rank2>rank3>rank4 true and relationships like rank1+rank1>rank1+rank2 true. 
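A small numpy sketch of the add-then-normalize aggregation described in the ranking answer above, assuming each ranking is a vector of eigenvector centralities aligned by contributor index (the values are illustrative):

```python
import numpy as np

rankings = np.array([
    [0.9, 0.5, 0.1],   # centralities for contributors A, B, C in graph 1
    [0.2, 0.8, 0.6],   # centralities in graph 2; add more rows for 4 rankings
])

aggregate = rankings.sum(axis=0)   # simple element-wise (matrix) addition
aggregate /= aggregate.sum()       # normalize so the scores sum to 1

print(np.argsort(-aggregate))      # contributor indexes, best rank first
```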
I don't know how it would help you taking the Khatri-Rao product of the matrix. That would make you end up with more than 400 nodes. Then you would need to compress them back to 400 nodes in-order to have 400 ranked nodes at the end. Who told you to use Khatri-Rao product?","Q_Score":0,"Tags":"python,data-mining,matrix-multiplication,data-analysis,ranking","A_Id":58783825,"CreationDate":"2019-06-27T12:15:00.000","Title":"Aggregate Ranking using Khatri-Rao product","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I want to use write code in Javascript, in the Spyder IDE, that is meant for Python. I have read that Spyder supports multiple languages but I'm not sure how to use it. I have downloaded Nodejs and added it to the environment variables. I'd like to know how get Javascript syntax colouring, possibly auto-completion and Help options as well ,and I'd also like to know how to conveniently execute the .js file and see the results in a console.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":3235,"Q_Id":56810244,"Users Score":5,"Answer":"(Spyder maintainer here) Sorry but for now we only support Python for all the functionality that you are looking for (code completion, help and code execution).\nOur next major version (Spyder 4, to be released later in 2019) will have the ability to give code completion and linting for other programming languages, but it'll be more of a power-user feature than something anyone can use.","Q_Score":5,"Tags":"javascript,python,spyder","A_Id":56813493,"CreationDate":"2019-06-28T16:20:00.000","Title":"How to use Javascript in Spyder IDE?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I'm trying to make a temporary table a create on pyspark available via Thrift. 
My final goal is to be able to access that from a database client like DBeaver using JDBC.\nI'm testing first using beeline.\nThis is what i'm doing.\n\nStarted a cluster with one worker in my own machine using docker and added spark.sql.hive.thriftServer.singleSession true on spark-defaults.conf\nStarted Pyspark shell (for testing sake) and ran the following code:\nfrom pyspark.sql import Row\nl = [('Ankit',25),('Jalfaizy',22),('saurabh',20),('Bala',26)]\nrdd = sc.parallelize(l)\npeople = rdd.map(lambda x: Row(name=x[0], age=int(x[1])))\npeople = people.toDF().cache()\npeebs = people.createOrReplaceTempView('peebs')\nresult = sqlContext.sql('select * from peebs')\nSo far so good, everything works fine.\nOn a different terminal I initialize spark thrift server:\n.\/sbin\/start-thriftserver.sh --hiveconf hive.server2.thrift.port=10001 --conf spark.executor.cores=1 --master spark:\/\/172.18.0.2:7077\nThe server appears to start normally and I'm able to see both pyspark and thrift server jobs running on my spark cluster master UI.\nI then connect to the cluster using beeline\n.\/bin\/beeline\nbeeline> !connect jdbc:hive2:\/\/172.18.0.2:10001\nThis is what I got\n\nConnecting to jdbc:hive2:\/\/172.18.0.2:10001\n Enter username for jdbc:hive2:\/\/172.18.0.2:10001: \n Enter password for jdbc:hive2:\/\/172.18.0.2:10001: \n 2019-06-29 20:14:25 INFO Utils:310 - Supplied authorities: 172.18.0.2:10001\n 2019-06-29 20:14:25 INFO Utils:397 - Resolved authority: 172.18.0.2:10001\n 2019-06-29 20:14:25 INFO HiveConnection:203 - Will try to open client transport with JDBC Uri: jdbc:hive2:\/\/172.18.0.2:10001\n Connected to: Spark SQL (version 2.3.3)\n Driver: Hive JDBC (version 1.2.1.spark2)\n Transaction isolation: TRANSACTION_REPEATABLE_READ\n\nSeems to be ok.\nWhen I list show tables; I can't see anything.\n\nTwo interesting things I'd like to highlight is:\n\nWhen I start pyspark I get these warnings\n\nWARN ObjectStore:6666 - Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 1.2.0\nWARN ObjectStore:568 - Failed to get database default, returning NoSuchObjectException\nWARN ObjectStore:568 - Failed to get database global_temp, returning NoSuchObjectException\n\nWhen I start the thrift server I get these:\n\nrsync from spark:\/\/172.18.0.2:7077\n ssh: Could not resolve hostname spark: Name or service not known\n rsync: connection unexpectedly closed (0 bytes received so far) [Receiver]\n rsync error: unexplained error (code 255) at io.c(235) [Receiver=3.1.2]\n starting org.apache.spark.sql.hive.thriftserver.HiveThriftServer2, logging to ...\n\n\nI've been through several posts and discussions. I see people saying we can't have temporary tables exposed via thrift unless you start the server from within the same code. If that's true how can I do that in python (pyspark)?\nThanks","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1684,"Q_Id":56820752,"Users Score":0,"Answer":"createOrReplaceTempView creates an in-memory table. The Spark thrift server needs to be started on the same driver JVM where we created the in-memory table.\nIn the above example, the driver on which the table is created and the driver running STS(Spark Thrift server) are different.\nTwo options\n1. Create the table using createOrReplaceTempView in the same JVM where the STS is started.\n2. 
Use a backing metastore, and create tables using org.apache.spark.sql.DataFrameWriter#saveAsTable so that tables are accessible independent of the JVM (in fact, without any Spark driver).\nRegarding the errors:\n1. Relates to the client and server metastore versions.\n2. Seems like some rsync script is trying to decode the spark:\/\/ URL.\nNeither seems to be related to the issue.","Q_Score":1,"Tags":"python,apache-spark,pyspark,thrift,spark-thriftserver","A_Id":56824231,"CreationDate":"2019-06-29T20:41:00.000","Title":"How to view pyspark temporary tables on Thrift server?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":1,"REVIEW":0},{"Question":"Basically I want to store a button's status server-side so that it can persist through browser closes and page refreshes.\nHere's what the user is trying to do\n\nThe user searches in a search bar for a list of products.\nWhen the results show up, they are shown a button that triggers an action for each individual product. They are also shown a master button that can trigger the same action for each product that is listed.\nUpon clicking the button, I want to disable it for 30 seconds and have this persist through page refreshes and browser close.\n\nWhat I've done\nCurrently I have this implemented using AJAX calls on the client side, but if the page refreshes it resets the button and they can click it again. So I looked into using javascript's localStorage function, but in my situation it would be better just to store this on the server.\nWhat I think needs to happen\n\nCreate a model in my Django app for a button. Its attributes would be its status and maybe some metadata (last clicked, etc).\nWhenever the client requests a list of products, the views will send the list of products and it will be able to query the database for the respective button's status and implement a disabled attribute directly into the template.\nIf the button is available to be pressed then the client side will make an AJAX POST call to the server and the server will check the button's status. If it's available it will perform the action, update the button's status to disabled for 30 seconds, and send this info back to the client in order to reflect it in the DOM.\n\nA couple of questions\n\nIs it just a matter of creating a model for the buttons and then querying the database like normal?\nHow do I have Django update the database after 30 seconds to make a button's status go from disabled back to enabled?\nWhen the user presses the button it's going to make it disabled, but it will only be making it disabled in the database. What is the proper way to actually disable the button without a page refresh on the client side? 
Do I just disable the button in javascript for 30 seconds, and then if they try to refresh the page then the views will see the request for the list of products and it will check the database for each button's status and it will serve the button correctly?\n\nThank you very much for the help!!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":246,"Q_Id":56821634,"Users Score":1,"Answer":"Is it just a matter of creating a model for the buttons and then\n querying the database like normal?\n\nModel could be something like Button (_id, last_clicked as timestamp, user_id)\nWhile querying you could simply sort by timestamp and LIMIT 1 to get the last click. By not overwriting the original value it would ensure a bit faster write. \nIf you don't want the buttons to behave similarly for each user you will have to create a mapping of the button with the user who clicked it. Even if your current requirements don't need them, create an extensible solution where mapping the user with this table is quite easy. \n\nHow do I have Django update the database after 30 seconds to make a\n button's status go from disabled back to enabled?\n\nI avoid changing the database without a client request mapped to the change. This ensures the concurrency and access controls. And also has higher predictability for the current state of data. Following that, I would suggest not to update the db after the time delta(30 sec). \nInstead of that you could simply compare the last_clicked timestamp and calculate the delta either server side before sending the response or in client side. \nThis decision could be important, consider a scenario when the client has a different time on his system than the server time.\n\nWhen the user presses the button it's going to make it disabled, but\n it will only be making it disabled in the database. What is the proper\n way to actually disable the button without a page refresh on the\n client side? Do I just disable the button in javascript for 30\n seconds, and then if they try to refresh the page then the views will\n see the request for the list of products and it will check the\n database for each button's status and it will serve the button\n correctly?\n\nYou'd need to do a POST request to communicate the button press timestamp with the db. You'd also need to ensure that the POST request is successful as an unsuccessful request would not persist the data in case of browser closure. \nAfter doing the above two you could disable the button only from the client side without trying the get the button last_clicked timestamp.","Q_Score":0,"Tags":"javascript,python,django,ajax","A_Id":56827841,"CreationDate":"2019-06-30T00:14:00.000","Title":"How do I store information about a front-end button on the Django server?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":1},{"Question":"I am creating a Discord bot that needs to check all messages to see if a certain string is in an embed message created by any other Discord bot. 
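A minimal sketch of the timestamp-comparison idea from the Django button answer above, assuming a hypothetical Button model with a nullable last_clicked DateTimeField:

```python
from datetime import timedelta
from django.utils import timezone

COOLDOWN = timedelta(seconds=30)

def is_enabled(button):
    # button.last_clicked is assumed to be a nullable DateTimeField
    if button.last_clicked is None:
        return True
    return timezone.now() - button.last_clicked >= COOLDOWN

def press(button):
    # called from the view handling the AJAX POST
    if not is_enabled(button):
        return False   # still cooling down; report this in the response
    button.last_clicked = timezone.now()
    button.save()
    return True        # perform the action; client disables the button for 30 s
```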
I know I can use message.content to get a string of the message a user has sent but how can I do something similar with bot embeds in Python?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":467,"Q_Id":56822722,"Users Score":0,"Answer":"Use message.embeds instead to get the embed string content","Q_Score":1,"Tags":"python,bots,embed,message,discord","A_Id":56823113,"CreationDate":"2019-06-30T05:53:00.000","Title":"Using a Discord bot, how do I get a string of an embed message from another bot","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have parsed the content of a file to a variable that looks like this;\n\nb'8,092436.csv,,20f85'\n\nI would now like to find out what kind of filetype this data is coming from, with;\n\nprint(magic.from_buffer(str(decoded, 'utf-8'), mime=True))\n\nThis prints;\n\napplication\/octet-stream\n\nAnyone know how I would be able to get a result saying 'csv'?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":458,"Q_Id":56833465,"Users Score":1,"Answer":"Use magic on the original file. \nYou also need to take into account that CSV is really just a text file that uses particular characters to delimit the content. There is no explicit identifier that indicates that the file is a CSV file. Even then the CSV module needs to be configured to use the appropriate delimiters.\nThe delimiter specification of a CSV file is either defined by your program or needs to be configured (see importing into Excel as an example, you are presented with a number of options to configure the type of CSV to import).","Q_Score":1,"Tags":"python","A_Id":56833965,"CreationDate":"2019-07-01T09:49:00.000","Title":"Python magic is not recognizing the correct content","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am looking to use a public API running on a distant server from within my company. For security reasons, I am supposed to redirect all the traffic via the company's PROXY. Does anyone know how to do this in Python?","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":312,"Q_Id":56841342,"Users Score":1,"Answer":"Set the HTTP_PROXY environment variable before starting your python script\ne.g. export HTTP_PROXY=http:\/\/proxy.host.com:8080","Q_Score":0,"Tags":"python,proxy,api","A_Id":56841343,"CreationDate":"2019-07-01T15:01:00.000","Title":"Configure proxy with python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am looking to use a public API running on a distant server from within my company. For security reasons, I am supposed to redirect all the traffic via the company's PROXY. 
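Illustrating the message.embeds answer above, a sketch of scanning incoming embeds in a discord.py event handler (the target string is a placeholder):

```python
# inside a discord.py client; a sketch, not a full bot
async def on_message(message):
    for embed in message.embeds:              # embeds posted by other bots
        parts = [embed.title, embed.description]
        parts += [field.value for field in embed.fields]
        if any(part and "certain string" in str(part) for part in parts):
            print("match found in an embed")
```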
Does anyone know how to do this in Python?","AnswerCount":2,"Available Count":2,"Score":0.1973753202,"is_accepted":false,"ViewCount":312,"Q_Id":56841342,"Users Score":2,"Answer":"Directly in python you can do:\nos.environ[\"HTTP_PROXY\"] = \"http:\/\/proxy.host.com:8080\"\nOr, as mentioned before by @hardillb, launch this on a terminal:\nexport HTTP_PROXY=http:\/\/proxy.host.com:8080","Q_Score":0,"Tags":"python,proxy,api","A_Id":56841409,"CreationDate":"2019-07-01T15:01:00.000","Title":"Configure proxy with python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I just wonder how the apache server can know the domain you come from; you can see that in the Vhost configuration.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":19,"Q_Id":56875383,"Users Score":0,"Answer":"By a reverse DNS lookup of the IP; socket.gethostbyaddr(). \nResults vary; many IPs from consumer ISPs won't resolve to anything interesting, because of NAT and just not maintaining a generally informative reverse zone.","Q_Score":0,"Tags":"python","A_Id":56875761,"CreationDate":"2019-07-03T17:35:00.000","Title":"How can I find domain that has been used from a client to reach my server in python socket?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am using Databricks to connect to an Eventhub, where each message coming from the EventHub may be very different from another.\nIn the message, I have a body and an id.\nI am looking for performance, so I am avoiding collecting data or doing unnecessary processing, and I also want to do the saving in parallel by partition. However I am not sure how to do this in a proper way. \nI want to append the body of each ID to a different AND SPECIFIC table in batches; the ID will give me the information I need to save to the right table. So in order to do that I have been trying 2 approaches:\n\nPartitioning: Repartition(numPartitions, ID) -> ForeachPartition\nGrouping: groupBy('ID').apply(myFunction) #@pandas_udf GROUPED_MAP\n\nApproach 1 doesn't look very attractive to me; the repartition process looks kind of unnecessary and I saw in the docs that even if I set a column as a partition, it may save many ids of that column in a single partition. It only guarantees that all data related to that id is in the partition and not split.\nApproach 2 forces me to output from the pandas_udf a dataframe with the same schema as the input, which is not going to happen since I am transforming the eventhub message from CSV to dataframe in order to save it to the table. 
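A short sketch combining the two proxy suggestions above (the proxy URL is a placeholder):

import os

os.environ["HTTP_PROXY"] = "http://proxy.host.com:8080"
os.environ["HTTPS_PROXY"] = "http://proxy.host.com:8080"

# libraries such as requests honour these variables automatically,
# or the proxies can be passed explicitly per request
import requests

resp = requests.get(
    "https://api.example.com/data",
    proxies={"http": os.environ["HTTP_PROXY"], "https": os.environ["HTTPS_PROXY"]},
)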
I could return the same dataframe that I received, but it sounds weird.\nIs there any nice approach I am not seeing?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":134,"Q_Id":56878553,"Users Score":1,"Answer":"If your Id has a small set of distinct values (like a type\/country column) you can use partitionBy to store the data, and thereby saving them to different tables will be faster.\nOtherwise create a derived column (using withColumn) from your id column, using the same logic you want to use while dividing data across tables. Then you can use that derived column as a partition column in order to have a faster load.","Q_Score":0,"Tags":"python-3.x,pyspark,azure-databricks","A_Id":56882432,"CreationDate":"2019-07-03T22:16:00.000","Title":"How to write each dataframe partition into different tables","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":1,"ERRORS":0,"REVIEW":0},{"Question":"I am getting only up to 10000 members when using telethon; how do I get more than 10000?\nI tried running it multiple times to check whether it returns a random 10000 members, but most of them are the same; only a few changed, and not more than a two-digit number of them.\nExpected: greater than 10000,\nbut actual is 10000.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":131,"Q_Id":56884669,"Users Score":0,"Answer":"There is no simple way. You can play with queries like 'a*', 'b*' and so on.","Q_Score":0,"Tags":"python,telegram,telethon","A_Id":56884748,"CreationDate":"2019-07-04T09:21:00.000","Title":"how to get the members of a telegram group greater than 10000","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"For semantic image segmentation, I understand that you often have a folder with your images and a folder with the corresponding masks. In my case, I have gray-scale images with the dimensions (32, 32, 32). The masks naturally have the same dimensions. The labels are saved as intensity values (value 1 = label 1, value 2 = label 2 etc.). 4 classes in total. Imagine I have found a model that was built with the keras model API. How do I know how to prepare my label data for it to be accepted by the model? Does it depend on the loss function? Is it defined in the model (Input parameter)? Do I just add another dimension (4, 32, 32, 32) in which the 4 represents the 4 different classes and one-hot code it? \nI want to build a 3D convolutional neural network for semantic segmentation but I fail to understand how to feed in the data correctly in keras. The predicted output is supposed to be a 4-channel 3D image, each channel showing the probability values of each pixel to belong to a certain class.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":207,"Q_Id":56888245,"Users Score":0,"Answer":"The Input() function defines the shape of the input tensor of a given model. For 3D images, often a 5D Tensor is expected, e.g. (None, 32, 32, 32, 1), where None refers to the batch size. Therefore the training images and labels have to be reshaped. 
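Returning to the partitionBy suggestion in the Databricks answer above, a rough PySpark sketch under assumed data and output paths:

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()
# toy stand-in for the eventhub-derived DataFrame
df = spark.createDataFrame([("a", "body1"), ("b", "body2")], ["ID", "body"])

# derive a partition column from ID and write one directory per value
(df.withColumn("table_key", F.col("ID"))
   .write
   .partitionBy("table_key")
   .mode("append")
   .parquet("/tmp/output/messages"))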
Keras offers the to_categorical function to one-hot encode the label data (which is necessary). The use of generators helps to feed in the data. In this case, I cannot use the ImageDataGenerator from keras as it can only deal with RGB and grayscale images and therefore have to write a custom script.","Q_Score":1,"Tags":"python,image,keras,deep-learning,conv-neural-network","A_Id":56973276,"CreationDate":"2019-07-04T12:44:00.000","Title":"Keras preprocessing for 3D semantic segmentation task","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"So, i trying to install with the command ecmwf api client conda install -c conda-forge ecmwf-api-client then the warning in the title shows up. I don't know how to proceede\n(base) C:\\Users\\caina>conda install -c conda-forge ecmwf-api-client\nCollecting package metadata (current_repodata.json): done\nSolving environment: failed\nCollecting package metadata (repodata.json): done\nSolving environment: failed\nUnsatisfiableError: The following specifications were found to be incompatible with each other:\n\npip -> python=3.6","AnswerCount":4,"Available Count":2,"Score":0.1973753202,"is_accepted":false,"ViewCount":35577,"Q_Id":56895109,"Users Score":4,"Answer":"Install into a new environment instead of the conda base environment. Recent Anaconda and Miniconda installers have Python 3.7 in the base environment, but you're trying to install something that requires Python 3.6.","Q_Score":19,"Tags":"python,anaconda,conda","A_Id":56959899,"CreationDate":"2019-07-04T23:38:00.000","Title":"How to fix \"UnsatisfiableError: The following specifications were found to be incompatible with each other: - pip -> python=3.6\"","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"So, i trying to install with the command ecmwf api client conda install -c conda-forge ecmwf-api-client then the warning in the title shows up. I don't know how to proceede\n(base) C:\\Users\\caina>conda install -c conda-forge ecmwf-api-client\nCollecting package metadata (current_repodata.json): done\nSolving environment: failed\nCollecting package metadata (repodata.json): done\nSolving environment: failed\nUnsatisfiableError: The following specifications were found to be incompatible with each other:\n\npip -> python=3.6","AnswerCount":4,"Available Count":2,"Score":-0.049958375,"is_accepted":false,"ViewCount":35577,"Q_Id":56895109,"Users Score":-1,"Answer":"Simply go to Anaconda navigator.\nGo to Environments, Select Installed (packages, etc.) and then click the version of Python. Downgrade it to a lower version. 
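A small sketch of the label preparation described in the segmentation answer above (shapes follow the question; class values are assumed to be remapped to 0-3):

import numpy as np
from keras.utils import to_categorical

# toy stand-ins: 10 grayscale volumes and their integer label masks
images = np.random.rand(10, 32, 32, 32).astype("float32")
labels = np.random.randint(0, 4, size=(10, 32, 32, 32))

x = images.reshape(-1, 32, 32, 32, 1)       # 5D input tensor: (batch, 32, 32, 32, 1)
y = to_categorical(labels, num_classes=4)   # one-hot labels: (batch, 32, 32, 32, 4)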
In your case Python 3.6","Q_Score":19,"Tags":"python,anaconda,conda","A_Id":59539739,"CreationDate":"2019-07-04T23:38:00.000","Title":"How to fix \"UnsatisfiableError: The following specifications were found to be incompatible with each other: - pip -> python=3.6\"","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have been trying to use python 3 for text mining on a 650 MB csv file, which my computer was not powerful enough to do. My second solution was to reach out to google cloud. I have set up my VMs and my jupyter notebook on google cloud, and it works perfectly well. The problem, however, is that I am in constant fear of getting disconnected. As a matter of fact, my connection with google server was lost a couple of time and so was my whole work.\nMy question: Is there a way to have the cloud run my code without fear of getting disconnected? I need to be able to have access to my csv file and also the output file.\nI know there is more than one way to do this and have read a lot of material. However, they are too technical for a beginner like me to understand. I really appreciate a more dummy-friendly version. Thanks!\nUPDATE: here is how I get access to my jupyter notebook on google cloud:\n1- I run my instance on google cloud\n2- I click on SSH \n3- in the window that appears, I type the following:\njupyter notebook --ip=0.0.0.0 --port=8888 --no-browser &\nI have seen people recommend to add nohup to the beginning of the same commend. I have tried it and got this message:\nnohup: ignoring input and appending output to 'nohup.out'\nAnd nothing happens.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":627,"Q_Id":56916897,"Users Score":8,"Answer":"If I understand your problem correctly, you could just run the program inside a screen instance:\nAfter connecting via ssh type screen\nRun your command\nPress ctrl + a, ctrl + d\nNow you can disconnect from ssh and your code will continue to run. You can reconnect to the screen via screen -r","Q_Score":2,"Tags":"python,google-cloud-platform","A_Id":56917173,"CreationDate":"2019-07-06T19:03:00.000","Title":"how to run my python code on google cloud without fear of getting disconnected - an absolute beginner?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"On my jupyter notebook, running import sknw throws a ModuleNotFoundError error. \nI have tried pip install sknw and pip3 install sknw and python -m pip install sknw. 
It appears to have downloaded successfully, and I get 'requirement already satisfied' if I try to download it again.\nAny help on how to get the sknw package to work in jupyter notebook would be very helpful!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":48,"Q_Id":56937918,"Users Score":0,"Answer":"Check which environment you are using pip in.","Q_Score":0,"Tags":"python,jupyter-notebook","A_Id":56938165,"CreationDate":"2019-07-08T15:13:00.000","Title":"Importing sknw on jupyter ModuleNotFoundError","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":1,"REVIEW":0},{"Question":"Hello, I am developing a web scraper and I am using it on a particular website; this website has a lot of URLs, maybe more than 1.000.000, and for scraping and getting the information I have the following architecture.\nOne set to store the visited sites and another set to store the non-visited sites.\nFor scraping the website I am using multithreading with a limit of 2000 threads.\nThis architecture has a problem with memory size and can never finish, because the program exceeds the available memory with the URLs.\nBefore putting a URL in the set of non-visited, I first check if this site is in visited; if the site was visited then I will never store it in the non-visited sites.\nFor doing this I am using Python. I think that maybe a better approach would be storing all sites in a database, but I fear that this could be slow.\nI can fix part of the problem by storing the set of visited URLs in a database like SQLite, but the problem is that the set of non-visited URLs is too big and exceeds all memory.\nAny idea about how to improve this, with another tool, language, architecture, etc...?\nThanks","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":108,"Q_Id":56938689,"Users Score":1,"Answer":"First off, I have never crawled pages using Python. My preferred language is C#. But Python should be just as good, or better.\nOK, the first thing you detected is quite important. Operating purely on your memory will NOT work. Implementing a way to work on your hard drive is important. If you just want to work in memory, think about the size of the page.\nIn my opinion, you already have the best (or a good) architecture for webscraping\/crawling. You need some kind of list which represents the URLs you have already visited, and another list in which you store the new URLs you have found. Just two lists is the simplest way you could go, because it means you are not implementing any kind of crawling strategy. If you are not looking for something like that, OK. But think about it, because a strategy could optimize the usage of memory. For that you should look at something like depth-first and breadth-first crawling, or recursive crawling, representing each branch as its own list, or as a dimension of an array.\nFurther, what is the problem with storing your non-visited URLs in a database too? You only need one per thread. If your problem with putting them in a db is that it could take some time to sweep through it, then you should think about using multiple tables for each part of the page.\nThat means you could use one table for each substring in the URL:\nwwww.example.com\/\nwwww.example.com\/contact\/\nwwww.example.com\/download\/\nwwww.example.com\/content\/\nwwww.example.com\/support\/\nwwww.example.com\/news\/\nSo if your URL is \"wwww.example.com\/download\/sweetcats\/\", then you should put it in the table for wwww.example.com\/download\/.\nWhen you have a set of URLs, you first have to look for the correct table. Afterwards you can sweep through the table.\nAnd at the end, I have just one question: why are you not using a library or a framework which already supports these features? I think there should be something available for Python.","Q_Score":0,"Tags":"python,performance,web-scraping,architecture","A_Id":56953619,"CreationDate":"2019-07-08T16:03:00.000","Title":"What is the best approach to scrape a big website?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":1},{"Question":"Hello, I am developing a web scraper and I am using it on a particular website; this website has a lot of URLs, maybe more than 1.000.000, and for scraping and getting the information I have the following architecture.\nOne set to store the visited sites and another set to store the non-visited sites.\nFor scraping the website I am using multithreading with a limit of 2000 threads.\nThis architecture has a problem with memory size and can never finish, because the program exceeds the available memory with the URLs.\nBefore putting a URL in the set of non-visited, I first check if this site is in visited; if the site was visited then I will never store it in the non-visited sites.\nFor doing this I am using Python. I think that maybe a better approach would be storing all sites in a database, but I fear that this could be slow.\nI can fix part of the problem by storing the set of visited URLs in a database like SQLite, but the problem is that the set of non-visited URLs is too big and exceeds all memory.\nAny idea about how to improve this, with another tool, language, architecture, etc...?\nThanks","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":108,"Q_Id":56938689,"Users Score":1,"Answer":"2000 threads is too many. Even 1 may be too many. Your scraper will probably be thought of as a DOS (Denial Of Service) attack, and your IP address will be blocked.\nEven if you are allowed in, 2000 is too many threads. You will bottleneck somewhere, and that chokepoint will probably lead to going slower than you could if you had some sane threading. Suggest trying 10. One way to look at it -- each thread will flip-flop between fetching a URL (network intensive) and processing it (cpu intensive). So, 2 times the number of CPUs is another likely limit.\nYou need a database under the covers. This will let you stop and restart the process. More importantly, it will let you fix bugs and release a new crawler without necessarily throwing away all the scraped info.\nThe database will not be the slow part. The main steps:\n\nPick a page to go for (and lock it in the database to avoid redundancy).\nFetch the page (this is perhaps the slowest part)\nParse the page (or this could be the slowest)\nStore the results in the database\nRepeat until no further pages -- which may be never, since the pages will be changing out from under you.\n\n(I did this many years ago. I had a tiny 0.5GB machine. I quit after about a million analyzed pages. There were still about a million pages waiting to be scanned. And, yes, I was accused of a DOS attack.)","Q_Score":0,"Tags":"python,performance,web-scraping,architecture","A_Id":57539538,"CreationDate":"2019-07-08T16:03:00.000","Title":"What is the best approach to scrape a big website?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":1},{"Question":"I need to scrape a web page, but the problem is that when I click on the link on the website, it works fine, yet when I go to the link manually by typing the URL in the browser, it gives an Access Denied error. So maybe they are validating the referrer on their end. Can you please tell me how I can sort this issue out using selenium in python, or any other idea that can solve this issue? I am unable to scrape the page because it gives an Access Denied error.\nPS. I am working with python3.\nWaiting for help.\nThanks","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":322,"Q_Id":56940886,"Users Score":0,"Answer":"I solved it myself by using seleniumwire ;) selenium doesn't support headers, but seleniumwire does, so that solved my issue.\nThanks","Q_Score":0,"Tags":"python","A_Id":56940887,"CreationDate":"2019-07-08T17:12:00.000","Title":"How to set Referrer in driver selenium python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":1,"REVIEW":0},{"Question":"I was trying to use python requests and mechanize to gather information from a website. This process needs me to post some information and then get the results from that website. I automate this process using a for loop in Python. However, after ~500 queries, I was told that I am blocked due to a high query rate. It takes about 1 sec to do each query. I was using some online software that queries multiple records without problems. Could anyone help me with how to avoid this issue? Thanks!\nNo idea how to solve this.\n--- I am looping this process (by auto-changing the case number) and exporting the data to csv....\nAfter some queries, I was told that my IP was blocked.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":46,"Q_Id":56943620,"Users Score":0,"Answer":"Optimum randomized delay time between requests. 
\nRandomized real user-agents for\neach request.\nEnabling cookies.\nUsing a working proxy pool and\nselecting a random proxy for each request.","Q_Score":0,"Tags":"python,web-scraping,python-requests,export-to-csv","A_Id":56954734,"CreationDate":"2019-07-08T23:22:00.000","Title":"While query data (web scraping) from a website with Python, how to avoid being blocked by the server?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I feel like this subject is touched in some other questions but it doesn't get into Python (3.7) specifically, which is the language I'm most familiar with.\nI'm starting to get the hang of abstract classes and how to use them as blueprints for subclasses I'm creating.\nWhat I don't understand though, is the purpose of concrete methods in abstract classes.\nIf I'm never going to instantiate my parent abstract class, why would a concrete method be needed at all, shouldn't I just stick with abstract methods to guide the creation of my subclasses and explicit the expected behavior?\nThanks.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":526,"Q_Id":56960959,"Users Score":1,"Answer":"This question is not Python specific, but general object oriented.\nThere may be cases in which all your sub-classes need a certain method with a common behavior. It would be tedious to implement the same method in all your sub-classes. If you instead implement the method in the parent class, all your sub-classes inherit this method automatically. Even callers may call the method on your sub-class, although it is implemented in the parent class. This is one of the basic mechanics of class inheritance.","Q_Score":1,"Tags":"python-3.x,oop,abstract-class","A_Id":56961464,"CreationDate":"2019-07-09T21:53:00.000","Title":"What is the purpose of concrete methods in abstract classes in Python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"Recently, I want to deploy a Deeplearning model (Tensorflow) on mobile (Android\/iOS) and I found that Kivy Python is a good choice to write cross-platform apps. (I am not familiar with Java Android) \nBut I don't know how to integrate Tensorflow libs when building .apk file. \nThe guide for writing \"buildozer recipe\" is quite complicate for this case.\nIs there any solution for this problem without using native Java Android and Tensorflow Lite?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1955,"Q_Id":56967553,"Users Score":0,"Answer":"Fortunately found someone facing the same issues as I am but unfortunately I found that Kivy couldn't compile Tensorflow library yet. In other words, not supported, yet. 
I don't know when they will update the features.","Q_Score":7,"Tags":"android,python,tensorflow,kivy","A_Id":66493760,"CreationDate":"2019-07-10T09:24:00.000","Title":"Building Kivy Android app with Tensorflow","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have an array with size (4,4) that can have values 0 and 1, so I can have 65536 different arrays. I need to produce all these arrays without repeating. I use wt_random=np.random.randint(2, size=(65536,4,4)) but I am worried they are not unique. Could you please tell me whether this code is correct or not, and what should I do to produce all possible arrays? Thank you.","AnswerCount":4,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":452,"Q_Id":56969477,"Users Score":0,"Answer":"If you need all possible arrays in random order, consider enumerating them in any arbitrary deterministic order and then shuffling them to randomize the order. If you don't want all arrays in memory, you could write a function to generate the array at a given position in the deterministic list, then shuffle the positions. Note that Fisher-Yates may not even need a dense representation of the list to shuffle... if you keep track of where the already shuffled entries end up you should have enough.","Q_Score":2,"Tags":"python,arrays,numpy,random","A_Id":56969631,"CreationDate":"2019-07-10T11:12:00.000","Title":"how do I produce unique random numbers as an array in Python?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have a file containing some columns, of which the second column is time, like what I show below. I need to add a column of times, all in seconds, like this: \"2.13266 2.21784 2.20719 2.02499 2.16543\", to the time column in the first file (below). My question is how to add these two times to each other. And in some cases, when I add these times, the result rolls over to the next day; in this case, how do I change the date in the related row?\n2014-08-26 19:49:32 0\n2014-08-28 05:43:21 0\n2014-08-30 11:47:54 0\n2014-08-30 03:26:10 0","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":673,"Q_Id":56970023,"Users Score":0,"Answer":"Probably the easiest way is to read your file into a pandas data-frame and parse each row as a datetime object. Then you create a datetime.timedelta object passing the fractional seconds.\nA datetime object + a timedelta handles wrapping around for days quite nicely so this should work without any additional code. 
Finally, write back your updated dataframe to a file.","Q_Score":0,"Tags":"python,datetime,time,add,timedelta","A_Id":56970162,"CreationDate":"2019-07-10T11:43:00.000","Title":"How to add a column of seconds to a column of times in python?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have a file contain some columns which the second column is time. Like what I show below. I need to add a column of time which all are in seconds like this: \"2.13266 2.21784 2.20719 2.02499 2.16543\", to the time column in the first file (below). My question is how to add these two time to each other. And maybe in some cases when I add these times, then it goes to next day, and in this case how to change the date in related row.\n2014-08-26 19:49:32 0\n2014-08-28 05:43:21 0\n2014-08-30 11:47:54 0\n2014-08-30 03:26:10 0","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":673,"Q_Id":56970023,"Users Score":0,"Answer":"Ok. Finally it is done via this code:\n d= 2.13266\ndd= pd.to_timedelta (int(d), unit='s')\ndf= pd.Timestamp('2014-08-26 19:49:32')\nnew = df + dd","Q_Score":0,"Tags":"python,datetime,time,add,timedelta","A_Id":56973402,"CreationDate":"2019-07-10T11:43:00.000","Title":"How to add a column of seconds to a column of times in python?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have a matrix called POS which has form (10,132) and I need to average those first 10 elements in such a way that my averaged matrix has the form of (1,132)\nI have tried doing \nmeans = pos.mean (axis = 1)\nor\nmenas = np.mean(pos)\nbut the result in the first case is a matrix of (10,) and in the second it is a simple number\ni expect the ouput a matrix of shape (1,132)","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":95,"Q_Id":56974114,"Users Score":0,"Answer":"The solution is to specify the correct axis and use keepdims=True which is noted by several commenters (If you add your answer I will delete mine).\nThis can be done with either pos.mean(axis = 0,keepdims=True) or np.mean(pos,axis=0,keepdims=True)","Q_Score":0,"Tags":"python,numpy","A_Id":56981888,"CreationDate":"2019-07-10T15:34:00.000","Title":"how to average in a specific dimension with numpy.mean?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am trying to delete an empty directory in Jupyter notebook. 
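A quick sketch of the keepdims fix from the numpy.mean answer above:

import numpy as np

pos = np.random.rand(10, 132)             # toy stand-in for the (10, 132) matrix
means = pos.mean(axis=0, keepdims=True)   # average over the 10 rows
print(means.shape)                        # -> (1, 132)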
\nWhen I select the folder and click Delete, an error message pops up saying:\n'A directory must be empty before being deleted.'\nThere are no files or folders in the directory and it is empty.\nAny advice on how to delete it?\nThank you!","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":11155,"Q_Id":56978517,"Users Score":0,"Answer":"Go to your local directory where it stores the workbench files, ex:(C:\\Users\\prasadsarada)\nYou can see all the folders you have created in Jupyter Notebook. delete it there.","Q_Score":2,"Tags":"python,jupyter-notebook","A_Id":59487551,"CreationDate":"2019-07-10T20:56:00.000","Title":"Delete empty directory from Jupyter notebook error","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":1,"REVIEW":0},{"Question":"I am trying to delete an empty directory in Jupyter notebook. \nWhen I select the folder and click Delete, an error message pops up saying:\n'A directory must be empty before being deleted.'\nThere are no files or folders in the directory and it is empty.\nAny advice on how to delete it?\nThank you!","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":11155,"Q_Id":56978517,"Users Score":2,"Answer":"Usually, Jupyter itself creates a hidden .ipynb_checkpoints folder within the directory when you inspect it. You can check its existence (or any other hidden file\/folders) in the directory using ls -a in a terminal that has a current working directory as the corresponding folder.","Q_Score":2,"Tags":"python,jupyter-notebook","A_Id":56978899,"CreationDate":"2019-07-10T20:56:00.000","Title":"Delete empty directory from Jupyter notebook error","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":1,"REVIEW":0},{"Question":"By trying to find an optimization to my server on python, I have stumbled on a concept called select. By trying to find any code possible to use, no matter where I looked, Windows compatibility with this subject is hard to find.\nAny ideas how to program a TCP server with select on windows? I know about the idea of unblocking the sockets to maintain the compatibility with it. Any suggestions will be welcomed.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":98,"Q_Id":56979085,"Users Score":1,"Answer":"Using select() under Windows is 99% the same as it is under other OS's, with some minor variations. The minor variations (at least the ones I know about) are:\n\nUnder Windows, select() only works for real network sockets. In particular, don't bother trying to select() on stdin under Windows, as it won't work.\nUnder Windows, if you attempt a non-blocking TCP connection and the TCP connection fails asynchronously, you will get a notification of that failure via the third (\"exception\") fd_set only. 
(Under other OS's you will get notified that the failed-to-connect TCP-socket is ready-for-read\/write also)\nUnder Windows, select() will fail if you don't pass in at least one valid socket to it (so you can't use select([], [], [], timeoutInSeconds) as an alternative to time.sleep() like you can under some other OS's)\n\nOther than that select() for Windows is like select() for any other OS. (If your real question about how to use select() in general, you can find information about that using a web search)","Q_Score":0,"Tags":"python,windows,sockets,select","A_Id":56979188,"CreationDate":"2019-07-10T21:47:00.000","Title":"TCP Socket on Server Side Using Python with select on Windows","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I'm using Tensorflow 2.0 and trying to write a tf.keras.callbacks.Callback that reads both the inputs and outputs of my model for the batch. \nI expected to be able to override on_batch_end and access model.inputs and model.outputs but they are not EagerTensor with a value that I could access. Is there anyway to access the actual tensors values that were involved in a batch? \nThis has many practical uses such as outputting these tensors to Tensorboard for debugging, or serializing them for other purposes. I am aware that I could just run the whole model again using model.predict but that would force me to run every input twice through the network (and I might also have non-deterministic data generator). Any idea on how to achieve this?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":354,"Q_Id":56988498,"Users Score":1,"Answer":"No, there is no way to access the actual values for input and output in a callback. That's not just part of the design goal of callbacks. Callbacks only have access to model, args to fit, the epoch number and some metrics values. As you found, model.input and model.output only points to the symbolic KerasTensors, not actual values.\nTo do what you want, you could take the input, stack it (maybe with RaggedTensor) with the output you care about, and then make it an extra output of your model. Then implement your functionality as a custom metric that only reads y_pred. Inside your metric, unstack the y_pred to get the input and output, and then visualize \/ serialize \/ etc. Metrics\nAnother way might be to implement a custom Layer that uses py_function to call a function back in python. 
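A minimal sketch of a select()-based TCP echo server consistent with the Windows caveats in the select() answer above (the list passed to select() always contains at least the listening socket):

import select
import socket

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setblocking(False)
server.bind(("0.0.0.0", 9000))
server.listen()

sockets = [server]  # never empty, as Windows select() requires
while True:
    readable, _, errored = select.select(sockets, [], sockets, 1.0)
    for s in readable:
        if s is server:
            conn, _ = s.accept()
            conn.setblocking(False)
            sockets.append(conn)
        else:
            data = s.recv(4096)
            if data:
                s.sendall(data)  # simple echo
            else:
                sockets.remove(s)
                s.close()
    for s in errored:
        if s in sockets:
            sockets.remove(s)
        s.close()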
This will be super slow during serious training but may be enough for use during diagnostic \/ debugging.","Q_Score":10,"Tags":"python,tensorflow,keras,tensorflow2.0,tf.keras","A_Id":68777974,"CreationDate":"2019-07-11T11:48:00.000","Title":"Tensorflow 2.0: Accessing a batch's tensors from a callback","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am new to drones; can you please explain one thing:\nIs it possible to have an RC controller programmed in Python?\nAs I understand it, using a telemetry module and DroneKit, it is possible to control the drone using Python.\nBut usually drones supporting a telemetry module are custom drones, and as I understand it, a telemetry module does not work as well as RC.\nSo, to keep the price lower, can someone suggest a solution for how to control an RC drone using Python?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":268,"Q_Id":56990495,"Users Score":0,"Answer":"You can use Tello drones. These drones can be programmed as per your requirements using Python.","Q_Score":0,"Tags":"python,dronekit-python","A_Id":58305698,"CreationDate":"2019-07-11T13:34:00.000","Title":"Drone control by python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am making a program in python and want to clear what the user has entered; this is because I am using the keyboard function to register input as it is given, but there is still text left over after a keypress is registered and I don't want this to happen.\nI was wondering if there is a module that exists to remove text that is being entered.\nAny help would be greatly appreciated, and just the name of a module is fine; I can figure out how to use it, I just can't find an appropriate module.\nEDIT:\nSorry if I did not make myself clear. I don't really want to clear the whole screen, just what the user has typed, so that they don't have to manually backspace after their input has been taken.","AnswerCount":4,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":59,"Q_Id":56994310,"Users Score":0,"Answer":"'sys.stdout.write' is the module I was looking for.","Q_Score":0,"Tags":"python,python-3.6","A_Id":56995012,"CreationDate":"2019-07-11T17:14:00.000","Title":"Is there a way of deleting specific text for the user in Python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have a numpy array ids = np.array([1,1,1,1,2,2,2,3,4,4])\nand another array of equal length vals = np.array([1,2,3,4,5,6,7,8,9,10])\nNote: the ids array is sorted in ascending order\nI would like to insert 4 zeros before the beginning of each new id - i.e. 
\nnew array = np.array([0,0,0,0,1,2,3,4,0,0,0,0,5,6,7,0,0,0,0,8,0,0,0,0,9,10])\nOnly, way I am able to produce this is by iterating through the array which is very slow - and I am not quite sure how to do this using insert, pad, or expand_dim ...","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":126,"Q_Id":57015429,"Users Score":0,"Answer":"u can use np.zeros and append it to your existing array like\nnewid=np.append(np.zeros((4,), dtype=int),ids)\nGood Luck!","Q_Score":0,"Tags":"python-3.x,numpy","A_Id":57016163,"CreationDate":"2019-07-13T01:06:00.000","Title":"Quickest way to insert zeros into numpy array","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I'm working with NetCDF files from NCAR and I'm trying to plot sea-ice thickness. This variable is on a curvilinear (TLAT,TLON) grid. What is the best way to plot this data on a map projection? Do I need to re-grid it to a regular grid or is there a way to plot it directly? I'm fairly new to Python so any help would be appreciated. Please let me know if you need any more information. Thank you! \nI've tried libraries like iris, scipy, and basemap, but I couldn't really get a clear explanation on how to implement them for my case.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":276,"Q_Id":57023555,"Users Score":0,"Answer":"I am pretty sure you can already use methods like contour, contourf, pcolormesh from Python's matplotlib without re-gridding the data. The same methods work for Basemap.","Q_Score":1,"Tags":"python,interpolation,netcdf","A_Id":57053572,"CreationDate":"2019-07-13T23:37:00.000","Title":"How can I put my curvilinear coordinate data on a map projection?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":1,"ERRORS":0,"REVIEW":1},{"Question":"take a input as a csv file and generate text\/sentence using nlg. I have tried with pynlg and markov chain.But nothing worked .What else I can use?","AnswerCount":1,"Available Count":1,"Score":-0.1973753202,"is_accepted":false,"ViewCount":311,"Q_Id":57035069,"Users Score":-1,"Answer":"There are not much python libraries for NLG!!. Try out nlglib a python wrapper around SimpleNLG. For tutorial purposes, you could read Building Natural Language Generation systems by e.reiter.","Q_Score":0,"Tags":"python,nlp","A_Id":57142523,"CreationDate":"2019-07-15T07:29:00.000","Title":"how to use natural language generation from a csv file input .which python module we should use.can any one share a sample tutorial?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":1,"ERRORS":0,"REVIEW":0},{"Question":"I am a newbie in Python, and have a problem. When I code Python using Sublime Text 3 and run directly on it, it does not find some Python library which I already imported. 
I Googled this problem and found out Sublime Text is just a text editor.\nI already have code in a Sublime Text 3 file; how can I run it without this error? \nFor example: \n\n'ModuleNotFoundError: No module named 'matplotlib'. \n\nI think it should be run by cmd but I don't know how.","AnswerCount":4,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":5433,"Q_Id":57038252,"Users Score":2,"Answer":"Depending on what OS you are using this is easy. On Windows you can press win + r, then type cmd. This will open up a command prompt. Then, type in pip install matplotlib. This will make sure that your module is installed. Then, navigate to the folder your code is located in. You can do this by typing in cd Documents if you first need to get to your documents, and then the same for each subsequent folder. \nThen, try typing in python and hitting enter. If a python shell opens up, type quit() and then type python filename.py and it will run.\nIf no python shell opens up then you need to change your environment variables. Press the windows key and pause break at the same time, then click on Advanced system settings. Then press Environment Variables. Then double click on Path. Then press New. Then locate the installation folder of your Python install, which may be in C:\\Users\\YOURUSERNAME\\AppData\\Local\\Programs\\Python\\Python36. Now put in the path and press OK. You should now be able to run python from your command line.","Q_Score":3,"Tags":"python,sublimetext3","A_Id":57038467,"CreationDate":"2019-07-15T10:55:00.000","Title":"[How to run code by using cmd from sublime text 3 ]","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"On my Raspberry Pi, I need to run two motors with an L298N.\nI can PWM the enable pins to change speeds, but I saw that the gpiozero robot library can make things a lot easier.\nWhen using the gpiozero robot library, how can I alter the speeds of those motors by giving a signal to the enable pins?","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":503,"Q_Id":57038494,"Users Score":1,"Answer":"I have exactly the same situation. You can of course program the motors separately, but it is nice to use the robot class.\nLooking into the gpiozero code for this, I find that in our case the left and right tuples have a third parameter, which is the pin for PWM motor speed control. (GPIO pins 12, 13, 18 and 19 have hardware PWM support.) The first two output pins in the tuple are to be signalled as 1, 0 for forward and 0, 1 for back. \nSo here is my line of code:\n Initio = Robot(left=(4, 5, 12), right=(17, 18, 13))\nHope it works for you!\nI have some interesting code on the stocks for controlling the robot's absolute position, so it can explore its environment.","Q_Score":1,"Tags":"python-3.x,raspberry-pi2,robotics,gpiozero","A_Id":59144212,"CreationDate":"2019-07-15T11:09:00.000","Title":"how can I use gpiozero robot library to change speeds of motors via L298N","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"Is there a way to print a Page\/Widget\/Label in Kivy (or some other way in Python)? Unfortunately, I don't know how to ask the question correctly since I am new to software development. \nI want to build a price tracking app for my business in which I will have to print some stuff.","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":200,"Q_Id":57046969,"Users Score":2,"Answer":"Not directly, no, but the printing part isn't really Kivy's responsibility - probably you can find another Python module to handle this.\nIn terms of what is printed, you can export an image of any part of the Kivy gui and print that.","Q_Score":4,"Tags":"python,kivy","A_Id":57047007,"CreationDate":"2019-07-15T20:45:00.000","Title":"Does Kivy have laserjet printer support?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have a dictionary in Python and I need to access that dictionary from a C program, or, for example, convert this dictionary into a struct map in C. \nI don't have any idea how this could be done.\nI will be happy to get some hints regarding that, or whether there are any libraries that could help.\nUpdate:\nThe dictionary is generated from the abstract syntax tree of a C program by using pycparser. \nSo, I wrote a Python function to generate this dictionary and I can dump it using pickle or save it as a text file.\nNow I want to use the keys and their values from a C program and I don't know how to access that dictionary.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":90,"Q_Id":57055940,"Users Score":1,"Answer":"You could export the dictionary to JSON and parse the JSON file from C...","Q_Score":0,"Tags":"python,c,dictionary,struct,abstract-syntax-tree","A_Id":57056007,"CreationDate":"2019-07-16T11:12:00.000","Title":"How to access python dictionary from C?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"In my Flask application, I have one html file that holds some html and some js that semantically belongs together and cannot be used separately in a sensible way. I include this file in 2 of my html templates by using Jinja's {%include ... %}.\nNow my first approach was to put this file in my templates folder. 
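A short sketch following the gpiozero answer above (the pin numbers repeat that example and are placeholders for your wiring):

from gpiozero import Robot

# third tuple element is the enable pin, used for PWM speed control
robot = Robot(left=(4, 5, 12), right=(17, 18, 13))

robot.forward(speed=0.5)   # both motors at half speed
robot.value = (0.3, 0.8)   # or set each motor's speed individually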
However, I never call render_template on this file, so it seems unapt to store it in that directory. \nAnother approach would be to put it into the static folder, since its content is indeed static. But then I don't know how to tell Jinja to look for it in a different directory, since all the files using Jinja are in the templates folder.\nIs there a way to accomplish this with Jinja, or is there a better approach altogether?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":216,"Q_Id":57057614,"Users Score":1,"Answer":"You're over-thinking this. If it's included by Jinja, then it's a template file and belongs in the templates directory.","Q_Score":1,"Tags":"python,html,flask,jinja2","A_Id":57057757,"CreationDate":"2019-07-16T12:46:00.000","Title":"Flask non-template HTML files included by Jinja","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"My GPU is an NVIDIA RTX 2080 TI\nKeras 2.2.4\nTensorflow-gpu 1.12.0\nCUDA 10.0\nOnce I build a model (before compilation), I find that GPU memory is fully allocated\n[0] GeForce RTX 2080 Ti | 50'C, 15 % | 10759 \/ 10989 MB | issd\/8067(10749M)\nWhat could be the reason, and how can I debug it?\nI don't have spare memory to load the data even if I load it via generators\nI have tried to monitor the GPU's memory usage and found out it is full just after building the layers (before compiling the model)","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":502,"Q_Id":57058071,"Users Score":0,"Answer":"I met a similar problem when loading a pre-trained ResNet50. The GPU memory usage just surges to 11GB, while ResNet50 usually only consumes less than 150MB.\nThe problem in my case was that I also imported PyTorch without actually using it in my code. After commenting it out, everything worked fine.\nBut I have another PC with the same code that works just fine. So I uninstalled and reinstalled Tensorflow and PyTorch with the correct versions. Then everything worked fine even if I imported PyTorch.","Q_Score":0,"Tags":"python-3.x,tensorflow,keras","A_Id":67595461,"CreationDate":"2019-07-16T13:11:00.000","Title":"Keras, Tensorflow are reserving all GPU memory on model build","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have been going through the source code of Python. It looks like every object is derived from PyObject. But, in C, there is no concept of object oriented programming. So, how exactly is this implemented without inheritance?","AnswerCount":2,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":86,"Q_Id":57058653,"Users Score":2,"Answer":"What makes the Object Oriented programming paradigm is the relation between \"classes\" as templates for a data set and the functions that operate on this data set, together with the inheritance mechanism, which is a relation from a class to ancestor classes. \nThese relations, however, do not depend on a particular language syntax - just that they are present in some way. \nSo, nothing stops one from doing \"object orientation\" in C, and in fact, organized libraries, even without an OO framework, end up with an organization related to OO.\nIt happens that the Python object system is entirely defined in pure C, with objects having a __class__ slot that points to its class with a C pointer - only when viewed from Python is the full representation of the class presented. Classes, in their turn, have __mro__ and __bases__ slots that point to the different arrangements of superclasses (the pointers this time are to containers that will be seen from Python as sequences).\nSo, when coding in C using the definitions and API of the Python runtime, one can use OOP just in the same way as when coding in Python - and in fact use Python objects that are interoperable with the Python language. (The cython project will even transpile a superset of the Python language to C and provide transparent ways of writing native code with Python syntax.)\nThere are other frameworks available for C that provide different, equally conformant OOP systems, for example glib - which defines \"gobject\" and is the base for all GTK+ and GNOME applications.","Q_Score":1,"Tags":"python","A_Id":57059180,"CreationDate":"2019-07-16T13:41:00.000","Title":"how is every object related to pyObject when c does not have Inheritance","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I'm doing a project using Python with MPI. Every node of my project needs to know if there is any incoming message for it before continuing the execution of other tasks. \nI'm working on a system where multiple nodes execute some operations. Some nodes may need the outputs of other nodes and therefore need to know if this output is available.\nFor illustration purposes, let's consider two nodes, A and B. A needs the output of B to execute its task, but if the output is not available, A needs to do some other tasks and then verify again, in a loop, whether B has sent its output. What I want to do is this verification of the availability of output from B in A.\nI did some research and found something about a method called probe, but I didn't understand it, nor did I find useful documentation about what it does or how to use it. So, I don't know if it solves my problem.\nThe idea of what I want is very simple: I just need to check if there is data to be received when I use the method \"recv\" of mpi4py. If there is something, the code does some tasks; if there isn't, the code does some other tasks.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":335,"Q_Id":57061178,"Users Score":1,"Answer":"(elaborating on Gilles Gouaillardet's comment)\nIf you know you will eventually receive a message, but want to be able to run some computations while it is being prepared and sent, you want to use non-blocking receives, not probe.\nBasically use MPI_Irecv to setup a receive request as soon as possible. 
If you want to know whether the message is ready yet, use MPI_Test to check the request.\nThis is much better than using probes, because you ensure that a receive buffer is ready as early as possible and the sender is not blocked, waiting for the receiver to see that there is a message and post the receive.\nFor the specific implementation you will have to consult the manual of the Python MPI wrapper you use. You might also find helpful information in the MPI standard itself.","Q_Score":0,"Tags":"python,mpi","A_Id":57077576,"CreationDate":"2019-07-16T15:52:00.000","Title":"How can I verify if there is an incoming message to my node with MPI?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have a pivot table in excel that I want to read the raw data from that table into python. Is it possible to do this? I do not see anything in the documentation on it or on Stack Overflow.\nIf the community could be provided some examples on how to read the raw data that drives pivot tables, this could greatly assist in routine analytical tasks.\nEDIT: \nIn this scenario there are no raw data tabs. I want to know how to ping the pivot table get the raw data and read it into python.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":888,"Q_Id":57078501,"Users Score":0,"Answer":"First, recreate raw data from the pivot table. The pivot table has full information to rebuild the raw data.\n\nMake sure that none of the items in the pivot table fields are hidden -- clear all the filters and Slicers that have been applied.\nThe pivot table does not need to contain all the fields -- just make sure that there is at least one field in the Values area.\nShow the grand totals for rows and columns. If the totals aren't visible, select a cell in the pivot table, and on the Ribbon, under PivotTable Tools, click the Analyze tab. In the Layout group, click Grand totals, then click On for Rows and Columns.\nDouble-click the grand total cell at the bottom right of the pivot table. This should create a new sheet with the related records from the original source data.\n\nThen, you could read the raw data from the source.","Q_Score":5,"Tags":"python,excel,pandas,pivot-table","A_Id":71120859,"CreationDate":"2019-07-17T14:40:00.000","Title":"Getting the Raw Data Out of an Excel Pivot Table in Python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":1,"ERRORS":0,"REVIEW":0},{"Question":"i have created a model for classification of two types of shoes \nnow how to deploy it in OpenCv (videoObject detection)??\nthanks in advance","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":5883,"Q_Id":57093345,"Users Score":0,"Answer":"You would save the model to H5 file model.save(\"modelname.h5\") , then load it in OpenCV code load_model(\"modelname.h5\"). 
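A minimal sketch of the save/load calls just described (the input size, file names and preprocessing are assumptions; they must match how the classifier was actually trained):

import cv2
import numpy as np
from keras.models import load_model

# After training: model.save("modelname.h5")
model = load_model("modelname.h5")

frame = cv2.imread("shoe.jpg")                        # hypothetical test image
roi = cv2.resize(frame, (224, 224))                   # assumed training input size
batch = np.expand_dims(roi.astype("float32") / 255.0, axis=0)
probs = model.predict(batch)                          # class probabilities
print("predicted class:", int(np.argmax(probs)))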
Then in a loop detect the objects you find via model.predict(ImageROI)","Q_Score":6,"Tags":"python,opencv,keras","A_Id":57093852,"CreationDate":"2019-07-18T11:24:00.000","Title":"how to add custom Keras model in OpenCv in python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"Since I have selected my project's interpreter as Pipenv during project creation, PyCharm has automatically created the virtualenv. Now, when I try to remove the virtualenv via pipenv --rm, I get the error You are attempting to remove a virtualenv that Pipenv did not create. Aborting. So, how can I properly remove this virtualenv?","AnswerCount":1,"Available Count":1,"Score":0.537049567,"is_accepted":false,"ViewCount":6431,"Q_Id":57100248,"Users Score":3,"Answer":"The \"pipenv\" command actually runs from inside the virtualenv, so it cannot remove itself. You should close the project and delete the virtualenv directory manually, without the virtualenv activated.","Q_Score":3,"Tags":"python,pycharm,virtualenv,jetbrains-ide,pipenv","A_Id":57197166,"CreationDate":"2019-07-18T17:58:00.000","Title":"How to remove a virtualenv which is created by PyCharm?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":1,"REVIEW":0},{"Question":"I am trying to create a basic model for stock price prediction and some of the features I want to include come from the company's quarterly earnings report (every 3 months). So, for example, if my data features are Date, OpenPrice, Close Price, Volume, LastQrtrRevenue, how do I include LastQrtrRevenue if I only have a value for it every 3 months? Do I leave the other days blank (or null), or should I just include a constant of the LastQrtrRevenue and update it on the day the new figures are released? If anyone has any feedback on dealing with data that is released infrequently but is important to include, please share.... Thank you in advance.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":161,"Q_Id":57103601,"Users Score":1,"Answer":"I would be tempted to put the last quarter revenue in a separate table, with a date field representing when that quarter began (or ended, it doesn't really matter). Then you can write queries to work the way that most suits your application. 
You could certainly reconstitute the view you mention above using that table, as long as you can relate it to the main table.\nYou would just need to join the main table by company name, while selected the max() of the last quarter revenue table.","Q_Score":1,"Tags":"python,deep-learning,dataset,missing-data","A_Id":57103655,"CreationDate":"2019-07-18T22:58:00.000","Title":"How to deal with infrequent data in a time series prediction model","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"For a list say l = [1, 2, 3, 4] how do I compare l[0] < l[1] < l[2] < l[3] in pythonic way?","AnswerCount":5,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":61,"Q_Id":57111549,"Users Score":0,"Answer":"Another way would be to use the .sort() method in which case you'd have to return a new list altogether.","Q_Score":2,"Tags":"python,python-3.x","A_Id":57111877,"CreationDate":"2019-07-19T11:26:00.000","Title":"Compare list items in pythonic way","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"Often during debugging Spark Jobs on failure we can find the appropriate Stage and task responsible for the failure such as String Index Out of Bounds exception but it becomes difficult to understand which transformation is responsible for this failure.The UI shows information such as Exchange\/HashAggregate\/Aggregate but finding the actual transformation responsible for this failure becomes really difficult in 500+ lines of code, so how should it be possible to debug Spark task failures and tracing the transformation responsible for the same?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":117,"Q_Id":57117739,"Users Score":0,"Answer":"Break your execution down. It's the easiest way to understand where the error might be coming from. Running a 500+ line of code for the first time is never a good idea. You want to have the intermediate results while you are working with it. Another way is to use an IDE and walk through the code. This can help you understand where the error originated from. I prefer PyCharm (Community Edition is free), but VS Code might be a good alternative too.","Q_Score":2,"Tags":"python,scala,apache-spark,pyspark,apache-spark-sql","A_Id":57124598,"CreationDate":"2019-07-19T18:13:00.000","Title":"How does log in spark stage\/tasks help in understanding actual spark transformation it corresponds to","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I wrote a Python script which scrapes a website and sends emails if a certain condition is met. It repeats itself every day in a loop.\nI converted the Python file to an EXE and it runs as an application on my computer. 
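For the list-comparison question above, a common idiom the answers do not show is to pair each element with its successor (a sketch; this generalizes l[0] < l[1] < l[2] < l[3] to any list length):

l = [1, 2, 3, 4]

# True iff every element is strictly smaller than the next one
print(all(a < b for a, b in zip(l, l[1:])))  # True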
But I don't think this is the best solution to my needs since my computer isn't always on and connected to the internet.\nIs there a specific website I can host my Python code on which will allow it to always run?\nMore generally, I am trying to get the bigger picture of how this works. What do you actually have to do to have a Python script running on the cloud? Do you just upload it? What steps do you have to undertake?\nThanks in advance!","AnswerCount":3,"Available Count":2,"Score":0.1973753202,"is_accepted":false,"ViewCount":1140,"Q_Id":57126697,"Users Score":3,"Answer":"well i think one of the best option is pythonanywhere.com there you can upload your python script(script.py) and then run it and then finish.\ni did this with my telegram bot","Q_Score":1,"Tags":"python,cloud,hosting","A_Id":63171389,"CreationDate":"2019-07-20T16:40:00.000","Title":"How to host a Python script on the cloud?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":1},{"Question":"I wrote a Python script which scrapes a website and sends emails if a certain condition is met. It repeats itself every day in a loop.\nI converted the Python file to an EXE and it runs as an application on my computer. But I don't think this is the best solution to my needs since my computer isn't always on and connected to the internet.\nIs there a specific website I can host my Python code on which will allow it to always run?\nMore generally, I am trying to get the bigger picture of how this works. What do you actually have to do to have a Python script running on the cloud? Do you just upload it? What steps do you have to undertake?\nThanks in advance!","AnswerCount":3,"Available Count":2,"Score":0.1325487884,"is_accepted":false,"ViewCount":1140,"Q_Id":57126697,"Users Score":2,"Answer":"You can deploy your application using AWS Beanstalk. It will provide you with the whole python environment along with server configuration likely to be changed according to your needs. Its a PAAS offering from AWS cloud.","Q_Score":1,"Tags":"python,cloud,hosting","A_Id":57130698,"CreationDate":"2019-07-20T16:40:00.000","Title":"How to host a Python script on the cloud?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":1},{"Question":"I'm trying to build an OCR desktop application using Java and, to do this, I have to use libraries and functions that were created using the Python programming language, so I want to figure out: how can I use those libraries inside my Java application?\nI have already seen Jython, but it is only useful for cases when you want to run Java code in Python; what I want is the other way around (using Python code in Java applications).","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":137,"Q_Id":57127836,"Users Score":1,"Answer":"I have worked in projects where Python was used for ML (machine learning) tasks and everything else was written in Java.\nWe separated the execution environments entirely. 
Instead of mixing Python and Java in some esoteric way, you create independent services (one for Python, one for Java), and then handle inter-process communication via HTTP or messaging or some other mechanism. \"Microservices\" if you will.","Q_Score":0,"Tags":"java,python,javafx","A_Id":57128018,"CreationDate":"2019-07-20T19:16:00.000","Title":"Using Python Libraries or Codes in My Java Application","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":1},{"Question":"Hello, I am creating a Spotfire dashboard which I would like to be reusable for each year.\nCurrently my layout is designed as a page with 8 buttons containing the names of stores; if one is clicked, Spotfire applies a filter so that only information relating to that store shows. (These were individually created manually.)\nIs there a way to automate this with JS or IronPython, so that for each store a button is automatically created, and the action control for each button applies that store's filter?\nI have looked around but cannot find anything relating to dynamically creating buttons. I'm not asking for you to code this for me, but if someone can point me towards some resources or general logic on how this could be done it would be much appreciated.","AnswerCount":3,"Available Count":3,"Score":0.0665680765,"is_accepted":false,"ViewCount":327,"Q_Id":57137631,"Users Score":1,"Answer":"Why not just put a text area on your page? Inside this text area, you add a filter control that filters data the way you want ;)\nWith this you don't have the problem of creating elements dynamically, because it's impossible to create Spotfire controls dynamically.","Q_Score":1,"Tags":"javascript,ironpython,spotfire","A_Id":57162196,"CreationDate":"2019-07-21T23:11:00.000","Title":"Spotfire - Dynamically creating buttons using","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"Hello, I am creating a Spotfire dashboard which I would like to be reusable for each year.\nCurrently my layout is designed as a page with 8 buttons containing the names of stores; if one is clicked, Spotfire applies a filter so that only information relating to that store shows. (These were individually created manually.)\nIs there a way to automate this with JS or IronPython, so that for each store a button is automatically created, and the action control for each button applies that store's filter?\nI have looked around but cannot find anything relating to dynamically creating buttons. I'm not asking for you to code this for me, but if someone can point me towards some resources or general logic on how this could be done it would be much appreciated.","AnswerCount":3,"Available Count":3,"Score":0.0665680765,"is_accepted":false,"ViewCount":327,"Q_Id":57137631,"Users Score":1,"Answer":"I think txemsukr is right. This is not possible. To do it with JS or IP, the API would have to exist. 
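A minimal sketch of the separate-services approach from the Java/Python answer above, with Flask on the Python side (the endpoint name and payload shape are assumptions; the Java application would simply POST an image to this URL):

from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/ocr", methods=["POST"])  # hypothetical endpoint exposing the Python OCR library
def ocr():
    image_bytes = request.get_data()
    # ... run the Python OCR library on image_bytes here ...
    return jsonify({"text": "recognized text"})  # placeholder result

if __name__ == "__main__":
    app.run(port=5000)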
Several of the elements you mentioned (action controls), you can't control with the API.","Q_Score":1,"Tags":"javascript,ironpython,spotfire","A_Id":57171287,"CreationDate":"2019-07-21T23:11:00.000","Title":"Spotfire - Dynamically creating buttons using","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"Hello I am creating a spotfire dashboard which I would like to be reusable for each year.\nCurrently my layout is designed as a page with 8 buttons containing the names of stores, if clicked on, spotfire applies a filter so that only informations relating to that store shows. (these were individually created manually)\nIs there a way to automate this with JS or Iron Python, so that for each store a button is automatically created, and in action control for each button is to apply that stores filter?\nI have looked around but cannot find anything relating to dynamically creating buttons. Not asking for you to code this for me, but if someone can point me towards some resources or general logic on how this could be done it would be much appreciated.","AnswerCount":3,"Available Count":3,"Score":0.1325487884,"is_accepted":false,"ViewCount":327,"Q_Id":57137631,"Users Score":2,"Answer":"instead of buttons, why not a dropdown populated by the unique values in the \"store names\" column to set a document property, and have your data listing limit the data to [store_name] = ${store_name}","Q_Score":1,"Tags":"javascript,ironpython,spotfire","A_Id":57333250,"CreationDate":"2019-07-21T23:11:00.000","Title":"Spotfire - Dynamically creating buttons using","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"My laptop had a problem with training big a dataset but not for predicting. Can I use Google Cloud Platform for training, only then export and download some sort of weights or model of that machine learning, so I can use it on my own laptop, and if so how to do it?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":35,"Q_Id":57139636,"Users Score":0,"Answer":"Decide if you want to use Tensorflow or Keras etc. Prepare scripts to train and save model, and another script to use it for prediction. \nIt should be simple enough to use GCP for training and download the model to use on your machine. You can choose to use a high end machine (lot of memory, cores, GPU) on GCP. Training in distributed mode may be more complex. 
Then download the model and use it on local machine.\nIf you run into issues, post your scripts and ask another question.","Q_Score":0,"Tags":"python,machine-learning,google-cloud-platform","A_Id":57139854,"CreationDate":"2019-07-22T05:27:00.000","Title":"Can I use GCP for training only but predict with my own AI machine?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I'm not sure if this is a valid question, but I'm stuck doing this.\nI've a python script which does some operation on my local system\nUsers\/12345\/Desktop\/Sample\/one.py\nI want the same to be run on remote server whose path is \nServer\/Users\/23552\/Dir\/ASR\/Desktop\/Sample\/one.py\nI know how to do this in PHP using define path APP_HOME sort of I'm baffled in Python\nCan someone pl help me?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":38,"Q_Id":57157730,"Users Score":0,"Answer":"You can always use the relative path, I guess relative path should solve your issue.","Q_Score":1,"Tags":"python,python-3.x","A_Id":57409263,"CreationDate":"2019-07-23T06:15:00.000","Title":"Define dynamic environment path variables for different system configuration","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I started learning Linear Regression and I was solving this problem. When i draw scatter plot between independent variable and dependent variable, i get vertical lines. I have 0.5M sample data. X-axis data is given within range of let say 0-20. In this case I am getting multiple target value for same x-axis point hence it draws vertical line.\nMy question is, Is there any way i can transform the data in such a way that it doesn't perform vertical line and i can get my model working. There are 5-6 such independent variable that draw the same pattern. \nThanks in advance.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":850,"Q_Id":57157943,"Users Score":0,"Answer":"Instead of fitting y as a function of x, in this case you should fit x as a function of y.","Q_Score":0,"Tags":"python,linear-regression","A_Id":57157989,"CreationDate":"2019-07-23T06:31:00.000","Title":"how to get best fit line when we have data on vertical line?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"Loading the excel file using read_excel takes quite long. Each Excel file has several sheets. The first sheet is pretty small and is the sheet I'm interested in but the other sheets are quite large and have graphs in them. Generally this wouldn't be a problem if it was one file, but I need to do this for potentially thousands of files and pick and combine the necessary data together to analyze. 
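The fit-x-as-a-function-of-y suggestion from the regression answer above can be sketched with NumPy (toy data; np.polyfit is just one of several ways to fit a line):

import numpy as np

# Many y values share the same x, so swap the roles and fit x = f(y)
y = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
x = np.array([5.0, 5.1, 4.9, 5.0, 5.2])

slope, intercept = np.polyfit(y, x, deg=1)  # x ~ slope * y + intercept
print(slope, intercept)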
If somebody knows a way to efficiently load in the file directly or somehow quickly make a copy of the Excel data as text, that would be helpful!","AnswerCount":2,"Available Count":1,"Score":-0.0996679946,"is_accepted":false,"ViewCount":768,"Q_Id":57173573,"Users Score":-1,"Answer":"The method read_excel() reads the data into a pandas DataFrame, where the first parameter is the filename and the second parameter is the sheet.\ndf = pd.read_excel('File.xlsx', sheet_name='Sheet1')\n(Very old pandas versions spelled the keyword sheetname.)","Q_Score":0,"Tags":"python,pandas,python-2.7","A_Id":61889824,"CreationDate":"2019-07-23T23:56:00.000","Title":"How to best(most efficiently) read the first sheet in Excel file into Pandas Dataframe?","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have a problem because I'm new to Odoo 11; my task is to combine 2 pivots (Sales and POS Order) into 1 pivot view in a new module that I create. How can I do this, step by step? Please help me, thanks in advance","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":99,"Q_Id":57175408,"Users Score":0,"Answer":"You can use select queries for both models; there is no need for a common field or relation, you can just use UNION ALL. For example:\nSELECT <matching fields> FROM pos_order po\nLEFT JOIN pos_order_line pol ON (pol.order_id = po.id)\nUNION ALL\nSELECT <matching fields> FROM sale_order so\nLEFT JOIN sale_order_line sol ON (sol.order_id = so.id)\n(Both halves of a UNION ALL must select the same number and types of fields.) Hope this will help you in this regard, and don't forget to define the fields you want to show on the pivot view.","Q_Score":0,"Tags":"pivot,report,python-3.6,odoo-11","A_Id":72230974,"CreationDate":"2019-07-24T04:43:00.000","Title":"How can I combine 2 pivot ( Sale and Pos Order ) into 1 pivot view on my new module?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am also trying to understand how to use Tkinter so could you please explain the basics?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":229,"Q_Id":57179821,"Users Score":1,"Answer":"What is the difference between the _tkinter and tkinter modules?\n\n_tkinter is a C-based module that exposes an embedded tcl\/tk interpreter. When you import it, and only it, you get access to this interpreter but you do not get access to any of the tkinter classes. This module is not designed to be imported by python scripts.\ntkinter provides python-based classes that use the embedded tcl\/tk interpreter. 
This is the module that defines Tk, Button, Text, etc.","Q_Score":0,"Tags":"python-3.x,tkinter","A_Id":57184692,"CreationDate":"2019-07-24T09:38:00.000","Title":"What is the difference between the _tkinter and tkinter modules?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I'm trying to multi-thread some tasks using cosmosdb to optimize ETL time, and I can't find how, using the python API (but I could do something in REST if required) if I have a stored procedure to call twice for two partitions keys, I could send it to two different regions (namely 'West Europe' and 'Central France)\nI defined those as PreferredLocations in the connection policy but don't know how to include to a query, the instruction to route it to a specific location.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":45,"Q_Id":57183790,"Users Score":1,"Answer":"The only place you could specify that on would be the options objects of the requests. However there is nothing related to the regions.\nWhat you can do is initialize multiple clients that have a different order in the preferred locations and then spread the load that way in different regions.\nHowever, unless your apps are deployed on those different regions and latency is less, there is no point in doing so since Cosmos DB will be able to cope with all the requests in a single region as long as you have the RUs needed.","Q_Score":0,"Tags":"python,rest,azure-cosmosdb","A_Id":57185813,"CreationDate":"2019-07-24T13:11:00.000","Title":"How to send a query or stored procedure execution request to a specific location\/region of cosmosdb?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"A project I recently joined, for various reasons, decided not to use Django migration system and uses our own system (which is similar enough to Django's that we could possibly automate translations)\nPrimary Question\nIs it possible to start using Django's migration system now?\nMore Granular Question(s)\nIdeally, we'd like to find some way of saying \"all our tables and models are in-sync (i.e. there is no need to create and apply any migrations), Django does not need to produce any migrations for any existing model, only for changes we make.\n\n\nIs it possible to do this?\n\nIs it simply a case of \"create the django migration table, generate migrations (necessary?), and manually update the migration table to say that they've all been ran\"?\n\nWhere can I find more information for how to go about doing this? Are there any examples of people doing this in the past?\n\n\nRegarding SO Question Rules\nI didn't stop to think for very long about whether or not this is an \"acceptable\" question to ask on SO. I assume that it isn't due to the nature of the question not having a clear, objective set of criteria for a correct answer. however, I think that this problem is surely common enough, that it could provide an extremely valuable resource for anyone in my shoes in the future. 
Please consider this before voting to remove.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":143,"Q_Id":57209118,"Users Score":1,"Answer":"I think you should probably be able to do manage.py makemigrations (you might need to use each app name the first time) which will create the migrations files. You should then be able to do manage.py migrate --fake which will mimic the migration run without actually impacting your tables. \nFrom then on (for future changes), you would run makemigrations and migrate as normal.","Q_Score":0,"Tags":"python,django,django-models,orm,django-migrations","A_Id":57209384,"CreationDate":"2019-07-25T19:40:00.000","Title":"Is it possible to start using Django's migration system after years of not using it?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have a table I need to add columns to it, one of them is a column that dictates business logic. So think of it as a \"priority\" column, and it has to be unique and a integer field. It cannot be the primary key but it is unique for business logic purposes.\nI've searched the docs but I can't find a way to add the column and add default (say starting from 1) values and auto increment them without setting this as a primarykey..\nThus creating the field like\n\nexample_column = IntegerField(null=False, db_column='PriorityQueue',default=1)\n\nThis will fail because of the unique constraint. I should also mention this is happening when I'm migrating the table (existing data will all receive a value of '1')\nSo, is it possible to do the above somehow and get the column to auto increment?","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":252,"Q_Id":57209258,"Users Score":1,"Answer":"It should definitely be possible, especially outside of peewee. You can definitely make a counter that starts at 1 and increments to the stop and at the interval of your choice with range(). You can then write each incremented variable to the desired field in each row as you iterate through.","Q_Score":1,"Tags":"python,python-3.x,peewee","A_Id":57209625,"CreationDate":"2019-07-25T19:50:00.000","Title":"Peewee incrementing an integer field without the use of primary key during migration","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have a table I need to add columns to it, one of them is a column that dictates business logic. So think of it as a \"priority\" column, and it has to be unique and a integer field. It cannot be the primary key but it is unique for business logic purposes.\nI've searched the docs but I can't find a way to add the column and add default (say starting from 1) values and auto increment them without setting this as a primarykey..\nThus creating the field like\n\nexample_column = IntegerField(null=False, db_column='PriorityQueue',default=1)\n\nThis will fail because of the unique constraint. 
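A sketch of the counter idea from the peewee answer above (the model, field and database names are assumptions): backfill a distinct value per row, then enforce the unique constraint afterwards.

from peewee import SqliteDatabase, Model, CharField, IntegerField

db = SqliteDatabase("app.db")  # assumed database

class Item(Model):  # hypothetical model gaining the priority column
    name = CharField()
    priority = IntegerField(null=True)  # nullable while backfilling; add uniqueness afterwards

    class Meta:
        database = db

# Assign 1, 2, 3, ... so every existing row ends up with a distinct value
with db.atomic():
    for n, item in enumerate(Item.select().order_by(Item.id), start=1):
        item.priority = n
        item.save()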
I should also mention this is happening when I'm migrating the table (existing data will all receive a value of '1')\nSo, is it possible to do the above somehow and get the column to auto increment?","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":252,"Q_Id":57209258,"Users Score":0,"Answer":"Depends on your database, but postgres uses sequences to handle this kind of thing. Peewee fields accept a sequence name as an initialization parameter, so you could pass it in that manner.","Q_Score":1,"Tags":"python,python-3.x,peewee","A_Id":57210489,"CreationDate":"2019-07-25T19:50:00.000","Title":"Peewee incrementing an integer field without the use of primary key during migration","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have a list of urls and many of them are invalid. When I use scrapy to crawl, the engine will automatically filter those urls with a 404 status code, but some urls' status codes aren't 404 and they will be crawled, so when I open one, it says something like there's nothing here or the domain has been changed, etc. Can someone let me know how to filter these types of invalid urls?","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1284,"Q_Id":57216139,"Users Score":0,"Answer":"In your callback (e.g. parse) implement checks that detect those cases of 200 responses that are not valid, and exit the callback right away (return) when you detect one of those responses.","Q_Score":0,"Tags":"python,scrapy,web-crawler","A_Id":57308027,"CreationDate":"2019-07-26T08:32:00.000","Title":"How to check if a url is valid in Scrapy?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"Settings.py DEBUG=True\nBut the django web application shows Server Error 500.\nI need to see the error pages to debug what is wrong on the production server. \nThe web application works fine in the development server offline.\nThe Google logs do not show detailed errors; they only show the HTTP code of the request.","AnswerCount":4,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":409,"Q_Id":57234950,"Users Score":0,"Answer":"Thank you all for replying to my question. The project had prod.py (production settings file, DEBUG=False) and dev.py (development settings file). When python manage.py is called, it directly calls dev.py (DEBUG=True). However, when I push to Google App Engine, main.py is used to specify how to run the application. main.py calls wsgi.py, which calls prod.py (DEBUG=False). This is why the Django error pages were not showing. I really appreciate you all. 
VictorTorres, Mahirq9 and ParthS007","Q_Score":0,"Tags":"python,django,google-compute-engine","A_Id":57312444,"CreationDate":"2019-07-27T18:25:00.000","Title":"Unable to view django error pages on Google Cloud web app","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":1,"REVIEW":0},{"Question":"I have a paper on which there are scans of documents, I use tesseract to recognize the text, but sometimes the images are in the wrong orientation, then I cut these documents from the sheet and work with each one individually, but I need to turn them in the correct position, how to do it?","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":206,"Q_Id":57240300,"Users Score":1,"Answer":"If all scans are in same orientation on the paper, then you can always try rotating it in reverse if tesseract is causing the problem in reading. If individual scans can be in arbitrary orientation, then you will have to use the same method on individual scans instead.","Q_Score":0,"Tags":"python,opencv,image-processing","A_Id":57240854,"CreationDate":"2019-07-28T11:14:00.000","Title":"How to turn the image in the correct orientation?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":1,"ERRORS":0,"REVIEW":0},{"Question":"I have a paper on which there are scans of documents, I use tesseract to recognize the text, but sometimes the images are in the wrong orientation, then I cut these documents from the sheet and work with each one individually, but I need to turn them in the correct position, how to do it?","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":206,"Q_Id":57240300,"Users Score":1,"Answer":"I\u2019m not sure if there are simple ways, but you can rotate the document after you do not find adequate characters in it, if you see letters, then the document is in the correct orientation.\nAs I understand it, you use a parser, so the check can be very simple, if there are less than 5 keys, then the document is turned upside down incorrectly","Q_Score":0,"Tags":"python,opencv,image-processing","A_Id":58587214,"CreationDate":"2019-07-28T11:14:00.000","Title":"How to turn the image in the correct orientation?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":1,"ERRORS":0,"REVIEW":0},{"Question":"I have my own TensorFlow serving server for multiple neural networks. Now I want to estimate the load on it. Does somebody know how to get the current number of requests in a queue in TensorFlow serving? 
I tried using Prometheus, but there is no such option.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":1082,"Q_Id":57256298,"Users Score":0,"Answer":"What's more, you can set the number of threads with the --rest_api_num_threads flag, or leave it unset and let TF Serving configure it automatically.","Q_Score":2,"Tags":"python-3.x,tensorflow,prometheus,tensorflow-serving","A_Id":57549978,"CreationDate":"2019-07-29T14:50:00.000","Title":"Tensorflow Serving number of requests in queue","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have my own TensorFlow serving server for multiple neural networks. Now I want to estimate the load on it. Does somebody know how to get the current number of requests in a queue in TensorFlow serving? I tried using Prometheus, but there is no such option.","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":1082,"Q_Id":57256298,"Users Score":1,"Answer":"Actually, TF Serving doesn't have a request queue, which means that it won't queue up requests if there are too many. \nThe only thing TF Serving does is allocate a thread pool when the server is initialized.\nWhen a request comes in, TF Serving uses an idle thread to handle it; if there are no free threads, TF Serving returns an 'unavailable' error, and the client should retry later.\nYou can find this information in the comments of tensorflow_serving\/batching\/streaming_batch_scheduler.h","Q_Score":2,"Tags":"python-3.x,tensorflow,prometheus,tensorflow-serving","A_Id":57549954,"CreationDate":"2019-07-29T14:50:00.000","Title":"Tensorflow Serving number of requests in queue","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have a large pandas DataFrame consisting of some 100k rows and ~100 columns with different dtypes and arbitrary content.\nI need to assert that it does not contain a certain value, let's say -1.\nUsing assert( not (any(test1.isin([-1]).sum()>0))) results in processing time of some seconds. 
\nAny idea how to speed it up?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":43,"Q_Id":57288507,"Users Score":1,"Answer":"Just to make a full answer out of my comment:\nWith -1 not in test1.values you can check if -1 is in your DataFrame.\nRegarding the performance, this still needs to check every single value, which is in your case \n10^5*10^2 = 10^7.\nYou only save with this the performance cost for summation and an additional comparison of these results.","Q_Score":1,"Tags":"python-3.x,pandas","A_Id":57290376,"CreationDate":"2019-07-31T10:20:00.000","Title":"speed up pandas search for a certain value not in the whole df","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":1,"REVIEW":0},{"Question":"Trying to figure out how to make python play mp3s whenever a tag's text changes on an Online Fantasy Draft Board (ClickyDraft).\nI know how to scrape elements from a website with python & beautiful soup, and how to play mp3s. But how do you think can I have it detect when a certain element changes so it can play the appropriate mp3?\nI was thinking of having the program scrape the site every 0.5seconds to detect the changes,\nbut I read that that could cause problems? Is there any way of doing this?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":40,"Q_Id":57291451,"Users Score":0,"Answer":"The only way is too scrape the site on a regular basis. 0.5s is too fast. I don't know how time sensitive this project is. But scraping every 1\/5\/10 minute is good enough. If you need it quicker, just get a proxy (plenty of free ones out there) and you can scrape the site more often.\nJust try respecting the site, Don't consume too much of the sites ressources by requesting every 0.5 seconds","Q_Score":1,"Tags":"python,html,beautifulsoup,mp3","A_Id":57292691,"CreationDate":"2019-07-31T13:05:00.000","Title":"Is it possible to write a Python web scraper that plays an mp3 whenever an element's text changes?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am working on Django app on branch A with appdb database in settings file. Now I need to work on another branch(B) which has some new DB changes(eg. new columns, etc). The easiest for me is to point branch B to a different DB by changing the settings.py and then apply the migrations. I did the migrations but I am getting error like 1146, Table 'appdb_b.django_site' doesn't exist. 
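A quick sketch of the membership check from the pandas answer above (the DataFrame is a toy stand-in for the 100k x 100 frame):

import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.randint(0, 10, size=(100000, 100)))  # toy stand-in

assert -1 not in df.values          # the check suggested above
assert not (df.values == -1).any()  # equivalent NumPy formulation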
So how can I use a different DB for my branchB code without dropping database appdb?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":43,"Q_Id":57294233,"Users Score":1,"Answer":"The existing migration files have information that causes the migrate command to believe that the tables should exist and so it complains about them not existing.\nYou need to MOVE the migration files out of the migrations directory (everything except init.py) and then do a makemigrations and then migrate.","Q_Score":0,"Tags":"python,django","A_Id":57296935,"CreationDate":"2019-07-31T15:27:00.000","Title":"How to point Django app to new DB without dropping the previous DB?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":1,"REVIEW":0},{"Question":"I was learning how to play music using selenium so I wrote a program which would be used as a module to play music. Unfortunately I exited the python shell without exiting the headless browser and now the song is continuously playing.\nCould someone tell me how I can find the current headless browser and exit it?","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":84,"Q_Id":57305632,"Users Score":1,"Answer":"You need to include in your script to stop the music before closing the session of your headless browser.","Q_Score":0,"Tags":"python-3.x,selenium-webdriver","A_Id":57306386,"CreationDate":"2019-08-01T09:16:00.000","Title":"Stop music from playing in headless browser","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I was learning how to play music using selenium so I wrote a program which would be used as a module to play music. Unfortunately I exited the python shell without exiting the headless browser and now the song is continuously playing.\nCould someone tell me how I can find the current headless browser and exit it?","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":84,"Q_Id":57305632,"Users Score":1,"Answer":"If you are on a Linux box, You can easily find the process Id with ps aux| grep chrome command and Kill it. 
If you are on Windows kill the process via Task Manager","Q_Score":0,"Tags":"python-3.x,selenium-webdriver","A_Id":57308855,"CreationDate":"2019-08-01T09:16:00.000","Title":"Stop music from playing in headless browser","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I'm trying to install new python modules on my computer and I know how to install through the terminal, but I wish to know if there is a way to install a new module directly through VSCode (like it is possible on PyCharm)?\nI already installed through the terminal, it isn't a problem, but I want to install without be obligate to open the terminal when I'm working on VSCode.","AnswerCount":3,"Available Count":1,"Score":0.1325487884,"is_accepted":false,"ViewCount":41844,"Q_Id":57310009,"Users Score":2,"Answer":"Unfortunately! for now, only possible way is terminal.","Q_Score":12,"Tags":"python,python-3.x,visual-studio-code,vscode-settings,python-module","A_Id":57310616,"CreationDate":"2019-08-01T13:21:00.000","Title":"How to install a new python module on VSCode?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am working on an object detection model. I have annotated images whose values are stored in a data frame with columns (filename,x,y,w,h, class). I have my images inside \/drive\/mydrive\/images\/ directory. I have saved the data frame into a CSV file in the same directory. So, now I have annotations in a CSV file and images in the images\/ directory. \nI want to feed this CSV file as the ground truth along with the image so that when the bounding boxes are recognized by the model and it learns contents of the bounding box.\nHow do I feed this CSV file with the images to the model so that I can train my model to detect and later on use the same to predict bounding boxes of similar images?\nI have no idea how to proceed.\nI do not get an error. I just want to know how to feed the images with bounding boxes so that the network can learn those bounding boxes.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":75,"Q_Id":57315783,"Users Score":0,"Answer":"We need to feed the bounding boxes to the loss function. We need to design a custom loss function, preprocess the bounding boxes and feed it back during back propagation.","Q_Score":1,"Tags":"python,tensorflow,keras,computer-vision,object-detection","A_Id":57368538,"CreationDate":"2019-08-01T19:24:00.000","Title":"feeding annotations as ground truth along with the images to the model","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":1,"REVIEW":0},{"Question":"I need some python advice to implement an algorithm.\nWhat I need is to detect which words from text 1 are in text 2:\n\nText 1: \"Mary had a dog. The dog's name was Ethan. 
He used to run down\n the meadow, enjoying the flower's scent.\"\nText 2: \"Mary had a cat. The cat's name was Coco. He used to run down\n the street, enjoying the blue sky.\"\n\nI'm thinking I could use some pandas datatype to check repetitions, but I'm not sure.\nAny ideas on how to implement this would be very helpful. Thank you very much in advance.","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":77,"Q_Id":57328345,"Users Score":0,"Answer":"You can use dictionary to first store words from first text and than just simply look up while iterating the second text. But this will take space.\nSo best way is to use regular expressions.","Q_Score":0,"Tags":"python,algorithm","A_Id":57335390,"CreationDate":"2019-08-02T14:20:00.000","Title":"Detecting which words are the same between two pieces of text","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I need some python advice to implement an algorithm.\nWhat I need is to detect which words from text 1 are in text 2:\n\nText 1: \"Mary had a dog. The dog's name was Ethan. He used to run down\n the meadow, enjoying the flower's scent.\"\nText 2: \"Mary had a cat. The cat's name was Coco. He used to run down\n the street, enjoying the blue sky.\"\n\nI'm thinking I could use some pandas datatype to check repetitions, but I'm not sure.\nAny ideas on how to implement this would be very helpful. Thank you very much in advance.","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":77,"Q_Id":57328345,"Users Score":0,"Answer":"Since you do not show any work of your own, I'll just give an overall algorithm.\nFirst, split each text into its words. This can be done in several ways. You could remove any punctuation then split on spaces. You need to decide if an apostrophe as in dog's is part of the word--you probably want to leave apostrophes in. But remove periods, commas, and so forth.\nSecond, place the words for each text into a set.\nThird, use the built-in set operations to find which words are in both sets.\nThis will answer your actual question. If you want a different question that involves the counts or positions of the words, you should make that clear.","Q_Score":0,"Tags":"python,algorithm","A_Id":57328491,"CreationDate":"2019-08-02T14:20:00.000","Title":"Detecting which words are the same between two pieces of text","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I've tried adding highlight_language and pygments_style in the config.py and also tried various ways I found online inside the .rst file. 
Can anyone offer any advice on how to get the syntax highlighting working?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":99,"Q_Id":57340091,"Users Score":0,"Answer":"Sorry, it turns out that program arguments aren't highlighted (the test I was using)","Q_Score":0,"Tags":"python-sphinx,pygments","A_Id":57341047,"CreationDate":"2019-08-03T16:17:00.000","Title":"I can not get pigments highlighting for Python to work in my Sphinx documentation","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am currently developing mobile applications in Kivy. I would like to create an app to aid in the development process. This app would download an APK file from a network location and install\/run it. I know how to download files of course. How can I programmatically install and run an Android APK file in Kivy\/Android\/Python3?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":33,"Q_Id":57343625,"Users Score":0,"Answer":"Look up how you would do it in Java, then you should be able to do it from Kivy using Pyjnius.","Q_Score":0,"Tags":"python,android,python-3.x,kivy","A_Id":57345743,"CreationDate":"2019-08-04T03:43:00.000","Title":"Install & run an extra APK file with Kivy","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"Given a Java string and an offset into that String, what is the correct way of calculating the offset of that same location into an UTF8 string?\nMore specifically, given the offset of a valid codepoint in the Java string, how can one map that offset to a new offset of that codepoint in a Python 3 string? And vice versa?\nIs there any library method which already provides the mapping between Java String offsets and Python 3 string offsets?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":315,"Q_Id":57345598,"Users Score":0,"Answer":"No, there cannot be. UTF-16 uses a varying number of code units per codepoint and so does UTF-8. So, the indices are entirely dependent on the codepoints in the string. You have to scan the string and count. \nThere are relationships between the encodings, though. A codepoint has two UTF-16 code units if and only if it has four UTF-8 code units. 
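The relationship just stated is easy to verify in Python 3, which also shows how to turn a codepoint index into a UTF-8 offset by encoding a prefix (a sketch; Python 3 string indices count codepoints):

s = "a€𝄞"  # 1-, 3- and 4-byte characters in UTF-8; the last is a surrogate pair in UTF-16

for ch in s:
    utf8_units = len(ch.encode("utf-8"))
    utf16_units = len(ch.encode("utf-16-le")) // 2
    print(ch, utf8_units, utf16_units)  # 4 UTF-8 bytes <=> 2 UTF-16 units

def utf8_offset(s, codepoint_index):
    # UTF-8 offset corresponding to a Python (codepoint) index
    return len(s[:codepoint_index].encode("utf-8"))

print(utf8_offset(s, 2))  # 1 + 3 = 4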
So, an algorithm could tally UTF-8 code units by scanning UTF-16 code points: 4 for a high surrogate, 0 for a low surrogate, 3 for BMP code points U+0800 and above, 2 for U+0080 through U+07FF, and 1 for ASCII.","Q_Score":3,"Tags":"java,python-3.x,unicode,utf-8,utf-16","A_Id":57348500,"CreationDate":"2019-08-04T09:57:00.000","Title":"Java code to convert between UTF8 and UTF16 offsets (Java string offsets to\/from Python 3 string offsets)","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am wondering why, when I use list(dictionary), it only returns keys and not their definitions into a list?\nFor example, I import a glossary with terms and definitions into a dictionary using CSV reader, then use the built-in list() function to convert the dictionary to a list, and it only returns keys in the list.\nIt's not really an issue as it actually allows my program to work well; I was just wondering if that is how it is supposed to behave?\nMany thanks for any help.","AnswerCount":4,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":262,"Q_Id":57346935,"Users Score":0,"Answer":"In short: in essence it works that way because it was designed that way. It makes sense, however, if we take into account that x in some_dict performs a membership check on the dictionary keys.\n\nFrequently Python code iterates over a collection and does not know the type of the collection it iterates over: it can be a list, tuple, set, dictionary, range object, etc.\nThe question is, do we see a dictionary as a collection, and if yes, a collection of what? If we want to treat it as a collection, there are basically three logical answers to the second question: we can see it as a collection of the keys, of the values, or of key-value pairs. Keys and key-value pairs are especially popular. C#, for example, sees a dictionary as a collection of KeyValuePairs. Python provides the .values() and .items() methods to iterate over the values and key-value pairs.\nDictionaries are mainly designed to perform a fast lookup for a key and retrieve the corresponding value. Therefore some_key in some_dict would be a sensible query, or (some_key, some_value) in some_dict, since the latter could check if the key is in the dictionary and then check if it matches some_value. The latter is however less flexible, since often we are not interested in the corresponding value; we simply want to check if the dictionary contains a certain key. Furthermore, we cannot support both use cases concurrently: if the dictionary contained 2-tuples as keys, it would be ambiguous whether (1, 2) in some_dict means that key 1 maps to value 2, or that (1, 2) is a key in the dictionary.\nSince the designers of Python decided to define the membership check on dictionaries in terms of the keys, it makes more sense to make a dictionary an iterable over its keys. Indeed, one usually expects that if x in some_iterable holds, then x in list(some_iterable) should hold as well. If iterating a dictionary returned 2-tuples of key-value pairs (like C# does), then a list of these 2-tuples would not be in harmony with the membership check on the dictionary itself. 
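A quick demonstration of the point being made, including the .values() and .items() alternatives mentioned above (a toy glossary):

d = {"python": "a programming language", "pandas": "a data analysis library"}

print(list(d))           # ['python', 'pandas'] - keys only, same as list(d.keys())
print(list(d.values()))  # the definitions
print(list(d.items()))   # the (term, definition) pairs

print("python" in d)        # True - membership checks the keys
print("python" in list(d))  # True - consistent with the line above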
Indeed, if 2 in some_dict held, then 2 in list(some_dict) would fail.","Q_Score":2,"Tags":"python","A_Id":57349962,"CreationDate":"2019-08-04T13:24:00.000","Title":"Why converting dictionaries to lists only returns keys?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I'm a bit clueless. I have a csv file with these columns: name - picture url\nI would like to bulk download the 70k images into a folder, rename the images with the name in the first column and number them if there is more than one per name.\nSome are jpegs, some are pngs. \nI'm guessing I need to use pandas to get the data from the csv, but I don't know how to do the downloading\/renaming part without starting all the downloads at the same time, which will for sure crash my computer (It did, I wasn't even mad).\nThanks in advance for any light you can shed on this.","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":642,"Q_Id":57346966,"Users Score":1,"Answer":"Try downloading in batches of, say, 500 images, then sleep for about a second and loop. It is quite time-consuming, but a sure-fire method. For the code, you can explore packages like urllib (for the downloading), and as soon as you download a file, use os.rename() to change its name. As you already know, use pandas for the csv file. A sketch is included below.","Q_Score":0,"Tags":"python,image,url,download,bulk","A_Id":57347119,"CreationDate":"2019-08-04T13:31:00.000","Title":"How do I bulk download images (70k) from urls with a restriction on the simultaneous downloads?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have an image stored in a 2D array called data. I know how to calculate the standard deviation of the entire array using numpy; that outputs one number quantifying how much the data is spread. However, how can I make a standard deviation map (of the same size as my image array) where each element in this array is the standard deviation of the corresponding pixel in the image array (i.e., data).","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":5088,"Q_Id":57351759,"Users Score":1,"Answer":"Use slicing: given images[num, width, height] you may calculate the std. deviation of a single image using images[n].std(), or for a single pixel across all images: images[:, x, y].std()","Q_Score":0,"Tags":"python,arrays,numpy,standard-deviation","A_Id":57351808,"CreationDate":"2019-08-05T03:07:00.000","Title":"Standard Deviation of every pixel in an image in Python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I'm using PySpark and I need to convert each row in a DataFrame to a JSON file (in s3), preferably naming the file using the value of a selected column.\nCouldn't find how to do that. 
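For the bulk image download question above, a minimal sketch of that batched approach (assuming hypothetical CSV columns name and url; the downloads within a batch run sequentially, so they never pile up):

```python
import os
import time
import pandas as pd
from urllib.request import urlretrieve

df = pd.read_csv("images.csv")          # hypothetical file with 'name' and 'url' columns
os.makedirs("downloads", exist_ok=True)

counts = {}  # number images that share the same name
for start in range(0, len(df), 500):    # batches of 500
    for _, row in df.iloc[start:start + 500].iterrows():
        n = counts.get(row["name"], 0) + 1
        counts[row["name"]] = n
        ext = os.path.splitext(row["url"])[1] or ".jpg"  # keep .jpg / .png from the URL
        suffix = f"_{n}" if n > 1 else ""
        urlretrieve(row["url"], os.path.join("downloads", f"{row['name']}{suffix}{ext}"))
    time.sleep(1)  # pause between batches
```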
Any help will be very much appreciated.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":166,"Q_Id":57353211,"Users Score":0,"Answer":"I don't think we can directly store each row as a separate JSON file. Instead, we can iterate over each partition of the dataframe and connect to S3 using an AWS S3 library (so the connection is made at the partition level). Then, on each partition, with the help of the iterator, we can convert each row into a JSON document and push it to S3. A sketch follows below.","Q_Score":0,"Tags":"python,apache-spark,amazon-s3,pyspark,pyspark-sql","A_Id":57353337,"CreationDate":"2019-08-05T06:28:00.000","Title":"Convert each row in a PySpark DataFrame to a file in s3","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"in a standard neural network, I'm trying to understand, intuitively, what the values of a hidden layer mean in the model.\nI understand the calculation steps, but I still don't know how to think about the hidden layers and how to interpret the results (of the hidden layer)\nSo for example, given the standard MNIST dataset that is used to train and predict handwritten digits between 0 to 9, a model would look like this:\n\nAn image of a handwritten digit will have 784 pixels.\nSince there are 784 pixels, there would be 784 input nodes and the value of each node is the pixel intensity (0-255)\neach node branches out and these branches are the weights.\nMy next layer is my hidden layer, and the value of a given node in the hidden layer is the weighted sum of my input nodes (pixels*weights).\nWhatever value I get, I squash it with a sigmoid function and I get a value between 0 and 1.\n\nThat number that I get from the sigmoid. What does it represent exactly and why is it relevant? My understanding is that if I want to build more hidden layers, I'll be using the values of my initial hidden layer, but at this point, I'm stuck as to what the values of the first hidden layer mean exactly.\nThank you!","AnswerCount":3,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":395,"Q_Id":57369148,"Users Score":0,"Answer":"AFAIK, for this digit recognition case, one way to think about it is that each level of the hidden layers represents a level of abstraction.\nFor now, imagine the neural network for digit recognition has only 3 layers: 1 input layer, 1 hidden layer and 1 output layer.\nLet's take a look at a number. To recognise that it is a number, we can break the picture of the number into a few more abstract concepts such as lines, circles and arcs. If we want to recognise 6, we can first recognise the more abstract concepts that exist in the picture: for 6 it would be an arc and a circle in this example. For 8 it would be 2 circles. For 1 it would be a line.\nIt is the same for a neural network. We can think of layer 1 for pixels, layer 2 for recognising the abstract concepts we talked about earlier, such as lines, circles and arcs, and finally in layer 3 we determine which number it is.\nHere we can see that the input goes through a series of layers from the least abstract to the most abstract (pixels -> line, circle, arcs -> number). 
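Returning to the PySpark question above: a minimal sketch of the per-partition approach (assuming boto3 is available on the executors; the bucket name and the id column used to name each file are hypothetical):

```python
import json
import boto3

def write_partition(rows):
    # One S3 client per partition, created on the executor.
    s3 = boto3.client("s3")
    for row in rows:
        d = row.asDict()
        s3.put_object(
            Bucket="my-bucket",              # hypothetical bucket
            Key=f"rows/{d['id']}.json",      # named after a selected column
            Body=json.dumps(d),
        )

df.rdd.foreachPartition(write_partition)     # df is the DataFrame in question
```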
In this example we only have 1 hidden layer, but in a real implementation it would be better to have more than 1 hidden layer, depending on your interpretation of the neural network. Sometimes we don't even have to think about what each layer represents and can let the training do it for us. That is the purpose of the training anyway.","Q_Score":0,"Tags":"python,tensorflow,neural-network","A_Id":57369487,"CreationDate":"2019-08-06T04:52:00.000","Title":"what do hidden layers mean in a neural network?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"in a standard neural network, I'm trying to understand, intuitively, what the values of a hidden layer mean in the model.\nI understand the calculation steps, but I still don't know how to think about the hidden layers and how to interpret the results (of the hidden layer)\nSo for example, given the standard MNIST dataset that is used to train and predict handwritten digits between 0 to 9, a model would look like this:\n\nAn image of a handwritten digit will have 784 pixels.\nSince there are 784 pixels, there would be 784 input nodes and the value of each node is the pixel intensity (0-255)\neach node branches out and these branches are the weights.\nMy next layer is my hidden layer, and the value of a given node in the hidden layer is the weighted sum of my input nodes (pixels*weights).\nWhatever value I get, I squash it with a sigmoid function and I get a value between 0 and 1.\n\nThat number that I get from the sigmoid. What does it represent exactly and why is it relevant? My understanding is that if I want to build more hidden layers, I'll be using the values of my initial hidden layer, but at this point, I'm stuck as to what the values of the first hidden layer mean exactly.\nThank you!","AnswerCount":3,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":395,"Q_Id":57369148,"Users Score":0,"Answer":"Consider a very basic example of the AND, OR, NOT and XOR functions.\nYou may already know that a single neuron is only suitable when the problem is linearly separable.\nHere, the AND, OR and NOT functions are linearly separable and so they can be easily handled using a single neuron.\nBut consider the XOR function. It is not linearly separable. So a single neuron will not be able to predict the value of the XOR function.\nNow, the XOR function is a combination of AND, OR and NOT. 
The equation below gives the relation between them:\n\na XOR b = (a AND (NOT b)) OR ((NOT a) AND b)\n\nSo, for XOR, we can use a network which contains three layers.\nThe first layer will act as the NOT function, the second layer will act as the AND of the output of the first layer, and finally the output layer will act as the OR of the 2nd hidden layer.\nNote: This is just an example to explain why it is needed; XOR can be implemented with various other combinations of neurons.","Q_Score":0,"Tags":"python,tensorflow,neural-network","A_Id":57369294,"CreationDate":"2019-08-06T04:52:00.000","Title":"what do hidden layers mean in a neural network?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"in a standard neural network, I'm trying to understand, intuitively, what the values of a hidden layer mean in the model.\nI understand the calculation steps, but I still don't know how to think about the hidden layers and how to interpret the results (of the hidden layer)\nSo for example, given the standard MNIST dataset that is used to train and predict handwritten digits between 0 to 9, a model would look like this:\n\nAn image of a handwritten digit will have 784 pixels.\nSince there are 784 pixels, there would be 784 input nodes and the value of each node is the pixel intensity (0-255)\neach node branches out and these branches are the weights.\nMy next layer is my hidden layer, and the value of a given node in the hidden layer is the weighted sum of my input nodes (pixels*weights).\nWhatever value I get, I squash it with a sigmoid function and I get a value between 0 and 1.\n\nThat number that I get from the sigmoid. What does it represent exactly and why is it relevant? My understanding is that if I want to build more hidden layers, I'll be using the values of my initial hidden layer, but at this point, I'm stuck as to what the values of the first hidden layer mean exactly.\nThank you!","AnswerCount":3,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":395,"Q_Id":57369148,"Users Score":0,"Answer":"A hidden layer in a neural network may be understood as a layer that is neither an input nor an output, but instead is an intermediate step in the network's computation.\nIn your MNIST case, the network's state in the hidden layer is a processed version of the inputs, a reduction from full digits to abstract information about those digits.\nThis idea extends to all other hidden layer cases you'll encounter in machine learning -- a second hidden layer is an even more abstract version of the input data, a recurrent neural network's hidden layer is an interpretation of the inputs that happens to collect information over time, and the hidden state in a convolutional neural network is an interpreted version of the input with certain features isolated through the process of convolution.\nTo reiterate, a hidden layer is an intermediate step in your neural network's process. 
The information in that layer is an abstraction of the input, and holds information required to solve the problem at the output.","Q_Score":0,"Tags":"python,tensorflow,neural-network","A_Id":57369365,"CreationDate":"2019-08-06T04:52:00.000","Title":"what do hidden layers mean in a neural network?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am doing OCR on a raw PDF file, wherein I am converting it into png images and doing OCR on those. My objective is to extract coordinates for a certain keyword from the png and showcase those coordinates on the actual raw pdf.\nI have already tried showing those coordinates on the png images using opencv, but I am not able to showcase those coordinates on the actual raw pdf since the coordinate systems of the two formats are different. Can anyone please help me with how to showcase a bounding box on the actual raw pdf based on the coordinates generated from the png images.","AnswerCount":1,"Available Count":1,"Score":-0.1973753202,"is_accepted":false,"ViewCount":303,"Q_Id":57373489,"Users Score":-1,"Answer":"All you need to do is map the coordinates of the OCR token (which would be given for the image) to those of the pdf page. \nFor instance, \nimage_dimensions = [1800, 2400] # width, height\npdf_page_dimension = [595, 841] # these are the dimensions of the specific page of the pdf\nAssuming, on OCRing the image, a word has coordinates = [400, 700, 450, 720], the same can be rendered on the pdf by multiplying them with the scale on each axis:\nx_scale = pdf_page_dimension[0] \/ image_dimensions[0]\ny_scale = pdf_page_dimension[1] \/ image_dimensions[1]\nscaled_coordinates = [400*x_scale, 700*y_scale, 450*x_scale, 720*y_scale]\nPdf page dimensions can be obtained from any of the packages: poppler, pdfparser, pdfminer, pdfplumber","Q_Score":0,"Tags":"python,opencv,nlp,ocr,tesseract","A_Id":61167395,"CreationDate":"2019-08-06T09:58:00.000","Title":"Showing text coordinate from png on raw .pdf file","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I'm currently trying to use HDBSCAN to cluster movie data. The goal is to cluster similar movies together (based on movie info like keywords, genres, actor names, etc) and then apply LDA to each cluster and get the representative topics. However, I'm having a hard time evaluating the results (apart from visual analysis, which is not great as the data grows). With LDA, although it's hard to evaluate it, I've been using the coherence measure. However, does anyone have any idea on how to evaluate the clusters made by HDBSCAN? I haven't been able to find much info on it, so if anyone has any idea, I'd very much appreciate it!","AnswerCount":3,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":2279,"Q_Id":57377594,"Users Score":2,"Answer":"It's the same problem everywhere in unsupervised learning.\nIt is unsupervised; you are trying to discover something new and interesting. There is no way for the computer to decide whether something is actually interesting or new. 
It can decide only trivial cases, where the prior knowledge is coded in machine-processable form already, and you can compute some heuristic values as a proxy for interestingness. But such measures (including density-based measures such as DBCV) are in no way better at judging this than the clustering algorithm itself is at choosing the \"best\" solution.\nBut in the end, there is no way around manually looking at the data, and doing the next steps - trying to put into use what you learned from the data. Presumably you are not an ivory-tower academic just doing this to make up yet another useless method... So use it, don't fake using it.","Q_Score":2,"Tags":"python,cluster-analysis,evaluation,hdbscan","A_Id":57383549,"CreationDate":"2019-08-06T13:48:00.000","Title":"How to evaluate HDBSCAN text clusters?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I'm looking for the best way to perform ETL using Python.\nI have a channel in RabbitMQ which sends events (possibly even every second). \nI want to process them in batches of 1000.\nThe main problem is that the RabbitMQ interface (I'm using pika) raises a callback upon every message.\nI looked at the Celery framework, however the batch feature was deprecated in version 3.\nWhat is the best way to do it? I'm thinking about saving my events in a list, and when it reaches 1000, copying it to another list and performing my processing. However, how do I make it thread-safe? I don't want to lose events, and I'm afraid of losing events while synchronising the list.\nIt sounds like a very simple use-case, however I didn't find any good best practice for it.","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":5901,"Q_Id":57378832,"Users Score":1,"Answer":"First of all, you should not \"batch\" messages from RabbitMQ unless you really have to. The most efficient way to work with messaging is to process each message independently. \nIf you need to combine messages in a batch, I would use a separate data store to temporarily store the messages, and then process them when they reach a certain condition. Each time you add an item to the batch, you check that condition (for example, you reached 1000 messages) and trigger the processing of the batch. \nThis is better than keeping a list in memory, because if your service dies, the messages will still be persisted in the database. \nNote: If you have a single processor per queue, this can work without any synchronization mechanism. If you have multiple processors, you will need to implement some sort of locking mechanism. A sketch of a simple batching consumer is included below.","Q_Score":3,"Tags":"python-3.x,rabbitmq,etl","A_Id":57414807,"CreationDate":"2019-08-06T14:54:00.000","Title":"Best Practice for Batch Processing with RabbitMQ","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":1},{"Question":"I see many posts on 'how to run nosetests', but none on how to make pycharm let you run a script without nosetests. 
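For the RabbitMQ batching question above, a minimal in-memory sketch with pika (a single consumer; acks are deferred until the batch is processed, so unprocessed events are redelivered if the service dies):

```python
import pika

BATCH_SIZE = 1000
batch = []  # (delivery_tag, body) pairs

def process_batch(ch):
    events = [body for _, body in batch]
    # ... run the ETL step on `events` here ...
    ch.basic_ack(delivery_tag=batch[-1][0], multiple=True)  # ack the whole batch
    batch.clear()

def on_message(ch, method, properties, body):
    batch.append((method.delivery_tag, body))
    if len(batch) >= BATCH_SIZE:
        process_batch(ch)

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.basic_qos(prefetch_count=BATCH_SIZE)  # allow 1000 unacked messages in flight
channel.basic_consume(queue="events", on_message_callback=on_message)
channel.start_consuming()
```

Since pika's BlockingConnection dispatches all callbacks on one thread, the list needs no locking in this single-consumer setup.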
And yet, I seem to only be able to run or debug 'Nosetests test_splitter.py' and not just 'test_splitter.py'!\nI'm relatively new to pycharm, and despite going through the documentation, I don't quite understand what nosetests are about and whether they would be preferable for testing my script. But I get an error\n\nModuleNotFoundError: No module named 'nose'\nProcess finished with exit code 1\nEmpty suite\n\nI don't have administrative access, so I cannot download nosetests, if anyone would be suggesting it. I would just like to run my script! Other scripts are letting me run them just fine without nosetests!","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":112,"Q_Id":57389351,"Users Score":0,"Answer":"I found the solution: I can run without nosetests from the 'Run' dropdown options in the toolbar, or Alt+Shift+F10.","Q_Score":1,"Tags":"python,pycharm","A_Id":57389395,"CreationDate":"2019-08-07T07:44:00.000","Title":"Pycharm is not letting me run my script 'test_splitter.py' , but instead 'Nosetests in test_splitter.py'?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":1,"REVIEW":0},{"Question":"That question wasn't very clear. \nEssentially, I am trying to make a multi-player Pac-Man game whereby the players (when playing as ghosts) can only see a certain radius around them. My best guess for going about this is to have a rectangle which covers the whole maze and then somehow cut out a circle which will be centred on the ghost's rect. However, I am not sure how to do this last part in pygame. \nI'd just like to add, if it's even possible in pygame, it would be ideal for the circle to be pixelated and not a smooth circle, but this is not essential.\nAny suggestions? Cheers.","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":214,"Q_Id":57393670,"Users Score":1,"Answer":"The best I can think of is kind of a hack. Build an image outside pygame that is mostly black with a circle of zero alpha in the center, then blit that image on top of your ghost character to only see a circle around it. I hope there is a better way, but I do not know what that is. A sketch of this overlay idea is included below.","Q_Score":3,"Tags":"python,python-3.x,pygame","A_Id":57401296,"CreationDate":"2019-08-07T11:43:00.000","Title":"How do I display a large black rectangle with a moveable transparent circle in pygame?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am writing a program in python that can read a pdf document, extract text from the document and rename the document using the extracted text. At first, the scanned pdf document is not searchable. I would like to convert the pdf into a searchable pdf in Python instead of using Google Docs or Cisdem PDF Converter. \nI have read about the ocrmypdf module which can be used to solve this. However, I do not know how to write the code due to my limited knowledge. 
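For the Pac-Man visibility question above, a minimal sketch of that overlay built with a colorkey rather than per-pixel alpha (hypothetical sizes; draw it over the maze each frame):

```python
import pygame

pygame.init()
screen = pygame.display.set_mode((640, 480))
clock = pygame.time.Clock()

HOLE = (255, 0, 255)                 # any color not otherwise used
fog = pygame.Surface(screen.get_size())
fog.set_colorkey(HOLE)               # pixels of this color become fully transparent

ghost_pos, radius = [320, 240], 80
running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False

    screen.fill((40, 40, 60))        # stand-in for drawing the maze

    fog.fill((0, 0, 0))                               # opaque black everywhere
    pygame.draw.circle(fog, HOLE, ghost_pos, radius)  # punch the visible hole
    screen.blit(fog, (0, 0))

    pygame.display.flip()
    clock.tick(60)

pygame.quit()
```

For the pixelated edge mentioned in the question, one option is to draw the fog on a much smaller surface and pygame.transform.scale it up to screen size before blitting.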
\nI expect the output to convert the scanned pdf into a searchable pdf.","AnswerCount":2,"Available Count":2,"Score":0.2913126125,"is_accepted":false,"ViewCount":3922,"Q_Id":57398839,"Users Score":3,"Answer":"I suggest working through the tutorial; it will maybe take you some time, but it should be worth it. \nI'm not exactly sure what you want. In my project the settings below work fine in most of the cases. \nimport ocrmypdf\ndef ocr(file_path, save_path):\n    ocrmypdf.ocr(file_path, save_path, rotate_pages=True,\n                 remove_background=True, language=\"en\", deskew=True, force_ocr=True)","Q_Score":4,"Tags":"python,python-3.x","A_Id":58269467,"CreationDate":"2019-08-07T16:34:00.000","Title":"How do I convert scanned PDF into searchable PDF in Python (Mac)? e.g. OCRMYPDF module","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am writing a program in python that can read a pdf document, extract text from the document and rename the document using the extracted text. At first, the scanned pdf document is not searchable. I would like to convert the pdf into a searchable pdf in Python instead of using Google Docs or Cisdem PDF Converter. \nI have read about the ocrmypdf module which can be used to solve this. However, I do not know how to write the code due to my limited knowledge. \nI expect the output to convert the scanned pdf into a searchable pdf.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":3922,"Q_Id":57398839,"Users Score":0,"Answer":"This is best done in two steps.\n\nCreate a Python OCR function:\nimport ocrmypdf\ndef ocr(file_path, save_path):\n    ocrmypdf.ocr(file_path, save_path)\n\nCall and use the function:\nocr(\"input.pdf\", \"output.pdf\")\n\n\nThank you; if you have any questions, please ask.","Q_Score":4,"Tags":"python,python-3.x","A_Id":68271081,"CreationDate":"2019-08-07T16:34:00.000","Title":"How do I convert scanned PDF into searchable PDF in Python (Mac)? e.g. OCRMYPDF module","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I'm working with AWS Lambda functions (in Python) that process new files that appear in the same Amazon S3 bucket and folders.\nWhen a new file appears in s3:\/folder1\/folderA, B, or C, an event s3:ObjectCreated:* is generated and goes into sqs1, then is processed by Lambda1 (and then deleted from sqs1 after successful processing).\nI need the same event related to the same new file that appears in s3:\/folder1\/folderA (but not folderB or C) to also go into sqs2, to be processed by Lambda2. Lambda1 modifies that file and saves it somewhere, Lambda2 gets that file into a DB, for example.\nBut the AWS docs say that:\n\nNotification configurations that use Filter cannot define filtering rules with overlapping prefixes, overlapping suffixes, or prefix and suffix overlapping.\n\nSo the question is how to bypass this limitation? 
Are there any known recommended or standard solutions?","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":1130,"Q_Id":57399129,"Users Score":1,"Answer":"Instead of setting up the S3 object notification as (S3 -> SQS), you should set up a notification of (S3 -> Lambda).\nIn your Lambda function, you parse the S3 event and then write your own logic to send whatever content about the S3 event to whatever SQS queues you like. A sketch of such a fan-out handler is included below.","Q_Score":2,"Tags":"python-3.x,amazon-web-services,amazon-s3,amazon-sqs","A_Id":57399292,"CreationDate":"2019-08-07T16:55:00.000","Title":"How to direct the same Amazon S3 events into several different SQS queues?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I need to trigger an action at a particular date\/time either in python or by another service.\nLet's say I have built an application that stores the expiry dates of memberships in a database. I need to trigger a number of actions when the membership expires (for example, changing the status of the membership and sending an expiry email to the member), which is fine - I can deal with the actions. \nHowever, what I am having trouble with is: how do I get these actions to trigger when the expiry date is reached? Are there any concepts or best practices that I should stick to when doing this?\nCurrently, I've achieved this by executing a Google Cloud Function every day (via Google Cloud Scheduler) which checks if the membership expiry is equal to today, and completes the action if it is. I feel like this solution is quite 'hacky'.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":60,"Q_Id":57407997,"Users Score":0,"Answer":"I'm not sure which database you are using, but I'm inferring you have a table that has the \"membership\" details of all your users, and each day you run a cron job that queries this table to see which rows have \"expiration_date = today\", is that correct? \nI believe that's an efficient way to do it (it will be faster if you have few columns in that table).","Q_Score":0,"Tags":"python,cron,google-cloud-functions,google-cloud-scheduler","A_Id":57411478,"CreationDate":"2019-08-08T08:08:00.000","Title":"Triggering actions in Python Flask via cron or similar","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I'm working with a MEAN stack application that passes a file to a python script; this script does some tasks and then returns some results.\nThe question is: how do I install the required python packages when I deploy it? 
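For the S3 events question above, a minimal sketch of the (S3 -> Lambda) fan-out (assuming boto3; the queue URLs are hypothetical):

```python
import json
import boto3

sqs = boto3.client("sqs")
QUEUE_1 = "https://sqs.us-east-1.amazonaws.com/123456789012/sqs1"  # hypothetical
QUEUE_2 = "https://sqs.us-east-1.amazonaws.com/123456789012/sqs2"  # hypothetical

def handler(event, context):
    for record in event["Records"]:
        key = record["s3"]["object"]["key"]
        body = json.dumps(record)
        # Every created object goes to sqs1 for Lambda1.
        sqs.send_message(QueueUrl=QUEUE_1, MessageBody=body)
        # Only objects under folder1/folderA also go to sqs2 for Lambda2.
        if key.startswith("folder1/folderA/"):
            sqs.send_message(QueueUrl=QUEUE_2, MessageBody=body)
```

Since the routing lives in your own code, it is not subject to S3's no-overlapping-prefix rule.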
\nThanks!\nI've tried running python code inside the nodejs application, using python-shell.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":358,"Q_Id":57428664,"Users Score":2,"Answer":"Place the python script along with requirements.txt (which has your python dependencies) in your nodejs project directory.\nDuring deployment, call pip install on the requirements.txt and it should install the packages for you.\nYou can call the python script from nodejs just like any shell command, using the inbuilt child_process module or python-shell.","Q_Score":1,"Tags":"python,node.js,angular,file,mean-stack","A_Id":57430054,"CreationDate":"2019-08-09T10:59:00.000","Title":"How to deploy and run python scripts inside nodejs application?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am very new to python and cannot seem to figure out how to accomplish this task. I want to connect to a website and extract the certificate information, such as issuer and expiration dates.\nI have looked all over and tried all kinds of steps, but because I am new I am getting lost in the socket, wrapper, etc.\nTo make matters worse, I am in a proxy environment and it seems to really complicate things.\nDoes anyone know how I could connect and extract the information while behind the proxy?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1079,"Q_Id":57432064,"Users Score":0,"Answer":"Python's SSL lib doesn't deal with proxies.","Q_Score":0,"Tags":"python,ssl,python-requests,pyopenssl,m2crypto","A_Id":71436836,"CreationDate":"2019-08-09T14:23:00.000","Title":"Trying to extract Certificate information in Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"Hoping somebody can point me in the right direction.\nI am trying to parse a log file to figure out how many users are logging into the system on a per-day basis.\nThe log file gets generated in the pattern listed below.\n\"<\"Commit ts=\"20141001114139\" client=\"ABCREX\/John Doe\">\n\"8764\",\"ABCREX\/John Doe\",\"00.000.0.000\",\"User 'ABCREX\/John Doe' successfully logged in from address '00.000.0.000'.\"\n\"<\"\/Commit>\n\"<\"Commit ts=\"20141001114139\" client=\"ABCREX\/John Doe\">\n\"8764\",\"ABCREX\/Jerry Doe\",\"00.000.0.000\",\"User 'ABCREX\/Jerry Doe' successfully logged in from address '00.000.0.000'.\"\n\"<\"\/Commit>\n\"<\"Commit ts=\"20141001114139\" client=\"ABCREX\/John Doe\">\n\"8764\",\"ABCREX\/Jane Doe\",\"00.000.0.000\",\"User 'ABCREX\/Jane Doe' successfully logged in from address '00.000.0.000'.\"\n\"<\"\/Commit>\nI am trying to capture the username from the above lines and load it into a DB,\nso I am interested only in the values \nJohn Doe, Jerry Doe, Jane Doe\nbut when I do a pattern match using REGEX it returns the below\nclient=\"ABCREX\/John Doe\"> \nThen, using the code I am employing, I have to apply multiple replaces to remove\n \"Client\", \"ABCREX\/\", \">\"...etc \nI currently have code which is working, but I feel it's highly inefficient and resource 
consuming. I am performing a split on tags and then parsing line by line. \n'''extract the user login Name'''\nUserLoginName = str(re.search('client=(.*)>',items).group()).replace('ABCREX\/', '').replace('client=\"','').replace('\">', '')\nprint(UserLoginName)\nIs there any way I can tell the REGEX to grab only the string found within the pattern and not include the pattern in the results as well?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":32,"Q_Id":57437708,"Users Score":0,"Answer":"pattern = r'User\\s\\'ABCREX\/(.*?)\\''\nlist_of_usernames = re.findall(pattern, output)\nThat would match the pattern \"User 'ABCREX\/Jerry Doe'\" and pull out just the username into a list: with a single capture group, re.findall returns only the captured text, and with re.search you would use .group(1) instead of .group(). Is that helpful? I'm new here too, so let me know if there is more I can help answer.","Q_Score":0,"Tags":"python,python-3.x","A_Id":57437911,"CreationDate":"2019-08-09T22:16:00.000","Title":"Python - Using RegEx to extract only the String in between pattern","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have a database with some tables in it. I now want my website to have a dropdown whose choices are the names of people from a column of a table in my database, and every time I click on a name it should show me the corresponding ID, also from a column of this table. How can I do that? Or maybe a guide to where I should find an answer?\nMany thanks!!!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":26,"Q_Id":57439500,"Users Score":0,"Answer":"You have to do that in python (if that's what you are using in the backend). \nYou can create functions in python that get the list of names from the table, which you can then pass to your front-end code. Similarly, you can set up functions where you get the specific table name from the HTML, pass it to python, and do all sorts of database queries.\nIf all this sounds confusing to you, I suggest you take a full-stack course on udemy, youtube, etc., because it can't really be explained in one simple answer.\nI hope it was helpful. Feel free to ask me more","Q_Score":0,"Tags":"javascript,python,html,flask","A_Id":57439549,"CreationDate":"2019-08-10T05:15:00.000","Title":"SelectField to create dropdown menu","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am trying to create a hybrid application with a python back-end and a java GUI, and for that purpose I am using jython to access the data from the GUI. \nI wrote code using a standard Python 3.7.4 virtual environment and it worked \"perfectly\". But when I try to run the same code on jython it doesn't work, so it seems that in jython some packages like threading are overwritten with java functionality. \nMy question is: how can I use the threading package, for example, from python but in the jython environment? 
\nHere is the error: \n\nException in thread Thread-1:Traceback (most recent call last):\n File \"\/home\/dexxrey\/jython2.7.0\/Lib\/threading.py\", line 222, in _Thread__bootstrap\n self.run()\n self._target(*self._args, **self._kwargs)","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":122,"Q_Id":57445745,"Users Score":1,"Answer":"Since you have already decoupled the application, i.e. python for the backend and java for the GUI, why not stick to that and build a communication layer between the backend and the frontend? This layer could be either REST or any messaging framework.","Q_Score":0,"Tags":"java,python,jython","A_Id":57445769,"CreationDate":"2019-08-10T21:15:00.000","Title":"Using the original python packages instead of the jython packages","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am trying to access an excel file using python for my physics class. I have to generate data that follows a function but with variance, so it doesn’t line up perfectly with the function (simulating the error experienced in experiments). I did this by using the rand() function. We need to generate a lot of data sets so that we can average them together and eliminate the error\/noise created by the rand() function. I tried to do this by loading the excel file and recording the data I need, but then I can’t figure out how to get the rand() function to rerun and create a new data set. In excel it reruns when I change the value of any cell on the excel sheet, but I don’t know how to do this when I’m accessing the file with Python. Can someone help me figure out how to do this? Thank You.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":193,"Q_Id":57454186,"Users Score":0,"Answer":"Excel formulas like RAND(), or any other formula, will only refresh when Excel is actually running and recalculating the worksheet.\nSo, even though you may be able to access the data in an Excel workbook with Python, you won't be able to run Excel calculations that way. You will need to find a different approach - for example, generating the noisy data directly in Python; a sketch is included below.","Q_Score":0,"Tags":"excel,python-3.x,pandas,xlrd,xlwt","A_Id":57454549,"CreationDate":"2019-08-11T22:56:00.000","Title":"How to get the rand() function in excel to rerun when accessing an excel file through python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":1,"REVIEW":0},{"Question":"My notebook is located on a server, which means that the kernel will still run even though I close the notebook tab. I was thus wondering if it is possible to let the cell keep running by itself after closing the window? As the notebook is located on a server, the kernel will not stop running... \nI tried to read previous questions but could not find an answer. Any idea on how to proceed? \nThanks!","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":144,"Q_Id":57462603,"Users Score":0,"Answer":"If you run the cell before closing the tab, it will continue to run once the tab has been closed. 
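For the Excel RAND() question above, one such different approach is to generate the noisy data directly in Python, re-drawing fresh noise on every run (a sketch using numpy; the function and noise scale are placeholders):

```python
import numpy as np

rng = np.random.default_rng()

def noisy_dataset(f, x, noise=0.1):
    """Evaluate f over x and add fresh uniform noise, like Excel's RAND()."""
    return f(x) + rng.uniform(-noise, noise, size=x.shape)

x = np.linspace(0, 10, 100)
runs = [noisy_dataset(np.sin, x) for _ in range(50)]  # 50 independent data sets
averaged = np.mean(runs, axis=0)   # averaging suppresses the added noise
```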
However, the output will be lost (anything using print functions to stdout, or plots which display inline) unless it is written to a file.","Q_Score":0,"Tags":"python,jupyter-notebook","A_Id":57462710,"CreationDate":"2019-08-12T13:56:00.000","Title":"Jupyter notebook: need to run a cell even after closing the tab","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"My notebook is located on a server, which means that the kernel will still run even though I close the notebook tab. I was thus wondering if it is possible to let the cell keep running by itself after closing the window? As the notebook is located on a server, the kernel will not stop running... \nI tried to read previous questions but could not find an answer. Any idea on how to proceed? \nThanks!","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":144,"Q_Id":57462603,"Users Score":1,"Answer":"You can open a new file and write outputs to it. I think that's the best that you can do.","Q_Score":0,"Tags":"python,jupyter-notebook","A_Id":60322753,"CreationDate":"2019-08-12T13:56:00.000","Title":"Jupyter notebook: need to run a cell even after closing the tab","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have a problem. I am trying to use my model with Rasa Core, but it gives me this error:\n\nrasa_nlu.model.UnsupportedModelError: The model version is to old to\n be loaded by this Rasa NLU instance. Either retrain the model, or run\n withan older version. Model version: 0.14.6 Instance version: 0.15.1\n\nDoes someone know which version I need to use then, and how I can install that version?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":879,"Q_Id":57465038,"Users Score":0,"Answer":"I believe you trained this model on a previous version of Rasa NLU and then updated Rasa NLU to a new version (Rasa NLU is a dependency for Rasa Core, so changes were made in the requirements.txt file).\nIf this is the case, there are 2 ways to fix it:\n\nRecommended solution. If you have the data and parameters, train your NLU model again using the current dependencies (the ones that you have running now). Then you have a new model which is compatible with your current version of Rasa.\nIf you don't have the data or cannot retrain the model for some reason, then downgrade Rasa NLU to version 0.14.6. 
I'm not sure if your current Rasa Core is compatible with NLU 0.14.6, so you might also need to downgrade Rasa Core if you see errors.\n\nGood luck!","Q_Score":0,"Tags":"python,anaconda,rasa-nlu,rasa-core","A_Id":57510905,"CreationDate":"2019-08-12T16:37:00.000","Title":"Rasa NLU model too old","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":1,"REVIEW":0},{"Question":"I'm a novice in python programming and I'm trying to split a full name into a first name and last name; can someone assist me with this? So my example file is:\nSarah Simpson\nI expect the output like this: Sarah,Simpson","AnswerCount":7,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":20697,"Q_Id":57466474,"Users Score":0,"Answer":"name = \"Thomas Winter\"\nLastName = name.split()[1]\n(note the parentheses on the function call split.)\nsplit() creates a list where each element is from your original string, delimited by whitespace. You can now grab the second element using name.split()[1] or the last element using name.split()[-1]. A short sketch producing the comma-separated output is included below.","Q_Score":2,"Tags":"python-3.x,string,split","A_Id":57466538,"CreationDate":"2019-08-12T18:34:00.000","Title":"how can i split a full name to first name and last name in python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I see an option for MySql and Postgres, and have read help messages for sqlite, but I don't see any way to use it or to install it. So it appears that it's available, or else there wouldn't be any help messages, but I can't find it. I can't do any 'sudo', so no 'apt install', so I don't know how to invoke and use it!","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":19,"Q_Id":57467554,"Users Score":1,"Answer":"sqlite is already installed. You don't need to invoke anything to install it. Just configure your web app to use it.","Q_Score":0,"Tags":"pythonanywhere","A_Id":57476932,"CreationDate":"2019-08-12T19:59:00.000","Title":"pythonanywhere newbie: I don't see sqlite option","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"Are there any conventions on how to implement services in Django? Coming from a Java background, we create services for business logic and we \"inject\" them wherever we need them.\nNot sure if I'm using python\/django the wrong way, but I need to connect to a 3rd party API, so I'm using an api_service.py file to do that. The question is: I want to define this service as a class, and in Java, I can inject this class wherever I need it and it acts more or less like a singleton. 
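For the name-splitting question above, a short sketch producing the requested comma-separated output (assuming one full name per line in a hypothetical example.txt):

```python
with open("example.txt") as f:
    for line in f:
        parts = line.split()
        if parts:  # skip blank lines
            first, last = parts[0], parts[-1]
            print(f"{first},{last}")  # "Sarah Simpson" -> "Sarah,Simpson"
```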
Is there something like this I can use with Django, or should I build the service as a singleton and get the instance somewhere, or even have just separate functions and no classes?","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":2544,"Q_Id":57468620,"Users Score":0,"Answer":"Adding to the answers given by bruno desthuilliers and TreantBG.\nThere are certain questions that you can ask about the requirements.\nFor example, one question could be: does the api being called change with different types of objects?\nIf the api doesn't change, you will probably be okay with keeping it as a method in some file or class.\nIf it does change, such that you are calling API 1 for some scenarios, API 2 for others, and so on and so forth, you will likely be better off moving\/abstracting this logic out to some class (from a better code organisation point of view).\nPS: Python allows you to be as flexible as you want when it comes to code organisation. It's really up to you to decide how you want to organise the code. A module-level singleton sketch is included below.","Q_Score":4,"Tags":"python,django,python-3.x","A_Id":67801062,"CreationDate":"2019-08-12T21:44:00.000","Title":"Python\/Django and services as classes","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"The dataset I have is a set of quotations that were presented to various customers in order to sell a commodity. Prices of commodities are sensitive and standardized on a daily basis, and therefore negotiations are pretty tricky around their prices. I'm trying to build a classification model that has to understand if a given quotation will be accepted or rejected by a customer.\nI made use of most classifiers I knew about, and XGBClassifier was performing the best, with ~95% accuracy. Basically, when I fed it an unseen dataset it was able to perform well. I wanted to test how sensitive the model is to variation in prices. In order to do that, I synthetically recreated quotations with various prices; for example, if a quote was being presented for $30, I presented the same quote at $5, $10, $15, $20, $25, $35, $40, $45,..\nI expected the classifier to give high probabilities of success as the prices were lower and low probabilities of success as the prices were higher, but this did not happen. Upon further investigation, I found out that some of the features were overshadowing the importance of price in the model and thus had to be dealt with. Even though I dealt with most features by either removing them or feature-engineering them to better represent them, I was still stuck with a few features that I just cannot remove (client-side requirements).\nWhen I checked the results, it turned out the model was sensitive to 30% of the test data and was showing promising results, but for the rest of the 70% it wasn't sensitive at all. \nThis is when the idea struck my mind to feed only that segment of the training data where price sensitivity can be clearly captured, or where the success of the quote is inversely related to the price being quoted. 
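For the Django services question above, a minimal sketch of the module-as-singleton idiom: Python modules are only initialised once per process, so a module-level instance behaves much like an injected singleton (all names here are hypothetical):

```python
# api_service.py
import requests

class ApiService:
    def __init__(self, base_url):
        self.session = requests.Session()  # reused connection pool
        self.base_url = base_url

    def fetch(self, path):
        return self.session.get(f"{self.base_url}/{path}").json()

# Module-level instance: every importer shares this one object.
api_service = ApiService("https://third-party.example.com")

# elsewhere, e.g. in views.py:
#   from api_service import api_service
#   data = api_service.fetch("quotes")
```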
This created a loss of about 85% of the data; however, the relationship that I wanted the model to learn was being captured perfectly well.\nThis is going to be an incremental learning process for the model, so each time a new dataset comes in, I'm thinking of first evaluating it for price sensitivity and then feeding in only that segment of the data for training which is price-sensitive. \nHaving given some context to the problem, some of the questions I had were:\n\nDoes it make sense to filter the dataset down to segments where the kind of relationship I'm looking for is being exhibited?\nAfter training the model on the smaller segment of the data and reducing the number of features from 21 to 8, the model accuracy went down to ~87%; however, it seems to have captured the price sensitivity bit perfectly. The way I evaluated price sensitivity is by taking the test dataset and artificially adding 10 rows for each quotation with varying prices to see how the success probability changes in the model. Is this a viable approach to such a problem?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":53,"Q_Id":57487124,"Users Score":1,"Answer":"To answer your first question, deleting the part of the dataset that doesn't work is not a good idea, because then your model will overfit on the data that gives better numbers. This means that the accuracy will be higher, but when presented with something that is slightly different from the dataset, the probability of the model adapting is lower.\nTo answer the second question, it seems like a good approach, but again I'd recommend keeping the full dataset. A sketch of that sensitivity probe is included below.","Q_Score":0,"Tags":"python,machine-learning,xgbclassifier","A_Id":57488153,"CreationDate":"2019-08-14T01:33:00.000","Title":"Does it make sense to use a part of the dataset to train my model?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am using jedi, and more specifically deoplete-jedi, in neovim, and I wonder if I should install it in every project as a dependency or if I can let jedi reside in the same python environment as neovim uses (and set the setting to tell deoplete-jedi where to look).\nIt seems wasteful to have to install it in every project, but then again IDK how it would find my project environment from within the neovim environment either.","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":469,"Q_Id":57497539,"Users Score":2,"Answer":"If by the word \"project\" you mean Python virtual environments, then yes, you have to install every program and every library that you use into every virtualenv separately. flake8, pytest, jedi, whatever. Python virtual environments are intended to protect one set of libraries from another, so that you can install different sets of libraries and even different versions of libraries. 
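For the price-sensitivity question above, a sketch of the probing step the question describes (assuming a fitted scikit-learn-style classifier with predict_proba and a pandas test row; all names are hypothetical):

```python
import pandas as pd

def price_sensitivity(model, quote_row, prices):
    """Re-score one quotation at several prices; return the success probabilities."""
    probe = pd.concat([quote_row.to_frame().T] * len(prices), ignore_index=True)
    probe["price"] = prices
    return pd.Series(model.predict_proba(probe)[:, 1], index=prices)

# probs = price_sensitivity(clf, X_test.iloc[0], [5, 10, 15, 20, 25, 30, 35, 40, 45])
# A price-sensitive model should show probabilities falling as the price rises.
```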
The price is that you have to duplicate programs\/libraries that are used often.\nThere is a way to connect a virtualenv to the globally installed packages (the --system-site-packages option), but IMO that brings more harm than good.","Q_Score":0,"Tags":"python,vim,neovim,python-jedi","A_Id":57499070,"CreationDate":"2019-08-14T15:13:00.000","Title":"should jedi be installed in every python project environment?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I attempted to do import segmentation_models as sm, but I got an error saying efficientnet was not found. So I then did pip install efficientnet and tried again. I now get ModuleNotFoundError: no module named efficientnet.tfkeras, even though Keras is installed, as I'm able to do from keras.models import * or anything else with Keras.\nHow can I get rid of this error?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":10143,"Q_Id":57503473,"Users Score":6,"Answer":"To install segmentation-models use the following command: pip install git+https:\/\/github.com\/qubvel\/segmentation_models","Q_Score":7,"Tags":"python,keras","A_Id":57539079,"CreationDate":"2019-08-15T00:11:00.000","Title":"ModuleNotFoundError: no module named efficientnet.tfkeras","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":1,"REVIEW":0},{"Question":"The question is really simple:\nI have a python package installed using pip3 and I'd like to tweak it a little to perform some computations. I've read (and it seems logical) that it is strongly discouraged to edit the installed modules. Thus, how can I do this once I have downloaded the whole project folder to my computer? Is there any way, once this source code is edited, to install it under another name? How can I avoid mixing things up? \nThanks!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":999,"Q_Id":57504520,"Users Score":0,"Answer":"You can install the package from its source code, instead of PyPI. \n\nDownload the source code - do a git clone of the package\nInstead of pip install <package>, install with pip install -e <path-to-source>\nChange the source code, and it will be picked up automatically.","Q_Score":1,"Tags":"python,package","A_Id":57504585,"CreationDate":"2019-08-15T03:25:00.000","Title":"Editing a python package","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am trying to set up Keras in order to run models using my GPU. I have a Radeon RX580 and am running Windows 10.\nI realized that CUDA only supports NVIDIA GPUs and was having difficulty finding a way to get my code to run on the GPU. 
I tried downloading and setting up plaidml, but afterwards\nfrom tensorflow.python.client import device_lib\nprint(device_lib.list_local_devices()) \nonly printed that I was running on a CPU and that there was no GPU available, even though the plaidml setup was a success. I have read that PyOpenCL is needed, but have not gotten a clear answer as to why or to what capacity. Does anyone know how to set up this AMD GPU to work properly? Any help would be much appreciated. Thank you!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":435,"Q_Id":57504746,"Users Score":0,"Answer":"To the best of my knowledge, PlaidML was not working because I did not have the required prerequisites, such as OpenCL. Once I downloaded the Visual Studio C++ build tools and installed PyOpenCL from a .whl file, the issue seemed to be resolved.","Q_Score":1,"Tags":"python-3.x,tensorflow,keras,gpu,amd","A_Id":57529064,"CreationDate":"2019-08-15T04:06:00.000","Title":"Setting up keras and tensorflow to operate with AMD GPU","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I'd like to build a GUI for a few Python functions I've written that pull data from MS SQL Server. My boss wants me to share the magic of Python & SQL with the rest of the team, without them having to learn any coding.\nI've decided to go down the route of using Flask to create a webapp and creating an executable file using pyinstaller. I'd like it to work similarly to Jupyter Notebook, where you click on the file and it opens the notebook in your browser.\nI was able to hack together some code to get a working prototype of the GUI. The issue is I don't know how to deploy it. I need the GUI\/Webapp to only run on the local computer for the user I sent the file to, and I don't want it accessible via the internet (because of proprietary company data, security issues, etc). \nThe only documentation I've been able to find for deploying Flask is going the routine route of a web server. \nSo the question is, can anyone provide any guidance on how to deploy my GUI WebApp so that it's only available to the user who has the file, and not on the world wide web?\nThank you!","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":1022,"Q_Id":57515167,"Users Score":0,"Answer":"Unfortunately, you do not have control over a given user's computer. \nYou are using flask, so your application is a web application which will be exposing your data on some port. I believe the default flask port is 5000.\nRegardless, if your user opens the given port in their firewall, and this is also open on whatever router you are connected to, then your application will be publicly visible. There is nothing that you can do from your python application code to prevent this.\nHaving said all of that, if you are running on 5000, it is highly unlikely your user will have this port publicly exposed. If you are running on port 80 or 8080, then the chances are higher that you might be exposing something.\nA follow-up question would be: where is the database your web app is connecting to? Is it also on your user's machine? 
If not, and your web app can connect to it regardless of whose machine you run it on, I would be more concerned about your DB being publicly exposed.","Q_Score":0,"Tags":"python,user-interface,flask,web-applications","A_Id":57515311,"CreationDate":"2019-08-15T19:33:00.000","Title":"How to deploy flask GUI web application only locally with exe file?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I'd like to build a GUI for a few Python functions I've written that pull data from MS SQL Server. My boss wants me to share the magic of Python & SQL with the rest of the team, without them having to learn any coding.\nI've decided to go down the route of using Flask to create a webapp and creating an executable file using pyinstaller. I'd like it to work similarly to Jupyter Notebook, where you click on the file and it opens the notebook in your browser.\nI was able to hack together some code to get a working prototype of the GUI. The issue is I don't know how to deploy it. I need the GUI\/Webapp to only run on the local computer for the user I sent the file to, and I don't want it accessible via the internet (because of proprietary company data, security issues, etc). \nThe only documentation I've been able to find for deploying Flask is going the routine route of a web server. \nSo the question is, can anyone provide any guidance on how to deploy my GUI WebApp so that it's only available to the user who has the file, and not on the world wide web?\nThank you!","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":1022,"Q_Id":57515167,"Users Score":1,"Answer":"So, a few assumptions-- since you're a business and you're rocking a SQLServer-- you likely have Active Directory, and the computers that you care to access this app are all hooked into that domain (so, in reality, you, or your system admin does have full control over those computers).\nAlso, the primary function of the app is to access a SQLServer to populate itself with data before doing something with that data. If you're deploying that app, I'm guessing you're probably also including the SQLServer login details along with it.\nWith that in mind, I would just serve the Flask app on the network on it's own machine (maybe even the SQLServer machine if you have the choice), and then either implement security within the app that feeds off AD to authenticate, or just have a simple user\/pass authentication you can distribute to users. 
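[Editor's sketch] To make the port-exposure point in the first answer concrete: binding Flask to the loopback interface keeps the app reachable only from the machine it runs on (the port is illustrative):
from flask import Flask
app = Flask(__name__)

if __name__ == "__main__":
    app.run(host="127.0.0.1", port=5000)  # loopback only; other machines cannot connect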
By default random computers online aren't going to be able to access that app unless you've set your firewalls to deliberately route WAN traffic to it.\nThat way, you control the Flask server-- updates only have to occur at one point, making development easier, and users simply have to open up a link in an email you send, or a shortcut you leave on their desktop.","Q_Score":0,"Tags":"python,user-interface,flask,web-applications","A_Id":57515525,"CreationDate":"2019-08-15T19:33:00.000","Title":"How to deploy flask GUI web application only locally with exe file?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I need to work with opencv in my xamarin application .\nI found that if I use openCV directly in xamarin , the size of the app will be huge .\nthe best solution I found for this is to use the openCV in python script then to host the python script in a Web Server and access it by calling an API from xamarin .\nI have no idea how to do this .\nany help please ?\nand is there is a better solutions ?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":176,"Q_Id":57546711,"Users Score":1,"Answer":"You can create your web server using Flask or Django. Flask is a simple micro framework whereas Django is a more advanced MVC like framework.","Q_Score":0,"Tags":"python,api,opencv,xamarin,webserver","A_Id":57546829,"CreationDate":"2019-08-18T17:11:00.000","Title":"how to host python script in a Web Server and access it by calling an API from xamarin application?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have a Python project which uses an open source package registered as a dependency in requirements.txt\nThe package has some deficiencies, so I forked it on Github and made some changes. Now I'd like to test out these changes by running my original project, but I'd like to use the now forked (updated) code for the package I'm depending on.\nThe project gets compiled into a Docker image; pip install is used to add the package into the project during the docker-compose build command. \nWhat are the standard methods of creating a docker image and running the project using the newly forked dependency, as opposed to the original one? Can requirements.txt be modified somehow or do I need to manually include it into the project? 
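[Editor's sketch] For the Flask-hosted OpenCV idea in the last answer above, a minimal sketch of the server side (the route name and port are hypothetical, and the OpenCV pipeline is left as a stub):
import cv2
import numpy as np
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/process", methods=["POST"])
def process():
    # decode the raw image bytes sent by the Xamarin client
    img = cv2.imdecode(np.frombuffer(request.data, np.uint8), cv2.IMREAD_COLOR)
    h, w = img.shape[:2]  # ... run the real OpenCV processing here ...
    return jsonify({"height": h, "width": w})

app.run(host="0.0.0.0", port=8000)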
If the latter, how?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":58,"Q_Id":57565886,"Users Score":0,"Answer":"you can use git+https:\/\/github.com\/.....\/your_forked_repo in your requirements.txt instead of typing Package==1.1.1","Q_Score":0,"Tags":"python,docker,docker-compose","A_Id":57567131,"CreationDate":"2019-08-20T02:11:00.000","Title":"Python: Reference forked project in requirements.txt","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"Wow, I'm thankful for all of the responses on this! To clarify the data pattern does repeat. Here is a sample:\nItem: some text Name: some other text Time recorded: hh:mm Time left: hh:mm \n other unrelated text some other unrelated text lots more text that is unrelated Item: some text Name: some other text Time recorded: hh:mm Time left: hh:mm other unrelated text some other unrelated text lots more text that is unrelated Item: some text Name: some other text Time recorded: hh:mm Time left: hh:mm \n and so on and so on\nI am using Python 3.7 to parse input from a text file that is formatted like this sample:\nItem: some text Name: some other text Time recorded: hh:mm Time left: hh:mm and the pattern repeats, with other similar fields, through a few hundred pages.\nBecause there is a \":\" value in some of the values (i.e. hh:mm), I not sure how to use that as a delimiter between the key and the value. I need to obtain all of the values associated with \"Item\", \"Name\", and \"Time left\" and output all of the matching values to a CSV file (I have the output part working)\nAny suggestions? Thank you!\n(apologies, I asked this on Stack Exchange and it was deleted, I'm new at this)","AnswerCount":4,"Available Count":1,"Score":0.049958375,"is_accepted":false,"ViewCount":93,"Q_Id":57579025,"Users Score":1,"Answer":"Use the ': ' (with a space) as a delimiter.","Q_Score":2,"Tags":"python,parsing","A_Id":57579093,"CreationDate":"2019-08-20T17:45:00.000","Title":"Parsing in Python where delimiter also appears in the data","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am currently helping with some NLP code and in the code we have to access a database to get the papers. I have fun the code successfully before but every time I try to run the code again I get the error sqlite3.DatabaseError: file is not a database. I am not sure what is happening here because the database is still in the same exact position and the path doesn't change. \nI've tried looking up this problem but haven't found similar issues. \nI am hoping that someone can explain what is happening here because I don't even know how to start with this issue because it runs once but not again.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":131,"Q_Id":57580912,"Users Score":0,"Answer":"I got the same issue. I have a program that print some information from my database but after running it again and again, I got an error that my database was unable to load. 
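[Editor's sketch] On the delimiter question above: since the 'Time recorded'/'Time left' values themselves contain colons, splitting on ': ' alone is fragile; a regex keyed on the known field names is safer. A sketch, assuming the fields always appear in this order (the variable text holds the raw input):
import re
pattern = re.compile(
    r"Item: (?P<item>.*?) Name: (?P<name>.*?) "
    r"Time recorded: (?P<recorded>\d{1,2}:\d{2}) Time left: (?P<left>\d{1,2}:\d{2})")
for m in pattern.finditer(text):
    print(m.group("item"), m.group("name"), m.group("left"))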
In my case, I think the problem occurred because the program had connected to the database several times. What I suggest is to reboot your computer, or to investigate how your program ends up connecting to the database several times.","Q_Score":0,"Tags":"python-3.x,sqlite","A_Id":57581507,"CreationDate":"2019-08-20T20:09:00.000","Title":"Getting error 'file is not a database' after already accessing the database","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":1,"REVIEW":0},{"Question":"I recently followed a tutorial on web scraping, and as part of that tutorial, I had to execute (?) the script I had written in my command line. Now that script runs every hour and I don't know how to stop it.\nI want to stop the script from running. I have tried deleting the code, but the script still runs. What should I do?","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":869,"Q_Id":57590672,"Users Score":1,"Answer":"I can't comment, but you should show us the script (or part of it), or the tutorial video you were following; a question without an example makes it much harder for us to figure out the problem.\n\nIf you're using Flask, the script is running in the terminal or CMD. Press CTRL+C and it should stop the script. Alternatively, set debug to False, e.g. app.run(debug=False), because debug mode can sometimes keep the script running in the background, watching for updates, even after you think it was stopped. In conclusion: try CTRL+C, or set debug to False.","Q_Score":0,"Tags":"python,command-line","A_Id":57590799,"CreationDate":"2019-08-21T11:32:00.000","Title":"How do I stop a Python script from running in my command line?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I recently followed a tutorial on web scraping, and as part of that tutorial, I had to execute (?) the script I had written in my command line. Now that script runs every hour and I don't know how to stop it.\nI want to stop the script from running. I have tried deleting the code, but the script still runs. 
What should I do?","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":869,"Q_Id":57590672,"Users Score":1,"Answer":"You can kill it from task manager.","Q_Score":0,"Tags":"python,command-line","A_Id":57590890,"CreationDate":"2019-08-21T11:32:00.000","Title":"How do I stop a Python script from running in my command line?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I would like to insert a python variable into a text cell, in google colab.\nFor example, if a=10, I would like to insert the a into a text cell and render the value.\nSo in the text cell (using Jupyter Notebook with nbextensions) I would like to write the following in the text cell:\nThere will be {{ a }} pieces of fruit at the reception.\nIt should show up as:\nThere will be 10 pieces of fruit at the reception.\nThe markdown cheatsheets and explanations do not say how to achieve this. Is this possible currently?","AnswerCount":1,"Available Count":1,"Score":0.537049567,"is_accepted":false,"ViewCount":2998,"Q_Id":57619805,"Users Score":3,"Answer":"It's not possible to change 'input cell' (either code or markdown) programmatically. You can change only the output cells. Input cells always require manually change. (even %load doesn't work)","Q_Score":8,"Tags":"python-3.x,jupyter-notebook,google-colaboratory","A_Id":57644243,"CreationDate":"2019-08-23T04:11:00.000","Title":"How to insert variables into a text cell using google colab","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I want to fine turn my model when using Keras, and I want to change my training data and learning rate to train when the epochs arrive 10, So how to get a callback when the specified epoch number is over.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":484,"Q_Id":57636091,"Users Score":0,"Answer":"Actually, the way keras works this is probably not the best way to go, it would be much better to treat this as fine tuning, meaning that you finish the 10 epochs, save the model and then load the model (from another script) and continue training with the lr and data you fancy.\nThere are several reasons for this.\n\nIt is much clearer and easier to debug. 
You check you model properly after the 10 epochs, verify that it works properly and carry on\nIt is much better to do several experiments this way, starting from epoch 10.\n\nGood luck!","Q_Score":0,"Tags":"python,keras","A_Id":57636395,"CreationDate":"2019-08-24T07:50:00.000","Title":"How to get a callback when the specified epoch number is over?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I tried switching from venv & conda to pipenv to manage my virtual environments, but one thing I noticed about pipenv that it's oddly slow when it's doing \"Locking\" and it gets to the point where it stops executing for \"Running out of time\". Is it usually this slow or is it just me? Also, could you give me some advice regarding how to make it faster?","AnswerCount":5,"Available Count":2,"Score":1.0,"is_accepted":false,"ViewCount":8709,"Q_Id":57646310,"Users Score":15,"Answer":"Pipenv is literally a joke. I spent 30 minutes staring at \"Locking\", which eventually fails after exactly 15 minutes, and I tried two times.\nThe most meaningless thirty minutes in my life.\nWas my Pipfile complex? No. I included \"flask\" with \"flake8\" + \"pylint\" + \"mypy\" + \"black\".\nEvery time someone tries to fix the \"dependency management\" of Python, it just gets worse.\nI'm expecting Poetry to solve this, but who knows.\nMaybe it's time to move on to typed languages for web development.","Q_Score":22,"Tags":"python,pipenv","A_Id":61545097,"CreationDate":"2019-08-25T13:10:00.000","Title":"Is Python's pipenv slow?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I tried switching from venv & conda to pipenv to manage my virtual environments, but one thing I noticed about pipenv that it's oddly slow when it's doing \"Locking\" and it gets to the point where it stops executing for \"Running out of time\". Is it usually this slow or is it just me? 
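[Editor's sketch] For the staged fine-tuning suggested in the Keras answer above, a minimal sketch; model is an already-compiled Keras model, and the file name, data, and learning rate are placeholders:
model.fit(x_train, y_train, epochs=10)
model.save("stage1.h5")
# later, in a separate script:
from keras.models import load_model
from keras import backend as K
model = load_model("stage1.h5")
K.set_value(model.optimizer.lr, 1e-4)  # new learning rate for fine-tuning
model.fit(x_new, y_new, epochs=10)     # continue on the new data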
Also, could you give me some advice regarding how to make it faster?","AnswerCount":5,"Available Count":2,"Score":0.0798297691,"is_accepted":false,"ViewCount":8709,"Q_Id":57646310,"Users Score":2,"Answer":"try using --skip-lock like this :\npipenv install --skip-lock\nNote : do not skip-lock when going in production","Q_Score":22,"Tags":"python,pipenv","A_Id":65533914,"CreationDate":"2019-08-25T13:10:00.000","Title":"Is Python's pipenv slow?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have a folder on my desktop that contains my script and when I run it in the pycharm ide it works perfectly but when I try to run from the terminal I get \/Users\/neelmukherjee\/Desktop\/budgeter\/product_price.py: Permission denied\nI'm not quite sure as to why this is happening\nI tried using ls -al to check the permissions and for some reason, the file is labelled as\ndrwx------@ 33 neelmukherjee staff 1056 26 Aug 09:03 Desktop\nI'm assuming this means that I should run this file as an admin. But how exactly can I do that?\nMy goal is to run my script from the terminal successfully and that may be possible by running it as an admin how should I do that?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":35,"Q_Id":57650883,"Users Score":0,"Answer":"Ok, so I was able to figure it out. I had to use\nchmod +x to help make it executable first.\nchmod +x \/Users\/neelmukherjee\/Desktop\/budgeter\/product_price.py\nand the run \/Users\/neelmukherjee\/Desktop\/budgeter\/product_price.py","Q_Score":0,"Tags":"python-3.x,macos,terminal","A_Id":57651015,"CreationDate":"2019-08-26T01:21:00.000","Title":"Python script denied in terminal","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I want to write a script in bash\/python such that the script copies the latest file which arrives at hdfs directory.I know I can use inotify in local, but how to implement it in hdfs?\nCan you please share the sample code for it. When I searched for it in google it gives me long codes.Is there a simpler way other than inotify(if its too complex)","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":144,"Q_Id":57652601,"Users Score":0,"Answer":"Inelegant hack:\nMount hdfs using FUSE then periodically use find -cmin n to get a list of files created in the last n minutes.\nThen use find -anewer to sort them.","Q_Score":0,"Tags":"python,bash,hdfs,inotify","A_Id":57653039,"CreationDate":"2019-08-26T06:10:00.000","Title":"How to watch an hdfs directory and copy the latest file that arrives in hdfs to local?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have a dataset that contains 'tag' and 'date'. 
I need to group the data by 'tag' (this is pretty easy), then within each group count the number of row that the date for them is smaller than the date in that specific row. I basically need to loop over the rows after grouping the data. I don't know how to write a UDF which takes care of that in PySpark. I appreciate your help.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":62,"Q_Id":57665530,"Users Score":0,"Answer":"you need an aggregation ? \ndf.groupBy(\"tag\").agg({\"date\":\"min\"})\nwhat about that ?","Q_Score":0,"Tags":"python,pyspark","A_Id":57681370,"CreationDate":"2019-08-26T22:11:00.000","Title":"PySpark Group and apply UDF row by row operation","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am an extreme beginner with Python and its libraries and installation in general. I want to make an extremely simple google search web scraping tool. I was told to use Requests and BeautifulSoup. I have installed python3 on my Mac by using brew install python3 and I am wondering how to get those two libraries\nI googled around and many results said that by doing brew install python3 it will automatically install pip so I can use something like pip install requests but it says pip: command not found.\nby running python3 --version it says Python 3.7.4","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":165,"Q_Id":57665963,"Users Score":0,"Answer":"Since you're running with Python3, not Python (which usually refers to 2.7), you should try using pip3.\npip on the other hand, is the package installer for Python, not Python3.","Q_Score":0,"Tags":"python-3.x,pip","A_Id":57666013,"CreationDate":"2019-08-26T23:20:00.000","Title":"How to install stuff like Requests and BeautifulSoup to use in Python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I'm writing a python program which uses subprocess to send files via cURL. It works, but for each file\/zip it outputs the loading progress, time and other stuff which I don't want to be shown. 
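[Editor's sketch] The aggregation in the PySpark answer above gives the per-tag minimum date; for the per-row count of earlier dates that the question actually asks for, a window function is a closer fit. A sketch, not from the answer (column names taken from the question):
from pyspark.sql import functions as F, Window
w = Window.partitionBy("tag").orderBy("date")
# rank() is 1 + the number of rows in the tag group with a strictly smaller date
df = df.withColumn("earlier_rows", F.rank().over(w) - 1)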
Does anyone know how to stop it?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":106,"Q_Id":57676703,"Users Score":0,"Answer":"You should add stderr=subprocess.DEVNULL or stderr=subprocess.PIPE to your check_output call","Q_Score":0,"Tags":"python,python-3.x,subprocess","A_Id":57676861,"CreationDate":"2019-08-27T14:17:00.000","Title":"Stop subprocess.check_output to print on video","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"Suppose I have a bulleted list in Jupyter in a markdown cell like this:\n\nItem1\nItem2\nItem3\n\nIs there a way to convert this one cell list in three markdown text cells?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":369,"Q_Id":57677910,"Users Score":0,"Answer":"Ctrl + Shift + - will split a cell on cursor. Else, cannot process a text of a cell with code unless you're importing a notebook within another notebook.","Q_Score":0,"Tags":"python,jupyter-notebook,markdown,jupyter","A_Id":57678955,"CreationDate":"2019-08-27T15:22:00.000","Title":"In a Jupyter Notebook how do I split a bulleted list in multiple text cells?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have an array with ~1,000,000 rows, each of which is a numpy array of 4,800 float32 numbers.\nI need to save this as a csv file, however using numpy.savetxt has been running for 30 minutes and I don't know how much longer it will run for.\nIs there a faster method of saving the large array as a csv?\nMany thanks,\nJosh","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":266,"Q_Id":57679863,"Users Score":2,"Answer":"As pointed out in the comments, 1e6 rows * 4800 columns * 4 bytes per float32 is 18GiB. Writing a float to text takes ~9 bytes of text (estimating 1 for integer, 1 for decimal, 5 for mantissa and 2 for separator), which comes out to 40GiB. This takes a long time to do, since just the conversion to text itself is non-trivial, and disk I\/O will be a huge bottle-neck.\nOne way to optimize this process may be to convert the entire array to text on your own terms, and write it in blocks using Python's binary I\/O. I doubt that will give you too much benefit though.\nA much better solution would be to write the binary data to a file instead of text. Aside from the obvious advantages of space and speed, binary has the advantage of being searchable and not requiring transformation after loading. You know where every individual element is in the file, if you are clever, you can access portions of the file without loading the entire thing. Finally, a binary file is more likely to be highly compressible than a relatively low-entropy text file.\nDisadvantages of binary are that it is not human-readable, and not as portable as text. The latter is not a problem, since transforming into an acceptable format will be trivial. 
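[Editor's sketch] For the check_output suggestion above, a minimal example; the curl command line is illustrative only. curl writes its progress meter to stderr, so discarding stderr silences it:
import subprocess
out = subprocess.check_output(
    ["curl", "-T", "data.zip", "https://example.com/upload"],
    stderr=subprocess.DEVNULL)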
The former is likely a non-issue given the amount of data you are attempting to process anyway.\nKeep in mind that human readability is a relative term. A human cannot read 40 GiB of numerical data with understanding. A human can process A) a graphical representation of the data, or B) scan through relatively small portions of the data. Both cases are suitable for binary representations. Case A) is straightforward: load, transform and plot the data. This will be much faster if the data is already in a binary format that you can pass directly to the analysis and plotting routines. Case B) can be handled with something like a memory mapped file. You only ever need to load a small portion of the file, since you can't really show more than say a thousand elements on screen at one time anyway. Any reasonable modern platform should be able to keep up with the I\/O and binary-to-text conversion associated with a user scrolling around a table widget or similar. In fact, binary makes it easier since you know exactly where each element belongs in the file.","Q_Score":2,"Tags":"python,numpy","A_Id":57680106,"CreationDate":"2019-08-27T17:48:00.000","Title":"Saving large numpy 2d arrays","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am using celery to do some distributed tasks and want to override celery_taskmeta and add some more columns. I use Postgres as DB and SQLAlchemy as ORM. I looked up celery docs but could not find out how to do it.\nHelp would be appreciated.","AnswerCount":2,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":1097,"Q_Id":57688644,"Users Score":2,"Answer":"I would suggest a different approach - add an extra table with your extended data. This table would have a foreign-key constraint that would ensure each record is related to the particular entry in celery_taskmeta. Why this approach? It separates your domain (the domain of your application) from the Celery domain. 
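[Editor's sketch] A minimal example of the binary route recommended in the numpy answer above (the shape and file name are illustrative):
import numpy as np
arr = np.random.rand(1000, 4800).astype(np.float32)  # stand-in for the real data
np.save("data.npy", arr)                   # compact binary, written far faster than CSV
view = np.load("data.npy", mmap_mode="r")  # memory-mapped: reads only the slices you touch
print(view[500, :10])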
Also, it does not involve modifying the table structure, which might (though in theory it should not) cause trouble.","Q_Score":1,"Tags":"python,postgresql,sqlalchemy,celery","A_Id":57689713,"CreationDate":"2019-08-28T08:59:00.000","Title":"Overriding celery result table (celery_taskmeta) for Postgres","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I tried pip uninstall airflow and pip3 uninstall airflow and both return \n\nCannot uninstall requirement airflow, not installed\n\nI'd like to remove airflow completely and run a clean install.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":2434,"Q_Id":57694246,"Users Score":5,"Answer":"Airflow is now apache-airflow.","Q_Score":1,"Tags":"python,ubuntu,airflow","A_Id":57694436,"CreationDate":"2019-08-28T14:01:00.000","Title":"how to remove airflow install","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I get \n\nImportError: cannot import name 'deque' from 'collections'\n\nHow can I resolve this issue? I have already changed the module name (the module name was collections.py) but this did not work.","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":3757,"Q_Id":57696747,"Users Score":0,"Answer":"In my case I had to rename my Python file from keyword.py to keyword2.py.","Q_Score":2,"Tags":"python-3.x","A_Id":69260693,"CreationDate":"2019-08-28T16:38:00.000","Title":"ImportError: cannot import name 'deque' from 'collections' how to clear this?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I get \n\nImportError: cannot import name 'deque' from 'collections'\n\nHow can I resolve this issue? I have already changed the module name (the module name was collections.py) but this did not work.","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":3757,"Q_Id":57696747,"Users Score":0,"Answer":"I had the same problem when I ran the command python -m venv. I renamed my file from collections.py to my_collections.py.\nIt worked!","Q_Score":2,"Tags":"python-3.x","A_Id":58663455,"CreationDate":"2019-08-28T16:38:00.000","Title":"ImportError: cannot import name 'deque' from 'collections' how to clear this?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"This is probably a really simple question, but I can't seem to find an answer online.\nI'm using a Google Cloud Function to generate a CSV file and store the file in a Google Storage bucket. 
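[Editor's sketch] The side-table idea from the celery answer above in SQLAlchemy; the table and column names are hypothetical, and celery_taskmeta.task_id is the unique task identifier Celery writes:
from sqlalchemy import Column, ForeignKey, Integer, String
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class TaskExtra(Base):
    __tablename__ = "task_extra"
    id = Column(Integer, primary_key=True)
    task_id = Column(String(155), ForeignKey("celery_taskmeta.task_id"))  # ties the row to one Celery result
    note = Column(String(200))  # the extra data you wanted on celery_taskmeta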
I've got the code working on my local machine using a json service account.\nI'm wanting to push this code to a cloud function, however, I can't use the json service account file in the cloud environment - so how do I authenticate to my storage account in the cloud function?","AnswerCount":1,"Available Count":1,"Score":0.6640367703,"is_accepted":false,"ViewCount":89,"Q_Id":57720636,"Users Score":4,"Answer":"You don't need the json service account file in the cloud environment.\nIf the GCS bucket and GCF are in the same project, you can just directly access it.\nOtherwise, add your GCF default service account (note: it's the App Engine default service account) to your GCS project's IAM and grant the relevant GCS permission.","Q_Score":0,"Tags":"python-3.x,google-cloud-platform,google-cloud-functions,google-cloud-storage","A_Id":57721212,"CreationDate":"2019-08-30T04:47:00.000","Title":"Authenticating Google Cloud Storage SDK in Cloud Functions","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have written a script in selenium python which is basically opening up a website and clicking on links in it, and doing this multiple times.\nThe purpose of the software was to increase traffic on the website, but after the script was made it was observed that it is not posting real traffic on the website; the website is just treating it as a test and ignoring it.\nNow I am wondering whether this is basically possible with selenium or not? \nI have searched around and I suppose it is possible but don't know how. Does anyone know about this? Or is there any specific piece of code for this?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1011,"Q_Id":57729315,"Users Score":0,"Answer":"It does create traffic; the problem is that websites sometimes defend against bots and can guess whether the incoming connection is a bot or not. Maybe you should put some time.sleep(seconds) between actions to deceive the website's bot detection and make it think you are a person.","Q_Score":1,"Tags":"python,selenium,automation,web-traffic","A_Id":57729437,"CreationDate":"2019-08-30T15:12:00.000","Title":"Can selenium post real traffic on a website?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"So I want to implement random search but there is no clear-cut example as to how to do this. I am confused between the following methods:\n\ntune.randint()\nray.tune.suggest.BasicVariantGenerator()\ntune.sample_from(lambda spec: blah blah np.random.choice())\n\nCan someone please explain how and why these methods are the same\/different for implementing random search.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":60,"Q_Id":57733690,"Users Score":0,"Answer":"Generally, you don't need to use ray.tune.suggest.BasicVariantGenerator(). \nFor the other two choices, it's up to what suits your need. tune.randint() is just a thin wrapper around tune.sample_from(lambda spec: np.random.randint(...)). 
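[Editor's sketch] For the Cloud Functions answer above, what the function code can look like without a key file; the bucket and object names are hypothetical, and the client picks up the function's runtime service account automatically:
from google.cloud import storage

def handler(request):  # a hypothetical HTTP-triggered function
    client = storage.Client()  # no JSON key: uses the default service account
    bucket = client.bucket("my-report-bucket")
    bucket.blob("report.csv").upload_from_string("a,b\n1,2\n", content_type="text/csv")
    return "uploaded"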
You can do more expressive\/conditional searches with the latter, but the former is easier to use.","Q_Score":0,"Tags":"python,ray","A_Id":57743094,"CreationDate":"2019-08-30T22:08:00.000","Title":"what are the options to implement random search?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"[warning VSCode newbie here]\nWhen installing pylinter from within VScode I got this message:\nThe script isort.exe is installed in 'C:\\Users\\fjanssen\\AppData\\Roaming\\Python\\Python37\\Scripts' which is not on PATH.\nWhich is correct. However, my Python is installed in C:\\Program Files\\Python37\\\nSo I am thinking Python is installed for all users, while pylinter seems to be installed for the user (me).\nChecking the command-line that VScode threw to install pylinter it indeed seems to install for the user:\n\n& \"C:\/Program Files\/Python37\/python.exe\" -m pip install -U pylint --user\n\nSo, I have some questions on resolving this issue;\n1 - how can I get the immediate issue resolved?\n- remove pylinter as user\n- re-install for all users\n2 - Will this (having python installed for all users) keep bugging me in the future? \n- should I re-install python for the current user only when using it with VScode?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":2269,"Q_Id":57744390,"Users Score":1,"Answer":"If the goal is to simply use pylint with VS Code, then you don't need to install it globally. Create a virtual environment and select that in VS Code as your Python interpreter and then pylint will be installed there instead of globally. That way you don't have to worry about PATH.","Q_Score":1,"Tags":"python,visual-studio-code","A_Id":57983283,"CreationDate":"2019-09-01T08:27:00.000","Title":"Python Linter installation issue with VScode","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am a beginner in python and want to know how to take just the user specified number of inputs in one single line and store each input in a variable.\nFor example:\nSuppose I have 3 test cases and have to pass 4 integers separated by a white space for each such test case.\nThe input should look like this:\n3\n1 0 4 3\n2 5 -1 4\n3 7 1 9\nI know about the split() method that helps you to separate integers with a space in between. 
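[Editor's sketch] Pulling the two equivalent forms from the Ray Tune answer above into one example; the search space and trainable are made up for illustration:
import numpy as np
from ray import tune

config = {
    "width": tune.randint(1, 10),  # thin wrapper around np.random.randint
    "lr": tune.sample_from(lambda spec: np.random.choice([1e-4, 1e-3, 1e-2])),
}
tune.run(my_trainable, config=config, num_samples=20)  # my_trainable is hypothetical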
But since I need to input only 4 integers, I need to know how to write the code so that the computer would take only 4 integers for each test case, and then the input line should automatically move, asking the user for input for the next test case.\nOther than that, the other thing I am looking for is how to store each integer for each test case in some variable so I can access each one later.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":96,"Q_Id":57746739,"Users Score":0,"Answer":"For the first part, if you would like to store input in a variable, you would do the following...\n (var_name) = input()\nOr if you want to treat your input as an integer, and you are sure it is an integer, you would want to do this\n (var_name) = int(input())\nThen you could access the input by calling up the var_name.\nHope that helped :D","Q_Score":0,"Tags":"python-3.x,input","A_Id":57750161,"CreationDate":"2019-09-01T14:17:00.000","Title":"Taking specified number of user inputs and storing each in a variable","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have a problem in which I have to show data entered into a database without having to press any button or doing anything.\nI am creating an app for a hospital, it has two views, one for a doctor and one for a patient.\nI want as soon as the patient enters his symptoms, it shows up on doctor immediately without having to press any button.\nI have no idea how to do this.\nAny help would be appreciated.\nThanks in advance","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":955,"Q_Id":57756447,"Users Score":0,"Answer":"You can't do that with Django solely. You have to use some JS framework (React, Vue, Angular) and WebSockets, for example.","Q_Score":1,"Tags":"python,django,django-signals","A_Id":57756507,"CreationDate":"2019-09-02T11:48:00.000","Title":"How to automatically update view once the database is updated in django?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am practicing model deployment to GCP cloud ML Engine. However, I receive errors stated below when I execute the following code section in my local jupyter notebook. 
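[Editor's sketch] A minimal example of reading the inputs described in the question above (variable names are illustrative): the first line gives the number of test cases, and each following line is split into exactly four integers:
t = int(input())
cases = []
for _ in range(t):
    a, b, c, d = map(int, input().split())  # raises ValueError if a line doesn't have 4 ints
    cases.append((a, b, c, d))
print(cases[0])  # each test case is now addressable later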
Please note I do have bash installed in my local PC and environment variables are properly set.\n%%bash\ngcloud config set project $PROJECT\ngcloud config set compute\/region $REGION\nError messages:\n-bash: line 1: \/mnt\/c\/Users\/User\/AppData\/Local\/Google\/Cloud SDK\/google-cloud-sdk\/bin\/gcloud: Permission denied\n-bash: line 2: \/mnt\/c\/Users\/User\/AppData\/Local\/Google\/Cloud SDK\/google-cloud-sdk\/bin\/gcloud: Permission denied\nCalledProcessError: Command 'b'gcloud config set project $PROJECT\\ngcloud config set compute\/region $REGION\\n\\n'' returned non-zero exit status 126.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":315,"Q_Id":57787007,"Users Score":0,"Answer":"Perhaps you installed Google Cloud SDK with root?\ntry \nsudo gcloud config set project $PROJECT\nand \nsudo gcloud config set compute\/region $REGION","Q_Score":0,"Tags":"python,google-cloud-platform,jupyter-notebook,gcloud,gcp-ai-platform-notebook","A_Id":57788161,"CreationDate":"2019-09-04T11:00:00.000","Title":"how do I give permission to bash to run to multiple gcloud commands from local jupyter notebook","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":1,"REVIEW":0},{"Question":"I have function imported from a DLL file using pythonnet:\nI need to trace my function(in a C# DLL) with Python.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":42,"Q_Id":57789526,"Users Score":0,"Answer":"you can hook a Visual Studio debugger to python.exe which runs your dll","Q_Score":0,"Tags":"c#,python-3.6,python.net","A_Id":61789129,"CreationDate":"2019-09-04T13:31:00.000","Title":"how to use breakpoint in mydll.dll using python3 and pythonnet","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am trying to build an application in python which will use Oracle Database installed in corporate server and the application which I am developing can be used in any local machine.\nIs it possible to connect to oracle DB in Python without installing the oracle client in the local machine where the python application will be stored and executed?\nLike in Java, we can use the jdbc thin driver to acheive the same, how it can be achieved in Python.\nAny help is appreciated\nInstalling oracle client, connect is possible through cx_Oracle module.\nBut in systems where the client is not installed, how can we connect to the DB.","AnswerCount":3,"Available Count":2,"Score":0.1325487884,"is_accepted":false,"ViewCount":6490,"Q_Id":57789704,"Users Score":2,"Answer":"It is not correct that java can connect to oracle without any oracle provided software.\nIt needs a compatible version of ojdbc*.jar to connect. 
Similarly, Python's cx_Oracle library needs Oracle's Instant Client software to be installed.\nInstant Client is free software and has a small footprint.","Q_Score":2,"Tags":"python,database,oracle,connect,cx-oracle","A_Id":63163648,"CreationDate":"2019-09-04T13:40:00.000","Title":"Python Oracle DB Connect without Oracle Client","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am trying to build an application in python which will use Oracle Database installed in corporate server and the application which I am developing can be used in any local machine.\nIs it possible to connect to oracle DB in Python without installing the oracle client in the local machine where the python application will be stored and executed?\nLike in Java, we can use the jdbc thin driver to acheive the same, how it can be achieved in Python.\nAny help is appreciated\nInstalling oracle client, connect is possible through cx_Oracle module.\nBut in systems where the client is not installed, how can we connect to the DB.","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":6490,"Q_Id":57789704,"Users Score":0,"Answer":"Installing the Oracle client is a huge pain. Could you instead create a web service on a system that does have OCI and then connect to it that way? This might end up being a better solution than direct access.","Q_Score":2,"Tags":"python,database,oracle,connect,cx-oracle","A_Id":70981244,"CreationDate":"2019-09-04T13:40:00.000","Title":"Python Oracle DB Connect without Oracle Client","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I used python-2.7 version to run the PyTorch with GPU support. I used this command to train the dataset using multi-GPU. \nCan someone please tell me how I can fix this error with PyTorch in OpenNMT-py, or is there a way to get pytorch support for multi-GPU using python 2.7?\nHere is the command that I tried.\n\n\nCUDA_VISIBLE_DEVICES=1,2\n python train.py -data data\/demo -save_model demo-model -world_size 2 -gpu_ranks 0 1\n\n\nThis is the error:\n\nTraceback (most recent call last):\n File \"train.py\", line 200, in \n main(opt)\n File \"train.py\", line 60, in main\n mp = torch.multiprocessing.get_context('spawn')\n AttributeError: 'module' object has no attribute 'get_context'","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":166,"Q_Id":57798219,"Users Score":0,"Answer":"Maybe you can check whether your torch and Python versions fit the OpenNMT requirements.\nI remember their supported torch is 1.0 or 1.2 (1.0 is better). You may have to downgrade your version of torch. 
Hope that works.","Q_Score":0,"Tags":"python-2.7,pytorch,opennmt","A_Id":60118279,"CreationDate":"2019-09-05T03:55:00.000","Title":"How to take multi-GPU support to the OpenNMT-py (pytorch)?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":1,"REVIEW":0},{"Question":"I understand how the readframes() method works for mono audio input, however I don't know how it will work for stereo input. Would it give a tuple of two byte objects?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":208,"Q_Id":57811176,"Users Score":0,"Answer":"A wave file has:\na sample rate of Wave_read.getframerate() frames per second (e.g. 44100 if from an audio CD).\na sample width of Wave_read.getsampwidth() bytes (i.e. 1 for 8-bit samples, 2 for 16-bit samples)\nWave_read.getnchannels() channels (typically 1 for mono, 2 for stereo)\nEvery time you do a Wave_read.readframes(N), you get N * sample_width * n_channels bytes.","Q_Score":0,"Tags":"python,wave","A_Id":57811267,"CreationDate":"2019-09-05T18:28:00.000","Title":"What does wave_read.readframes() return if there are multiple channels?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"scipy.constants.physical_constants returns (value, unit, uncertainty) tuples for many specific physical constants. The units are given in the form of a string. (For example, one of the options for the universal gas constant has a unit field of 'J kg^-1 K^-1'.)\nAt first blush, this seems pretty useful. Keeping track of your units is very important in scientific calculations, but, for the life of me, I haven't been able to find any facilities for parsing these strings into something that can be tracked. Without that, there's no way to simplify the combined units after different values have been added, subtracted, etc. with each other.\nI know I can manually declare the units of constants with separate libraries such as what's available in SymPy, but that would make SciPy's own units completely useless (maybe just a convenience for printouts). That sounds pretty absurd. I can't imagine that SciPy doesn't know how to deal with units.\nWhat am I missing?\nEdit:\nI know that SciPy is a stack, and I am well aware of what libraries are part of it. My question is about whether SciPy knows how to work with the very units it spits out with its constants (or if I have to throw out those units and manually redefine everything). As far as I can see, it can't actually parse its own unit strings (and nothing else in the ecosystem seems to know how to make heads or tails of them either). This doesn't make sense to me because if SciPy proper can't deal with these units, why would they be there in the first place? Not to mention, keeping track of your units across your calculations is the exact kind of thing you need to do in science. 
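[Editor's sketch] To answer the stereo part of the wave question directly: readframes() returns a single interleaved bytes object (left sample, right sample, left, right, ...), not a tuple per channel. A sketch of splitting it, assuming a 16-bit stereo file (the file name is hypothetical):
import wave, struct
w = wave.open("stereo.wav", "rb")
frames = w.readframes(1024)  # interleaved: L R L R ...
w.close()
samples = struct.unpack("<%dh" % (len(frames) // 2), frames)  # 16-bit signed ints
left, right = samples[0::2], samples[1::2]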
Forcing manual redefinitions of all the units someone went through the trouble of associating with all these constants doesn't make sense.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":230,"Q_Id":57830447,"Users Score":0,"Answer":"No, scipy (the library) does not have any notion of quantities with units and makes no guarantees when operating on quantities with units (from e.g. pint, astropy.Quantity or other objects from other unit-handling packages).","Q_Score":0,"Tags":"python,scipy,symbolic-math,scientific-computing","A_Id":57832031,"CreationDate":"2019-09-07T03:28:00.000","Title":"Does SciPy have utilities for parsing and keeping track of the units associated with its constants?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have this strange bug when I'm using a LightGBM model to calculate some predictions.\nI trained a LightGBM model inside of jupyter and dumped it into a file using pickle. This model is used in an external class.\nMy problem is when I call my prediction function from this external class outside of jupyter it always predicts an output of 0.5 (on all rows). When I use the exact same class inside of jupyter I get the expected output. In both cases the exact same model is used with the exact same data.\nHow can this behavior be explained and how can I achieve to get the same results outside of jupyter? Has it something to do with the fact I trained the model inside of jupyter? (I can't imagine why it would, but atm have no clue where this bug is coming from)\nEdit: Used versions:\nBoth times the same lgb version is used (2.2.3), I also checked the python version which are equal (3.6.8) and all system paths (sys.path output). The paths are equal except of '\/home\/xxx\/.local\/lib\/python3.6\/site-packages\/IPython\/extensions' and '\/home\/xxx\/.ipython'.\nEdit 2: I copied the code I used inside of my jupyter and ran it as a normal python file. The model made this way works now inside of jupyter and outside of it. I still wonder why this bug occurred.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":100,"Q_Id":57833411,"Users Score":1,"Answer":"It can't be a jupyter problem, since jupyter is just an interface to communicate with python. The problem could be that you are using a different python environment and a different version of lgbm... Check import lightgbm as lgb and lgb.__version__ on both jupyter and your python terminal and make sure they are the same (or check if there have been some major changes between these versions).","Q_Score":0,"Tags":"python,jupyter-notebook,lightgbm","A_Id":57833508,"CreationDate":"2019-09-07T11:50:00.000","Title":"LightGBM unexpected behaviour outside of jupyter","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have to create a setup screen with tk that starts only at the first boot of the application where you will have to enter names etc ... a sort of setup. 
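[Editor's sketch] Since the scipy answer above points to pint for unit handling, a sketch of pairing it with scipy's constants. The unit string is translated by hand here; whether pint can parse scipy's 'J mol^-1 K^-1' notation directly is not guaranteed:
import pint
from scipy import constants

ureg = pint.UnitRegistry()
value, unit, _ = constants.physical_constants["molar gas constant"]
R = value * ureg("J / (mol * K)")  # hand-translated from scipy's unit string
print((R * 300 * ureg.kelvin).to("J/mol"))  # units now propagate through arithmetic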
Does anyone have any ideas on how to do so that A) is performed only the first time and B) the input can be saved and used in the other scripts? Thanks in advance","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":39,"Q_Id":57843754,"Users Score":1,"Answer":"Why not use a file to store the details? You could use a text file or you could use pickle to save a python object then reload it. On starting your application you could check to see if the file exists and contains the necessary information, if it doesn't you can activate your setup screen, if not skip it.","Q_Score":0,"Tags":"python,python-3.x,tkinter","A_Id":57843779,"CreationDate":"2019-09-08T16:32:00.000","Title":"Create Python setup","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am encountering a task and I am not entirely sure what the best solution is.\nI currently have one data set in mongo that I use to display user data on a website, backend is in Python. A different team in the company recently created an API that has additional data that I would let to show along side the user data, and the data from the newly created API is paired to my user data (Shows specific data per user) that I will need to sync up.\nI had initially thought of creating a cron job that runs weekly (as the \"other\" API data does not update often) and then taking the information and putting it directly into my data after pairing it up.\nA coworker has suggested caching the \"other\" API data and then just returning the \"mixed\" data to display on the website.\nWhat is the best course of action here? Actually adding the data to our data set would allow us to have 1 source of truth and not rely on the other end point, as well as doing less work each time we need the data. Also if we end up needing that information somewhere else in the project, we already have the data in our DB and can just use it directly without needing to re-organize\/pair it. \nJust looking for general pro's and cons for each solution. Thanks!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":62,"Q_Id":57854727,"Users Score":2,"Answer":"Synchronization will always cost more than federation. I would either A) embrace CORS and integrate it in the front-end, or B) create a thin proxy in your Python App.\nWhich you choose depends on how quickly this API changes, whether you can respond to those changes, and whether you need graceful degradation in case of remote API failure. If it is not mission-critical data, and the API is reliable, just integrate it in the browser. If they support things like HTTP cache-control, all the better, the user's browser will handle it. 
\nIf the API is not scalable\/reliable, then consider putting in a proxy server-side so that you can catch errors and provide graceful degradation.","Q_Score":0,"Tags":"python,database,architecture","A_Id":57857492,"CreationDate":"2019-09-09T13:09:00.000","Title":"What is the best way to combine two data sets that depend on each other?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":1},{"Question":"I\u2019m using pandas 0.25.1 in Jupyter Lab and the maximum number of rows I can display is 10, regardless of what pd.options.display.max_rows is set to. \nHowever, if pd.options.display.max_rows is set to less than 10 it takes effect and if pd.options.display.max_rows = None then all rows show.\nAny idea how I can get a pd.options.display.max_rows of more than 10 to take effect?","AnswerCount":2,"Available Count":1,"Score":-0.0996679946,"is_accepted":false,"ViewCount":32532,"Q_Id":57860775,"Users Score":-1,"Answer":"min_rows sets the number of rows shown when a frame is truncated, split evenly between the top (head) and the bottom (tail), even if you put in an odd number. If you only want a set number of rows to be displayed without reading the whole file into memory, another way is to use nrows, e.g. results = pd.read_csv('ex6.csv', nrows = 5) # read 5 rows from the top, 0 - 4\nIf the dataframe has about 100 rows and you want to display only the first 5 rows from the top, with no tail, use nrows.","Q_Score":38,"Tags":"python,pandas","A_Id":66486493,"CreationDate":"2019-09-09T20:26:00.000","Title":"pandas pd.options.display.max_rows not working as expected","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have used Tensorflow object detection for quite awhile now. I am more of a user, I don't really know how it works. I am wondering: is it possible to train it to recognize that an object is one thing and not another? For example, I want to detect cracks on the tiles. Can I use object detection to do so, where I show an image of a tile and it can tell me if there is a crack (and also show the location), or it will tell me if there is no crack on the tile? \nI have tried to train using pictures with and without defect, using 2 classes (1 for defect and 1 for no defect). But the results keep showing both (if the picture has a defect) in 1 picture. Is there a way to show only the one with the defect?\nBasically I would like to do defect checking. This is a simplistic case of 1 defect, but the actual case will have a few defects.\nThank you.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":92,"Q_Id":57879708,"Users Score":0,"Answer":"In case you're only expecting input images of tiles, either with defects or not, you don't need a class for no defect.\nThe API adds a background class for everything which is not one of the other classes.\nSo you simply need to state one class - defect, and tiles which are not detected as such are not defected. 
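To make the single-class setup concrete, the label map for the TensorFlow Object Detection API would then contain just one entry (a hedged sketch of what such a file typically looks like):\nitem {\n  id: 1\n  name: 'defect'\n}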
\nSo in your training set - simply give bounding boxes of defects, and no bounding box in case of no defect, and then your model should learn to detect the defects as mentioned above.","Q_Score":0,"Tags":"python-3.x,tensorflow,object-detection","A_Id":57888733,"CreationDate":"2019-09-11T00:46:00.000","Title":"Using tensorflow object detection for either or detection","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am writing a data mining script to pull information off of a program called Agisoft PhotoScan for my lab. PhotoScan uses its own Python library (and I'm not sure how to access pip for this particular build), which has caused me a few problems installing other packages. After dragging, dropping, and praying, I've gotten a few packages to work, but I'm still facing a memory leak. If there is no way around it, I can try to install some more packages to weed out the leak, but I'd like to avoid this if possible.\nMy understanding of Python garbage collection so far is, when an object loses its reference, it should be deleted. I used sys.getrefcount() to check all my variables, but they all stay constant. I have a hunch that the issue could be in the mysql-connector package I installed, or in PhotoScan itself, but I am not sure how to go about testing. I will be more than happy to provide code if that will help!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":30,"Q_Id":57893668,"Users Score":0,"Answer":"It turns out that the memory leak was indeed with the PhotoScan program. I've worked around it by having a separate script open and close it, running my original script once each time. Thank you all for the help!","Q_Score":0,"Tags":"python,mysql-connector-python","A_Id":57895212,"CreationDate":"2019-09-11T16:52:00.000","Title":"How can I find memory leaks without external packages?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am trying to start cmd window and then running a chain of cmds in succession one after the other in that cmd window.\nsomething like start cmd \/k pipenv shell && py manage.py runserver the start cmd should open a new cmd window, which actually happens, then the pipenv shell should start a virtual environment within that cmd instance, also happens, and the py manage.py runserver should run in the created environment but instead it runs where the script is called. \nAny ideas on how I can make this work?","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":826,"Q_Id":57941842,"Users Score":1,"Answer":"Your py manage.py runserver command calling python executor in your major environment. In your case, you could use pipenv run manage.py runserver that detect your virtual env inside your pipfile and activate it to run your command. 
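Putting that together, the original one-liner would become something like start cmd \/k \"pipenv run python manage.py runserver\" (assuming pipenv is on PATH in the new window), so the server runs inside the project's virtual env without a separate activation step.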
An alternative way is to use virtualenv that create virtual env directly inside your project directory and calling envname\\Scripts\\activate each time you want to run something inside your virtual env.","Q_Score":2,"Tags":"python,batch-file,cmd","A_Id":57942022,"CreationDate":"2019-09-15T06:56:00.000","Title":"Start cmd and run multiple commands in the created cmd instance","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have a structured numpy ndarray la = {'val1':0,'val2':1} and I would like to return the vals using the 0 and 1 as keys, so I wish to return val1 when I have 0 and val2 when I have 1 which should have been straightforward however my attempts have failed, as I am not familiar with this structure.\nHow do I return only the corresponding val, or an array of all vals so that I can read in order?","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":2969,"Q_Id":57948331,"Users Score":0,"Answer":"Just found out that I can use la.tolist() and it returns a dictionary, somehow? when I wanted a list, alas from there on I was able to solve my problem.","Q_Score":0,"Tags":"python,numpy,dictionary,key-value,numpy-ndarray","A_Id":57948589,"CreationDate":"2019-09-15T21:33:00.000","Title":"structured numpy ndarray, how to get values","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I start on python, I try to use mathplotlib on my code but I have an error \"ModuleNotFoundError: No module named 'matplotlib'\" on my cmd. So I have tried to use pip on the cmd: pip install mathplotlib.\nBut I have an other error \"No python at 'C:...\\Microsoft Visual Studio...\"\nActually I don't use microsoft studio anymore so I usinstall it but I think I have to change the path for the pip modul but I don't know how... I add the link of the script of the python folder on the variables environment but it doesn't change anything. How can I use pip ?\nAntoine","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":618,"Q_Id":57959921,"Users Score":0,"Answer":"Your setup seems messed up. 
A couple of ideas:\n\nlong term solution: Uninstall everything related to Python, make sure your PATH environment variables are clean, and reinstall Python from scratch.\nshort term solution: Since py seems to work, you could go along with it: py, py -3 -m pip install , and so on.\nIf you feel comfortable enough you could try to salvage what works by looking at the output of py -0p, this should tell you where are the Python installations that are potentially functional, and you could get rid of the rest.","Q_Score":0,"Tags":"python-3.x,pip","A_Id":57982611,"CreationDate":"2019-09-16T15:19:00.000","Title":"impossible to use pip","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":1,"REVIEW":0},{"Question":"I have created a chatbot using RASA to work with free text and it is working fine. As per my new requirement i need to build button based chatbot which should follow flowchart kind of structure. I don't know how to do that what i thought is to convert the flowchart into graph data structure using networkx but i am not sure whether it has that capability. I did search but most of the examples are using dialogue or chat fuel. Can i do it using networkx.\nPlease help.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":614,"Q_Id":57961205,"Users Score":1,"Answer":"Sure, you can.\nYou just need each button to point to another intent. The payload of each button should point have the \/intent_value as its payload and this will cause the NLU to skip evaluation and simply predict the intent. Then you can just bind a trigger to the intent or use the utter_ method.\nHope that helps.","Q_Score":0,"Tags":"python,networkx,flowchart,rasa","A_Id":58013231,"CreationDate":"2019-09-16T16:45:00.000","Title":"How to create button based chatbot","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"In teradataml how should the user remove temporary tables created by Teradata MLE functions?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":34,"Q_Id":57963362,"Users Score":0,"Answer":"At the end of a session call remove_context() to trigger the dropping of tables.","Q_Score":0,"Tags":"python,teradata","A_Id":57963363,"CreationDate":"2019-09-16T19:35:00.000","Title":"Teradataml: Remove all temporary tables created by Teradata MLE functions","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have a module with a controller and I need to inherit it in a newly created module for some customization. 
I searched about controller inheritance in Odoo and I found that we can inherit Odoo's base modules' controllers this way:\nfrom odoo.addons.portal.controllers.portal import CustomerPortal, pager as portal_pager, get_records_pager\nbut how can I do this for a third party module's controller? In my case, the third party module directory is one step back from my own module's directory. If I should import the class of a third party module controller, how should I do it?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":455,"Q_Id":57968188,"Users Score":3,"Answer":"It is not a problem that you are using a custom module. If the module is installed in the database, you can import it in the same way from odoo.addons. \nEg: from odoo.addons.your_module.controllers.main import MyClass","Q_Score":1,"Tags":"python,odoo,web-controls,odoo-12","A_Id":57971645,"CreationDate":"2019-09-17T06:03:00.000","Title":"How to inherit controller of a third party module for customization Odoo 12?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have two columns with high-cardinality categorical values: one column (area_id) has 21878 unique values and the other (page_entry) has 800 unique values. I am building a predictive ML model to predict the hits on a webpage.\ncolumn information:\narea_id: all the locations that were visited during the session. (has the location code numbers of different areas of a webpage)\npage_entry: describes the landing page of the session.\nHow can I change these two columns into numerical features, apart from one_hot encoding?\nthank you.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":150,"Q_Id":57975387,"Users Score":0,"Answer":"One approach could be to group your categorical levels into smaller buckets using business rules. In your case, for the feature area_id you could simply group them based on their geographical location, say all area_ids from a single district (or for that matter any other level of aggregation) will be replaced by a single id. Similarly, for page_entry you could group similar pages based on some attributes, like the nature of the web page: sports, travel, etc. In this way you could significantly reduce the number of dimensions of your variables.\nHope this helps!","Q_Score":0,"Tags":"python,machine-learning,data-science,data-cleaning,data-processing","A_Id":57975781,"CreationDate":"2019-09-17T13:31:00.000","Title":"how to deal with high cardinal categorical feature into numeric for predictive machine learning model?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"Problem Statement:\nThere are 5 sockets and 6 phones. Each phone takes 60 minutes to charge completely. What is the least time required to charge all phones?\nThe phones can be interchanged along the sockets.\nWhat I've tried:\nI've made a list with 6 elements whose initial value is 0. I've defined two functions. Switch function, which interchanges the phone one socket to the left. 
Charge function, which adds a value of 10 (charging time assumed) to each element, except the last (as there are only 5 sockets). As the program proceeds, how do I restrict individual elements to 60, while other, lower-value elements still get 10 added until they reach 60?","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":94,"Q_Id":57997755,"Users Score":0,"Answer":"You cannot simply restrict the maximum element size. What you can do is check the element size with an if condition and terminate the process.\nbtw, the answer is 6x60\/5=72 mins.","Q_Score":0,"Tags":"python","A_Id":57997990,"CreationDate":"2019-09-18T17:09:00.000","Title":"How to restrict the maximum size of an element in a list in Python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"Problem Statement:\nThere are 5 sockets and 6 phones. Each phone takes 60 minutes to charge completely. What is the least time required to charge all phones?\nThe phones can be interchanged along the sockets.\nWhat I've tried:\nI've made a list with 6 elements whose initial value is 0. I've defined two functions. Switch function, which interchanges the phone one socket to the left. Charge function, which adds a value of 10 (charging time assumed) to each element, except the last (as there are only 5 sockets). As the program proceeds, how do I restrict individual elements to 60, while other, lower-value elements still get 10 added until they reach 60?","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":94,"Q_Id":57997755,"Users Score":0,"Answer":"In the charge function, add an if condition that checks the value of the element.\nI'm not sure what your add function looks like exactly, but I would define the pseudocode to look something like this:\nif element < 60:\nadd 10 to the element\nThis way, if an element is greater than or equal to 60, it won't get caught by the if condition and won't get anything added to it.","Q_Score":0,"Tags":"python","A_Id":57997965,"CreationDate":"2019-09-18T17:09:00.000","Title":"How to restrict the maximum size of an element in a list in Python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"So, this might be an utterly dumb question, but I have just started working with python and its data science libs, and I would like to see seaborn plots displayed, but I prefer to work with editors I have experience with, like VS Code or PyCharm, instead of Jupyter notebook. Of course, when I run the python code, the console does not display the plots as those are images. So how do I get to display and see the plots when not using jupyter?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":427,"Q_Id":57999071,"Users Score":0,"Answer":"You can try to run a matplotlib example code with the python console or ipython console. They will show you a window with your plot. \nAlso, you can use Spyder instead of those consoles. 
It is free, and works well with python libraries for data science. Of course, you can check your plots in Spyder.","Q_Score":0,"Tags":"python,jupyter-notebook","A_Id":57999219,"CreationDate":"2019-09-18T18:44:00.000","Title":"how to display plot images outside of jupyter notebook?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am using Celery with a RabbitMQ server. I have a publisher, which could potentially be terminated by a SIGKILL and since this signal cannot be watched, I cannot revoke the tasks. What would be a common approach to revoke the tasks where the publisher is not alive anymore?\nI experimented with an interval on the worker side, but the publisher is obviously not registered as a worker, so I don't know how I can detect a timeout","AnswerCount":3,"Available Count":2,"Score":1.0,"is_accepted":false,"ViewCount":152,"Q_Id":58017144,"Users Score":6,"Answer":"There's nothing built-in to celery to monitor the producer \/ publisher status -- only the worker \/ consumer status. There are other alternatives that you can consider, for example by using a redis expiring key that has to be updated periodically by the publisher that can serve as a proxy for whether a publisher is alive. And then in the task checking to see if the flag for a publisher still exists within redis, and if it doesn't the task returns doing nothing.","Q_Score":8,"Tags":"python,rabbitmq,celery","A_Id":58053094,"CreationDate":"2019-09-19T18:35:00.000","Title":"Tasks linger in celery amqp when publisher is terminated","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am using Celery with a RabbitMQ server. I have a publisher, which could potentially be terminated by a SIGKILL and since this signal cannot be watched, I cannot revoke the tasks. What would be a common approach to revoke the tasks where the publisher is not alive anymore?\nI experimented with an interval on the worker side, but the publisher is obviously not registered as a worker, so I don't know how I can detect a timeout","AnswerCount":3,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":152,"Q_Id":58017144,"Users Score":4,"Answer":"Another solution, which works in my case, is to add the next task only if the current processed ones are finished. 
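One hedged way to express that with stock Celery is to have each task enqueue its successor only once its own work is done; do_work and get_next below are hypothetical helpers:\nfrom celery import shared_task\n@shared_task\ndef process(item_id):\n    do_work(item_id)  # hypothetical: the actual unit of work\n    next_id = get_next(item_id)  # hypothetical: look up the next queued item\n    if next_id is not None:\n        process.delay(next_id)  # enqueue the successor only after finishing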
In this case the queue doesn't fill up.","Q_Score":8,"Tags":"python,rabbitmq,celery","A_Id":58107564,"CreationDate":"2019-09-19T18:35:00.000","Title":"Tasks linger in celery amqp when publisher is terminated","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I know how to use magical methods in python, but I would like to understand more about them.\nFor it I would like to consider three examples:\n1) __init__:\nWe use this as constructor in the beginning of most classes. If this is a method, what is the object associated with it? Is it a basic python object that is used to generate all the other objects?\n2) __add__\nWe use this to change the behaviour of the operator +. The same question above.\n3) __name__:\nThe most common use of it is inside this kind of structure:if __name__ == \"__main__\":\nThis is return True when you are running the module as the main program.\nMy question is __name__ a method or a variable? If it is a variable what is the method associated with it. If this is a method, what is the object associated with it?\nSince I do not understand very well these methods, maybe the questions are not well formulated. I would like to understand how these methods are constructed in Python.","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":55,"Q_Id":58017504,"Users Score":1,"Answer":"The object is the class that's being instantiated, a.k.a. the Foo in Foo.__init__(actual_instance)\nIn a + b the object is a, and the expression is equivalent to a.__add__(b)\n__name__ is a variable. It can't be a method because then comparisons with a string would always be False since a function is never equal to a string","Q_Score":0,"Tags":"python,object,methods","A_Id":58017632,"CreationDate":"2019-09-19T19:03:00.000","Title":"Python \"Magic methods\" are realy methods?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am trying to write a program with python that works like android folders bit for Windows. I want the user to be able to single click on a desktop icon and then a window will open with the contents of the folder in it. After giving up trying to find a way to allow single click to open a desktop application (for only one application I am aware that you can allow single click for all files and folders), I decided to check if the user clicked in the location of the file and if they were on the desktop while they were doing that. 
So what I need to know is how to check if the user is viewing the desktop in python.\nThanks,\nHarry\nTLDR; how to check if user is viewing the desktop - python","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":256,"Q_Id":58018945,"Users Score":0,"Answer":"I don't know if \"single clicking\" would work in any way but you can use Pyautogui to automatically click as many times as you want","Q_Score":2,"Tags":"python,windows,directory,desktop","A_Id":58028985,"CreationDate":"2019-09-19T21:07:00.000","Title":"Python - how to check if user is on the desktop","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"Good day!\nI have a celebrity dataset on which I want to fine-tune a keras built-in model. SO far what I have explored and done, we remove the top layers of the original model (or preferably, pass the include_top=False) and add our own layers, and then train our newly added layers while keeping the previous layers frozen. This whole thing is pretty much like intuitive.\nNow what I require is, that my model learns to identify the celebrity faces, while also being able to detect all the other objects it has been trained on before. Originally, the models trained on imagenet come with an output layer of 1000 neurons, each representing a separate class. I'm confused about how it should be able to detect the new classes? All the transfer learning and fine-tuning articles and blogs tell us to replace the original 1000-neuron output layer with a different N-neuron layer (N=number of new classes). In my case, I have two celebrities, so if I have a new layer with 2 neurons, I don't know how the model is going to classify the original 1000 imagenet objects.\nI need a pointer on this whole thing, that how exactly can I have a pre-trained model taught two new celebrity faces while also maintaining its ability to recognize all the 1000 imagenet objects as well.\nThanks!","AnswerCount":2,"Available Count":1,"Score":0.2913126125,"is_accepted":false,"ViewCount":1312,"Q_Id":58027839,"Users Score":3,"Answer":"With transfer learning, you can make the trained model classify among the new classes on which you just trained using the features learned from the new dataset and the features learned by the model from the dataset on which it was trained in the first place. Unfortunately, you can not make the model to classify between all the classes (original dataset classes + second time used dataset classes), because when you add the new classes, it keeps their weights only for classification.\nBut, let's say for experimentation you change the number of output neurons (equal to the number of old + new classes) in the last layer, then it will now give random weights to these neurons which on prediction will not give you meaningful result. \nThis whole thing of making the model to classify among old + new classes experimentation is still in research area. 
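For the conventional route, a new head over frozen features that classifies only the new classes, a minimal Keras sketch (the base network is assumed here purely for illustration):\nfrom tensorflow.keras.applications import ResNet50\nfrom tensorflow.keras import layers, models\nbase = ResNet50(weights='imagenet', include_top=False, pooling='avg')\nbase.trainable = False  # freeze the pre-trained feature extractor\noutputs = layers.Dense(2, activation='softmax')(base.output)  # 2 new celebrity classes\nmodel = models.Model(base.input, outputs)\nmodel.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])\nNote this classifies only the new classes; it does not preserve the original 1000 imagenet outputs.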
\nHowever, one way you can achieve it is to train your model from scratch on the whole data (old + new).","Q_Score":4,"Tags":"python,tensorflow,keras,deep-learning,classification","A_Id":58028414,"CreationDate":"2019-09-20T11:50:00.000","Title":"How to fine-tune a keras model with existing plus newer classes?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I'm using Imageio, the python library that wraps around ffmpeg to do hardware encoding via nvenc. My issue is that I can't get more than 2 sessions to launch (I am using non-quadro GPUs). Even using multiple GPUs. I looked over NVIDIA's support matrix and they state only 2 sessions per gpu, but it seems to be per system.\nFor example I have 2 GPUs in a system. I can either use the env variable CUDA_VISIBLE_DEVICES or set the ffmpeg flag -gpu to select the GPU. I've verified gpu usage using Nvidia-smi cli. I can get 2 encoding sessions working on a single gpu. Or 1 session working on 2 separate gpus each. But I can't get 2 encoding sessions working on 2 gpus. \nEven more strangely if I add more gpus I am still stuck at 2 sessions. I can't launch a third encoding session on a 3rd gpu. I am always stuck at 2 regardless of the # of gpus. Any ideas on how to fix this?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":967,"Q_Id":58029589,"Users Score":2,"Answer":"Nvidia limits it 2 per system Not 2 per GPU. The limitation is in the driver, not the hardware. There have been unofficially drivers posted to github which remove the limitation","Q_Score":0,"Tags":"python,ffmpeg,python-imageio,nvenc","A_Id":58042103,"CreationDate":"2019-09-20T13:43:00.000","Title":"Nvenc session limit per GPU","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"The Divio Django CMS offers two servers: TEST and LIVE. Are these also two separate repositories? Or how is this done in the background?\nI'm wondering because I would have the feeling the LIVE server is its own repository that just pulls from the TEST whenever I press deploy. Is that correct?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":58,"Q_Id":58038110,"Users Score":1,"Answer":"All Divio projects (django CMS, Python, PHP, whatever) have a Live and Test environment.\nBy default, both build the project from its repository's master branch (in older projects, develop). \nOn request, custom tracking branches can be enabled, so that the Live and Test environments will build from separate branches.\nWhen a build successfully completes, the Docker image can be reused until changes are made to the project's repository. This means that after a successful deployment on Test, the Docker image doesn't need to be rebuilt, and the Live environment can be deployed much faster from the pre-built image. 
(Obviously this is only possible when they are on the same branch.)","Q_Score":1,"Tags":"python,django,divio","A_Id":58041679,"CreationDate":"2019-09-21T07:16:00.000","Title":"Setup of the Divio CMS Repositories","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"for my current requirement, I'm having a dataset of 10k+ faces from 100 different people from which I have trained a model for recognizing the face(s). The model was trained by getting the 128 vectors from the facenet_keras.h5 model and feeding those vector value to the Dense layer for classifying the faces.\nBut the issue I'm facing currently is\n\nif want to train one person face, I have to retrain the whole model once again.\n\nHow should I get on with this challenge? I have read about a concept called transfer learning but I have no clues about how to implement it. Please give your suggestion on this issue. What can be the possible solutions to it?","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":675,"Q_Id":58049090,"Users Score":1,"Answer":"With transfer learning you would copy an existing pre-trained model and use it for a different, but similar, dataset from the original one. In your case this would be what you need to do if you want to train the model to recognize your specific 100 people.\nIf you already did this and you want to add another person to the database without having to retrain the complete model, then I would freeze all layers (set layer.trainable = False for all layers) except for the final fully-connected layer (or the final few layers). Then I would replace the last layer (which had 100 nodes) to a layer with 101 nodes. You could even copy the weights to the first 100 nodes and maybe freeze those too (I'm not sure if this is possible in Keras). In this case you would re-use all the trained convolutional layers etc. and teach the model to recognise this new face.","Q_Score":1,"Tags":"python-3.x,tensorflow,keras,deep-learning,face-recognition","A_Id":58049249,"CreationDate":"2019-09-22T12:12:00.000","Title":"How do i retrain the model without losing the earlier model data with new set of data","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I'm using Python and Flask, served by Waitress, to host a POST API. I'm calling the API from a C# program that posts data and gets a string response. At least 95% of the time, it works fine, but sometimes the C# program reports an error: \n(500) Internal Server Error.\nThere is no further description of the error or why it occurs. The only clue is that it usually happens in clusters -- when the error occurs once, it likely occurs several times in a row. Without any intervention, it then goes back to running normally.\nSince the error is so rare, it is hard to troubleshoot. Any ideas as to how to debug or get more information? 
Is there error handling I can do from either the C# side or the Flask\/Waitress side?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":2806,"Q_Id":58049827,"Users Score":0,"Answer":"Your flask application should be logging the exception when it occurs. Aside from combing through your logs (which should be stored somewhere centrally) you could consider something like Sentry.io, which is pretty easy to set up with Flask apps.","Q_Score":6,"Tags":"c#,python,flask,waitress","A_Id":58050151,"CreationDate":"2019-09-22T13:48:00.000","Title":"How to debug (500) Internal Server Error on Python Waitress server?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":1,"REVIEW":1},{"Question":"I'm new to python. I have a csv file. I need to check whether the inputs are correct or not. The code should scan through each row. \nAll columns for a particular row should contain values of the same type. Eg:\nAll columns of the second row should contain only strings, \nAll columns of the third row should contain only numbers... etc\nI tried the following approach (it may seem a blunder):\nI have only 15 rows, but no idea of the number of columns (it's the user's choice)\ndf.iloc[1].str.isalpha()\nThis checks for strings. I don't know how to check the other types.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":59,"Q_Id":58056352,"Users Score":1,"Answer":"Simple approach that can be modified:\n\nOpen df using df = pd.read_csv('yourfile.csv')\nFor each column, use df[''] = df[''].astype(str) (str = string, int = integer, float = float64, etc.).\n\nYou can check column types using df.dtypes","Q_Score":1,"Tags":"python,pandas","A_Id":58056413,"CreationDate":"2019-09-23T05:52:00.000","Title":"Check inputs in csv file","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I cannot upgrade pip on my Mac from the Terminal. \nAccording to the documentation I have to type the command:\npip install -U pip\nI get the error message in the Terminal:\npip: command not found\nI have Mac OS 10.14.2, python 3.7.2 and pip 18.1.\nI want to upgrade to pip 19.2.3","AnswerCount":9,"Available Count":3,"Score":0.0665680765,"is_accepted":false,"ViewCount":36951,"Q_Id":58060961,"Users Score":3,"Answer":"I have found an answer that worked for me:\nsudo pip3 install -U pip --ignore-installed pip\nThis installed pip version 19.2.3 correctly.\nIt was very hard to find the correct command on the internet...glad I can share it now.\nThanks.","Q_Score":5,"Tags":"python-3.x","A_Id":58065406,"CreationDate":"2019-09-23T11:00:00.000","Title":"how do I upgrade pip on Mac?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":1,"REVIEW":0},{"Question":"I cannot upgrade pip on my Mac from the Terminal. 
\nAccording to the documentation I have to type the command:\npip install -U pip\nI get the error message in the Terminal:\npip: command not found\nI have Mac OS 10.14.2, python 3.7.2 and pip 18.1.\nI want to upgrade to pip 19.2.3","AnswerCount":9,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":36951,"Q_Id":58060961,"Users Score":0,"Answer":"I came on here to figure out the same thing but none of these things seemed to work. So I went back and looked at how they were telling me to upgrade it, but I still did not get it. So I just started trying things, and the next thing you know I saw the downloading lines and it told me that my pip was upgraded. What I used was pip3 install --upgrade pip. I hope this can help anyone else in need.","Q_Score":5,"Tags":"python-3.x","A_Id":63387903,"CreationDate":"2019-09-23T11:00:00.000","Title":"how do I upgrade pip on Mac?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":1,"REVIEW":0},{"Question":"I cannot upgrade pip on my Mac from the Terminal. \nAccording to the documentation I have to type the command:\npip install -U pip\nI get the error message in the Terminal:\npip: command not found\nI have Mac OS 10.14.2, python 3.7.2 and pip 18.1.\nI want to upgrade to pip 19.2.3","AnswerCount":9,"Available Count":3,"Score":1.0,"is_accepted":false,"ViewCount":36951,"Q_Id":58060961,"Users Score":10,"Answer":"pip3 install --upgrade pip\n\nthis works for me!","Q_Score":5,"Tags":"python-3.x","A_Id":61442850,"CreationDate":"2019-09-23T11:00:00.000","Title":"how do I upgrade pip on Mac?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":1,"REVIEW":0},{"Question":"I have two DataFrames.\ndf1 with columns: id,x1,x2,x3,x4,....xn\ndf2 with columns: id,y.\ndf3 =pd.concat([df1,df2],axis=1)\nWhen I use pandas concat to combine them, the result became\nid,y,id,x1,x2,x3...xn.\nThere are two id columns here. How can I get rid of one?\nI have tried:\ndf3=pd.concat([df1,df2],axis=1).drop_duplicates().reset_index(drop=True)\nbut it did not work.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1418,"Q_Id":58070840,"Users Score":0,"Answer":"drop_duplicates() only removes rows that are completely identical.\nWhat you're looking for is pd.merge().\npd.merge(df1, df2, on='id')","Q_Score":1,"Tags":"python,pandas,concat","A_Id":58070875,"CreationDate":"2019-09-23T22:18:00.000","Title":"how to remove duplicates when using pandas concat to combine two dataframe","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I'm trying to make a classifier for uncertain data (e.g. ranged data) using python. 
In the certain dataset, the list is a 2D array or an array of records (containing float numbers for data and a string for labels), whereas in the uncertain dataset the list is a 3D array (containing ranges of float numbers for data and a string for labels). I managed to manipulate a certain dataset to be uncertain using a uniform probability distribution. A research paper says that I have to use the supremum distance metric. How do I implement this metric in python? Note that in the uncertain dataset, both the test set and the training set are uncertain","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":119,"Q_Id":58089636,"Users Score":0,"Answer":"I found out that using scipy spatial distance and tweaking the for-loops in standard knn helps a lot","Q_Score":0,"Tags":"python,knn","A_Id":58090124,"CreationDate":"2019-09-25T00:25:00.000","Title":"Supremum Metric in Python for Knn with Uncertain Data","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am experiencing performance issues in my pipeline in a DoFn that uses a large side input of ~ 1GB. The side input is passed using pvalue.AsList(), which forces materialization of the side input.\nThe execution graph of the pipeline shows that the particular step spends most of its time reading the side input. The total amount of data read exceeds the size of the side input by far. Consequently, I conclude that the side input does not fit into the memory \/ cache of the workers even though their RAM is sufficient (using n1-highmem4 workers with 26 GB RAM).\nHow do I know how big this cache actually is? Is there a way to control its size using Beam Python SDK 2.15.0 (like there was the pipeline option --workerCacheMb=200 for the Java 1.x SDK)?\nThere is no easy way of shrinking my side input by more than 10%.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":305,"Q_Id":58099163,"Users Score":0,"Answer":"If you are using AsList, you are correct that the whole side input should be loaded into memory. It may be that your worker has enough memory available, but it just takes very long to read 1GB of data into the list. Also, the size of the data that is read depends on the encoding of it. If you can share more details about your algorithm, we can try to figure out how to write a pipeline that may run more efficiently.\n\nAnother option may be to have an external service to keep your side input - for instance, a Redis instance that you write to on one side, and read from on the other side.","Q_Score":1,"Tags":"python,google-cloud-dataflow","A_Id":58103855,"CreationDate":"2019-09-25T13:06:00.000","Title":"Dataflow Sideinputs - Worker Cache Size in SDK 2.x","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":1,"ERRORS":0,"REVIEW":0},{"Question":"I recently installed Anaconda in my Windows. 
I did that to use some packages from some specific channels required by an application that is using Python 3.5 as its scripting language.\nI adjusted my PATH variable to use Conda, pointing to the Python environment of the particular program, but now I would like to use Conda as well for a different Python installation that I have on my Windows.\nWhen installing Anaconda then it isn't asking for a Python version to be related to. So, how can I use Conda to install into the other Python installation. Both Python installations are 'physical' installations - not virtual in any way.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":114,"Q_Id":58112822,"Users Score":1,"Answer":"Uninstall the other python installation and create different conda environments, that is what conda is great at. \nUsing conda from your anaconda installation to manage packages from another, independent python installation is not possible and not very feasible.\nSomething like this could serve your needs:\n\nCreate one env for python 3.5 conda create -n py35 python=3.5\nCreate one env for some other python version you would like to use, e.g. 3.6: conda create -n py36 python=3.6\nUse conda activate py35, conda deactivate, conda activate py36 to switch between your virtual environments.","Q_Score":0,"Tags":"python,anaconda,conda","A_Id":58112997,"CreationDate":"2019-09-26T08:40:00.000","Title":"Install packages with Conda for a second Python installation","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I been learning how to use Apache-Airflow the last couple of months and wanted to see if anybody has any experience with transferring CSV files from S3 to a Mysql database in AWS(RDS). Or from my Local drive to MySQL.\nI managed to send everything to an S3 bucket to store them in the cloud using airflow.hooks.S3_hook and it works great. I used boto3 to do this.\nNow I want to push this file to a MySQL database I created in RDS, but I have no idea how to do it. 
Do I need to use the MySQL hook and add my credentials there and then write a python function?\nAlso, It doesn't have to be S3 to Mysql, I can also try from my local drive to Mysql if it's easier.\nAny help would be amazing!","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1067,"Q_Id":58119536,"Users Score":0,"Answer":"were you able to resolve the 'MySQLdb._exceptions.OperationalError: (2068, 'LOAD DATA LOCAL INFILE file request rejected due to restrictions on access' issue","Q_Score":1,"Tags":"python,mysql,amazon-s3,airflow","A_Id":70966957,"CreationDate":"2019-09-26T14:54:00.000","Title":"S3 file to Mysql AWS via Airflow","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have a dataset with the first column as date in the format: 2011-01-01 and type(data_raw['pandas_date']) gives me pandas.core.series.Series\nI want to convert the whole column into date time object so I can extract and process year\/month\/day from each row as required.\nI used pd.to_datetime(data_raw['pandas_date']) and it printed output with dtype: datetime64[ns] in the last line of the output. I assume that values were converted to datetime.\nbut when I run type(data_raw['pandas_date']) again, it still says pandas.core.series.Series and anytime I try to run .dt function on it, it gives me an error saying this is not a datetime object.\nSo, my question is - it looks like to_datetime function changed my data into datetime object, but how to I apply\/save it to the pandas_date column? I tried \ndata_raw['pandas_date'] = pd.to_datetime(data_raw['pandas_date'])\nbut this doesn't work either, I get the same result when I check the type. Sorry if this is too basic.","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":384,"Q_Id":58138314,"Users Score":0,"Answer":"type(data_raw['pandas_date']) will always return pandas.core.series.Series, because the object data_raw['pandas_date'] is of type pandas.core.series.Series. What you want is to get the dtype, so you could just do data_raw['pandas_date'].dtype.\n\ndata_raw['pandas_date'] = pd.to_datetime(data_raw['pandas_date'])\n\nThis is correct, and if you do data_raw['pandas_date'].dtype again afterwards, you will see that it is datetime[64].","Q_Score":1,"Tags":"python,pandas,datetime","A_Id":58138420,"CreationDate":"2019-09-27T16:26:00.000","Title":"Change column from Pandas date object to python datetime","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":1,"REVIEW":0},{"Question":"You have a 2005 Honda Accord with 50 miles (weight max) left in the tank. Which McDonalds locations (graph nodes) can you visit within a 50 mile radius? This is my question. \nIf you have a weighted directed acyclic graph, how can you find all the nodes that can be visited within a given weight restriction? \nI am aware of Dijkstra's algorithm but I can't seem to find any documentation of its uses outside of min-path problems. 
In my example, theres no node in particular that we want to end at, we just want to go as far as we can without going over the maximum weight. It seems like you should be able to use BFS\/DFS in order to solve this, but I cant find documentation for implementing those in graphs with edge weights (again, outside of min-path problems).","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":697,"Q_Id":58142385,"Users Score":0,"Answer":"Finding the longest path to a vertex V (a McDonald's in this case) can be accomplished using topological sort. We can start by sorting our nodes topologically, since sorting topologically will always return the source node U, before the endpoint, V, of a weighted path. Then, since we would now have access to an array in which each source vertex precedes all of its adjacent vertices, we can search through every path beginning with vertex U and ending with vertex V and set a value in an array with an index corresponding to U to the maximum edge weight we find connecting U to V. If the sum of the maximal distances exceeds 50 without reaching a McDonalds, we can backtrack and explore the second highest weight path going from U to V, and continue backtracking should we exhaust every path exiting from vertex U. Eventually we will arrive at a McDonalds, which will be the McDonalds with the maximal distance from our original source node while maintaining a total spanning distance under 50.","Q_Score":0,"Tags":"python,data-structures,graph,tree,breadth-first-search","A_Id":59133082,"CreationDate":"2019-09-28T00:05:00.000","Title":"Using BFS\/DFS To Find Path With Maximum Weight in Directed Acyclic Graph","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":1,"ERRORS":0,"REVIEW":0},{"Question":"You have a 2005 Honda Accord with 50 miles (weight max) left in the tank. Which McDonalds locations (graph nodes) can you visit within a 50 mile radius? This is my question. \nIf you have a weighted directed acyclic graph, how can you find all the nodes that can be visited within a given weight restriction? \nI am aware of Dijkstra's algorithm but I can't seem to find any documentation of its uses outside of min-path problems. In my example, theres no node in particular that we want to end at, we just want to go as far as we can without going over the maximum weight. It seems like you should be able to use BFS\/DFS in order to solve this, but I cant find documentation for implementing those in graphs with edge weights (again, outside of min-path problems).","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":697,"Q_Id":58142385,"Users Score":0,"Answer":"For this problem, you will want to run a DFS from the starting node. Recurse down the graph from each child of the starting node until a total weight of over 50 is reached. If a McDonalds is encountered along the traversal record the node reached in a list or set. By doing so, you will achieve the most efficient algorithm possible as you will not have to create a complete topological sort as the other answer to this question proposes. 
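A minimal sketch of that bounded DFS, assuming the graph is a dict mapping each node to a list of (neighbor, weight) pairs and is_mcdonalds is a caller-supplied predicate:\ndef reachable_goals(graph, start, is_mcdonalds, limit=50):\n    found = set()\n    def dfs(node, used):\n        if used > limit:\n            return  # over budget: backtrack\n        if is_mcdonalds(node):\n            found.add(node)  # reachable within the limit\n        for neighbor, weight in graph.get(node, []):\n            dfs(neighbor, used + weight)\n    dfs(start, 0)\n    return found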
Even though this algorithm still technically runs in O(ElogV) time, by recursing back on the DFS when a path distance of over 50 is reached you avoid traversing through the entire graph when not necessary.","Q_Score":0,"Tags":"python,data-structures,graph,tree,breadth-first-search","A_Id":60401949,"CreationDate":"2019-09-28T00:05:00.000","Title":"Using BFS\/DFS To Find Path With Maximum Weight in Directed Acyclic Graph","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":1,"ERRORS":0,"REVIEW":0},{"Question":"I'm starting to use Qt Designer.\nI am trying to create a game, and the first task that I want to do is to create a window where you have to input the name of the map that you want to load. If the map exists, I then switch to the main game window, and if the name of the map doesn't exist, I want to display a popup window that tells the user that the name of the map they wrote is not valid. \nI'm a bit confused with the part of showing the \"not valid\" pop-up window.\nI realized that I have two options:\n\nCreating 2 separated .ui files, and with the help of the .show() and .hide() commands show the correspoding window if the user input is invalid. \nThe other option that I'm thinking of creating both windows in the same .ui file, which seems to be a better option, but I don't really know how to work with windows that come from the same file. Should I create a separate class for each of the windows that come from the Qt Designer file? If not, how can I access both windows from the same class?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":64,"Q_Id":58159932,"Users Score":0,"Answer":"Your second option seems impossible, it would be great to share the .ui since in my years that I have worked with Qt Designer I have not been able to implement what you point out.\nAn .ui is an XML file that describes the elements and their properties that will be used to create a class that is used to fill a particular widget. So considering the above, your second option is impossible.\nThis concludes that the only viable option is its first method.","Q_Score":1,"Tags":"python,pyqt,qt-designer","A_Id":58159983,"CreationDate":"2019-09-29T23:15:00.000","Title":"How does Qt Designer work in terms of creating more than 1 dialog per file?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"So, this is for my assignment and I have to create a flight booking system. One of the requirements is that it should create 3 digit passenger code that does not start with zeros (e.g. 100 is the smallest acceptable value) and I have no idea how I can do it since I am a beginner and I just started to learn Python. I have made classes for Passenger, Flight, Seating Area so far because I just started on it today. Please help. 
Thank you.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":82,"Q_Id":58177362,"Users Score":0,"Answer":"I like list comprehension for making a list of 100 to 999:\nflights = [i for i in range(100, 1000)]\nFor the random version, there is probably a better way, but Random.randint(x, y) creates a random int, inclusive of the endpoints:\nfrom random import Random\nrand = Random()\nflight = rand.randint(100,999)\nHope this helps with your homework, but do try to understand the assignment and how the code works...lest you get wrecked on the final!","Q_Score":0,"Tags":"python","A_Id":58177528,"CreationDate":"2019-10-01T02:27:00.000","Title":"Start at 100 and count up till 999","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I want to select all values bigger than 8000 within a pandas dataframe. \nnew_df = df.loc[df['GM'] > 8000]\nHowever, it is not working. I think the problem is that the values come from an Excel file and the numbers are interpreted as strings, e.g. \"1.111,52\". Do you know how I can convert such a string to float \/ int in order to compare it properly?","AnswerCount":4,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":93,"Q_Id":58179925,"Users Score":2,"Answer":"You can inspect df.dtypes to see the type of each column. Then, if a column's type is not what you want, you can change it with df['GM'].astype(float), and new_df = df.loc[df['GM'].astype(float) > 8000] should work as you expect. Note that strings with separators such as \"1.111,52\" must be cleaned before casting.","Q_Score":1,"Tags":"python,pandas","A_Id":58179988,"CreationDate":"2019-10-01T07:26:00.000","Title":"String problem \/ Select all values > 8000 in pandas dataframe","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have a caltech101 dataset for object detection. Can we detect multiple objects in a single image using a model trained on the caltech101 dataset?\nThis dataset contains only folders (one per label), each containing images for that label.\nI have trained a model on the caltech101 dataset using keras and it predicts a single object per image. Results are satisfactory, but is it possible to detect multiple objects in a single image?\nHere is what I know regarding this:
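As a hedged illustration for the pandas answer above — a plain astype(float) would raise a ValueError on European-formatted strings like "1.111,52", so the separators have to be normalized first (the column name 'GM' is from the question; the sample values are made up):

```python
import pandas as pd

df = pd.DataFrame({"GM": ["1.111,52", "9.250,00", "750,10"]})

# Drop the thousands separator, turn the decimal comma into a dot, then cast.
df["GM"] = (df["GM"].str.replace(".", "", regex=False)
                    .str.replace(",", ".", regex=False)
                    .astype(float))

new_df = df.loc[df["GM"] > 8000]  # now a proper numeric comparison
print(new_df)
```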
for detecting multiple objects in a single image, we should have a dataset containing images and bounding boxes with the names of the objects in the images.\nThanks in advance","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":233,"Q_Id":58225543,"Users Score":0,"Answer":"The dataset can be used for detecting multiple objects, but the steps below have to be followed:\n\nThe dataset has to be annotated with bounding boxes around the objects present in the images\nAfter the annotations are done, you can use any of the object detectors to do transfer learning and train on the annotated caltech 101 dataset\n\nNote: without annotations, with just the caltech 101 dataset, detecting multiple objects in a single image is not possible","Q_Score":0,"Tags":"python,keras,deep-learning,object-detection,tensorflow-datasets","A_Id":58337978,"CreationDate":"2019-10-03T19:17:00.000","Title":"Can we detect multiple objects in image using caltech101 dataset containing label wise images?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am writing a serial data logger in Python and am wondering which data type would be best suited for this. Every few milliseconds a new value is read from the serial interface and is saved into my variable along with the current time. I don't know how long the logger is going to run, so I can't preallocate for a known size.\nIntuitively I would use a numpy array for this, but appending \/ concatenating elements creates a new array each time, from what I've read.\nSo what would be the appropriate data type to use for this?\nAlso, what would be the proper vocabulary to describe this problem?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":139,"Q_Id":58237574,"Users Score":0,"Answer":"Python doesn't have arrays as you think of them in most languages. It has \"lists\", which use the standard array syntax myList[0] but, unlike fixed-size arrays, can change size as needed. Using myList.append(newItem) you can add more data to the list without any trouble on your part.\nSince you asked for the proper vocabulary: Python lists are implemented as \"dynamic arrays\", the standard way of providing array-like storage with varying length, and that is the term to search for.","Q_Score":1,"Tags":"python,types,data-acquisition","A_Id":58237688,"CreationDate":"2019-10-04T13:40:00.000","Title":"Data type to save expanding data for data logging in Python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"Once you commit in pycharm it takes you to a second window to go through with the push. But if you only hit commit and not commit\/push, then how do you bring up the push option? You can't do another commit unless changes are made.","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":57,"Q_Id":58242568,"Users Score":2,"Answer":"In the upper menu: [VCS] -> [Git...] -> [Push]
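A minimal hedged sketch of the append-as-you-go logging that the data-logger answer above describes (the serial read is replaced by dummy values, since no device is assumed here):

```python
import time

log = []  # a Python list grows as needed; no preallocation required

def record(value):
    log.append((time.time(), value))  # store (timestamp, reading) tuples

record(3.14)  # stand-in for a value read from the serial interface
record(2.71)
print(log)
```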
","Q_Score":0,"Tags":"python,pycharm,push,commit","A_Id":58242615,"CreationDate":"2019-10-04T20:01:00.000","Title":"How do you push in pycharm if the commit was already done?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"Trying to run the python-telegram-bot library through Jupyter Notebook I get the error in the title. I tried many ways to reinstall it, but none of the answers on any forums helped me. What could the mistake be, and how do I avoid it while installing?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":942,"Q_Id":58259708,"Users Score":1,"Answer":"Do you have a directory named \"telegram\"? If you do, rename your directory and try again to prevent an import conflict.\nGood luck :)","Q_Score":0,"Tags":"telegram-bot,python-telegram-bot","A_Id":58888596,"CreationDate":"2019-10-06T17:33:00.000","Title":"ModuleNotFoundError: No module named 'telegram'","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":1,"REVIEW":0},{"Question":"I am writing a slack bot, and I am using argparse to parse the arguments sent into the slackbot, but I am trying to figure out how to get the help message string so I can send it back to the user via the slack bot. \nI know that ArgumentParser has a print_help() method, but that prints to the console and I need a way to get the string itself.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":84,"Q_Id":58276942,"Users Score":1,"Answer":"I just found out that there's a method called format_help() that generates that help string","Q_Score":0,"Tags":"python,python-3.x,argparse","A_Id":58276991,"CreationDate":"2019-10-07T20:48:00.000","Title":"argparse.print_help() ArgumentParser message string","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"There will be an unordered_map in a C++ DLL containing some 'vectors' mapped to their 'names'. For each of these 'names', the Python code will keep collecting data from a web server every 5 seconds and fill the vectors with it.\nIs such a DLL possible? If so, how to do it?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":62,"Q_Id":58277885,"Users Score":0,"Answer":"You can make the Python code into an executable. Run the executable file from the DLL as a separate process and communicate with it via a TCP localhost socket - or some other Windows mechanism that allows sharing data between different processes.\nThat's a slow mess, I agree, but it works.\nYou can also embed a Python interpreter and run the script in the DLL itself, I suppose.
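For the argparse answer above, a minimal sketch of format_help() (the parser name and argument are made-up placeholders):

```python
import argparse

parser = argparse.ArgumentParser(prog="slackbot", description="Demo parser")
parser.add_argument("--channel", help="channel to post the reply to")

help_text = parser.format_help()  # same text that print_help() writes to stdout
# help_text can now be sent back to the user through the bot
print(help_text)
```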
","Q_Score":0,"Tags":"python,c++","A_Id":58279737,"CreationDate":"2019-10-07T22:25:00.000","Title":"Is it possible to have a c++ dll run a python program in background and have it populate a map of vectors? If so, how?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I'm having trouble connecting the mathematical concept of spline interpolation with the application of a spline filter in Python. My very basic understanding of spline interpolation is that it's fitting the data in a piece-wise fashion, and the piece-wise polynomials fitted are called splines. But its applications in image processing involve pre-filtering the image and then performing interpolation, which I'm having trouble understanding.\nTo give an example, I want to interpolate an image using scipy.ndimage.map_coordinates(input, coordinates, prefilter=True), and the keyword prefilter according to the documentation:\n\nDetermines if the input array is prefiltered with spline_filter before interpolation \n\nAnd the documentation for scipy.ndimage.interpolation.spline_filter simply says the input is filtered by a spline filter. So what exactly is a spline filter, and how does it alter the input data to allow spline interpolation?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":389,"Q_Id":58278604,"Users Score":1,"Answer":"I'm guessing a bit here. In order to calculate a 2nd order spline, you need the 1st derivative of the data. To calculate a 3rd order spline, you need the second derivative. I've not implemented an interpolation engine beyond 3rd order, but I suppose the 4th and 5th order splines will require at least the 3rd and 4th derivatives.\nRather than recalculating these derivatives every time you want to perform an interpolation, it is best to calculate them just once. My guess is that spline_filter is doing this pre-calculation of the derivatives, which then get used later for the interpolation calculations.","Q_Score":2,"Tags":"python,interpolation,spline","A_Id":58283603,"CreationDate":"2019-10-08T00:10:00.000","Title":"What is the difference between spline filtering and spline interpolation?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I need to know how to make a highlighted label (or small box) appear when the mouse is over a widget, like when you are using a browser and put the mouse over the (reload\/back\/etc...)
button: a small box appears and tells you what the button does.\nI want that for any widget, not only widgets on the toolbar","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":159,"Q_Id":58283157,"Users Score":-1,"Answer":"As the comment of @ekhumoro says,\nsetToolTip is the solution","Q_Score":0,"Tags":"python,pyqt,pyqt5,python-3.7","A_Id":58285881,"CreationDate":"2019-10-08T08:59:00.000","Title":"How to show a highlighted label when The mouse is on widget","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am having a hard time installing a Python lib called python3-saml.\nTo narrow down the problem I created a very simple application on ibm-cloud and I can deploy it without any problem, but when I add the lib python3-saml as a requirement\nI get an exception saying:\npkgconfig.pkgconfig.PackageNotFoundError: xmlsec1 not found\nThe above was a deployment on ibm-cloud, but I also tried to install the same Python lib locally and got the same error message, even though locally I can see that I have xmlsec1 installed.\nAny help on how to successfully deploy it on ibm-cloud using python3-saml?\nThanks in advance","AnswerCount":2,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":448,"Q_Id":58288228,"Users Score":2,"Answer":"I had a similar issue and I had to install the \"xmlsec1-devel\" package on my CentOS system before installing the Python package.","Q_Score":1,"Tags":"python,cloud,ibm-cloud,xmlsec1","A_Id":59429632,"CreationDate":"2019-10-08T14:18:00.000","Title":"xmlsec1 not found on ibm-cloud deployment","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":1,"REVIEW":0},{"Question":"I'm new to Python and new on Stack Overflow, so please let me know if this question should be posted somewhere else or if you need any other info :). But I hope someone can help me out with what seems to be a rather simple mistake...\nI'm working with Python in Jupyter Notebook and am trying to create my own module with some self-made functions\/loops that I often use. However, when I try to use some of the functions from my module, I get an error related to the import of the built-in module that is used in my own module.\nThe way I created my own module was by:\n\ncreating different blocks of code in a notebook and downloading it\nas a 'Functions.py' file.\nsaving this Functions.py file in the folder that I'm currently working in (with another notebook file)\nin my current notebook file (where I'm doing my analysis), I import my module with 'import Functions'.\n\nSo far, the import of my own module seems to work. However, some of my self-made functions use functions from built-in modules. E.g. my plot_lines() function uses math.ceil() somewhere in the code. Therefore, I imported 'math' in my analysis notebook as well.
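For the tooltip answer above, a minimal hedged PyQt5 sketch (the widget and tooltip text are illustrative; setToolTip is available on any QWidget, which is what the question asked for):

```python
import sys
from PyQt5.QtWidgets import QApplication, QPushButton

app = QApplication(sys.argv)
button = QPushButton("Reload")
button.setToolTip("Reload the current page")  # the small box shown on hover
button.show()
sys.exit(app.exec_())
```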
But when I try to run the function plot_lines() in my notebook, I get the error \"NameError: name 'math' is not defined\".\nI tried to solve this error by adding the code 'import math' to the function in my module as well, but this did not resolve the issue. \nSo my question is: how can I use functions from built-in Python modules in my own modules?\nThanks so much in advance for any help!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":99,"Q_Id":58320255,"Users Score":0,"Answer":"If anyone encounters the same issue:\nadd 'import math' to your own module. \nMake sure that you actually reload your adjusted module, e.g. by restarting your kernel!","Q_Score":0,"Tags":"python,function,math,module,jupyter-notebook","A_Id":58321054,"CreationDate":"2019-10-10T09:57:00.000","Title":"Using a function from a built-in module in your own module - Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":1,"DOCUMENTATION":0,"ERRORS":1,"REVIEW":0},{"Question":"I use the rawpy module in Python to post-process raw images; however, no matter how I set the params, the output is different from the default RGB produced by the in-camera ISP. Does anyone know how to handle this?\nI have tried the following ways:\nDefault:\noutput = raw.postprocess()\nUse camera white balance:\noutput = raw.postprocess(use_camera_wb=True)\nNo auto bright:\noutput = raw.postprocess(use_camera_wb=True, no_auto_bright=True)\nNone of these could reproduce the RGB image that the camera ISP outputs.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1074,"Q_Id":58325514,"Users Score":0,"Answer":"The dcraw\/libraw\/rawpy stack is based on publicly available (reverse-engineered) documentation of the various raw formats, i.e., it's not using any proprietary libraries provided by the camera vendors. As such, it can only make an educated guess at what the original camera ISP would do with any given image. Even if you have a supposedly vendor-neutral DNG file, chances are the camera is not exporting everything there in full detail.\nSo, in general, you won't be able to get the same output.","Q_Score":1,"Tags":"python,dcraw","A_Id":58499291,"CreationDate":"2019-10-10T14:40:00.000","Title":"how to post-process raw images using rawpy to have the same effect with default output like ISP in camera?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am trying to improve mobilenet_v2's detection of boats with about 400 images I have annotated myself, but I keep getting an underfitted model when I freeze the graphs (detections are random; it does not actually seem to be detecting, rather just randomly placing an inference).
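For the module-reload advice above, a hedged sketch — restarting the kernel always works, but importlib.reload is a lighter alternative (the module name Functions is the one from the question):

```python
import importlib
import Functions  # your own module, which itself must contain `import math`

importlib.reload(Functions)  # pick up edits without restarting the kernel
Functions.plot_lines  # the reloaded function now resolves math.ceil correctly
```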
I performed 20,000 steps and had a loss of 2.3.\nI was wondering how TF knows that what I am training it on with my custom label map\nID:1\nName: 'boat'\nis the same as what it regards as a boat (with an ID of 9) in the mscoco label map.\nOr whether, by using an ID of 1, I am training the model's idea of what a person looks like to be a boat?\nThank you in advance for any advice.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":51,"Q_Id":58332687,"Users Score":0,"Answer":"The model works with the category labels (numbers) you give it. The string \"boat\" is only a translation for human convenience in reading the output.\nIf you have a model that has learned to identify a set of 40 images as class 9, then giving it a very similar image that you insist is class 1 will confuse it. Doing so prompts the model to elevate the importance of differences between the class-9 boats and the new class-1 boats. If there are no significant differences, then the change in weights will find unintended features that you don't care about.\nThe result is a model that is much less effective.","Q_Score":0,"Tags":"python,tensorflow,deep-learning,conv-neural-network,object-detection","A_Id":58345624,"CreationDate":"2019-10-11T00:23:00.000","Title":"How does TF know what object you are finetuning for","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am trying to improve mobilenet_v2's detection of boats with about 400 images I have annotated myself, but I keep getting an underfitted model when I freeze the graphs (detections are random; it does not actually seem to be detecting, rather just randomly placing an inference).
I performed 20,000 steps and had a loss of 2.3.\nI was wondering how TF knows that what I am training it on with my custom label map\nID:1\nName: 'boat'\nis the same as what it regards as a boat (with an ID of 9) in the mscoco label map.\nOr whether, by using an ID of 1, I am training the model's idea of what a person looks like to be a boat?\nThank you in advance for any advice.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":51,"Q_Id":58332687,"Users Score":0,"Answer":"So, I managed to figure out the issue.\nWe created the annotation tool from scratch, and the issue that was causing underfitting whenever we trained - regardless of the number of steps or the various fixes I tried to implement - was that when creating bounding boxes there was no check that the xmin and ymin coordinates were less than the xmax and ymax. I did not realize this would be such a large issue, but after creating a very simple check to ensure the coordinates are correct, training ran smoothly.","Q_Score":0,"Tags":"python,tensorflow,deep-learning,conv-neural-network,object-detection","A_Id":58513059,"CreationDate":"2019-10-11T00:23:00.000","Title":"How does TF know what object you are finetuning for","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I'm working with the Odoo 11 community version and currently I have a problem.\nThis is my explanation of the problem:\nIn the company I have many workcenters, and for each workcenter:\n1) I want to create a separate warehouse for each workcenter\nor\n2) Just 1 warehouse but different storage areas for each workcenter\n(currently I chose the second option), and each workcenter has its own operation type: Production.\nNow my problem starts. There are manufacturing orders, and each manufacturing order has a few workorders. I want it so that when a workorder is started, the products are moved to this workcenter's warehouse\/storage area and stay there until the next workorder using a different workcenter starts, at which point the products are moved to the next workcenter's warehouse\/storage area.\nCurrently I can only set it up so that after creating a new sale order, the production order is sent to the first workcenter's storage area and stays there until all workorders in the production order are finished. I don't know how to trigger move routes between workcenters' storage areas for products that are still in the production stage.
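As a side note on the bounding-box fix described in the answer above, a hedged sketch of such a validation check (the box format and values are made up):

```python
# Hypothetical annotation-tool sanity check: a box is only valid when its
# min coordinates are strictly below its max coordinates.
def is_valid_box(xmin, ymin, xmax, ymax):
    return xmin < xmax and ymin < ymax

boxes = [(10, 20, 110, 220), (50, 60, 40, 90)]  # the second box is inverted
valid = [b for b in boxes if is_valid_box(*b)]
print(valid)  # [(10, 20, 110, 220)]
```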
Can I do this from the Odoo GUI, or do I need to do this somewhere in code?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":41,"Q_Id":58332889,"Users Score":0,"Answer":"Ok, I found my answer: to accomplish what I wanted I need to use Manufacturing with a multi-level Bill of Materials. It works in such a way that a 3-step manufacturing order is theoretically divided into 3 single-step manufacturing orders; for example, orders 2 and 3 (which before were steps 2 and 3) use as components the products finished in the previous step, which is now an individual order.","Q_Score":0,"Tags":"python,odoo,odoo-11","A_Id":58356611,"CreationDate":"2019-10-11T00:57:00.000","Title":"Warehouse routes between each started workorder in production order","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"Good day folks.\nRecently, I made a Python-based web crawler that scrapes some news articles, and a Django web page that collects a search title and URL from users.\nBut I do not know how to connect the Python-based crawler and the Django web page together, so I am looking for any good resources that I can reference.\nIf anyone knows a resource that I can reference,\ncould you share it?\nThanks","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":101,"Q_Id":58333930,"Users Score":0,"Answer":"There are numerous ways you could do this. \nYou could directly integrate them together. Both use Python, so the scraper would just be written as part of Django. \nYou could have the scraper feed the data to a database and have Django read from that database. \nYou could build an API from the scraper to your Django implementation. \nThere are quite a few options for you depending on what you need.","Q_Score":0,"Tags":"python,django,web-crawler","A_Id":58335189,"CreationDate":"2019-10-11T03:40:00.000","Title":"How to Connect Django with Python based Crawler machine?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I was wondering if it is possible for me to use the Django code I have for my website and somehow use that in a mobile app, in a framework such as, for example, Flutter.\nSo is it possible to use the Django backend I have right now in a mobile app?\nSo like the models, views etc...","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":16475,"Q_Id":58337572,"Users Score":15,"Answer":"Yes. There are a couple of ways you could do it:\n\nUse the Django Rest Framework to serve as the backend for something like React Native. \nBuild a traditional website for mobile and then run it through a tool like PhoneGap.
\nUse the standard Android app tools and use Django to serve and process data through API requests.","Q_Score":7,"Tags":"python,android,django,django-models,mobile","A_Id":58337906,"CreationDate":"2019-10-11T08:52:00.000","Title":"Is it possible to make a mobile app in Django?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"Can anyone please let me know how to simulate a mouse hover event using Robot Framework on a desktop application, i.e. if I hover over a specific item or object, the submenus are listed and I need to select one of the submenu items.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1537,"Q_Id":58338018,"Users Score":0,"Answer":"It depends on the automation library that you are using to interact with the desktop application. \nThe normal approach is the following: \n\nFind the element that you want to hover on (by ID or some other unique locator)\nGet the position attribute of the element (X,Y)\nMove your mouse to that position. \n\nIn this way you don't \"hardcode\" the x,y position, which would make your test case flaky.","Q_Score":0,"Tags":"python,robotframework","A_Id":58340413,"CreationDate":"2019-10-11T09:16:00.000","Title":"how to simulate mouse hover in robot framework on a desktop application","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I understand how it works when you have one column output but could not understand how it is done for 4 column outputs.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":49,"Q_Id":58342612,"Users Score":1,"Answer":"It\u2019s not advised to calculate accuracy for continuous values. For such values you would want to calculate a measure of how close the predicted values are to the true values. This task of predicting continuous values is known as regression, and generally the R-squared value is used to measure the performance of the model.\n(1) If the predicted output consists of continuous values then mean square error is the right option. \nFor example:\nPredicted o\/p vector1-----> [2,4,8] and\nActual o\/p vector1 -------> [2,3.5,6]\n1. Mean square error is ((2-2)^2+(4-3.5)^2+(8-6)^2)\/3 (its square root is the RMSE)\n2. Mean absolute error, etc.\n(2) If the output consists of classes then accuracy is the right metric to decide on model performance.\nPredicted o\/p vector1-----> [0,1,1]\nActual o\/p vector1 -------> [1,0,1]\nThen the accuracy calculation can be done with the following:\n1. Classification Accuracy\n2. Logarithmic Loss\n3. Confusion Matrix\n4. Area under Curve\n5. F1 Score","Q_Score":0,"Tags":"python-3.x,tensorflow,neural-network,conv-neural-network,recurrent-neural-network","A_Id":60562288,"CreationDate":"2019-10-11T13:46:00.000","Title":"I have a network with 3 features and 4 vector outputs.
How is MSE and accuracy metric calculated?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I tried type(+) hoping to learn more about how this operator is represented in Python, but I got SyntaxError: invalid syntax.\nMy main problem is to turn a string representing an operation, \"3+4\", into the real operation to be computed in Python (so as to get an int as a return: 7).\nI am also trying to avoid easy solutions requiring the os library if possible.","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":56,"Q_Id":58342835,"Users Score":7,"Answer":"Operators don't really have types, as they aren't values. They are just syntax whose implementation is often defined by a magic method (e.g., + is defined by the appropriate type's __add__ method).\nYou have to parse your string:\n\nFirst, break it down into tokens: ['3', '+', '4']\nThen, parse the token string into an abstract syntax tree (i.e., something that stores the idea of + having 3 and 4 as its operands).\nFinally, evaluate the AST by applying functions stored at a node to the values stored in its children.","Q_Score":0,"Tags":"python,python-3.x,python-2.7","A_Id":58342863,"CreationDate":"2019-10-11T13:57:00.000","Title":"What are the types of Python operators?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I want to write a program to simulate 5-axis cnc gcode with vpython, and I need to rotate the trail of the object that's moving. Any idea how that can be done?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":36,"Q_Id":58356337,"Users Score":0,"Answer":"It's difficult to know exactly what you need, but instead of using \"make_trail=True\", simply create a curve object to which you append points.
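For the operator question above, a hedged sketch of the tokenize-parse-evaluate idea using the standard ast module (Python 3.8+ for ast.Constant; the operator whitelist is an illustrative choice):

```python
import ast
import operator

# Map AST operator node types to the functions that implement them.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def evaluate(expr):
    def walk(node):
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError("unsupported expression")
    return walk(ast.parse(expr, mode="eval").body)

print(evaluate("3+4"))  # 7
```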
A curve object named \"c\" can be rotated using the usual way to rotate an object: c.rotate(.....).","Q_Score":0,"Tags":"python-3.x,vpython","A_Id":58893552,"CreationDate":"2019-10-12T16:46:00.000","Title":"How to rotate a object trail in vpython?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am new to deep learning. I was wondering if there is a way to extract the parts of an image containing the different labels and then feed those parts to different models for further processing?\nFor example, consider the dog vs cat classification.\nSuppose the image contains both a cat and a dog.\nWe successfully classify that the image contains both, but how can we classify the breeds of the dog and cat present?\nThe approach I thought of was extracting\/cutting out the parts of the image containing the dog and cat, and then feeding those parts to the respective dog breed classification model and cat breed classification model separately.\nBut I have no clue how to do this.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":170,"Q_Id":58362763,"Users Score":1,"Answer":"Your thinking is correct; you can have multiple pipelines based on the number of classes.\n\nTraining:\nThe main model will be an object detection and localization model like Faster RCNN, YOLO, SSD etc. trained to classify at a high level, like cat and dog. This pipeline provides you bounding box details (left, bottom, right, top) along with the labels.\nSub models will be multiple models trained at a lower level, for example a model that is trained to classify breed. This can be done by using models like vgg, resnet, inception etc. You can utilize transfer learning here.\nInference: Pass the image through the main model, crop out the detected objects using the bounding box details (left, bottom, right, top) and, based on the label information, feed each crop to the appropriate sub model and extract the results.","Q_Score":0,"Tags":"python,tensorflow,machine-learning,keras,deep-learning","A_Id":58363469,"CreationDate":"2019-10-13T10:37:00.000","Title":"How to extract\/cut out parts of images classified by the model?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":1,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I'm new to dask and trying to use it in our cluster, which uses the NC job scheduler (from Runtime Design Automation, similar to LSF). I'm trying to create an NCCluster class similar to LSFCluster to keep things simple.
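For the crop-and-classify inference step above, a hedged sketch using Pillow (the file name and box coordinates are placeholders):

```python
from PIL import Image

image = Image.open("scene.jpg")  # hypothetical input image
# Detector output converted to Pillow's (left, upper, right, lower) order.
box = (40, 60, 260, 300)
dog_patch = image.crop(box)
dog_patch = dog_patch.resize((224, 224))  # size expected by the sub model
# dog_patch can now be fed to the breed classification sub model.
```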
\nWhat are the steps involved in creating a job scheduler for custom clusters?\nIs there any other way to interface dask to custom clusters without using JobQueueCluster?\nI could find info on how to use the LSFCluster\/PBSCluster\/..., but couldn't find much information on creating one for a different HPC.\nAny links to material\/examples\/docs will help.\nThanks","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":60,"Q_Id":58364733,"Users Score":0,"Answer":"Got it working after going through the source code.\nTips for anyone trying:\n\nCreate a custom Cluster & Job class similar to LSFCluster & LSFJob.\nOverride the following\n\n\nsubmit_command\ncancel_command\nconfig_name (you'll have to define it in the jobqueue.yaml)\nDepending on the cluster, you may need to override the _submit_job, _job_id_from_submit_output and other functions.\n\n\nHope this helps.","Q_Score":0,"Tags":"python,python-3.x,dask,dask-distributed","A_Id":58399685,"CreationDate":"2019-10-13T14:56:00.000","Title":"Creating dask_jobqueue schedulers to launch on a custom HPC","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":1,"ERRORS":0,"REVIEW":0},{"Question":"I am trying to get some code working on Mac, and to do that I have been using an anaconda virtual environment. I have all of the dependencies loaded as well as my script, but I don't know how to execute my file in the virtual environment on Mac. The Python file is on my desktop, so please let me know how to configure the path if I need to. Any help?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":54,"Q_Id":58368638,"Users Score":1,"Answer":"If you have a terminal open and are in your virtual environment, then simply invoking the script should run it in your environment.","Q_Score":0,"Tags":"python,python-3.x,macos,anaconda","A_Id":58368730,"CreationDate":"2019-10-13T23:43:00.000","Title":"How to run a python script using an anaconda virtual environment on mac","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I am trying to deploy a Python webapp on AWS that takes a USERNAME and PASSWORD as input from a user, inputs them into a template Python file, and logs into their Instagram account to manage it automatically. \nIn Depth Explanation:\nI am relatively new to AWS and am really trying to create an elaborate project so I can learn. I was thinking of somehow receiving the user input on a simple web page with two text boxes to input their Instagram account info (username & password). Upon receiving this info, my instinct tells me that I could somehow use Lambda to quickly inject it into specific parts of an already existing template.py file, which will then be taken and combined with the rest of the source files to run the code. These source files could be stored somewhere else on AWS (S3?). I was thinking of running this using Elastic Beanstalk. \nI know this is awfully involved, but my main issue is this whole dynamic injection thing. Any ideas would be so greatly appreciated.
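For the dask-jobqueue tips above, a hedged skeleton of what such a pair of classes might look like (the module path follows recent dask_jobqueue versions, and the NC submit/cancel commands are pure placeholders — this is a sketch, not a tested implementation):

```python
from dask_jobqueue.core import Job, JobQueueCluster

class NCJob(Job):
    # Placeholder CLI commands for the NC scheduler; replace with the real ones.
    submit_command = "nc run"
    cancel_command = "nc stop"
    config_name = "nc"  # must also get a matching section in jobqueue.yaml

class NCCluster(JobQueueCluster):
    job_cls = NCJob
    config_name = "nc"
```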
In the meantime, I will be working on it.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":36,"Q_Id":58380298,"Users Score":0,"Answer":"One way in which you could approach this would be to have a hosted website in a static S3 bucket. Then a submitted request goes to an API Gateway POST endpoint, which could then trigger a Lambda (in any language of choice), passing in the two values.\nThese would then be passed into the event object of the Lambda; you could store them inside Secrets Manager, using the username as the key name so you can reference it later on. Storing them inside a file inside a Lambda is not a good approach to take. \nUsing this approach you'd learn some key services:\n\nS3 + static website hosting\nAPI Gateway \nLambdas \nSecrets Manager\n\nYou could also add aliases\/versions to the Lambda, such as dev or production, and the same concept applies to API Gateways with stages to emulate doing a deployment.\nHowever, there are hundreds of different ways to design it, and this is only one of them!","Q_Score":2,"Tags":"python,amazon-web-services,amazon-s3,amazon-ec2,aws-lambda","A_Id":58380658,"CreationDate":"2019-10-14T15:59:00.000","Title":"Dynamically Injecting User Input Values into Python code on AWS?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1,"API_CHANGE":0,"API_USAGE":1,"CONCEPTUAL":0,"DISCREPANCY":0,"DOCUMENTATION":0,"ERRORS":0,"REVIEW":0},{"Question":"I have this html code:\n