[{"Question":"I'm creating a Python application with dash\/plotly which works with a dataframe with an unknown number of columns, and I need to make some visualization which depends on this number. For example, I need to make a table which contains the same number of columns as my dataframe, and also I need the same number of graphs. I tried to do it by creating elements in loops, but it didn't work. What shall I do?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":15,"Q_Id":65113370,"Users Score":0,"Answer":"Start with an empty table, and an empty container. Run your callback to create the table as you need it, and update its data and columns properties, and just output all the graphs as children of the container.","Q_Score":0,"Tags":"python,plotly-dash","A_Id":65117887,"CreationDate":"2020-12-02T17:42:00.000","Title":"Problem with creating elements in Dash in loops","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a local DB set up for which I create object-relational mapping with SQLAlchemy objects.\nEverything was fine until I changed the schema of the DB - including adding a new column to one of the tables. Now I keep seeing:\nsqlalchemy.exc.NoReferencedColumnError: Could not initialize target column for ForeignKey 'ModelFit.id' on table 'ModelPrice': table 'ModelFit' has no column named 'id'\nwhere 'id' is the SQLAlchemy Column object of the ModelFit table's \"Id\" column.\nStraight SQL queries on the new DB execute fine; the only problem is initializing this mapping.\nI saw a similar question from someone saying they figured it out by \"removing the .db file from the project and ran it again\", but I don't have any such file. 
I don't even use flask or anything to create the DB; I did it straight in the local DB using SQL.\nAny help or insight on what is happening here? Or would more info be helpful?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":185,"Q_Id":65113773,"Users Score":0,"Answer":"Figured it out eventually - it seems like the order of columns had to be the same.\nRunning ALTER TABLE on the schema to add an \"Id\" column added it to the right. My SQLAlchemy class had the \"id\" column listed as the first one.\nI can't confirm the reason why because I haven't looked at the part of the docs that would confirm this, but anecdotally, this was the case.\nSo I basically had to delete the table and re-make it with the desired ordering.","Q_Score":0,"Tags":"python,orm,sqlalchemy,flask-sqlalchemy,sqlalchemy-migrate","A_Id":65133296,"CreationDate":"2020-12-02T18:11:00.000","Title":"SQLAlchemy objects don't recognize new columns after DB schema was changed","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"How do I remove\/replace \\n characters from a Camelot table?\nI'm trying to parse a PDF file into a table, which contains cells with multiple line breaks.\nNot sure if it's a bug, but the sequence is messed up because of those characters.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":375,"Q_Id":65113842,"Users Score":0,"Answer":"Bit late to the party here but, since Camelot returns a pandas dataframe you can use\ntables[0].df = tables[0].df.replace('\\\\n',' ', regex=True)","Q_Score":0,"Tags":"python,dataframe,pdf","A_Id":71571887,"CreationDate":"2020-12-02T18:17:00.000","Title":"Python Camelot - How to remove line breaks \\n from table","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and 
APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm having trouble deploying a FastAPI app on cpanel with Passenger","AnswerCount":2,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":1429,"Q_Id":65114261,"Users Score":2,"Answer":"I applied the python package a2wsgi to resolve the problem, but it's not working.","Q_Score":2,"Tags":"python,cpanel,passenger,fastapi","A_Id":65317959,"CreationDate":"2020-12-02T18:47:00.000","Title":"Is there a way to deploy a fastapi app on cpanel?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I used the TextRank algorithm for ranking sentences of some articles. The total number of sentences in the articles ranges from 10 to 71. I wanted to know if there is any way of determining the value of k for selecting the top k ranked sentences as the summary. Or is that fixed to be some number?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":30,"Q_Id":65114269,"Users Score":0,"Answer":"That's probably mostly determined by how large of a summarization you need. In other words, if the summary must fit into some constraint (e.g., 400 characters or less; at least 50 words) then what's an appropriate setting of k to satisfy the constraints? Relatively speaking, it's similar to hyperparameter optimization in ML.\nAlso, the quality will tend to be affected. Too small of a k yields results that probably aren't effective. FWIW, I try to use k >= 3 generally. 
Too large of a k and the results become less readable.","Q_Score":1,"Tags":"python-3.x,summarization,textrank","A_Id":65118298,"CreationDate":"2020-12-02T18:47:00.000","Title":"Is there any way of determining the value of k for selecting top k sentences in text summarization","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I think I'm not using the upgraded seaborn. I ran !pip install seaborn --upgrade to version 0.11.0.\nHowever, when I run import seaborn as sns; sns.__version__ I still get 0.10.1.\nI use Mac OS.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":134,"Q_Id":65119212,"Users Score":0,"Answer":"I restarted the kernel and the seaborn version got updated.","Q_Score":0,"Tags":"python,seaborn","A_Id":65135974,"CreationDate":"2020-12-03T02:52:00.000","Title":"upgraded seaborn but when running sns.__version__ the version is still the same","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to execute a group of copy commands in redshift from a lambda function where I copy around 100 GB of files from S3 to a table in redshift. 
I cannot use the Redshift Data API for this (because I cannot have a secret arn for the cluster now, and getting temporary credentials is also not ideal in my case).\nI have tried using the psycopg2 library, but as soon as the lambda function times out, the execution stops too.\nIs there any way I can asynchronously pass the queries to redshift so that when the lambda function times out, the query still keeps executing in redshift?","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":282,"Q_Id":65119975,"Users Score":2,"Answer":"Sorry, no. This is fundamental to ODBC \/ JDBC connections - if the connection is dropped the transaction will be aborted. I went so far a few years back as to have a small EC2 repeat Lambda SQL to Redshift so that Lambda could end w\/o closing the connection. Worked great, but having a server in the center of a server-less solution wasn\u2019t quite right. You could go down this route if you like.\nThis is why the Redshift Data API is a big step forward.","Q_Score":0,"Tags":"python,postgresql,amazon-web-services,aws-lambda,amazon-redshift","A_Id":65120291,"CreationDate":"2020-12-03T04:35:00.000","Title":"How to asynchronously pass redshift query from lambda function?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to cluster some GPS points with the DBSCAN algorithm, and I select eps: 20 m and min_samples: 4.\nThe algorithm works fine on my data, however I need to also consider the time of each point in the clustering process.\nSuppose there are 10 points in a cluster; however, 4 of them are between 8 am and 8:30 am and the others between 11 am and 11:15 am. 
What I want is for the algorithm to detect 2 clusters here: one with the 8 am points and one with the 11 am points.\nI mean, I need another criterion for my DBSCAN algorithm besides eps and min_samples.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":172,"Q_Id":65121781,"Users Score":0,"Answer":"Use Generalized DBSCAN.\nThen define neighbors as being both\n\nwithin distance maxNeighborDistance km\nwithin maxNeighborTime minutes","Q_Score":0,"Tags":"python,dbscan","A_Id":65634893,"CreationDate":"2020-12-03T07:48:00.000","Title":"Considering time in DBSCAN","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I use Pyinstaller on Pop!_OS for a python script, but it creates an x-sharedlib file that I can only open through the terminal. I tried to rename it to exe and run it, but nothing happens. How can I make it open by double click? Thank you!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":106,"Q_Id":65123294,"Users Score":0,"Answer":"Found the solution. I renamed it to .sh, changed Nautilus preferences to run executable text files, and it runs normally now.","Q_Score":0,"Tags":"python-3.x,linux","A_Id":65144243,"CreationDate":"2020-12-03T09:37:00.000","Title":"Pyinstaller creates x-sharedlib file in Pop!_OS","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"%%time\ntrain_data = dt.fread('..\/input\/prediction\/train.csv').to_pandas()\nThe output is an error saying UsageError: Line magic function %%time not found. 
Please suggest an approach.","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":19737,"Q_Id":65124633,"Users Score":1,"Answer":"%%time was the first thing in the cell, and after going through the documentation I found %%time is now updated with %time.","Q_Score":16,"Tags":"python,dataframe,magic-function,magicline","A_Id":69712765,"CreationDate":"2020-12-03T10:58:00.000","Title":"Line magic function `%%time` not found","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am training a computer vision model.\nI divide the images into 3 datasets: training, validation and testing.\nSo that I always get the same images in training, validation and testing, I use the random_state parameter of the train_test_split function.\nHowever, I have a problem:\nI am training and testing on two different computers (linux and windows).\nI thought that the results for a given random state would be the same, but they aren't.\nIs there a way that I can get the same results on both computers?\nI can't divide the images into 3 folders (training, validation and testing) since I want to change the test size and validation size during different experiments.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":59,"Q_Id":65126722,"Users Score":0,"Answer":"On a practical note, training of the models may require\nthe usage of a distant computer or server (e.g. Microsoft\nAzure, Google Colaboratory etc.) and it is important to be\naware that random seeds vary between different python versions and operating systems.\nThus, when dividing the original dataset into training, validation and testing datasets,\nthe usage of splitting functions with random seeds should be avoided, as it could lead to overlapping testing and training\ndatasets. 
A way to avoid this is by keeping separate .csv\nfiles with the images to be used for training, validation, or\ntesting.","Q_Score":0,"Tags":"python,random,scikit-learn","A_Id":71892581,"CreationDate":"2020-12-03T13:15:00.000","Title":"sklearn.model_selection.train_test_split random state","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"So I have created an automation bot with Selenium and Python to do some stuff for me on the internet. After long and gruelling coding sessions, days and nights of working on this project, I have finally completed it, only to be randomly greeted with an Error 1015: \"You are being rate limited\".\nI understand this is to prevent DDoS attacks, but it is a major blow.\nI have contacted the website to resolve the matter, but to no avail. However, the third-party security software they use says that the website can grant my IP an exclusion from rate limiting.\nSo I was wondering, is there any other way to bypass this, maybe from a coding perspective?\nI don't think stuff like clearing cookies will resolve anything. Or will it, as it is my specific IP address that they are blocking?\nNote:\nThe terms and conditions of the website I am running my bot on don't say you can't use automation software on it, but they don't say you can either.\nI don't mind coding some more to prevent random access denials, which I think last for 24 hours; these can be detrimental, as the final stage of this build is to have my program run daily for long periods of time.\nDo you think I could communicate with the third-party security company to ask them to ask the website to grant me access? I have already tried resolving the matter with the website. All they said was that A. On their side it says I am fine\nB. 
The problem is most likely on my side: \"Maybe some malicious software is trying to access our website\", which... malicious, no, but a bot, yes. That's what made me think maybe it would be better if I resolved the matter myself.\nDo you think I may have to implement wait times between processes or something? I'm stuck.\nThanks for any help. And it's a single bot!","AnswerCount":3,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":6688,"Q_Id":65128879,"Users Score":1,"Answer":"I see some possibilities for you here:\n\nIntroduce wait time between requests to the site\nReduce the requests you make\nExtend your bot to detect when it hits the limit and change your IP address (e.g. by restarting your router)\n\nThe last one is the least preferable, I would assume, and also the most time-consuming one.","Q_Score":1,"Tags":"python-3.x,selenium,selenium-webdriver,cloudflare,rate-limiting","A_Id":65128935,"CreationDate":"2020-12-03T15:23:00.000","Title":"How to bypass being rate limited (HTTP Error 1015) using Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a problem. I made an application that can detect the heartbeat from the face. It functions correctly if I use VideoCapture(0) (therefore using the webcam of the machine), but the results are wrong if I use any recorded video.\nI guess there are compatibility, codec or compression issues.\nHow could I solve this?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":31,"Q_Id":65129328,"Users Score":0,"Answer":"OpenCV will read video files as quickly as possible. That's a feature.\nIf you need to display that data at a particular speed, it's your responsibility to throttle the loop. As suggested in the comments, you would do that by giving a suitable (maximum!) 
delay to waitKey.\nIf your VideoCapture object doesn't give a sensible value for CAP_PROP_FPS, see if you can get a sensible value for CAP_PROP_POS_MSEC.","Q_Score":0,"Tags":"python,opencv,camera","A_Id":65143545,"CreationDate":"2020-12-03T15:49:00.000","Title":"Videocapture problem with video and real-time","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I made my app using Python (PyQt5). I converted the py to exe using pyinstaller, but after converting it looped at the splash screen (my app has a splash screen and a main screen). So, I decided to keep it as a py file, but can I make it unreadable for users? I mean, can I make it so only my app can execute it and normal users aren't able to see the source code?\nNote: This program is only for Windows.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":138,"Q_Id":65131473,"Users Score":0,"Answer":"pyinstaller is the way to go. It does work with the libraries you use. It seems you have some underlying problems though.\nAnother alternative is py2exe.\nDid you try some research into pyinstaller options? Some suggest using the --onefile option of pyinstaller.","Q_Score":1,"Tags":"python,python-3.x,windows","A_Id":65131625,"CreationDate":"2020-12-03T17:56:00.000","Title":"How to make code unreadable but executable?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I run a process distributed over the cores through concurrent.futures. Each of the processes has a function which ultimately calls os.getpid(). 
Might the IDs from os.getpid() coincide in spite of being in different concurrent.futures' branches?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":391,"Q_Id":65137493,"Users Score":3,"Answer":"I don't know that the meaning of the value returned by os.getpid() is well defined. I'm pretty sure that you can depend on no two running processes having the same ID, but it's very likely that after some process is terminated, its ID will eventually be re-used.\nThat's what happens in most operating systems, and the implementation of os.getpid() quite likely just calls the operating system and returns the same value.","Q_Score":1,"Tags":"python,concurrency,multiprocessing,pid","A_Id":65137573,"CreationDate":"2020-12-04T03:22:00.000","Title":"uniqueness of os.getpid in multiprocessing","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I know that namedtuple is a factory function to create classes with immutable data fields. I'm assuming they are hashable so that they can be used in sets. My worry is if there are any gotchas associated with using namedtuples in sets, e.g. are there issues if there are nested namedtuples?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":435,"Q_Id":65137741,"Users Score":1,"Answer":"Yes, they are hashable, and can be used in sets, like tuples.\nThe gotcha is that a tuple of mutable objects can change underneath you.\nTuples composed of immutable objects are safe in this regard.\nNot sure it is a gotcha, but it is worth noting @user2357112supportsMonica's remark in the comments:\n\nThe other gotcha is that a namedtuple is still a tuple, and its\n__hash__ and __eq__ are ordinary tuple hash and equality. 
A namedtuple will compare equal to ordinary tuples or instances of unrelated\nnamedtuple classes if the contents match.","Q_Score":2,"Tags":"python,set,tuples,namedtuple","A_Id":65137757,"CreationDate":"2020-12-04T03:59:00.000","Title":"Can namedtuples be used in set()?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to find out what fonts are available to be used in PIL with the font = ImageFont.load() and\/or ImageFont.truetype() function. I want to create a list from which I can sample a random font to be used. I haven't found anything in the documentation so far, unfortunately.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1556,"Q_Id":65141291,"Users Score":2,"Answer":"I have so far not found a solution with PIL, but matplotlib has a function to get all the available fonts from the system:\nsystem_fonts = matplotlib.font_manager.findSystemFonts(fontpaths=None, fontext='ttf')\nThe font can then be loaded using fnt = ImageFont.truetype(font, 60)","Q_Score":2,"Tags":"python,python-3.x,python-imaging-library","A_Id":65180042,"CreationDate":"2020-12-04T09:47:00.000","Title":"Get a list of all available fonts in PIL","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am currently working on a project in which I am using a webcam attached to a raspberry pi to show what the camera is seeing through a website, using a client and web server based method in python. However, I need to know how to link the raspberry pi to a website to output what it sees through the camera, while also outputting it through the python script, but I don't know 
where to start.\nIf anyone could help me I would really appreciate it.\nMany thanks.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":111,"Q_Id":65141422,"Users Score":0,"Answer":"So one way to do this with python would be to capture the camera image using opencv in a loop and display it to a website hosted on the Pi using a python frontend like flask (or some other frontend). However, as others have pointed out, the latency on this would be so bad any processing you wish to do would be nearly impossible.\nIf you wish to do this without python, take a look at mjpg-streamer, which can pull a video feed from an attached camera and display it on a localhost website. The quality is fairly good on localhost. You can then forward this to the web (if needed) using port forwarding or an application like nginx.\nIf you want to split the recorded stream into 2 (to forward one to python and to broadcast another to a website), ffmpeg is your best bet, but the FPS and quality would likely be terrible.","Q_Score":0,"Tags":"python,raspberry-pi","A_Id":65146963,"CreationDate":"2020-12-04T09:56:00.000","Title":"raspberry pi using a webcam to output to a website to view","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I tried to execute a try\/except in the app to get a log of the error, but it still crashes, so I think it is a problem of PATH? A library?\nWhy does it work with pyinstaller and not after I created an installer.nsi with HM NSIS edit?\nI'm sorry but I have no idea how to debug it!\nIt is a \"simple\" project. 
tkinter app, excel creation, 1 thread...\nI don't know where it comes from (I'm not good with systems and OS).\nPS: even stranger, when I install the app with Install.exe, if I decide to launch the app directly, it works!!!\nBut it never works a second time when I use the shortcut or the .exe in C:\\Program Files (x86)\\MYDIRECTORY.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":138,"Q_Id":65141524,"Users Score":0,"Answer":"This is a permission problem; I tried to write to a protected file.\nSolution to debug:\ndo not use --noconsole when you run pyinstaller; it will leave you the console, and with Windows+G you can keep track of the error even if the terminal stops immediately after the crash of the program.\n(I tried to log the error to a .json in a try\/except, but that's the thing that made my program crash!)","Q_Score":0,"Tags":"python,pyinstaller","A_Id":65157159,"CreationDate":"2020-12-04T10:03:00.000","Title":"Fatal error detected \"Failed to execute script main\". Works with pyinstaller, crashes with HM NSIS","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm new to python MNE and EEG data in general.\nFrom what I understand, an MNE raw object represents a single trial (with many channels). Am I correct? What is the best way to average data across many trials?\nAlso, I'm not quite sure what mne.Epochs().average() represents. Can anyone please explain?\nThanks a lot.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":99,"Q_Id":65141809,"Users Score":0,"Answer":"From what I understand, an MNE raw object represents a single trial (with many channels). Am I correct?\n\nAn MNE raw object represents a whole EEG recording. 
If you want to separate the recording into several trials, then you have to transform the raw object into an \"epoch\" object (with mne.Epochs()). You will receive an object with the shape (n_epochs, n_channels, n_times).\n\nWhat is the best way to average data across many trials? Also, I'm not quite sure what mne.Epochs().average() represents. Can anyone please explain?\n\nAbout \"mne.Epochs().average()\": if you have an \"epoch\" object and want to combine the data of all trials into one whole recording again (for example, after you performed certain pre-processing steps on the single trials or removed some of them), then you can use the average function of the class. Depending on the method you're choosing, you can calculate the mean or median of all trials for each channel and obtain an object with the shape (n_channels, n_times).\nNot quite sure about the best way to average the data across the trials, but with mne.epochs.average you should be able to do it with ease. (Personally, I always calculated the mean for all my trials for each channel. But I guess that depends on the problem you try to solve)","Q_Score":0,"Tags":"mne-python","A_Id":65271464,"CreationDate":"2020-12-04T10:21:00.000","Title":"Does python mne raw object represent a single trial? 
if so, how to average across many trials?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a script to send Outlook email through the python library win32\/Automagica.\nThe email is sent successfully if I run the script the normal way (using the IDE).\nWhen I try to run the same script from jenkins, it throws \"Exception: Could not launch Outlook, do you have Microsoft Office installed on Windows?\"\noutlook = Outlook(account_name=accountName)\nFile \"C:\\Python39\\lib\\site-packages\\automagica\\utilities.py\", line 41, in wrapper\nreturn func(*args, **kwargs)\nFile \"C:\\Python39\\lib\\site-packages\\automagica\\activities.py\", line 4186, in init\nself.app = self._launch()\nFile \"C:\\Python39\\lib\\site-packages\\automagica\\activities.py\", line 4202, in _launch\nraise Exception(\nException: Could not launch Outlook, do you have Microsoft Office installed on Windows?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":96,"Q_Id":65142863,"Users Score":0,"Answer":"Yeah, now it is working. It was because Jenkins was running under Admin rights and Outlook under User rights; after I started to run Jenkins with User rights (not Admin), it was able to send email through Outlook.","Q_Score":0,"Tags":"python,jenkins,outlook","A_Id":65178555,"CreationDate":"2020-12-04T11:36:00.000","Title":"Not able to send email from Jenkins using a python script","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am currently translating pytorch code into tensorflow.\nAt one point I am aggregating 3 losses in a tensorflow custom loop, and I get an error that I am passing a two-dimensional 
array vs a 1-dimensional one into tensorflow's CategoricalCrossEntropy, which is very legit and I understand why this happens... but in the pytorch code I am passing the same shapes and it's working perfectly with CrossEntropyLoss. Does anybody know what I have to do to transfer this to TF?\nThe shapes being passed in are (17000,100) vs (17000).","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":117,"Q_Id":65143404,"Users Score":0,"Answer":"Try using loss=tf.keras.losses.sparse_categorical_crossentropy","Q_Score":0,"Tags":"python,numpy,tensorflow,pytorch","A_Id":65144531,"CreationDate":"2020-12-04T12:17:00.000","Title":"Pytorch CrossEntropyLoss Tensorflow Equivalent","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have downloaded postgresql for Windows and I want to connect my models in the Django models file to a postgresql database, but it keeps throwing that error.\nName: \"django 1\",\nUser: \"postgresql\",\nPassword: \"bless90\",\nHost: \"local host\"","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":28,"Q_Id":65144004,"Users Score":0,"Answer":"This type of error occurs when you use a wrong value while passing the database config, like the Name, User, Host, or Password; in my case I had used \"postgresql\" instead of \"postgres\".","Q_Score":0,"Tags":"database,postgresql,django-rest-framework,python-3.9,postgresql-13","A_Id":65234993,"CreationDate":"2020-12-04T12:57:00.000","Title":"Password authentication failed for user \"postgresql\"","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm running a Mac with Catalina 10.15.6 on an Intel 
MBP. I'm trying to debug a C++ library that has a Python 3.7.7 binding, Python being installed in a venv. I used to be able to debug it via lldb by going\nlldb `which python` -- -m pytest myCrashingTest.py\nthen calling 'run', having it segfault, and then doing the normal debug fandango.\nNow when I call 'run' it tells me...\n\nerror: process exited with status -1 (Error 1)\n\nIf I try to debug python on its own, that gives me the same error:\nlldb `which python` \nI can't figure this one out and can't find anything useful via google searches. If I try to debug system python, I get a System Integrity error, which I can get round if need be, but I'm not running system python. I'm being forced to put debug prints in the C++ lib like it's the 1980s all over again.\nAny help appreciated.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":105,"Q_Id":65147028,"Users Score":0,"Answer":"When SIP is on, lldb is not allowed to debug system binaries, and more generally any binaries that are codesigned and not marked as willing to be debugged. The system Python does not opt into being debugged, so you will either have to turn off SIP (not sure how you do that in a venv) or build a debug version of python yourself. I generally do the latter; Python isn't that hard to build.","Q_Score":0,"Tags":"python,macos,lldb","A_Id":65151409,"CreationDate":"2020-12-04T16:18:00.000","Title":"Debugging python 3.7 in LLDB on MacOS 10.15.6","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"So I made a huge mistake. I was trying to install the latest python version, so I went to Finder and deleted all data related to my previous version of python, some 4000-plus files. Now when I type python --version in the terminal it still shows the old one. 
and when I try to install pip it shows :\n\nFile \"\/usr\/bin\/easy_install\", line 8, in \nfrom pkg_resources import load_entry_point\nImportError: No module named pkg_resources\n\nCan anyone please help","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":63,"Q_Id":65148923,"Users Score":0,"Answer":"Try to install the old version and then uninstall it.","Q_Score":0,"Tags":"python,terminal,pip","A_Id":65149167,"CreationDate":"2020-12-04T18:35:00.000","Title":"Error: No module named pkg_resources. How do I install?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am writing a program with \"pynput\" which uses Listener() to detect key presses and then based on them performs some action. However one of these actions is typing by sending keystrokes with Controller(). The issue is when Controller is typing something, those keystrokes are also detected by Listener. I want the Listener thread to only listen to key presses done by the user and not by the script","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":31,"Q_Id":65148944,"Users Score":1,"Answer":"I think that's impossible, but you could try to ignore input right when the Controller() writes.","Q_Score":0,"Tags":"python,pynput","A_Id":65149661,"CreationDate":"2020-12-04T18:36:00.000","Title":"is there a way to differentiate between whether the keypresses are done by a user or script?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have installed Anaconda for Windows 10. I later installed Ubuntu under the WSL. I would like to run the python from Anaconda in the Ubuntu shell. Is this possible? 
How do I activate the environment?\nAlternatively, if I install Anaconda under ubuntu, will I be able to use that environment in Visual Studio 2019? (My end goal is to do my python dev in VS2019, be able to run in debug mode there, and also use the bash shell to run python scripts.)","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":2828,"Q_Id":65151949,"Users Score":1,"Answer":"You can use VS on Ubuntu's Anaconda python, however you will need to install it there as well, and for now there's no official support for a WSL GUI from Microsoft, though you can still install third-party programs that will do that for you.\nIn order to use Anaconda under the WSL shell you will first need to install it there.\nYou can do that by going to the Anaconda webpage and copying the link to download the latest Linux version, and under the bash you type wget [link].\nAfter the download is done you can install it by running sudo bash [name of the archive]\nYou can find its name by typing ls, and it should match the version that you just downloaded.\nAfter that you should reload the bash with source ~\/.bashrc and you should now be able to use Anaconda under the Linux bash, though it's not possible yet to have it on the GUI.\nThere are multiple alternatives if you still want it displaying in your browser: you could either change the path to output in your Windows browser, install a complete Linux GUI, or just use a Windows program to display the Anaconda GUI alone.\nAssuming you want the latter:\nGo to the MobaXterm website and download it. 
This is a lightweight software that comes with various inbuilt tools to access SSH server, VNC, SFTP, Command Terminal, and more.\n\nOpen the MobaXterm, once you have that on your system.\n\nClick on the Session option given in the Menu of its.\n\nSelect WSL available at the end of the tools menu.\n\nFrom Basic WSL Setting, click on the Dropdown box and select Ubuntu and press the OK button.\n\nNow you will see your Ubuntu WSL app on MobaXterm, great.\n\nThere, simply type: anaconda-navigator\n\nThat\u2019s it, this will open the graphical user interface of the Anaconda running on Ubuntu WSL app of Windows 10.\n\nStart creating environments and Installation of different packages directly from the GUI of Navigator.","Q_Score":1,"Tags":"python,visual-studio,anaconda,windows-subsystem-for-linux","A_Id":66953111,"CreationDate":"2020-12-04T23:02:00.000","Title":"How do I launch Windows Anaconda python from WSL Ubuntu?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am making turn based game in python using Pyglet. The game has a player-vs-AI mode in which the bot calculates a move to play against the player. However, the function which calculates the bot's move takes around 3-5 seconds to run, blocking the game's UI. In order to get around this, I am running the bot's calculation on a second process using multiprocessing.Process. I got it to work well without blocking the UI, however every time I open the second process to run the function a new Pyglet window opens, then closes again when the process is closed. Is there any way to open a second process in a Pyglet program without a second window opening? Let me know if examples of my code is required, and I will try to come up with similar code to share. 
Thanks in advance to anyone who can help.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":120,"Q_Id":65154811,"Users Score":0,"Answer":"You can fix the problem by moving the initialization of the window inside of the main block","Q_Score":1,"Tags":"python,multiprocessing,window,pyglet","A_Id":65164876,"CreationDate":"2020-12-05T07:37:00.000","Title":"Multiprocessing with Pyglet opens a new window","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm planning to implement a log retrieval solution from AWS CloudWatch logs using Insights logs query which let the users get the logs for 15 days. The queried data can range any where between KBs to GBs for that time frame.\nIs there any way to retrieve that log data in paginated way using AWS services? Since the response limit is capped for API Gateway and Lambda, it is hard to retrieve the data in paginated fashion.\nAre there any other AWS services that can be used to retrieve the cloudwatch logs data?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":530,"Q_Id":65157214,"Users Score":0,"Answer":"One option is that first you can export Cloudwatch data to S3. 
You may export data periodically from Cloudwatch or use Lambda subscription to export data to s3.\nOnce data are in s3, you can use Athena to query data using SQL like language.","Q_Score":1,"Tags":"python,amazon-web-services,aws-lambda,amazon-cloudwatch,aws-cloudwatch-log-insights","A_Id":65183647,"CreationDate":"2020-12-05T13:05:00.000","Title":"Log retrieval solution using AWS CloudWatch insights logs","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"i have a 3d point clouds of my object by using Open3d reconstruction system ( makes point clouds by a sequence of RGBD frames) also I created a 3d bounding box on the object in point clouds\nmy question is how can I have 2d bounding box on all of the RGB frames at the same coordinates of 3d bounding box?\nmy idea Is to project 3d bb to 2d bb but as it is clear, the position of the object is different in each frame, so I do not know how can i use this approach?\ni appreciate any help or solution, thanks","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":472,"Q_Id":65160942,"Users Score":0,"Answer":"calculate points for the eight corners of your box\ntransform those points from the world frame into your chosen camera frame\nproject the points, apply lens distortion if needed.\n\nOpenCV has functions for some of these operations and supports you with matrix math for the rest.\nI would guess that Open3d gives you pose matrices for all the cameras. 
you use those to transform from the world coordinate frame to any camera's frame.","Q_Score":0,"Tags":"python,opencv,point-clouds,bounding-box,open3d","A_Id":65161287,"CreationDate":"2020-12-05T19:15:00.000","Title":"How can i have 2D bounding box on a sequence of RGBD frames from a 3D bounding box in point clouds?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Problem statement\nI would like to achieve the following:\n(could be used for example to organize some sort of a speeddating event for students)\nCreate a schedule so people talk to each other one-on-one and this to each member of the group.\nbut with restrictions.\n\nInput: list of people. (eg. 30 people)\nRestrictions: some of the people should not talk to each other (eg. they know each other)\nOutput: List of pairs (separated into sessions) just one solution is ok, no need to know all of the possible outcomes\n\nExample\neg. Group of 4 people\n\nJohn\nSteve\nMark\nMelissa\n\nRestrictions: John - Mellisa -> NO\nOutcome\nSession one\n\nJohn - Steve\nMark - Melissa\n\nSession two\n\nJohn - Mark\nSteve - Melissa\n\nSession three\n\nSteve - Mark\n\nJohn and Mellisa will not join session three as it is restriction.\nQuestion\nIs there a way to approach this using Python or even excel?\nI am especially looking for some pointers how this problem is called as I assume this is some Should I look towards some solver? Dynamic programming etc?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":75,"Q_Id":65163061,"Users Score":0,"Answer":"Your given information is pretty generous, you have a set of all the students, and a set of no-go pairs (because you said it yourself, and it makes it easy to explain, just say this is a set of pairs of students who know each other). 
So we can iterate through our students list creating random pairings so long as they do not exist in our no-go set, then expand our no-go set with them, and recurse on the remaining students until we can not create any pairs that do not exist already in the no-go set (we have pairings so that every student has met all students).","Q_Score":0,"Tags":"python,algorithm,dynamic-programming","A_Id":65163302,"CreationDate":"2020-12-05T23:26:00.000","Title":"Create a schedule where a group of people all talk to each other - with restrictions","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Hello I am needing assistance. I am currently doing within url.py for Django:\nurlpatterns = [\npath('cookies\/', admin.site.urls),\n]\nThis is being done from urls.py in atom and when I look at the terminal it is not able to GET the new url. When I have 127.0.0.1\/cookies\/ I am still directed to a not found page. 
Anyone please help; I am currently using Ubuntu Linux.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":34,"Q_Id":65163474,"Users Score":0,"Answer":"It looks like you are changing a urls.py from your app, not your main project's urls.py.\nYou generated this file by yourself, right?\nIf you want to change your admin path, go to your main project's urls.py. It is in the same folder as settings.py.\nIf you cannot find it, just search for admin.site.urls in your text editor.","Q_Score":0,"Tags":"python,django","A_Id":65163648,"CreationDate":"2020-12-06T00:32:00.000","Title":"I am having trouble changing the url.py within Django","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"\"python --version\nPython was not found; run without arguments to install from the Microsoft Store, or disable this shortcut from Settings > Manage App Execution Aliases.\"\nThis is what I get trying to make sure it works (clearly it doesn't). I'm quite a rookie with all this. I started because I wanted to run some script on BlueStacks, so I needed Python and ADB both added to PATH. The problem comes here.... 
It is indeed added to Path:\nC:\\windows\\system32;C:\\windows;C:\\windows\\System32\\Wbem;C:\\windows\\System32\\WindowsPowerShell\\v1.0\\;C:\\windows\\System32\\OpenSSH\\;C:\\Users\\Sierra\\AppData\\Local\\Microsoft\\WindowsApps;C:\\platform-tools;C:\\Users\\Sierra\\AppData\\Local\\Programs\\Python\\Python39;C:\\Users\\Sierra\\AppData\\Local\\Programs\\Python\\Python39\\Lib;\nThis is PATHEXT: .COM;.EXE;.BAT;.CMD;.VBS;.VBE;.JS;.JSE;.WSF;.WSH;.MSC\nThis is PYTHONPATH (I made it since I saw someone saying it would fix it):\nC:\\Users\\Sierra\\AppData\\Local\\Programs\\Python\\Python39;C:\\Users\\Sierra\\AppData\\Local\\Programs\\Python\\Python39\\Lib;C:\\Users\\Sierra\\AppData\\Local\\Programs\\Python\\Python39\\include;C:\\Users\\Sierra\\AppData\\Local\\Programs\\Python\\Python39\\DLLS;C:\\Users\\Sierra\\AppData\\Local\\Programs\\Python\\Python39\\Scripts;C:\\Users\\Sierra\\AppData\\Local\\Programs\\Python\\Python39\\Lib\\site-packages\nWeird enough the fact that ADB works fine:\nC:\\Users\\Sierra>adb --version Android Debug Bridge version 1.0.41 Version 30.0.5-6877874 Installed as C:\\platform-tools\\adb.exe \nSince I did the same in both cases, I can't get why it's not working. Maybe I did something wrong with the Python version I downloaded? Weird thing too, since I also have stilled version 3.8\nAlso, the script I need says \"Python 3.7.X installed and added to PATH.\" I guessed 3.9 would work, since it's the newest\nI apoloogize cause my English. I'm not native speaker, so I could have messed up somewhere. Many thanks!!\nForgot to tell, I use Windows 10","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":3120,"Q_Id":65164477,"Users Score":0,"Answer":"When you run the setup exe for python a list of tiny boxes should pop up and one will say add PATH to python? If you click yes it will add PATH to python which seems like your issue I think? 
If you have multiple versions of python installed, that could cause an issue. Type python in cmd; if it gives an error then you didn't install python properly. Double-check you have all the required modules installed; if that doesn't work then I'm lost. Anyways, here's the code to open cmd.\n\nimport os\n\n\nos.system(\"cmd\")","Q_Score":0,"Tags":"python,python-3.x,windows","A_Id":65164897,"CreationDate":"2020-12-06T03:46:00.000","Title":"How to make Python 3.9 run command prompt windows 10?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"Good day!\nInstalled Python 3.9.1, checked \"Add to path\", but the cmd did not work.\nAdded the Environment Variable Path, both the folder\n\nC:\\Users\\XXXXX\\AppData\\Local\\Programs\\Python\\Python39\n\n(file manager opens the path to python.exe just fine)\nand script lines:\n\nC:\\Users\\XXXXX\\AppData\\Local\\Programs\\Python\\Python39\n\nStill the commands python --version and pip --version do not work from the command line.\nPy --version works just fine though.\nMight anyone share an idea what the reason might be?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":6178,"Q_Id":65166813,"Users Score":0,"Answer":"If you had Python installed in the system before, the new path is added at the end of the PATH system variable, and when the system looks for python.exe it first finds the old version that is available under a different folder.\nIf you used a command window opened before the new version got installed, it is also possible that system variables did not reload. 
Close it and use a new one to check.","Q_Score":4,"Tags":"python,python-3.x,python-3.9,system-paths","A_Id":65173353,"CreationDate":"2020-12-06T10:06:00.000","Title":"Python 3.9.1 path variable","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am relatively new to the python's subprocess and os modules. So, I was able to do the process execution like running bc, cat commands with python and putting the data in stdin and taking the result from stdout.\nNow I want to first know that a process like cat accepts what flags through python code (If it is possible).\nThen I want to execute a particular command with some flags set.\nI googled it for both things and it seems that I got the solution for second one but with multiple ways. So, if anyone know how to do these things and do it in some standard kind of way, it would be much appreciated.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":55,"Q_Id":65166931,"Users Score":0,"Answer":"In the context of processes, those flags are called arguments, hence also the argument vector called argv. Their interpretation is 100% up to the program called. In other words, you have to read the manpages or other documentation for the programs you want to call.\nThere is one caveat though: If you don't invoke a program directly but via a shell, that shell is the actual process being started. It then also interprets wildcards. For example, if you run cat with the argument vector ['*'], it will output the content of the file named * if it exists or an error if it doesn't. 
If you run \/bin\/sh with ['-c', 'cat *'], the shell will first resolve * into all entries in the current directory and then pass these as separate arguments to cat.","Q_Score":1,"Tags":"python,linux,subprocess","A_Id":65169064,"CreationDate":"2020-12-06T10:22:00.000","Title":"Is there any way to know the command-line options available for a separate program from Python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am just trying to calculate the percentage of one column against another's total, but I am unsure how to do this in Pandas so the calculation gets added into a new column.\nLet's say, for argument's sake, my data frame has two attributes:\n\nNumber of Green Marbles\nTotal Number of Marbles\n\nNow, how would I calculate the percentage of the Number of Green Marbles out of the Total Number of Marbles in Pandas?\nObviously, I know that the calculation will be something like this:\n\n(Number of Green Marbles \/ Total Number of Marbles) * 100\n\nThanks - any help is much appreciated!","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1029,"Q_Id":65167120,"Users Score":0,"Answer":"df['percentage columns'] = (df['Number of Green Marbles']) \/ (df['Total Number of Marbles'] ) * 100","Q_Score":1,"Tags":"python,pandas,math,percentage","A_Id":65167227,"CreationDate":"2020-12-06T10:45:00.000","Title":"Pandas: How to calculate the percentage of one column against another?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I installed python 3.9 selenium and behave\nI want to run first feature file but had \"behave is not recognized as an internal or external command\"\nI 
added C:\\ProgramFiles\\Python39\\Scripts\\ and C:\\ProgramFiles\\Python39\\ to environment variables and to system path variables. In cmd when typing python --version I got the proper answer.\nI don't have any code yet, just a Scenario in a Feature file\nAlso I don't see the Behave configuration template when I try to ADD Configuration to run Behave through PyCharm, so Behave is not installed\nScenario: some scenario\nGiven ...\nWhen ...\nThen ...\nwhen typing behave login.feature I got this error","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1586,"Q_Id":65168644,"Users Score":0,"Answer":"I deleted Python 3.9 and installed 3.8; now all is working fine","Q_Score":0,"Tags":"python,selenium,cmd,environment-variables,python-behave","A_Id":65169175,"CreationDate":"2020-12-06T13:31:00.000","Title":"Why I'm getting \"behave is not recognized as an internal or external command\" on Windows","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm trying to implement a dive-and-fix algorithm in Gurobi. What I want to build is a function into which you put an optimized model last_model, make a deepcopy of this model called new_model and add certain constraints to new_model according to certain optimized values of last_model.\nI found the function .copy() that would make a deepcopy for me. But I\u2019m still having an awful time adding constraints to my copied new_model as I can\u2019t in any way alter my constraints. (And yes, I am using last_model.update() before copying)\nIf I don\u2019t do anything to my variables after new_model = last_model.copy() and try to add a constraint on z, it would tell me that Variable not in model.\nI\u2019ve tried .getVarByName(\u2018z\u2019), which would tell me that z was a NoneType. 
(I found this on stackexchange)\nI\u2019ve tried new_model._data = last_model._data, which just returns that the function _data does not exist. (I found this on the gurobi support site)\nI\u2019ve tried .getVars which would just create a list and does not allow me to add any constraints on the actual variables.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":152,"Q_Id":65169092,"Users Score":0,"Answer":"You are on the right track with getVars() - you really do need to get the variables of the copied model again to be able to add new constraints. The variables of the original model are only going to refer to the original model, even if they may have the same name. Think of the variables as handles for the actual variables in the model - they can only refer to one model.","Q_Score":0,"Tags":"python,optimization,deep-copy,gurobi","A_Id":65201353,"CreationDate":"2020-12-06T14:21:00.000","Title":"How to make a copy of a Gurobi model and add constraints to the copy in Python?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"So right now, I'm making a sudoku solver. You don't really need to know how it works, but one of the checks I take so the solver doesn't break is to check if the string passed (The sudoku board) is 81 characters (9x9 sudoku board). An example of the board would be: \"000000000000000000000000000384000000000000000000000000000000000000000000000000002\"\nthis is a sudoku that I've wanted to try since it only has 4 numbers. but basically, when converting the number to a string, it removes all the '0's up until the '384'. 
Does anyone know how I can stop this from happening?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":92,"Q_Id":65170090,"Users Score":2,"Answer":"There is no way to prevent it from happening, because that is not what is happening. Integers cannot remember leading zeroes, and something that does not exist cannot be removed. The loss of zeroes does not happen at conversion of int to string, but at the point where you parse the character sequence into a number in the first place.\nThe solution: keep the input as string until you don't need the original formatting any more.","Q_Score":0,"Tags":"python","A_Id":65170118,"CreationDate":"2020-12-06T15:58:00.000","Title":"int to str in python removes leading 0s","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'd like to modify the Extensions that I send in the client Hello packet with python.\nI've had a read of most of the source code found on GitHub for urllib3 but I still don't know how it determines which TLS extensions to use.\nI am aware that it will be quite low level and the creators of urllib3 may just import another package to do this for them. If this is the case, which package do they use?\nIf not, how is this determined?\nThanks in advance for any assistance.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":107,"Q_Id":65171598,"Users Score":0,"Answer":"The HTTPS support in urllib3 uses the ssl package which uses the openssl C-library. ssl does not provide any way to directly fiddle with the TLS extension except for setting the hostname in the TLS handshake (i.e. 
server_name extension aka SNI).","Q_Score":1,"Tags":"python,ssl,httpclient,tls1.2,urllib3","A_Id":65172963,"CreationDate":"2020-12-06T18:29:00.000","Title":"How does urllib3 determine which TLS extensions to use?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying pip3 install mediapipe but I am getting an error:\n\nERROR: Could not find a version that satisfies the requirement mediapipe\nERROR: No matching distribution found for mediapipe\n\nMy Python version is 3.7.9 and pip version is 20.3.1.","AnswerCount":5,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":13279,"Q_Id":65172162,"Users Score":0,"Answer":"I just uninstalled and reinstalled Python version 3.7.7","Q_Score":4,"Tags":"python,pip,mediapipe","A_Id":67674897,"CreationDate":"2020-12-06T19:24:00.000","Title":"Cannot install \"mediapipe\" library","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying pip3 install mediapipe but I am getting an error:\n\nERROR: Could not find a version that satisfies the requirement mediapipe\nERROR: No matching distribution found for mediapipe\n\nMy Python version is 3.7.9 and pip version is 20.3.1.","AnswerCount":5,"Available Count":3,"Score":0.1586485043,"is_accepted":false,"ViewCount":13279,"Q_Id":65172162,"Users Score":4,"Answer":"I solved it by typing py -m pip install mediapipe instead.","Q_Score":4,"Tags":"python,pip,mediapipe","A_Id":67230528,"CreationDate":"2020-12-06T19:24:00.000","Title":"Cannot install \"mediapipe\" library","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and 
Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying pip3 install mediapipe but I am getting an error:\n\nERROR: Could not find a version that satisfies the requirement mediapipe\nERROR: No matching distribution found for mediapipe\n\nMy Python version is 3.7.9 and pip version is 20.3.1.","AnswerCount":5,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":13279,"Q_Id":65172162,"Users Score":0,"Answer":"You need to install the 64-bit version of Python.","Q_Score":4,"Tags":"python,pip,mediapipe","A_Id":69868584,"CreationDate":"2020-12-06T19:24:00.000","Title":"Cannot install \"mediapipe\" library","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm using pipenv to manage virtual environments but I'm confused about the following.\nIf I run:\n\npipenv shell\npip list (or pip3 list)\n\nI don't get the modules installed in the virtual environment (or those installed globally), it just prints: pip, setuptools, and wheel.\nIt finds the right packages when running the code and I can see them in the Pipfile, but shouldn't they show when running pip list?\nAny clarification will be appreciated.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":1602,"Q_Id":65174557,"Users Score":0,"Answer":"you can use pipenv graph to show installed dependencies tree","Q_Score":0,"Tags":"python,python-3.x,pipenv","A_Id":71326120,"CreationDate":"2020-12-07T00:19:00.000","Title":"Pip list to show packages installed through pipenv?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm using pipenv to manage virtual environments but I'm confused about 
the following.\nIf I run:\n\npipenv shell\npip list (or pip3 list)\n\nI don't get the modules installed in the virtual environment (or those installed globally), it just prints: pip, setuptools, and wheel.\nIt finds the right packages when running the code and I can see them in the Pipfile, but shouldn't they show when running pip list?\nAny clarification will be appreciated.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":1602,"Q_Id":65174557,"Users Score":0,"Answer":"Type pipenv run pip freeze after activating pipenv with pipenv shell","Q_Score":0,"Tags":"python,python-3.x,pipenv","A_Id":65174629,"CreationDate":"2020-12-07T00:19:00.000","Title":"Pip list to show packages installed through pipenv?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is there any python library for computing EMD between two signatures? There are multiple options to compute EMD between two distributions (e.g. pyemd). But I didn't find any implementation for the exact EMD value. For example, consider Signature_1 = {(1,1), (4,1)} and Signature_2 = {(1,1), (2,1), (3,1), (4,1)}, where first coordinate is the position and second coordinate is the weight. True EMD(Signature_1, Signature_2) = 0 whereas if we consider these as distributions then the distance is 0.5 (the emd_samples in pyemd gives this answer). But I would be interested in the implementation of True EMD. Any help in this regard would be appreciated.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":146,"Q_Id":65175926,"Users Score":0,"Answer":"No worries. I got the answer. 
You can just use \"normalized\" = False, \"extra_mass_penalty\" = 0 in the arguments of \"emd_samples\" function of pyemd.","Q_Score":0,"Tags":"python,earth-movers-distance","A_Id":65272218,"CreationDate":"2020-12-07T03:52:00.000","Title":"Exact Earth Mover's Distance (NOT Mallows Distance) Python Code","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have some pdf files in a directory. Some of them are password protected, some of them are not. I know the passwords for each of the password protected files. How do I automate the process of removing passwords from each of the pdf files? I am thinking of something like:\n\nGetting the password protected file.\nTrying the given passwords from a wordlist I've made.\nPrinting out the password for the file.\nSaving the file as 'Decrypted_filename.pdf'","AnswerCount":4,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1311,"Q_Id":65177156,"Users Score":0,"Answer":"I think you can solve your problem with pyPdf","Q_Score":2,"Tags":"python,python-3.x,pdf,unix","A_Id":65177219,"CreationDate":"2020-12-07T06:43:00.000","Title":"How do I remove passwords of some of all the pdf files in a directory with python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"The sample data is given here .\n0.005225 1 282 Rx D 8 00 00 FF F5 FF FF 14 01 I know the meaning of each byte and its unit. I want to decode this data into human readable format, such as CAN ID description, value with or without units. How to do it in python? 
Any libraries?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":233,"Q_Id":65178300,"Users Score":0,"Answer":"Have you tried python-can python package? It also has a Message object where you can initiate the Message object from a bytearray.","Q_Score":0,"Tags":"python,python-3.x,decode,can-bus,decoder","A_Id":65178614,"CreationDate":"2020-12-07T08:32:00.000","Title":"How to convert raw CAN data to human readable format using python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am currently making a project which uses MQTT to forward data. I am using MQTT Dash app to receive it. I have three different messages to send, all are in text format. Do I have to create three different topics to publish the data? Is there a way to send them in single payload?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":125,"Q_Id":65178890,"Users Score":0,"Answer":"Yes. One possibility: You can put data into a dictionary, jsonify it and put yhe resulting string into payload. At receiving end, de-jsonify the string back into dictionary and access the data.","Q_Score":0,"Tags":"python,mqtt","A_Id":65178968,"CreationDate":"2020-12-07T09:14:00.000","Title":"Can i send multiple data's through one topic in MQTT?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am currently making some kind of a program that can control PCs using a remote control, and one of the features is music playing (In the background, the player will not be visible). 
I know that there are other options for that in python, but the only lightweight one is playsound, which can't pause and stop the music.\nAll the other packages increase the executable file size dramatically, so I decided to use VLC as a downloadable extension. Now, I don't want users to install VLC, just to download portable VLC libraries and use them to get VLC playing features. Any way to use python VLC bindings with a portable version of VLC? (Without installing, that's the whole point)\nThank you!","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":177,"Q_Id":65179158,"Users Score":0,"Answer":"Specifically, python-vlc uses libvlc.dll and libvlccore.dll. On Windows machines, it places copies in \/windows\/system32\/. You could test copying just those two DLLs, rather than doing a full install.","Q_Score":0,"Tags":"python,python-3.x,audio,vlc,python-vlc","A_Id":66399074,"CreationDate":"2020-12-07T09:33:00.000","Title":"How to initialize python-vlc without VLC installed on the machine (portable VLC instance)","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am testing Python 3.6 on Windows IIS 10.0 and get the error 401 from the browser when I call a .py file.\n\nI have IIS working on ASP pages and HTML\nI have python.exe installed and working\nI have added a .py handler in IIS\nI have tried to allow anonymous access and set up authorization as IUSR in IIS\n(And I allowed IUSR and IIS_IUSR...
Full access on application directory)\n\nI looked on Google and tried out many suggestions!\nBut no way to execute the script!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":173,"Q_Id":65180178,"Users Score":0,"Answer":"You have to add %s %s at the end of the Python Interpreter address.","Q_Score":0,"Tags":"python,iis","A_Id":65427202,"CreationDate":"2020-12-07T10:41:00.000","Title":"How can I eliminate 401 error on IIS when calling Python script?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am trying to install pygame in the pythonista app.\nI installed Stash, but when I tried pip install pygame an error appeared like 'Cannot locate packages.Manual installation required'\nWhy do I get this error and how can I install the pygame library in the Pythonista app?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":555,"Q_Id":65183637,"Users Score":1,"Answer":"Pygame relies on C code that is not installable on Pythonista.","Q_Score":3,"Tags":"python,pygame,pythonista","A_Id":65224605,"CreationDate":"2020-12-07T14:29:00.000","Title":"How install pygame to pythonista apps","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I installed termux on my Android tablet, and was able to successfully install Python 3.9 and Numpy, but not matplotlib. Apparently the .whl was downloaded and cached, and now when I try to install, whether using pip or pkg it attempts to use the cached .whl file. I tried clearing memory and reinstalling everything from scratch, but it still downloads the same .whl, with the same result.
(The termux wiki provided no clues that I could find)\nAnybody have a workaround or fix?","AnswerCount":5,"Available Count":1,"Score":0.0399786803,"is_accepted":false,"ViewCount":4632,"Q_Id":65184413,"Users Score":1,"Answer":"As I did not want to install Ubuntu on the tablet, what I wound up doing was installing Pydroid 3. I was then able to install Numpy and Matplotlib using pip. Thanks for the effort!","Q_Score":1,"Tags":"python,android,matplotlib,termux","A_Id":65860935,"CreationDate":"2020-12-07T15:20:00.000","Title":"pip install matplotlib does not work under termux (Android)","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Why does Pandas not round DataFrames when the dtypes are np.float16?\npd.DataFrame(np.random.rand(10) for x in range(0, 10)).astype(np.float16).round(2)\nOr\nnp.round(pd.DataFrame(np.random.rand(10) for x in range(0, 10)).astype(np.float16), 2)\nOr\npd.DataFrame(np.random.rand(10) for x in range(0, 10)).astype(np.float16).round({0:2, 1:2})\nThis must have come up before but I can't find it anywhere?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":198,"Q_Id":65187037,"Users Score":0,"Answer":"It is rounding. Up to the limits of float16 precision, the results are exactly what you asked for.
However, the limits of float16 precision are significantly lower than the 6 significant figures Pandas attempts to print by default, so you see some of the representation imprecision that is usually hidden when printing floating-point numbers.","Q_Score":0,"Tags":"python,pandas","A_Id":65187103,"CreationDate":"2020-12-07T18:06:00.000","Title":"Why doesn't Pandas round when dtype is float16?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"After upgrading from python 3.8.0 to python 3.9.1, the tremc front-end of transmission bitTorrent client is throwing decodestrings is not an attribute of base64 error whenever I click on a torrent entry to check the details.\nMy system specs:\nOS: Arch linux\nkernel: 5.6.11-clear-linux","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1953,"Q_Id":65187458,"Users Score":0,"Answer":"base64.encodestring() and base64.decodestring(), aliases deprecated since Python 3.1, have been removed.\nUse base64.encodebytes() and base64.decodebytes()","Q_Score":0,"Tags":"linux,base64,python-3.9","A_Id":68098566,"CreationDate":"2020-12-07T18:37:00.000","Title":"decodestrings is not an attribute of base64 error in python 3.9.1","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am dealing with a semantic segmentation problem where the two classes in which I am interested (in addition to background) are quite unbalanced in the image pixels. I am actually using sparse categorical cross entropy as a loss, due to the way in which training masks are encoded. Is there any version of it which takes into account class weights?
I have not been able to find it, and not even the original source code of sparse_categorical_cross_entropy. I never explored the tf source code before, but the link to source code from API page doesn't seem to link to a real implementation of the loss function.","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":1153,"Q_Id":65188739,"Users Score":1,"Answer":"As far as I know you can use class weights in model.fit for any loss function. I have used it with categorical_cross_entropy and it works. It just weights the loss with the class weight so I see no reason it should not work with sparse_categorical_cross_entropy.","Q_Score":4,"Tags":"python,tensorflow,keras,image-segmentation,cross-entropy","A_Id":65191246,"CreationDate":"2020-12-07T20:16:00.000","Title":"Weighted sparse categorical cross entropy","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm using Pycharm on Windows 10.\nPython version: 3.8.6\nI've checked using the CMD if I have tkinter install python -m tkinter. 
It says I have version 8.6\nTried:\n\nimport tkinter.\nI get \"No module named 'tkinter' \"\n\nfrom tkinter import *.\nI get \"Unresolved reference 'tkinter'\"\n\nInstalled future package but that didn't seem to change the errors.\n\n\nAny suggestions on how to fix this issue?\nThank you!","AnswerCount":2,"Available Count":2,"Score":-0.0996679946,"is_accepted":false,"ViewCount":252,"Q_Id":65190405,"Users Score":-1,"Answer":"You can try \"pip install tkinter\" in cmd","Q_Score":0,"Tags":"python,tkinter,python-3.8","A_Id":70811441,"CreationDate":"2020-12-07T22:29:00.000","Title":"tkinter in Pycharm (python version 3.8.6)","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm using Pycharm on Windows 10.\nPython version: 3.8.6\nI've checked using the CMD if I have tkinter install python -m tkinter. It says I have version 8.6\nTried:\n\nimport tkinter.\nI get \"No module named 'tkinter' \"\n\nfrom tkinter import *.\nI get \"Unresolved reference 'tkinter'\"\n\nInstalled future package but that didn't seem to change the errors.\n\n\nAny suggestions on how to fix this issue?\nThank you!","AnswerCount":2,"Available Count":2,"Score":-0.0996679946,"is_accepted":false,"ViewCount":252,"Q_Id":65190405,"Users Score":-1,"Answer":"You just verify in the project settings, sometimes Pycharm doesn't use the same interpreter.","Q_Score":0,"Tags":"python,tkinter,python-3.8","A_Id":65190465,"CreationDate":"2020-12-07T22:29:00.000","Title":"tkinter in Pycharm (python version 3.8.6)","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a string like: string = \"[1, 2, 3]\"\nI need to convert it to a list like: [1, 2, 3]\nI've 
tried using regular expression for this purpose, but to no avail","AnswerCount":3,"Available Count":1,"Score":0.1325487884,"is_accepted":false,"ViewCount":84,"Q_Id":65190850,"Users Score":2,"Answer":"Try\n[int(x) for x in arr.strip(\"[]\").split(\", \")], or if your numbers are floats you can do [float(x) for x in arr.strip(\"[]\").split(\", \")]","Q_Score":0,"Tags":"python","A_Id":65190875,"CreationDate":"2020-12-07T23:17:00.000","Title":"how to convert a string to list I have a string how to convert it to a list?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to install a package for python 2 using pip, however it defaults to installing for python 3. The script I need to run only works in python 2.7, which i do have installed alongside python 3. The pip-2.7 command does not exist, nor pip2. Is there a way besides that to directly install the package? 
(hexdump, btw if that helps).","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":190,"Q_Id":65193101,"Users Score":1,"Answer":"Did you check if Python2 is installed?\nDid you try the pip2 command?\nWhat's the result if you do which pip?\nIf pip2 or pip2.7 don't work, you can try \/usr\/bin\/pip2 or \/usr\/bin\/pip2.7","Q_Score":0,"Tags":"python,python-3.x,linux,python-2.7","A_Id":65193142,"CreationDate":"2020-12-08T04:31:00.000","Title":"How to install packages on Linux for Python 2 with pip when Python 3 is installed","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"In machine learning, you split the data into training data and test data.\nIn cross validation, you split the training data into training sets and validation set.\n\"And if scaling is required, at each iteration of the CV, the means and standard deviations of the training sets (not the entire training data) excluding the validation set are computed and used to scale the validation set, so that the scaling part never include information from the validation set. \"\nMy question is when I include scaling in the pipeline, at each CV iteration, is scaling computed from the smaller training sets (excluding validation set) or the entire training data (including validation set)?
Because if it computes means and std from the entire training data, then this will lead to estimation bias in the validation set.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":50,"Q_Id":65193318,"Users Score":1,"Answer":"I thought about this, too, and although I think that scaling with the full data leaks some information from training data into validation data, I don't think it's that severe.\nOn one side, you shuffle the data anyway, and you assume that the distributions in all sets are the same, and so you expect means and standard deviations to be the same. (Of course, this is only theoretical (law of large numbers).)\nOn the other side, even if the means and stds are different, this difference will not be significant.\nIn my opinion, yes, you might have some bias, but it should be negligible.","Q_Score":0,"Tags":"python,machine-learning,scikit-learn,pipeline,scaling","A_Id":65193455,"CreationDate":"2020-12-08T04:57:00.000","Title":"Sklearn Pipeline: is there leakage \/bias when including scaling in the pipeline?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a numpy ndarray train_data of length 200, where every row is another ndarray of length 10304.\nHowever, when I print np.shape(train_data), I get (200, 1), and when I print np.shape(train_data[0]) I get (1, ), and when I print np.shape(train_data[0][0]) I get (10304, ).\nI am quite confused with this behavior as I supposed the first np.shape(train_data) should return (200, 10304).\nCan someone explain to me why this is happening, and how could I get the array to be in shape of (200, 10304)?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":61,"Q_Id":65200418,"Users Score":0,"Answer":"I'm not sure why that's happening, try reshaping the array:\nB = 
np.reshape(A, (-1, 2))","Q_Score":0,"Tags":"python,numpy","A_Id":65200891,"CreationDate":"2020-12-08T14:02:00.000","Title":"2D numpy array showing as 1D","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to generate good sentence embeddings for some specific type of texts, using sentence transformer models, but testing the similarity and clustering using kmeans doesn't give good results.\nAny ideas to improve? I was thinking of training any of the sentence transformer models on my dataset (which are just sentences but do not have any labels).\nHow can I retrain the existing models specifically on my data to generate better embeddings?\nThanks.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":262,"Q_Id":65200530,"Users Score":0,"Answer":"The sentence embeddings produced by a pre-trained BERT model are generic and need not be appropriate for all the tasks.\nTo solve this problem:\n\nFine-tune the model with the task specific corpus on the given task (If the end goal is classification, fine-tune the model for classification task, later you can use the embeddings from the BERT model) (This is the method suggested for the USE embeddings, especially when the model remains a black-box)\n\nFine-tune the model in an unsupervised manner using a masked language model. 
This doesn't require you to know the task beforehand, but you can just use the actual BERT training strategy to adapt to your corpus.","Q_Score":0,"Tags":"python,embedding,bert-language-model,sentence-transformers","A_Id":65201274,"CreationDate":"2020-12-08T14:09:00.000","Title":"How can I train a bert model for representational learning task that is domain specific?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm relatively new to python and have installed a standalone version of python3.8.6 on my Mac computer. More recently I also installed Anaconda, which applies other python 3.8 versions in its virtual environments. However, if I try to install a package as e.g. \"requests\" with \"python3 -m pip install requests\" in my python 3.8.6 version I get the message: \"Requirement already satisfied: requests in .\/opt\/anaconda3\/lib\/python3.8\/site-packages (2.24.0).....\" (the message is much longer) and my standalone python 3.8.6 scripts cannot find that module if I import it, saying: \"ModuleNotFoundError: No module named 'requests'\". So Anaconda seems to block the installation of packages in my standalone python 3.8.6 version.\nHow can I use both Anaconda (for special purposes and courses) and nevertheless import packages into my standalone python 3.8.6 version?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":36,"Q_Id":65201531,"Users Score":0,"Answer":"I found the problem myself. It could perhaps be interesting for other newbies! I just had forgotten to deactivate conda (conda deactivate). It remained in an active environment, called \"base\". 
After I deactivated conda, I could use pip for my standalone python.","Q_Score":0,"Tags":"python,pip,anaconda,modulenotfounderror","A_Id":65204273,"CreationDate":"2020-12-08T15:12:00.000","Title":"How can I use both Anaconda and nevertheless import packages into my standalone python version?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have an embedded system on which I can connect to internet. This embedded system must send sensor data to PC client.\nI put a socket client using python on my PC. I put a socket server (using C++ on the embedded system because you can only use C++).\nI can successfully connect from my PC to the embedded system using the sockets and send and receive whatever I want.\nNow, the problem is I use local IP to connect to the system and both of them must be connected to the same Wifi router.\nIn the real application, I won't know where the embedded system is in the world. I need to get to it through internet, because it will be connected to internet through 4g.\nMy question is, how can I connect to it through internet, if the embedded system is connected to internet using 4G?\nThank you","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":56,"Q_Id":65202965,"Users Score":1,"Answer":"Realistically in typical situations, neither a PC nor an embedded device hanging off a 4g modem will likely have (or should be allowed to have) externally routable addresses.\nWhat this practically means is that you need to bounce your traffic through a mutually visible relay in the cloud.\nOne very common way of doing that for IoT devices (which is basically to say, connected embedded devices) is to use MQTT. 
You'll find support in one form or another for most computing platforms with any sort of IP networking capability.\nOf course there are many other schemes, too - you can do something with a RESTful API, or websockets (perhaps as an alternate mode of an MQTT broker), or various proprietary IoT solutions offered by the big cloud platforms.\nIt's also going to be really key that you wrap the traffic in SSL, so you'll need support for that in your embedded device, too. And you'll have to think about which CA certs you package, and what you do about time given its formal requirement as an input to SSL validation.","Q_Score":1,"Tags":"python,sockets,tcp,embedded","A_Id":65211281,"CreationDate":"2020-12-08T16:33:00.000","Title":"Socket over internet Python Embedded","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have an embedded system on which I can connect to internet. This embedded system must send sensor data to PC client.\nI put a socket client using python on my PC. I put a socket server (using C++ on the embedded system because you can only use C++).\nI can successfully connect from my PC to the embedded system using the sockets and send and receive whatever I want.\nNow, the problem is I use local IP to connect to the system and both of them must be connected to the same Wifi router.\nIn the real application, I won't know where the embedded system is in the world. 
I need to get to it through internet, because it will be connected to internet through 4g.\nMy question is, how can I connect to it through internet, if the embedded system is connected to internet using 4G?\nThank you","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":56,"Q_Id":65202965,"Users Score":0,"Answer":"I think your problem is more easily solved if you reverse the roles of your embedded system and PC. If you are communicating with a device using IP protocols across cellular networks, it is much easier to have the device connect to a server on the PC rather than the other way around. Some networks\/cellular modems do not allow server sockets and in any case, the IP address is usually dynamically allocated and therefore difficult to know. By having the device connect to a server, it \"knows\" the domain name (or IP address) and port to which it should make the connection. You just have to make sure that there is indeed a server program running at that host bound to some agreed upon port number. You can wake up the device to form the connection based on a number of criteria, e.g. time or amount of collected data, etc.","Q_Score":1,"Tags":"python,sockets,tcp,embedded","A_Id":65205272,"CreationDate":"2020-12-08T16:33:00.000","Title":"Socket over internet Python Embedded","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have two python programs. 
Program 1 displays videos in a grid with multiple controls on it, and Program 2 performs manipulations to the images and sends them back depending on the control pressed in Program 1.\nEach video in the grid is running in its own thread, and each video has a thread in Program 2 for sending results back.\nI'm running this on the same machine though and I was unable to get multiple socket connections working to and from the same address (localhost). If there's a way of doing that - please stop reading and tell me how!\nI currently have one socket sitting independent of all of my video threads in Program 1, and in Program 2 I have multiple threads sending data to the one socket in an array with a flag for which video the data is for. The problem is when I have multiple threads sending data at the same time it seems to scramble things and stop working. Any tips on how I can achieve this?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":81,"Q_Id":65203271,"Users Score":0,"Answer":"Regarding If there's a way of doing that - please stop reading and tell me how!.\nThere's a way of doing it: assuming you are on Linux or using WSL on Windows, you could use the hostname -I command, which will output an IP that looks like 192.168.X.X.\nYou can use that IP in your python program by binding your server to that IP instead of localhost or 127.0.0.1.","Q_Score":0,"Tags":"python,multithreading","A_Id":65203946,"CreationDate":"2020-12-08T16:51:00.000","Title":"Multiple threads sending over one socket simultaneously?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Can you make mpld3-matplotlib interactive?\nWhat I mean is display a graph on a web page and be able to update the time series i.e. 
not simply a static graph, but a dynamic one-page graph app?\nWhat can be leveraged from mpld3?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":52,"Q_Id":65204165,"Users Score":0,"Answer":"If you don't have to support matplotlib, then an option is the Bokeh or Dash library instead.","Q_Score":0,"Tags":"python,matplotlib,mpld3","A_Id":65226053,"CreationDate":"2020-12-08T17:50:00.000","Title":"Can you make mpld-matplotlib interactive graph web app?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Actually I have a problem. I want to import PySimpleGUI in vscode, but when I install it with pip install PySimpleGUI I get the following error:\n\nWARNING: Retrying (Retry(total=0, connect=None, read=None,\nredirect=None, status=None)) after connection broken by\n'ConnectTimeoutError(, 'Connection to pypi.org timed out.\n(connect timeout=15)')': \/simple\/pysimplegui\/\n\nDoes someone know what I'm doing wrong and how I can correctly install it?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":1926,"Q_Id":65205122,"Users Score":1,"Answer":"Try loading Pypi.org: if that works, then it's an underlying issue. Normally this happens if the site is having trouble, other times even your DNS, but a big one I've heard is being connected to a VPN, so if you're connected to a VPN, disconnect from it and try again.","Q_Score":0,"Tags":"python,visual-studio-code,pip","A_Id":65205168,"CreationDate":"2020-12-08T18:56:00.000","Title":"how to import PySimpleGUI in vscode using python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Good day everybody. 
I'm still learning to parse data with Python. I'm now trying to familiarize myself with Chrome Developer Tools. My question is: when inspecting a directory website like TruePeopleSearch.com, how do I copy or view the variables that hold the data such as Name, Phone, and Address? I tried browsing the tool, but since I'm new with the Developer tool, I'm so lost with all the data. I would appreciate it if the experts here pointed me in the right direction.\nThank you all!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":129,"Q_Id":65206058,"Users Score":0,"Answer":"Upon further navigating the Developer Console, I learned that these strings are located in these variables, by copying the JS paths.\nNAME & AGE\ndocument.querySelector(\"#personDetails > div:nth-child(1)\").innerText\nADDRESS\ndocument.querySelector(\"#personDetails > div:nth-child(4)\").innerText\nPHONE NUMBERS\ndocument.querySelector(\"#personDetails > div:nth-child(6)\").innerText\nSTEP 1\nFrom the website, highlight the area that you need to inspect and click \"Inspect Element\"\nSTEP 2\nUnder elements, right-click the highlighted part and copy the JS path\nSTEP 3\nNavigate to console and paste the JS path and add .innerText and press Enter","Q_Score":0,"Tags":"python,json,parsing,uri","A_Id":65207019,"CreationDate":"2020-12-08T20:00:00.000","Title":"Grabbing values (Name, Address, Phone, etc.) from directory websites like TruePeopleSearch.com with Chrome Developer Tool","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"When loading in an excel file, Pandas is ingesting a value (1735582) as a float value (1735582.0). 
Subsequently, when importing the file to SQL, the value is ingested as a truncated scientific notation value (1.73558e+06), thereby rendering the value useless.\nMy first thought was to trim any trailing '.0' from all values, and then see if the import goes smoothly to retain the native values.\nI have attempted to use dataframe.replace to identify values across the entire dataframe that have a trailing '.0', but have not come up with the right solution:\ndf_update = df.replace(to_replace ='\\.\\[0]$', value = '', regex = True)\nI need a way to 1) ingest the values without the trailing '.0', 2) remove the trailing '.0', or 3) prevent to_sql from outputting the values as truncated scientific notation.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":67,"Q_Id":65206980,"Users Score":0,"Answer":"Just use df.apply and then use lambda with \"{:.2f}\".format(x) to limit it to 2 digits after the 0","Q_Score":0,"Tags":"python,sql,regex,pandas","A_Id":65207199,"CreationDate":"2020-12-08T21:12:00.000","Title":"Remove trailing .0 from values in Python","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"When I executed \"conda update --all\", I got the following debug messages. I don't see any misbehavior in my Python or Spyder installation. 
Does anyone know why we get these debug messages sometimes, and what are they warning us about?\nPreparing transaction: done\nVerifying transaction: done\nExecuting transaction: \/ DEBUG menuinst_win32:init(198): Menu: name: 'Anaconda${PY_VER} ${PLATFORM}', prefix: 'C:\\Users\\usuario\\Miniconda3', env_name: 'None', mode: 'user', used_mode: 'user'\nDEBUG menuinst_win32:create(323): Shortcut cmd is C:\\Users\\usuario\\Miniconda3\\pythonw.exe, args are ['C:\\Users\\usuario\\Miniconda3\\cwp.py', 'C:\\Users\\usuario\\Miniconda3', 'C:\\Users\\usuario\\Miniconda3\\pythonw.exe', 'C:\\Users\\usuario\\Miniconda3\\Scripts\\spyder-script.py']\nDEBUG menuinst_win32:create(323): Shortcut cmd is C:\\Users\\usuario\\Miniconda3\\python.exe, args are ['C:\\Users\\usuario\\Miniconda3\\cwp.py', 'C:\\Users\\usuario\\Miniconda3', 'C:\\Users\\usuario\\Miniconda3\\python.exe', 'C:\\Users\\usuario\\Miniconda3\\Scripts\\spyder-script.py', '--reset']\nDEBUG menuinst_win32:init(198): Menu: name: 'Anaconda${PY_VER} ${PLATFORM}', prefix: 'C:\\Users\\usuario\\Miniconda3', env_name: 'None', mode: 'user', used_mode: 'user'\nDEBUG menuinst_win32:create(323): Shortcut cmd is C:\\Users\\usuario\\Miniconda3\\pythonw.exe, args are ['C:\\Users\\usuario\\Miniconda3\\cwp.py', 'C:\\Users\\usuario\\Miniconda3', 'C:\\Users\\usuario\\Miniconda3\\pythonw.exe', 'C:\\Users\\usuario\\Miniconda3\\Scripts\\spyder-script.py']\nDEBUG menuinst_win32:create(323): Shortcut cmd is C:\\Users\\usuario\\Miniconda3\\python.exe, args are ['C:\\Users\\usuario\\Miniconda3\\cwp.py', 'C:\\Users\\usuario\\Miniconda3', 'C:\\Users\\usuario\\Miniconda3\\python.exe', 'C:\\Users\\usuario\\Miniconda3\\Scripts\\spyder-script.py', '--reset']\n| DEBUG menuinst_win32:init(198): Menu: name: 'Anaconda${PY_VER} ${PLATFORM}', prefix: 'C:\\Users\\usuario\\Miniconda3', env_name: 'None', mode: 'user', used_mode: 'user'\nDEBUG menuinst_win32:create(323): Shortcut cmd is C:\\Users\\usuario\\Miniconda3\\python.exe, args are 
['C:\\Users\\usuario\\Miniconda3\\cwp.py', 'C:\\Users\\usuario\\Miniconda3', 'C:\\Users\\usuario\\Miniconda3\\python.exe', 'C:\\Users\\usuario\\Miniconda3\\Scripts\\jupyter-notebook-script.py', '\"%USERPROFILE%\/\"']\n\/ DEBUG menuinst_win32:init(198): Menu: name: 'Anaconda${PY_VER} ${PLATFORM}', prefix: 'C:\\Users\\usuario\\Miniconda3', env_name: 'None', mode: 'user', used_mode: 'user'\nDEBUG menuinst_win32:create(323): Shortcut cmd is C:\\Users\\usuario\\Miniconda3\\pythonw.exe, args are ['C:\\Users\\usuario\\Miniconda3\\cwp.py', 'C:\\Users\\usuario\\Miniconda3', 'C:\\Users\\usuario\\Miniconda3\\pythonw.exe', 'C:\\Users\\usuario\\Miniconda3\\Scripts\\spyder-script.py']\nDEBUG menuinst_win32:create(323): Shortcut cmd is C:\\Users\\usuario\\Miniconda3\\python.exe, args are ['C:\\Users\\usuario\\Miniconda3\\cwp.py', 'C:\\Users\\usuario\\Miniconda3', 'C:\\Users\\usuario\\Miniconda3\\python.exe', 'C:\\Users\\usuario\\Miniconda3\\Scripts\\spyder-script.py', '--reset']","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":2843,"Q_Id":65207104,"Users Score":0,"Answer":"The following command solved this issue for me:\n\nconda clean --yes --all","Q_Score":5,"Tags":"python,debugging,installation,anaconda","A_Id":70377013,"CreationDate":"2020-12-08T21:22:00.000","Title":"DEBUG menuinst_win32 when running conda update --all","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"If I want to determine the type of model i.e. from which framework was it made programmatically, is there a way to do that?\nI have a model in some serialized manner(Eg. a .h5 file). For simplicity purposes, assume that my model can be either tensorflow's or scikit learn's. 
How can I determine programmatically which of these two it is?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":66,"Q_Id":65210822,"Users Score":0,"Answer":"you can either use type(model) to see its type\nyou can also use help(model) to get the doc string from model.\nyou can also use dir(model) to see its member functions and parameters.\nyou can also use inspect.getsource(model) (after import inspect) to get the source code of an object.","Q_Score":0,"Tags":"python,tensorflow,machine-learning,keras,scikit-learn","A_Id":65216668,"CreationDate":"2020-12-09T04:49:00.000","Title":"How to detect the given model is a keras or scikit model using python?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am new to Deep Learning. I finished training a model that took 8 hours to run, but I forgot to plot the accuracy graph before closing the jupyter notebook.\nI need to plot the graph, and I did save the model to my hard-disk. But how do I plot the accuracy graph of a pre-trained model? I searched online for solutions and came up empty.\nAny help would be appreciated! Thanks!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":318,"Q_Id":65212386,"Users Score":0,"Answer":"What kind of framework did you use and which version? For future problems you may face, this information can play a key role in the way we can help you.\nUnfortunately, for Pytorch\/Tensorflow the model you saved is likely to be saved with only the weights of the neurons, not with its history. 
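The type(model) suggestion from the framework-detection answer above can be sketched as a small heuristic. This is an assumption-laden sketch: the helper name `framework_of` is made up, and the root-package names checked are just the common ones:

```python
def framework_of(model) -> str:
    """Guess the originating framework from the root package of the
    model object's class. A heuristic sketch, not an exhaustive check."""
    root = type(model).__module__.split(".")[0]
    if root in ("keras", "tensorflow"):
        return "tensorflow/keras"
    if root == "sklearn":
        return "scikit-learn"
    return "unknown"
```

For a file on disk rather than a live object, one would instead try the framework loaders in turn (e.g. a Keras .h5 load, falling back to joblib/pickle) and see which succeeds.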
Once Jupyter Notebook is closed, the memory is cleaned (and with it, the data of your training history).\nThe only thing you can extract is the final loss\/accuracy you had.\nHowever, if you regularly saved a version of the model, you can load those versions and manually compute the accuracy\/loss that you need. Next, you can use matplotlib to reconstruct the graph.\nI understand this is probably not the answer you were looking for. However, if the hardware is yours, I would recommend restarting training. 8h is not that much to train a model in deep learning.","Q_Score":0,"Tags":"python,tensorflow,matplotlib,deep-learning,jupyter-notebook","A_Id":65212598,"CreationDate":"2020-12-09T07:30:00.000","Title":"Can you plot the accuracy graph of a pre-trained model? Deep Learning","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I appear to be missing some fundamental Python concept that is so simple that no one ever talks about it. I apologize in advance for likely using improper description - I probably don't know enough to ask the question correctly.\nHere is a conceptual dead end I have arrived at:\nI have an instance of Class Net, which handles communicating with some things over the internet.\nI have an instance of Class Process, which does a bunch of processing and data management\nI have an instance of Class Gui, which handles the GUI.\nThe Gui instance needs access to Net and Process instances, as the callbacks from its widgets call those methods, among other things.\nThe Net and Process instances need access to some of the Gui instances' methods, as they need to occasionally display stuff (what it's doing, results of queries, etc)\nHow do I manage it so these things talk to each other? Inheritance doesn't work - I need the instance, not the class. 
Besides, inheritance is one-way, not two-way.\nI can obviously instantiate the Gui, and then pass it (as an object) to the others when they are instantiated. But the Gui then won't know about the Process and Net instances. I can of course then manually pass the Net and Process instances to the Gui instance after creation, but that seems like a hack, not like proper practice. Also the number of interdependencies I have to manually pass along grows rather quickly (almost factorially?) with the number of objects involved - so I expect this is not the correct strategy.\nI arrived at this dead end after trying the same thing with normal functions, where I am more comfortable. Due to their size, the similarly grouped functions lived in separate modules, again Net, Gui, and Process. The problem was exactly the same. A 'parent' module imports 'child' modules, and can then call their methods. But how do the child modules call the parent module's methods, and how do they call each other's methods? Having everything import everything seems fraught with peril, and again seems to explode as more objects are involved.\nSo what am I missing in organizing my code that I run into this problem where apparently all other python users do not?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":81,"Q_Id":65217340,"Users Score":1,"Answer":"The answer to this is insanely simple.\nAnything that needs to be globally available to other modules can be stored in its own module, global_param for instance. Every other module can import global_param, and then use and modify its contents as needed. 
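The shared-module pattern the answer describes can be sketched as follows. The module name `global_param` comes from the answer; the attribute names are illustrative, and the module is fabricated in-process here only so the example is self-contained (in a real project it would simply be a file `global_param.py`):

```python
import sys
import types

# Simulate a global_param.py module so this snippet runs standalone.
global_param = types.ModuleType("global_param")
global_param.net = None
global_param.process = None
global_param.gui = None
sys.modules["global_param"] = global_param

# Any module (Net, Process, Gui) can now import it and share the instances:
import global_param as gp

gp.net = "the Net instance"          # set once at startup by whoever creates Net
print(gp.net is global_param.net)    # both names see the same module state
```

Because Python caches modules in sys.modules, every importer sees the same object, which is why mutations made in one module are visible in all the others.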
This avoids any issues with circular importing as well.\nNot sure why it took me so long to figure this out...","Q_Score":0,"Tags":"python,concept","A_Id":66012638,"CreationDate":"2020-12-09T13:03:00.000","Title":"How do I handle communication between object instances, or between modules?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"How can I make Python loop faster through DataFrame columns with 1 million rows and search for a pattern of strings? Should return True or False\npattern_example = \"home|property|house|apartment\"\nThis is what I have right now\ndf[field].str.contains(pattern_example.lower(), case = False, regex=False)\nThis is what I am trying to implement\ndf[field].apply(lambda x: True if pattern_example.lower() in x else False)\nHowever, it cannot recognize the OR(|) operator and searches for the full \"home|property|house|apartment\"\nAny suggestions?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":180,"Q_Id":65218798,"Users Score":0,"Answer":"@furas thanks for the contribution.\nIt worked.\nThis is what I used\ndf[field].apply(lambda x: True if any (word in x for word in pattern.lower().split('|')) else False)","Q_Score":2,"Tags":"python,pandas,search,lambda","A_Id":65251649,"CreationDate":"2020-12-09T14:31:00.000","Title":"How to make pandas str.contains faster","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Currently, I do not see a mark under a calculated variable that is not used later.\nIs there a way to make Spyder mark the unused variables?","AnswerCount":1,"Available 
Count":1,"Score":1.2,"is_accepted":true,"ViewCount":75,"Q_Id":65219016,"Users Score":1,"Answer":"(Spyder maintainer here) We don't have that functionality and I don't know how easy it is to implement (probably hard).","Q_Score":1,"Tags":"python-3.x,editor,spyder","A_Id":65224870,"CreationDate":"2020-12-09T14:42:00.000","Title":"How to make Spyder marking unused variables?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm very new to coding and I'm learning Python and I have a certain problem.\nI'm writing a program which requires the user to input an amount and I want the program to always\nprint out 15 zeros but I want the amount the user inputs to replace the zeros starting from the end.\nFor example, if the user enters 43525.\nThe program would print 000000000043525\nAlso for example if the user inputs 2570.20\nThe program would print 000000000257020 (removes dot automatically)\nCan anyone help me with how I should go about doing this?","AnswerCount":5,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":33,"Q_Id":65221017,"Users Score":0,"Answer":"Using string slicing\nadded_string = added_string.replace(\".\", \"\")\nnew_str = start_string[:len(start_string) - len(added_string)] + added_string","Q_Score":0,"Tags":"python,python-3.x","A_Id":65221140,"CreationDate":"2020-12-09T16:38:00.000","Title":"How do I replace a certain amount of zeros in a string with an amount that the user is asked to input, starting from the end of the string?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"The wx.MenuBar class auto generates a Window dropdown that contains the entries: next, prev, close, and close 
all. How can I remove this option?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":29,"Q_Id":65222511,"Users Score":0,"Answer":"Figured it out.\nI was using wx.aui.AuiMDIParentFrame, and it appends a Window menu item by default unless you use the wx.FRAME_NO_WINDOW_MENU style option in the constructor.","Q_Score":0,"Tags":"wxpython","A_Id":65251870,"CreationDate":"2020-12-09T18:09:00.000","Title":"How can I remove the Window dropdown in MenuBar?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have machine A that just cranks out .png files. It gets synced to machine B and I view it on machine B.\nSometimes machine A crashes for whatever reason and stops doing the scheduled jobs, which means the files on machine B will be old.\nI want machine B to run a script to see if the file is older than 1 day, and if it is, then reset the power switch on machine A, so that it can be cold booted. The switch is connected to Google Home but I understand I have to use the Assistant API.\nI have installed the google-assistant-sdk[samples] package. Can someone show me some code on how to query and return all devices, then flip the switch on and off on that device?","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":1610,"Q_Id":65222829,"Users Score":1,"Answer":"The google-assistant-sdk is intended for processing audio requests.\nFrom the doc:\n\nYour project captures an utterance (a spoken audio request, such as What's on my calendar?), sends it to the Google Assistant, and receives a spoken audio response in addition to the raw text of the utterance.\n\nWhile you could use that with some recorded phrases it makes more sense to connect to the switch directly or use a service like IFTTT. 
What kind of switch is it?","Q_Score":4,"Tags":"python,google-assistant-sdk","A_Id":65335478,"CreationDate":"2020-12-09T18:33:00.000","Title":"Google Assistant API, controlling a light switch connected to Google Home","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have machine A that just cranks out .png files. It gets synced to machine B and I view it on machine B.\nSometimes machine A crashes for whatever reason and stops doing the scheduled jobs, which means the files on machine B will be old.\nI want machine B to run a script to see if the file is older than 1 day, and if it is, then reset the power switch on machine A, so that it can be cold booted. The switch is connected to Google Home but I understand I have to use the Assistant API.\nI have installed the google-assistant-sdk[samples] package. Can someone show me some code on how to query and return all devices, then flip the switch on and off on that device?","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":1610,"Q_Id":65222829,"Users Score":1,"Answer":"Unfortunately, many smart home companies are building products for consumers, not developers. Google's SDK is letting developers stream consumer voice requests to their servers and turning that into actions. Gosund, similarly, is only interested in API access for Amazon and Google. Their API is probably not documented for public use.\nTo answer your specific question, if you want to use the Google Assistant SDK, you would name your switch something like \"Server A Switch\" and record a short clip of you saying \"Turn off Server A Switch\" and \"Turn on Server A Switch\" and send those to Google. 
The way google matches the requests with your particular account is through OAuth2 tokens, which google will give you in exchange for valid sign in credentials.\nIf Gosund works with Google Assistant, it has a standard OAuth2 server endpoint as well as a Google Assistant compliant API endpoint. I only recommend this if you want to have some fun reverse engineering it.\nIn your Google Assistant app, if you try adding the Gosund integration, the first screen popup is the url endpoint where you can exchange valid Gosund account credentials for a one-time code which you can then exchange for OAuth2 access and refresh tokens. With the access token you can theoretically control your switch. The commands you'll want to send are standardized by Google. However, you'll have to figure out where to send them. The best bet here is probably to email their developers.\nAre you familiar with OAuth2? If not, I don't recommend doing any of the above.\nYour other option is to prevent Server A from hardware crashes. This is what I recommend as the least amount of work. You should start with a server that never crashes, keep it that way and add stuff on top of it. If you only have two servers, they should be able to maintain many months of uptime. Run your scheduled jobs using cron or systemctl and have a watchdog that restarts the job when it detects an error. 
If your job is crashing the server, maybe put it in a VM like docker or something, which gives you neat auto-restart capabilities off the bat.\nAnother hacky thing you can do is schedule your gosund plug to turn off and on once a day through their consumer UI or app, or at whatever frequency you feel like is most optimal.","Q_Score":4,"Tags":"python,google-assistant-sdk","A_Id":65350670,"CreationDate":"2020-12-09T18:33:00.000","Title":"Google Assistant API, controlling a light switch connected to Google Home","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"If I have a single GPU with 8GB RAM and I have a TensorFlow model (excluding training\/validation data) that is 10GB, can TensorFlow train the model?\nIf yes, how does TensorFlow do this?\nNotes:\n\nI'm not looking for distributed GPU training. I want to know about the single GPU case.\nI'm not concerned about the training\/validation data sizes.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":304,"Q_Id":65222907,"Users Score":0,"Answer":"No, you cannot train a model larger than your GPU's memory. (there may be some ways with dropout that I am not aware of but in general it is not advised). 
Further, you would need more memory than even all the parameters you are keeping because your GPU needs to retain the parameters along with the derivatives for each step to do back-prop.\nNot to mention the smaller batch size this would require as there is less space left for the dataset.","Q_Score":1,"Tags":"python,tensorflow,memory,gpu,ram","A_Id":71564955,"CreationDate":"2020-12-09T18:38:00.000","Title":"On single gpu, can TensorFlow train a model which larger than GPU memory?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"this is kind of a dumb question but how would I make a discord.py event to automatically react to a message with a bunch of different default discord emojis at once. I am new to discord.py","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":265,"Q_Id":65223410,"Users Score":0,"Answer":"You have to use the on_message event. It's a default d.py function. It is an automatic function.","Q_Score":0,"Tags":"python,discord,discord.py","A_Id":65923529,"CreationDate":"2020-12-09T19:13:00.000","Title":"How would I use a bot to send multiple reactions on one message? Discord.py","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am running python 3.8 and using pip to install many packages and importing them. But when I try to install transforms and import it I get the following messages. 
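The GPU-memory answer's point about parameters plus derivatives can be made concrete with back-of-envelope arithmetic. The factor of 4 below is an illustrative assumption (it depends on the optimizer; plain SGD needs fewer copies):

```python
# Why a 10 GB (float32) model cannot train on an 8 GB GPU: backprop keeps a
# gradient per parameter, and an optimizer like Adam adds two moment buffers.
weights_gb = 10
copies = 1 + 1 + 2        # weights + gradients + Adam's m and v moment buffers
total_gb = weights_gb * copies
print(total_gb)           # activations and the input batch come on top of this
```

Even the weights alone exceed the 8 GB card, before counting gradients, optimizer state, or activations.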
Any help would be appreciated.\nC:\\Users\\r.acharyya.CI\\AppData\\Local\\Programs\\Python\\Python38>pip install transforms\nRequirement already satisfied: transforms in c:\\users\\r.acharyya.ci\\appdata\\local\\programs\\python\\python38\\lib\\site-packages (0.1)\nC:\\Users\\r.acharyya.CI\\AppData\\Local\\Programs\\Python\\Python38>pip install transforms\nRequirement already satisfied: transforms in c:\\users\\r.acharyya.ci\\appdata\\local\\programs\\python\\python38\\lib\\site-packages (0.1)\nC:\\Users\\r.acharyya.CI\\AppData\\Local\\Programs\\Python\\Python38>python\nPython 3.8.1 (tags\/v3.8.1:1b293b6, Dec 18 2019, 23:11:46) [MSC v.1916 64 bit (AMD64)] on win32\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n\n\n\nimport transforms\nTraceback (most recent call last):\nFile \"\", line 1, in \nFile \"C:\\Users\\r.acharyya.CI\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\transforms\\__init__.py\", line 1, in \nfrom .safe_html import safe_html, bodyfinder\nFile \"C:\\Users\\r.acharyya.CI\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\transforms\\safe_html.py\", line 1, in \nfrom sgmllib import SGMLParser, SGMLParseError\nModuleNotFoundError: No module named 'sgmllib'","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":959,"Q_Id":65225151,"Users Score":0,"Answer":"I used the command pip install sgmllib3k and it solved my issue.","Q_Score":0,"Tags":"python,installation,pip","A_Id":72380755,"CreationDate":"2020-12-09T21:26:00.000","Title":"Why I cannot import transformations after installation in python using pip?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a very large file that I want to open and read specific lines from it. I always know what line number the data I want is at, 
but I don't want to have to read the entire file each time just to read that specific line.\nIs there a way you can only read specific lines in Python? Or what is the most efficient way possible to do this (i.e. read as little of the file as possible, to speed up execution)?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":568,"Q_Id":65226788,"Users Score":0,"Answer":"This is sadly not possible due to a simple reason: lines do not exist. What your text editor shows you as a line is just two pieces of text with a newline character in the middle (you can type it with \\n in Python). If all lines have the same length then it is possible, but I assume that is not the case here.\nThe least amount of reading is done if you only read up to your content, plus the content itself. That means you should not use read or readlines. Instead use readline to get and discard the unneeded lines, then use it once more to get what you want. That is probably the most efficient way.","Q_Score":1,"Tags":"python-3.x,large-files","A_Id":65226905,"CreationDate":"2020-12-10T00:17:00.000","Title":"Only read specific line numbers from a large file in Python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am considering working on a project to emulate circuits in order to get more invested in an electronic circuits class that I am currently taking. I have found it useful to create Python scripts to work on my homework calculations, but now I would like to make a website to share with classmates to use these scripts.\nFrom my understanding, I can only run JavaScript in the browser and can only use Python on the backend. I've gotten comfortable with using Python for math and have heard that it's better in general for mathematics. 
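The readline-and-discard approach from the specific-line answer above can be written compactly with itertools.islice, which skips lines without holding them in memory (`read_line` is a made-up helper name for this sketch):

```python
from itertools import islice

def read_line(path, lineno):
    """Return line `lineno` (1-based), or "" past end-of-file.
    Earlier lines are still scanned once, since line starts are unknown,
    but only one line is ever held in memory."""
    with open(path) as fh:
        return next(islice(fh, lineno - 1, lineno), "")
```

For repeated random access, the standard-library linecache module, or building a one-time index of byte offsets (via fh.tell() per line, then seek), avoids rescanning from the start on every lookup.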
In my assignments, the most that I've worked with is pi and long floating point numbers.\nSo...\nWould it be worth my time to create a python backend to run calculations? Or can I get by with browser `Javascript` for calculations. I am comfortable with both languages.\nAlso as a follow-up question, could I use Flask to run Python in the browser?\nThank you!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":126,"Q_Id":65228710,"Users Score":3,"Answer":"The only case in which one would definitely be preferable to the other would be if the calculations to be performed may get very expensive, in which case it would be much more user-friendly to have the server shoulder the load, rather than having the client do it (since low-end clients may become unresponsive for too long while calculating).\nIf that's a potential issue for your case, running the expensive code on the backend would be the way to go. (It doesn't have to be Python on the backend - it could even be server-side JS, or even PHP or whatever else you prefer and is performant enough for your needs.)\nIf that isn't something to worry about for your case, then feel free to choose whatever you like (calculate on the client or on the server), using whatever approach you're most comfortable with - there isn't an objective way to choose between them.","Q_Score":0,"Tags":"javascript,python,math","A_Id":65228748,"CreationDate":"2020-12-10T04:56:00.000","Title":"Would it be better to use Javascript or Python for calculations on my website","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am creating a multiplayer game and I would like the communication between my server program (written in python) and the clients (written in c# - Unity) to happen via UDP sockets.\nI recently came across the concept of UDP 
Multicast, and it sounds like it could be much better for my use case as opposed to using UDP Unicast, because my server needs to update all of the clients (players) with the same content every interval. So, rather than sending multiple identical packets to all the clients with UDP unicast, I would like to be able to only send one packet to all the clients using multicast, which sounds much more efficient.\nI am new to multicasting and my questions are:\nHow can I get my server to multicast to clients across the internet?\nDo I need my server to have a special public multicast IP address? If so how do I get one?\nIs it even possible to multicast across the internet? Or is multicasting available only within my LAN?\nAnd what are the pros and cons of taking the multicast approach?\nThank you all for your help in advance!!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":134,"Q_Id":65228805,"Users Score":1,"Answer":"You can't multicast on the Internet. Full stop.\nBasically, multicast is only designed to work when there's someone in charge of the whole network to set it up. As you noted, that person needs to assign the multicast IP addresses, for example.","Q_Score":1,"Tags":"python,sockets,unity3d,networking,udp","A_Id":65239846,"CreationDate":"2020-12-10T05:08:00.000","Title":"How can I get my server to UDP multicast to clients across the internet? Do I need a special multicast IP address?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to build an Ordered Probit model using the statsmodels package in Python. 
Used the following code to import:\nfrom statsmodels.miscmodels.ordinal_model import OrderedModel\nbut I am getting the following error:\nModuleNotFoundError: No module named 'statsmodels.miscmodels.ordinal_model'\nI have updated the package as well but the error persists.","AnswerCount":4,"Available Count":4,"Score":0.1973753202,"is_accepted":false,"ViewCount":3407,"Q_Id":65229307,"Users Score":4,"Answer":"Well, you can install this package this way:\n\npip install git+https:\/\/github.com\/statsmodels\/statsmodels","Q_Score":4,"Tags":"python,statsmodels","A_Id":65392118,"CreationDate":"2020-12-10T06:04:00.000","Title":"ModuleNotFoundError: No module named 'statsmodels.miscmodels.ordinal_model'","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to build an Ordered Probit model using the statsmodels package in Python. Used the following code to import:\nfrom statsmodels.miscmodels.ordinal_model import OrderedModel\nbut I am getting the following error:\nModuleNotFoundError: No module named 'statsmodels.miscmodels.ordinal_model'\nI have updated the package as well but the error persists.","AnswerCount":4,"Available Count":4,"Score":1.2,"is_accepted":true,"ViewCount":3407,"Q_Id":65229307,"Users Score":1,"Answer":"I know it is a pretty old discussion board, but I hope my post can be helpful.\nI recently ran into the same issue and solved it by doing the following:\n\npip3 install git+https:\/\/github.com\/statsmodels\/statsmodels. Just like @AudiR8 mentioned. 
However, if you are using an IDE with Python 3.0+, using pip3 is better.\n\nMake sure the package is installed in the correct directory, then turn off the IDE.\n\nReopen it and it should be working.\n\n\nHope it can be helpful!","Q_Score":4,"Tags":"python,statsmodels","A_Id":67006283,"CreationDate":"2020-12-10T06:04:00.000","Title":"ModuleNotFoundError: No module named 'statsmodels.miscmodels.ordinal_model'","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to build an Ordered Probit model using the statsmodels package in Python. Used the following code to import:\nfrom statsmodels.miscmodels.ordinal_model import OrderedModel\nbut I am getting the following error:\nModuleNotFoundError: No module named 'statsmodels.miscmodels.ordinal_model'\nI have updated the package as well but the error persists.","AnswerCount":4,"Available Count":4,"Score":0.0,"is_accepted":false,"ViewCount":3407,"Q_Id":65229307,"Users Score":0,"Answer":"An option that helped me was to restore my console's settings to default and then it worked. My IDE in particular was Spyder","Q_Score":4,"Tags":"python,statsmodels","A_Id":70053019,"CreationDate":"2020-12-10T06:04:00.000","Title":"ModuleNotFoundError: No module named 'statsmodels.miscmodels.ordinal_model'","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to build an Ordered Probit model using the statsmodels package in Python. 
I used the following code to import it:\nfrom statsmodels.miscmodels.ordinal_model import OrderedModel\nbut I am getting the following error:\nModuleNotFoundError: No module named 'statsmodels.miscmodels.ordinal_model'\nI have updated the package as well but the error persists.","AnswerCount":4,"Available Count":4,"Score":0.0996679946,"is_accepted":false,"ViewCount":3407,"Q_Id":65229307,"Users Score":2,"Answer":"pip install --upgrade --no-deps statsmodels worked for me.","Q_Score":4,"Tags":"python,statsmodels","A_Id":70498752,"CreationDate":"2020-12-10T06:04:00.000","Title":"ModuleNotFoundError: No module named 'statsmodels.miscmodels.ordinal_model'","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"We have a use case wherein we need to compare 100s of tables between two databases (Oracle and AWS Redshift) in a summarized way.\nThe tables are identical and we need to know if the tables match. Let me know if there is any easy way to compare the data in a performant way.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":27,"Q_Id":65229725,"Users Score":0,"Answer":"The best way I have developed for doing this is to make md5 signatures of every column in every table across both databases and compare these signatures (or just combine column signatures to make table signatures). I had to do a bit of coding to make sure that NULLs and empty strings mis-compare, and handle a few other corner cases, but nothing too extreme.\nRedshift can perform this signature analysis very quickly, but the biggest issue I have run into in the past is the speed of other databases in computing signatures. As you asked \"in a performant way\". 
So I have had to write \"lightweight\" hash functions in the past when source databases are just too wimpy to compute large numbers of md5s.","Q_Score":0,"Tags":"python,sql,database,compare","A_Id":65242354,"CreationDate":"2020-12-10T06:50:00.000","Title":"Best Way to compare a list of tables in a dynamic fashion from two DBs","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Problem statement:\nI have a Python 3.8.5 script running on Windows 10 that processes large files from multiple locations on a network drive and creates .png files containing graphs of the analyzed results. The graphs are all stored in a single destination folder on the same network drive. It looks something like this:\nSource files:\n\\\\drive\\src1\\src1.txt\n\\\\drive\\src2\\src2.txt\n\\\\drive\\src3\\src3.txt\nOutput folder:\n\\\\drive\\dest\\out1.png\n\\\\drive\\dest\\out2.png\n\\\\drive\\dest\\out3.png\nOccasionally we need to replot the original source file and examine a portion of the data trace in detail. This involves hunting for the source file in the right folder. The source file names are longish alphanumerical strings, so this process is tedious. In order to make it less tedious I would like to create symlinks to the original source files and save them side by side with the .png files. 
The output folder would then look like this:\nOutput files:\n\\\\drive\\dest\\out1.png\n\\\\drive\\dest\\out1_src.txt\n\\\\drive\\dest\\out2.png\n\\\\drive\\dest\\out2_src.txt\n\\\\drive\\dest\\out3.png\n\\\\drive\\dest\\out3_src.txt\nwhere \\\\drive\\dest\\out1_src.txt is a symlink to \\\\drive\\src1\\src1.txt, etc.\nI am attempting to accomplish this via\nos.symlink('\/\/drive\/dest\/out1_src.txt', '\/\/drive\/src1\/src1.txt')\nHowever, no matter what I try I get\n\nPermissionError: [WinError 5] Access is denied\n\nI have tried running the script from an elevated shell, enabling Developer Mode, and running\nfsutil behavior set SymlinkEvaluation R2R:1\nfsutil behavior set SymlinkEvaluation R2L:1\nbut nothing seems to work. There is absolutely no problem creating the symlinks on a local drive, e.g.,\nos.symlink('C:\/dest\/out1_src.txt', '\/\/drive\/src1\/src1.txt')\nbut that does not accomplish my goals. I have also tried creating links on the local drive per the above and then copying them to the network location with\nshutil.copy(src, dest, follow_symlinks = False)\nand it fails with the same error message. Attempts to accomplish the same thing directly from an elevated shell also fail with the same \"Access is denied\" error message:\nmklink \\\\drive\\dest\\out1_src.txt \\\\drive\\src1\\src1.txt\nIt seems to be some type of Windows permission error. However, when I run fsutil behavior query SymlinkEvaluation in the shell I get\n\nLocal to local symbolic links are enabled.\nLocal to remote symbolic links are enabled.\nRemote to local symbolic links are enabled.\nRemote to remote symbolic links are enabled.\n\nAny idea how to resolve this? 
I have been googling for hours, and according to everything I am reading it should work, except that it does not.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":379,"Q_Id":65230280,"Users Score":0,"Answer":"Open secpol.msc on the PC where the network share is hosted, navigate to Local Policies - User Rights Assignment - Create symbolic links, and add the account you use to connect to the network share. You need to log off from the shared folder (Control Panel - All Control Panel Items - Credential Manager, or maybe you have to reboot both computers) and try again.","Q_Score":2,"Tags":"python,windows,network-drive","A_Id":65233061,"CreationDate":"2020-12-10T07:37:00.000","Title":"Create symlink on a network drive to a file on same network drive (Win10)","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"Can I actually run a unit test without marking 'load demo data' on my database? If yes, what are the consequences? What are the best practices for unit testing? Can you do testing on your actual database? I'm using Odoo 12 and am now working on unittest2 for Python code. Please help me with this matter","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":341,"Q_Id":65230551,"Users Score":1,"Answer":"Yes, you can run tests without demo data. If you run Odoo with --test-enable then Odoo runs tests for all installed and updated modules (-u ...).\nI believe stock tests fail if you don't have demo data installed.\nNever run tests in a production database; it will leave marks on the database.\nI am running tests in isolation and without demo data. 
But I am running my own tests only.","Q_Score":0,"Tags":"python,unit-testing,odoo,odoo-12","A_Id":65232690,"CreationDate":"2020-12-10T07:58:00.000","Title":"Unit Test without Demo Data in odoo","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm working with tensorflow. Recently Arch replaced Python 3.8 with 3.9, and at the moment there is no tensorflow build for Python 3.9. Downgrading the Python version for the whole system for that single reason does not look like a good idea to me. My goal is to create a virtual environment with Python 3.8.\nIs there a way to have both (3.8 and 3.9) versions available in the system? The Python page of the Arch wiki doesn't mention that.\nEDIT:\nI know I can use: virtualenv -p python3.8 py38 but I need an interpreter in the system.","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":21392,"Q_Id":65230805,"Users Score":1,"Answer":"Downgrading the Python version for the whole system for that single reason does not look like a good idea to me.\n\nThis is a good observation. You should not modify the system installation of Python. After you install the AUR package that Ahacad mentions, I suggest using virtualenv or the standard venv package to create a virtual environment for your tensorflow projects.","Q_Score":17,"Tags":"python,linux,tensorflow,virtualenv,archlinux","A_Id":69398241,"CreationDate":"2020-12-10T08:19:00.000","Title":"How to install Python 3.8 along with Python 3.9 in Arch Linux?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using the Anaconda Python distribution. Under the Scripts folder, I see several ~.conda_trash files. 
Can these files be safely deleted?\nI am using Windows 10, anaconda 2020_07.","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":1244,"Q_Id":65230880,"Users Score":3,"Answer":"The .conda_trash files are generated on Windows when conda tries to delete a folder containing in-use files, as Windows can't delete files that are in use (I think Linux users don't encounter the .conda_trash problem).\nThere is a delete_trash function at boot that scans the entire tree in search of those files and deletes them.\nSo basically conda should be able to get rid of those files by itself. But if they are not needed anymore (and take too much time at boot), it shouldn't be a problem to manually delete them.","Q_Score":1,"Tags":"python,anaconda,anaconda3","A_Id":65273994,"CreationDate":"2020-12-10T08:24:00.000","Title":"Can .conda_trash files be safely deleted from Scripts folder in Anaconda?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using the Anaconda Python distribution. Under the Scripts folder, I see several ~.conda_trash files. 
Can these files be safely deleted?\nI am using Windows 10, anaconda 2020_07.","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":1244,"Q_Id":65230880,"Users Score":1,"Answer":"I have tested on my PC that the ~.conda_trash files can be deleted from the Scripts folder without affecting the Anaconda distribution.","Q_Score":1,"Tags":"python,anaconda,anaconda3","A_Id":65341522,"CreationDate":"2020-12-10T08:24:00.000","Title":"Can .conda_trash files be safely deleted from Scripts folder in Anaconda?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"As we know, an array stores data in memory in a contiguous manner, i.e. the data (the elements of the array) is stored sequentially and not at random addresses. And that's the reason why we cannot change the size of an array dynamically. But in the case of Python lists, the size can be changed when required. So, do Python lists also store data in a contiguous way, or do they use some different approach to storing data? Also, the size of all the data elements is the same in an array, for example in Java or C++, i.e. all the elements of the array consume the same amount of memory, and we clearly know that isn't the case in Python, as we can store different data types in the same list. So, basically, my question is: what is the fundamental difference between lists in Python and arrays in Java (or any other language like C++ or C)? I would really appreciate your help.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":627,"Q_Id":65231340,"Users Score":1,"Answer":"Although people compare Python lists to arrays in Java, actually lists are more like ArrayLists in Java (or Vectors in C++). 
Lists in Python store pointers to objects rather than the objects themselves, which is why they can store heterogeneous types ([1,2,3,'x',\"hello\"]). So, technically the list still stores elements of a specific size and type (pointers), but those pointers can point to objects of any type, size and value.\nThese variable-length lists in Python keep a pointer to themselves and their length in a list head structure, which works with exponential over-allocation, so that append\/remove operations have amortized constant time complexity.\nOver-allocating the memory is used to avoid having to resize the list too many times (say, after every append operation). The growth pattern of the list is something like: 0, 4, 8, 16, 25, 35, 46, 58, 72, 88, \u2026\nSimilarly, for remove\/pop operations, if the new size is less than half the allocated size then the list is shrunk. An additional cost of shifting the remaining elements is incurred when removing an element from a non-terminal position.","Q_Score":0,"Tags":"java,python,arrays,list,data-structures","A_Id":68166298,"CreationDate":"2020-12-10T08:57:00.000","Title":"How are python lists different from Java arrays","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm running a FastAPI script on Google App Engine; is there a way to get the CPU & memory usage for a single request? 
I'm trying to calculate how many requests a single instance can take.","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":1162,"Q_Id":65232808,"Users Score":2,"Answer":"No, you can't, and that is the wrong thing to measure anyway.\nYou want to run load testing, where you have a script that makes X requests\/second to your website over a period of time, to see what your website can handle.","Q_Score":1,"Tags":"python,google-app-engine,google-cloud-platform,fastapi","A_Id":65236297,"CreationDate":"2020-12-10T10:31:00.000","Title":"How to get CPU & Memory usage for a Python FastAPI script on Google App Engine","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I work on a Python script executing standard image recognition processing using tensorflow. I am using Python 3.8, Tensorflow 2, and IDLE launched from a virtual env.\nSince I am following a tutorial, I would like to augment and execute my script chunk by chunk, e.g.\n\nwrite the code for the data load\nexecute\nwrite the code for training\nexecute only training (without reloading the data)\n\nIs there a way to run a Python script chunk by chunk, without restarting the IDLE shell, and keeping results from the previous steps?","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":89,"Q_Id":65234016,"Users Score":0,"Answer":"In the IDLE editor, one can right-click on multiple lines to add breakpoints. Start IDLE's debugger in the Shell, using the Shell menu. Run the file. Click Go and execution will start and stop at the first breakpoint. 
Click Go again to run to the next breakpoint.","Q_Score":0,"Tags":"python,tensorflow,execution,python-idle","A_Id":65244896,"CreationDate":"2020-12-10T11:53:00.000","Title":"Run edited python script step by step \/ without reloading dataset","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I work on a Python script executing standard image recognition processing using tensorflow. I am using Python 3.8, Tensorflow 2, and IDLE launched from a virtual env.\nSince I am following a tutorial, I would like to augment and execute my script chunk by chunk, e.g.\n\nwrite the code for the data load\nexecute\nwrite the code for training\nexecute only training (without reloading the data)\n\nIs there a way to run a Python script chunk by chunk, without restarting the IDLE shell, and keeping results from the previous steps?","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":89,"Q_Id":65234016,"Users Score":1,"Answer":"Read\u2013eval\u2013print loop (REPL)\nYes, this is exactly what you are looking for.\nThis is an interactive environment that takes single user inputs, executes them, and returns the result to the user; a program written in a REPL environment is executed piecewise.\nThere are a lot of platforms which offer this:\nJupyter Notebook (local)\nGoogle Colab (online)\nI prefer Google Colab.\nIt's free and we don't have to waste our local system resources.","Q_Score":0,"Tags":"python,tensorflow,execution,python-idle","A_Id":65234730,"CreationDate":"2020-12-10T11:53:00.000","Title":"Run edited python script step by step \/ without reloading dataset","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am 
getting this error using Pygame on 64-bit Windows with Python 3.9:\nFailed loading libmpg123.dll: The specified module could not be found.\nI have tried the following:\n\nRestarting the IDE\nRestarting the computer\n\nThis problem only happens with the \".exe\" extension. It works fine when using the \".py\" extension. Any ideas on this?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":409,"Q_Id":65237071,"Users Score":0,"Answer":"I downloaded that DLL and put it in the folder; now it works like a charm.","Q_Score":0,"Tags":"python,dll,module,pygame","A_Id":65248328,"CreationDate":"2020-12-10T15:04:00.000","Title":"Failed loading libmpg123.dll: The specified module could not be found","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I would like to run an ordinal regression model in statsmodels, and someone posted this (from statsmodels.miscmodels.ordinal_model import OrderedModel), however it doesn't seem to work.\nI also checked the statsmodels website and ordered models don't appear there.\nHas anyone done an ordinal logistic regression in Python?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":812,"Q_Id":65237256,"Users Score":0,"Answer":"Due to the ease of analyzing model results (more R-like) I would like to use statsmodels to run an ordinal logistic regression as well, but in Python, unless you use rpy2, the only other option I know of is using mord.\nfrom mord import LogisticAT","Q_Score":1,"Tags":"python,statsmodels,ordinal","A_Id":65462824,"CreationDate":"2020-12-10T15:17:00.000","Title":"How do i run a ordinal regression using stats model?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and 
DevOps":0,"Web Development":0},{"Question":"I've just started to code using Spyder on Windows 10. When I run the program any way but through the prompt, for example directly through the Start menu, I keep getting this message on the Spyder console every time I run a script:\n\nReloaded modules: ipykernel, ipykernel._version, ipykernel.connect, ipykernel.kernelapp, zmq.eventloop, zmq.eventloop.ioloop, tornado.platform, tornado.platform.asyncio, tornado.gen, zmq.eventloop.zmqstream, ipykernel.iostream, jupyter_client.session, jupyter_client.jsonutil, dateutil, dateutil._version, dateutil.parser, dateutil.parser._parser, six, dateutil.relativedelta, dateutil._common, dateutil.tz, dateutil.tz.tz, dateutil.tz._common, dateutil.tz._factories, dateutil.tz.win, dateutil.parser.isoparser, jupyter_client.adapter, ipykernel.heartbeat, ipykernel.ipkernel, IPython.utils.tokenutil, ipykernel.comm, ipykernel.comm.manager, ipykernel.comm.comm, ipykernel.kernelbase, tornado.queues, tornado.locks, ipykernel.jsonutil, ipykernel.zmqshell, IPython.core.payloadpage, ipykernel.displayhook, ipykernel.eventloops, ipykernel.parentpoller, win32api, win32security, ntsecuritycon, ipykernel.datapub, ipykernel.serialize, ipykernel.pickleutil, ipykernel.codeutil, IPython.core.completerlib, storemagic, autoreload, spyder, spyder.pil_patch, PIL, PIL._version, PIL.Image, PIL.ImageMode, PIL.TiffTags, PIL._binary, PIL._util, PIL._imaging, cffi, cffi.api, cffi.lock, cffi.error, cffi.model\n\nI wish I could solve this, because starting Spyder through the prompt takes forever, but I don't know what I should look for. 
Would appreciate any help.","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":2608,"Q_Id":65238020,"Users Score":2,"Answer":"Tools > Preferences > Python Interpreter > deselect \"Show reloaded modules list\" under the User Module Reloader","Q_Score":0,"Tags":"python,spyder","A_Id":66608622,"CreationDate":"2020-12-10T16:01:00.000","Title":"Spyder ipkernel issues - Reload message","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have some interactive graphs made with plotly and would like to embed them on my \"webpage\" that I created using GitHub Pages. I tried something like {% include myfig.html %} but it tells me that the file is not found in the _includes folder. However, I cannot get my hands on this file...\nIs there any simple way to embed a plotly interactive graph in markdown using GitHub Pages? 
I've looked for it on the net but could not find anything that helps me.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":731,"Q_Id":65238361,"Users Score":0,"Answer":"The answer was actually quite simple: just use the command {% include_relative myfile.html %} instead of {% include myfile.html %}","Q_Score":1,"Tags":"python,plotly,markdown","A_Id":65267622,"CreationDate":"2020-12-10T16:21:00.000","Title":"Embed plotly interactive graphs in markdown file (index.md) with GitHub pages, not using Jekyll","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am using the Kivy Carousel to build an app.\nHowever, I would like to maintain manual control of the carousel and disable the swipe action (I will manually call carousel.load_next).\nI have looked through the documentation but cannot see any way to disable the swipe action.\nIf anyone could help me I would appreciate it.\nMany thanks,\nSeotha.","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":246,"Q_Id":65240081,"Users Score":0,"Answer":"Thanks Authur, I will mark as Answer.\nI also discovered I can subclass Carousel and override on_touch_move to do nothing:\nclass MyCarousel(Carousel): def on_touch_move(self, touch): pass","Q_Score":1,"Tags":"python,kivy,carousel","A_Id":65241335,"CreationDate":"2020-12-10T18:16:00.000","Title":"Disable Swipe Actions on Kivy Carousel","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Is it possible in Python with the latest CV2 to use CV2 to directly bind mjpeg output from the camera to a stream without having to do source -> cv2.read() -> numpy array -> 
cv2.imencode(\".jpg\") -> mjpeg? I am looking to do source -> mjpeg in a pythonic way.\nLatency is a major issue, so any advice, including options beyond CV2, would be appreciated.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":98,"Q_Id":65243321,"Users Score":0,"Answer":"No. OpenCV is not a media library. Its video I\/O is not intended or made for this.\nI would advise using PyAV, which is the only proper Python wrapper around ffmpeg's libraries that I know of. PyAV comes with a few examples to give you a feel for how it works.\nThe basic problem then is how to use ffmpeg's libraries to enumerate available video devices, query their modes, select the mode you want, and move packets around.","Q_Score":1,"Tags":"python,python-3.x,opencv","A_Id":65244382,"CreationDate":"2020-12-10T22:40:00.000","Title":"Python Open CV image streaming without decoding and reencoding?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"We use Db2 at our company. I would like to find a way to query Db2 for data and display that data in Grafana, for example to get the number of completed transactions.\nI see Grafana supports MySQL natively but not Db2. 
Is there a way to just add the Db2 driver\/libraries?\nWorst case, is writing queries in Python and then simply displaying that recorded data with Grafana an effective solution?\nThanks","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":402,"Q_Id":65248997,"Users Score":0,"Answer":"I don't know if you found what you need, but in case you didn't, you might consider using Db2 REST services and the Grafana plugin 'Simple JSON Datasource'.","Q_Score":0,"Tags":"python,sql,db2,grafana","A_Id":69989267,"CreationDate":"2020-12-11T09:45:00.000","Title":"Is there a way to display db2 data in grafana","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I was able to use Python's IDLE earlier. But when I tried to open it now, I was unable to open Python 3.7 IDLE. I have tried uninstalling and reinstalling Python (different versions) and deleting the .idlerc folder. I am using Windows 10.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":180,"Q_Id":65250316,"Users Score":1,"Answer":"If you still have your Python installer in your Downloads folder, you can repair your version of Python:\n\nDouble-click on the Python installer. 
If your version of Python and the installer's version are the same, there should be an option to repair; click on it and wait.\n\nAfter it's finished, reboot\/restart your PC\/laptop.\n\nTry running IDLE again.\n\n\nIf this doesn't solve your problem, or you have any queries or doubts about it, feel free to ask!\nHappy coding!","Q_Score":1,"Tags":"python,windows-10,python-idle","A_Id":65250439,"CreationDate":"2020-12-11T11:16:00.000","Title":"Python IDLE no longer opens after clicking on its icon","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"API Gateway is connected to a Lambda function that sleeps for 4 seconds. When I execute the API 20 times in 1 second, the first few calls complete the job in 5 seconds, but the other calls take more time, e.g. 12 sec, 20 sec, and sometimes run into timeout errors.\n\nAre the API calls dependent on the previous calls, i.e. must the previous calls be completed for the 20th API call to get executed?\nHow do I resolve this problem?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":338,"Q_Id":65250756,"Users Score":1,"Answer":"Some things to consider when working with AWS API Gateway and AWS Lambda.\nCreating an instance (execution environment) of your Lambda function takes a while. This is called a cold start. If your Lambda runs in a VPC this might take longer. It shouldn't take more than a couple of seconds though.\nOnce that Lambda runs, API Gateway can reuse this instance. If there isn't traffic for some time the Lambda goes back to cold storage. The next request through API Gateway will create another instance of the Lambda.\nGiven enough traffic, AWS Lambda triggers concurrent executions, which means there will be more Lambda instances. This is called automatic scaling. Again, it takes a cold start. 
You have no control over when, or how many, Lambda instances are running. Welcome to serverless.\nAWS Lambda limits its execution time to 15 minutes. Your function cannot exceed this limit. It will be shut down hard.\nAPI Gateway has an integration timeout of 30 seconds. This means your Lambda needs to finish within 30 seconds. Otherwise API Gateway returns a 502 while your function is still running.","Q_Score":0,"Tags":"python,amazon-web-services,amazon-ec2,aws-lambda,aws-api-gateway","A_Id":65254999,"CreationDate":"2020-12-11T11:44:00.000","Title":"Does AWS API gateway calls takes long if executed multiple times in 1 second?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm stuck between a rock and a hard place.\nI have been given a very restricted Python 2\/3 environment where both the os and subprocess libraries are banned. I am testing file upload functionality using Python Selenium; creating the file is straightforward. However, when I use the method driver.send_keys(xyz), send_keys expects a full path rather than just a file name. Any suggestions on how to work around this?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":52,"Q_Id":65250817,"Users Score":0,"Answer":"With no way of using the os module, it would seem you're SOL unless the placement of your script is static (i.e. 
you define the current working directory as a constant) and you then handle all the path\/string operations within that directory yourself.\nYou won't be able to explore what's in the directory, but you can keep tabs on any files you create and just store the paths yourself; it will be tedious, but there's no reason why it shouldn't work.\nI don't think you'll be able to delete files, but you should be able to clear their contents.","Q_Score":0,"Tags":"python,python-3.x,selenium-webdriver","A_Id":65250942,"CreationDate":"2020-12-11T11:47:00.000","Title":"Get current directory - 'os' and 'subprocess' library are banned","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to install PyAudio but it needs a Python 3.6 installation and I only have Python 3.9 installed. I tried to switch using brew and pyenv but it doesn't work.\nDoes anyone know how to solve this problem?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":3401,"Q_Id":65250951,"Users Score":1,"Answer":"You may install multiple versions of Python 3.x side by side as long as the minor version (the x) is different, and you can delete the version you no longer need at any time, since they are kept separate from each other.\nSo go ahead and install Python 3.6, since it is a different minor version from 3.9. You can then delete 3.9 if you would like, since it would otherwise be used over 3.6 by the system unless you specify the version you want to run.","Q_Score":0,"Tags":"python,python-3.x,pyaudio","A_Id":65251064,"CreationDate":"2020-12-11T11:57:00.000","Title":"How to downgrade python from 3.9.0 to 3.6","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics 
and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I've tried to Google this but nothing helpful is coming up.\nBasically I have an src folder, and within it are two modules - app.py and app_test.py. In app, there are about 17 functions, and I've written a test for one of them in app_test. A very simple one that checks whether a filepath is being created properly.\nBut when I click on \"Run Tests\", what happens? My whole app.py code runs. I tried with UnitTests instead and get an error that a relative path doesn't exist... which is not referenced anywhere in app_test or the simple test that exists in there on its own. The same thing happens if I run pytest src\/app_test.py from the command line.\nI'm assuming I've set something up wrong but can't work out what!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":18,"Q_Id":65254268,"Users Score":0,"Answer":"hoefling solved it in the comments! Probably super obvious to most, but in case anybody else is struggling:\nIt is probably because you have code in app.py at script level that runs on import; it should be put under if __name__ == \"__main__\":","Q_Score":1,"Tags":"python-3.x,unit-testing,pytest,vscode-extensions","A_Id":65254831,"CreationDate":"2020-12-11T15:37:00.000","Title":"Why would pytest be running full code rather than tests when I click \"Run Tests\"?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Okay, so I've been seeing a TON of lambda functions in Python code. I keep looking at previously asked questions about lambdas, but they don't explain what they DO. Do they set a variable? For example, if I did lambda x: x + 1, would it set the variable x to equal x+1? Also, how do you print the value of a lambda? 
Thanks","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":57,"Q_Id":65254330,"Users Score":0,"Answer":"Lambdas are anonymous functions.","Q_Score":0,"Tags":"python,function,lambda","A_Id":65254401,"CreationDate":"2020-12-11T15:41:00.000","Title":"What exactly DOES a Lambda do?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have my twitter archive downloaded and wanted to run word2vec to experiment with most-similar words, analogies, etc. on it.\nBut I am stuck at the first step - how to convert a given dataset \/ csv \/ document so that it can be input to word2vec? i.e. what is the process to convert data to glove\/word2vec format?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":78,"Q_Id":65255029,"Users Score":1,"Answer":"Typically, implementations of the word2vec & GloVe algorithms do one or both of:\n\naccept a plain text file, where tokens are delimited by (one or more) spaces, and text is considered one newline-delimited line at a time (with lines that aren't \"too long\" - usually a short article, paragraph, or sentence per line)\n\nhave some language\/library-specific interface for feeding texts (lists-of-tokens) to the algorithm as a stream\/iterable\n\n\nThe Python Gensim library offers both options for its Word2Vec class.\nYou should generally try working through one or more tutorials to get a working overview of the steps involved, from raw data to interesting results, before applying such libraries to your own data. 
And, by examining the formats used by those tutorials \u2013 and the extra steps they perform to massage the data into the formats needed by exactly the libraries you're using \u2013 you'll also see ideas for how your data needs to be prepared.","Q_Score":0,"Tags":"python,nlp,stanford-nlp,word2vec","A_Id":65256256,"CreationDate":"2020-12-11T16:24:00.000","Title":"How can I Convert a dataset to glove or word2vec format?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am making a small program in which I need a few functions to check for something in the background.\nI used the threading module, and all those functions indeed run simultaneously and everything works perfectly until I start adding more functions. As the threading module makes new threads, they all stay within the same process, so when I add more, they start slowing each other down.\nThe problem is not with the CPU, as its utilization never reaches 100% (i5-4460). I also tried the multiprocessing module, which creates a new process for each function, but then it seems that variables can't be shared between different processes, or I don't know how. 
(a newly started process for each function seems to take existing variables with it, but my main program cannot access any of the changes that the function in the separate process makes, or even new variables it creates)\nI tried using the global keyword, but it seems to have no effect in multiprocessing as it does in threading.\nHow could I solve this problem?\nI am pretty sure that I have to create new processes for those background functions, but I need to get some feedback from them, and that part I don't know how to solve.","AnswerCount":3,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":36,"Q_Id":65255247,"Users Score":0,"Answer":"I ended up using multiprocessing.Value","Q_Score":0,"Tags":"python,multithreading,module,process,global","A_Id":65280107,"CreationDate":"2020-12-11T16:40:00.000","Title":"Running functions simultaneously in python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am working on a python project which involves real time communication between a Raspberry Pi device and a cloud server. After receiving data in the cloud server in real time, I have to display the output in a web application in real time as well. What is the correct way to do this?\nAs web applications use socket.io (web sockets) and communication between the Raspberry Pi and the cloud can be done through a normal socket, I am confused about whether to proceed with a normal socket or a web socket.\nAny feedback will be greatly appreciated.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":31,"Q_Id":65255354,"Users Score":0,"Answer":"It all depends on whether you want to work with the TCP stack or the HTTP stack; while TCP sockets have a smaller footprint, HTTP sockets have better support from high-level libraries. 
If you don't need multiple simultaneous connections or have strict latency requirements, they are both identical, so go with the one that is easier to work with, i.e. the web socket.","Q_Score":0,"Tags":"python,sockets,websocket,raspberry-pi","A_Id":65257643,"CreationDate":"2020-12-11T16:48:00.000","Title":"What is the correct method to communicate in real time between Raspberry PI and Cloud Server and then display output in web app?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm wondering if it's possible to somehow convert a dash app with a page of graphs and assigned callbacks into a single HTML page to be able to send it around and still keep the functionality of the callbacks. So to say, convert all python code to javascript which is then embedded into the static HTML page.\nI was searching for this already for quite some time, but couldn't find a solution to this, or maybe it's not even possible.\nAny help is highly appreciated!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":786,"Q_Id":65256064,"Users Score":1,"Answer":"It is not currently possible to convert a dash page into a static html file because of how the API runs your code in the browser. This is what gives dash its interactive capabilities.\nIf you would like to download a plotly graph into an html file then you'll have to create it using vanilla plotly. 
This html will still retain the interactive features, like zoom, pan, etc., but you won't have the interaction that is utilized in a dash app.","Q_Score":2,"Tags":"python,plotly,plotly-dash","A_Id":65803402,"CreationDate":"2020-12-11T17:36:00.000","Title":"How to turn a dash plotly hosted webpage into a single static html page?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"So I am doing an assignment which is to connect to a database and do operations on it. For that I have chosen sqlite3, and for connecting to the database I found the ODBC driver for python is pyodbc.\nMy questions are, what is the difference between using pyodbc and doing it using the library sqlite3, i.e., import sqlite3? And is the pyodbc driver integrated in sqlite3?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":571,"Q_Id":65257298,"Users Score":0,"Answer":"The big advantage of using Python's built-in sqlite3 module is that it is built-in; there are no other dependencies. If your app uses pyodbc and SQLite ODBC then both of those external components must be available for the application to function.\nIf this is a personal project then you can obviously install what is necessary, but if this is ever going to be widely deployed then you will need to deal with the additional requirements if you choose pyodbc. Specifically, your [Python] app can register a dependency on pyodbc such that pip install your_app also installs pyodbc, but it cannot (practically) accommodate the requirement for the SQLite ODBC driver automatically, so your installation instructions will need to address that.\n\nAnd is the pyodbc driver integrated in sqlite3?\n\nNo. 
The SQLite ODBC driver is completely separate from both Python [sqlite3] and pyodbc.","Q_Score":1,"Tags":"python,sqlite,pyodbc","A_Id":65260589,"CreationDate":"2020-12-11T19:14:00.000","Title":"Pyodbc and sqlite3 differences","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to get the source of a webpage using python requests.get(), but sometimes it retrieves the page source before some elements even load. Is there a way to make requests.get() wait until certain elements appear? The page does not load on javascript and I don't need selenium suggestions.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":121,"Q_Id":65258258,"Users Score":0,"Answer":"When you say \"retrieves page before elements load\" you are implying that some elements are loaded or retrieved via JavaScript. What you get with requests.get() is the initial document and the same thing your browser gets. The difference is that the browser can then read and execute instructions contained in the initial document and fetch more resources, while python cannot.\nYou either have to use selenium or go into your browser development tools and see if you can maybe find some endpoints from which the data is getting pulled down.","Q_Score":0,"Tags":"python,python-requests","A_Id":65258350,"CreationDate":"2020-12-11T20:33:00.000","Title":"Make requests.get() wait until page is fully loaded?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm running CentOS 8 that came with native Python 3.6.8. I needed Python 3.7 so I installed Python 3.7.0 from sources. 
Now, the python command is unknown to the system, while the commands python3 and python3.7 both use Python 3.7.\nAll good until now, but I can't seem to get pip working.\nThe command pip returns command not found, while python3 -m pip, python3.7 -m pip, python3 -m pip3, and python3.7 -m pip3 return No module named pip. The only pip command that works is pip3.\nNow, whatever package I install via pip3 does not seem to install properly. For example, pip3 install tornado returns Requirement already satisfied, but when I try to import tornado in Python 3.7 I get ModuleNotFoundError: No module named 'tornado'. The same cannot be said when I try to import it in Python 3.6, which works flawlessly. From this, I understand that my pip only works with Python 3.6, and not with 3.7.\nPlease tell me how I can use pip with Python 3.7, thank you.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":38,"Q_Id":65258596,"Users Score":0,"Answer":"I think the packages you install will be installed for the previous version of Python. I think you should update the native OS Python like this:\n\nInstall the python3.7 package using apt-get:\nsudo apt-get install python3.7\nAdd python3.6 & python3.7 to update-alternatives:\nsudo update-alternatives --install \/usr\/bin\/python3 python3 \/usr\/bin\/python3.6 1\nsudo update-alternatives --install \/usr\/bin\/python3 python3 \/usr\/bin\/python3.7 2\nUpdate python3 to point to Python 3.7:\nsudo update-alternatives --config python3\nTest the version:\npython3 -V","Q_Score":0,"Tags":"python,python-3.x,unix,pip,centos","A_Id":65259268,"CreationDate":"2020-12-11T21:06:00.000","Title":"Python not using proper pip","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm running CentOS 8 that came with native Python 3.6.8. 
I needed Python 3.7 so I installed Python 3.7.0 from sources. Now, the python command is unknown to the system, while the commands python3 and python3.7 both use Python 3.7.\nAll good until now, but I can't seem to get pip working.\nThe command pip returns command not found, while python3 -m pip, python3.7 -m pip, python3 -m pip3, and python3.7 -m pip3 return No module named pip. The only pip command that works is pip3.\nNow, whatever package I install via pip3 does not seem to install properly. For example, pip3 install tornado returns Requirement already satisfied, but when I try to import tornado in Python 3.7 I get ModuleNotFoundError: No module named 'tornado'. The same cannot be said when I try to import it in Python 3.6, which works flawlessly. From this, I understand that my pip only works with Python 3.6, and not with 3.7.\nPlease tell me how I can use pip with Python 3.7, thank you.","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":38,"Q_Id":65258596,"Users Score":1,"Answer":"It looks like your python3.7 does not have pip.\nInstall pip for your specific python by running python3.7 -m easy_install pip.\nThen, install packages by python3.7 -m pip install \nAnother option is to create a virtual environment from your python3.7. The venv brings pip into it by default.\nYou create the venv by python3.7 -m venv ","Q_Score":0,"Tags":"python,python-3.x,unix,pip,centos","A_Id":65259124,"CreationDate":"2020-12-11T21:06:00.000","Title":"Python not using proper pip","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I use Anaconda on windows10.\nEvery time I create a new environment with the conda command without specifying a particular python version, it just seems to install different versions of python.\nWhy does this happen? 
How does the conda create command decide which version of python to fetch?\nExample:\nconda create -n env_name1 -> activate env_name1 -> python --version -> python 3.9.1\nconda create -n env_name2 -> activate env_name2 -> python --version -> python 3.8.3","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":376,"Q_Id":65261742,"Users Score":0,"Answer":"Make sure you have deactivated env_name1 before creating the new env_name2, because you may be using different configuration files (.condarc) and channels.\nTry creating all environments from the base one.","Q_Score":1,"Tags":"python,anaconda,conda,environment","A_Id":66163924,"CreationDate":"2020-12-12T04:54:00.000","Title":"Why does default python version change every time I create env with conda?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"ImportError: cannot import name 'event' from 'sqlalchemy'\nI tried pip install -U sqlalchemy and even uninstalling and reinstalling. Don't know what the problem is.\nBasically it's a system file and not my project file.\nBelow are the versions of each -\nFlask-SQLAlchemy 2.4.4\nSQLAlchemy 1.3.20","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":470,"Q_Id":65263233,"Users Score":1,"Answer":"I had a file sqlalchemy.py in my application folder (I created the file). It was masking the actual package. 
Renaming the file solved the problem.","Q_Score":1,"Tags":"python,sqlalchemy,flask-sqlalchemy","A_Id":67037349,"CreationDate":"2020-12-12T09:04:00.000","Title":"ImportError: cannot import name 'event' from 'sqlalchemy'","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I made a very simple function in Python 3.9 (f(x, y, z) = (x and y) or (not x and z)) that prints the result of this operation. What weirds me out is that in some cases it prints 0 and in some others it prints False. For example, f(0, 1, 0) prints 0 but f(1, 0, 1) prints False. Why is that?","AnswerCount":1,"Available Count":1,"Score":0.537049567,"is_accepted":false,"ViewCount":49,"Q_Id":65270050,"Users Score":3,"Answer":"and returns the first value if it's falsey, otherwise it returns the second value. or returns the first value if it's truthy, otherwise it returns the second value. So, and and or won't transmute types. not, however, will always convert to a boolean of the opposite truthiness. So, not 1 becomes False, False and z becomes False, and if x and y is not truthy, then the expression will evaluate to False. 
That's why you can get different data types.","Q_Score":1,"Tags":"python,python-3.x,boolean,boolean-logic,boolean-expression","A_Id":65270071,"CreationDate":"2020-12-12T21:33:00.000","Title":"Why does the function sometimes return 0 and sometimes return False?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I can't find any way to set up a TTL on a document within AWS Elasticsearch using the python elasticsearch library.\nI looked at the code of the library itself, and there is no argument for it, and I have yet to see any answers on google.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":460,"Q_Id":65271287,"Users Score":1,"Answer":"There is none, but you can use an index management policy if you like, which will operate at the index level, not at the doc level. You have a bit of wriggle room though, in that you can create a pattern data-* and have more than 1 index, data-expiring-2020-..., data-keep-me.\nYou can apply a template to the pattern data-expiring-* and set a transition to delete an index after, let's say, 20 days. If you roll over to a new index each day, you will see the oldest index being deleted at the end of the day once it is over 20 days old.\nThis method is much preferable, because deleting individual documents could consume large amounts of your cluster's capacity, as opposed to deleting entire shards. Other NoSQL databases such as DynamoDB operate in a similar fashion; often what you can do is add another field to your docs such as deletionDate and add that to your query to filter out docs which are marked for deletion, but are still alive in your index because a deletion job has not yet cleaned them up. 
That is how the TTL in DynamoDB behaves as well: data is not deleted the moment the TTL expires, but rather in batches to improve performance.","Q_Score":2,"Tags":"python,amazon-web-services,elasticsearch","A_Id":65282126,"CreationDate":"2020-12-13T00:30:00.000","Title":"Is there a way to set TTL on a document within AWS Elasticsearch utilizing python library?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"The issue I am having with bcrypt is that the module can't be imported into the Pythonista app on iOS, which is where I need to run my script. What else would you recommend similar to bcrypt that can generate a random salt, and has something like the checkpw() function built-in to quickly validate salted passwords?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":60,"Q_Id":65271323,"Users Score":0,"Answer":"If pbkdf2 is natively available, I'd use that before trying to roll your own bcrypt. When its work factors are sufficiently large, it's still a solid choice when bcrypt or scrypt aren't available, and using it directly is safer than trying to recreate something else by hand.\nNot knowing more about your use case, a general recommendation: use pbkdf2 with a sufficiently large number of rounds to take about a half-second on the upper end of the processor throughput of your target devices. This keeps the UX within tolerable wait times while still providing reasonable resistance to offline attack.\nI'd also recommend randomizing that number of rounds slightly over a range (like a thousand). 
For example, if you settled on 200,000 as having an acceptable 500ms delay, I'd randomly pick a value between 200,000 to 202,000 (or something like that) - whatever is needed to ensure that most users will have different rounds from each other (assuming that all user passwords might be aggregated into a single location that could be compromised and the hashes stolen). This is because some of the newer \"associative\" \/ \"correlation\" attacks only work well against a large set of hashes when all of the cost factors across that set of hashes are the same.\nLong term, also be sure that your code easily accepts a variable floor and ceiling for the number of rounds, so you can choose to increase your number of rounds over time as processors advance. (You could even get fancy and dynamically calculate the range of rounds based on the processor that the password is being created on, so that it's future ready without any additional intervention.)","Q_Score":0,"Tags":"python-3.x,hash,passwords,pythonista","A_Id":65272038,"CreationDate":"2020-12-13T00:38:00.000","Title":"What hashing algorithms would you recommend I use in Python3 that can generate a random salt, other than bcrypt?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have written integration tests for lambdas that hit the dev site (in AWS). The tests are working fine. Tests are written in a separate project that uses a request object to hit the endpoint to validate the result.\nCurrently, I am running all the tests from my local. Lambdas are deployed using a separate Jenkins job.\nHowever, I need to generate a code coverage report for these tests. I am not sure how I can generate the code coverage report as I am directly hitting the dev URL from my local. 
I am using Python 3.8.\nAll the lambdas have lambda layers which provide a database connection and some other common business logic.\nThanks in advance.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":498,"Q_Id":65271425,"Users Score":1,"Answer":"Code coverage is probably not the right metric for integration tests. As far as I can tell you use integration tests to test your requirements\/use cases\/user stories.\nImagine you have an application with a shopping cart feature. A user has 10 items in that shopping cart and now deletes one of those items. Your integration test would make sure that after this operation only (the correct) 9 items are left in the shopping cart.\nFor this kind of testing it is not relevant which\/how much code was run. It is more like a black box test. You want to know that for a given \"action\" the correct \"state\" is created.\nCode coverage is usually something you use with unit tests. For integration tests I think you want to know how many of your requirements\/use cases\/user stories are covered.","Q_Score":1,"Tags":"python-3.x,aws-lambda,integration-testing","A_Id":65274825,"CreationDate":"2020-12-13T00:56:00.000","Title":"Code Coverage Report for AWS Lambda Integration test using Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I want to visualize such 4D data in the 2D plane. 
Is there any way to do that?","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":449,"Q_Id":65273122,"Users Score":0,"Answer":"Depending on the data type, you can plot them on a 2d field with the following dimensions:\nDim 1-2: X and Y axes\nDim 3: plotted point size\nDim 4: plotted point color gradient\nOr if you do it with 3D software, you can plot in 3D, with all the points plotted with a color gradient. While rotating the field, you can have vision on all dimensions.","Q_Score":2,"Tags":"python,data-analysis","A_Id":69504433,"CreationDate":"2020-12-13T06:54:00.000","Title":"How to visualize 4D data in 2D plane?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to connect a Django project and a .NET Core project because I want to take advantage of the good libraries available in python, so I can make one strong .NET Core project. So please throw some light on it.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":823,"Q_Id":65273433,"Users Score":1,"Answer":"I am afraid there is no way to run both Django\/Python and .Net\/C# in the same process. But there is at least one other option.\nThat option is using a MicroService architecture. That means you create two separate projects, one for .Net and another for Django\/Python. Then make these two projects talk to each other.\nThey can communicate with each other in several ways. The most common way is to communicate via REST. It means each project provides the other side a bunch of APIs. The other side can consume (call) the API to receive or send the required data.\nAnother way to communicate is to use a shared database. 
Another is to use messaging solutions like RabbitMQ.\nIn a MicroService architecture, you can host each one on a convenient web server or, if you enjoy fancy technologies, you can use docker.\nUpdate\nA practical example would be like this:\n\nYour Python side has the functionality of calculating the direct distance between points A and B.\nYou create a Django app with a REST API called calc\nIt gets A and B via query string. Just like http:\/\/localhost:5000\/api\/calc?lata=53.123&longa=34.134&latb=53.999&longb=34.999 (consider a and b in query parameters)\nCreate a .Net app either as web or desktop\nWith the .Net app, call the calc API via utilities like HttpClient\nNow, you have the results in your .Net part","Q_Score":2,"Tags":"python,c#,asp.net,django,asp.net-core-webapi","A_Id":65274376,"CreationDate":"2020-12-13T07:43:00.000","Title":"How to connect Django and .net core project as a single application?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":1},{"Question":"So, I have a local office that sends us a list of new subscribers from their location daily. 
But instead of just sending over the new data, they just send over a csv with what I think is the last 25,000 records - I'm guessing the clerk just clicks some default option.\nI have a simple python script which inserts this data into a local mysql db, with sub_id set as a unique index to prevent duplicates.\nMy problem, however, is that I have to send the new subscriber data over to another team.\nI want to add this functionality to the existing python script, and the solution I could think of is to add a \"NEW\" status when inserting to the db, then export all rows with \"NEW\" status, and then update the \"NEW\" status to \"EXPORTED\".\nThis feels inefficient to me -\nIs there a better approach to this?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":38,"Q_Id":65273529,"Users Score":0,"Answer":"Add a created_at timestamp to the table and send everything with a created_at greater than when the process started. This is the most robust option, and it also adds a bit of information to the database which may be useful later; you can figure out which row was made by which import.\nOr, have the Python script remember the IDs of every row it inserted. Then select only those rows.\nAlternatively, have the Python script build the report as it inserts the data.","Q_Score":1,"Tags":"python,mysql,python-3.x","A_Id":65273902,"CreationDate":"2020-12-13T07:57:00.000","Title":"Insert only unique data to db and export new data to csv","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Suppose there is a list of five elements: a = [1,2,4,5,6], 
and I want to insert element 3 in between, so what will happen if I use the insert() function of Python?\n\n1. Separate memory is allocated for element 3 only, and its reference is attached to the existing list\n2. New memory will be allocated for the whole list (approx. 1.3 times the previous memory) and element 3 is attached to it\n\nIf option 1, then why is the time complexity of insert() O(n) in Python, not O(1)?","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":73,"Q_Id":65273715,"Users Score":2,"Answer":"Solution 2 is correct - it will take more memory.\nPython has pointers for everything. So when you create a list, say foo = [3, 4, 5], python attaches a unique id to this list, foo, and points each index (position) in the list to the id of the value at the index.\nWhen you do foo.insert(add_index, val), you are changing the pointers for all indices after add_index. This is why the time complexity is O(n), since n pointers may need to be changed depending on the add_index variable.\nMemory doesn't scale linearly with what you are adding. The memory used by a list depends on what is being added (str vs float vs int ...) and whether the element is unique (create new pointer vs reference old pointer). In your case, it's correct that the list will take approximately (6\/5) times the memory.","Q_Score":0,"Tags":"python","A_Id":65274000,"CreationDate":"2020-12-13T08:31:00.000","Title":"Python internal memory management of list","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to communicate with a Cylon device (UC32) via the Bacnet protocol (BAC0), but I cannot discover any device. 
I also tried with Yabe and it did not produce any result.\nIs there any document describing how to create my communication driver?\nOr any technique which can be used to connect with this device?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":66,"Q_Id":65276725,"Users Score":1,"Answer":"(Assuming you've set the default gateway address - for it to know where to return its responses, but only if necessary.)\nIf we start with the assumption that maybe the device is not (by default) listening for broadcasts or having some issue sending it - a bug maybe (although probably unlikely), then you could send a unicast\/directed message, e.g. use the Read-Property service to read back the (already known) BOIN (BACnet Object Instance Number), but you would need a (BACnet) client (application\/software) that provides that option, like possibly one of the 'BACnet stack' cmd-line tools or maybe via the (for most part) awesome (but advanced-level) 'VTS (Visual Test Shell)' tool.\nAs much as it might be possible to discover what the device's BOIN (BACnet Object Instance Number) is, it's better if you know it already (- as a small few devices might not make it easy to discover - i.e. you might have to resort to using a round-robin bruteforce approach, firing lots of requests - one after the other with only the BOIN changed\/incremented by 1, until you receive\/see a successful response).","Q_Score":0,"Tags":"python,iot,bacnet","A_Id":67459367,"CreationDate":"2020-12-13T14:28:00.000","Title":"How to communicate with Cylon BMS controller","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm running a few programs (NodeJS and python) on my server (Ubuntu 20.04). I use the PM2 CLI to create and manage processes. Now I want to manage all processes through an ecosystem file. 
But when I run pm2 ecosystem, it just creates a sample file. I want to save my CURRENT PROCESSES to the ecosystem file and modify it. Does anyone know how to save the current pm2 processes as an ecosystem file?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":388,"Q_Id":65277107,"Users Score":1,"Answer":"If you use pm2 save, pm2 creates a file named ~\/.pm2\/dump.pm2 with all running processes (with too many parameters, as it saves the whole environment in the file).\nEdit:\nThis file is similar to the output of the command pm2 prettylist","Q_Score":0,"Tags":"python,node.js,ubuntu,pm2","A_Id":66128586,"CreationDate":"2020-12-13T15:07:00.000","Title":"Create PM2 Ecosystem File from current processes","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"So I was trying to host a simple python script on Heroku.com, but encountered this error. After a little googling, I found this on Heroku's website: git, Heroku: pre-receive hook declined, Make sure you are pushing a repo that contains a proper supported app ( Rails, Django etc.) and you are not just pushing some random repo to test it out.\nThe problem is I have no idea how these work, and the few tutorials I looked up were for more detailed use of those frameworks. What I need to know is how I can use them with a simple 1-file python script. Thanks in advance.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":33,"Q_Id":65280380,"Users Score":0,"Answer":"Okay I got it. 
It was about some unused modules in requirements.txt; I'm an idiot for not reading the output properly.","Q_Score":0,"Tags":"python,git,heroku","A_Id":65280586,"CreationDate":"2020-12-13T20:33:00.000","Title":"Git, heroku, pre-receive hook declined","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Since this is a design question I don't think code would do it justice.\nI'm making a program to process and log a high bandwidth stream of data and concurrently trying to observe that data live. (like a production line and an inspector watching the production line)\nI want to distribute the load between cores on my computer to leave room for future functionality but I can't figure out if I can use multiprocessing for this. It seems most examples of multiprocessing all have initial data sets and outputs and don't need to be actively communicating throughout their lifetime.\nAm I able to use multiprocessing to actively communicate between processes or is that a bad idea and I should stick with multithreading?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":84,"Q_Id":65281321,"Users Score":0,"Answer":"If you expect the computation needed to process the full load of signal processing exceeds the capacity of a single core, you should consider spreading the load over multiple cores using multi-processing.\nIf you expect the computation needed to fit on a single core, but you expect many slow or blocking I\/O operations to hold up the work, you should consider multi-threading.\nIf overall performance is an issue, you should reconsider writing the actual solution in pure regular Python and instead look for an implementation in another language, or in a version of Python that gets you a solution closer to the hardware. 
You can of course still come up with a result that would be usable from a regular Python program.","Q_Score":0,"Tags":"python,multithreading,multiprocessing","A_Id":65281512,"CreationDate":"2020-12-13T22:23:00.000","Title":"Python multithreading or multiprocessing?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"How do I display the user's Name + Discord Tag? As in:\nI know that;\nf\"Hello, <@{ctx.author.id}>\"\nwill return the user, and being pinged.\n(@user)\nAnd that;\nf\"Hello, {ctx.author.name}\"\nwill return the user's nickname, but WITHOUT the #XXXX after it.\n(user)\nBut how do I get it to display the user's full name and tag?\n(user#XXXX)","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":480,"Q_Id":65281822,"Users Score":1,"Answer":"To get user#XXXX you can just do str(ctx.author) (or just put it in your f-string and it will automatically be converted to a string). 
You can also do ctx.author.discriminator to get their tag (XXXX).","Q_Score":0,"Tags":"python,discord,discord.py","A_Id":65281985,"CreationDate":"2020-12-13T23:30:00.000","Title":"How to get author's Discord Tag shown","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have some long machine learning modules I am doing on Python\nusing Anaconda Python and sklearn, and numpy libraries\nThese processes take days to finish.\nand I am doing these processes on my laptop.\nThe problem is that I cannot keep my laptop on for days without turning it off\nIs there a way I can preserve the Machine Learning processes before restarting then resume where stopped after restarting?\nHow to do that?","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":98,"Q_Id":65282639,"Users Score":0,"Answer":"As @Mr. 
For Example stated this can easily be overcome with checkpoints, save checkpoints of your model, and later load the last checkpoint (or just any checkpoint you like) and continue your training process.","Q_Score":2,"Tags":"python,machine-learning,scikit-learn","A_Id":65286117,"CreationDate":"2020-12-14T01:48:00.000","Title":"How to resume python machine learning after restart machine?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have some long machine learning modules I am doing on Python\nusing Anaconda Python and sklearn, and numpy libraries\nThese processes take days to finish.\nand I am doing these processes on my laptop.\nThe problem is that I cannot keep my laptop on for days without turning it off\nIs there a way I can preserve the Machine Learning processes before restarting then resume where stopped after restarting?\nHow to do that?","AnswerCount":2,"Available Count":2,"Score":-0.0996679946,"is_accepted":false,"ViewCount":98,"Q_Id":65282639,"Users Score":-1,"Answer":"I don't know any way of stopping the process and then start it again in a few days.\nAnyway, there is a free tool called Google Colab where you can execute your code there for up to 12 hours and you won't use your own resources but google servers. 
The downside is that they will keep your code (you will lose the intellectual property), but it executes faster, and if you are not using it for business purposes it is perhaps a good alternative.","Q_Score":2,"Tags":"python,machine-learning,scikit-learn","A_Id":65285986,"CreationDate":"2020-12-14T01:48:00.000","Title":"How to resume python machine learning after restart machine?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"So I have a class that I am working on and when I am finished with it, I want to import it into the game that I am working on. The main source code for my game has Tkinter imported, so my question is, do I need to import Tkinter into my separate class file in order to use it in my game like normal, or can I leave that out since Tkinter is already imported into the source code?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":38,"Q_Id":65284179,"Users Score":0,"Answer":"Yes, as a general rule you need to import tkinter in every file that uses tkinter.","Q_Score":0,"Tags":"python,class,tkinter,pygame","A_Id":65284228,"CreationDate":"2020-12-14T05:45:00.000","Title":"Importing Classes and working with Libraries","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using gevent threads in my Python Flask application. RabbitMQ messaging seems to be working fine when I am not monkey patching Python threads, but the RabbitMQ client pika stops listening when I do a gevent monkey patch. 
I would like to know more about this behaviour.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":143,"Q_Id":65287199,"Users Score":0,"Answer":"The cause of this issue is that Pika and gevent.monkey_patch are not compatible when using RabbitMQ. You will have to use gevent without patching system calls, if that is possible.\nIn case you are using threads with gevent, you might get this error: greenlet.error: cannot switch to a different thread\nThis is because of monkey patching. For this you can try using processes instead of threads.","Q_Score":0,"Tags":"python,rabbitmq,monkeypatching,pika","A_Id":65322499,"CreationDate":"2020-12-14T10:16:00.000","Title":"gevent monkey patch stops rabbit mq listing messages","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"SQLAlchemy is logging all queries although I have SQLALCHEMY_ECHO = False (and verified in debug). Further investigation shows that db.engine.echo is True.\nEnv-wise, FLASK_ENV = development, FLASK_DEBUG = True. However changing these settings does not have any impact on the observed behaviour. I.e. 
I can observe the same with FLASK_ENV = production and FLASK_DEBUG = False.\nRelevant libraries: Flask-SQLAlchemy==2.4.4, SQLAlchemy==1.3.18, GeoAlchemy2==0.7.0\nDoes anyone have ideas about what else I can check?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":226,"Q_Id":65287417,"Users Score":0,"Answer":"You should change your debug setting to False; also, for SQLALCHEMY_TRACK_MODIFICATIONS, you need to set it to False as well.","Q_Score":0,"Tags":"python,sqlalchemy,flask-sqlalchemy","A_Id":68736223,"CreationDate":"2020-12-14T10:33:00.000","Title":"db.engine.echo is true, even though SQLALCHEMY_ECHO is false","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a Flask application in an Ubuntu EC2 instance. Locally I can pass the parameters, for example:\n'''http:\/\/0.0.0.0:8888\/createcm?summary=VVV&change=Feauure '''\nwhere summary and change are parameters. How can I pass the same values from outside the EC2 instance, i.e. using the Public DNS or IP address? 
Is there any other way to pass the parameters from outside the EC2 instance?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":60,"Q_Id":65288652,"Users Score":1,"Answer":"I don't know how Flask works in depth, but I will try to help you.\nFirst, with Flask you should be able to call the same URL, like 'http:\/\/0.0.0.0:8888\/createcm?summary=VVV&change=Feauure', from outside, but you need to open the port on your machine, as you should do on the EC2 instance (I would do it with the UFW firewall).\nAnother way is to use RabbitMQ or AWS S3 to communicate.","Q_Score":0,"Tags":"python-3.x,amazon-web-services,flask,amazon-ec2,amazon-api-gateway","A_Id":65288811,"CreationDate":"2020-12-14T11:57:00.000","Title":"How to pass values to a Flask app in EC2 instance using Public DNS or IP address?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I have made a Scrapy web crawler which can scrape Amazon. It can scrape by searching for items using a list of keywords and scrape the data from the resulting pages.\nHowever, I would like to scrape Amazon for a large portion of its product data. I don't have a preferred list of keywords with which to query for items. Rather, I'd like to scrape the website evenly and collect X number of items which is representative of all products listed on Amazon.\nDoes anyone know how to scrape a website in this fashion? Thanks.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":68,"Q_Id":65292028,"Users Score":1,"Answer":"I'm putting my comment as an answer so that others looking for a similar solution can find it more easily.\nOne way to achieve this is to go through each category (furniture, clothes, technology, automotive, etc.) and collect a set number of items there. 
Amazon has side\/top bars with navigation links to different categories, so you can let it run through there.\nThe process would be as follows:\n\nFollow category URLs from the initial Amazon.com parse\nUse a different parse function for the callback, one that will scrape however many items from that category\nEnsure that data is written to a file (it will probably be a lot of data)\n\nHowever, such an approach would not be representative of the proportions of each category in the total Amazon products. Try looking for an \"X number of results\" label for each category to compensate for that. Good luck with your project!","Q_Score":0,"Tags":"python,web-scraping,scrapy","A_Id":65314264,"CreationDate":"2020-12-14T15:50:00.000","Title":"How to scrape data from multiple unrelated sections of a website (using Scrapy)","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a dataset where the output is logarithmic; I mean, it varies from values on the order of 0.02 to the order of 15000. Should I just train the model normally, or should I preprocess the output in some way?\nThanks","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":81,"Q_Id":65295507,"Users Score":2,"Answer":"You'll likely get better results if you preprocess to ensure that the output is mostly within [0, 1]. 
Since your output is \"logarithmic\", it may also help to make the output more linear; that is, take the log of your original outputs and rescale such that the logs are in [0, 1].","Q_Score":0,"Tags":"python,tensorflow,keras,deep-learning","A_Id":65295549,"CreationDate":"2020-12-14T19:48:00.000","Title":"How to train a tensorflow model that the output covers a huge range","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Can someone tell me what the difference between on_raw_message_delete()\/OnRawMessageDelete and on_message_delete()\/OnMessageDelete is?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":164,"Q_Id":65295903,"Users Score":2,"Answer":"on_raw_message_delete, unlike on_message_delete, is called regardless of the state of the internal cache.\nLet's say you send a message, the bot caches it, but the bot suddenly restarts (the message won't be in the cache anymore); if you delete the message, on_message_delete won't be called, but on_raw_message_delete will.","Q_Score":0,"Tags":"python,discord.py","A_Id":65295973,"CreationDate":"2020-12-14T20:19:00.000","Title":"Difference between on_message_delete() and on_raw_message_delete()","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"First of all, thanks in advance for any help. I really appreciate it.\nI am running some Python multi-threading code, where every thread is streaming the results of a Rendezvous Object, which basically works totally fine. But now I figured out that, with some runtime, the results get delayed, as if my computer is not able to process all threads in the near term. 
I am wondering if it is possible to speed up the process by using multi-threading in combination with multi-processing? I am not quite sure if this is a) even possible and b) solving my problem. Or would I need \"just\" more CPU power in terms of GHz?\nThanks for your help and best regards!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":39,"Q_Id":65296065,"Users Score":0,"Answer":"thanks for your reply @bjhend. Please find more information below.\nBasically I create gRPC based threads in which _MultiThreadedRendezvous objects are getting streamed within a for loop. The result\/feedback of every thread will be stored in its own pandas dataframe, which is not going to be saved on the disk.\nI do use the general threading module and not the concurrent futures module (if that already makes a difference?). At least I decided to go with multi-threading instead of multi-processing because I believe that I have I\/O bound processes. In total I am starting around ~2000 threads in that way.\nAs one of the feedback fields of the _MultiThreadedRendezvous objects contains the time, I figured out that with some runtime the results are getting \"delayed\" (like I mentioned above). Anyway, as I imagine the overall process like the \"worker\/customer procedure\" with which the Queue module\/function is described (first come, first served; each queue entry is like an order, etc.), I feel that the CPU clock rate is my main concern. On the other hand, if I imagine running the code on a 28-core machine instead of a 4-core CPU, I think it should perform faster (just a feeling... again, rationally I think the clock rate in combination with the possible threads would speed up the process much more).\nHopefully this information will help you. 
Thanks again!","Q_Score":0,"Tags":"python,python-3.x,python-multiprocessing,python-multithreading","A_Id":65310279,"CreationDate":"2020-12-14T20:31:00.000","Title":"multithreading code with rendevous objects time delay while running the code","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"How can I extract the filename after sign '>' from a command line in python. For example:\npython func.py -f file1.txt -c > output.txt\nI want to get the output file name 'output.txt' from the above command line.\nThanks","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":160,"Q_Id":65296341,"Users Score":0,"Answer":"You can't.\nWhen you write something like command > file in shell, the shell:\n\ncreates the file if it doesn't exist,\nopens it, and\nassigns the descriptor to the stdout of command.\n\nThe called process (and it doesn't matter if that's Python or something else) doesn't know what happens to the data after it's written because it only sees the descriptor provided by the shell. It's a bit like having one end of a really long pipe: you know you can put stuff in, but you can't see what happens to it on the other end.\nIf you want to retrieve the file name in that call, you need to either:\n\npass it to your Python program as an argument and handle redirection from Python, e.g. python func.py -f file1.txt -c output.txt, or\noutput it from the shell, e.g. 
echo 'output.txt'; python func.py -f file1.txt -c > output.txt","Q_Score":0,"Tags":"python,command-line","A_Id":65297292,"CreationDate":"2020-12-14T20:53:00.000","Title":"Extract file name after '>' from command line in python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm new to python, sorry if this seems awfully rudimentary for some. I know complex numbers can be simply represented using j after an integer e.g.\na = 2 + 5j\nHowever when I try something like the code below, python returns an error and doesn't recognise this as being complex?\nx = 5\na = 2 + xj \nSimilarly this doesn't work:\na = 2 + x*j\nHow can I get around this problem. I'm trying to use this principle is some larger bit of code.","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":64,"Q_Id":65298362,"Users Score":4,"Answer":"The j is like the decimal point or the e in floating point exponent notation: It's part of the notation for the number literal itself, not some operator you can tack on like a minus sign.\nIf you want to multiply x by 1j, you have to use the multiplication operator. That's x * 1j.\nThe j by itself is an identifier like x is. It's not number notation if it doesn't start with a dot or digit. But you could assign it a value, like j = 1j, and then x * j would make sense and work.\nSimilarly, xj is not implicit multiplication of x and j, but a separate identifier word spelled with two characters. 
You can use it as a variable name and assign it a separate value, just like the names x, j and foo.","Q_Score":1,"Tags":"python,python-3.x,complex-numbers","A_Id":65298378,"CreationDate":"2020-12-15T00:22:00.000","Title":"Representing complex numbers in python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Error occurred when I am trying to run face_rec using CUDA, there's no file missing, but the system states that it cannot find the related file.\nCould not load library libcudnn_cnn_train.so.8. Error: libcudnn_ops_train.so.8: cannot open shared object file: No such file or directory\nPlease make sure libcudnn_cnn_train.so.8 is in your library path!\nAborted (core dumped)","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":617,"Q_Id":65299281,"Users Score":0,"Answer":"I am solving this question. 
The fix is as follows:\nsudo ln -sf \/usr\/local\/cuda-11.5\/targets\/x86_64-linux\/lib\/libcudnn_adv_train.so.8.3.0 \/usr\/local\/cuda-11.5\/targets\/x86_64-linux\/lib\/libcudnn_adv_train.so.8\nsudo ln -sf \/usr\/local\/cuda-11.5\/targets\/x86_64-linux\/lib\/libcudnn_ops_infer.so.8.3.0 \/usr\/local\/cuda-11.5\/targets\/x86_64-linux\/lib\/libcudnn_ops_infer.so.8\nsudo ln -sf \/usr\/local\/cuda-11.5\/targets\/x86_64-linux\/lib\/libcudnn_cnn_train.so.8.3.0 \/usr\/local\/cuda-11.5\/targets\/x86_64-linux\/lib\/libcudnn_cnn_train.so.8\nsudo ln -sf \/usr\/local\/cuda-11.5\/targets\/x86_64-linux\/lib\/libcudnn_adv_infer.so.8.3.0 \/usr\/local\/cuda-11.5\/targets\/x86_64-linux\/lib\/libcudnn_adv_infer.so.8\nsudo ln -sf \/usr\/local\/cuda-11.5\/targets\/x86_64-linux\/lib\/libcudnn_ops_train.so.8.3.0 \/usr\/local\/cuda-11.5\/targets\/x86_64-linux\/lib\/libcudnn_ops_train.so.8\nsudo ln -sf \/usr\/local\/cuda-11.5\/targets\/x86_64-linux\/lib\/libcudnn_cnn_infer.so.8.3.0 \/usr\/local\/cuda-11.5\/targets\/x86_64-linux\/lib\/libcudnn_cnn_infer.so.8\nIf your CUDA version is not the same as mine, you should change cuda-11.5 to your CUDA version.","Q_Score":4,"Tags":"python,face-recognition,dlib","A_Id":71674426,"CreationDate":"2020-12-15T02:41:00.000","Title":"Could not load library libcudnn_cnn_train.so.8","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm building a bot that logs into zoom at specified times and the links are being obtained from whatsapp. So I was wondering if it was possible to retrieve those links from WhatsApp directly instead of having to copy-paste them into Python. 
Google is filled with guides to send messages, but is there any way to READ and RETRIEVE those messages and then manipulate them?","AnswerCount":5,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":10184,"Q_Id":65299796,"Users Score":0,"Answer":"You can read all the cookies from WhatsApp Web and add them to the headers and use the requests module, or you can also use selenium for that.","Q_Score":0,"Tags":"python,selenium,whatsapp","A_Id":68296269,"CreationDate":"2020-12-15T03:56:00.000","Title":"How do I read whatsapp messages from a contact using python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"When I open a python file, Visual Studio Code shows \"Python extension loading...\". After a while, it shows this error: \"Extension host terminated unexpectedly.\"\nI have tried:\n\nuninstall all extensions and reinstall.\nuninstall visual studio code completely and reinstall.\nchange my environment path and remove \";;\" .....\n\nThis problem happened on Windows 10.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1336,"Q_Id":65301647,"Users Score":0,"Answer":"Change back to an older version. I was also having this issue; I switched back to the older version and now it's fixed.","Q_Score":0,"Tags":"python,visual-studio-code","A_Id":65302063,"CreationDate":"2020-12-15T07:29:00.000","Title":"How to solve the problem \"Python extension loading...\" in visual studio code?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am working with an imbalanced dataset where I have a class variable of 2 different values: 0 and 1.\nThe number of '0' values is 1000 and the number of '1' 
values is 3000.\nFor XGBClassifier, LGBMClassifier and CatBoostClassifier I found that there is a parameter called \"scale_pos_weight\" which makes it possible to modify the weights of the class values:\nscale_pos_weight = number_of_negative_values \/ number_of_positive_values\nMy question is: how can we know which value of the class variable is positive and which negative?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":565,"Q_Id":65304302,"Users Score":0,"Answer":"For an imbalanced binary classification dataset, always assign the positive value to the minority class (class 1) and the negative value to the majority class (class 0).\nBut you have assumed class 0 as the minority class & class 1 as the majority class.\nBy default scale_pos_weight=1; for imbalanced data it is set > 1.","Q_Score":0,"Tags":"python,classification,xgboost,catboost,imbalanced-data","A_Id":65304680,"CreationDate":"2020-12-15T10:43:00.000","Title":"How can I know which is the positive class value and negative class value for XGBoost?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I got an image stack of 4 gray scale images which I want to pass to a neural network with tensorflow.\nAfter reading in my 4 gray scale images and converting them to a tensor, their shape is (4,120,160).\nWhen I pass it to the neural network I get an error message. 
After some googling I found that I need the input shape (4,120,160,1), in which the 1 stands for the color channel.\nI have not found any way to change the shape of my tensor in this way.\nIt seems to work with the expand_dims function, but it is a little bit cryptic for me to understand what it does.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":235,"Q_Id":65305528,"Users Score":1,"Answer":"You can use tf.expand_dims(image, -1).\n\nGiven a tensor input, this operation inserts a dimension of length 1 at the dimension index axis of input's shape.","Q_Score":0,"Tags":"python,tensorflow","A_Id":65305663,"CreationDate":"2020-12-15T12:07:00.000","Title":"Add a dimension to a tensorflow tensor","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm training a Neural Network over a TFRecordDataset. However, at the end of every epoch, i.e. with ETA: 0s, the training gets stuck for tens of seconds. For reference, one epoch takes around a minute to be completed over a dataset of around 25GB (before parsing a subset of the features).\nI'm running TensorFlow 2.3.1 with a Nvidia Titan RTX GPU. Is this the intended behavior? Maybe due to the preprocessing in the input pipeline? Is that preprocessing performed by the CPU only or offloaded to the GPU? Thanks!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":150,"Q_Id":65309983,"Users Score":2,"Answer":"If you have a validation set and you're using model.fit(), it's probably the time it takes to calculate the loss and the metrics. 
In most cases, it should take an extra 25% to compute the metrics of an 80\/20 split.","Q_Score":0,"Tags":"python,tensorflow,keras,dataset,nvidia","A_Id":65310128,"CreationDate":"2020-12-15T16:37:00.000","Title":"Tensorflow stuck for seconds at the end of every epoch","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to download the blobs on a google storage bucket. What is the difference between the get_bucket and the bucket methods a client has? Why do they differ in permissions? Can both be used to download blobs?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":627,"Q_Id":65310422,"Users Score":4,"Answer":"If you have a look at the code, you can see that\n\nbucket() is simply a declaration, without any request to Cloud Storage to check if the bucket exists or not (you find the same logic with the blob() method)\nget_bucket() performs a call to the Cloud Storage API to get the bucket with its metadata (it's the same logic with get_blob())\n\nIn summary, with get_xxx you check the existence of the object; with the other methods, you simply declare a name without checks.\nWith both, you can download the content of a Blob.","Q_Score":1,"Tags":"python,google-cloud-storage","A_Id":65313690,"CreationDate":"2020-12-15T17:06:00.000","Title":"Difference between bucket() and get_bucket() on google storage","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"We have a Datastage job that runs on multiple instances on different job cycles. The job could run concurrently or at different times. When one job cycle fails due to the failed Datastage job in that cycle, the other cycles fail as well. 
Is there a way to prevent this from happening? I.e., if the Datastage job failed in one cycle, can the other cycles continue to run using the same Datastage job that failed? Is there a way we can do an automatic reset of the failed job? If so, how? Thanks for your info and help.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":234,"Q_Id":65313568,"Users Score":1,"Answer":"You could set up automatic reset only by wrapping each cycle variant in its own sequence. Only sequence jobs support automatic reset after failure, as a property of the Job activity.\nI'm not sure what the case is if another cycle is running when you try to reset. You could test this. It may be that, if you need this reset functionality, you need clones rather than instances of the job.","Q_Score":0,"Tags":"python,db2,datastage","A_Id":65314282,"CreationDate":"2020-12-15T20:55:00.000","Title":"Automatic restart of multi-instance Datastage job with DSJE_BADSTATE fail status","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I need to match a substring in the format !some text!https:\/\/test.com\/ where anything between the two ! 
marks is text and after comes a url without spaces,\nso far I made a regular expression which can find the substring\n(!(.*)!((https|http):\\\/\\\/[^\\s+])) but it doesn't work as desired when a similar pattern is duplicated anywhere in the text.\nfor example\n!text goes here!https:\/\/amazon.com and some other text comes here should parse\n!text goes here!https:\/\/amazon.com\nand it does perfectly, but when I duplicate such a pattern anywhere in the text it matches the whole text between the two texts.\ni.e.\n!text goes here!https:\/\/amazon.com some text here !text goes here!https:\/\/amazon.com and after some other text matches the whole text\nit should match two separate !text goes here!https:\/\/amazon.com matches, but it will select the substring till the end of the second match\n!text goes here!https:\/\/amazon.com some text here !text goes here!https:\/\/amazon.com\nperhaps it takes the whole text between the first ! and the last !\nIs there an approach to halt matching when a space is met after the url text","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":36,"Q_Id":65316180,"Users Score":3,"Answer":"!.*! matches the longest possible substring surrounded by exclamation points; in other words, it goes from the first bang to the last one.\nYou want to match from the first bang to the next one, which could be a non-greedy match !.*?! or the more precise match, ![^!]*!.","Q_Score":0,"Tags":"javascript,python,regex,parsing,regex-negation","A_Id":65316215,"CreationDate":"2020-12-16T01:49:00.000","Title":"How to separate matching substring at first met space char regex","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a script that uses the requests library. It is a web scraper that runs for at least 2 days and I don't want to leave my laptop on for that long. 
So, I wanted to run it on the Cloud but after a lot of trying and reading the documentation, I could not figure out a way to do so.\nI just want this: When I run python my_program.py it shows the output on my command line but runs it using Google Cloud services. I already have an account and a project with billing enabled. I already installed the GCP CLI tool and can run it successfully.\nMy free trial has not ended. I have read quickstart guides but as I am a complete beginner regarding the cloud, I don't understand some of the terms.\nPlease, help me","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":48,"Q_Id":65317593,"Users Score":1,"Answer":"I think you'll need to set up a Google Cloud Compute Engine instance for that. It's basically a reserved computer\/machine where you can run your code. Here are some steps that you should do just to get your program running on the cloud.\n\nSpin up a Compute Engine instance\nGain access to it (through ssh)\nThrow your code up there.\nInstall any dependencies that you may have for your script.\nInstall tmux and start a tmux session.\nRun the script inside that tmux session. 
Depending on your program, you should be able to see some output inside the session.\nDetach it.\n\nYour code is now executing inside that session.\nFeel free to disconnect from the Compute Engine instance now and check back later by attaching to the session after connecting back into your instance.","Q_Score":0,"Tags":"python,google-cloud-platform","A_Id":65317772,"CreationDate":"2020-12-16T05:00:00.000","Title":"How do I control my Python program on the local command line using Google Cloud","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm working on a program that will utilize Selenium\/Webdriver to open a webpage, enter some data, and open a new page which is a PDF. Ultimately I would like to download that PDF into a folder. I know it is possible to download a PDF into a folder if you have the URL in your script, but I'm struggling to find a way to download it if it is opened within the program.\nA) Is there a way to download a PDF that is opened explicitly in Chrome using a script?\nB) Is there a way to extract the URL from an opened webpage that can then be fed back into the program to download from?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":101,"Q_Id":65317600,"Users Score":1,"Answer":"While I was doing a selenium project, I faced a similar issue.\nI would click on a link that was referring to a PDF file but instead of downloading, the selenium chromedriver would just open it in a new tab.\nWhat solved my problem was that right after I started a new chromedriver session, I manually disabled this feature:\n\nIn your Chrome settings, go to Privacy and Security\nSelect Site Settings\nScroll down and click on Additional preferences\nFind a section named 'PDF documents'\nTurn on the option that says \"Download PDF files instead of automatically 
opening them in Chrome\"\n\nNow, any PDF link you click on will download the file instead of opening it in a new tab. Note that you need to do this every time you start a new chromedriver. Changing this setting in your main Chrome application won't help.","Q_Score":1,"Tags":"python,selenium,selenium-webdriver","A_Id":65317668,"CreationDate":"2020-12-16T05:01:00.000","Title":"Download Opened PDF with Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"The error messages printed by pip in my Windows PowerShell are dark red on dark blue (default PowerShell background). This is quite hard to read and I'd like to change this, but I couldn't find any hint on how to do this. I don't even know if this is a default in Python applied to all stderr-like output, or if it's specific to pip.\nMy configuration: Windows 10, Python 3.9.0, pip 20.2.3.\nThanks for your help!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":233,"Q_Id":65319466,"Users Score":0,"Answer":"Coloring in pip is done via ANSI escape sequences. So the solution to this problem would be to either change the way PowerShell displays ANSI colors or the color scheme pip uses. 
Pip does, however, provide a command-line switch '--no-color' which can be used to deactivate coloring of the output.","Q_Score":0,"Tags":"python,powershell,pip,systemcolors","A_Id":65319799,"CreationDate":"2020-12-16T08:12:00.000","Title":"How to change colors of pip error messages in windows powershell","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Configuration:\nTensorFlow 2.3 Python 3.6\nHow to load a TensorFlow model stored on Google Drive or on any other remote server\/location in a Python program?\nI am looking for a solution which does not require downloading the model and giving the filepath, i.e., a solution in which I can directly give the url of my TensorFlow model to load the TensorFlow model. This url is irrespective of Google Drive, i.e. it can be the url of any remote storage\/server.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":69,"Q_Id":65321863,"Users Score":0,"Answer":"You should download the model to the local device and then load it.","Q_Score":0,"Tags":"python,remote-server,tensorflow2.x","A_Id":65321893,"CreationDate":"2020-12-16T10:53:00.000","Title":"Loading TensorFlow model from remote location","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I will create a python api using Django.\nNow I am trying to verify a phone number using firebase authentication and send an SMS to the user, but I don't know how to do it.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":435,"Q_Id":65322927,"Users Score":1,"Answer":"The phone number authentication in Firebase is only available from its client-side SDKs, so the code that runs directly in 
your iOS, Android or Web app. It is not possible to trigger sending of the SMS message from the server.\nSo you can either find another service to send SMS messages, or put the call to send the SMS message into the client-side code and then trigger that after it calls your Django API.","Q_Score":0,"Tags":"python,django,firebase-authentication","A_Id":65325134,"CreationDate":"2020-12-16T12:06:00.000","Title":"python api verify number using firebase","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"when I use open() in python it says FileNotFound, but only when I try to open files with a space in the name (e.g. example text.txt)\nI tried using with open('Users\/vandit\/Desktop\/example\\ text.txt', 'rb') as f:","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":72,"Q_Id":65323541,"Users Score":3,"Answer":"The '\\' is only used in the shell to escape the following space. In python, you can just include the whole thing in quotes:\nwith open('Users\/vandit\/Desktop\/example text.txt', 'w') as f:\nAlso (if you are on windows) if you still get an error, try prepending the string with the drive in which your Windows is installed, which is mostly C:\\. So it will become: with open('C:\/Users\/vandit\/Desktop\/example text.txt', 'w') as f:","Q_Score":2,"Tags":"python,filenotfoundexception","A_Id":65323617,"CreationDate":"2020-12-16T12:45:00.000","Title":"python open() gives me error while opening files having a space in naming(i.e. 
example text.txt)","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Say I have a list:\nmy_list = [0.1, 0.14, 0.1, 0.03, 0.3, 0.01, 0.6]\nAnd I want to limit the max value to 0.2 so the wanted result is:\nmy_list\n[0.1, 0.14, 0.1, 0.03, 0.2, 0.01, 0.2]\nI tried\n[0.2 if x>0.2 else x for x in my_list]\nAnd also\nlist(map(lambda x: min(x,0.2),my_list))\nThe first one was found about 5-10 % more efficient, but still too slow.\nIs there any more time\/complexity efficient way?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":323,"Q_Id":65323643,"Users Score":0,"Answer":"The most pythonic solution was:\nIn [3]: %timeit [min(x,0.2) for x in my_list]\n10.5 ms \u00b1 302 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 100 loops each)\nFast (but lost some readability)\nIn [4]: %timeit [x if x <0.2 else 0.2 for x in my_list]\n2.18 ms \u00b1 11.6 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 100 loops each)\nIf you have lots of data and can use a numeric library like numpy,\nsee @juampa response","Q_Score":1,"Tags":"python,list,performance,iteration,max","A_Id":65372927,"CreationDate":"2020-12-16T12:50:00.000","Title":"What is the most efficient way to loop through a list and set max value Python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I know they provide add, update, delete and view. 
But I want the users' data not to get deleted; I want to disable the user so they cannot use the website until enabled again.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":338,"Q_Id":65323790,"Users Score":0,"Answer":"Uncheck the is_active checkbox and save.\nDjango does not log in users whose is_active value is False.","Q_Score":0,"Tags":"python,django,django-rest-framework,django-admin,disable","A_Id":65324317,"CreationDate":"2020-12-16T12:59:00.000","Title":"How to disable users from admin panel and django?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I want to do a python or matlab project which visualizes audio data while being synchronized with playback.\nIn detail, this means that in a GUI, I have two main regions, one for data visualization and the other for audio playback. The form of data visualization can be defined specifically, e.g. as a waveform or as an STFT-spectrogram. When I hit the button for audio playback, I can not only listen to the music, but also have a real-time cursor in the data visualization area, which is synchronized with the audio playback and indicates the time position of playback. And I would like to point out that I don't want it to look like a digital oscilloscope which refreshes the spectrum or waveform for every buffer time. I want the data visualization shown for the whole time range of the audio, with only the cursor dynamically synchronized\/moved with the audio playback.\nSo I want to ask you, do you know any existing project or packages that realize a similar function as I described? 
Or do you have any recommendation on how I can put the idea into reality from scratch?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":98,"Q_Id":65325259,"Users Score":1,"Answer":"In principle, it's relatively straightforward. You'd probably want to use:\n\na GUI library such as PyQT\nlibraries for STFT and other maths; SciPy and NumPy are your friends here\nan audio library to read and play back the audio data\n\nAlso you'll need to use threads, since you want to simultaneously play back audio and update your GUI etc. Thus, some understanding of multithreading is useful.\nThough it's unproblematic in a sense, there's a lot of details to get right. If you don't have experience in some or all of these areas, there's a lot you need to learn. Of course, that could be a positive thing.\nThe biggest issue might be the visualization of the audio data. Matplotlib is a popular plotting library, but it's a bit tricky to integrate into a PyQt app, and probably the realtime requirement makes things even harder.","Q_Score":0,"Tags":"python,matlab,audio,fft,visualization","A_Id":65325639,"CreationDate":"2020-12-16T14:31:00.000","Title":"Python or Matlab: synchronized audio playback with data visualization (waveform\/STFT-spectrogram etc.)","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am struggling with\n\npyodbc.ProgrammingError: ('String data, right truncation: length 636 buffer 510', 'HY000')\n\nwhile using executeMany() with __crsr.fast_executemany = True. 
When I remove this line everything works fine.\nI am using pyodbc (4.0.30) and MSSQL with ODBC Driver 17 for SQL Server.\nMy database table has 4 columns and each of them is varchar(255).\nI already tried to add this line: crsr.setinputsizes([(pyodbc.SQL_WVARCHAR, 50, 0)]) and to add UseFMTOnly=yes to the connection string, but it didn't work.\nCould you guys help me, please? I am already tired of that.","AnswerCount":1,"Available Count":1,"Score":-0.1973753202,"is_accepted":false,"ViewCount":3503,"Q_Id":65326018,"Users Score":-1,"Answer":"Check the ODBC Driver you are using in your pyodbc connection string; if it is an old version, errors could be misleading. Use, for example:\ndriver=\"{ODBC Driver 17 for SQL Server}\"\ninstead of:\ndriver=\"{SQL Server}\"","Q_Score":3,"Tags":"python,pyodbc","A_Id":71685349,"CreationDate":"2020-12-16T15:12:00.000","Title":"String data, right truncation while using fast executemany with pyodbc","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"My question here may seem really naive but I never found any clue about it on web resources.\nThe question is, concerning the install_requires argument for the setup() function or setup.cfg file, is it a good practice to mention every package used, even python built-in ones such as os for example?\nOne can assume that any python environment has those common packages, so is it problematic to explicitly mention them in the setup, making it potentially over-verbose?\nThanks","AnswerCount":3,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":3388,"Q_Id":65326080,"Users Score":3,"Answer":"You should list top-level 3rd party dependencies.\n\nDon't list packages and modules from Python's standard library.\n\nDo list 3rd party dependencies your code directly depends on, i.e. 
the projects containing:\n\nthe packages and modules your code imports;\nthe binaries your code directly calls (in subprocesses for example).\n\n\nDon't list dependencies of your dependencies.","Q_Score":5,"Tags":"python,setuptools,setup.py","A_Id":65326736,"CreationDate":"2020-12-16T15:17:00.000","Title":"Python setup config install_requires \"good practices\"","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I use remote-SSH to connect to the server (ubuntu 20.04) and I find that if I click the button to install a python module, it is only installed for one user.\nFor example:\n\nxxx is not installed, install?\n\nThen I find that the command in the terminal is:\npip install -U module_name --user\nSo I try to add the configuration in settings.json and install again.\n\"python.globalModuleInstallation\": true\nThe terminal has no response, however. Is this a bug?\nThough I can type the install command in the terminal by myself, I still want to know if vscode can install the module globally by itself.\nThanks.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":243,"Q_Id":65326646,"Users Score":0,"Answer":"To install it on the global level (for all users) you need to install it as the root user or as an administrator.\nIn short, you need admin privileges.\nUse sudo for linux or run your VS Code as admin.\nRunning VS Code as admin solved my issue. 
Give it a try.","Q_Score":0,"Tags":"python,visual-studio-code,vscode-settings,vscode-remote","A_Id":65326768,"CreationDate":"2020-12-16T15:50:00.000","Title":"python global module installation when using remote SSH extension","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm fully aware of the previous post regarding this error. That issue was with scikit-learn < 0.20. But I'm having scikit-learn 0.23.2 and I've tried uninstalling and reinstalling 0.22 and 0.23 and I still have this error.\nFollowup: Although pip list told me the scikit-learn version is 0.23.2, when I ran sklearn.__version__, the real version was 0.18.1. Why and how to resolve this inconsistency? (Uninstalling 0.23.2 didn't work)","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":294,"Q_Id":65327149,"Users Score":0,"Answer":"[RESOLVED]\nIt turned out that my Conda environment has a different sys.path than my jupyter environment. The jupyter environment used the system env, which is due to the fact that I installed the ipykernel like this: python -m ipykernel install without the --user flag. 
The correct way should be to do so within the Conda env and run pip install jupyter","Q_Score":0,"Tags":"python,scikit-learn","A_Id":65330188,"CreationDate":"2020-12-16T16:21:00.000","Title":"ImportError: No module named 'sklearn.compose' with scikit-learn==0.23.2","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using PostgreSQL 11 COPY command to import large CSVs into the DB with Python, like the following:\nCOPY \"ns\".\"table\" (\"col1\", \"col2\") FROM STDIN WITH CSV HEADER DELIMITER AS ','\nI didn't find any recent information if this operation is secure in terms of SQL injection attacks or should I manually go over the CSV and escape every value in the file (which is a very heavy operation).\nThanks!","AnswerCount":1,"Available Count":1,"Score":0.6640367703,"is_accepted":false,"ViewCount":377,"Q_Id":65328138,"Users Score":4,"Answer":"There is no danger of SQL injection with this command.\nIf a user supplies bad data, then you end up with bad data in the table, or at worst you could get an error because the file is not correct CSV or because a constraint was violated.\nBut there is no way to subvert security to execute statements, because nothing entered by the user will become part of an SQL statement. 
With COPY, there is a clear distinction between SQL statement and data.","Q_Score":3,"Tags":"python,postgresql,sql-injection","A_Id":65328220,"CreationDate":"2020-12-16T17:25:00.000","Title":"PostgreSQL COPY SQL injection","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"given a dictionary like this: example_dict ={\"mark\":13, \"steve\":3, \"bill\":6, \"linus\":11}\nfinding the key with max value is easy using max(example_dict.items(), key=operator.itemgetter(1)) and min value using min(example_dict.items(), key=operator.itemgetter(1))\nWhat's the easiest way to find the key with the n-th largest value? e.g. the key with the 2nd largest value here is linus","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":57,"Q_Id":65328275,"Users Score":0,"Answer":"Use QuickSelect algorithm. It works in O(n) on average","Q_Score":0,"Tags":"python,algorithm,dictionary","A_Id":65328453,"CreationDate":"2020-12-16T17:34:00.000","Title":"find key in dictionary with n-th largest value","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a connection object in Scala cell, which I would like reuse it in Python cell.\nIs there alternate to temp table to access this connection.\nDatabricks","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":19,"Q_Id":65329670,"Users Score":0,"Answer":"Not really - Python code is executed in the different context, so you can have some data exchange either via SparkSession itself (.set\/.get on the SparkConf, but it works only for primitive data), or by registering the temp 
view","Q_Score":0,"Tags":"python,scala,jdbc,databricks","A_Id":65338265,"CreationDate":"2020-12-16T19:10:00.000","Title":"How can I access a Scala JDBC connection in a Python Notebook ---Databricks","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is there an available map-like data structure in python that supports custom ordering, insertion and deletion? I want to store (key,value) pairs where keys are ints and values are objects with many fields, so that they are ordered by some particular fields of the value.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":97,"Q_Id":65334474,"Users Score":0,"Answer":"I settled on a dict of (key, val) and a priority queue of tuples (score, key),\nwhere score is used for sorting and can be derived from the val object.","Q_Score":0,"Tags":"python,dictionary","A_Id":65346242,"CreationDate":"2020-12-17T03:48:00.000","Title":"dictionary with custom ordering in python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to use machine learning in Python. Right now I am using sklearn and TensorFlow. I was wondering what to do if I have a model that needs updating when new data comes. For example, I have financial data. I built an LSTM model with TensorFlow and trained it. But new data comes in every day, and I don't want to retrain the model every day. 
Is there a way just to update the model and not retrain it from scratch?\nIn sklearn, the documentation for the .fit() method (using DecisionTreeClassifier as an example) says that it\n\nBuild a decision tree classifier from the training set (X, y).\n\nSo it seems like it will retrain the entire model from scratch.\nIn tensorflow, the .fit() method (using Sequential as an example) says\n\nTrains the model for a fixed number of epochs (iterations on a\ndataset).\n\nSo it seems like it does update the model instead of retraining. But I am not sure if my understanding is correct. I would be grateful for some clarification. And if sklearn indeed retrains the entire model using .fit(), is there a function that would just update the model instead of retraining from scratch?","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":820,"Q_Id":65334672,"Users Score":0,"Answer":"In sklearn, the .fit() method retrains on the dataset, i.e., as you use .fit() on any dataset, any info pertaining to previous training will be discarded. So, assuming you have new data coming in every day, you will have to retrain each time in the case of most sklearn algorithms.\nHowever, if you would like to update sklearn models instead of training from scratch, some sklearn algorithms (like SGDClassifier) provide a method called partial_fit(). This can be used to incrementally update the weights of an existing model.\nAs for Tensorflow, the .fit() method actually trains the model without discarding any info pertaining to previous training. 
Hence, each time .fit() is used via TF it will continue training the existing model rather than starting over.\nTip: you can use SavedModel from TF to save the best model and reload and re-train the model as and when more data keeps flowing in.","Q_Score":1,"Tags":"python,tensorflow,scikit-learn","A_Id":65335235,"CreationDate":"2020-12-17T04:18:00.000","Title":"Does model get retrained entirely using .fit() in sklearn and tensorflow","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a list which has 8 elements and all of those elements are arrays whose shape is (3,480,364). Now I want to transform this list to an array of shape (8,3,480,364). When I use the command array=nd.array(list), it takes a lot of time and sometimes gives an 'out of memory' error. When I try to use the command array=np.stack(list, axis=0) and debug the code, it stays at this step and never produces a result. So I wonder how I can transform a list to an array quickly when using the Mxnet framework?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":60,"Q_Id":65337206,"Users Score":0,"Answer":"Your method of transforming a list of lists into an array is correct, but an 'out of memory' error means you are running out of memory, which would also explain the slowdown.\nHow to check how much RAM you have left:\non Linux, you can run free -mh in the terminal.\nHow to check how much memory a variable takes:\nThe function sys.getsizeof tells you memory size in bytes.\nYou haven't said what data type your arrays have, but, say, if they're float64, that's 8 bytes per element, so your array of 8 * 3 * 480 * 364 = 4193280 elements should only take up 4193280 * 8 bytes = about 30 Mb. 
So, unless you have very little RAM left, you probably shouldn't be running out of memory.\nSo, I'd first check your assumptions: does your list really only have 8 elements, do all the elements have the same shape of (3, 480, 364), what is the data type, do you create this array once or a thousand times? You can also check the size of a list element: sys.getsizeof(list[0]).\nMost likely this will clear it up, but what if your array is really just too big for your RAM?\nWhat to do if an array doesn't fit in memory\nOne solution is to use smaller data type (dtype=np.float32 for floating point, np.int32 or even np.uint8 for small integer numbers). This will sacrifice some precision for floating point calculations.\nIf almost all elements in the array are zero, consider using a sparse matrix.\nFor training a neural net, you can use a batch training algorithm and only load data into memory in small batches.","Q_Score":0,"Tags":"python,mxnet","A_Id":65338624,"CreationDate":"2020-12-17T08:39:00.000","Title":"How can I transform a list to array quickly in the framework of Mxnet?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have 3 scripts: the 1st and the 3rd are written in R, and the 2nd in Python.\nThe output of the 1st script is the input of the 2nd script, and its output is the input of the 3rd one.\nThe inputs and outputs are search keywords or phrases.\nFor example, the output of the 1st script is Hello, then the 2nd turns the word to olleH, and the 3rd one converts the letters to uppercase: OLLEH.\nMy question is how can I connect those scripts and let them run automatically, without my intervention, on AWS. What will be the commands? 
How can the output of the 1st script be saved, and play a role as the input of the 2nd one, etc.?","AnswerCount":3,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":44,"Q_Id":65338476,"Users Score":1,"Answer":"I have never used AWS so I'm unfamiliar with that, but this seems like something a workflow management system would solve. Take a look into snakemake or nextflow. With these tools you can easily (after you get used to them) do exactly what you describe: run scripts\/tools that depend on each other sequentially (and also in parallel).","Q_Score":0,"Tags":"python,r,amazon-web-services,amazon-ec2","A_Id":65339243,"CreationDate":"2020-12-17T10:02:00.000","Title":"Running three \"connected\" scripts on AWS EC2","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Last week I tried to run my python file and it worked fine, but when I tried it today it gave this error:\nUnable to create process using 'C:\\Users\\Programmer\\AppData\\Local\\Microsoft\\WindowsApps\\PythonSoftwareFoundation.Python.3.9_qbz5n2kfra8p0\\python.exe C:\/Users\/Programmer\/PycharmProjects\/help\/V1.1.py'\nIt also exited with code 101.\nI tried updating pip but it didn't do anything.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":817,"Q_Id":65346772,"Users Score":0,"Answer":"You'd better check the interpreter version you're using. If you are using PyCharm, you can choose File -> Settings -> Project: projectName -> Project Interpreter. 
Finally, select the desired Python version number from the Project Interpreter drop-down menu on the right side of the window.","Q_Score":0,"Tags":"python,pycharm","A_Id":70603326,"CreationDate":"2020-12-17T18:56:00.000","Title":"Pycharm \"Unable to create process using\"","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Last week I tried to run my python file and it worked fine but when I tried it today it gave this error:\nUnable to create process using 'C:\\Users\\Programmer\\AppData\\Local\\Microsoft\\WindowsApps\\PythonSoftwareFoundation.Python.3.9_qbz5n2kfra8p0\\python.exe C:\/Users\/Programmer\/PycharmProjects\/help\/V1.1.py'\nIt also exited with code 101.\nI tried updating pip but it didn't do anything.","AnswerCount":2,"Available Count":2,"Score":-0.0996679946,"is_accepted":false,"ViewCount":817,"Q_Id":65346772,"Users Score":-1,"Answer":"I know this is old; I had the same problem, and for me it worked to run PyCharm as Administrator.","Q_Score":0,"Tags":"python,pycharm","A_Id":70053622,"CreationDate":"2020-12-17T18:56:00.000","Title":"Pycharm \"Unable to create process using\"","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"'''\ncursor.execute(Select * From Table);\n'''\nI am using the above code to execute the above select query, but this code gets stuck, because the table has 93 million records.\nDo we have any other method to extract all the data from a Snowflake table in a Python script?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":363,"Q_Id":65346784,"Users Score":0,"Answer":"Depending on what you are trying to do with that data, it'd probably be most 
efficient to run a COPY INTO location statement to extract the data into a file to a stage, and then run a GET via Python to bring that file locally to wherever you are running python.\nHowever, you might want to provide more detail on how you are using the data in python after the cursor.execute statement. Are you going to iterate over that data set to do something (in which case, you may be better off issuing SQL statements directly to Snowflake, instead), loading it into Pandas to do something (there are better Snowflake functions for pandas in that case), or something else? If you are just creating a file from it, then my suggestion above will work.","Q_Score":0,"Tags":"python,snowflake-cloud-data-platform,snow","A_Id":65347125,"CreationDate":"2020-12-17T18:58:00.000","Title":"snowflake select cursor statement fails","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"How can I get the median of ConvertedComp column for all Gender = 'Woman' from this Pandas dataframe (It only shows 'Man' because I'm only showing df.head(10)):\n\n\n\n\n\nConvertedComp\nGender\n\n\n\n\n0\n61000.0\nMan\n\n\n1\n95179.0\nMan\n\n\n2\n90000.0\nMan\n\n\n3\n455452.0\nMan\n\n\n4\n65277.0\nMan\n\n\n5\n31140.0\nMan\n\n\n6\n41244.0\nMan\n\n\n7\n103000.0\nMan\n\n\n8\n69000.0\nMan\n\n\n9\n26388.0\nMan","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":886,"Q_Id":65348297,"Users Score":0,"Answer":"Try this:\ndf[df['Gender']=='Woman']['ConvertedComp'].median()","Q_Score":1,"Tags":"python,pandas,dataframe","A_Id":70317860,"CreationDate":"2020-12-17T21:01:00.000","Title":"How to get the median of a column based on another column value using Pandas?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and 
Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is there a way to return all values of all widgets on a screen to their default values (ex. TextInputs text: '') or do I have to write a function to go through each one, one by one, to clear them?","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":148,"Q_Id":65349266,"Users Score":0,"Answer":"What you can do is clear all widgets on a screen using .clear_widgets() function. I don't think there's any way to reset all values. Another way to change the values is like if you have a textfield then you can do textfield.text = '' but you have to go through each widget to reset it","Q_Score":0,"Tags":"python,kivy","A_Id":65357031,"CreationDate":"2020-12-17T22:31:00.000","Title":"Kivy: Reset\/Default values to all widgets on a screen?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is there a way to return all values of all widgets on a screen to their default values (ex. 
TextInputs text: '') or do I have to write a function to go through each one, one by one, to clear them?","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":148,"Q_Id":65349266,"Users Score":0,"Answer":"I don't think Kivy properties retain any concept of a default value, so you'll have to handle this yourself.","Q_Score":0,"Tags":"python,kivy","A_Id":65349788,"CreationDate":"2020-12-17T22:31:00.000","Title":"Kivy: Reset\/Default values to all widgets on a screen?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I made a cool little project for my friend, basically a timer using tkinter, but I am confused about how to let them access this project without having VSCode or PyCharm. Is it possible for them to just see the Tkinter window or something like that? Is there an application for this? Sorry if this is a stupid question.","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":108,"Q_Id":65352071,"Users Score":2,"Answer":"You can just build an .exe (application) of your project. Then just share the application file and anyone can use the application through the .exe. 
You can use pyinstaller to convert your python code to exe.\npip install pyinstaller\nthen cd to the project folder then run the following command\npyinstaller --onefile YourFileName.py\nif you want to make exe without console showing up then use this command\npyinstaller --onefile YourFileName.py --noconsole","Q_Score":1,"Tags":"python","A_Id":65352116,"CreationDate":"2020-12-18T05:07:00.000","Title":"How do you set up a python project to be able to send to others without them having to manually copy and paste the code into an editor","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have 2 python files that do Web scraping using Selenium and Beautifulsoup and store the results in separate CSV files say file1.csv and file2.csv. Now, I want to deploy these files on the Azure cloud, I know Azure function apps will be ideal for this. But, I don't know how Functions app will support Selenium driver on it.\nBasically, I want to time trigger my 2 web scraping files and store the results in two separate files file1.csv and file2.csv that will be stored in blob storage on Azure cloud. Can someone help me with this task?\nHow can I use the selenium driver on Azure functions app?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":447,"Q_Id":65352652,"Users Score":2,"Answer":"Deploying on virtual machines or EC2 is the only option that one can use to achieve this task.\nAlso, with Heroku, we will be able to run selenium on the cloud by adding buildpacks. But when it comes to storing the files, we will not be able to store files on heroku as heroku does not persist the files. 
So, VMs or EC2 instances are the only options for this task.","Q_Score":2,"Tags":"python,azure,selenium-webdriver,web-scraping,beautifulsoup","A_Id":65372990,"CreationDate":"2020-12-18T06:28:00.000","Title":"Deploy Python Web Scraping files on Azure cloud(function apps)","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a Dataframe with millions of rows, to create a model, I have taken a random sample from this dataset using dataset.sample(int(len(dataset)\/5)) which returns a random sample of items from an axis of the object. Now I want to verify if the sample does not lose statistical significance from the population, that is, ensure the probability distribution of each of the features (columns) of the sample has the same probability distribution for the whole dataset (population). I have numerical as well as categorical features. How can I check that the features have the same probability distribution in Python?","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":1562,"Q_Id":65353833,"Users Score":1,"Answer":"This does not require a test. If you have taken a simple random sample from the entire data frame, the probability distribution of whatever features the data set has is, in fact, the whole data set. That's a property of a simple random sample.\nUnfortunately, unless the data set was ALSO sampled properly (something I assume you cannot control at this point) you cannot guarantee that the data set and sample have the same distribution. 
The probability distribution was determined at the point of sampling the data.\nBut if you're happy to assume that, then you need no additional checking step to ensure that your random sample does its job - this is provably guaranteed.","Q_Score":3,"Tags":"python,machine-learning,probability","A_Id":65354120,"CreationDate":"2020-12-18T08:29:00.000","Title":"How to check if sample has same probability distribution as population in Python?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've been searching and reading quite a bit but I can't seem to find an answer to the question of how to pause a VPython object in a simulation (for a second, preferably). I briefly considered time.sleep(), but this pauses the entire simulation; my goal, however, is to pause just one object.\nIf there are any questions or I need to elaborate further, please ask.\nKind regards,\nZo\u00eb","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":25,"Q_Id":65354377,"Users Score":1,"Answer":"Use sleep(1), not time.sleep(1). However, I don't understand the meaning of \"pause just one object\". 
If by that you mean that you have several objects moving and you want to pause one of them while keeping the others moving, you would set a global variable t = clock() and in your animation loop keep checking whether clock()-t is still less than 1 (second) and, if so, avoid changing the \"paused\" object's position.","Q_Score":0,"Tags":"vpython","A_Id":65543361,"CreationDate":"2020-12-18T09:17:00.000","Title":"How to let a VPython object pause within a simulation","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using PyCharm 2020.2.3 Professional, but I cannot recall previously typed command history as I could in the VSCode IDE (I am a previous VSCode user). For example, if I type python manage.py makemigrations and then close PyCharm, hitting the up arrow key in the terminal no longer brings up the commands I typed. What should I do?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":385,"Q_Id":65357490,"Users Score":0,"Answer":"From PyCharm 3.1 onwards, press browse history in the console; the Browse History window will appear, and you can start typing to search for specific commands.\nIn Windows, you can use Ctrl Alt e.","Q_Score":2,"Tags":"python,pycharm,ide","A_Id":65358016,"CreationDate":"2020-12-18T12:59:00.000","Title":"How To Enable Command History In PyCharm Terminal","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"To exit my pipenv in Windows cmd, I need to type exit. deactivate works, but it does not really exit from pipenv. Instead, it just deactivates the virtualenv created by pipenv. 
However, in PyCharm > Terminal, the terminal tab just closes without exiting pipenv when I type exit, making it impossible to exit from pipenv properly. Is there a workaround to this? Thank you.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":118,"Q_Id":65357917,"Users Score":1,"Answer":"I don't really know if you still need the answer, but there may be others that do...so I'll still share what worked for me.\nNOTE: I used backbox linux (an ubuntu based hacking distro) for this and not windows, however, since PyCharm is an IDE, it should still have the same settings.\nTyping deactivate only works temporarily...when you open a new terminal, the virtualenv will still persist, so I decided to remove the Python interpreter altogether.\nFor this, you'll need to go to the top left corner of your PyCharm IDE where it says 'File', select settings (or you could just press Ctrl Alt s),\ngo to 'Project ', you'll see 2 options:\n\nPython Interpreter\nProject Structure\n\nClick on Python Interpreter and go to the drop down menu and select No interpreter\nAlternatively, you could just look at the bottom right corner of PyCharm, just below Event Log, you should see something like Pipenv (your_project_name) [Python 3.x.x].\nClick it and select Interpreter settings, it will still take you to the settings place...click the drop down and select No Interpreter.\nThat's it! Good luck!","Q_Score":1,"Tags":"python,pycharm,pipenv","A_Id":68845375,"CreationDate":"2020-12-18T13:29:00.000","Title":"How do I exit from pipenv in pycharm?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"Python launcher crashes since upgrading to macOS 11.1 - I have checked that I am on the latest version of Python 3.\nI get the error macOS 11 or later required ! 
Abort trap: 6","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":223,"Q_Id":65358897,"Users Score":0,"Answer":"I had to install MacOS Python package directly from python.org - it wouldn't work otherwise.","Q_Score":0,"Tags":"python,macos,crash,launcher","A_Id":65403356,"CreationDate":"2020-12-18T14:36:00.000","Title":"macOS 11.1 - Python launcher crashes","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"Question: Why can't I edit the README.md file in Google Colab?\nBackground: I've forked a Github Repo and I want to make updates to the README.md file and push back to Github. However, when I open README.md in Google Colab, I cannot edit it at all.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":708,"Q_Id":65359711,"Users Score":2,"Answer":"I encountered this problem as well. If you have a .md file in your file system, double-clicking it in the \"Files\" sidebar will bring up a read-only view of the rendered markdown content. Usually, when you double-click on a text file, a text editor will come up allowing you to edit the contents.\nThe simplest solution I found was simply to rename the .md file to .txt. For example, change README.md to README.txt. When you double-click it, the text editor will pop up. 
Remember to change the .txt back to .md when you are done editing.","Q_Score":1,"Tags":"python,github,github-actions,google-colaboratory","A_Id":67826658,"CreationDate":"2020-12-18T15:30:00.000","Title":"Edit a README.md File in Google Colab","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using the automated tuning code for classification models in the fasttext library, and I cannot find whether the final model it gives you is trained only on the training set or on both the training and validation set.\nFor example, if you run this command:\n>> .\/fasttext supervised -input cooking.train -output model_cooking -autotune-validation cooking.valid\nDoes it give you a model trained on cooking.train only or on both cooking.train and cooking.valid?\nThanks in advance","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":58,"Q_Id":65361645,"Users Score":1,"Answer":"The model is only trained on the training set. The validation set is only for you to measure the model's ability to generalize, i.e. 
to check whether the model has actually learned the concept you want it to learn rather than just memorizing the training data.","Q_Score":0,"Tags":"python,fasttext","A_Id":65361677,"CreationDate":"2020-12-18T17:47:00.000","Title":"Fasttext automated parameter tuning training set","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Why, when I use pip list, do I receive one list of libraries,\nbut when I check in PyCharm, I see more libraries for my project?\nDoes it mean that using pip install I am installing libraries for all projects, but from PyCharm (in Settings) only for the selected project?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":37,"Q_Id":65362029,"Users Score":0,"Answer":"In PyCharm, you work in a Virtual Environment, also known as a venv.\nPackages installed there will not be installed globally.\nFor example:\nIn PyCharm, if you run pip install tabulate and then try importing tabulate outside PyCharm, it will show an ImportError, and vice-versa.\nIf you want to install that package for all projects, turn on Make available to all projects in project settings or while creating a new project.\nTo install a package outside it, you will need to run pip install in Command Prompt","Q_Score":0,"Tags":"python","A_Id":65362092,"CreationDate":"2020-12-18T18:16:00.000","Title":"libraries from pycharm vs pip, python 3.8","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need to calculate EMA for a set of data from a csv file where dates are in descending order.\nWhen I apply pandas.DataFrame.ewm I get an EMA for the latest (by date) entry equal to its own value. 
This is because ewm starts its observations from top to bottom in the DataFrame.\nSo far, I could not find an option to make ewm run in reverse. So I guess I will have to reverse my whole dataset.\nMaybe somebody knows how to make ewm start from the bottom values?\nOr is it recommended to always use a datetimeindex sorted chronologically? From oldest values on top to newest on the bottom?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":91,"Q_Id":65362721,"Users Score":0,"Answer":"From pandas' documentation:\n\nTimes corresponding to the observations. Must be monotonically increasing and datetime64[ns] dtype.\n\nI guess the datetimeindex must be chronological.","Q_Score":0,"Tags":"python,pandas,dataframe,datetimeindex","A_Id":65362758,"CreationDate":"2020-12-18T19:17:00.000","Title":"Do I have to sort dates chronologically to use pandas.DataFrame.ewm?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying with this code\na=np.logspace(-1,np.log10(10,),11.)[::-1]\nbut getting the error below\nTypeError: object of type cannot be safely interpreted as an integer.","AnswerCount":2,"Available Count":1,"Score":0.2913126125,"is_accepted":false,"ViewCount":40,"Q_Id":65364050,"Users Score":3,"Answer":"You passed 11. as num, which is supposed to be an integer. 11. is a float literal; remove the . 
to make it 11, an int literal.","Q_Score":0,"Tags":"python-3.x","A_Id":65364076,"CreationDate":"2020-12-18T21:19:00.000","Title":"cannot be safely interpreted as an integer in python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Basically, I have built a login system.\n\nThe first time that a user uses the login system, a text file is created and the username and password are saved there when the user uses a \"remember password?\" function.\n\nThe second time the software uses the system, the system already has the user and password typed in if the user previously used the \"remember password?\" function.\n\n\nThe thing is, the text file where the password and user are stored can be accessed by simply just going to folder and double clicking on it, which is awful for security reasons.\nIs it possible to make it so that the text file can't be accessed outside the program?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":48,"Q_Id":65367255,"Users Score":0,"Answer":"Basically, there isn't a way to securely store a password in clear in a file in the file system.\nYou can arrange things so that a file can only be read by a specific account on the system. However, someone with admin privilege will (probably) be able to do things that will give themselves access. And there will most likely be other ways to break file system security in some circumstances.\nThe ideal solution is to NOT store the password. Even storing it encrypted with a private key is a bad idea.\nCreating and storing a \"salted hash\" of a password can be secure ... provided that there is no way that a \"bad guy\" can capture the password characters before they have been hashed, but I don't think that is going to help you here ... 
since you apparently need to be able to recover the actual password.\nMaybe the best approach would be to investigate \"password safe\" or \"key chain\" products that are provided by the client's OS or web browser. Unfortunately, this is going to be platform specific.\nYou could also just \"hide\" the file containing the password, or (reversibly) obscure the password. But this is insecure. It can easily be defeated by reverse engineering the steps that your code is taking to hide the password.","Q_Score":0,"Tags":"python,security","A_Id":65367374,"CreationDate":"2020-12-19T06:24:00.000","Title":"Is it possible to make a program that can read from a file, but you can't open the file from outside the program?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using Spyder with Anaconda on macOS. I have already updated Anaconda and Spyder to version 4.2.0 (4.2.1 was not found). The problem is now, that if I am typing in Spyder it takes about one second until the letters appear, which is very annoying. Is there somebody who also has this problem? Or does somebody have a suggestion on what might be the problem?","AnswerCount":8,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":10753,"Q_Id":65369681,"Users Score":0,"Answer":"tools -> reset spyder to factory defaults \nand it worked","Q_Score":9,"Tags":"python,macos,anaconda,spyder","A_Id":70178564,"CreationDate":"2020-12-19T12:10:00.000","Title":"Spyder on MacOS. Typing is very laggy","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using Spyder with Anaconda on macOS. 
I have already updated Anaconda and Spyder to version 4.2.0 (4.2.1 was not found). The problem is now, that if I am typing in Spyder it takes about one second until the letters appear, which is very annoying. Is there somebody who also has this problem? Or does somebody have a suggestion on what might be the problem?","AnswerCount":8,"Available Count":3,"Score":0.049958375,"is_accepted":false,"ViewCount":10753,"Q_Id":65369681,"Users Score":2,"Answer":"I'd like to chime in and say I'm getting this on Catalina (not Big Sur).\nUsing Spyder 5.0.5 seemed to fix it.\n-- Edit\nNo it didn't. I'm on Catalina and it's still laggy as hell. Using v5.0.5","Q_Score":9,"Tags":"python,macos,anaconda,spyder","A_Id":68455345,"CreationDate":"2020-12-19T12:10:00.000","Title":"Spyder on MacOS. Typing is very laggy","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using Spyder with Anaconda on macOS. I have already updated Anaconda and Spyder to version 4.2.0 (4.2.1 was not found). The problem is now, that if I am typing in Spyder it takes about one second until the letters appear, which is very annoying. Is there somebody who also has this problem? Or does somebody have a suggestion on what might be the problem?","AnswerCount":8,"Available Count":3,"Score":1.0,"is_accepted":false,"ViewCount":10753,"Q_Id":65369681,"Users Score":23,"Answer":"Had the same exact issue with Spyder 5.0.0, on Catalina; editor being very laggy (the console was fine).\nSolution worked for me: Disable Kite!\nFrom the top menus:\nPython > Preferences... 
> Completion and linting,\nDeselect any option that calls Kite:\n\nNotify me when Kite can provide missing completions (but is unavailable!)\nEnable Kite provider\n\nPS: Tried pyqt solutions with no success (this now generates warnings in the terminal every time I open Spyder).","Q_Score":9,"Tags":"python,macos,anaconda,spyder","A_Id":68902149,"CreationDate":"2020-12-19T12:10:00.000","Title":"Spyder on MacOS. Typing is very laggy","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to perform an HMAC* length extension attack for a university task.\nTherefore I have both an HMAC* and the corresponding message provided, and want to attach another arbitrary message and recalculate the HMAC without having the key.\nAccording to our lecture, this is possible and a very common attack scenario.\nMy problem is rather implementation based:\nTo drive this attack, I need to replace the default SHA256 starting values (h0 to h7) with the existing HMAC* I already have. As I do not have the key, just pushing in the original data will not be possible.\nIs there any way except reimplementing SHA256 that would allow me to replace these starting values in python3?\nClarification\nI have a valid HMAC* h given.\nFurthermore, there is a message m that has been used (together with a secret key k) to generate h. (h = SHA256(k || m)).\nMy task: I need to find a way to derive another HMAC* h' without knowing k on the basis of m. It turned out that the new message is m' = m + pad(k||m) + a with a randomly chosen a.\nFurther clarification\n*: With \"HMAC\" I do not refer to the standard of RFC 2104. 
HMAC in general \"is a specific type of message authentication code (MAC) involving a cryptographic hash function and a secret cryptographic key.\" (Wikipedia.org\/HMAC).\nIn this case, the HMAC is calculated as h = SHA256(k || m) where k is the secret key, || is the concatenation and m is the message.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":599,"Q_Id":65369917,"Users Score":1,"Answer":"First of all, the SHA256 context includes multiple parameters. In this solution, two of these are relevant: The first is the state, which somehow represents the \"progress\" of the SHA256 algorithm. The final state is actually the SHA256 hash sum. And the second parameter is the overall message length in bits, which will be set at the end of the padding.\nFurthermore, SHA256 always uses padding, which means another sequence of bytes p is implicitly added to the input data before calculating the final hash, and it depends on the actual input value. So let's say SHA256(x) = mySHA256(x || p(x)), assuming that mySHA256 is not using padding.\nWhen the given HMAC h has been generated using h = SHA256(k || m) = mySHA256(k || m || p) where k was the secret key and m was the message, h represented the final state of the SHA256 context. Additionally, we have an implicit padding p that depends on k || m. Here, p depends on len(k) rather than on k itself, which means that we can calculate p without knowing the key, only its length.\nAs my target will only accept my modified message m' = m + a when I deliver a correct HMAC h' = SHA256(k || m'), I need to focus on that point now. By knowing the original HMAC h, I can set the state of the SHA256 context corresponding to h. As I know the message m as well, and I know that the overall message length in bits is (len(k) + len(m) + len(p)) * 8, the overall message length depends only on len(k) (not k!) because len(p) only depends on len(k) and len(m). I will iterate through a range of len(k), like 1 - 64. 
In each iteration step, I can just insert my value for len(k). So it is possible to set the overall message length (the second parameter of my SHA256 context), too.\nWhen iterating through all key lengths, there will be one value that represents the length of the key that has actually been used. In that case, I have a SHA256 context that exactly equals the context of the original calculation. We can now add our arbitrary data a to the hash calculation and create another HMAC h' that does depend on the key k without knowing it. h' = SHA256(k || m || p || a)\nBut now, we have to ensure that this HMAC h' is equal to the one the target calculates using our message m'.\nTherefore, we add our padding p to the end of the original message m, followed by our arbitrary message a. Finally, we have m' = m || p || a.\nAs the target knows the secret key in order to validate the input data, it can easily calculate SHA256(k || m') = SHA256(k || m || p || a)* and oops! Indeed that is the same hash sum as our HMAC h' that we calculated without knowing the secret key k.\nResult:\nWe cannot add a fully arbitrary message, but a message that is fully arbitrary after the padding. As the padding is mostly filled with Null-Bytes, that can disturb our attack, but that depends on each case. In my case, the Null-Bytes were ignored and I just had one artifact from the overall message length displayed before my inserted message.","Q_Score":1,"Tags":"python-3.x,cryptography,sha256,hmac,hashlib","A_Id":65384823,"CreationDate":"2020-12-19T12:40:00.000","Title":"Python hashlib SHA256 with starting value","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm training a word embedding using GENSIM (word2vec) and use the trained model in a neural network in KERAS. 
A problem arises when I have an unknown (out-of-vocabulary) word so the neural network doesn't work anymore because it can't find weights for that specific word. I think one way to fix this problem is adding a new word () to the pre-trained word embedding with zero weights (or maybe random weights? which one is better?) Is this approach fine? Also, for this word embedding, the weights are not trainable in this neural network.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":652,"Q_Id":65370297,"Users Score":1,"Answer":"Most typical is to ignore unknown words. (Replacing them with either a plug-word, or the origin-vector, is more distorting.)\nYou could also consider training a FastText mode instead, which will always synthesize some guess-vector for an out-of-vocabulary word, from the character-n-gram vectors created during training. (These synthetic vectors are often better than nothing, especially when a word has overlapping word roots with related words \u2013 but getting more training data with examples of all relevant word usages is better, and simply ignoring rare unknown words isn't that bad.)","Q_Score":1,"Tags":"python,keras,gensim,word2vec,word-embedding","A_Id":65376881,"CreationDate":"2020-12-19T13:23:00.000","Title":"Unknown words in a trained word embedding (Gensim) for using in Keras","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a data science project in Python and I wonder how to manage my data. Some details about my situation:\n\nMy data consists of a somewhat larger number of football matches, currently around 300000, and it is supposed to grow further as time goes on. 
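The "ignore unknown words" approach this answer recommends can be sketched in a few lines; the embeddings dict below is hypothetical stand-in data for a trained gensim KeyedVectors:

```python
# Hypothetical stand-in for a trained gensim KeyedVectors: word -> vector.
embeddings = {"cat": [0.1, 0.3], "dog": [0.2, 0.1], "fish": [0.0, 0.5]}

def tokens_to_vectors(tokens, kv):
    """Drop out-of-vocabulary tokens instead of inventing weights for them."""
    return [kv[t] for t in tokens if t in kv]

vecs = tokens_to_vectors(["cat", "unicorn", "dog"], embeddings)
# "unicorn" is out of vocabulary and is silently dropped.
```

Filtering before lookup means the downstream network never sees a word it has no weights for, which is less distorting than feeding it a zero or random vector.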
Attached to each match are a few tables with different numbers of rows\/columns (but similar column formats across different matches).\nNow obviously I need to iterate through this set of matches to do some computations. So while I don\u2019t think that I can hold the whole database in memory, I guess it would make sense to have at least chunks in memory, do computations on that chunk, and release it.\nAt the moment I have split everything up into single matches, gave each match an ID and created a folder for each match with the ID as folder name. Then I put the corresponding tables as small individual csv files into the folder that belongs to a given match. Additionally, I have an \u201eoverview\u201c DataFrame with some \u201emetadata\u201c columns, one row per match. I put this row as a small json into each match folder for convenience as well.\nI guess there would also be other ways to split the whole data set into chunks than match-wise, but for prototyping\/testing my code with a small number of matches, it actually turned out to be quite handy to be able to go to a specific match folder in a file manager and look at one of these tables in a spreadsheet program (although similar inspections could obviously also be made from code in appropriate settings). But now I am at the point where this huge number of quite small files\/folders slows down the OS so much that I need to do something else.\nJust to be able to deal with the data at all right now, I simply created an additional layer of folder hierarchy like \u201erange-0\u201c contains folders 0-9999, \u201erange-1\u201c contains 10000-19999 etc. 
But I\u2018m not sure if this is the way to go forward.\nMaybe I could simply save one chunk - whatever one chunk is - as a json file, but would lose some of the ease of the manual inspection.\nAt least everything is small enough, so that I can do my statistical analyses on a single machine, such that I think I can avoid map\/reduce-type algorithms.\nOn another note, I have close to zero knowledge about database frameworks (I have written a few lines of SQL in my life), and I guess I would be the only person making requests to my database, so I am in doubt that this makes sense. But in case it does, what are the advantages of such an approach?\n\nSo, to the people out there having some experience with handling data in such projects - what kind of way to manage my data, on a conceptual level, would you suggest or recommend to use in such a setting (independent of specific tools\/libraries etc.)?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":61,"Q_Id":65371475,"Users Score":1,"Answer":"Your arrangement is not bad at all. We are not used to think of it this way, but modern filesystems are themselves very efficient (noSQL) databases.\nAll you have to do is having auxiliary files to work as indexes and metadata so your application can find its way. From your post, it looks like you already have that done to some degree.\nSince you don't give more specific details of the specific files and data you are dealing with, we can't suggest specific arrangements. 
If the data lends itself to an SQL tabular representation, you could get benefits from putting all of it in a database and using some ORM - you'd also have to write adapters to get the Python object data into Pandas for your numeric analysis if you need that, and it might end up being a superfluous layer if you are already getting it to work.\nSo, just be sure that whatever adaptations you do to get the files easier to deal with by hand (like the extra layer of folders you mention) don't get in the way of your code - i.e., make your code so that it automatically finds its way across this, or any extra layers you happen to create (this can be as simple as having the final game match folder full path as a column in your \"overview\" dataframe)","Q_Score":0,"Tags":"python,pandas,database,dataframe","A_Id":65371568,"CreationDate":"2020-12-19T15:35:00.000","Title":"How should I handle a data set with around 300000 small groups of data tables?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"In python why this statement is giving false:- print(3 < (2 or 10))\nShouldn't it give true?\nPlease explain","AnswerCount":4,"Available Count":2,"Score":0.049958375,"is_accepted":false,"ViewCount":79,"Q_Id":65372776,"Users Score":1,"Answer":"print(2 or 10) prints 2\nprint(10 or 2) prints 10\n\n\nTherefore, print(3 < (2 or 10)) means print(3 < 2) which is False","Q_Score":1,"Tags":"python","A_Id":65372821,"CreationDate":"2020-12-19T17:51:00.000","Title":"In python why this statement is giving false:- print(3 < (2 or 10))","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"In python why this statement is giving false:-
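The "make your code find its way across the extra layers" advice can be as small as one path helper. A sketch under the range-bucket layout described in the question (names and the bucket size are mine):

```python
from pathlib import Path

RANGE_SIZE = 10_000  # matches per top-level "range-N" bucket, as in the question

def match_dir(root: Path, match_id: int) -> Path:
    """Map a match ID to its folder, e.g. 12345 -> root/range-1/12345."""
    return root / f"range-{match_id // RANGE_SIZE}" / str(match_id)

p = match_dir(Path("data"), 12345)  # -> data/range-1/12345
```

Storing the result of such a helper (or the full path itself) as a column in the "overview" DataFrame keeps the folder hierarchy an implementation detail the analysis code never hardcodes.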
print(3 < (2 or 10))\nShouldn't it give true?\nPlease explain","AnswerCount":4,"Available Count":2,"Score":-0.049958375,"is_accepted":false,"ViewCount":79,"Q_Id":65372776,"Users Score":-1,"Answer":"Python evaluates an or expression from left to right and short-circuits: as soon as it finds a truthy operand, it stops and returns that value. In your case, (2 or 10) evaluates to 2 because 2 is truthy, so the comparison becomes 3 < 2, which is False.","Q_Score":1,"Tags":"python","A_Id":65372932,"CreationDate":"2020-12-19T17:51:00.000","Title":"In python why this statement is giving false:- print(3 < (2 or 10))","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"so I was preparing to take this year's USACO test, but noticed that there was a new rule in the about saying \"As an important change from last year, ALL SUBMISSIONS NOW USE STANDARD INPUT AND OUTPUT (e.g., cin and cout in C++) instead of file input and output. You therefore no longer need to open files to read your input or write your output. As another note, participants are advised to re-read the contest rules, as we have clarified some of the key contest regulations (in particular, that use of any previously-written code or code from external sources is NOT allowed).\" What does the Standard input and output mean? How would that work for python? Does that mean that you don't read in data from files and write them into other files?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":429,"Q_Id":65372955,"Users Score":0,"Answer":"input() and print() should suffice.
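The short-circuit behaviour both answers describe can be checked directly:

```python
# `or` returns the first truthy operand, not a boolean:
assert (2 or 10) == 2        # 2 is truthy, so 10 is never even evaluated
assert (0 or 10) == 10       # 0 is falsy, so the second operand is returned

# Hence the surprising result: the comparison is really 3 < 2.
assert (3 < (2 or 10)) is False

# What the asker probably meant ("is 3 less than 2, or less than 10?"):
assert (3 < 2 or 3 < 10) is True
```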
It worked when I tried to do a USACO problem in python a while back.\nIf you want to read the input like a file, you can use sys.stdin and sys.stdout, but I find that it is unnecessary and probably just extra work.","Q_Score":0,"Tags":"python,input,output","A_Id":70571221,"CreationDate":"2020-12-19T18:08:00.000","Title":"USACO Contest New Rules with Standard Input and Output","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using spyder & want to install finplot. However when I did this I could not open spyder and had to uninstall & reinstall anconda.\nThe problem is to do with PyQt5 as I understand. The developer of finplot said that one solution would be to install PyQt5 version 5.9.\n\nError: spyder 4.1.3 has requirement pyqt5<5.13; python_version >= \"3\", but you'll have pyqt5 5.13.0 which is incompatible\n\nMy question is how would I do this? 
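A minimal sketch of the standard input/output pattern the USACO answer describes; writing solve() against a stream argument lets the same code read sys.stdin in the contest and an in-memory string locally (the two-integer task is invented for illustration):

```python
from io import StringIO

def solve(stream):
    """Read two integers from the first line of `stream` and return their sum."""
    a, b = map(int, stream.readline().split())
    return a + b

# In a contest you read from standard input and print to standard output:
#   import sys
#   print(solve(sys.stdin))
# For local testing, feed the same function an in-memory stream:
print(solve(StringIO("3 4\n")))  # prints 7
```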
To install finplot I used the line below,\n\npip install finplot\n\nIs there a way to specify that it should only install PyQt5?","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":3234,"Q_Id":65373375,"Users Score":2,"Answer":"As far as I understand, you just want to install PyQt5 version 5.9. You can try the command below if you have pip installed on your machine:\n\npip install PyQt5==5.9\n\nEdit: First you need to uninstall your PyQt5 5.13\n\npip uninstall PyQt5","Q_Score":1,"Tags":"python,pip,anaconda","A_Id":65373447,"CreationDate":"2020-12-19T18:54:00.000","Title":"pip install a specific version of PyQt5","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a single html page with dynamic images from a database in Django.\nI also have a modal in the same page, set to invisible, that opens when an image is clicked.\nMy intention is that when I click on any image it should open an HTML modal with the clicked image and its description text from the db.\nHow do I display the data of the currently clicked image and its description?\nI have tried to pass {{profile.image.url}} but this gives the same information on click for all images.\nI don't think I need to give sample code for this.","AnswerCount":2,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":548,"Q_Id":65374111,"Users Score":2,"Answer":"There are 2 options on how to achieve this:\n\nrender modals for all images with django and open them when user clicks one of the images.\ncreate 1 modal and write some javascript to fetch information about the clicked image in the background.
Note, that you'll also need to create an endpoint in Django that will accept image ID and return image information.","Q_Score":2,"Tags":"python,html,css,django,django-views","A_Id":65374228,"CreationDate":"2020-12-19T20:25:00.000","Title":"load dynamic data in html modal in Django","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"It is about the following: I want to develop a tool where you click on a button \"Browse\", a Win Explorer window opens and there you can select a PDF file. After this is done, you click on \"Compress now\" and the file size is then automatically reduced so that you can send the PDF file, for example, via E-Mail (possibly also PDF files with images -> the quality of the images can also be degraded). Is there a way to implement such a tool with Python? If so how? Is there a specific module for it?\nI have already started to program the tool so that now I can click on \"Browse\" and select a PDF file, now I have to go to the compressing part. 
Does anyone have any ideas?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":413,"Q_Id":65374739,"Users Score":0,"Answer":"Absolutely YES.\n\nTo do that you will need a GUI library such as tkinter or PyQt5 (there are more).\nFor PDF manipulations like splitting or merging, use pdfminer or PyPDF2.\nAs for zipping, there is the standard-library module zipfile, which you can use to zip files.\nThen, to make it a real app, you can use cx_freeze or pyinstaller to make an exe!\n\nGood Luck!!!","Q_Score":0,"Tags":"python,pdf,compression,reduce,image-compression","A_Id":65374804,"CreationDate":"2020-12-19T21:41:00.000","Title":"Compress select PDF-Files with Python - Reduce the Size","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a notebook that %run another notebook under JupyterLab. They can call each other's functions back and forth and share some global variables.\nI now want to convert the notebooks to py files so they can be executed from the command line.\nI followed the advice found on SO and imported the 2nd file into the main one.\nHowever, I found out that they cannot call each other's functions. This is a major problem because the 2nd file is a service to the main one, but it continuously uses functions that are part of the main one.\nEssentially, the second program is non-GUI and it is driven by the main one, which is a GUI program. Thus whenever the service program needs to print, it checks to see if a flag is set that tells it that it runs in GUI mode, and then instead of simply printing it calls a function in the main one which knows how to display it on the GUI screen.
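The zipfile part of this answer as a small sketch; note that a PDF's internal streams are usually already compressed, so zipping typically shrinks PDFs far less than it shrinks plain text:

```python
import tempfile
import zipfile
from pathlib import Path

def zip_file(src: Path, dest: Path) -> None:
    """Write `src` into a DEFLATE-compressed zip archive at `dest`."""
    with zipfile.ZipFile(dest, "w", compression=zipfile.ZIP_DEFLATED) as zf:
        zf.write(src, arcname=src.name)

# Demo with a throwaway text file in a temp directory.
tmp = Path(tempfile.mkdtemp())
src = tmp / "report.txt"
src.write_text("hello " * 1000)
zip_file(src, tmp / "report.zip")
```

For actually reducing a PDF's size (rather than wrapping it in an archive), re-encoding its images at lower quality is the usual lever, which is what the question is ultimately after.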
I want to keep this separation.\nHow can I achieve it without too much change to the service program?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":107,"Q_Id":65375282,"Users Score":0,"Answer":"I ended up collecting all the GUI functions from the main GUI program, and putting them into a 3rd file in a class, including the relevant variables.\nIn the GUI program, just before calling the non GUI program (the service one) I created the class and set all the variables, and in the call I passed the class.\nThen in the service program I call the functions that are in the class and got the variables needed from the class as well.\nThe changes to the service program were minor - just reading the variables from the class and change the calls to the GUI function to call the class functions instead.","Q_Score":0,"Tags":"python,share","A_Id":65392206,"CreationDate":"2020-12-19T22:58:00.000","Title":"Running another script while sharing functions and variable as in jupyter notebook","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to use extra_view Django package. I've installed it and in terminal it showed as successful and I added it to installed apps in setting.py. As I don't use virtual environment it's all installed globally, but when I try to import that package there is an unresolved import error. I use VS Code if it means something. It also happened with some other packages that I installed. Do you have any idea what is happening?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":36,"Q_Id":65375302,"Users Score":0,"Answer":"Everything is ok actually. I just restarted VS Code and then there weren't any errors anymore. 
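A minimal sketch of the approach described in this answer: collect the GUI-facing functions into a class and pass an instance to the service code, so the service never imports the GUI module (all names are hypothetical):

```python
class ConsoleUI:
    """Fallback used when the service runs without the GUI program."""
    def show(self, text: str) -> None:
        print(text)

class GuiUI:
    """In the real app this would push text into a GUI widget instead."""
    def __init__(self):
        self.lines = []
    def show(self, text: str) -> None:
        self.lines.append(text)  # stand-in for e.g. a Tk Text.insert call

def run_service(ui) -> None:
    """The non-GUI 'service' program: it only calls methods on the ui object
    it was handed, so it needs no GUI-mode flag and no import of the GUI."""
    ui.show("service started")
    ui.show("service done")

gui = GuiUI()
run_service(gui)          # GUI mode: output captured by the GUI facade
run_service(ConsoleUI())  # CLI mode: output printed
```

The service code change is exactly what the answer describes: replace direct print/GUI calls with calls on the passed-in class.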
Apparently VS Code was the only issue.","Q_Score":0,"Tags":"python,django,django-views","A_Id":65375568,"CreationDate":"2020-12-19T23:00:00.000","Title":"Django error: unresolved import 'extra_views'","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":1},{"Question":"I trained a model using Tensorflow object detection API using Faster-RCNN with Resnet architecture. I am using tensorflow 1.13.1, cudnn 7.6.5, protobuf 3.11.4, python 3.7.7, numpy 1.18.1 and I cannot upgrade the versions at the moment. I need to evaluate the accuracy (AP\/mAP) of the trained model with the validation set for the IOU=0.3. I am using legacy\/eval.py script on purpose since it calculates AP\/mAP for IOU=0.5 only (instead of mAP:0.5:0.95)\npython legacy\/eval.py --logtostderr --pipeline_config_path=training\/faster_rcnn_resnet152_coco.config --checkpoint_dir=training\/ --eval_dir=eval\/\nI tried several things including updating pipeline config file to have min_score_threshold=0.3:\neval_config: {\nnum_examples: 60\nmin_score_threshold: 0.3\n..\nUpdated the default value in the protos\/eval.proto file and recompiled the proto file to generate new version of eval_pb2.py\n\/\/ Minimum score threshold for a detected object box to be visualized\noptional float min_score_threshold = 13 [default = 0.3];\nHowever, eval.py still calculates\/shows AP\/mAP with IOU=0.5\nThe above configuration helped only to detect objects on the images with confidence level < 0.5 in the eval.py output images but this is not what i need.\nDoes anybody know how to evaluate the model with IOU=0.3?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":116,"Q_Id":65375340,"Users Score":0,"Answer":"I finally could solve it by modifing hardcoded matching_iou_threshold=0.5 argument value in multiple method arguments (especially def 
__init) in the ..\/object_detection\/utils\/object_detection_evaluation.py","Q_Score":1,"Tags":"python,tensorflow,object,detection,evaluation","A_Id":65417109,"CreationDate":"2020-12-19T23:06:00.000","Title":"How to evaluate trained model Average Precison and Mean AP with IOU=0.3","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have been trying to stack two images.\nThe end result will be used as the input to my convolutional neural network.\nNow I tried to use dstack, I also tried to use PIL by importing Image.blend but I cannot seem to be arriving to my desired result.\nI am asking if anyone has any other ideas which I can use would be greatly appreciated.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":225,"Q_Id":65375662,"Users Score":0,"Answer":"Resize them so that they are the same size, and then use np.stack with axis=3 (if you are using multi-channel images. Else, use axis=2.\nOr are you trying to combine them into one image? If so, how? Masking, adding subtracting?","Q_Score":1,"Tags":"python,image-processing,stack","A_Id":65377715,"CreationDate":"2020-12-19T23:57:00.000","Title":"Stack two images to obtain a single image on Python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"My question: How does the Python tool Poetry know the path of the virtual environment of a project?\nExplanation: When I run poetry init inside a directory, a new project is created. Then I run poetry install and a new virtual environment is created. HOWEVER, neither the path nor the hash of that virtual environment are stored in pyproject.toml or poetry.lock as I expected. 
How does Poetry then know the location of the virtual environment when I run poetry env info -p?\nBesides wanting to know what is going on, I need to know this for 2 reasons:\n\nHow can I move a directory with a Poetry project without breaking the link to its virtual environment?\nHow to know which of Poetry's virtual environments are unused and can thus be deleted?\n\nPossible solution: Looking into the source code of Poetry, it seemed to me that a file envs.toml may include a mapping from filesystem directories to virtual environment hashes, but on my Mac OS 11.1 I can't find such a file.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":3585,"Q_Id":65376059,"Users Score":9,"Answer":"I dived deeper into the source code and I may have understood it now:\n\nThe relevant code is at poetry\/utils\/env.py inside the method EnvManager.generate_env_name(...)\nThe code deduces the location of the environment by using the project name from pyproject.toml and adding the hash of the parent directory of pyproject.toml\n\nAs a consequence:\n\nThere is no simple way to delete environments which are not used anymore\nAlso, if I want to move the directory of a poetry project, I would need to rename the virtual environment's folder and replace the hash correctly. 
This should work, but it would be preferable to just run poetry install again and create a new virtual environment","Q_Score":9,"Tags":"python-poetry","A_Id":65376300,"CreationDate":"2020-12-20T01:05:00.000","Title":"How does the Python tool Poetry know the path of a project's virtual environment?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"What is the difference between pool.imap_unordered() and pool.apply_async()?\nWhen is pool.imap_unordered() preferred over pool.apply_async(), or vice versa?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":111,"Q_Id":65376757,"Users Score":1,"Answer":"The result of calling pool.apply_async(f, (1, 2, 3, 4)) is that f(1, 2, 3, 4) will be called in some thread. The value returned by apply_async is an AsyncResult which you can use to wait on the result.\nThe result of calling pool.imap_unordered(f, (1, 2, 3, 4)) is an iterator. It returns the results of f(1), f(2), f(3) and f(4) in an unspecified order.","Q_Score":1,"Tags":"python,multiprocessing,pool","A_Id":65376858,"CreationDate":"2020-12-20T03:51:00.000","Title":"What is the difference between pool.imap_unordered() and pool.apply_async()?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"My jupyter notebooks are no longer capable of saving files, so I need to completely uninstall all python, jupyter, and anaconda files so that it's like I never had them, but I can't figure out how. Uninstalling anaconda in control panel does not uninstall jupyter.
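A runnable illustration of the difference this answer draws; multiprocessing.pool.ThreadPool is used here because it shares the Pool API while staying easy to run anywhere:

```python
from multiprocessing.pool import ThreadPool  # same interface as multiprocessing.Pool

def square(x):
    return x * x

with ThreadPool(4) as pool:
    # apply_async schedules ONE call, square(10), and returns an AsyncResult.
    res = pool.apply_async(square, (10,))
    assert res.get(timeout=5) == 100

    # imap_unordered calls square() once PER ITEM of the iterable and yields
    # results lazily, in whatever order the workers finish.
    results = sorted(pool.imap_unordered(square, [1, 2, 3, 4]))
assert results == [1, 4, 9, 16]
```

So apply_async suits one (or a few heterogeneous) background calls, while imap_unordered suits mapping one function over many items when completion order does not matter.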
Uninstalling it from anaconda navigator also does not work.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":851,"Q_Id":65377319,"Users Score":0,"Answer":"Turns out I was uninstalling successfully, but the problem was not Anaconda. The Windows Defender \"Controlled Folder\" protection was on.","Q_Score":0,"Tags":"python,windows,jupyter-notebook,anaconda","A_Id":65377424,"CreationDate":"2020-12-20T05:57:00.000","Title":"How to completely uninstall anaconda and jupyter? Windows 10","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I just learned how to use the machine learning model Random Forest; however, although I read about the random_state parameter, I couldn't understand what it does. For example, what is the difference between random_state = 0 and random_state = 300\nCan someone please explain?","AnswerCount":3,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":544,"Q_Id":65380064,"Users Score":0,"Answer":"Random forests introduce stochasticity by randomly sampling data and features. Running RF on the exact same data may produce different outcomes for each run due to these random samplings. Fixing the seed to a constant i.e. 
1 will eliminate that stochasticity and will produce the same results for each run.","Q_Score":0,"Tags":"python,data-science,random-forest,random-seed","A_Id":65411284,"CreationDate":"2020-12-20T12:53:00.000","Title":"random_state in random forest","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I just learned how to use the machine learning model Random Forest; however, although I read about the random_state parameter, I couldn't understand what it does. For example, what is the difference between random_state = 0 and random_state = 300\nCan someone please explain?","AnswerCount":3,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":544,"Q_Id":65380064,"Users Score":0,"Answer":"train_test_split splits arrays or matrices into random train and test subsets. That means that every time you run it without specifying random_state, you will get a different result; this is expected behavior.\nWhen you use random_state=any_value, your code will show exactly the same behaviour every time you run it.","Q_Score":0,"Tags":"python,data-science,random-forest,random-seed","A_Id":65381558,"CreationDate":"2020-12-20T12:53:00.000","Title":"random_state in random forest","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I just learned how to use the machine learning model Random Forest; however, although I read about the random_state parameter, I couldn't understand what it does.
For example, what is the difference between random_state = 0 and random_state = 300\nCan someone please explain?","AnswerCount":3,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":544,"Q_Id":65380064,"Users Score":0,"Answer":"In addition, most people use the number 42 when we use random_state.\nFor example, random_state = 42 and there's a reason for that.\nBelow is the answer.\nThe number 42 is, in The Hitchhiker's Guide to the Galaxy by Douglas Adams, the \"Answer to the Ultimate Question of Life, the Universe, and Everything\", calculated by an enormous supercomputer named Deep Thought over a period of 7.5 million years. Unfortunately, no one knows what the question is","Q_Score":0,"Tags":"python,data-science,random-forest,random-seed","A_Id":65385737,"CreationDate":"2020-12-20T12:53:00.000","Title":"random_state in random forest","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"When we have discrete variable such as age, number of sick leaves, number of kids in the family and number of absences within a dataframe which i wanted to make a prediction model with binary result, is it okay to include these variables along with other numeric continuous variables into a standardization or normalization process?\nor should i categorize these discrete variables into a categoric variable and turned them into dummy variables?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":980,"Q_Id":65382855,"Users Score":0,"Answer":"If they are not one of the target variables, It is okay to include these variables along with other numeric continuous variables into a standardization or normalization 
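A stdlib illustration of what a fixed seed does, tying the three answers together; random.Random(seed) plays the role of random_state, and the particular value (0, 42, 300, ...) only selects which reproducible stream of draws you get:

```python
import random

def sample_indices(seed, n=5):
    """Draw n pseudo-random row indices; `seed` plays the role of random_state."""
    rng = random.Random(seed)
    return [rng.randrange(100) for _ in range(n)]

# A fixed seed makes every run reproduce exactly the same draws; different
# seeds generally produce different (but individually reproducible) draws.
assert sample_indices(42) == sample_indices(42)
```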
process.","Q_Score":0,"Tags":"python,normalization,methodology,discrete,standardization","A_Id":65388263,"CreationDate":"2020-12-20T17:39:00.000","Title":"Standardizing or Normalizing discrete variable?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm using the builtin auth app of Django not the django-allauth. Is it possible to limit the allowed special characters in Django's builtin auth app usernames? the default allowed characters are:\nLetters, digits and @\/.\/+\/-\/_\nI want to be:\nLetters, digits and .\/_\nRegards","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":169,"Q_Id":65383465,"Users Score":0,"Answer":"You can define your own authentication backend. That way you can make the rules you want.","Q_Score":0,"Tags":"python,django,django-authentication,django-users","A_Id":65383744,"CreationDate":"2020-12-20T18:39:00.000","Title":"How to limit Django's builtin auth allowed username characters?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a large CSV file(>100 GB) that I want to read into memory and process the data in chunks. There are two constraints I have:\n\nObviously I cannot read the whole entire file into memory. I only have about 8GB of ram on my machine.\nThe data is tabular and unordered. 
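Standardization treats a discrete count column exactly like a continuous one, which is the point of this answer; a stdlib z-score sketch:

```python
from statistics import mean, pstdev

def standardize(values):
    """Z-score a numeric column: (x - mean) / std. Works the same whether the
    column is continuous (e.g. income) or a discrete count (e.g. kids)."""
    mu, sigma = mean(values), pstdev(values)
    return [(x - mu) / sigma for x in values]

kids = [0, 1, 2, 1, 4]            # discrete count feature
z = standardize(kids)
assert abs(mean(z)) < 1e-9        # standardized column has mean 0...
assert abs(pstdev(z) - 1) < 1e-9  # ...and unit standard deviation
```

Dummy-encoding would only be preferable if the counts act as unordered categories rather than quantities.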
I need to read the data in groups.\n\n\n\n\n\nTicker\nDate\nField1\nField2\nField3\n\n\n\n\nAAPL\n20201201\n0\n0\n0\n\n\nAAPL\n20201202\n0\n0\n0\n\n\nAAPL\n20201203\n0\n0\n0\n\n\nAAPL\n20201204\n0\n0\n0\n\n\nNFLX\n20201201\n0\n0\n0\n\n\nNFLX\n20201202\n0\n0\n0\n\n\nNFLX\n20201203\n0\n0\n0\n\n\nNFLX\n20201204\n0\n0\n0\n\n\n\n\nThe concern here is that the data has to be read in groups. Grouped by Ticker and date. If I say I want to read 10,000 records in each batch. The boundary of that batch should not split groups. i.e. All the AAPL data for 2020 December should end up in the same batch. That data should not appear in two batches.\nMost of my co-workers when they face a situation like this, they usually create a bash script where they use awk, cut, sort, uniq to divide data into groups and write out multiple intermediate files to the disk. Then they use Python to process these files. I was wondering if there is a homogenous Python\/Pandas\/Numpy solution to this.","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":134,"Q_Id":65384208,"Users Score":0,"Answer":"I would look into two options\nVaex and Dask.\nVaex seems to be focused exactly on what you need. Lazy processing and very large datasets. Check their github. However it seems, that you need to convert files to hdf5, which may be little bit time consuming.\nAs far as Dask is concerned, I wouldnt count on success. Dask is primarily focused on distributed computation and I am not really sure if it can process large files lazily. 
But you can try and see.","Q_Score":3,"Tags":"python,pandas,numpy","A_Id":65384605,"CreationDate":"2020-12-20T19:56:00.000","Title":"How do you read a large file with unsorted tabular data in chunks in Python?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a large CSV file(>100 GB) that I want to read into memory and process the data in chunks. There are two constraints I have:\n\nObviously I cannot read the whole entire file into memory. I only have about 8GB of ram on my machine.\nThe data is tabular and unordered. I need to read the data in groups.\n\n\n\n\n\nTicker\nDate\nField1\nField2\nField3\n\n\n\n\nAAPL\n20201201\n0\n0\n0\n\n\nAAPL\n20201202\n0\n0\n0\n\n\nAAPL\n20201203\n0\n0\n0\n\n\nAAPL\n20201204\n0\n0\n0\n\n\nNFLX\n20201201\n0\n0\n0\n\n\nNFLX\n20201202\n0\n0\n0\n\n\nNFLX\n20201203\n0\n0\n0\n\n\nNFLX\n20201204\n0\n0\n0\n\n\n\n\nThe concern here is that the data has to be read in groups. Grouped by Ticker and date. If I say I want to read 10,000 records in each batch. The boundary of that batch should not split groups. i.e. All the AAPL data for 2020 December should end up in the same batch. That data should not appear in two batches.\nMost of my co-workers when they face a situation like this, they usually create a bash script where they use awk, cut, sort, uniq to divide data into groups and write out multiple intermediate files to the disk. Then they use Python to process these files. 
I was wondering if there is a homogenous Python\/Pandas\/Numpy solution to this.","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":134,"Q_Id":65384208,"Users Score":0,"Answer":"How about this:\n\nopen the file\nloop over reading lines: For each line read:\n\n\nparse the ticker\nif not done already:\n\ncreate+open a file for that ticker (\"ticker file\")\nappend to some dict where key=ticker and value=file handle\n\n\nwrite the line to the ticker file\n\n\nclose the ticker files and the original file\nprocess each single ticker file","Q_Score":3,"Tags":"python,pandas,numpy","A_Id":65384348,"CreationDate":"2020-12-20T19:56:00.000","Title":"How do you read a large file with unsorted tabular data in chunks in Python?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I had some embedded profile pictures in Data Studio that worked great until a couple of days ago.\nI used this kind of URL:\n\"https:\/\/twitter.com\/(userName)\/profile_image?size=original\"\nIt suddenly doesn't work at all.
I couldn't find any alternative either.\nPlease Help","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":18,"Q_Id":65384347,"Users Score":0,"Answer":"Since the legacy twitter.com was shut down on 15th December 2020, these URLs will no longer respond.\nYou will need to use the official Twitter API (either v1.1 or v2) to retrieve the User object, and get the profile image from there.","Q_Score":0,"Tags":"python,twitter","A_Id":65385680,"CreationDate":"2020-12-20T20:11:00.000","Title":"link to Twitter User profile Pictures not Working anymore","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have data set that has both NaN and inf values and I am looking for linear regression library that can take both NaN and inf values. I have used sklearn in the past but also have seen linregress used a lot, but both libraries require NaN and inf values to be dropped beforehand.\nThanks for the suggestions","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":87,"Q_Id":65385645,"Users Score":1,"Answer":"As @Moosefeather mentioned you have to deal with this yourself. Easiest option is to drop those samples or replace them with an average.\nA more sophisticated approach would be something like estimating the expected missing value conditioned on the other values of the observation. 
This is more work, and if you have enough clean data, dropping the bad values might be better.","Q_Score":0,"Tags":"python,scikit-learn","A_Id":65386130,"CreationDate":"2020-12-20T22:55:00.000","Title":"Linear Regression Library that can take NaN and inf values suggestion","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to make a server info command and I want it to display the server name, boost count, boost members and some other stuff as well.\nThe only problem is I have looked at the docs and searched online and I can't find out how to find the boost information.\nI don't have any code as I've not found any code to try and use for myself.\nIs there any way to get this information?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":2193,"Q_Id":65385757,"Users Score":4,"Answer":"Guild Name - guild_object.name\nBoost count - guild_object.premium_subscription_count\nBoosters, the people who boosted the server - guild_object.premium_subscribers\nIf you're doing this in a command, as I assume, use ctx.guild instead of guild_object. For anything further, you can re-read the docs as all of the above information is in it under the discord.Guild
I know there is no objective answer to this -- it depends on your exact features and what in particular you are trying to learn -- but I'm looking for a numerical ballpark answer to this.\nI'm asking because I have a dataset with around 280 features and want to understand if this is way too few features to use with Naive Bayes. (I tried running Naive Bayes on my dataset and although I got 86% accuracy, I cannot trust this number as my data is imbalanced and I believe this may be responsible for the high accuracy. I am currently trying to fix this problem.)\nIn case it's relevant: the exact problem I'm working on is generating time tags for Wikipedia articles. Many times the infobox of a Wikipedia article contains a date. However, many times this date appears in the text of the article but is missing from the infobox. I want to use Naive Bayes to identify which date from all the dates we find in the article's text we should place in the infobox. Every time I find a sentence with a date in it I turn it into a feature vector -- listing what number paragraph I found this in, how many times this particular date appears in the article, etc. I've limited myself to a small subset of Wikipedia articles -- just apple articles -- and as a result, I only have 280 or so features. Any idea if this is enough data?\nThanks!","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":92,"Q_Id":65386211,"Users Score":1,"Answer":"I know there is no objective answer to this -- it depends on your exact features and what in particular you are trying to learn -- but I'm looking for a numerical ballpark answer to this.\n\nWell, you kind of answered this question yourself but you're still hoping there is an objective answer ;)\nThere can't be any kind of objective answer (whether precise or not) because it depends on the data, i.e. relationships between features and class. 
It's easy to find examples of simple problems where only a couple of features are enough to achieve perfect performance, and it's also easy to create a dataset of millions of random features which can't even reach mediocre performance.\n\ngood results (90% accuracy)\n\nSimilar point about performance: there are tasks where 90% accuracy is mediocre and there are tasks where 60% is excellent, it depends on how hard the problem is (i.e. how easy it is to find the patterns in the data which help predict the answer).\n\nI'm asking because I have a dataset with around 280 features and want to understand if this is way too few features to use with Naive Bayes.\n\nDefinitely not too few, as per my previous observations. But it also depends on how many instances there are, in particular the ratio features\/instances. If there are too few instances, the model is going to overfit badly with NB.\n\nmy data is imbalanced and I believe this may be responsible for the high accuracy\n\nGood observation: accuracy is not an appropriate evaluation measure for imbalanced data. The reason is simple: if the majority class represents, say, 86% of the instances, the classifier can just label all the instances with this class and obtain 86% accuracy, even though it does nothing useful. You should use precision, recall and F-score instead (based on the minority class).","Q_Score":2,"Tags":"python,nlp,dataset,wikipedia,naivebayes","A_Id":65396742,"CreationDate":"2020-12-21T00:34:00.000","Title":"What's the minimum number of features you need to get good results with a Naive Bayes model?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to manipulate a txt file that uses en dashes, but cmd reads it as \u00e2\u20ac\u201c.
Em dashes also have broken formatting and are displayed as \u00e2\u20ac\u201d\nThe funny thing is that if I use both symbols inside the script (.py file) and associate it to a print command, all is displayed correctly. In the interpreter there is also no problem at all.\nIs there any way I can make it recognize those characters before importing the file? Thank you!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":121,"Q_Id":65387109,"Users Score":0,"Answer":"I no longer need help with this since I was able to figure it out on my own, but I'm keeping it here since it might help others in the future.\nThe problem was that py was opening the file as ANSI, while due to the special characters the file had to be opened as UTF-8. Adding encoding='utf-8' when calling the open function solved the problem.","Q_Score":2,"Tags":"python,formatting,windows-10","A_Id":65401833,"CreationDate":"2020-12-21T03:07:00.000","Title":"En dash\/ Em dash breaking txt file formatting when trying to read in cmd","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have to create .lis(output generated from SQR) type of file using python.\nI am not able to find any existing python package for it.\nplease help. Thanks","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":74,"Q_Id":65388946,"Users Score":1,"Answer":"The LIS files are just text files.\nWe created our own python library that allows us to use print positioning like SQR does, including support for page count, headers, footers, landscape, and portrait.
They are not offering it to the public at this time.\nIt shouldn't be too hard to create your own Python package to do this yourself.","Q_Score":0,"Tags":"python,lis,sqr","A_Id":65400117,"CreationDate":"2020-12-21T07:29:00.000","Title":"Is there any preferred Python package to create lis type of file(output file generated by SQR)","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I packed my python file with this command: pyarmor pack --clean -e \" --onefile --icon favicon.ico\" myfile.py\nBut the problem is, after packing and running the .exe file, the program gives me the error:\n[Errno 2] No such file or directory: '.\/files\/urls.txt'\nEven though in the directory in which I'm running the .exe there is a folder with the name \"files\" and in that folder there is a .txt file with the name \"urls\".\nHow can I fix this error? Thanks for any help in advance.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":187,"Q_Id":65390808,"Users Score":1,"Answer":"I cannot give you the complete answer, only some pointers to help you, because I do not know the specifics of pyarmor.\nI would check where myfile.py \"thinks\" you are. This can be done with a print and a file (I think you can easily find the absolute path of . (dot) that is the current dir where you are)\nYou can also print the absolute path of '.\/files\/urls.txt' to verify its existence.
(see also the related question: how do I check whether a file exists without exception)\nIf that does not point in the right direction, please comment or edit your question to provide more details.","Q_Score":1,"Tags":"python,pyarmor","A_Id":65405784,"CreationDate":"2020-12-21T10:07:00.000","Title":"Pyarmor .exe \"can't find file\" error after packing .py file","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using wkhtmltopdf 0.12.6 (qt patched) with python 3.6 and pdfkit as wrapper for wkhtmltopdf.\nI am getting this error \"wkhtmltopdf exited with non-zero code -9. error:\" and I am stuck, not able to find any information on what this error is related to. Can someone please guide me to fix this error?\nThanks","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":298,"Q_Id":65391353,"Users Score":0,"Answer":"This is due to your wkhtmltopdf process running out of memory.\nPlease increase the cpu and memory of the system where you are running this process.","Q_Score":0,"Tags":"python,python-3.x,wkhtmltopdf,pdfkit","A_Id":67334489,"CreationDate":"2020-12-21T10:45:00.000","Title":"wkhtmltopdf exited with non-zero code -9. error.
How can I get rid of this error?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"When I run the .exe from my project folder from terminal with dist\\app\\app.exe it runs fine, I can see my output in terminal etc.\nHowever, with double-clicking the .exe I just get a flashing terminal window.\nDoes anyone have an idea or a clue?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":899,"Q_Id":65393282,"Users Score":0,"Answer":"When double-clicking you are going to run your application and it will close immediately after completion. The only exception is if the application is going to ask for input. This means that your application most likely runs fine.\nIt is also possible that you opened cmd-line as Administrator, and thanks to that the application runs fine, but when you double-click it is not being executed because it lacks access. It is not possible to tell without a closer investigation though.","Q_Score":0,"Tags":"python,python-3.x,windows,pyinstaller,executable","A_Id":65393406,"CreationDate":"2020-12-21T13:03:00.000","Title":"Pyinstaller .exe works from terminal but not by double-clicking -> flashing console window","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"Every single time I install Python on my Windows 10 operating system, I get an error importing OpenCV.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":31,"Q_Id":65395138,"Users Score":0,"Answer":"Try installing the OpenCV package and reinstalling Python.
Also you might need to downgrade your Python version.","Q_Score":0,"Tags":"python,opencv-python","A_Id":65395221,"CreationDate":"2020-12-21T15:05:00.000","Title":"How do I install Python in my operating system properly","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm a fairly novice programmer who is trying to create simple 2D animation software for a school project. I want there to be the option to save a short animation as a single file so that it can be loaded by the program again. Part of this will be storing multiple images in a single file along with some metadata such as the frame number of the image and possibly its size and position on a canvas. I have done some research and have found multiple different methods which seem like they will work, but aren't really suited to this purpose. There's no documentation I can find online on how to build animation software so I would really appreciate it if someone could point me towards any suitable method I can use in python.\nThanks!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":26,"Q_Id":65396216,"Users Score":0,"Answer":"You could define your own small file standard, where you set a specific length of bytes for your metadata for each image and a header that tells you how many images are stored in the file as well as their size. Then you could read that file by first reading the header (which has a specific length). This header then tells you how many images there are and their size. Then you always read a chunk of the file containing the metadata for each file (also a fixed size in bytes) and then the image with a size in bytes that was given by the header.
This is a bit of work to implement, however.","Q_Score":0,"Tags":"python,animation,image-processing,file-management,storing-data","A_Id":65396474,"CreationDate":"2020-12-21T16:15:00.000","Title":"Efficient way to store a fairly small amount of images in a single file in python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I was given a list of intervals, for example [[10,40],[20,60]] and a list of positions [5,15,30].\nWe should return the frequency with which each position appears in the intervals; the answer would be [[5,0],[15,1],[30,2]] because 5 wasn't covered by any interval, 15 was covered once, and 30 was covered twice.\nIf I just do a for loop the time complexity would be O(m*n), where m is the number of intervals and n is the number of positions.\nCan I preprocess the intervals and make it faster? I was thinking of sorting the intervals first and using binary search, but I am not sure how to implement it in python. Can someone give me a hint? Or can I use a hashtable to store intervals? What would be the time complexity for that?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":92,"Q_Id":65396940,"Users Score":0,"Answer":"You can use a frequency array to preprocess all interval data and then query for any value to get the answer. Specifically, create an array able to hold the min and max possible end-points of all the intervals. Then, for each interval, increment the frequency of the starting interval point and decrease the frequency of the value just after the end interval. At the end, accumulate this data for the array and we will have the frequency of occurrence of each value between the min and max of the interval.
Each query is then just returning the frequency value from this array.\n\nfreq[] --> larger than max-min+1 (min: minimum start value, max: maximum end value)\nFor each [L,R] --> freq[L]++, freq[R+1] = freq[R+1]-1\nfreq[i] = freq[i]+freq[i-1]\nFor any query V, answer is freq[V]\n\nDo consider tradeoffs when range is very large compared to simple queries, where simple check for all may suffice.","Q_Score":0,"Tags":"python,optimization,time-complexity","A_Id":65419266,"CreationDate":"2020-12-21T17:02:00.000","Title":"find frequency of a int appear in a list of interval","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"In a classification problem involving the identification of fraudulent transactions, I reduced the dimensionality of the data(28 columns)[A complete quasi-separation was detected by Logit in statsmodels] using a stacked auto encoder(28->15->5) and fed the compressed data(5 columns) into a neural network with two hidden layers, each having 10 nodes and 'relu' activation function. 
I trained the model over 100 epochs (the AUC metric didn't go beyond 0.500 and the train loss became constant after a few epochs). The model predicted all the records of the test set as non-fraudulent (0 class) and yielded a confusion matrix like this:\nConfusion matrix:\n[[70999 0]\n[ 115 0]]\nAccuracy Score : 0.9983828781955733\nCan someone please explain the problem behind this result and suggest a feasible solution?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":31,"Q_Id":65396944,"Users Score":0,"Answer":"Since your accuracy is over 99% on all zero class prediction, the percent of fraud cases in your train set is less than 1%.\nTypically, if the fraud cases are rare, the model will not place enough importance on the fraud cases to predict well.\nTo fix this you can add costs or class weights to penalize the majority class, or use class balancing methods such as SMOTE","Q_Score":0,"Tags":"python,neural-network,classification","A_Id":65397808,"CreationDate":"2020-12-21T17:02:00.000","Title":"\"All zero class\" prediction by Neural Network","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I\u2019m using a project which is calling a script where python means python3. However python -V shows version 2. I know I have both versions (python3 -V, shows expected version) installed.\nHow can I run a script from this project and have it run the correct version of Python? I suspect I shouldn\u2019t have to change anything like aliases or environment paths. Although I haven\u2019t used them before, this sounds exactly like what a virtual env is for.
Am I correct?\nETA:\nI\u2019m calling a script from the command line:\n\nmake foobar","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":26,"Q_Id":65397714,"Users Score":0,"Answer":"Set your desired Python version in the IDE and create a virtualenv, especially if the project requires certain packages\/libraries, to keep it isolated and not run into errors down the road.","Q_Score":0,"Tags":"python,virtualenv","A_Id":65397829,"CreationDate":"2020-12-21T17:56:00.000","Title":"I\u2019m using a project which is expecting Python3. How can I use that without messing anything up?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"So I have created a telegram bot in python.\nIt is run by its main.py file.\nIt is running on the server.\nBut sometimes it stops due to internet or some minor issues.\nCan there be code which can automatically restart the main.py code on the server, maybe using the Deamonize library?\nIf yes, please suggest how to do it.\n\nThank YOU","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":130,"Q_Id":65398617,"Users Score":1,"Answer":"If you are using Linux, you can use crontab.\ncrontab is used for scheduling events","Q_Score":1,"Tags":"python,server,telegram-bot","A_Id":68671493,"CreationDate":"2020-12-21T19:10:00.000","Title":"Can we Automatically Restart the Telegram bot(Python) once it is stopped on server using Deamonize or something else?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a pygame project stored in C:\\Users\\name\\GameProject.
I followed instructions to create an exe by typing in pyinstaller --onefile -w game.py. However, every time it tells me that pyinstaller is not recognized as an internal or external command. I googled some answers, and apparently python is not in my path. The problem is, I've added everything I could to both the user path and system path. I even reinstalled python, checking add to path. Pyinstaller still is unable to make me an exe file. Can I have some insight on why this might be? I'm on windows 10, using python 3.9.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":3773,"Q_Id":65399842,"Users Score":0,"Answer":"To everyone that has the same problem as me, if you are using PyCharm, make sure to install pyinstaller on the project interpreter as well! It worked for me.\nIf you don't know how:\n\nClick file at the top left corner\nClick settings\nFind your project on the toolbar on the left\nClick project interpreter\nTo the right there will be a plus sign\nClick that and search pyinstaller\nInstall\n\nGood luck!","Q_Score":0,"Tags":"python,path,pyinstaller","A_Id":65412184,"CreationDate":"2020-12-21T20:55:00.000","Title":"How do I add pyinstaller to PATH?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a pygame project stored in C:\\Users\\name\\GameProject. I followed instructions to create an exe by typing in pyinstaller --onefile -w game.py. However, every time it tells me that pyinstaller is not recognized as an internal or external command. I googled some answers, and apparently python is not in my path. The problem is, I've added everything I could to both the user path and system path. I even reinstalled python, checking add to path. Pyinstaller still is unable to make me an exe file. Can I have some insight on why this might be?
I'm on windows 10, using python 3.9.","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":3773,"Q_Id":65399842,"Users Score":1,"Answer":"I know that this might seem odd but try this:\n\ntry uninstalling python, then reinstalling it \"make sure you press\nadd to PATH\".\n\nwhen you do install it again, restart your device and open the\ncommand prompt.\n\ntype pip install pyinstaller.\n\n\nI encountered the same issue and when I did this it worked perfectly.\nit's definitely an environmental variable issue.","Q_Score":0,"Tags":"python,path,pyinstaller","A_Id":69990276,"CreationDate":"2020-12-21T20:55:00.000","Title":"How do I add pyinstaller to PATH?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"tried reinstalling, uninstalling same thing. Windows 10, python 3.9. Any suggestions? Tried now again and it gave me a returned non-zero exit status 1.","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":7487,"Q_Id":65401324,"Users Score":1,"Answer":"Running python -m pip install python-dotenv solved it for me.","Q_Score":3,"Tags":"python","A_Id":70058004,"CreationDate":"2020-12-21T23:27:00.000","Title":"Python Error: ModuleNotFoundError: No module named 'dotenv'","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a python project that uses two R packages. I have to use these packages because they don't exist for Python as of today. While my project works great, one obstacle is that users have to install these two packages using R (or R studio) in their local systems. 
I was wondering if it is possible to add these package names in the Python project's requirements.txt file so that they get installed with other python packages. Any leads on this are helpful... just trying to make it easy for the users of my project.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":189,"Q_Id":65403108,"Users Score":1,"Answer":"As essentially answered in the comments, Python and R have completely different packaging systems. It is not possible to add R packages to requirements.txt because it is used to store Python packages.\nHowever, you could have setup code for your Python package that installs R packages when your Python code is installed or at runtime. In that case the R packages are installed using R's own packaging system, and nothing prevents you from storing them in a flat file (for example called requirements_r.txt).\nA word of caution though. Installing a Python package that has the side effect of changing the directory of available R packages might be frowned upon by some.","Q_Score":0,"Tags":"python,r,rpy2,requirements.txt,feather","A_Id":65403442,"CreationDate":"2020-12-22T04:02:00.000","Title":"Installing R packages from python project's requirements.txt","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is there any way to have pynput hold down on a key instead of just pressing it once? I have checked other posts on this website and none of them really help. keyboard.press only presses a key once, and doesn't hold it down.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":101,"Q_Id":65403247,"Users Score":0,"Answer":"You can use a library called pyAutoGui. It has the functions that you are looking for.
I'm not sure how to do it within pynput, though.","Q_Score":0,"Tags":"python,pynput","A_Id":65403265,"CreationDate":"2020-12-22T04:21:00.000","Title":"Press and hold a key using pynput","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have bunch of urls to fetch. I wish to know which one would be better strategy for speed and good User experience.\nStrategy:\n\nUsing Javascript (fetch or XHR). All urls will be fetched on client side.\nSend only one fetch\/XHR (calling python in JS) to python and get all the URL's data in python and send back the response to client.\n\nPlease let me know which one would be better strategy.\nThank you,\nNamratha Patil.","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":46,"Q_Id":65403964,"Users Score":1,"Answer":"You can prioritize your data loading based on many factors, like:\n\nWhether the data is required in the first fold to render elements, and similar factors. If you go with the 2nd strategy it will delay the loading and you will have a bad user experience.","Q_Score":1,"Tags":"javascript,python","A_Id":65404063,"CreationDate":"2020-12-22T06:07:00.000","Title":"Wish to Know Better Strategy for Fetching Multiple URLS Javascript + Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have bunch of urls to fetch. I wish to know which one would be better strategy for speed and good User experience.\nStrategy:\n\nUsing Javascript (fetch or XHR).
All urls will be fetched on client side.\nSend only one fetch\/XHR (calling python in JS) to python and get all the URL's data in python and send back the response to client.\n\nPlease let me know which one would be better strategy.\nThank you,\nNamratha Patil.","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":46,"Q_Id":65403964,"Users Score":1,"Answer":"The network requests are computationally expensive, so batching and sending only one request is almost always preferable.","Q_Score":1,"Tags":"javascript,python","A_Id":65403974,"CreationDate":"2020-12-22T06:07:00.000","Title":"Wish to Know Better Strategy for Fetching Multiple URLS Javascript + Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am trying to classify images as \"good\" or \"bad\". In a given region of interest, if the region of interest is painted well then it is a good image, else bad. I segmented the painted parts using K-means clustering, then I counted the pixels of the painted parts. How can I set a threshold value to classify images as good or bad by using the counted pixel numbers? Or is there a better approach that I can try? I tried training a simple CNN but the dataset has a big class imbalance (as I observed) and I don't have labels for images.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":47,"Q_Id":65404798,"Users Score":1,"Answer":"There is no \"right\" answer to your question, you are the only one who could know what constitutes an acceptable paint-job. My suggestion would be to create a script which processes a big number of images you consider to be \"good\", append all your pixel counts to a list and then extract some statistics from that list.
See what the min, max, and mean values of that list are and decide accordingly what your threshold value should be. Then do the same thing for images you consider to be \"bad\" and see if the threshold value is always bigger than your max \"bad\" value. Of course, the more data you have, the more reliable your result will be.","Q_Score":0,"Tags":"python,computer-vision","A_Id":65404934,"CreationDate":"2020-12-22T07:36:00.000","Title":"Setting a threshold value for color segmented images to classify them","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I try to convert this String to only the link: {\"link\":\"https:\/\/i.imgur.com\/zfxsqlk.png\"}\nI'm trying to create a discord bot, which sends random pictures from the API https:\/\/some-random-api.ml\/img\/red_panda.\nWith imageURL = json.loads(requests.get(redpandaurl).content) I get the JSON string, but what do I have to do so that I only get the link, like this: https:\/\/i.imgur.com\/zfxsqlk.png\nSorry if my question is confusingly written, I'm new to programming and don't really know how to describe this problem.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":50,"Q_Id":65405747,"Users Score":0,"Answer":"What you get from json.loads() is a Python dict. You can access values in the dict by specifying their keys.\nIn your case, there is only one key-value pair in the dict: \"link\" is the key and \"https:\/\/i.imgur.com\/zfxsqlk.png\" is the value. 
You can get the link and store it in a variable by appending [\"link\"] to your line of code:\nimageURL = json.loads(requests.get(redpandaurl).content)[\"link\"]","Q_Score":0,"Tags":"python,json","A_Id":65406385,"CreationDate":"2020-12-22T08:56:00.000","Title":"Convert Json format String to Link{\"link\":\"https:\/\/i.imgur.com\/zfxsqlk.png\"}","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Hi, I created a new file\/folder in an S3 bucket and now I want to change the folder\/file name.\nHow do I rename a file\/folder in an S3 bucket using Python boto3?","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":1899,"Q_Id":65408114,"Users Score":1,"Answer":"There is no 'rename' function in Amazon S3. Object names are immutable.\nYou would need to:\n\nCopy the object to a new Key (filename)\nDelete the original object\n\nPlease note that folders do not actually exist in Amazon S3. Rather, the full path of the object is stored in its Key. Thus, it is not possible to rename folders, since that would involve renaming all objects within that path. 
(And objects can't be renamed, as mentioned above.)\nIf you wanted to \"rename a folder\", you could write a Python script that will:\n\nObtain a listing of all objects within the given Prefix\nLoop through each object, then:\nCopy the object to a new Key\nDelete the original object\n\nIf you do not want to code this, then there are some tools (eg Cyberduck) that give a nice user interface and can do many of these operations for you.","Q_Score":0,"Tags":"python,python-3.x,amazon-web-services,amazon-s3,boto3","A_Id":65431512,"CreationDate":"2020-12-22T11:44:00.000","Title":"How to rename the s3 file name by using python boto3","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I previously made a Django admin superuser for a project, and now for another project I created another admin site superuser, but when I register models on the admin site they get registered on both admin sites (i.e. the previous one and the new one).","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":53,"Q_Id":65408468,"Users Score":0,"Answer":"You will need to create a Proxy model for your current model and register this Proxy model for your new admin panel.","Q_Score":0,"Tags":"python,django","A_Id":65408932,"CreationDate":"2020-12-22T12:08:00.000","Title":"Registring models on django admin site","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I want to start using Visual Studio Code to learn Python. What I want is that when I run code like print(\"Hello World\"), the output appears in an external shell - a pop-up cmd window (not in the integrated terminal below). 
Before I installed Python and its extension in Visual Studio Code, the output seemed to appear like that. This is my first try at testing, so I don't know what exactly made it look like that. So what should I do to make my code output appear in the external pop-up shell?","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":681,"Q_Id":65410066,"Users Score":0,"Answer":"You could just go to the directory where the Python file is, e.g. C:\/users\/\/, and run it using py .py\nFor example, if your Python file is named test.py, run it with py test.py. However, if you can't find the path to your file, just go to File Explorer and, in the input box (address bar), type in cmd (command prompt) and it will open it for you.","Q_Score":0,"Tags":"python,visual-studio-code","A_Id":65410599,"CreationDate":"2020-12-22T14:04:00.000","Title":"Visual Studio Code - Output in the External Shell (Not Integrated)","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is it okay to use python pandas to manipulate tabular data within a flask\/django web application?\nMy web app receives blocks of data which we visualise in chart form on the web. We then provide the user some data manipulation operations like sorting the data or deleting a given column. 
We have our own custom code to perform these data operations, but it would be much easier to do with pandas; however, I'm not sure if that is a good idea or not.","AnswerCount":2,"Available Count":2,"Score":0.1973753202,"is_accepted":false,"ViewCount":259,"Q_Id":65412596,"Users Score":2,"Answer":"Actually yes, but don't forget to move your computation into a separate process if it takes too long.","Q_Score":0,"Tags":"python,pandas,flask","A_Id":65412933,"CreationDate":"2020-12-22T16:47:00.000","Title":"Using pandas within web applications - good or bad?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Is it okay to use python pandas to manipulate tabular data within a flask\/django web application?\nMy web app receives blocks of data which we visualise in chart form on the web. We then provide the user some data manipulation operations like sorting the data or deleting a given column. 
We have our own custom code to perform these data operations, but it would be much easier to do with pandas; however, I'm not sure if that is a good idea or not.","AnswerCount":2,"Available Count":2,"Score":0.1973753202,"is_accepted":false,"ViewCount":259,"Q_Id":65412596,"Users Score":2,"Answer":"It\u2019s a good question. Pandas could be used in a development environment if the dataset isn\u2019t too big; if the dataset is really big, I think you could use Spark dataframes or RDDs, and if the data grows over time you can look into streaming data with PySpark.","Q_Score":0,"Tags":"python,pandas,flask","A_Id":65412799,"CreationDate":"2020-12-22T16:47:00.000","Title":"Using pandas within web applications - good or bad?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am a new programmer and I saw that Google is written in Python. I know that HTML, CSS, and JS are used to make websites, so how is Python \"linked\" to this? This is probably a very basic question but I am new to all this.","AnswerCount":5,"Available Count":3,"Score":0.0798297691,"is_accepted":false,"ViewCount":152,"Q_Id":65417854,"Users Score":2,"Answer":"So your code in the browser is called the front end (FE). Sometimes it's all you need. However, sometimes you need to store some data on the server and\/or retrieve it from there. That is where the back end (BE) comes into play.\nBE is basically an app on some computer (maybe a server, maybe a Raspberry Pi, anything really) that listens to requests from the network. Let's say your code needs some data from the server. Your code on the front end makes an AJAX request to the network address of this server on some specific port. The BE, which may be written in Python, or any other language, receives the request and does something with it.\nIt can fetch data from the DB or anything really. 
Then it sends a response back to your FE, with some data, or confirmation that everything was done successfully, or an error if something went wrong.","Q_Score":5,"Tags":"javascript,html,python-3.x,web,backend","A_Id":65417906,"CreationDate":"2020-12-23T01:10:00.000","Title":"I am curious as to how python is connected to websites","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am a new programmer and I saw that Google is written in Python. I know that HTML, CSS, and JS are used to make websites, so how is Python \"linked\" to this? This is probably a very basic question but I am new to all this.","AnswerCount":5,"Available Count":3,"Score":0.0399786803,"is_accepted":false,"ViewCount":152,"Q_Id":65417854,"Users Score":1,"Answer":"Python is used for backend development. The backend is the part of your website that runs on your server, not in the browser. The backend is used for authentication, communicating with the database, and much more. There are some popular frameworks in Python, like Django and Flask.","Q_Score":5,"Tags":"javascript,html,python-3.x,web,backend","A_Id":65417943,"CreationDate":"2020-12-23T01:10:00.000","Title":"I am curious as to how python is connected to websites","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am a new programmer and I saw that Google is written in Python. I know that HTML, CSS, and JS are used to make websites, so how is Python \"linked\" to this? 
This is probably a very basic question but I am new to all this.","AnswerCount":5,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":152,"Q_Id":65417854,"Users Score":0,"Answer":"The Google page in front of you is called the front end, which is written in HTML, CSS, and JS and usually interpreted by browsers. In the end, HTML, CSS, and JS are all code, and thus strings (or binary).\nPython is used to generate those strings, the code, on the back end.","Q_Score":5,"Tags":"javascript,html,python-3.x,web,backend","A_Id":65417986,"CreationDate":"2020-12-23T01:10:00.000","Title":"I am curious as to how python is connected to websites","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Django version 3.0.3\nI'm trying to use Django's built-in user authentication system, specifically its login form and view.\nWhere is the form = LoginForm() or form = AuthenticationForm() line located?\nI can't find it in the LoginView and AuthenticationForm definitions. 
And I have no other ideas as to where to look.\nI'm just curious about how Django determines the context under the hood.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":116,"Q_Id":65419725,"Users Score":0,"Answer":"You need to import the built-in AuthenticationForm:\nfrom django.contrib.auth.forms import AuthenticationForm\n\nYou need to make an AuthenticationForm object and just pass it to the render function:\nreturn render(request, 'user\/login.html', {'form': form, 'title': 'Sign In'})\n\n\nNote:\nWithout sending the form instance to the render function, the login form won't be displayed.\nThe same applies to a RegistrationForm.","Q_Score":3,"Tags":"python,django","A_Id":65419804,"CreationDate":"2020-12-23T06:02:00.000","Title":"Django - where does the system determine \"form\" is Authentication Form?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am building face recognition for a large number of people and I want it to detect more and more people as I add more data to train the model.\nMy current pipeline is:\n\nDetecting faces with Yolov4-tiny\nRecognizing faces with a KNN classifier (I train it with around 80 classes, with each class containing around 5 pictures)\n\nCurrently, it can run in real time at around 10fps on a CPU. My concern is that through some research, I found that KNN will have problems if I increase the dataset (the curse of dimensionality). So I would like to know if it is OK to use KNN for this problem. If not, is there a way around this or another way to solve this problem?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":219,"Q_Id":65420484,"Users Score":2,"Answer":"Increasing the dataset does not cause the curse of dimensionality. 
The curse of dimensionality occurs in high-dimensional spaces, for example when using a large number of features. Increasing the dataset instead has a positive effect.\nI do see a problem in only using 5 pictures per class.\nAlso, if you are interested in real-time performance (usually people mean 30fps+ when talking about real-time), I would look into running yolov4-tiny on a GPU instead of a CPU if that is possible.","Q_Score":0,"Tags":"python,tensorflow,machine-learning,deep-learning,computer-vision","A_Id":65420642,"CreationDate":"2020-12-23T07:29:00.000","Title":"Face recognition for large number of people","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want a security profiler for python. Specifically, I want something that will take as input a python program and tell me if the program tries to make system calls, read files, or import libraries. If such a security profiler exists, where can I find it? If no such thing exists and I were to write one myself, where could I have my profiler 'checked' (that is, verified that it works)?\nIf you don't find this question appropriate for SO, let me know if there is another SE site I can post this on, or if possible, how I can change\/rephrase my question. Thanks","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":190,"Q_Id":65420594,"Users Score":3,"Answer":"Usually, Python uses an interpreter called CPython. It is hard to tell from Python code by itself whether it opens files or does something special, because a lot of Python libraries and the interpreter itself are written in C, and system calls\/libc calls can happen only from there. 
Also, Python syntax by itself can be very obscure.\nSo, to answer your suspicion: I suspect this would need specific knowledge of the python programming language - it does not look like that, because it is really about the C language.\nYou might think it is possible to patch CPython itself. Well, I guess that is not quite correct either. A lot of shared libraries use C\/C++ code just as CPython itself does. Tensorflow, for example.\nGoing further, I guess it is possible to do the following things:\n\npatch the compiler which compiles C\/C++ code for CPython\/modules, which is hard I guess.\njust use a usual profiler, and trace which files, directories and calls are used by python itself for operation, and whitelist them because they are needed, which is the best option in my opinion (AppArmor for example).\nmaybe you would be interested in patching CPython itself, where it is possible to hook the needed functions and calls to external C libraries, but it can be annoying because you will have to revise every library added to your project, and also C code is often used for performance (e.g. the json module), which doesn't leave too much open.","Q_Score":5,"Tags":"python-3.x,security","A_Id":65506767,"CreationDate":"2020-12-23T07:39:00.000","Title":"Finding or building a python security profiler","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to upload training job artifacts to S3 in a non-compressed manner.\nI am familiar with the output_dir one can provide to a sagemaker Estimator, then everything saved under \/opt\/ml\/output is uploaded compressed to the S3 output dir.\nI want to have the option to access a specific artifact without having to decompress the output every time. Is there a clean way to go about it? 
If not, is there any workaround in mind?\nThe artifacts of my interest are small metadata files, .txt or .csv, while in my case the rest of the artifacts can be ~1GB, so downloading and decompressing is quite excessive.\nAny help would be appreciated.","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":253,"Q_Id":65421005,"Users Score":1,"Answer":"I ended up using the checkpoint path that is by default being synced with the specified S3 path in an uncompressed manner.","Q_Score":1,"Tags":"python,boto3,amazon-sagemaker","A_Id":65977591,"CreationDate":"2020-12-23T08:15:00.000","Title":"how to save uncompressed outputs from a training job in using aws Sagemaker python SDK?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I was able to read an Excel file that is on the network shared drive when I run the Python code locally, but when I try to do the same in PCF, it throws an error like \"No such file or directory\". What should I do to make my code read the Excel file in PCF?\nShared drive path: df=pd.read_excel('\/\/X\/\/Proj\/\/app\/\/Data\/\/sep.xlsm')\nerror in PCF: 2020-12-23T13:53:15.77+0530 [APP\/PROC\/WEB\/0] ERR with open(filename, \"rb\") as f:\n2020-12-23T13:53:15.77+0530 [APP\/PROC\/WEB\/0] ERR FileNotFoundError: [Errno 2] No such file or directory: '\/\/X\/\/Proj\/\/app\/\/Data\/\/sep.xlsm'","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":132,"Q_Id":65421279,"Users Score":0,"Answer":"It seems like you didn't mount the file share to your app, and hence the issue.\nIf the file share is mounted and accessible from PCF, then I would suggest trying the fully qualified name (like \/\/fs00ab01.svr.net\/Proj\/app\/Data\/sep.xlsm) rather than the X drive or 
something.","Q_Score":0,"Tags":"python,pcf,shared-drive","A_Id":65438041,"CreationDate":"2020-12-23T08:37:00.000","Title":"How to make my python code that is hosted in PCF to read excel file from network shared drive","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"The basic way I usually use is list.index(element) and reversed_list.index(element), but this fails when I need to search for many elements and the length of the list is too large, say 10^5 or 10^6 or even larger. What is the best possible way (which uses very little time) to do the same?","AnswerCount":4,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":41,"Q_Id":65422175,"Users Score":0,"Answer":"Well, someone needs to do the work of finding the element, and in a large list this can take time! Without more information or a code example, it'll be difficult to help you, but usually the go-to answer is to use another data structure - for example, if you can keep your elements in a dictionary instead of a list, with the key being the element and the value being an array of indices, you'll be much quicker.","Q_Score":0,"Tags":"python-3.x,list,find-occurrences","A_Id":65422216,"CreationDate":"2020-12-23T09:49:00.000","Title":"What is the best possible way to find the first AND the last occurrences of an element in a list in Python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to build an app with buildozer, and it gets stuck when downloading hostpython3. 
The last message is:\n[INFO]: Downloading hostpython3 from https:\/\/www.python.org\/ftp\/python\/3.8.1\/Python-3.8.1.tgz\nand it never gets downloaded. I tried downloading it manually and placing it in ...\/Python\/kivy_sms\/.buildozer\/android\/platform\/build-armeabi-v7a\/packages, but it removes it and does the same thing. Is there a way to bypass this and download it manually?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":136,"Q_Id":65422740,"Users Score":0,"Answer":"Check the other dirs under ...\/packages; the downloaded ones should have an empty file whose name starts with \".make\", besides the .tgz file. Mimicking this should stop buildozer from downloading the file again.","Q_Score":0,"Tags":"python,kivy,buildozer","A_Id":66131137,"CreationDate":"2020-12-23T10:27:00.000","Title":"Buildozer doesn't install hostpython3","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am currently looking for a possibly fast way to create (or use an already existing) classifier for images. The main goal is to classify the images by whether they are (mainly) a logo or not. This means that I am not interested in recognizing the brand\/name of the company. Instead, the model would have to tell how likely it is that the image is a logo.\nDoes such a categorizer already exist? And if not, is there any possible solution to avoid neural networks for this task?\nThanks in advance.","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":60,"Q_Id":65422821,"Users Score":1,"Answer":"I am not sure about the existence of this project, but I have a couple of ideas that can work for this without neural networks. I think as a convention neural networks would be much easier, but it might be done with the K-means algorithm or another clustering algorithm. 
I imagine that if logo data are in one area and other image data are in another, they can be clustered. However, I haven't done anything like that before, but theoretically it seems logical.","Q_Score":0,"Tags":"python,image-processing,image-classification","A_Id":65423238,"CreationDate":"2020-12-23T10:32:00.000","Title":"Binary image classifier for logos","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am currently constructing a model in Pytorch that requires multiple custom layers. I have only been defining the forward method and thus do not define a backward method. The model seems to run well, and the optimizer is able to update using the gradients from the layers. However, I see many people defining backward methods, and I wonder if I am missing something.\nWhy might you need to define a backward pass?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":481,"Q_Id":65425429,"Users Score":3,"Answer":"In very few cases should you be implementing your own backward function in PyTorch. 
This is because PyTorch's autograd functionality takes care of computing gradients for the vast majority of operations.\nThe most obvious exceptions are:\n\nYou have a function that cannot be expressed as a finite combination of other differentiable functions (for example, if you needed the incomplete gamma function, you might want to write your own forward and backward which used numpy and\/or lookup tables).\n\nYou're looking to speed up the computation of a particularly complicated expression for which the gradient could be drastically simplified after applying the chain rule.","Q_Score":1,"Tags":"python,optimization,pytorch,layer","A_Id":65425809,"CreationDate":"2020-12-23T13:54:00.000","Title":"Why define a backward method for a custom layer in Pytorch?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I read a file from Excel into a pandas dataframe.\nI have to make a few transformations on floating data type values.\nI want to add trailing zeroes or change the decimal places of the values.\nFor example:\n25 to 25.0000\n23.3 to 23.3000\n24.55 to 24.5500\nI have tried changing the values to str and then adding zeroes.\nBut on exporting it back to Excel, I am getting string (text), which I want as numbers.\nI have also tried the Decimal library, but I am facing the same issue while exporting.\nKindly help, guys.\nI want to export the file to Excel so I need to make changes accordingly.\nThank you","AnswerCount":4,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":452,"Q_Id":65425954,"Users Score":0,"Answer":"You could try df['column']=df['column'].astype(float). 
This would change the column to float numbers.\nIf you convert back to string, Excel reads it as text.","Q_Score":0,"Tags":"python,excel,pandas,dataframe,numpy","A_Id":65426368,"CreationDate":"2020-12-23T14:30:00.000","Title":"How to add trailing zeroes to floating point value and export to excel as float(number)?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm currently developing a Django app that allows students to programmatically develop SVG graphics. They can code Python in their browser with the ACE editor. The code is executed on the server, stored in a database, and the generated SVG (custom library) returned and displayed. An example code that displays a filled ellipse looks like so:\ngraph.draw(Circle(cx=0, cy=0, r=20, fill=\"lime\").scale(2, 1))\nNow I'm wondering how I could extend this app to do some 3D. I stumbled over X3Dom, which seems promising and not too hard to generate, and I could write another lightweight pythonic library for this. But it doesn't seem to do CSG (constructive solid geometry), which is a major drawback.\nAny hints in what direction I should investigate for some 3D web technology that allows easy 3D scene generation with server-side Python and that implements CSG?\nNB: OpenJSCad is similar to what I'd like to achieve, except that my solution allows for classroom collaboration and it must expose Python to students as the programming language. The aim is to spice up the teaching of Python programming with graphics.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":232,"Q_Id":65426061,"Users Score":0,"Answer":"I believe I used three.js to do CSG a while back. There used to be an example online. You are right that X3D does not do CSG. I was doing cross sections of the earth and found a way with X3D. 
You might be able to use VPython or Brython in the browser if you're worried about Python not running in the browser. I\u2019ve only used Brython for a short time, testing another person\u2019s project, and VPython not at all.\nIf you\u2019re doing something like inverseCSG or CSGNet, is your class available online?\nIn other words, maybe try to find a Python library that does CSG instead of searching for a rendering engine in JS. Don\u2019t view the browser as limited to JS.\nI only found three.js when I was looking.\nMaybe search for a solution which is not a solid solution.","Q_Score":0,"Tags":"python,3d,x3d","A_Id":67595240,"CreationDate":"2020-12-23T14:38:00.000","Title":"Python web app for programmatic 3D scene construction","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm currently developing a Django app that allows students to programmatically develop SVG graphics. They can code Python in their browser with the ACE editor. The code is executed on the server, stored in a database, and the generated SVG (custom library) returned and displayed. An example code that displays a filled ellipse looks like so:\ngraph.draw(Circle(cx=0, cy=0, r=20, fill=\"lime\").scale(2, 1))\nNow I'm wondering how I could extend this app to do some 3D. I stumbled over X3Dom, which seems promising and not too hard to generate, and I could write another lightweight pythonic library for this. But it doesn't seem to do CSG (constructive solid geometry), which is a major drawback.\nAny hints in what direction I should investigate for some 3D web technology that allows easy 3D scene generation with server-side Python and that implements CSG?\nNB: OpenJSCad is similar to what I'd like to achieve, except that my solution allows for classroom collaboration and it must expose Python to students as the programming language. 
The aim is to spice up the teaching of Python programming with graphics.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":232,"Q_Id":65426061,"Users Score":0,"Answer":"Try checking out the library \"trimesh\" for Python, which relies mainly on watertight STL files but allows you to do some boolean operations for CSG. You can subtract one mesh from the other, take the union, and find the intersection. Plus, it has some primitive functions directly, like cylinders and spheres.","Q_Score":0,"Tags":"python,3d,x3d","A_Id":67607317,"CreationDate":"2020-12-23T14:38:00.000","Title":"Python web app for programmatic 3D scene construction","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am on an Ubuntu machine and I want to use Python in my C code, but when I include the Python.h header file, it shows an error:\nPython.h: No such file or directory\nIs there any way to fix this? 
I have already tried:\nsudo apt-get install python3-dev and\nsudo apt-get install python-dev\nbut it keeps showing the error.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":32,"Q_Id":65428162,"Users Score":1,"Answer":"The Python.h file is not in the default compiler include path.\nAdd the output of pkg-config --cflags python3 to your compiler command line.\nNow the compiler will know where to find Python.h (and any dependencies it may have).","Q_Score":0,"Tags":"c,python-3.8,ubuntu-20.04","A_Id":65431056,"CreationDate":"2020-12-23T17:08:00.000","Title":"How to make Python.h file work in Ubuntu?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have Qiskit installed via Anaconda and a virtual environment set up in Python 3.8. When I run (.venv) C:\\Users\\brenm>jupyter notebook (in the Anaconda prompt) it fails and throws 'jupyter' is not recognized as an internal or external command, operable program or batch file.\nTo counter this, I ran (.venv) C:\\Users\\brenm>python -m pip install jupyter --user and jupyter notebook installed properly. But when I run jupyter notebook in the Anaconda prompt, it still throws 'jupyter' is not recognized as an internal or external command, operable program or batch file.\nI'm very confused as to what is happening because I believed jupyter notebook was a Qiskit dependency that was supposed to be installed already. 
More so, I'm confused why when I manually install jupyter notebook, the command jupyter notebook is not recognized.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":67,"Q_Id":65428875,"Users Score":0,"Answer":"Since you are sure that your Python library path is in your system variables , you can try this command :\npython -m notebook","Q_Score":0,"Tags":"python,python-3.x,jupyter-notebook,anaconda,qiskit","A_Id":65428918,"CreationDate":"2020-12-23T18:03:00.000","Title":"Jupyter Notebook failed command in Anaconda Prompt for Qiskit","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Meaning can I name my methods and attributes as long (and descriptive) as I want, or should I make an effort to make them as concise and short as possible for the sake of runtime.","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":24,"Q_Id":65430198,"Users Score":1,"Answer":"The length of a method name affects only the parser for the interpreter. The time to read the characters and build the string is insignificant compared to the run time of the program. 
You should use meaningful names: any time a human spends in interpreting the method name will be more than the microseconds lost in execution.","Q_Score":0,"Tags":"python,python-3.x,oop","A_Id":65430248,"CreationDate":"2020-12-23T19:56:00.000","Title":"Does the name length for an attribute \/ method add runtime to a function?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Meaning can I name my methods and attributes as long (and descriptive) as I want, or should I make an effort to make them as concise and short as possible for the sake of runtime.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":24,"Q_Id":65430198,"Users Score":0,"Answer":"The length of the name doesn't affect the runtime or memory consumption, it all depends on the value of the attribute","Q_Score":0,"Tags":"python,python-3.x,oop","A_Id":65430241,"CreationDate":"2020-12-23T19:56:00.000","Title":"Does the name length for an attribute \/ method add runtime to a function?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Trying apt(-get) doesn't work, pip doesn't work and downloading the .deb package itself doesn't work either so here we are. 
I'll post any error messages deemed necessary, thanks in advance.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":187,"Q_Id":65431822,"Users Score":0,"Answer":"if anyone runs into trouble with this, I managed to solve this by adding deb http:\/\/ftp.nl.debian.org\/debian stretch main to \/etc\/apt\/sources.list then doing sudo apt update and finally sudo apt install python-wxgtk3.0","Q_Score":0,"Tags":"python,installation,debian-based","A_Id":65432642,"CreationDate":"2020-12-23T22:43:00.000","Title":"How do I install python-wxgtk3.0 on Parrot-sec (pretty much debian)?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have an ModelAdmin with a set of fields in list_display.\nI want the user to be able to click a checkbox in order to add or remove these fields.\nIs there a straightforward way of doing this? I've looked into Widgets but I'm not sure how they would change the list_display of a ModelAdmin","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":71,"Q_Id":65431971,"Users Score":0,"Answer":"To do this I had to\n\nOverride an admin template (and change TEMPLATES in settings.py). 
I added a form with checkboxes so user can set field\nAdd a new model and endpoint to update it (the model stores the fields to be displayed, the user submits a set of fields in the new admin template)\nUpdate admin.py, overriding get_list_display so it sets fields based on the state of the model object updated","Q_Score":0,"Tags":"python,django","A_Id":65443640,"CreationDate":"2020-12-23T23:02:00.000","Title":"How can I let the user of an Django Admin Page control which list_display fields are visible?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Let's say we have an established websocket connection between our server and a remote server and the remote server from time to time sends some data to us.\nSo does our websocket connection spend outbound data traffic when receiving data?\nMy guess is it does not, because the receiving data gets accumulated in the memory. So when you do a .recv(), the websocket just pulls out the data from the memory locally and sends nothing to the other server. Is this correct?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":135,"Q_Id":65432174,"Users Score":1,"Answer":"At the level of the web socket protocol it is likely that recipient server sends a ping message to the originating client every couple of minutes, which responds with a pong.\nSo, the few-tens-of-bytes pong message flows back every once in a while.\nAt the TCP \/ IP level, the receiver server responds to every second incoming data packet with an ACK packet, comprising 30 bytes. Incoming data packets can carry up to 1460 bytes of payload data plus the 30 bytes.\nSo, there's a minimal amount of reverse data transmission, but it's not zero.\nIf you must have zero reverse transmission you need to use a datagram protocol. 
But datagram protocols are lossy: there's no way to recover a lost datagram.","Q_Score":1,"Tags":"python,websocket,protocols,packet","A_Id":65432341,"CreationDate":"2020-12-23T23:29:00.000","Title":"Does a websocket spend outbound data traffic when receiving data?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a while loop in a shell command:\nsp = subprocess.Popen(\"while [ 1 ]; do say \\\"hello world\\\"; done;\").\nHowever, when I send sp.kill(), the loop does not terminate immediately. Rather, it finishes the iteration it is on and then exits. I notice that the version without the loop, \"say \\\"hello world\\\", will terminate immediately. I've tried sending the various C codes using signal, but nothing seems to work. How can I immediately exit from the loop?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":121,"Q_Id":65432191,"Users Score":0,"Answer":"I think the best way is killing the process using kill command with -9 option for sending SIGKILL signal. This doesn't let the process to handle the signal and it terminates it entirely. There are other ways for sending this signal. Like os.kill()\nYou just need to figure out what is the UID of the process and then kill it.","Q_Score":0,"Tags":"python-3.x,bash","A_Id":65454279,"CreationDate":"2020-12-23T23:31:00.000","Title":"Exit a shell script immediately with subprocess","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"SOLVED!\nI am using keras with functional API and I have a tensor that is X = (None, 2) tensor and I need to concatenate it with a Y = (None,7) tensor to get a (None,9) tensor. 
The problem is X and Y's first dimension are unknown so I will have to repeat X a variable number of times, z, to make it equal Y. I have figured out how to repeat X with an unknown z using RepeatVector but that adds an extra dimension (None, None, 2). So now I need a way to flatten (None, None, 2) into (None, 2) so I can concatenate them leaving me with an object (None, 9) that I can put into a dense layer.\nso what i have tried...\n1 - tf.squeeze(X) but this removes all dimensions (None, None)\n2 - tf.keras.layers.Reshape but this doesn't accept None arguments for output_shape which i need since y is variable\n3 - K.flatten but this makes it 1 dimension.\n4- tried adding a dimension to Y = (1,None,7) created odd error.\nSOLUTION:\ntf.reshape(X, shape=[tf.shape(X)[0]*tf.shape(X)[1],2])\ni am calling the None dimensions using tf.shape() and multiplying them by each other.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":378,"Q_Id":65432925,"Users Score":0,"Answer":"Solution\ntf.reshape(X, shape=[tf.shape(X)[0]*tf.shape(X)[1],2])\ni am calling the None dimensions using tf.shape() and multiplying them by each other.","Q_Score":2,"Tags":"python,tensorflow,keras","A_Id":65434005,"CreationDate":"2020-12-24T01:29:00.000","Title":"Reduce (None, None, 2) to (None,2) using Keras","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to install openCV on WSL+UBUNTU20.04 with python3.8. I am trying to install using miniconda without any success.\nAfter searching over internet, it seems that openCV may not be supported on python3.8. If anyone has done this successfully, I would appreciate some help.\nUpdate: Solved.
Please check my answer.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1687,"Q_Id":65434139,"Users Score":1,"Answer":"Thanks to Christoph's suggestion, I decided to install using pip in the virtual enviornment of conda. I did the following:\n\nRun\nconda create -n env_name\nsource activate env_name\n, where env_name is the name of your virtual environment.\nTo install pip to my venv directory, I ran:\nconda install pip\nI went to the actual venv folder in anaconda directory. It should be somewhere like:\n\/home\/$USER\/miniconda3\/envs\/env_name\/\nI then installed new packages by doing\n\/home\/$USER\/miniconda3\/envs\/env_name\/bin\/pip install opencv-python","Q_Score":2,"Tags":"python-3.x,opencv,windows-subsystem-for-linux,ubuntu-20.04","A_Id":65450314,"CreationDate":"2020-12-24T04:55:00.000","Title":"How to install openCV on WSL + UBUNTU20.04 + python3.8 using Anaconda?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"atbswp is a software that help you automate all the mouse clicks and movements and keyboards keys so you can automate everything u do and repeat it or replay it\nand by using crontab you can schedule it so you can run automated sequence at specific time\nthe app extracts a python file\nand you run it inside the app or in terminal without the need of the app\nthe problem is\nwhen i run it in terminal it runs ok\nwhen i put it in crontab to run it doesnt run and i got errors in the crontab log file\ni really need help it is something amazing for everyone i think\nthis is the cron log error\nTraceback (most recent call last):\nFile \"\/home\/zultan\/bot1\", line 4, in \nimport pyautogui\nFile \"\/home\/zultan\/.local\/lib\/python3.8\/site-packages\/pyautogui\/init.py\", line 241, in \nimport mouseinfo\nFile 
\"\/home\/zultan\/.local\/lib\/python3.8\/site-packages\/mouseinfo\/init.py\", line 223, in \n_display = Display(os.environ['DISPLAY'])\nFile \"\/usr\/lib\/python3.8\/os.py\", line 675, in getitem\nraise KeyError(key) from None\nKeyError: 'DISPLAY'","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":95,"Q_Id":65436004,"Users Score":0,"Answer":"i found the solution for everybody\nput this\nin the crontab -e\nDISPLAY=:0\nXAUTHORITY=\/run\/user\/1000\/gdm\/Xauthority","Q_Score":0,"Tags":"python,linux,automation,cron,click","A_Id":65651053,"CreationDate":"2020-12-24T08:49:00.000","Title":"atbswp python file is not running on crontabs","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I want to compare the differnt commits with two repos, for example Android_10.0_r2 and Android_11.0_r3\nthe changes are frequent and since Google merge inner code to AOSP, some commit even older than Android_10.0_r2 merged to Android_11.0_r3\uff0c I don't want to miss that from git log checking\nso I record all commit logs in both repos and select the different change_ids\/commit_ids.\nBut since the git log is too much in AOSP and it have 400+ repos, it runs 1 hour+ in my PC.\nAny idea that git command may have directly way for geting the different commit_ids between two repo?\nThe git diff with two repo dir shows diff of the files, since changelist is long, the commit message diff is more effective","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":348,"Q_Id":65438922,"Users Score":1,"Answer":"Every version of AOSP has its own manifest. Use repo sync -n -m \/path\/to\/manifest1.xml and repo sync -n -m \/path\/to\/manifest2.xml to fetch repositories' data of both. 
-n instructs repo to fetch data only and not checkout\/update the worktrees, which could be omitted if you want to see the real files.\nAnd then use repo diffmanifests \/path\/to\/manifest1.xml \/path\/to\/manifest2.xml to display the diff commits between 2 code bases. It has an option --pretty-format= which works like --pretty= in git log.\nHowever the output is still a bit rough. Another solution is making a script, in Python for example, to parse the 2 manifests and run git log or git diff to get the detailed information. It's much more flexible. To my experience, it won't take that long. Our code base has about 1500 repositories.","Q_Score":0,"Tags":"python,git,shell","A_Id":65444644,"CreationDate":"2020-12-24T13:24:00.000","Title":"git diff in two repos and got commit id list","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am trying to implement machine learning on the audio's received from zoom meeting. The sounds that are received at output of Speaker should get into a .wav file file or in a variable.\nI would like to receive python code for doing this.\nThanks in advance","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":678,"Q_Id":65439661,"Users Score":0,"Answer":"Depending on your operating system you have you will have to set this up differently.\n\nNeed to create a virtual microphone\nSet that virtual microphone as the speaker for zoom\nPoint pyaudio to grab that virtual microphone as an input device.\nGrab data from the stream, parse and feed it into your ML.\n\nI have used virtual microphones for zoom so I can control video and audio through my webcam rather than screenshare. 
One good virtual microphone that works on windows is \"Voicemeeter\".","Q_Score":0,"Tags":"python,audio,pyaudio,zoom-sdk","A_Id":65439941,"CreationDate":"2020-12-24T14:30:00.000","Title":"Python code to get the (speaker)Audio of zoom app","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to run a Python-Flask project. I have all the requirements installed. But still, whenever I try to run the server I get an error saying\nsqlalchemy.exc.ArgumentError: Can't load plugin: sqlalchemy.dialects:driver\nCan anyone please tell me what can be wrong here and which file I need to examine?\nThanks.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":132,"Q_Id":65440246,"Users Score":0,"Answer":"I had similar issue on Heroku using Heroku Postgres plug-in.\nAfter two days headache the bellow solutions works me.\nDATABASE_URL = os.environ.get(\"DATABASE_URL\").replace(\"postgres\", \"postgresql\")","Q_Score":0,"Tags":"python,sqlalchemy","A_Id":66767105,"CreationDate":"2020-12-24T15:25:00.000","Title":"sqlalchemy.exc.ArgumentError: Can't load plugin: sqlalchemy.dialects:None","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"how to understand difference between a+=1 and a=+1 in Python?\nit seems that they're different. 
when I debug them in Python IDLE both were having different output.","AnswerCount":4,"Available Count":2,"Score":0.049958375,"is_accepted":false,"ViewCount":438,"Q_Id":65441109,"Users Score":1,"Answer":"It really depends on the type of object that a references.\nFor the case that a is another int:\nThe += is a single operator, an augmented assignment operator, that invokes a=a.__add__(1), for immutables. It is equivalent to a=a+1 and returns a new int object bound to the variable a.\nThe =+ is parsed as two operators using the normal order of operations:\n\n+ is a unary operator working on its right-hand-side argument invoking the special function a.__pos__(), similar to how -a would negate a via the unary a.__neg__() operator.\n= is the normal assignment operator\n\nFor mutables += invokes __iadd__() for an in-place addition that should return the mutated original object.","Q_Score":1,"Tags":"python","A_Id":65441571,"CreationDate":"2020-12-24T16:49:00.000","Title":"What is the difference between a+=1 and a=+1..?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"how to understand difference between a+=1 and a=+1 in Python?\nit seems that they're different. 
when I debug them in Python IDLE both were having different output.","AnswerCount":4,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":438,"Q_Id":65441109,"Users Score":2,"Answer":"a+=1 is a += 1, where += is a single operator meaning the same as a = a + 1.\na=+1 is a = + 1, which assigns + 1 to the variable without using the original value of a","Q_Score":1,"Tags":"python","A_Id":65441147,"CreationDate":"2020-12-24T16:49:00.000","Title":"What is the difference between a+=1 and a=+1..?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I would like to know please, how can I define variables in a python file and share these variables with their values with multiple python files?","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":63,"Q_Id":65442371,"Users Score":1,"Answer":"To do this, you can create a new module specifically for storing all the global variables your application might need. 
For this you can create a function that will initialize any of these globals with a default value, you only need to call this function once from your main class, then you can import the globals file from any other class and use those globals as needed.","Q_Score":0,"Tags":"python,python-3.x","A_Id":65442442,"CreationDate":"2020-12-24T19:05:00.000","Title":"different python files sharing the same variables","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I would like to know please, how can I define variables in a python file and share these variables with their values with multiple python files?","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":63,"Q_Id":65442371,"Users Score":0,"Answer":"You can create a python module\nCreate a py file inside that module define variables and import that module in the required places.","Q_Score":0,"Tags":"python,python-3.x","A_Id":65442416,"CreationDate":"2020-12-24T19:05:00.000","Title":"different python files sharing the same variables","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"How do you flatten an image?\nI know that we make use conv2d and pooling to detect the edges and minimize the size of the picture, so do we then flatten it after that?\nWill the flattened, pooled image will be a vector in one row and features or one column and the features?\nDo we make the equation x_data=x_date\/255 after flattening or before convolution and pooling?\nI hope to know the answer.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":313,"Q_Id":65445537,"Users Score":0,"Answer":"Here's the pipeline:\nInput image (could 
be in batches - let's say your network processes 10 images simultaneously) so 10 images of size (28, 28) -- 28 pixels height \/ weight and let's say the image has 1 filter only (grayscale).\nYou are supposed to provide to your network an input of size (10, 28, 28, 1), which will be accepted by a convolutional layer. You are free to use max pooling and maybe an activation function. Your convolutional layer will apply a number of filters of your choice -- let's assume you want to apply 40 filters. These are 40 different kernels applied with different weights. If you want to let's say classify these images you will (most likely) have a number of Dense layers after your convolutional layers. Before passing the output of the convolutional layers (which will be a representation of your input image after a feature extraction process) to your dense layers you have to flatten it in a way (You may use the simplest form of flattening, just passing the numbers one after the other). So your dense layer accepts the output of these 40 filters, which will be 'images' -- their size depends on many things (kernel size, stride, original image size) which will later be flattened into a vector, which supposedly propagates forward the information extracted by your conv layer.\nYour second question regarding MinMaxScaling (div by 255) - That is supposed to take place before everything else. There are other ways of normalizing your data (Standar scaling -- converting to 0 mean and unit variance) but keep in mind, when using transformations like that, you are supposed to fit the transformation on your train data and transform your test data accordingly. You are not supposed to fit and transform on your test data. 
Here, dividing by 255 everything is accepted but keep that in mind for the future.","Q_Score":0,"Tags":"python,image-processing,deep-learning,conv-neural-network","A_Id":65447758,"CreationDate":"2020-12-25T05:28:00.000","Title":"How to flatten an image?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am going to develop django rest framework apis using postgres legacy database in which apart from the existing tables no other default django tables should be created. I would like to know without creating any django default tables or doing migrations,\n\ncan i able to access records from tables using ORM?\nCan i able to access admin page without creating django auth table (i.e superuser)?\nIf no for both questions, only way to connect to db is using any db adapter like psycopg2?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":185,"Q_Id":65445547,"Users Score":0,"Answer":"can i able to access records from tables using ORM?\n\nIf the schema is same in your models and DB tables, then yes.\n\nCan i able to access admin page without creating django auth table\n(i.e superuser)?\n\nAs far as I can tell, no, you can't get that, unless your existing 'tables' already have that, and you have all the necessary credentials.\nFor question in #3, I'd use something less constraining like Flask if you have these problems.","Q_Score":0,"Tags":"python,django,orm,legacy-database","A_Id":65445614,"CreationDate":"2020-12-25T05:30:00.000","Title":"django rest framework apis using legacy db without creating django default tables","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"In almost, if not 
every, programming language, pressing ctrl+c cancels your code. Why does it specifically have to be ctrl+c? So in other words, what made the developers of programming languages decide that ctrl+c has to be the combination to cancel the code and cause an error?","AnswerCount":3,"Available Count":2,"Score":0.1325487884,"is_accepted":false,"ViewCount":59,"Q_Id":65445877,"Users Score":2,"Answer":"Ctrl+C is not a programming language thing. It is an \"interrupt\" signal sent to the process executing.","Q_Score":3,"Tags":"python","A_Id":65445943,"CreationDate":"2020-12-25T06:46:00.000","Title":"Why does ctrl+c have to be the combination to cause an error (e.g. KeyboardInterrupt)?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"In almost, if not every, programming language, pressing ctrl+c cancels your code. Why does it specifically have to be ctrl+c? So in other words, what made the developers of programming languages decide that ctrl+c has to be the combination to cancel the code and cause an error?","AnswerCount":3,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":59,"Q_Id":65445877,"Users Score":3,"Answer":"It doesn't \"have to be\" anything, it is a convention formed over a long history: Control-C, \"C\" for cancel, was I think first used in TOPS-10, a Digital Equipment Corporation (DEC) mainframe operating system used in the late 1960s. The control key itself goes back much further, used on telegraphs and teletypewriters to key in non-printing characters. 
The idea of non-printing characters controlling a machine without sending a message has an even longer history, including procedure signs in Morse code, \"NUL\" and \"DEL\" in Baudot code, and other late 19th-century teleprinter standards and their devices.","Q_Score":3,"Tags":"python","A_Id":65446215,"CreationDate":"2020-12-25T06:46:00.000","Title":"Why does ctrl+c have to be the combination to cause an error (e.g. KeyboardInterrupt)?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am making a website. And I want to know how to connect React js to my Flask back end. I have tried searching online but unfortunately it was not what I am looking for. If you know how to do it please recomend me some resources. And I also want to know the logic of how Flask and React work together.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1048,"Q_Id":65448624,"Users Score":2,"Answer":"Flask is a backend micro-service and react is a front-end framework. Flask communicates with the database and makes the desired API hit points. The backend listens for any API request and sends the corresponding response as a JSON format. So using React you can make HTTP requests to the backend.\nFor testing purposes have the backend and frontend separated and communicate only using the REST APIs. For production, use the compiled js of React as static files and render only the index.html of the compiled react from the backend.\nP.S: I personally recommend Django rest framework over flask if you are planning to do huge project.","Q_Score":0,"Tags":"python,reactjs,flask","A_Id":65448798,"CreationDate":"2020-12-25T13:44:00.000","Title":"How to connect a Python Flask backend to a React front end ? 
How does it work together?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"How do I change the preset soundfonts for pygame or fluidsynth?\nIm using Python 3.7.3, pygame 2.0.1, fluidsynth 1.1.11 to play Midi files. When I call pygame.mixer.music.load(), I receive a few fluidsynth errors:\nfluidsynth: error: Unable to open file \"\/usr\/share\/sounds\/sf3\/FluidR3Mono_GM.sf3\" fluidsynth: error: Couldn't load soundfont file fluidsynth: error: Failed to load SoundFont \"\/usr\/share\/sounds\/sf3\/FluidR3Mono_GM.sf3\" fluidsynth: error: Unable to open file \"\/usr\/share\/sounds\/sf2\/TimGM6mb.sf2\" fluidsynth: error: Couldn't load soundfont file fluidsynth: error: Failed to load SoundFont \"\/usr\/share\/sounds\/sf2\/TimGM6mb.sf2\" \nThese soundfont files are not on my device. So I'd like to point fluidsynth to another soundfont. How do I change the preset soundfonts for fluidsynth inside PyGame?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":303,"Q_Id":65449321,"Users Score":0,"Answer":"I wasn't able to change the preset soundsfonts, but I was able to install the missing soundfonts.\napt-get install fluidr3mono-gm-soundfont for FluidR3Mono_GM.sf3\napt-get install timgm6mb-soundfont for TimGM6mb.sf2","Q_Score":0,"Tags":"python,pygame,midi,fluidsynth","A_Id":65450953,"CreationDate":"2020-12-25T15:14:00.000","Title":"PyGame change preset soundfont","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm creating django application for online Barber Shop booking. I came out with an algorithm that store avaliable hours. Here is exaple how it works. 
Let's say that barber_1 works form 8am to 4pm. We store this date in list with tuple [(8, 16)] if someone book an 1 hour apointment for 1pm it checks if it is between tuple and if it is, it creates 2 different tuples with updated hours [(8, 13), (14, 16)], if another apointment is booked for let's say 10am, than it deletes first tuple and creates two new on it's place [(8, 10), (11, 13), (14, 16)] and so on and so on.\nI have a problem with \"translating\" my python code to the Django model. Can't find a way to store list of tuples in accesable way to be able to edit it. I read on different post that list of tuples can be stored in many to one relation but I dont think that it will be good in my case.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":194,"Q_Id":65450949,"Users Score":0,"Answer":"You can do a many-to-one relation between a barber table and an appointments table. The appointments would each have a barber id foreign key, a start time, and an end time column.\n\n\n\n\nappointment_id\nbarber_id\nstart\nend\n\n\n\n\n1\n1\n8\n10\n\n\n2\n1\n11\n13\n\n\n3\n1\n14\n16\n\n\n\n\nYou'd probably want to keep an index on the barber_id.","Q_Score":0,"Tags":"python,django,list,tuples","A_Id":65451052,"CreationDate":"2020-12-25T19:03:00.000","Title":"Store list of tuples django model in easy to access way","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm creating django application for online Barber Shop booking. I came out with an algorithm that store avaliable hours. Here is exaple how it works. Let's say that barber_1 works form 8am to 4pm. 
We store this range in a list as a tuple [(8, 16)]. If someone books a 1-hour appointment for 1pm, it checks whether the slot falls within a tuple and, if it does, it creates 2 different tuples with updated hours [(8, 13), (14, 16)]. If another appointment is booked for, let's say, 10am, then it deletes the first tuple and creates two new ones in its place [(8, 10), (11, 13), (14, 16)], and so on.\nI have a problem with \"translating\" my Python code to the Django model. I can't find a way to store a list of tuples in an accessible way so that I can edit it. I read in another post that a list of tuples can be stored in a many-to-one relation, but I don't think that will work well in my case.","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":194,"Q_Id":65450949,"Users Score":1,"Answer":"Use a JSONField, but note that it is actually a text field; I personally would convert the list into JSON format and then store it as text in your model","Q_Score":0,"Tags":"python,django,list,tuples","A_Id":65451023,"CreationDate":"2020-12-25T19:03:00.000","Title":"Store list of tuples django model in easy to access way","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Sorry for potentially asking stupid questions, but I am a newbie, learning Python from YT videos.\nI need to use the pywin32 extension and use the libraries there.\n\nEDIT: using Windows 10\n\nDownloaded \"pywin32-300.win-amd64-py3.9\" from here \"https:\/\/github.com\/mhammond\/pywin32\/releases\"\n\nUsing Python 3.9.1 and PyCharm 2020.3.1.\n\nAt the beginning of my program I write:\nimport win32gui \nimport win32con \n\n\nand then I get the message \"ModuleNotFoundError: No module named 'win32gui'\"\nI tried to search for solutions but nothing has worked so far.\nThanks in advance for help.\nLoonak","AnswerCount":2,"Available 
Count":2,"Score":0.0,"is_accepted":false,"ViewCount":1354,"Q_Id":65451896,"Users Score":0,"Answer":"Go to your terminal and say either pip install win32gui or pip3 install win32gui(Try the other one if one of these 2 don't work)\nHappy Coding!","Q_Score":0,"Tags":"python","A_Id":65452198,"CreationDate":"2020-12-25T21:21:00.000","Title":"ModuleNotFoundError: No module named 'win32gui","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"sorry for potentially asking stupid questions but I am newbie, learning Python from YT videos.\nI need to use Ptwin32 extension, and use the libraries there.\n\nEDIT: using Windows 10\n\nDowloaded \"pywin32-300.win-amd64-py3.9\" from here \"https:\/\/github.com\/mhammond\/pywin32\/releases\"\n\nUsing python 3.9.1 and PyCharm 2020.3.1.\n\nOn the beginning of my program I write:\nimport win32gui \nimport win32con \n\n\nand then I get the message \"ModuleNotFoundError: No module named 'win32gui'\"\nTried to search for solutions but nothing worked so far.\nThx in advance for help.\nLoonak","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":1354,"Q_Id":65451896,"Users Score":0,"Answer":"I had the same problem & py -m pip install win32gui didn't solve it,\nfinally it was fixed using py -m pip install pywin32 as @Kenny Ostrom advised","Q_Score":0,"Tags":"python","A_Id":71084859,"CreationDate":"2020-12-25T21:21:00.000","Title":"ModuleNotFoundError: No module named 'win32gui","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a repository on GitHub where I uploaded a Jupyter Notebook file. 
I created another branch and want to edit the ipynb file.\nClicking the edit button produces an HTML file which is really confusing. I want to edit the ipynb file and run it before pushing the commit. How do I do this, please?","AnswerCount":3,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":3052,"Q_Id":65454954,"Users Score":2,"Answer":"The short answer is no, you can't edit an .ipynb file directly on Github. At least, you can't edit it in the interactive way that you can using Jupyter Notebook. If you know what you are doing, you can edit the JSON, but that doesn't seem like a very good way to do it. Instead, just clone your repo locally to edit it.","Q_Score":2,"Tags":"git,github,jupyter-notebook,ipython,git-commit","A_Id":65458385,"CreationDate":"2020-12-26T09:14:00.000","Title":"How do I edit an ipynb file on github?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a repository on GitHub where I uploaded a Jupyter Notebook file. I created another branch and want to edit the ipynb file.\nClicking the edit button produces an HTML file which is really confusing. I want to edit the ipynb file and run it before pushing the commit. How do I do this, please?","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":3052,"Q_Id":65454954,"Users Score":0,"Answer":"You cannot edit a .ipynb file on github. 
You have to clone the repository on your device, check out the branch, and start editing","Q_Score":2,"Tags":"git,github,jupyter-notebook,ipython,git-commit","A_Id":65487296,"CreationDate":"2020-12-26T09:14:00.000","Title":"How do I edit an ipynb file on github?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I normally start the thread using the following command,\nthreading.Thread(target=function_name).start()\nIs there any way to pause it like\nthreading.Thread(target=name).wait()\nand resume it like\nthreading.Thread(target=name).set()","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":328,"Q_Id":65459138,"Users Score":1,"Answer":"The threading module has a Semaphore, which might help. If there isn't something already built into the threading module, you could try using a Tkinter semaphore variable specific to each thread, and have the thread check its semaphore while in some polling loop.","Q_Score":1,"Tags":"python,python-3.x,multithreading,tkinter","A_Id":65459242,"CreationDate":"2020-12-26T17:54:00.000","Title":"Is there any way to pause and resume thread in Tkinter?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to encrypt bitstream data, basically a list of binary values like [1,0,1,1,1,0,0,1,1,0,1,1,0,1], in Python using AES encryption with a block size of 128 bits. The problem is that I want the output to be binary data as well, and the same size as the original binary data list. Is that possible? How do I do that?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":115,"Q_Id":65459791,"Users 
Score":1,"Answer":"Yes, there are basically two ways:\n\nYou have a unique value tied to the data (for instance if they are provided in sequence then you can create a sequence number) then you can simply use the unique value as nonce and then use AES encryption in counter mode. Counter mode doesn't expand the data but it is insecure if no nonce is supplied. Note that you do need the nonce when decrypting.\n\nYou use format preserving encryption or FPE such as FF1 and FF3 defined by NIST. There are a few problems with this approach:\n\nthere are issues with these algorithms if the amount of input data is minimal (as it seems to be in your case);\nthe implementations of FF1 and FF3 are generally hard to find;\nif you have two unique bit values then they will result in identical ciphertext.\n\n\n\nNeither of these schemes provide integrity or authenticity of the data obviously, and they by definition leak the size of the plaintext.","Q_Score":0,"Tags":"python,encryption,aes,pycrypto","A_Id":65461384,"CreationDate":"2020-12-26T19:08:00.000","Title":"AES 128 bit encryption of bitstream data in python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I was looking at the scikit-learn logistic regression documentation, and saw that the penalty can be L1 and L2. I know that lasso and ridge regression are also known as L1 and L2 regularization respectively, so I was wondering if the L1 and L2 penalties refer to the same thing?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":292,"Q_Id":65460759,"Users Score":1,"Answer":"Yes that's correct. 
For L1 and L2 regularization (Lasso and Ridge Regression), the regularization term is often called L1 penalty or L2 penalty.","Q_Score":1,"Tags":"python,scikit-learn,logistic-regression,lasso-regression","A_Id":65461219,"CreationDate":"2020-12-26T21:11:00.000","Title":"L1 and L2 Penalties on Sklearn Logistic Classifier","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"As I was working on a project the topic of code obfuscation came up, as such, would it be possible to encrypt python code using either RSA or AES and then de-code it on the other side and run it?. And if it's possible how would you do it?. I know that you can obfuscate code using Base64, or XOR, but using AES or RSA would be an interesting application. This is simply a generic question for anyone that may have an idea on how to do it. I am just looking to encrypt a piece of code from point A, send it to point B, have it decrypted at point B and run there locally using either AES or RSA. It can be sent by any means, as long as the code itself is encrypted and unreadable.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":247,"Q_Id":65460849,"Users Score":0,"Answer":"Yes this is very possible but would require some setup to work.\nFirst off Base64 is an encoder for encoding data from binary\/bytes to a restricted ascii\/utf subset for transmission usually over http. Its not really an obfuscator, more like a packager for binary data.\nSo here is what is needed for this to work.\n\nA pre-shared secret key that both point A and point B have. 
This key cannot be transmitted along with the code since anyone who gets the encrypted code would also get the key to decrypt it.\n\nThere would need to be an unencrypted code\/program that allows you to insert that pre-shared key to use to decrypt the encrypted code that was sent. Can't hardcode the key into the decryptor since again anyone with the decryptor can now decrypt the code and also if the secrey key is leaked you would have to resend out the decryptor to use a different key.\n\nOnce its decrypted the \"decryptor\" could save that code to a file for you to run or run the code itself using console commands or if its a python program you can call eval or use importlib to import that code and call the function within.\nWARNING: eval is known to be dangerous since it will execute whatever code it reads. If you use eval with code you dont trust it can download a virus or grab info from your computer or anything really. DO NOT RUN UNTRUSTED CODE.\n\n\nAlso there is a difference between AES and RSA. One is a symmetric cipher and the other is asymmetric. Both will work for what you want but they require different things for encryption and decryption. One uses a single key for both while the other uses one for encryption and one for decryption. So something to think about.","Q_Score":0,"Tags":"python,encryption,obfuscation","A_Id":65461014,"CreationDate":"2020-12-26T21:26:00.000","Title":"Running encrypted python code using RSA or AES encryption","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have two python scripts and one of them runs on python 3.8.6 64bit and the other runs on python 3.8.6 32bit version. 
I have been trying to run them using different python version using shebang but it does not seem to work.\nI'm currently using Visual Studio Code and even though I put shebang code like this\n#!\"C:\/Python\/3.8.6\/64\/python.exe\"\nit does not change the python version the script is running\nIs there a way to make python code to run at specific version?","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":101,"Q_Id":65463550,"Users Score":0,"Answer":"Press CTRL + SHIFT + P and search for open user settings\nIn there search for this setting: Python: python interpreter\nThen you can change the path of your python interpreter","Q_Score":0,"Tags":"python,version,shebang,hashbang","A_Id":65463691,"CreationDate":"2020-12-27T07:02:00.000","Title":"how to run code in different python versions","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have two python scripts and one of them runs on python 3.8.6 64bit and the other runs on python 3.8.6 32bit version. 
I have been trying to run them with different Python versions using a shebang, but it does not seem to work.\nI'm currently using Visual Studio Code, and even though I put a shebang line like this\n#!\"C:\/Python\/3.8.6\/64\/python.exe\"\nit does not change the Python version the script runs with.\nIs there a way to make Python code run with a specific version?","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":101,"Q_Id":65463550,"Users Score":0,"Answer":"If you're using Visual Studio Code, press Ctrl + Shift + P and choose a specific interpreter for your code to run with.\nAnother option is to make a virtualenv with the Python version you need and run your code there.","Q_Score":0,"Tags":"python,version,shebang,hashbang","A_Id":65463687,"CreationDate":"2020-12-27T07:02:00.000","Title":"how to run code in different python versions","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"pickle[\"CSE_TS_SAMPLING\"].head(2)\noutput:\n0 02-DEC-2020 20:16:09\n1 03-DEC-2020 03:43:33\nName: CSE_TS_SAMPLING, dtype: object\nfrom datetime import datetime\ndef convert_date(x):\n    return datetime.strptime(x, \"%d-%m-%Y %H:%M:%S\")\npickle[\"sample_date\"]=pickle[\"CSE_TS_SAMPLING\"].apply(lambda x:convert_date(x))\nerror:\ntime data '02-DEC-2020 20:16:09' does not match format '%d-%m-%Y %H:%M:%S'","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":27,"Q_Id":65463995,"Users Score":1,"Answer":"%m is the month 01 through 12. 
%b would match your date of \"DEC\" (note, %b depends on locale).","Q_Score":0,"Tags":"python-3.x,datetime","A_Id":65464094,"CreationDate":"2020-12-27T08:24:00.000","Title":"Error while converting string to date time using datetime.strptime()","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I was wondering if it is possible to concatenate two different pytorch tensors with different shapes.\none tensor is of shape torch.Size([247, 247]) and the other is of shape torch.Size([10, 183]). Is it possible to concatenate these using torch.cat() on dim=1?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":158,"Q_Id":65466208,"Users Score":1,"Answer":"I think you should use broadcasting. That is, to broadcast torch.Size([10, 183]) along dimension 0 (to reach 247) or do it for the other dimensions. For torch.cat to work, you need to have matching dimensions along which you are trying to concatenate.","Q_Score":0,"Tags":"python,python-3.x,concatenation,tensor","A_Id":65817111,"CreationDate":"2020-12-27T13:20:00.000","Title":"How to concatenate 2d tensors with 2 different dimensions","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm new to Django and databases.\nDjango used the sqlite database as it's default database. Is it possible to store images in the sqlite database and display it on my webpage?\nI can't find any documentation on the official Django website for it.","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":2017,"Q_Id":65466375,"Users Score":0,"Answer":"Generally speaking SQLite is not a great database for any serious project. 
You would normally use something like MongoDB or Postgres or MySQL. I think Django works quite well with PostgreSQL. SQLite is great for practice and testing or small project but once you have to scale up, you can get in trouble.\nAlso, like Willem said, storing images in a DB is not a good idea. Normally instances stored in a DB would have a path towards an image file stored in your computer or a key to an image stored on an image storing service like Cloudinary.","Q_Score":0,"Tags":"python,django,sqlite,django-models,django-views","A_Id":65466454,"CreationDate":"2020-12-27T13:42:00.000","Title":"Storing images in sqlite - Django","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am using Pelican to create a static site. What I want to achieve is to create a new .html file, let's say \"contactus.html\" in \"templates\" directory and parse it's output in \"content\" directory.\nThank you very much in advance.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":52,"Q_Id":65468668,"Users Score":0,"Answer":"What I did @Alex N was to create a .md file in the content\/pages directory and added attributes \"save_us:contact.html\" and \"template:contactus\". 
What it does in fact is to generate a contact.html file in content root page and get the info from the \"contactus.html\" placed in my template directory.\nFor any further questions please let me know.","Q_Score":0,"Tags":"python,html,pelican","A_Id":65644132,"CreationDate":"2020-12-27T17:48:00.000","Title":"How is it possible to add a .html file in templates directory and parse .html file in output in Pelican?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Newbie Python Learner here,\nA question for programmers,\nEnglish is not main language, will be hard to explain the question I wish to convey.\nHow do you programmers know which modules exist and which don't?\nSay you are writing a script\/program etc.\nThere are modules\/functions etc. which you may need to create or use to either write your program, or perhaps complete it quicker, how will you know a required module\/function etc. exists that may help you write your program? What prevents you from wasting your time in writing an entire module\/function etc. 
which you might need to use which may already exist without you knowing so?","AnswerCount":3,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":38,"Q_Id":65471143,"Users Score":0,"Answer":"The main sources of info are\n\ndocs.python.org (The Tutorial also introduces important modules in the standard lib)\npypi.org\nStackOverflow (of course)\nGoogle\n\nYou can be almost sure that basic functionalities are provided by standard lib, pypi.org allows then to search by several criteria.","Q_Score":1,"Tags":"python","A_Id":65471228,"CreationDate":"2020-12-27T22:38:00.000","Title":"Knowing existing functions\/modules etc","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Newbie Python Learner here,\nA question for programmers,\nEnglish is not main language, will be hard to explain the question I wish to convey.\nHow do you programmers know which modules exist and which don't?\nSay you are writing a script\/program etc.\nThere are modules\/functions etc. which you may need to create or use to either write your program, or perhaps complete it quicker, how will you know a required module\/function etc. exists that may help you write your program? What prevents you from wasting your time in writing an entire module\/function etc. 
which you might need to use which may already exist without you knowing so?","AnswerCount":3,"Available Count":3,"Score":0.0665680765,"is_accepted":false,"ViewCount":38,"Q_Id":65471143,"Users Score":1,"Answer":"If you're asking how to find Python packages, in general, that can help you, usually a quick Google search or few will show you packages other people have used for similar problems as yours.","Q_Score":1,"Tags":"python","A_Id":65471187,"CreationDate":"2020-12-27T22:38:00.000","Title":"Knowing existing functions\/modules etc","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Newbie Python Learner here,\nA question for programmers,\nEnglish is not main language, will be hard to explain the question I wish to convey.\nHow do you programmers know which modules exist and which don't?\nSay you are writing a script\/program etc.\nThere are modules\/functions etc. which you may need to create or use to either write your program, or perhaps complete it quicker, how will you know a required module\/function etc. exists that may help you write your program? What prevents you from wasting your time in writing an entire module\/function etc. which you might need to use which may already exist without you knowing so?","AnswerCount":3,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":38,"Q_Id":65471143,"Users Score":0,"Answer":"Experience is the short answer for this.\nA more detailed explanation is understanding the scope of your needs. If you believe that the problem you are faced with is a common one, chances are that someone has come up with a solution for it and put it into a module. You become aware of modules by running into a challenge and then searching up how others have solved it. 
You will most likely run into others who have come across the same thing and have used others modules to solve it.\nThe more specific your problem is the less likely there will be a module already made for it. For example, plotting data is a widely common need, which is why the Matplotlib module is known by most python programmers. Searching the PyPi website will show you a lot of modules that can come in handy later.\nGood Luck and have fun looking at all the oddly specific modules out there!","Q_Score":1,"Tags":"python","A_Id":65471205,"CreationDate":"2020-12-27T22:38:00.000","Title":"Knowing existing functions\/modules etc","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Why does [[]]*10 make a list of ten empty lists? [[]*10] makes more sense to me.","AnswerCount":5,"Available Count":3,"Score":-0.0798297691,"is_accepted":false,"ViewCount":113,"Q_Id":65473101,"Users Score":-2,"Answer":"[[]]*10 :\nThis mean you have 10 \"list in list\".\n\n[[]],[[]],[[]],[[]],[[]],[[]],[[]],[[]],[[]],[[]]\n2.[[]*10]\nThis mean you have one list contain 10 list:\n[[],[],[],[],[],[],[],[],[],[]]","Q_Score":3,"Tags":"python,list","A_Id":65473152,"CreationDate":"2020-12-28T04:45:00.000","Title":"Why is [[]]*10 the way to create a list of 10 empty lists in python instead of [[]*10]?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Why does [[]]*10 make a list of ten empty lists? 
[[]*10] makes more sense to me.","AnswerCount":5,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":113,"Q_Id":65473101,"Users Score":0,"Answer":"Here are my explanations:\n\n[[] * 10] multiplies the values within the list, it gives [[]] because there are no values in the sublist, it's just an empty list, so as we all know 0 * 10 is 0.\n\n[[]] * 10 works because it multiples the value inside (as you know from above, I said it multiplies the values), here there are values it's the empty list [], it multiples the [[]] ten times, so it multiplies the value ten times, so the value ten times would be: [[], [], [], [], [], [], [], [], [], []]","Q_Score":3,"Tags":"python,list","A_Id":65473207,"CreationDate":"2020-12-28T04:45:00.000","Title":"Why is [[]]*10 the way to create a list of 10 empty lists in python instead of [[]*10]?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Why does [[]]*10 make a list of ten empty lists? [[]*10] makes more sense to me.","AnswerCount":5,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":113,"Q_Id":65473101,"Users Score":0,"Answer":"Because [] + [] == [] in Python. So for your preferred syntax, the * operator is basically repeatedly adding [] to an empty list and getting another empty list back.","Q_Score":3,"Tags":"python,list","A_Id":65473140,"CreationDate":"2020-12-28T04:45:00.000","Title":"Why is [[]]*10 the way to create a list of 10 empty lists in python instead of [[]*10]?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I installed python 3.9 at first, but I wanted to use TensorFlow so I deleted and install python 3.8.7 and it worked out. 
But now every time I try to run command it says that there is a python39.dll missing even though I have my python38.dll file already","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":1444,"Q_Id":65473848,"Users Score":1,"Answer":"Well, I resolved the problem just by uninstalling every python program on the computer and reinstalling everything, I'm not quite sure what was causing that error but now everything is alright.","Q_Score":0,"Tags":"python","A_Id":65511609,"CreationDate":"2020-12-28T06:41:00.000","Title":"why i am having python39.dll file missing if I installed python 3.8. 7","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I set up a task on PythonAnywhere to run a file every day however when it came time to run it, it said that \"No module named pyowm.owm\". However when I run the finally normally in the file section of PythonAnywhere it runs perfectly fine. I've tried installing the modules for both Python 3.7 and 3.8 however neither have worked.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":153,"Q_Id":65477962,"Users Score":1,"Answer":"You need to specify Python executable for the script either by adding a shebang #!\/usr\/bin\/python3.8 at the beginning of the script or by defining the scheduled task's command like: python3.8 \/home\/myusername\/myproject\/myscript.py (examples are for Python 3.8). 
You may use a virtualenv as well, in such case the command should look like: \/home\/myusername\/.virtualenvs\/myvenv\/bin\/python \/home\/myusername\/myproject\/mytask.py.","Q_Score":0,"Tags":"python,pythonanywhere","A_Id":65480275,"CreationDate":"2020-12-28T12:55:00.000","Title":"PythonAnywhere module not found","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"When I try to execute py manage.py runserver I had the following message:\n\nraise ImportError(\n\nImport Error: Couldn't import Django. Are you sure it's installed and available on your PYTHONPATH environment variable? Did you forget to activate a virtual environment?\n\n)","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1523,"Q_Id":65479892,"Users Score":0,"Answer":"Try to make it in a virtual environment, it will work:\n\nFirst, in cmd type pip install virtualenvwrapper\nMake a folder in c drive which will consist all of your django projects like C:\/Django\nThen go to cmd and type mkvirtualenv [name you want to give] and then close the cmd\nOpen it again and then navigate to the folder C:\/Django then when on that folder type workon [the name which you gave]\nFinally, install django by the pip command and then runserver","Q_Score":1,"Tags":"python,django,wsgi","A_Id":67041782,"CreationDate":"2020-12-28T15:21:00.000","Title":"Couldn't import Django. Are you sure it's installed and available on your PYTHONPATH environment variable? --> WSGI.PY file","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":1},{"Question":"I passed a integer variable as a context when i rendered a webpage. 
That variable is the initial number of untagged images that i have uploaded to my web application.\nI then display my images in a table with the first column being the image and second column being the tag name and the third containing a button which leads to a modal that allows me to edit the tags for the images.\nHow am i able to decrease the count of untagged images counter whenever i fill in an empty tag?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":57,"Q_Id":65481760,"Users Score":0,"Answer":"If you are keeping track of the tag variable as well, then you can check if it is an empty string, if it is then it's untagged, if it is not then you can decrease the counter. You can loop through the list of tags to check all of them.","Q_Score":0,"Tags":"javascript,python,html,jquery,django","A_Id":65481844,"CreationDate":"2020-12-28T17:44:00.000","Title":"Decrease counter based on html element","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am trying to setup lanelet2 package from their git repo. When i try to run the tutorial package on my system running Ubuntu 20.04LTS, i am getting the above error. I tried the same on a python shell and running the command ' import lanelet2', it still throws the same error.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1270,"Q_Id":65487367,"Users Score":-1,"Answer":"The issue was resolved by one of my colleague. It was mainly caused due to libboost version mismatch from anaconda in my setup. Had to uninstall all libboost packages along with anaconda at first. Then installed libboost again. 
This resolved the error.","Q_Score":2,"Tags":"boost,boost-python","A_Id":65623864,"CreationDate":"2020-12-29T04:40:00.000","Title":"Lanelet2: ImportError: \/usr\/lib\/x86_64-linux-gnu\/libboost_python38.so.1.71.0: undefined symbol: _Py_tracemalloc_config","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"When my friends try to open the executable resulting from building my project (i.e. a \".exe\"), they get the error \"Windows protected your PC\".\nBoth my c++ and python projects cannot be executed on my friends' computers because of the error.\nHow can I allow it?","AnswerCount":3,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":1692,"Q_Id":65488839,"Users Score":1,"Answer":"It's due to Windows' built-in security feature. Try not to send the .exe; send your code instead so your friend can compile it on their own.
Windows just isn't authenticating it, since the source is not verified according to your friend's OS.","Q_Score":3,"Tags":"python,c++,runtime-error","A_Id":65488899,"CreationDate":"2020-12-29T07:37:00.000","Title":"How can I avoid \"Windows protected your PC\" problem when my friends try to use my executables?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am working on a project that needs to do the following:\n\n[C++ Program] Checks a given directory, extracts all the names (full paths) of the found files and records them in a vector.\n[C++ Program] \"Send\" the vector to a Python script.\n[Python Script] \"Receive\" the vector and transform it into a List.\n[Python Script] Compares the elements of the List (the paths) against the records of a database and removes the matches from the List (removes the paths already registered).\n[Python Script] \"Sends\" the processed List back to the C++ Program.\n[C++ Program] \"Receives\" the List, transforms it into a vector and continues its operations with this processed data.\n\nI would like to know how to send and receive data structures (or data) between a C++ Script and a Python Script.\nFor this case I gave the example of a vector transforming into a List; however, I would like to know how to do it for any structure or data in general.\nObviously I am a beginner, which is why I would like your help on what documentation to read, what concepts I should start with, what technique I should use (maybe there is some implicit standard), and what links I could review to learn how to communicate data between Scripts of the languages I just mentioned.\nAny help is useful to me.","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":131,"Q_Id":65488962,"Users Score":0,"Answer":"If the idea is to execute the python
script from the c++ process, then the easiest would be to design the python script to accept input_file and output_file as arguments; the c++ program should write the input_file, start the script and read the output_file.\nFor simple structures like a list of strings, you can simply write them as text files and share them, but for more complex types you can use Google's protocol buffers to do the marshalling\/unmarshalling.\nIf the idea is to send\/receive data between two already started processes, then you can use the same protocol buffers to encode the data and send\/receive it via sockets. Check out gRPC.","Q_Score":1,"Tags":"python,c++","A_Id":65489333,"CreationDate":"2020-12-29T07:50:00.000","Title":"How to send and receive data (and \/ or data structures) from a C++ script to a Python script?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I tried importing websocket, but python returns\nImportError: no module named websocket.\nI checked sys.path, and the directory that the websocket package is in, Library\/Python\/3.7\/lib\/python\/site-packages, is included.
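The file-based exchange described in the answer above can be sketched on the Python side roughly like this (the `REGISTERED` set is a hypothetical stand-in for the database lookup, and `filter_paths` is an illustrative name, not part of any library):

```python
# Hypothetical stand-in for the paths already registered in the database.
REGISTERED = {"/data/a.txt", "/data/b.txt"}

def filter_paths(in_path, out_path):
    """Read one path per line from in_path, drop the already-registered
    ones, and write the remainder to out_path for the C++ side to read."""
    with open(in_path) as f:
        paths = [line.strip() for line in f if line.strip()]
    remaining = [p for p in paths if p not in REGISTERED]
    with open(out_path, "w") as f:
        f.write("\n".join(remaining))
```

The C++ program would write the input file, invoke the script (for example via `std::system`), and read the output file back; for richer structures, the protocol-buffer suggestion in the answer replaces this plain-text format.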
I also confirmed that there's __init__.py in the package.\nI tried importing the other modules in Library\/Python\/3.7\/lib\/python\/site-packages: none of them can be imported.\nWhy can't I import any of the packages in that path?","AnswerCount":3,"Available Count":3,"Score":1.2,"is_accepted":true,"ViewCount":3195,"Q_Id":65489160,"Users Score":3,"Answer":"You may have more than one python version installed.\nFor example, the script you are running is defined to run with python 2.7 but your default python version is >=3 so pip will install websocket for python 3 only.\nTry using\npip2 install websocket\nAlso check to see if you have #!\/usr\/bin\/python2.7 in your first line of the script","Q_Score":1,"Tags":"python,macos,websocket,module,package","A_Id":66350542,"CreationDate":"2020-12-29T08:11:00.000","Title":"ImportError: No module named websocket","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I tried importing websocket, but python returns\nImportError: no module named websocket.\nI checked sys.path, and the directory that the websocket package is in, Library\/Python\/3.7\/lib\/python\/site-packages, is included. I also confirmed that there's __init__.py in the package.\nI tried importing the other modules in Library\/Python\/3.7\/lib\/python\/site-packages: none of them can be imported.\nWhy can't I import any of the packages in that path?","AnswerCount":3,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":3195,"Q_Id":65489160,"Users Score":0,"Answer":"Whatever editor you are using to write your code. I use Visual Studio Code. On some libraries, I have to restart the editor when installing a library. 
I had to restart Atom just adding plugins.","Q_Score":1,"Tags":"python,macos,websocket,module,package","A_Id":65513121,"CreationDate":"2020-12-29T08:11:00.000","Title":"ImportError: No module named websocket","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I tried importing websocket, but python returns\nImportError: no module named websocket.\nI checked sys.path, and the directory that the websocket package is in, Library\/Python\/3.7\/lib\/python\/site-packages, is included. I also confirmed that there's __init__.py in the package.\nI tried importing the other modules in Library\/Python\/3.7\/lib\/python\/site-packages: none of them can be imported.\nWhy can't I import any of the packages in that path?","AnswerCount":3,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":3195,"Q_Id":65489160,"Users Score":0,"Answer":"Sometimes you have to restart your ide when installing some libraries. And in others you may need to reboot the whole system","Q_Score":1,"Tags":"python,macos,websocket,module,package","A_Id":65489407,"CreationDate":"2020-12-29T08:11:00.000","Title":"ImportError: No module named websocket","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Desired Output -\n\n\n\n\nid\ngender\nrandom_genre\n\n\n\n\n1\nF\nfiction\n\n\n1\nF\nhistory\n\n\n2\nM\nmystery\n\n\n3\nF\nfiction\n\n\n4\nM\nhistory\n\n\n\n\nI have a dataframe where a user can have multiple preferences and I have to join these with some other kind of preferences which makes the table exponentially large. Like x, y etc. 
preferences are up to 80+ of one kind and almost 40 are of another kind.\nUntil now I was pivoting the table (pivot_table) and performing a merge, but I want to use the output for charts (count of a preference etc.).\nCurrent output after pivot_table -\n\n\n\n\nid\ngender\nfiction\nhistory\nmystery\n\n\n\n\n1\nF\n1\nNaN\nNaN\n\n\n1\nF\nNaN\n2\nNaN\n\n\n2\nM\nNaN\nNaN\n3\n\n\n3\nF\n1\nNaN\nNaN\n\n\n4\nM\nNaN\n2\nNaN\n\n\n\n\nHaving almost 80+ preferences of just one type and more after joining.\nHow can I convert all these back after the joins to the first table, where I just have a single column for all the preferences and, if an id has multiple preferences, a new row is created for the same?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":176,"Q_Id":65490297,"Users Score":0,"Answer":"Have a look at the pandas.melt function. It might do the trick for you","Q_Score":1,"Tags":"python-3.x,pandas","A_Id":65490557,"CreationDate":"2020-12-29T09:52:00.000","Title":"Wide to long format pandas","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am very new to python and started using pycharm ide for development.\nHow to display all the functions \/ API available in a class\/file in python.\ne.g. like when we code in scala\/java in an ide we get all the functions\/methods available for that class\/object by just typing object_name.(dot) and it shows all the API.\nnot able to get that for python in pycharm.\nfor example, I wanted to see all the functions available in the python List. I searched dir(module-name) but that is not what I need while developing faster.\nthx in advance.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":105,"Q_Id":65492607,"Users Score":0,"Answer":"dir(module-name) and dir(class-name) are exactly how you do this.
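The pandas.melt suggestion in the answer above can be sketched on a small subset of the question's table (the column names follow the question; the three-row frame is an illustrative subset, not the asker's data):

```python
import pandas as pd

# Wide table as produced by pivot_table (NaN where a preference is absent).
wide = pd.DataFrame({
    "id": [1, 1, 2],
    "gender": ["F", "F", "M"],
    "fiction": [1, None, None],
    "history": [None, 2, None],
    "mystery": [None, None, 3],
})

# melt turns the preference columns back into rows; dropna removes the
# preferences a given id does not actually have.
long_df = (wide.melt(id_vars=["id", "gender"],
                     var_name="random_genre", value_name="count")
               .dropna(subset=["count"])
               .drop(columns="count")
               .sort_values(["id", "random_genre"])
               .reset_index(drop=True))
print(long_df)
```

Each surviving row is one (id, preference) pair, which is the long format the question asks for.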
Yes, it doesn't return things in a user-friendly manner and you may need to dig down further for nested modules, but the only better option would be to learn from the package\/api docs.","Q_Score":0,"Tags":"python-3.x,pycharm","A_Id":65492893,"CreationDate":"2020-12-29T13:05:00.000","Title":"How to display all the functions \/ api available in a class \/ file in python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am not able to run my local server through the atom terminal, even though all the requirements are met. This is the error I get when I run python manage.py runserver,\nFile \"manage.py\", line 17\n) from exc\n^\nSyntaxError: invalid syntax\nI tried python3 manage.py runserver as suggested by some people online as a solution for mac users but it gave a different error,\nImportError: Couldn't import Django. Are you sure it's installed and available on your PYTHONPATH environment variable? Did you forget to activate a virtual environment?\nSharing the screenshot of my atom terminal.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":241,"Q_Id":65492797,"Users Score":0,"Answer":"Make sure you also installed django for python3.\nBy doing pip -V you can verify that your pip belongs to the python installation you expected.\nYou might need to use pip3 if you're running python 2 and 3 in parallel.
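As the dir() answer above says, dir() is the built-in way to list an object's API; filtering out underscore-prefixed names gets close to what an IDE's completion popup shows. A small illustration on the built-in list type:

```python
# Public methods of list, roughly what completion shows after typing "my_list."
public_api = [name for name in dir(list) if not name.startswith("_")]
print(public_api)

# help() digs one level further and prints a method's docstring, e.g.:
# help(list.append)
```

The same pattern works on any module or class passed to dir().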
Alternatively, you can use python3 -m pip install Django to make sure it's for python 3.","Q_Score":0,"Tags":"python,django","A_Id":65492865,"CreationDate":"2020-12-29T13:20:00.000","Title":"Unable to run python manage.py runserver command even though django is installed","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I've been trying to install the dlib library for a face recognition project but every time I try to install the packages, an error occurs.\nIs there any way I could work on using the face_recognition library and install dlib without using Visual Studio 2019 (Desktop Development for C++)?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":291,"Q_Id":65493422,"Users Score":0,"Answer":"First, download Anaconda.\nThen go to Environments under Home, click on Create (bottom left), enter the name of the new environment (in Packages, make sure the Python version is 3.7), then click Create.\nThen click on the new environment that you created, change the (Installed) filter to (Not installed), type dlib in the search bar, click on it, and click Apply; it will install.","Q_Score":0,"Tags":"python,opencv,pip,face-recognition,dlib","A_Id":69292833,"CreationDate":"2020-12-29T14:05:00.000","Title":"I want to install dlib without installing Visual Studio 19","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"The problem:\nEach iteration of a python loop sends a cypher-query to Neo4j that returns values.\nThis takes a considerable amount of time and is a bottleneck, as the number of loop items is 50 or more.\nEnvironment\n\nwindows server 2019,\nNeo4j Server version: 3.5.9
(community),\npython 3.6\nNeo4j python driver 4.2.1\n\nQuestion\nWould one call to a custom stored procedure be faster than repeated client requests to Neo4j?\nDetails\nThe stored procedure would be passed cypher parameters for all 50 items, and would need to return results for all 50.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":157,"Q_Id":65493606,"Users Score":1,"Answer":"Well, if you can refactor your code to make one call to a stored procedure, it means you can also refactor to make one Cypher query and handle the result on the application side.\nMaybe a stored procedure will be faster since you can leverage the JVM multithreading.\nHowever, there are some drawbacks to using stored procedures:\n\nYou need to redeploy and restart Neo4j when the code of the stored procedure changes\nYou need to maintain it and upgrade it with new Neo4j versions\nIt implies you can write them in Java\n\nI have rarely seen stored procedures used for improving performance (it happens, but it has some niche use cases).\nAs you stated, the bottleneck is your loop querying the db; I would try to fix that first.\nSecondly, performance in Neo4j is generally governed by three major factors:\n\nYour ability to model your graph well, in order to optimise for querying\nYour ability to tune your Cypher queries\nYour index configuration being correct\n\n99.99% of the problems you will have during your life with Neo4j will be solved by those 3 points above.","Q_Score":0,"Tags":"python,stored-procedures,neo4j,cypher","A_Id":65494609,"CreationDate":"2020-12-29T14:18:00.000","Title":"Neo4j > User-defined stored procedures: Would a stored procedure be faster than many cypher queries called from python?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a function I want to run
whenever there is nothing else for the program to do. How do I implement this?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":45,"Q_Id":65496562,"Users Score":0,"Answer":"The atexit module may be what you are looking for.","Q_Score":0,"Tags":"python","A_Id":65496598,"CreationDate":"2020-12-29T17:57:00.000","Title":"How do I make a function run when a program finishes in Python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am running a python program that processes a large dataset. Sometimes, it runs into a MemoryError when the machine runs out of memory.\nI would like any MemoryError that is going to occur to happen at the start of execution, not in the middle. That is, the program should fail-fast: if the machine will not have enough memory to run to completion, the program should fail as soon as possible.\nIs it possible for Python to pre-allocate space on the heap?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":88,"Q_Id":65501923,"Users Score":0,"Answer":"Is it possible to allocate heap memory at the start of python process,\n\nPython uses as much memory as needed, so if your program is running out of memory, it would still run out of it even if there was a way to allocate the memory at the start.\nOne solution is trying to allow for swap to increase your total memory, although performance will be very bad in many scenarios.\nThe best solution, if possible, is to change the program to process data in chunks instead of loading it entirely.","Q_Score":1,"Tags":"python","A_Id":65502016,"CreationDate":"2020-12-30T04:11:00.000","Title":"Is it possible in Python to pre-allocate heap to fail-fast if memory is unavailable?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop 
Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to preprocess an RGB image before sending it into my model. The shape of the image is (2560, 1440, 3). For that I need to calculate the mean of every channel and subtract it from the corresponding channel's pixels. I know that I can do it by:\nnp.mean(image_array, axis=(0, 1)).\nHowever, I cannot understand how it is being done.\nI am aware of how axes work individually (axis=0 for columns and axis = 1 for rows). How does axis = (0,1) work in this situation?\nAnd also how can I do the same thing for multiple images, say, train_data_shape = (1000, 256, 256, 3)?\nI appreciate any feedback!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":78,"Q_Id":65501984,"Users Score":1,"Answer":"Consider what happens when you have an array X of shape (5, 3) and you execute np.mean(X, axis=0). You\u2019ll get back an array of shape (3,) where element i is the average of the 5 values in column i. You\u2019re essentially \u2018averaging out\u2019 that first dimension. If you instead set axis=1, you\u2019d get back an array of shape (5,) where element i is the average of the 3 values in row i - now, you\u2019re averaging out that second dimension.\nIt works similarly when multiple axes are provided. Say X is of shape (5, 4, 2). Then, executing np.mean(X, axis=(0,1)) will return an array of shape (2,) where element i is the average of the sub-array X[:, :, i] (of shape (5, 4)). We\u2019re averaging out the first two dimensions.\nTo answer your second question: If you want to compute means on an image-by-image and channel-by-channel basis, use axis=(1,2).
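The shapes discussed in the answer can be checked directly with a small NumPy experiment (the array contents are random placeholders; only the shapes matter):

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((4, 5, 3))        # one (H, W, C) image
batch = rng.random((10, 4, 5, 3))    # (N, H, W, C) training data

per_channel = image.mean(axis=(0, 1))    # averages out H and W -> shape (3,)
centered = image - per_channel           # broadcasting subtracts per channel

per_image = batch.mean(axis=(1, 2))      # per image, per channel -> shape (10, 3)
over_all = batch.mean(axis=(0, 1, 2))    # per channel over all images -> shape (3,)
print(per_channel.shape, per_image.shape, over_all.shape)
```

After the subtraction, each channel of `centered` has mean zero, which is the preprocessing effect the question is after.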
If you want to compute means over all of your images per channel, use axis=(0,1,2).","Q_Score":0,"Tags":"python,mean,preprocessor","A_Id":65502295,"CreationDate":"2020-12-30T04:22:00.000","Title":"I am trying to understand how the process of calculating the mean for every channel of RGB image","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"The Ruby 3.0 release has introduced Ractors and the way they're represented along their examples, brings Python's MultiProcessing module into mind.\nSo...\n\nAre Ruby's Ractors just multiple processes in disguise and the GIL is still ruling over the threads?\n\nIf they aren't, could you provide an example in which Ractors have the upper hand against MultiProcessing in both speed and communication latency?\n\nCan Ractors be as fast as C\/C++ threads and with low latency?\n\n\nThanks","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":153,"Q_Id":65505576,"Users Score":4,"Answer":"Are Ruby's Ractors just multiple processes in disguise and the GIL is still ruling over the threads?\n\n\nThe Ractor specification does not prescribe any particular implementation strategy. It most certainly does not prescribe that an implementor must use OS processes. In fact, while that would be a pretty simple implementation because the OS does all the hard work for you, it would also be a pretty stupid implementation because Ractors are meant to be light-weight, which OS processes are typically not.\nSo, I expect that every implementor will choose their own most efficient implementation strategy. 
For example, I would expect TruffleRuby's and JRuby's implementation to be based on something like Kilim or Project Loom, Opal's implementation to be based on WebWorkers, Realms, and Promises, Artichoke's implementation to be based on Actix, Riker, or Axiom, and maybe MRuby's implementation might even be based on OS processes because of MRuby's focus on simplicity.\nRight at this very moment, there does not exist any production-ready implementation of Ractors. In fact, there cannot be a production-ready implementation of Ractors, because the Ractor specification itself is still experimental, and thus not finalized.\nThe only implementation in existence right now is Koichi Sasada's original prototype which currently ships with YARV 3.0.0. This implementation does not implement Ractors as processes, it implements them as OS threads. YARV does not have a GIL, but it does have a per-Ractor GVL. So, only one thread of a Ractor can run at the same time, but multiple Ractors can each run one thread at the same time.\nHowever, this is not a very optimized implementation, only a prototype. I would expect TruffleRuby's or JRuby's implementation to not have any sort of global lock. They never had one before, and Ractors don't share any data, so there simply is nothing to lock in the first place.\n\n\nIf they aren't, could you provide an example in which Ractors have the upper hand against MultiProcessing in both speed and communication latency?\n\n\nThis comparison doesn't make much sense. First of all, Ractor is a specification with potentially multiple implementations, whereas to my understanding, Python's multiprocessing module is simply a way of starting multiple Python interpreters.\nSecondly, Ractors are a language feature with specific language semantics.\n\n\nCan Ractors be as fast as C\/C++ threads and with low latency?\n\n\nIt's not quite clear what you mean by this. C doesn't have threads, so asking about C threads doesn't make sense. 
C++ has threads, but just like Ractors, they are simply a specification with multiple possible implementations. It will simply depend on the particular implementation of Ractors and C++ threads.\nIt is certainly possible to implement Ractors using threads. The current YARV prototype is proof of that.","Q_Score":1,"Tags":"python,ruby,multithreading,multiprocessing,gil","A_Id":65506442,"CreationDate":"2020-12-30T10:36:00.000","Title":"Are Ruby Ractors the Same as Python's MultiProcessing module?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have imported openpyxl in my test .py file. It works great in that file but when I import it in the software it shows this error:\n\nC:\\Python39\\python.exe \"C:\/Users\/User\/Desktop\/Wash & Fold Laundry Billing System\/main.py\"\nTraceback (most recent call last):\nFile \"C:\\Users\\User\\Desktop\\Wash & Fold Laundry Billing System\\main.py\", line 5, in \nfrom openpyxl import Workbook\nModuleNotFoundError: No module named 'openpyxl'\nProcess finished with exit code 1\n\nI have tried each and every question on stackoverflow, google, w3schools, and geeksforgeeks\nbut all the answers were in vain.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":66,"Q_Id":65508937,"Users Score":0,"Answer":"OK, so I tried everything and this one solution is the best.\nYou just have to delete all the (PyCharm) configurations or interpreters.","Q_Score":1,"Tags":"python,excel,openpyxl,importerror","A_Id":65509534,"CreationDate":"2020-12-30T14:56:00.000","Title":"Importing Openpyxl in one file works but in another file ModuleNotFoundError Comes","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and
DevOps":0,"Web Development":0},{"Question":"Windows 10 Pro, Python 3.9, SQL Server 2012.\nI have written a bunch of Windows-based Python scripts over the years. One aspect, I use the pypyodbc library to insert data harvested via Python into SQL Server. This has been working well for years whether my Python script is run from the Python IDLE or if the Python script is run from the command line via a Windows batch (.bat) file for automation.\u00a0\nI recently introduced calling a, fully operational, SQL Server stored procedure from my\u00a0Windows-based Python script that is already doing SQL Server inserts successfully. When I run my Python script from the Python\u00a0IDLE, the SQL Server inserts and the stored procedure runs successfully.\nPROBLEM: when I run my Python script via a Windows batch (.bat) file, the SQL Server inserts are successful but the stored procedure does not run. I have also tried \"Run as administrator\" for the .bat file invoking my Python script. Still, the stored procedure will not run.\nGiven the SQL Server inserts are successful no matter what and the stored procedure WILL ONLY run when originating the Python script via the Python IDLE, perhaps this problem is a Windows configuration issue?\nThanks in advance for any guidance.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":44,"Q_Id":65509257,"Users Score":0,"Answer":"I located the problem. It was a permissions problem in SQL Server. This is now closed. Thanks.","Q_Score":0,"Tags":"python,sql-server,windows","A_Id":65510463,"CreationDate":"2020-12-30T15:18:00.000","Title":"Windows Python - invoking SQL Server stored procedure works only in Python IDLE","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I noticed the types.SimpleNamespace class seems to be recursively implemented. 
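The circular-looking definition the question describes is easy to observe interactively: types.SimpleNamespace and type(sys.implementation) are the same object, and SimpleNamespace itself is just a writable attribute container:

```python
import sys
import types

# The name exposed in `types` and the type of sys.implementation coincide.
print(type(sys.implementation) is types.SimpleNamespace)  # True

# SimpleNamespace behaves like a plain attribute bag.
ns = types.SimpleNamespace(x=1, y=2)
ns.z = ns.x + ns.y
print(ns.z)  # 3
```
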
In the types module, the class is defined as type(sys.implementation). However, type(sys.implementation) simply returns types.SimpleNamespace, which wouldn't be possible, since both names rely on each other to be available. How is types.SimpleNamespace actually implemented?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":288,"Q_Id":65509605,"Users Score":2,"Answer":"Of course type(sys.implementation) is types.SimpleNamespace, but that doesn\u2019t mean that the latter is the source of the type (and types. doesn\u2019t appear as the output of anything). Like most types exposed in types, SimpleNamespace is defined in the implementation (in C for CPython), and types just gives names to whatever circumlocutions are \u201cconvenient\u201d for accessing the type. (TracebackType is obtained from actually throwing and catching a dummy exception!) The sys module actually uses the built-in type, along with other shenanigans like creating version_info even though the type can\u2019t be instantiated (afterwards).","Q_Score":2,"Tags":"python,python-3.x,sys","A_Id":65510664,"CreationDate":"2020-12-30T15:44:00.000","Title":"How is types.SimpleNamespace implemented?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to know if I can create a python script with a folder in the same directory with all the assets of a python module, so when someone wants to use it, they would not have to pip install the module, because it would import from the directory.","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":41,"Q_Id":65509685,"Users Score":1,"Answer":"Yes, you can, but it doesn't mean that you should.\nFirst, ask yourself who is supposed to use that code.\nIf you plan to give it to consumers, it would be a good idea to use a tool like
py2exe and create an executable file which would include all modules and not allow the code to be changed.\nIf you plan to share it with another developer, you might want to look into virtual environments and a requirements.txt file.\nThere are multiple reasons why sharing modules is a bad idea:\n\nIt is harder to update modules later, at least without upgrading the whole project.\nIt uses more space in version control, which can create issues on huge projects with hundreds of modules and branches\nIt might be illegal, as some licenses specifically forbid including their code in your source code.\nThe pip install of some module might do different things depending on the operating system version or installed packages. The modules on your machine might be suboptimal on someone else's machine, and in some instances might not even work.\n\nAnd probably more that I can't think of right now.\nThe only situation where I saw this being unavoidable was when the module didn't support the python implementation the application was running on. The module was changed, and its source was put under the lib folder with the rest of the libraries.","Q_Score":0,"Tags":"python","A_Id":65510122,"CreationDate":"2020-12-30T15:49:00.000","Title":"Is it possible to have users not pip install modules and instead include the modules used in a different folder and then import that?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to use a LabJack U3 product using Python, and I am using the PyCharm IDE for development. I am new to both Python and PyCharm, FYI. In the LabJack documentation they say to run python setup.py install in the directory where I downloaded their Python files for using their device. I did this, and when run in a straight Python console I can get import u3 to run and am able to access the U3 device.
Yet when I run this in PyCharm I cannot get it to run. It always tells me module not found. I have asked LabJack for help but they do not know PyCharm. I have looked on the net but I can't seem to see how to get the module set up properly under PyCharm. Could I please get some help on how to do this properly?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":36,"Q_Id":65511138,"Users Score":0,"Answer":"First, download the module in PyCharm's settings. If it's still not working, install the module in PyCharm's terminal, then try to run your python script.","Q_Score":0,"Tags":"python","A_Id":65511317,"CreationDate":"2020-12-30T17:33:00.000","Title":"Unable to get LabJack U3 model loaded into PyCharm properly","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm trying to import pandas into a script.\nI'm using anaconda, I've already edited the python path to the executable file in the anaconda3 folder.\nI'm just not able to access any of the libraries I've downloaded (matplotlib, pandas, numpy)\nJust looking for some guidance. Thanks!","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":51,"Q_Id":65511588,"Users Score":1,"Answer":"Don\u2019t edit the PYTHONPATH or PATH yourself. Let Anaconda handle it for you.
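For both of these "module not found" situations (the PyCharm one and the Anaconda one), a quick sanity check is to print which interpreter is actually running and probe for the packages without crashing; the module names below are just the ones mentioned in the questions:

```python
import sys
from importlib import util

print(sys.executable)  # the interpreter PyCharm / conda actually launched
for mod in ("pandas", "numpy", "matplotlib"):
    status = "found" if util.find_spec(mod) else "MISSING"
    print(f"{mod}: {status} (environment root: {sys.prefix})")
```

If `sys.executable` is not the interpreter you installed the package into, the fix is to select that interpreter (or conda environment) in the IDE rather than to edit paths by hand.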
Switch to the appropriate conda environment using conda at the command line and try importing those modules again.","Q_Score":0,"Tags":"python,pandas,anaconda","A_Id":65512778,"CreationDate":"2020-12-30T18:07:00.000","Title":"How do I link Anaconda3 libraries python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a doubt about classification algorithm comparison.\nI am doing a project regarding hyperparameter tuning and classification model comparison for a dataset.\nThe Goal is to find out the best-fitted model with the best hyperparameters for my dataset.\nFor example: I have 2 classification models (SVM and Random Forest), my dataset has 1000 rows and 10 columns (9 columns are features) and 1 last column is the label.\nFirst of all, I split the dataset into 2 portions (80-20) for training (800 rows) and testing (200 rows) correspondingly. After that, I use Grid Search with CV = 10 to tune hyperparameters on the training set with these 2 models (SVM and Random Forest). When hyperparameters are identified for each model, I use these hyperparameters of these 2 models to test Accuracy_score on training and testing set again in order to find out which model is the best one for my data (conditions: Accuracy_score on training set < Accuracy_score on testing set (not overfitting) and whichever model's Accuracy_score on testing set is higher, that model is the best model).\nHowever, SVM shows the accuracy_score of training set is 100 and the accuracy_score of testing set is 83.56, this means SVM with tuned hyperparameters is overfitting. On the other hand, Random Forest shows the accuracy_score of training set is 72.36 and the accuracy_score of testing set is 81.23.
It is clear that the accuracy_score of the testing set of SVM is higher than the accuracy_score of the testing set of Random Forest, but SVM is overfitting.\nI have some questions as below:\n_ Is my method correct when I implement comparison of accuracy_score for training and testing set as above instead of using Cross-Validation? (if I use Cross-Validation, how do I do it?)\n_ It is clear that SVM above is overfitting but its accuracy_score of testing set is higher than accuracy_score of testing set of Random Forest, could I conclude that SVM is the best model in this case?\nThank you!","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":168,"Q_Id":65516888,"Users Score":0,"Answer":"I would suggest splitting your data into three sets, rather than two:\n\nTraining\nValidation\nTesting\n\nTraining is used to train the model, as you have been doing. The validation set is used to evaluate the performance of a model trained with a given set of hyperparameters. The optimal set of hyperparameters is then used to generate predictions on the test set, which wasn't part of either training or hyperparameter selection.
You can then compare performance on the test set between your classifiers.\nThe large decrease in performance of your SVM model on your validation dataset does suggest overfitting, though it is common for a classifier to perform better on the training dataset than on an evaluation or test dataset.","Q_Score":0,"Tags":"python,machine-learning,model,comparison,hyperparameters","A_Id":65516996,"CreationDate":"2020-12-31T05:11:00.000","Title":"Hyper-parameter tuning and classification algorithm comparison","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"In the Java\/Scala API, there is a readFile(fileInputFormat, path, watchType, interval, pathFilter, typeInfo) method which reads files in the path based on the given fileInputFormat. With this method, I can read other file types, e.g. a gzip file.\nIs there a corresponding method in the Python API? (Or how can I read a gzip file with the Python API?)\nThanks,\nAcan","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":196,"Q_Id":65517103,"Users Score":0,"Answer":"You can call the read_text_file method of stream_execution_environment in PyFlink","Q_Score":0,"Tags":"python,apache-flink","A_Id":65517965,"CreationDate":"2020-12-31T05:44:00.000","Title":"PyFlink - How to readFile() with specified file input format (instead of text format)?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have an RGB image and I converted it to Lab colorspace. Now, I want to convert the image in LAB space to a grayscale one.
I know that L is NOT the same as Luminance.\nSo, any idea how to get the equivalent gray value of a specific color in Lab space?\nI'm looking for a formula or algorithm to determine the equivalent gray value of a color given the LAB values.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":215,"Q_Id":65517516,"Users Score":1,"Answer":"The conversion from Luminance Y to Lightness L* is defined by the CIE 1976 Lightness Function. Put another way, L* transforms linear values into non-linear values that are perceptually uniform for the Human Visual System (HVS). With that in mind, your question is now dependent on what kind of gray you are looking for: if perceptually uniform and thus non-linear, the Lightness channel from CIE L*a*b* is actually that of CIE 1976 and is appropriate. If you need something linear, you would have to convert back to CIE XYZ tristimulus values and use the Y channel.","Q_Score":1,"Tags":"python-3.x,matlab,image-processing,colors,grayscale","A_Id":65519051,"CreationDate":"2020-12-31T06:41:00.000","Title":"Equivalent gray value of a color given the LAB values","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to install pyaudio on python 3.8 but after reading a lot, I found out that it is best to use python 3.6. Now to install pyaudio, I want to install it on python 3.6 in my terminal, but whenever I type python --version, it shows Python 2.7.16.\nHow can I make the change?\nP.S. - I use pycharm to write the code.","AnswerCount":2,"Available Count":1,"Score":-0.0996679946,"is_accepted":false,"ViewCount":317,"Q_Id":65517546,"Users Score":-1,"Answer":"You can use python3.6 --version if you have it installed, it should give you Python 3.6.x.
If you're trying to install pyaudio with Python 3.6 (I assume with pip), you can use pip3.6 ....","Q_Score":0,"Tags":"python,python-3.x,python-2.7,python-module,python-venv","A_Id":65517651,"CreationDate":"2020-12-31T06:46:00.000","Title":"How to switch Python versions in mac terminal?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have more than 210000 links which I want to scrape. Is there any way we can print how many links it has completed scraping during the execution and sleep or pause execution for 10 mins after every 10000 page count?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":170,"Q_Id":65519951,"Users Score":0,"Answer":"Great response! I was thinking of using the sleep inside an if statement, for example:\nif parsed_pages == 100*x:\ntime.sleep(random.randint(20, 150))","Q_Score":1,"Tags":"python-3.x,scrapy","A_Id":65531000,"CreationDate":"2020-12-31T10:55:00.000","Title":"Count scraped items from scrapy during execution and pause or sleep after a certain number of pages","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I used sublime text till now for python, but today I installed wing personal for python.\nI installed the module \"sympy\" both manually and by pip. It worked fine in sublime text, but when I wrote import sympy in the wing ide, it showed this error:\nbuiltins.ImportError: No module named sympy.
What is happening?\nI use wing personal, os: windows 10","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":168,"Q_Id":65520090,"Users Score":0,"Answer":"Take a look at Show Python Environment in the Source menu and compare the interpreter there to the value of sys.executable (after 'import sys') in Python outside of Wing or in Sublime. You are probably not using the same interpreter.\nThis can be changed from Project Properties in the Project menu by setting Python Executable under the Environment tab:\nIf you're using a base install of Python then select Command Line and select the interpreter from the drop down or enter the full path to python or python.exe.\nIf you're using a virtualenv you can also do that and enter the virtualenv's Python or select Activated Env and enter the path to the virtualenv's activate.","Q_Score":1,"Tags":"python,python-3.x,python-import,sympy,wing-ide","A_Id":65522570,"CreationDate":"2020-12-31T11:10:00.000","Title":"Error in wing IDE: no module named \"sympy\"","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm looking to create the below JSON file in python. 
I do not understand how I can have multiple dictionaries that are not separated by commas, so that when I use the JSON library to save the dictionary to disk, I get the below JSON;\n{\"text\": \"Terrible customer service.\", \"labels\": [\"negative\"], \"meta\": {\"wikiPageID\": 1}}\n{\"text\": \"Really great transaction.\", \"labels\": [\"positive\"], \"meta\": {\"wikiPageID\": 2}}\n{\"text\": \"Great price.\", \"labels\": [\"positive\"], \"meta\": {\"wikiPageID\": 3}}\ninstead of a list of dictionaries like below;\n[{\"text\": \"Terrible customer service.\", \"labels\": [\"negative\"], \"meta\": {\"wikiPageID\": 1}},\n{\"text\": \"Really great transaction.\", \"labels\": [\"positive\"], \"meta\": {\"wikiPageID\": 2}},\n{\"text\": \"Great price.\", \"labels\": [\"positive\"], \"meta\": {\"wikiPageID\": 3}}]\nThe difference is, in the first example, each line is a dictionary and they are not in a list or separated by commas.\nWhereas the second example, which is what I'm able to come up with, is a list of dictionaries, each dictionary separated by a comma.\nI'm sorry if this is a stupid question; I have been breaking my head over this for weeks, and have not been able to come up with a solution.\nAny help is appreciated.\nAnd thank you in advance.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":538,"Q_Id":65521537,"Users Score":0,"Answer":"The way you want to store the Data in one file isn't possible with JSON.\nEach JSON file can only contain one Object.
This means that you can either have one Object defined within curly braces, or an Array of objects as you mentioned.\nIf you want to store each Object as a JSON object you should use separate files each containing a single Object.","Q_Score":0,"Tags":"json,python-3.x,spacy,doccano","A_Id":65521632,"CreationDate":"2020-12-31T13:28:00.000","Title":"Creating a JSON file in python, where they are not separated by commas","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm kind of new to the Keras\/Tensorflow field and am currently trying to learn by using existing tutorial models for keras and trying to modify them afterwards. I chose an image classification task as this is what I will need it for if I ever get far enough to be able to handle the whole thing ;)\nSituation: I gathered 20k indoor and outdoor pictures of apartments and houses to sort them into indoor\/outdoor pictures. The model I use and modified step by step has an accuracy of 95.2% right now but I would like to try reaching an even higher accuracy.\nProblem: On my PC, it takes about 24h to run 50 epochs with training and testing and a batch size of 128 with pictures of 256x256. This means it takes forever to check if a modification to the model results in a reasonable improvement of the result.\nFor example, a lower batch size, smaller pictures or fewer epochs result in lower accuracy of the model. Reducing epochs has a smaller impact on the divergence between the test run and a full run, but in the end, it does not make a difference whether the model needs 12 or 24 hours for training if I would like to check whether a modification has a positive effect.\nMy question: How do you check if changes to a model tend to result in a higher accuracy? Test runs with only a few pics? Reducing the epochs to a very low number?
I tested some ways but every time I reduce the complexity in terms of picture number, picture resolution, epochs etc., a \"test run\" most of the time does not indicate the final accuracy of a full run. E.g. the test run performs worse while the full run does better, or vice versa.\nAny hints? Would be much appreciated!","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":47,"Q_Id":65521826,"Users Score":0,"Answer":"You are the only one who can answer this question. Let me see if I can explain my answer. You have 2 factors to take into consideration: the complexity of your model and the diversity of your pictures, and those go hand in hand. It is usually not required to train your model on all data because there is a certain degree of \"repetition\". That is basically the reason why Stochastic Gradient Descent works.\nYou could try to find an optimal dataset size by doing the following.\n\nSelect a random number of images (let's say, 500).\nPick a configuration for your model, epochs, etc\nTrain your model and record the accuracy\nIncrease the number of images by 10% and repeat steps 1-4 a few times.\nPlot the accuracy of your model.\n\nIf you notice that the accuracy does not increase, it could be because you either reached the \"optimal dataset size\" for your model or your model is not capable of learning anymore.\nAt this point, you could start reducing the complexity of your model, number of epochs, etc and again plot the accuracy.
If that does not decrease, your model was capable of learning but there was nothing more to learn from your images.\nJust an idea.","Q_Score":0,"Tags":"python,tensorflow,machine-learning,keras","A_Id":65535869,"CreationDate":"2020-12-31T13:57:00.000","Title":"Keras Tensorflow model optimization based on test runs possible?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to get the full_path of the places.sqlite file present in '%APPDATA%\\Mozilla\\Firefox\\Profiles\\\\places.sqlite' using the Python os module. The issue, as you can see, is that it has a random name and there could be multiple folders inside the Profiles folder.\nHow do I navigate\/find the path to the places.sqlite file?","AnswerCount":4,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":43,"Q_Id":65523906,"Users Score":0,"Answer":"Use the os module to list out all directories in %APPDATA%\\Mozilla\\Firefox\\Profiles\\\nLoop over the directories until you find the places.sqlite file (also using the os module)","Q_Score":0,"Tags":"python,python-3.x,os.path","A_Id":65523935,"CreationDate":"2020-12-31T17:26:00.000","Title":"Find a file 1 folder level down","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to do TIN Interpolation on a layer but when I fill all the fields with the right data (vector layer, interpolation attribute, extent etc) the algorithm does not run and shows me this message:\nTraceback (most recent call last):\nFile \"C:\/PROGRA~1\/QGIS 3.14\/apps\/qgis\/.\/python\/plugins\\processing\\algs\\qgis\\TinInterpolation.py\", line 188, in processAlgorithm\nwriter.writeFile(feedback)\nException: unknown\nExecution failed after
0.08 seconds\nDoes anybody have an idea about it?? Thank you","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":641,"Q_Id":65524297,"Users Score":0,"Answer":"I had the same issue. I converted a dxf file into a shape file and then I tried to use Tin interpolation but it didn't work. Then I realized that, in my dxf file, there were some very very small lines and polyline and, after removing them, the interpolation went just fine. I don't really have an explanation but maybe this could help you.","Q_Score":0,"Tags":"python,algorithm,vector,interpolation,qgis","A_Id":65921946,"CreationDate":"2020-12-31T18:09:00.000","Title":"Why I cannot do TIN Interpolation in QGIS?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to do TIN Interpolation on a layer but when I fill all the fields with the right data (vector layer, interpolation attribute, extent etc) the algorithm does not run and shows me this message:\nTraceback (most recent call last):\nFile \"C:\/PROGRA~1\/QGIS 3.14\/apps\/qgis\/.\/python\/plugins\\processing\\algs\\qgis\\TinInterpolation.py\", line 188, in processAlgorithm\nwriter.writeFile(feedback)\nException: unknown\nExecution failed after 0.08 seconds\nDoes anybody have an idea about it?? Thank you","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":641,"Q_Id":65524297,"Users Score":0,"Answer":"It is because of some small lines that are in your file that cannot be handled by Interpolation. 
You can use Generalizer3 in QGIS plugins to remove those lines.","Q_Score":0,"Tags":"python,algorithm,vector,interpolation,qgis","A_Id":71080481,"CreationDate":"2020-12-31T18:09:00.000","Title":"Why I cannot do TIN Interpolation in QGIS?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"ERROR: Could not install packages due to an EnvironmentError: [WinError 5] Access is denied: 'c:\\users\\imranliaqat\\appdata\\local\\programs\\python\\python39\\Lib\\site-packages\\cv2\\cv2.cp39-win_amd64.pyd'\nConsider using the --user option or check the permissions.","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":3299,"Q_Id":65525561,"Users Score":1,"Answer":"Just type the command you want to execute with the --user option, if you don't want to change the permissions:\npip3 install opencv-contrib-python --user\nOr just change the access permission of the location where the particular package is going to be installed.\nIn your case (Windows 10):\n\ngoto \"C:\\Program Files (x86)\\Python39\" (wherever your python is installed.)\nright click on the Python39 folder and click on properties\ngoto the Security tab and allow full control by clicking the edit button.\nagain open a new cmd terminal and try to install the package again.\n\nOtherwise open command prompt with Run as administrator and do the same thing.","Q_Score":1,"Tags":"python,python-imaging-library","A_Id":65525595,"CreationDate":"2020-12-31T21:09:00.000","Title":"pip install opencv-contrib-python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"ERROR: Could not install packages due to an EnvironmentError: [WinError 5] Access is denied:
'c:\\users\\imranliaqat\\appdata\\local\\programs\\python\\python39\\Lib\\site-packages\\cv2\\cv2.cp39-win_amd64.pyd'\nConsider using the --user option or check the permissions.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":3299,"Q_Id":65525561,"Users Score":0,"Answer":"On Windows 10, Python 3.8, I had this permission problem even when installing as Administrator.\nThe problem was due to the McAfee so-called protection package.\nI uninstalled it and then pip installs worked OK.","Q_Score":1,"Tags":"python,python-imaging-library","A_Id":65580677,"CreationDate":"2020-12-31T21:09:00.000","Title":"pip install opencv-contrib-python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am a kivy n00b, using python, and am not sure if this is the right place to ask.\nCan someone please explain how a user can input data in an Android app, and how\/where it is stored (SQL table, csv, xml?). I am also confused as to how it can be extended\/used for further analysis.\nI think it should be held as a SQL table, but do not understand how to save\/set up a SQL table in an android app, nor how to access it. Similarly, how to save\/append\/access a csv\/xml document, nor, if these are made, how they are secured against accidental deletion, overwriting, etc\nIn essence, I want to save only the timestamp at which a user enters some data, and the corresponding values (max 4).\nUser input would consist of 4 variables, x1, x2, x3, x4, and I would write a SQL statement along the lines of: insert into data.table timestamp, x1, x2, x3, x4, and then to access the data something along the lines of select * from data.table and then do\/show stuff.\nCan someone offer suggestions on what resources to read?
How to set up a SQL Server table in an android app?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":124,"Q_Id":65525770,"Users Score":1,"Answer":"This works basically the same way on Android as on the desktop: you have access to the local filesystem to create\/edit files (at least within the app directory), so you can read and write whatever data storage format you like.\nIf you want to use a database, sqlite is the simplest and most obvious option.","Q_Score":0,"Tags":"python,kivy","A_Id":65525777,"CreationDate":"2020-12-31T21:45:00.000","Title":"save user input data in kivy and store for later use\/analysis python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"E.g. I have a chat application; however, I realised that for my application, as long as you have the link to the chat, you can enter. How do I prevent that, and make it such that only members of the group chat can access the chat? Something like password-protecting the URL to the chat, or perhaps something like WhatsApp. Does anyone have any suggestions and reference material as to how I should build this and implement the function? Thank you!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":377,"Q_Id":65527007,"Users Score":1,"Answer":"I am in the exact same condition as you. What I am thinking of doing is:\nStore group_url and the respective user_ids (which we get from django's authentication) in a table (with two columns, group_url and allowed_user_ids) or in Redis.\nThen when a client connects to a channel, say chat\/1234 (where 1234 is the group_url), we get the id of that user using self.scope['user'].id and check it in the table.\nIf the user_id is in the respective group_url, we accept the connection. Else we reject the connection.
I am new to this too. Please suggest if you find a better approach","Q_Score":0,"Tags":"python,django,websocket,django-channels","A_Id":65527131,"CreationDate":"2021-01-01T02:54:00.000","Title":"Django: Channels and Web Socket, how to make group chats exclusive","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am using Gtk to create an application and I need to use a stack page again and again with different data on each page. I designed a page with glade but now want to clone it and use it in different pages of the stack. Please help me.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":93,"Q_Id":65527314,"Users Score":1,"Answer":"I got what I wanted to do. Basically, I wanted to use multiple widgets arranged in order to give me a template to use dynamically in my app. Composite widgets are what I was looking for.
These can be made and used with PyGI (the gi_composite library in Python)\n[https:\/\/github.com\/virtuald\/pygi-composite-templates][1]","Q_Score":1,"Tags":"python,c++,gtk,glade","A_Id":65563056,"CreationDate":"2021-01-01T04:42:00.000","Title":"Clone widgets and all their children in Gtk","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to run a simple python script using the beautifulsoup4 module.\nHowever, when I run the .py file, I get a ModuleNotFoundError.\nWhen I try to pip3 install beautifulsoup4 I get Requirement already satisfied.\nI've tried:\n\nuninstalling and reinstalling [with and without sudo];\neasy_install beautifulsoup4;\nchecking the paths with which python3; and\nall of the above in python2.7\n\nIs there a simple fix for this?","AnswerCount":2,"Available Count":1,"Score":-0.0996679946,"is_accepted":false,"ViewCount":82,"Q_Id":65528636,"Users Score":-1,"Answer":"Since you used pip3 to install the module, run your script with python3 script.py","Q_Score":0,"Tags":"python,python-3.x,python-2.7","A_Id":65528659,"CreationDate":"2021-01-01T09:39:00.000","Title":"Python module issue - \"ModuleNotFoundError\" and \"Requirement already satisfied\"","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am trying to make a password manager with a login system using Python and tkinter. To store those passwords I'm using Notepad. But the problem is that the passwords are easily accessible just by opening the .txt file (without logging into the app) that I'm using to store them. So, is there any way to make those text files only accessible through that app, or any better way to store that information?
Please don't mind any mistakes","AnswerCount":4,"Available Count":1,"Score":0.049958375,"is_accepted":false,"ViewCount":160,"Q_Id":65529075,"Users Score":1,"Answer":"There are some issues with the way you're going about doing this.\nFirst, you shouldn't be storing actual passwords. Instead, hash them and store the hashes, then when your users try to log in, you hash the passwords they give and check that it matches the hash you have stored for them.\nSecond, your app should be running under a service account and whatever secrets it needs should be accessible only to that account, which prevents any other user from being able to read its sensitive data. For added protection, you can disable ssh and shell access for that service account, and also disable sudo and root logins on your server to make it a bit harder for an attacker to become the service account (be aware it doesn't make it impossible, just harder).\nThird, you shouldn't be rolling your own secrets manager anyway. But assuming this is some sort of toy app or school project where you're doing it to learn how to do it, keep some basic rules of info sec in mind. You can also encrypt the file you keep the password hashes in, but don't think that necessarily buys more security as anyone who can get access to your service account to read the password file at all can also get the encryption key.
This is the basic reason \/etc\/passwd and \/etc\/shadow aren't encrypted and are instead protected by restrictive file permissions.\nMind you, a user database in a flat text file isn't particularly scalable, so while it works fine for servers, apps with a lot of accounts tend to use real databases, and you can use the database manager's built-in access control and encryption functionality to restrict access to sensitive tables and encrypt sensitive columns.","Q_Score":1,"Tags":"python,python-3.x,tkinter,text,notepad++","A_Id":65530461,"CreationDate":"2021-01-01T10:57:00.000","Title":"Suggestions regarding accessibility of files in python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a Python script which is supposed to create quite a lengthy list. Currently the VM I am using has an 8-core processor and 30GB RAM; however, that is not enough, as the machine runs out of memory before it can generate the list.\nI am wondering, is it possible to have, say, 5 similar VMs and have them work on the same script and pool their resources together?
I was thinking of using a MIG; however, I am wondering where I would then store the script and how the VMs would be able to communicate among themselves?","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":44,"Q_Id":65529323,"Users Score":2,"Answer":"Resolved this issue by changing my Python script to use a generator instead of saving the list to a var and simply looping through that generator; no need for extra processing power.","Q_Score":2,"Tags":"python,google-cloud-platform,virtual-machine,google-compute-engine","A_Id":65538930,"CreationDate":"2021-01-01T11:34:00.000","Title":"How can I make multiple VMs work on the same script in GCP?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"If I use GMail and send a message to myself it appears unread in my inbox.\nIf I send a message via the GMail API using a python script, the message is sent but it only appears in \"All Mail\" and \"Sent\" and is marked as read.\nI have checked my filters and there is nothing there that would be doing this (and I would expect filters to apply consistently across both of the above use-cases).\nAny thoughts?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":170,"Q_Id":65531150,"Users Score":0,"Answer":"Looks like one of those \"Google\" things.\nWaited an hour - no changes to the code but the behaviour is now as expected!","Q_Score":0,"Tags":"python,gmail-api","A_Id":65532786,"CreationDate":"2021-01-01T15:47:00.000","Title":"GMail-API (Python) EMail Sent To Self - Bypasses Inbox","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to develop a program in
Python to predict the outcome of a Pseudo Random Number Generator.\nI already have a program that gets the seed of the previously generated number using seed = random.getstate(). My question is whether there is any way to calculate the next seed that will be used, so I can predict the next number.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":145,"Q_Id":65532406,"Users Score":1,"Answer":"The reason that pseudorandom number generators are so named is that they're deterministic; they generate a sequence of numbers that appear to be random, but which aren't really. If you start a PRNG with the same seed, you'll get the same sequence every time.\n\nI already have a program that gets the seed of the previously generated number using seed = random.getstate()\n\nYou're not really getting a seed here, but rather the internal state of the PRNG. You could save that state and set it again later. That could be useful for testing, or just to continue with the same sequence.\n\nNow, my question is if there is any way to calculate the next seed that will be used, so I can predict the number.\n\nAgain, not really a seed, which is the initial value that you supply to start a PRNG sequence. What you're getting is the internal state of the PRNG. But yes, if you have that state, then it's trivial to predict the next number: just call random.setstate(...) with the state that you got, generate the next number, and then call random.setstate(...)
again to put the PRNG back in the same state so that it'll again generate that same number.","Q_Score":1,"Tags":"python,random,random-seed","A_Id":65532559,"CreationDate":"2021-01-01T18:18:00.000","Title":"Is there anyway to calculate the next seed knowing the previous seed?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"So I am currently trying to make something that uses mkvmerge to merge audio, video and an unknown amount of subtitle files. For now, those subtitle filepaths are in a list.\nHere is my issue. I cannot for the life of me think of a way of putting this into subprocess run, without throwing an error. If I combine the subtitle names into a single string, mkvmerge throws an error, so each file would need to be inside \"\",\"\" themselves.\nSo, the command without subtitles looks like this:\nsubprocess.run(['C:\\MKVToolNix\\mkvmerge.exe', '-o', f'E:\\Videos\\{output_filename}.mkv', 'E:\\Videos\\viddec.mp4', 'E:\\Videos\\auddec.mp4'])\nSo this will produce a working video.\nAFAIK, a properly formatted subprocess call including two subtitles would need to look like this.\nsubprocess.run(['C:\\MKVToolNix\\mkvmerge.exe', '-o', f'E:\\Videos\\{output_filename}.mkv', 'E:\\Videos\\viddec.mp4', 'E:\\Videos\\auddec.mp4', 'E:\\Videos\\eng.srt', 'E:\\Videos\\nor.srt'])\nIs it possible to add variables like that, as individual strings into a subprocess.run call, so that it will function properly? 
or is there perhaps a different method\/call I cannot think of?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":35,"Q_Id":65533234,"Users Score":1,"Answer":"You can build the list of arguments before the subprocess.run call, as long as you need it to be, and pass that list in the call.","Q_Score":0,"Tags":"python,python-3.x,subprocess","A_Id":65533259,"CreationDate":"2021-01-01T20:01:00.000","Title":"Subprocess.run and an unknown amount of files, how?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I was recommended that I \"kill the kernel\" because my code would not function correctly in my computer even though it worked on my teacher's and he suggested before leaving that the code in memory was different than the one showing and that I should kill the kernel to see if it worked, but I searched and I can only find that the kernel is the core of my OS, so I don't know what that has to do with anything, since my code is pretty simple.","AnswerCount":2,"Available Count":1,"Score":-0.0996679946,"is_accepted":false,"ViewCount":611,"Q_Id":65533679,"Users Score":-1,"Answer":"When I turn off a program on my computer, it deletes all the temporarily saved info and closed the program down. Killing the kernel does exactly that, but without turning off the front end.\nImagine you are using microsoft word and it freezes so you need to restart it. It closes the whole dang thing and then reopens it. 
If you could kill the kernel in word, it would be keeping the front end of word open, but just closing the backend.","Q_Score":0,"Tags":"python,pycharm,terminology","A_Id":65534083,"CreationDate":"2021-01-01T21:03:00.000","Title":"What is \"killing the kernel\" when coding?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"How do I get the user\/member object in discord.py with only the Name#Discriminator? I searched now for a few hours and didn't find anything. I know how to get the object using the id, but is there a way to convert Name#Discriminator to the id?\nThe user may not be in the Server.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1332,"Q_Id":65533874,"Users Score":0,"Answer":"There's no way to do it if you aren't sure they're in the server. If you are, you can search through the servers' members, but otherwise, it wouldn't make sense. Usernames\/Discriminators change all the time, while IDs remain unique, so it would become a huge headache trying to implement that. Try doing what you want by ID, or searching the server.","Q_Score":1,"Tags":"python,discord.py","A_Id":65534406,"CreationDate":"2021-01-01T21:31:00.000","Title":"Discord.py get user with Name#0001","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have 3 buttons with same classnames How I should click for example 2nd button?\n
\u2026<\/div>","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":47,"Q_Id":65534162,"Users Score":0,"Answer":"You can use:\ndriver.find_elements_by_class_name('css-gvi9bl-control')[1].click()\nAlternatively:\ndriver.find_elements_by_css_selector('.css-gvi9bl-control')[1].click()\n(Note the plural find_elements: the singular find_element returns a single element, which cannot be indexed.)\nPersonally, I would use XPath, but if you really want to use the class name, go with the CSS selector.","Q_Score":0,"Tags":"python,python-3.x,selenium,selenium-chromedriver","A_Id":65534269,"CreationDate":"2021-01-01T22:15:00.000","Title":"Multiple buttons with same classname","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"What will happen if I use the same training data and validation data for my machine learning classifier?","AnswerCount":5,"Available Count":4,"Score":0.0399786803,"is_accepted":false,"ViewCount":192,"Q_Id":65534207,"Users Score":1,"Answer":"We create multiple models and then use the validation data to see which model performed the best. We also use the validation data to reduce the complexity of our model to the correct level. If you use train data as your validation data, you will achieve incredibly high levels of success (your misclassification rate or average square error will be tiny), but when you apply the model to real data that isn't from your train data, your model will do very poorly.
This is called OVERFITTING to the train data.","Q_Score":2,"Tags":"python,validation,machine-learning","A_Id":65534230,"CreationDate":"2021-01-01T22:21:00.000","Title":"Train and validation data structure","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"What will happen if I use the same training data and validation data for my machine learning classifier?","AnswerCount":5,"Available Count":4,"Score":0.0,"is_accepted":false,"ViewCount":192,"Q_Id":65534207,"Users Score":0,"Answer":"Basically nothing happens. You are just trying to validate your model's performance on the same data it was trained on, which practically doesn't yield anything different or useful. It is like teaching someone to recognize an apple and asking them to recognize just the same apple and see how they performed.\nWhy a validation set is used then? To answer this in short, the train and validation sets are assumed to be generated from the same distribution and thus the model trained on training set should perform almost equally well on the examples from validation set that it has not seen before.","Q_Score":2,"Tags":"python,validation,machine-learning","A_Id":65534468,"CreationDate":"2021-01-01T22:21:00.000","Title":"Train and validation data structure","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"What will happen if I use the same training data and validation data for my machine learning classifier?","AnswerCount":5,"Available Count":4,"Score":0.0,"is_accepted":false,"ViewCount":192,"Q_Id":65534207,"Users Score":0,"Answer":"Generally, we divide the data to validation and training to prevent overfitting. 
To explain it, consider a model that classifies whether an image contains a human, and a dataset containing 1000 human images. If you train your model on all the images in that dataset and then validate it on the same dataset, your accuracy will be around 99%. However, when you give your model an image from a different dataset to classify, your accuracy will be much lower than the first. Generalization, in this example, means training a model that looks for a stick-figure shape to decide whether something is human, instead of looking for one specific person. Therefore, we divide the dataset into training and validation sets to generalize the model and prevent overfitting.","Q_Score":2,"Tags":"python,validation,machine-learning","A_Id":65534699,"CreationDate":"2021-01-01T22:21:00.000","Title":"Train and validation data structure","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"What will happen if I use the same training data and validation data for my machine learning classifier?","AnswerCount":5,"Available Count":4,"Score":0.0798297691,"is_accepted":false,"ViewCount":192,"Q_Id":65534207,"Users Score":2,"Answer":"If the train data and the validation data are the same, the trained classifier will have a high accuracy, because it has already seen the data. That is why we use train-test splits. We take 60-70% of the training data to train the classifier, and then run the classifier against 30-40% of the data, the validation data which the classifier has not seen yet.
This helps measure the accuracy of the classifier and its behavior, such as overfitting or underfitting, against a real test set with no labels.","Q_Score":2,"Tags":"python,validation,machine-learning","A_Id":65534254,"CreationDate":"2021-01-01T22:21:00.000","Title":"Train and validation data structure","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"In Python3,\nThe classic binary search first step\nmid = start + (end - start) \/ 2 throws\nTypeError: list indices must be integers or slices, not float because division yields float\ninstead of int by default.\nIs there a better way to deal with this than doing int(mid)?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":31,"Q_Id":65535512,"Users Score":1,"Answer":"In Python 3, the \/ operator does floating-point division, even if it's between two integers.\nYou want \/\/ instead, which will perform integer division.","Q_Score":0,"Tags":"python-3.x,binary-search","A_Id":65535524,"CreationDate":"2021-01-02T02:21:00.000","Title":"In Python3, the classic mid = start + (end - start) \/ 2 throws TypeError: list indices must be integers or slices, not float","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm still learning Python and decided to combine two different elements to understand loops and files better.\nI made a simple for loop that takes multiple inputs from the user and writes them into a file.\nI read and watched some tutorials on how to work with for loops and files.\nWhat I wanted to do is every time the user writes a line of string a number will be written at the beginning stating the position.\nSo, if I only write one
line then the for loop will also write the number \"1\" at the beginning.\nIf I write two lines of strings then it will be\n\nstring\nstring\n\nThe problem I'm having with this is that this method only works as long as I keep writing lines within the same run.\nIf I stop writing and decide to start the loop again to write a new line then it starts with number \"1\" again. Any idea how I can fix this and why it is happening?","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":140,"Q_Id":65535664,"Users Score":1,"Answer":"The way to do that would be to read the file, then find the most recent line, and get that number.\nYou would then add 1 to it, and keep going from there.\nP.S. regex will help","Q_Score":0,"Tags":"python,file,for-loop,input","A_Id":65535765,"CreationDate":"2021-01-02T02:56:00.000","Title":"Python For Loop with File","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"How would I error check if there is a previous page for the chromium browser to go to? I have a button which freezes out if the chromium instance has just been launched and there is not a previous page to go to using the command page.goback()","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":85,"Q_Id":65540674,"Users Score":0,"Answer":"This is a bug and is not intended behavior.
You should consider reporting it as an issue on the GitHub repo.","Q_Score":1,"Tags":"python,error-handling,chromium,pyppeteer","A_Id":65589496,"CreationDate":"2021-01-02T15:15:00.000","Title":"How to error check pyppeteer page.goBack()","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm developing a matchmaking bot for discord. I read through the documentation and looked through stack exchange, I can't find relevant information on this. I'm trying to eliminate clutter and make it easier for a users to accept and decline a matchmaking request. The bot would listen for a \"!match @user\" and handle the event. I only want the @user to see and accept it and the user that sent the message to see it (obviously). Right now, I believe it's not possible unless it assigns a secret role and deletes these roles after they are finished; this wouldn't be ideal though. Any information or forward to a post, that would help tremendously. Thanks in advance.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":2595,"Q_Id":65541872,"Users Score":0,"Answer":"You can not currently hide messages for any specific user\/role.\nYou can try 2 approaches:\n\nSend Message in User DMs\nMake a channel and set it permissions for a specific role and assign that role to the user","Q_Score":0,"Tags":"python,discord.py","A_Id":65547946,"CreationDate":"2021-01-02T17:13:00.000","Title":"discord.py: Hide messages from all other users, but the @mention user","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to build an executable from a Python multi-threading script using \"pyinstaller\". 
And I want to use the number of threads of the destination computer (it's not constant).\nI fear that multiprocessing.cpu_count() value comes from the building computer even if the number of threads is different from the destination PC.\nCan we take out the value?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":37,"Q_Id":65545542,"Users Score":1,"Answer":"I fear that multiprocessing.cpu_count() value comes from the building computer even if the number of threads is different from the destination PC.\n\nIt doesn't, it is a runtime call.\nAFAIK, pyinstaller does not build anything to native code. Instead, it bundles a Python interpreter with your own code (or its bytecode representation). Thus your program is still interpreted and should retain the same behavior.","Q_Score":1,"Tags":"python,multiprocessing","A_Id":65545625,"CreationDate":"2021-01-03T00:31:00.000","Title":"Can we take out \"cpu_count()\" value?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"So the current methods for capturing a downscaled image is to capture the full thing then downscale it, but I'm trying to go fast and thus don't want to capture the full image only every 10th pixel giving me an image 144 along and 108 down. 
which would result in a 10X speed increase because I don't have to get all the pixels from a full resolution just the few I want.\nThanks in advance","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":19,"Q_Id":65545844,"Users Score":0,"Answer":"I don't think this is possible due to driver\/library limitations, I'm thinking they'd be sending you the entire frame, try looking into the library you're using","Q_Score":1,"Tags":"python,image,video,resize","A_Id":65545880,"CreationDate":"2021-01-03T01:29:00.000","Title":"Python if I'm capturing at 1080 by 1440 is there a way to only read the pixel value from every 10th pixel","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is there a way to add expiry date to a Huey Dynamic periodic task ?\nJust like there is an option in celery task - \"some_celery_task.apply_async(args=('foo',), expires=expiry_date)\"\nto add expiry date while creating the task.\nI want to add the expiry date while creating the Huey Dynamic periodic task. I used \"revoke\" , it worked as it supposed to , but I want to stop the task completely after the expiry date not revoking it . 
When the Huey dynamic periodic task is revoked - message is displayed on the Huey terminal that the huey function is revoked (whenever crontab condition becomes true).\n(I am using Huey in django)\n(Extra)\nWhat I did to meet the need of this expiry date -\nI created the function which return Days - Months pairs for crontab :\nFor eg.\nstart date = 2021-1-20 , end date = 2021-6-14\nthen the function will return - Days_Month :[['20-31',1], ['*','2-5'], ['1-14','6']]\nThen I call the Huey Dynamic periodic task (three times in this case).\n(the Days_Month function will return Day-Months as per requirement - Daily, Weekly, Monthly or repeating after n days)\nIs there a better way to do this?\nThank you for the help.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":247,"Q_Id":65546519,"Users Score":0,"Answer":"The best solution will depend on how often you need this functionality of having periodic tasks with a specific end date but the ideal approach is probably involving your database.\nI would create a database model (let's call it Job) with fields for your end_date, a next_execution_date and a field that indicates the interval between repetitions (like x days).\nYou would then create a periodic task with huey that runs every day (or even every hour\/minute if you need finer grain of control). Every time this periodic task runs you would then go over all your Job instances and check whether their next_execution_date is in the past. If so, launch a new huey task that actually executes the functionality you need to have periodically executed per Job instance. On success, you calculate the new next_execution_date using the interval.\nSo whenever you want a new Job with a new end_date, you can just create this in the django admin (or make an interface for it) and you would set the next_execution_date as the first date where you want it to execute.\nYour final solution would thus have the Job model and two huey decorated functions. 
One for the periodic task that merely checks whether Job instances need to be executed and updates their next_execution_date and another one that actually executes the periodic functionality per Job instance. This way you don't have to do any manual cancelling and you only need 1 periodic task that just runs indefinitely but doesn't actually execute anything if there are no Job instances that need to be run.\nNote: this will only be a reasonable approach if you have multiple of these tasks and you potentially want to control the end_dates in your interface.","Q_Score":0,"Tags":"django,cron,django-celery,periodic-task,python-huey","A_Id":65550829,"CreationDate":"2021-01-03T04:00:00.000","Title":"How to add expiry date in HUEY dynamic periodic task just like in celery tasks?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I have audio bytes that are AAC encoded and would like to transcode them to a bytearray in Python in WAV format (PCM signed 16-bit little-endian) without calling ffmpeg to do so. How to do this?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":174,"Q_Id":65547693,"Users Score":0,"Answer":"I do that with soundfile library which is very light and portable.","Q_Score":1,"Tags":"python,wav,aac","A_Id":69177722,"CreationDate":"2021-01-03T07:51:00.000","Title":"Convert AAC bytes to WAV bytes in Python without ffmpeg?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am creating a webapp using django and react in which multiple user can control playback of a spotify player (play\/pause, skip). 
This is useful in case of a house party or something where people are listening to a common device. I am wondering if I can integrate the Spotify web player SDK so all users can listen to synced playback and control it at the same time remotely. I understand a single Spotify account needs to register that webapp to be used as a device. My question is whether I can control the state of playback if the page is opened by multiple users so that they listen to a song synchronously.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":314,"Q_Id":65549146,"Users Score":0,"Answer":"IDEA:\nYou can make a \"Room\" in your webapp: every room generates a unique ID and a password, and any person who wants to enter the room uses that password to join it.\nHow to sync every song?\nThis is the hard part, but you can do it using sockets (my recommendation; you could also use something else). Sockets will enable you to send information (if anyone changed, played, paused or stopped the song) to every user who is in that room, and each client then changes, plays, pauses or stops the song accordingly.","Q_Score":0,"Tags":"javascript,python,reactjs,django,spotify","A_Id":65549258,"CreationDate":"2021-01-03T11:12:00.000","Title":"Playback state control of spotify web player sdk using spotify api","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I want to train a Word2Vec model using \"gensim\". I want to determine the initial learning rate. However, it is written that both \"alpha\" and \"start_alpha\" parameters can be used to do so. What is the difference between them? Are they the same?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":101,"Q_Id":65549685,"Users Score":0,"Answer":"They're essentially the same.
If you specify non-default values in the constructor:\n\nthey're cached inside the model as alpha & min_alpha\n\nif you also took the option of specifying a training corpus in the constructor (sentences\/corpus_iterable\/corpus_file), the alpha & min_alpha will become the start_alpha, end_alpha for the automatic .train()\n\n\nIf you instead call .train() explicitly:\n\nyou can specify a start_alpha & end_alpha, just for that call, that will be used instead of the values set during the constructor (but won't be cached in the model for future calls)\n\nIf you're calling .train() multiple times on the same model for some reason \u2013 which is usually an unnecessary bad idea, except in some non-standard expert-usage situations \u2013 it's likely you'd want to properly choose the right progression of alpha ranges across each call (so that it's not bouncing up-and-down nonsensically).\nThat expert use is the real reason the separate, non-cached start_alpha & end_alpha parameters exist. Most users should stick with the defaults (so not use any of these), and if they ever do get to the point of tinkering, just try different alpha values in the constructor, carefully monitoring if it helps or not.","Q_Score":0,"Tags":"python,gensim,word2vec","A_Id":65553620,"CreationDate":"2021-01-03T12:10:00.000","Title":"Difference between \"alpha\" and \"start_alpha\"","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want the person who used the command to be able to delete the result. I have put the user's ID in the footer of the embed, and my question is: how do I get that data from the message the user reacted to?\nreaction.message.embed.footer doesn't work.
I currently don't have code as I was trying to get that ID first.\nThanks in advance!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":114,"Q_Id":65549860,"Users Score":1,"Answer":"discord.Message object has no attribute embed, but it has embeds. It returns you a list of embeds that the message has. So you can simply do: reaction.message.embeds[0].footer.","Q_Score":0,"Tags":"python,python-3.x,discord.py,discord.py-rewrite","A_Id":65549978,"CreationDate":"2021-01-03T12:30:00.000","Title":"Get embed footer from reaction message","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am a complete beginner in Python and it is my first question on Stackoverflow. I have tried numerous tutorials on youtube + some additional google searching, but haven't been really able to completely solve my task. Briefly putting it below as follows:\nWe have a dataset of futures prices (values) for the next 12-36 months. Each value corresponds to one month in the future. The idea for the code is to have an input of the following:\n\nstarting date in days (like 2nd of Feb 2021 or any other)\nduration of given days (say 95 or 150 days or 425 days)\nThe code has to calculate the number of days from each given month between starting and ending date (which is starting + duration) and then to use appropriate values from the corresponding month to calculate an average price for this particular duration in time.\n\nExample:\nStarting date is 2nd of Feb 2021 and duration is 95 days (end date 8th of May). Values are Feb - 7750, Mar - 9200, April - 9500, May is 10100.\nI have managed to do the same in Excel (which was very clumsy and too complicated to use on a daily basis) and the average comes out to around 8949 taking in mind all of the above. But I can't figure out how to code the same \"interval\" with days per month in Python.
All of the articles just simply point out to \"monthrange\" function, but how is that possible to apply same for this task?\nAppreciate your understanding of a newbie question and sorry for the lack of knowledge to express\/explain my thoughts more clear.\nLooking forward to any help relative to above.","AnswerCount":2,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":94,"Q_Id":65550486,"Users Score":2,"Answer":"You can use dataframe.todatetime() to constuct your code. If you need further help, just click ctrl + tab within your code to see the inputs and their usage.","Q_Score":0,"Tags":"python,datetime,finance","A_Id":65550534,"CreationDate":"2021-01-03T13:42:00.000","Title":"Datetime usage in Python for finance related task","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to understand (from a python noob perspective) how could I set GNU Radio for multiple OS's\/machines usage. Ideally I'd use GRC only on my Ubuntu machine, but run .py from both my windows machine and my raspberry pi. I've seen this thread that implies that using venv is the best alternative (which I'd love), but when I used GNU Radio back in 2018 it seemed that pybombs was the best alternative and usage in MacOS or windows was rather bad.\nIs there a good way to handle multiple OS's usage? I want to be sure before installing the required packages and asking the other guys who'll help me with the project to do so.\nThanks in advance","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":73,"Q_Id":65553306,"Users Score":0,"Answer":"Install it on your Ubuntu machine. Open a remote shell into that machine from the other 2 machines to run the command line python programs. 
It's hard enough to get everything working on 1 machine, never mind 3, never mind moving all the hardware around risking damage from static and accidents.\nA VM is also an option, but you still have to move hardware around, and depending on what you're doing a VM might be too slow, and that's not an option on a RPi.","Q_Score":0,"Tags":"python,gnuradio","A_Id":65556111,"CreationDate":"2021-01-03T18:17:00.000","Title":"Running GNU Radio in multiple OS's","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"Im trying to run a script (.ipynb) in a jupyter notebook, but this script is in a different folder from the original jupyter notebook. I would like to execute: \n%run Scripts\/python\/script.ipynb \nThe folder Scripts is out the folder of the jupyter notebook.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":49,"Q_Id":65554012,"Users Score":0,"Answer":"just copy paste file on jupyter root directory in my case it is Users\/{user-name}\/,","Q_Score":0,"Tags":"python,jupyter-notebook","A_Id":65554249,"CreationDate":"2021-01-03T19:27:00.000","Title":"Run script (.ipynb) that is out the folder of a jupyter notebook","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am having problems with installing a python package (pandas) on Ubuntu 18.04 to a specific Python (3.6.5) distribution in Salome_meca located in:\n\/home\/username\/salome_meca\/V2019.0.3_universal\/prerequisites\/Python-365\/lib\/python3.6\/os.py\nif I run:\nsudo python3.6 -m pip install --install-option=\"--prefix=\/home\/username\/salome_meca\/V2019.0.3_universal\/prerequisites\/Pandas-120\" pandas\nIt raises an 
error:\nRequirement already satisfied: pandas in \/usr\/lib\/python3\/dist-packages\nAnd I cannot import this module, as the python (3.6.5) distribution in Salome_meca cannot find it when I run the code in the Salome_meca environment.","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":215,"Q_Id":65554051,"Users Score":0,"Answer":"The problem was solved by first running .\/salome shell in the terminal and then pip3 install pandas, which installed pandas under the python distribution within salome_meca. The only problem is that it was not installed in the correct folder (but it works anyway). Probably one should also set the target dir, in which case the command should be: pip3 install pandas --target=\/home\/lskrinjar\/salome_meca\/V2019.0.3_universal\/prerequisites\/Pandas-115","Q_Score":0,"Tags":"python,pandas,ubuntu,pip","A_Id":65619170,"CreationDate":"2021-01-03T19:31:00.000","Title":"Install python package to python distribution of salome_meca on Ubuntu 18.04","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"How can I make a login form that will remember the user so that he does not have to log in next time?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":70,"Q_Id":65554149,"Users Score":0,"Answer":"Some more information would be nice, but if you want to use a database for this then you would have to create an entry for the user information last entered.\nThen, on reopening the program, you would check if there are any entries and, if so, load the latest one.\nBut I think that writing the login information to a file on your PC would be a lot easier. 
So you run the steps from above, just writing to a file instead of a database.\nI am not sure how you would make this secure, because you can't really encrypt the password: you would need a password or key of some type, and that password or key would be easy to find in the source code, especially in Python. It would be harder to find in compiled programming languages, but it would still be in there somewhere. And if you used a database you would have a password for that, but it would also sit on the hard drive unless encrypted some other way, and then we are back where we started.\nSo, as mentioned above, a database would be quite useless for a task like this because it doesn't improve anything and is a hassle for beginners to set up.","Q_Score":0,"Tags":"python-3.x,tkinter","A_Id":65554208,"CreationDate":"2021-01-03T19:40:00.000","Title":"How to do auto login in python with sql database?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using time series data from surgery to label a patient (binary) as having a certain medical condition or not as the patient is discharged from surgery. I am hoping to use an LSTM or RNN. However, there are certain features that are missing in the sequence when the patient goes on bypass (for example, there is no pulse because the heart is stopped). Thus, all my patients have 'missing data' at some point. Should I still use imputation, or is there a way that I can still use these features with their known gaps?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":53,"Q_Id":65555469,"Users Score":0,"Answer":"Depends on your application.\nOne way to deal with it is to remove any samples with missing features from the train set.\nOf course, at test-time, the predictions will not be reliable, as it's an unseen situation. 
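The file-based "remember me" approach described above can be sketched as follows; rather than writing the password itself, store a random token (the file layout and function names here are my own invention):

```python
import json
import os
import secrets

def save_session(path, username):
    # store the username plus a random token -- never the password itself
    token = secrets.token_hex(16)
    with open(path, "w") as f:
        json.dump({"user": username, "token": token}, f)
    return token

def load_session(path):
    # returns the saved session dict, or None if the user never logged in
    if not os.path.exists(path):
        return None
    with open(path) as f:
        return json.load(f)
```

On startup the app would call `load_session` and skip the login form when a session file exists; if there is a server side, it would still need to validate the stored token.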
So it's your task to detect those conditions at test-time and tell the user the predictions are not reliable.\nAnother option:\nIf the features that might be missing are categorical, an \"unknown\" category might be just fine to train on. For numeric features that doesn't work.","Q_Score":0,"Tags":"python,deep-learning","A_Id":65555562,"CreationDate":"2021-01-03T22:13:00.000","Title":"Known periods of missing data for deep learning applications","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm still working on this cocktail app and I ran into another issue. After some time, the api call will drop after an hour; the api will only stay open for an hour. After that hour, I have to make a new api call. But I don't want to start all over when trying to pull data back. The data I'm using has over 300 thousand records in it to match, which on average takes about 10 hours to complete. 
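The \"unknown\" category idea above can be sketched with a tiny encoder (all names here are hypothetical); missing values map to a dedicated index instead of being imputed:

```python
def encode_with_unknown(values, categories):
    # known categories get indices 0..n-1; anything else (including None) maps
    # to a dedicated "unknown" index n, so the model can learn from the gap itself
    index = {c: i for i, c in enumerate(categories)}
    unknown = len(categories)
    return [index.get(v, unknown) for v in values]

print(encode_with_unknown(["sinus", None, "paced"], ["sinus", "paced"]))  # → [0, 2, 1]
```

For numeric features such as pulse, a common workaround in the same spirit is to add a companion binary "is missing" feature alongside an imputed value, so the network can distinguish a true reading from a gap.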
Sorry, if this question does not make any sense I'm still new to coding and I'm doing my best!\nFor instance:\n\nMake api call\nUse data from csv file 300k records to get data back on\nRetrieve the data back.\nAPI connection drops--- Will need to make new api connection.\nMake API call\nDetermine what data has been matched from csv file, then pick up from the next record that has not been matched.\n\nIs there a way to make sure that when I have to get a new API key or when the connection drops, whenever the connection is made again it picks up where it left off?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":34,"Q_Id":65556049,"Users Score":0,"Answer":"Sounds like you want your API to have something like \"sessions\" that the client can disconnect and reconnect to.\nJust some sort of session ID the server can use to reconnect you to whatever long running query it is doing.\nIt probably gets quite complex though. The query may finish when no client is connected and so the results need to be stored somewhere keyed by the session id so that when the API reconnects it can get the results.\nPS. 10 hours to match 300K records? That is a crazy amount of time ! I'd be doing some profiling on that match algorithm.","Q_Score":0,"Tags":"python,api,soap","A_Id":65556139,"CreationDate":"2021-01-03T23:36:00.000","Title":"How do I have an API pickup where it left off after the request times out","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am a bit confused about the definition of the r-squared score in the linear regression model.\nAs far as I know, the R-squared score represents how much of the dependent variable can be determined by the independent variables. 
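A simple way to "pick up where it left off" is to persist a checkpoint after each matched record, so a new API session can skip work already done (the file format and function names are my own sketch):

```python
import json
import os

def process_records(records, checkpoint_path, handle):
    # resume from the index stored in the checkpoint file, if any
    start = 0
    if os.path.exists(checkpoint_path):
        with open(checkpoint_path) as f:
            start = json.load(f)["next"]
    for i in range(start, len(records)):
        handle(records[i])
        # persist progress so a dropped connection loses at most one record
        with open(checkpoint_path, "w") as f:
            json.dump({"next": i + 1}, f)
```

Writing the checkpoint on every record is slow for 300k rows; checkpointing every N records is a common compromise.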
However, in the scikit-learn library, we have an r2_score function that calculates the r-squared score like r2_score(y_true, y_pred). But both of the parameters here are the output values, and it doesn't seem that it involves any of the independent variables. Could you help me to understand how this is calculated?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":398,"Q_Id":65556670,"Users Score":1,"Answer":"You asked about the python code x = r2_score(y_true, y_pred)\nNote that:\n\n\n\n\ny_pred\ny_true\n\n\n\n\ny_pred stands for \"prediction of the y-variable\"\ny_true stands for \"true value of the y-variable\"\n\n\nThe predicted value is never the raw data. Oftentimes, the predicted value is a line of best fit.\ny_true are the raw numbers you collect during either an experiment, survey, or scientific study.\n\n\n\n\nSuppose that you have a model of a teenager's height as a function of age.\n\n\n\n\nAGE (years since birth)\n10 years\n12 years\n14 years\n16 years\n\n\n\n\nHEIGHT (inches)\n55\n60\n65\n68\n\n\n\n\nAGE is defined as years since birth (not years since conception, like in China).\nAlso, we round age down to the nearest whole year.\nA child 10.81773 years old is listed as 10 years old.\nAn example of a predicted value might be that you think, on average, a child 10 years of age is 55 inches tall.\nIf you conduct a study where you measure the height of 1,038 children who are each 10 years old, you will find that the children are not all exactly 55 inches tall.\nThe raw data (measured heights of children) is known as the set of true y-values.\nStatisticians often measure error by comparing the distance of the measured height of a child from the predicted height.\nFor example, 10-year-old Joanna's height is 52 inches (rounded to the nearest whole inch).\nWe predicted Joanna's height to be 55 inches.\nThere are 3 inches of difference between the true, and the predicted, values.\nOftentimes, statisticians want one 
number for a data set, instead of 1,038 different numbers.\nOne thing you can do is convert the difference between the predicted height, and the actual heights of the children, into a positive number. For example, -5 becomes +5.\nAfter that, compute the average positive-difference (in inches) between actual height and predicted height.\nTaking the absolute difference is important. Some children are shorter than predicted (-2 inches), and some children are taller (+7 inches).\nIf you allow negative numbers, the average difference between the average height and actual height will always be zero.\n\nTake 1,038 actual heights.\nSubtract 55 inches from each actual height.\nSum up the height discrepancies without converting to positive numbers.\nThe result will always be zero.\n\nIn fact, one way to define the mean is that the mean of a sequence of numbers is a number x such that when you calculate the difference between each data-point and x, then sum the results, the answer is zero.\nGenerally, statisticians square the differences.\nSince Joanna was short (-2 inches), the squared error for Joanna is 4 square inches.\nA negative number times a negative number is always a positive number.\nSquaring gets rid of the negative signs.\nTaking the absolute value gets rid of the negative signs.\nActually... there are about a million ways to get rid of the negative signs.\nSome statisticians are like the parrots in the 1998 movie \"Paulie.\"\nI say \"taco\" and they say \"TACO! TACO! 
TACO!\"\nThey copy what other statisticians do, and it never occurs to them that there is more than one way to do statistical analysis.\nI have a degree in math, I have seen proofs that the curve which minimizes the mean-squared-error is ideal in some ways.\nHowever, the mean-squared-error is more of a heuristic, or proxy, than a measure of what really, truly, matters.\nNobody actually has a formula which perfectly computes for data-set A, and data-set B, which data-set is more \"spread-out\" than the other.\nIt is difficult to tell what humans care about.\nAnyways, mean-square-error is better than nothing. IT measures how spread-out a data-set is. A\nre the data-points WAY WAY far away from the average, or all pretty close to the average?\nWhat-if 55 inches was the true average height of a 10-year-old child?\nAlso imagine \"what-if\" the true standard deviation was 4 inches.\nIn that imaginary world, suppose you randomly sampled 1,038 children, each 10 years of age.\nYour sample-variance (computed from experimental data) is 7.1091 inches.\nWhat is the likelihood that a sample of 1,038 children would have a variance of 7.1091 inches or more?\nIf your model is correct, what is the likelihood that the data would as far, or further, from the model's prediction as that you observed?\nIf the data you see is way WAY WAY far away from the predicted value, then your model is probably bad.\nAnyway the R-squared measure is:\n\n0% if the data does not match the model at all\n100% if the differences between the data and the prediction are adequately explained by random chance.\n\nFor example, if you toss a fair-coin 1,000 times, it is reasonably that 491 results would be heads instead of exactly 500 results of \"heads\".\nThe question is, is the observed value (491 heads out of 1,000 tosses) likely, or extremely weird, given the model which says that it should be about 500 out of 
1,000.","Q_Score":0,"Tags":"python,machine-learning,data-science,modeling","A_Id":65557112,"CreationDate":"2021-01-04T01:31:00.000","Title":"what is r-squared in linear regression models?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm admittedly beginner to intermediate with Python and a novice at BeautifulSoup\/web-scraping. However, I have successfully built a couple of scrapers. Normal tags = no problem (e.g., div, a, li, etc.)\nHowever, I can't find how to reference this tag with .select or .find or attrs=\"\" or anything:\n..........\n requirements.txt to create a requirements.txt file which you can then use on the other system with pip install -r requirements.txt to recreate the exact same virtual environment.\n\nAll you have to do is keep that requirements.txt up to date and update the virtual environments on either computer whenever it changes, instead of pulling it through GitHub.\nIn more detail, a simple example (for Windows, very similar for Linux):\n\ncreate a project folder, e.g. C:\\projects\\my_project\ncreate a virtual environment for the project, e.g. python -m venv C:\\projects\\venv\\my_project\nactivate the environment, i.e. C:\\projects\\venv\\my_project\\Scripts\\activate.bat\ninstall packages, e.g. pip install numpy\nsave what packages were installed to a file in C:\\projects\\my_project, called requirements.txt, with pip freeze > requirements.txt\nstore the project in a Git repo, including that file\non another development machine, clone or pull the project, e.g. git clone https:\/\/github.com\/my_project D:\\workwork\\projects\\my_project\non that machine, create a new virtual environment, e.g. python -m venv D:\\workwork\\venv\\my_project\nactivate the environment, i.e. 
D:\\workwork\\venv\\my_project\\Scripts\\activate.bat\ninstall the packages that are required with pip install -r D:\\workwork\\projects\\my_project\\requirements.txt\n\nSince you say you're using PyCharm, it's a lot easier still, just make sure that the environment created by PyCharm sits outside your project folder. I like to keep all my virtual environments in one folder, with venv-names that match the project names.\nYou can still create a requirements.txt in your project and when you pull the project to another PC with PyCharm, just do the same: create a venv outside the project folder. PyCharm will recognise that it's missing packages from the requirements file and offer to install them for you.","Q_Score":1,"Tags":"python,git,pycharm,interpreter,pyvenv","A_Id":65558584,"CreationDate":"2021-01-04T06:04:00.000","Title":"Transferring a Python project to different computers","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am deploying my Application on Digital Ocean and if I update my application the traffic would be interrupted.\nIs there a way to update the flask application without interrupting the traffic.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":59,"Q_Id":65558830,"Users Score":0,"Answer":"A solution would be having two back-end APIs using same database.\nWhile updating one of the APIs, you'd direct the traffic to the other API using a load balancer, a reverse proxy or whatever.\nAfter updating one of the APIs, you can direct the traffic to the updated API and deploy the other one.","Q_Score":0,"Tags":"python,flask","A_Id":65558860,"CreationDate":"2021-01-04T07:05:00.000","Title":"How can we update Deployed Flask Application without interrupting live traffic?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop 
Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I need your advice on something that I'm working on as a part of my work.\nI'm working on automating the Aurora dump to an S3 bucket every midnight. As a part of it, I have created an EC2 instance that generates the dump, and I have written a python script using boto3 which moves the dump to the S3 bucket every night.\nI need to notify a list of developers if the data dump doesn't take place for some reason.\nAs of now, I'm posting a message to an SNS topic which notifies the developers if the backup doesn't happen. But I need to do this with Cloudwatch and I'm not sure how to do it.\nYour help will be much appreciated! Thanks!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":30,"Q_Id":65559529,"Users Score":0,"Answer":"I have created a custom metric to which I have attached a Cloudwatch alarm, and it gets triggered if there's an issue in the data backup process.","Q_Score":1,"Tags":"python,amazon-web-services,amazon-s3,automation,cloudwatch-alarms","A_Id":65574562,"CreationDate":"2021-01-04T08:15:00.000","Title":"Cloudwatch Alarm for Aurora Data Dump Automation to S3 Bucket","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm looking for a way to make the bot screen share a website, with the link as a parameter after the command. (To clarify: the website starting URL doesn't change, only a token that's given after the starting URL. For example: \"https:\/\/websitehere.com\/branch\/ 'token goes here'","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":2426,"Q_Id":65560530,"Users Score":1,"Answer":"Bots can't screen share! 
The Discord bot API does not support it yet!","Q_Score":1,"Tags":"python,discord,discord.py","A_Id":65563382,"CreationDate":"2021-01-04T09:45:00.000","Title":"Is there a way to make a discord bot screen share a website","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to automate some processes on my laptop...like checking mail, messaging, sending birthday wishes etc.\nThe code is pretty simple for all of them, but what I want now is that, for instance, after the function for checking email has been executed, the function should deactivate till the next day...\nAnd the same should happen to the other functions.\nAll those functions are in the same file and executed in order.\nSo it's more like:\nwish_birthday_func()\nwish_birthday_func.pause_next_day#even if i #run the file again during the #same day\ncheck_mail_func() #.....","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":25,"Q_Id":65562026,"Users Score":0,"Answer":"You wouldn't want a permanently running task, because you could reboot your laptop, right?\nSo you should look for your OS's scheduler (Windows at, Linux cron, etc.)","Q_Score":0,"Tags":"python,datetime,time","A_Id":65562435,"CreationDate":"2021-01-04T11:35:00.000","Title":"Pause event till next day and continue with the rest of the python code","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is there any polynomial or linear regression function in python whose error function is the Mean Absolute Error? 
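Besides an OS scheduler, the "deactivate till the next day" behaviour described above can be done inside the script with a marker file recording the last run date (the file layout and function name are my own sketch):

```python
import os
from datetime import date

def run_once_per_day(marker_path, task):
    # skip the task if the marker file already records today's date
    today = date.today().isoformat()
    if os.path.exists(marker_path):
        with open(marker_path) as f:
            if f.read().strip() == today:
                return False  # already ran today
    task()
    with open(marker_path, "w") as f:
        f.write(today)
    return True
```

Re-running the file on the same day then becomes a no-op for that task, even across reboots, since the marker lives on disk.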
Sklearn uses MSE, but since the noise in the data has a normal distribution, I would like to minimise MAE.\nAfter that I also need a mathematical function (coefficients).","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":319,"Q_Id":65562744,"Users Score":0,"Answer":"LightGBM regression can work with MAE","Q_Score":0,"Tags":"python,function,regression,gradient,loss","A_Id":65566268,"CreationDate":"2021-01-04T12:28:00.000","Title":"Polynomial regression with Mean Absolute Error cost function Python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to make a small .exe file that will work on any PC. Is it possible to make, and if yes, what is the procedure?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":74,"Q_Id":65564592,"Users Score":0,"Answer":"Since you wrote a terrible question, I will write a terrible answer:\nYes, you can make .exe files in python. You need a package called \"pyinstaller\" which can bundle your .py files into .exe files and, when run on another operating system, into executables for that system as well. 
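On the MAE question above: least squares (the mean) minimizes squared error, while absolute error is minimized by the median, which is why an MAE fit is more robust to outliers. A quick stdlib check of that fact (the `mae` helper is my own):

```python
import statistics

def mae(center, xs):
    # mean absolute error of a constant prediction `center`
    return sum(abs(x - center) for x in xs) / len(xs)

xs = [1, 2, 2, 3, 10]  # one outlier drags the mean, not the median
print(mae(statistics.median(xs), xs))  # → 2.0
print(mae(statistics.mean(xs), xs))    # → 2.56
```

For an actual MAE-minimizing fit with coefficients, quantile regression at the median (e.g. statsmodels' `QuantReg` with q=0.5) is one commonly used option; recent scikit-learn versions also offer an absolute-error loss in some estimators, though exact names vary by version.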
A .exe file only works on Windows, so a PC running a Linux distro won't be able to run it.\nEdit: pyinstaller is also useful when you want to just make a .exe","Q_Score":0,"Tags":"python,machine-learning,software-design","A_Id":65564631,"CreationDate":"2021-01-04T14:43:00.000","Title":"is it possible to make software using python only?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm using an SQL server and RabbitMQ as a result backend\/broker for celery workers. Everything works fine, but for future purposes we plan to use several remote workers on different machines that need to monitor this broker\/backend. The problem is that you need to provide direct access to your broker and database URL, which opens many security risks. Is there a way to give a remote celery worker access to the remote broker\/database via ssh?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":219,"Q_Id":65565698,"Users Score":0,"Answer":"It seems like ssh port forwarding is working, but I still have some reservations.\nMy plan works as follows:\n\nport forward both the remote database and broker to local ports (autossh) on the remote celery worker's machine.\nnow the celery workers consume the tasks and write to the remote database through the forwarded local ports.\n\nIs this implementation bad, as no one seems to use remote celery workers like this?\nAny different answer will be appreciated.","Q_Score":0,"Tags":"python,ssh,rabbitmq,celery","A_Id":65580702,"CreationDate":"2021-01-04T15:57:00.000","Title":"Python celery backend\/broker access via ssh","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I realize similar 
questions have been asked, however they have all been about a specific problem, whereas I don't even know how I would go about doing what I need to.\nThat is: from my Django webapp I need to scrape a website periodically while my webapp runs on a server. The first options that I found were \"django-background-tasks\" (which doesn't seem to work the way I want it to) and 'celery-beat', which recommends getting another server if I understood correctly.\nI figured just running a separate thread would work, but I can't seem to make that work without it interrupting the server and vice-versa, and it's not the \"correct\" way of doing it.\nIs there a way to run a task periodically without the need for a separate server and a request to be made to an app in Django?","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":87,"Q_Id":65567300,"Users Score":1,"Answer":"'celery-beat' which recommends getting another server if I understood correctly.\n\nYou can host celery (and any other needed components) on the same server as your Django app. They would be separate processes entirely.\nIt's not an uncommon setup to have a Django app + celery worker(s) + message queue all bundled into the same server deployment. 
Deploying on separate servers may be ideal, just as it would be ideal to distribute your Django app across many servers, but is by no means necessary.","Q_Score":1,"Tags":"python,django","A_Id":65568135,"CreationDate":"2021-01-04T17:44:00.000","Title":"Running \"tasks\" periodically with Django without separate server","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I realize similar questions have been asked, however they have all been about a specific problem, whereas I don't even know how I would go about doing what I need to.\nThat is: from my Django webapp I need to scrape a website periodically while my webapp runs on a server. The first options that I found were \"django-background-tasks\" (which doesn't seem to work the way I want it to) and 'celery-beat', which recommends getting another server if I understood correctly.\nI figured just running a separate thread would work, but I can't seem to make that work without it interrupting the server and vice-versa, and it's not the \"correct\" way of doing it.\nIs there a way to run a task periodically without the need for a separate server and a request to be made to an app in Django?","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":87,"Q_Id":65567300,"Users Score":1,"Answer":"I'm not sure if this is the \"correct\" way but it was a cheap and easy way for me to do it. 
I just created custom Django Management Commands and have them run via a scheduler such as cron; in my case I utilized Heroku Scheduler for my app.","Q_Score":1,"Tags":"python,django","A_Id":65568327,"CreationDate":"2021-01-04T17:44:00.000","Title":"Running \"tasks\" periodically with Django without separate server","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I use Python Anaconda and Visual Studio Code for Data Science and Machine Learning projects.\nI want to learn how to use Windows Subsystem for Linux, and I have seen that tools such as Conda or Git can be installed directly there, but I don't quite understand the difference between a common Python Anaconda installation and a Conda installation in WSL.\nIs one better than the other? Or should I have both? How should I integrate WSL into my work with Anaconda, Git, and VS Code? What advantages does it have or what disadvantages?\nHelp please, I hate not installing my tools properly and then having a mess of folders, environment variables, etc.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":96,"Q_Id":65569685,"Users Score":0,"Answer":"If you use conda it's better to install it directly on Windows rather than in WSL. Think of WSL as a virtual machine on your current PC, but much faster than you think.\nIts most useful role would be as an alternate base for Docker. You can run a whole lot of stuff with Windows integration from WSL, which includes VS Code. 
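The management-command-plus-scheduler approach above pairs with a crontab entry like the following; this is a config fragment, and the paths and the `scrape_site` command name are hypothetical:

```
# run the custom management command every day at 03:00
0 3 * * * /path/to/venv/bin/python /path/to/project/manage.py scrape_site
```

Using the virtualenv's python binary directly avoids having to activate the environment inside cron.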
You can launch VS Code as if it is run from within that OS, with all native extension and app support.\nYou can also access the entire Windows filesystem from WSL and vice versa, so integrating Git with it won't be a bad idea","Q_Score":0,"Tags":"python,anaconda,windows-subsystem-for-linux,wsl-2","A_Id":65582721,"CreationDate":"2021-01-04T20:54:00.000","Title":"Installations on WSL?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"So I am developing a bot using discord.py and I want to get all the permissions the bot has in a specific guild. I already have the Guild object but I don't know how to get the permissions the bot has. I already looked through the documentation but couldn't find anything in that direction...","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":111,"Q_Id":65571220,"Users Score":1,"Answer":"From a Member object, like guild.me (a Member object similar to Bot.user, essentially a Member object representing your bot), you can get the permissions that member has from the guild_permissions attribute.","Q_Score":0,"Tags":"python,discord.py","A_Id":65571368,"CreationDate":"2021-01-04T23:27:00.000","Title":"discord.py get all permissions a bot has","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I noticed that in my Anaconda Navigator, the system shows that Python 3.8.3 has been installed, but when I check the version in a Jupyter notebook, it's 3.7.6. I'm wondering, is there a way I can update to the latest Python in the notebook? 
Thanks:)","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":4403,"Q_Id":65572205,"Users Score":1,"Answer":"In the Jupyter Notebook, select KERNEL >> CHANGE KERNEL and choose the Conda virtual environment with Python 3.8.","Q_Score":2,"Tags":"python,jupyter-notebook,version,updates","A_Id":65572267,"CreationDate":"2021-01-05T01:53:00.000","Title":"How can I upgrade python to 3.8 in Jupyter Notebook?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is there a way to store the state of the python interpreter embedded in a C program (not the terminal interpreter or a notebook) and restore it later resuming execution were it left off?\nOther questions and answers I found about this topic evolved around saving the state of the interactive shell or a jupyter notebook or for debugging. However my goal is to freeze execution and restoring after a complete restart of the program.\nA library which achieves a similar goal for the Lua Language is called Pluto, however I don't know of any similar libraries or built-in ways to achieve the same in an embedded python interpreter.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":205,"Q_Id":65573489,"Users Score":1,"Answer":"No, there is absolutely no way of storing the entire state of the CPython interpreter as it is C code, other than dumping the entire memory of the C program to a file and resuming from that. It would however mean that you couldn't restart the C program independent of the Python program running in the embedded interpreter. Of course it is not what you would want.\nIt could be possible in a more limited case to pickle\/marshal some objects but not all objects are picklable - like open files etc. 
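The picklable\/non-picklable distinction in the interpreter-state answer above can be seen directly with a small stdlib sketch; plain data round-trips, while OS-level resources such as open files raise an error:

```python
import os
import pickle

# plain data structures serialize and restore cleanly
state = {"counter": 42, "items": [1, 2, 3]}
restored = pickle.loads(pickle.dumps(state))

# OS-level resources like open file handles cannot be pickled
f = open(os.devnull)
try:
    pickle.dumps(f)
    raised = False
except TypeError:
    raised = True
finally:
    f.close()
```

This is why a program that wants to be frozen and restored must cooperate: it has to keep its resumable state in picklable objects and re-open files, sockets, and similar resources itself after restoring.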
In the general case, the Python program must actively cooperate with freezing and restoring.","Q_Score":1,"Tags":"python,c,python-embedding","A_Id":65573874,"CreationDate":"2021-01-05T04:59:00.000","Title":"Save and restore python interpreter state","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"Hello, and sorry for this basic question since I am new to programming.\nI have a Sony FCB IP camera that works on the VISCA protocol. I tried socket programming in python to send VISCA commands but had no idea what I was doing.\nI need to access the PTZ port of my IP camera and start sending VISCA commands to it through my python code. How can I establish a connection between the local system and the camera and ensure that commands are sent through the TCP protocol?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":97,"Q_Id":65574483,"Users Score":0,"Answer":"You can ask for the .dll\/.so from the OEM, in case you are stuck. ;)","Q_Score":1,"Tags":"python","A_Id":65590956,"CreationDate":"2021-01-05T07:08:00.000","Title":"Can I access an IP Camera PTZ port through socket programming in Python and send hex bytes to it?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is there any way to save the navigable window showing the graph with plt.show()?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":32,"Q_Id":65576303,"Users Score":0,"Answer":"Yes. 
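For the VISCA-over-TCP question above, a minimal socket sketch: open a TCP connection to the camera's PTZ port and send the raw command bytes. The IP address and port here are placeholders, and the example byte sequence is an assumption based on commonly published VISCA framing (commands start with 0x8x and end with 0xFF); check the FCB manual for your specific model:

```python
import socket

def send_visca(host, port, payload, timeout=2.0):
    # open a TCP connection, send the raw VISCA bytes, return up to 16 reply bytes
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.sendall(payload)
        return sock.recv(16)

# hypothetical example command; 0x81 ... 0xFF is the standard VISCA frame shape
POWER_ON = bytes([0x81, 0x01, 0x04, 0x00, 0x02, 0xFF])
# reply = send_visca("192.168.0.100", 1259, POWER_ON)  # camera IP/port are placeholders
```

Some cameras expose VISCA over UDP rather than TCP, in which case `socket.SOCK_DGRAM` and `sendto` would be needed instead.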
You can use plt.savefig(\"graph.jpg\") to save your figures.","Q_Score":0,"Tags":"python,matplotlib","A_Id":65576417,"CreationDate":"2021-01-05T09:33:00.000","Title":"Is it possible to save the window obtained with plt.show() in a navigable format and not as a simple image?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Updating the question.\nI am developing a command-line tool for a framework.\nand I am struggling with how to detect if the current directory is a project of my framework.\nI have two solutions in mind.\n\nsome hidden file in the directory\ndetect the project structure by files and folders.\n\nWhat do you think is the best approach?\nThank you,\nShay\nThank you very much","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":117,"Q_Id":65582283,"Users Score":0,"Answer":"In my opinion, a good idea would be to either have a project directory structure that you can use a signature for the project\/framework, that you can use within the tool as a list of signature-like structures, for example\nPROJECT_STRUCTURE_SIGNATURES = [ \"custom_project\", \"custom_project\/tests\", \"custom_project\/build\", \"custom_project\/config\", \"config\/environments\" ] and then just check if any(signature in os.getcwd() for signature in PROJECT_STRUCTURE_SIGNATURES).\nif the project structure is not too complex, I suppose that would be a start in order to identify the requirements that you're looking for.\nHowever, if this is not the case, then I suppose a dictionary-like structure that you could use to traverse the key-value pairs similar to the project's file structure and check the current directory against those would be a better idea, where if none of the elements from the nested dictionary traversal matches, then the directory is not within the project 
structure.","Q_Score":0,"Tags":"python,architecture","A_Id":67074897,"CreationDate":"2021-01-05T15:58:00.000","Title":"Detect if folder content match file and folders pattern","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have been trying to deploy an app on Heroku, which works fine on my local but while deploying gives me the error AttributeError: module 'sklearn.utils.deprecation' has no attribute 'DeprecationDict'","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":162,"Q_Id":65582634,"Users Score":0,"Answer":"Solved: The problem arose because the sklearn version used to train the model was old, whereas the version used by Heroku to load the model was the newer one, so need to make sure the requirement.txt has the same version used to train the model.","Q_Score":0,"Tags":"python,pandas,machine-learning,scikit-learn","A_Id":65594165,"CreationDate":"2021-01-05T16:18:00.000","Title":"Heroku Deployment problem AttributeError: module 'sklearn.utils.deprecation' has no attribute 'DeprecationDict'","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am building my website which contains a python(.py) file, html, css and JS file. 
I want to know how I can run my python script in siteground from my hosting account so that it can scrape data from a site and output a JSON file to a Javascript file which can display it on the webpage.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":448,"Q_Id":65583183,"Users Score":0,"Answer":"I would use cron jobs to run jobs in the foreground","Q_Score":0,"Tags":"python,selenium","A_Id":65584141,"CreationDate":"2021-01-05T16:52:00.000","Title":"How can I run my Python Script in Siteground Hosting Server","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am currently using Airflow to run a DAG (say dag.py) which has a few tasks, and then, it has a python script to execute (done via bash_operator). The python script (say report.py) basically takes data from a cloud (s3) location as a dataframe, does a few transformations, and then sends them out as a report over email.\nBut the issue I'm having is that airflow is basically running this python script, report.py, every time Airflow scans the repository for changes (i.e. every 2 mins). So, the script is being run every 2 mins (and hence the email is being sent out every two minutes!).\nIs there any work around to this? Can we use something apart from a bash operator (bear in mind that we need to do a few dataframe transformations before sending out the report)?\nThanks!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":170,"Q_Id":65585318,"Users Score":0,"Answer":"Just make sure you do everything serious in the tasks, not in the python script. The script will be executed often by the scheduler, but it should simply create tasks and build dependencies between them. 
The actual work is done in the 'execute' methods of the tasks.\nFor example, rather than sending email in the script, you should add the 'EmailOperator' as a task with the right dependencies, so the execute method of the operator will be executed not when the file is parsed by the scheduler, but when all dependencies (other tasks) complete.","Q_Score":0,"Tags":"python,airflow","A_Id":65586460,"CreationDate":"2021-01-05T19:26:00.000","Title":"How do I stop Airflow from triggering my python scripts?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"Why does NumPy allow array[row_index, ] but array[, col_index] is not valid and gives Syntax Error. e.g. if I want to traverse the array row wise NumPy.array[row_index, :] and NumPy.array[row_index, ] both give the same answer whereas only NumPy.array[:, col_index] produces the result in the latter case. Is there any reason behind this that I'm missing?","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":81,"Q_Id":65585701,"Users Score":2,"Answer":"arr[idx,] is actually short for arr[(idx,)], passing a tuple to the __getitem__ method. In python a comma creates a tuple (in most circumstances). (1) is just 1, (1,) is a one element tuple, as is 1,.\narr[,idx] gives a syntax error. That's the interpreter complaining, not numpy.\narr[3], arr[3,] and arr[3,:] are all the same for a 2d array. Trailing : are added as needed. 
Leading : have to be explicit.","Q_Score":1,"Tags":"python,arrays,numpy","A_Id":65585869,"CreationDate":"2021-01-05T19:57:00.000","Title":"NumPy array row wise and column wise slicing syntax","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"In the same way that you would deploy a storage account or a upload a blob file from Python.\nI am essentially looking for the Python equivalent of the following bash commands\n\naz functionapp create --resource-group $RESOURCE_GROUP_NAME --os-type Linux --consumption-plan-location $AZ_LOCATION --runtime python --runtime-version 3.6 --functions-version 2 --name $APP_NAME --storage-account $STORAGE_ACCOUNT_NAME\n\nfunc new --name $FUNC_NAME --template \"Azure Queue Storage trigger\"\n\nfunc azure functionapp publish $APP_NAME --build-native-deps\n\n\nA cop-out would be to just have the Python script run the shell commands, but I am looking for a more elegant solution if one exists.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":389,"Q_Id":65590773,"Users Score":1,"Answer":"I am essentially looking for the Python equivalent of the following\nbash commands\n\nIf you check the python sdk of azure or check the REST API document of azure, you will find there is no Ready-made method.Basically, there are two situations to discuss:\nIf you want to deploy azure function to windows OS, then just use any python code to upload local function structure to related storage file share(The premise is that the function app has been created on azure.).\nBut if you want to deploy azure function to linux OS, then it will not be as simple as deploying to windows OS. 
It needs to perform additional packaging operations, and the specific logic may need to check the underlying implementation of azure cli.\n\nA cop-out would be to just have the Python script run the shell\ncommands, but I am looking for a more elegant solution if one exists.\n\nFor Linux-based azure function, I think you don\u2019t have to consider so much. It has many additional operations when deploy function app, so use command or run command by python to achieve this is more recommended(There is no ready-made python code or REST API that can do what you want.).","Q_Score":1,"Tags":"python,azure,azure-devops,azure-functions,azure-function-app","A_Id":65592524,"CreationDate":"2021-01-06T05:43:00.000","Title":"How can you deploy an Azure Function from within a Python script","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I downloaded a python script but I have a problem with the script, the problem is that it stops working but when I stop the program and rerun it it has a good feature to resume the process which was terminated last time and it continues the process for some time but again stops working. So,\nI want to create an another script which terminates the real python script and reruns it every 5 mins...\nbut when the real python script starts it asks if we want to continue the old terminated process and we have to enter 'y'...\nCan anyone help me with this and you can use any language to create the rerunning script. ThankYou","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":83,"Q_Id":65591563,"Users Score":0,"Answer":"ThankYou everybody for their contribution here, after reading all your answers I finally resolved the issue. 
Here's what I did:\n\nI first changed the real python script deleted the code which asked if i wanted to continue the terminated process, so now it simply checks if any session exists and if it does exist then it directly resumes the process.\nThen I created another Python program which simply reruns the real python file.\n\nOnce again Thank You everybody!","Q_Score":1,"Tags":"python,automation","A_Id":65592516,"CreationDate":"2021-01-06T07:09:00.000","Title":"How to create a script to automate another script?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am new to python. I have installed python as well as I have installed pycharm IDE [without this i am unable to open and run] as well.\nIt is showing in output in console. i need to develop one basic project run on browser from my own for practice.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":57,"Q_Id":65598020,"Users Score":0,"Answer":"you can run python online on Google Colab(it has an in-built GPU)\nor you could install the anaconda Python Distribution bundle,to use their Jupyter Notebook\nAlso Brython which supports python 3\nI hope I answered your question,if not let me know","Q_Score":0,"Tags":"python","A_Id":65598190,"CreationDate":"2021-01-06T14:51:00.000","Title":"How to run python in web browser, planning to develop one basic page using python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I see two sentences:\n\ntotal amortized cost of a sequence of operations must be an upper\nbound on the total actual cost of the sequence\n\n\nWhen assigning amortized costs to operations on a data structure, you\nneed 
to ensure that, for any sequence of operations performed, that\nthe sum of the amortized costs is always at least as big as the sum of\nthe actual costs of those operations.\n\nmy challenge is two things:\nA) both of them meaning: amortized cost >= Real Cost of operation? I think amortized is (n* real cost).\nB) is there any example to more clear me to understand? a real and short example?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":98,"Q_Id":65599170,"Users Score":1,"Answer":"The problem that amortization solves is that common operations may trigger occasional slow ones. Therefore if we add up the worst cases, we are effectively looking at how the program would perform if garbage collection is always running and every data structure had to be moved in memory every time. But if we ignore the worst cases, we are effectively ignoring that garbage collection sometimes does run, and large lists sometimes do run out of allocated space and have to be moved to a bigger bucket.\nWe solve this by gradually writing off occasional big operations over time. We write it off as soon as we realize that it may be needed some day. Which means that the amortized cost is usually bigger than the real cost, because it includes that future work, but occasionally the real cost is way bigger than the amortized. And, on average, they come out to around the same.\nThe standard example people start with is a list implementation where we allocate 2x the space we currently need, and then reallocate and move it if we use up space. When I run foo.append(...) in this implementation, usually I just insert. But occasionally I have to copy the whole large list. However if I just copied and the list had n items, after I append n times I will need to copy 2n items to a bigger space. Therefore my amortized analysis of what it costs to append includes the cost of an insert and moving 2 items. 
And over the next n times I call append, my estimate exceeds the real cost n-1 times and is less the nth time, but averages out exactly right.\n(Python's real list implementation works like this except that the new list is around 9\/8 the size of the old one.)","Q_Score":1,"Tags":"python,algorithm,math,data-structures,time-complexity","A_Id":65602274,"CreationDate":"2021-01-06T16:04:00.000","Title":"amortized analysis and one basic question (is there any simple example)?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"can I run 64-bit Python3 on Raspberry Pi 4B with 64-bit Raspbian OS aarch64?\nIf it is not possible, is installing 64-bit Debian\/etc. on RPi4 worth it? For example in performance, ...\nThank you.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1109,"Q_Id":65599461,"Users Score":1,"Answer":"Yes of course you can. It is pre-installed.","Q_Score":0,"Tags":"python-3.x,raspberry-pi,64-bit,raspberry-pi4","A_Id":65647471,"CreationDate":"2021-01-06T16:22:00.000","Title":"Raspbian OS run Python3 64-bit","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to build an iOS application using Kivy, is it possible to pack this application using Windows machine?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":886,"Q_Id":65600039,"Users Score":0,"Answer":"I'm afraid not. 
It might be possible using a macOS virtual machine under Windows, but of course Apple make this difficult and probably forbid it.\nIf there are web services providing app build infrastructure then it may be possible to use them.","Q_Score":2,"Tags":"python,ios,windows,kivy","A_Id":65601890,"CreationDate":"2021-01-06T16:58:00.000","Title":"Can I develop Kivy application for iOS on Windows?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"According to the documentation\nAWS_S3_MAX_MEMORY_SIZE(optional; default is 0 do not roll over)\nThe maximum amount of memory (in bytes) a file can take up before being rolled over into a temporary file on disk.\nCan someone explain this a bit more? Is this a way I could throttle upload sizes? What does \"being rolled over\" refer to?\nThank you","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":151,"Q_Id":65602282,"Users Score":2,"Answer":"System memory is considered limited, while disk space is usually not (practically speaking). Storing a file in memory is a trade-off where you get better speed of access, but use up more of your memory to do so. Say you have a large file of 1GB, is it really worth using up so much memory just to access that file faster? Maybe if you have a lot of memory and that file is accessed very frequently, but maybe not. That is why there are configurable limits like this. 
At some point, the trade-off is not worth it.\n\"Rolling over\" would refer to when the in-memory file has gone over the set limit, and then gets moved into file-on-disk.","Q_Score":0,"Tags":"python,django,amazon-s3","A_Id":65602604,"CreationDate":"2021-01-06T19:33:00.000","Title":"What does AWS_S3_MAX_MEMORY_SIZE do in django-storages","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am trying execute Python script from RDS SQL Server 15 version but I didn't find any documentation around this in AWS Will it be possible to do this?","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":253,"Q_Id":65602999,"Users Score":2,"Answer":"Unfortunately that is not possible as of now. RDS for SQL Server is just Relational Database Service and it does not allow you to execute any program on the RDS instance, except for T-SQL programmability stored within your SQL Server database (triggers, stored procedures, etc).","Q_Score":1,"Tags":"python,boto3,amazon-rds","A_Id":65636688,"CreationDate":"2021-01-06T20:32:00.000","Title":"Does RDS SQL Server support running python script?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm making a python discord bot, and I don't want the bot to care about the command being in lower\/upper case.\nHow can I do that without aliases=['clear', 'Clear']?","AnswerCount":2,"Available Count":1,"Score":0.2913126125,"is_accepted":false,"ViewCount":314,"Q_Id":65605569,"Users Score":3,"Answer":"Use the lower method. 
\"Clear\".lower() returns \"clear\".","Q_Score":0,"Tags":"python,discord,discord.py,discord.py-rewrite","A_Id":65605599,"CreationDate":"2021-01-07T01:42:00.000","Title":"How to make discord.py bot case-insensitive?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Let's say you have some proprietary python + selenium script that needs to run daily. If you host them on AWS, Google cloud, Azure, etc. are they allowed to see your script ? What is the best practice to \"hide\" such script when hosted online ?\nAny way to \"obfuscate\" the logic, such as converting python script to binary ?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":60,"Q_Id":65607541,"Users Score":4,"Answer":"Can the cloud vendors access your script\/source code\/program\/data?\n\nI am not including government\/legal subpoenas in this answer.\nThey own the infrastructure. They govern access. They control security.\nHowever, in the real world there are numerous firewalls in place with auditing, logging and governance. A cloud vendor employee would risk termination and\/or prison time for bypassing these controls.\nSecrets (or rumors) are never secret for long and the valuation of AWS, Google, etc. would vaporize if they violated customer trust.\nTherefore the answer is yes, it is possible but extremely unlikely. 
Professionally, I trust the cloud vendors with the same respect I give my bank.","Q_Score":0,"Tags":"python,amazon-web-services,selenium,google-cloud-platform,hosting","A_Id":65607689,"CreationDate":"2021-01-07T06:20:00.000","Title":"How do cloud services have access to your hosted script?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"In my django app, i have user,employee,admin modules.\ndo I can create seperate views.py files for user,employee and admin\neg:\ni have folder called \"AdminFolder\" and I have added seperate adminView.py in it\nis it allowed in django?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":84,"Q_Id":65607902,"Users Score":0,"Answer":"Yes, you can. You may create some folders for separating views files and just show the path for files","Q_Score":0,"Tags":"python,django","A_Id":65608556,"CreationDate":"2021-01-07T06:59:00.000","Title":"can we create separate view files in django app?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am working on a project that need me to generate a real time line chart on a separate display.\nI'm able to have the real time line chart working on my laptop now but I also want the separate display to show only the line chart (the chart window only), I do not want to make the display as the duplicate of my laptop screen. Is there a way for me to do send the chart through HDMI using Python? Is there any library\/ function that would be helpful? 
If Python won't work, is there any other tool that could be helpful for my case?\nFeel free to let me know if you have any question regarding this scenario, any help is appreciated. :)","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":213,"Q_Id":65608004,"Users Score":0,"Answer":"You can generate a graph with matplotlib. Save it to a specific location with some name.\nThen use PyGame\/ Tkinter\/ PyQt5 to make a \"window\" to display the image.\nThen extend your display. The projector display will have the PyGame\/PyQt \"window\" in full-screen.\nI think this is pretty much what you want.","Q_Score":2,"Tags":"python,python-3.x,matplotlib,display,hdmi","A_Id":65608209,"CreationDate":"2021-01-07T07:08:00.000","Title":"How to send Python line graph output through HDMI to another display?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm studying python and there's a lab I can't seem to crack. We have a line e.g. shacnidw, that has to be transformed to sandwich. I somehow need to iterate with a for loop and pick the letters with odd indexes first, followed by backward even indexes. Like pick a letter with index 1,3,5,7,8,6,4,2.\nIt looks pretty obvious to use a list or slices, but we aren't allowed to use these functions yet. I guess the question is just how do I do it?","AnswerCount":3,"Available Count":1,"Score":0.1325487884,"is_accepted":false,"ViewCount":280,"Q_Id":65608114,"Users Score":2,"Answer":"Programming is all about decomposing complex problems into simpler ones. 
Try breaking it down into smaller steps.\n\nFirst, can you generate the numbers 1,3,5,7 in a for loop?\nNext, can you generate 8,6,4,2 in a second loop?\n\nTackling those two steps ought to get you on the right track.","Q_Score":2,"Tags":"python,string,for-loop","A_Id":65608246,"CreationDate":"2021-01-07T07:21:00.000","Title":"Iterate through a string forwards and backwards, extracting alternating characters","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am building a Python Daemon app to download files which are accessible to an individual O365 user via Graph API. I am trying to use ConfidentialClientApplication class in MSAL for authorization.\nIn my understanding - this expects \u201cApplication Permissions\u201d (the API permission in Azure AD) and not \u201cDelegated permissions\u201d for which, admin has to consent Files.Read.All.\nSo the questions I have are:\n\nDoes this mean, my app will have access to all the files in the organization after the admin consent?\nHow do I limit access to a Daemon app to the files which only an individual user (my O365 user\/UPN) has access to?\nShould I be rather be using a different auth flow where a user consent be also part of the flow: such as on-behalf-of (or) interactive (or) username password?\n\nThanks!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":317,"Q_Id":65608215,"Users Score":4,"Answer":"Does this mean, my app will have access to all the files in the organization after the admin consent?\nYes, it is the downside of application permissions usually.\nHow do I limit access to a Daemon app to the files which only an individual user (my O365 user\/UPN) has access to?\nI'm pretty sure you can't limit a daemon app's OneDrive access. 
You can for example limit Exchange access for a daemon app.\nShould I be rather be using a different auth flow where a user consent be also part of the flow: such as on-behalf-of (or) interactive (or) username password?\nIt would certainly allow you to limit the access to a specific user. In general I recommend that you do not use username+password (ROPC); it won't work any way if your account has e.g. MFA. The more secure approach would be that you need to initialize the daemon app once with Authorization Code flow. This gives your app a refresh token that it can then use to get an access token for the user when needed (and a new refresh token). Note it is possible for refresh tokens to expire, in which case the user needs to initialize the app again.","Q_Score":3,"Tags":"python,azure,microsoft-graph-api,msal","A_Id":65608304,"CreationDate":"2021-01-07T07:30:00.000","Title":"Microsoft Graph API: Limiting MSAL Python Daemon app to individual user access","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"The following script in crontab is not executed, but it can be executed in the terminal\nCommand: * * * * * \/usr\/bin\/php artisan edit > \/dev\/null 2>&1\nError: [Errno 2] No such file or directory: 'ffprobe\\","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":25,"Q_Id":65608236,"Users Score":0,"Answer":"I think this is not a crontab issue, it says 'ffprobe' as No such file or directory.\nIn your PHP code if you are using 'ffprobe' directory, try giving this as absolute path and not a relative one. I mean the full path and not partial one. 
Say for example something like \/home\/myuser\/phpcodes\/ffprobe\/ and not just ffprobe.\nPlease try and let me know if this helps.","Q_Score":0,"Tags":"python,macos,cron","A_Id":65608818,"CreationDate":"2021-01-07T07:32:00.000","Title":"The following crontab does not run, but it can be run in the terminal","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a collection of text documents. I've been asked to show each document in tf-idf vector space and in ntc form and then, train a svm model based on documents' vectors in python. What does ntc exactly mean here?\nI Found that it's the same as tf-idf weights with one step of normalization which is called \"cosine normalization\". But i can't find information about such thing. I found \"cosine similarity\" which is in my idea different from \"cosine normalization\". Are they the same? And how can i create this vector in python?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":119,"Q_Id":65613869,"Users Score":0,"Answer":"I suggest the sklearn.feature_extraction.text.TfidfVectorizer,\nscikit learn is a bib in python used for training machine learning model,\nit is easy and very useful,","Q_Score":0,"Tags":"python,nlp,tf-idf","A_Id":65667335,"CreationDate":"2021-01-07T14:07:00.000","Title":"What's exactly ntc form in tf-idf vector space?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using the Selenium standalone server for a remote web driver. One thing I am trying to figure out is how to start\/stop it effectively. 
On their documentation, it says\n\"the caller is expected to terminate each session properly, calling either Selenium#stop() or WebDriver#quit.\"\nWhat I am trying to do is figure out how to programmatically close the server, but is that even necessary? In other words, would it be okay to have the server up and running at all times, but to just close the session after each use with something like driver.quit()? Therefore when I'm not using it the server would be up but there would be no sessions.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":216,"Q_Id":65614348,"Users Score":0,"Answer":"you were right. Use seleniums driver.quit() as it properly closes all browser windows and ends driver's session\/process. Especially the latter is what you want, because you most certainly run the script headless.\nI have a selenium script running on as Raspberry Pi (hourly cron job) headless. That script calls driver.quit() at the end of each iteration. When i do a -ps A (to list al active processes under unix), no active selenium\/python processes are shown anymore.\nHope that satisfies your question!","Q_Score":0,"Tags":"python,selenium,selenium-webdriver","A_Id":65616980,"CreationDate":"2021-01-07T14:34:00.000","Title":"Properly starting\/stopping Selenium standalone server","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is there a way to read the pixel colours values of an image without using any external libraries?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":213,"Q_Id":65615069,"Users Score":0,"Answer":"Yes, there is a way. You could read the bytes of the binary file and parse its bytes according to the image format you are facing. 
This is very time consumptive and error-prone and due to this, I would not recommend doing it.","Q_Score":0,"Tags":"python,image-processing","A_Id":65615121,"CreationDate":"2021-01-07T15:18:00.000","Title":"Python image cropping without any external library","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"How can I determine the convexity of an incomplete (i.e. non-watertight) trimesh? Convexity is well-defined even for incomplete meshes: Iff the incomplete trimesh T can be completed to a convex trimesh, T is convex. (I use the term \"incomplete\" over \"non-watertight\" because a. in my case it is a subset of an existing trimesh and b. because \"non-watertight\" includes cases of small errors\/floating point inaccuracies, while I'm talking about missing patches of triangles, so I think \"incomplete\" is more evocative.)\nI've tried to simply use Trimesh.is_convex, but that always return False for incomplete trimeshes. I thought about determining the convex hull of the point set and to check if it has the same number of points then the incomplete trimesh, but that is too lax. (As a 2D example, think of the letter Z: It's an incomplete polygon, which is non-convex by the above definition. But the points of the convex hull are identical to the original points.)\nI'm thinking about the approach to calculate hashes for all triangles in the incomplete trimesh and for all triangles in its convex hull and then check for inclusion. 
But is there a conceptionally simpler\/ more efficient way?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":123,"Q_Id":65615298,"Users Score":0,"Answer":"One solution - not particularly fast, not particularly elegant - is to determine the convex hull of the incomplete mesh T and then calculate the distance of all vertices of T and of all face centroids of T to the convex hull. If all distances are (within floating point accuracy range of) 0, T is convex.","Q_Score":1,"Tags":"python,trimesh","A_Id":65627864,"CreationDate":"2021-01-07T15:33:00.000","Title":"How to determine convexity of an incomplete trimesh?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to connect to my bitbucket using API token generated in Bitbucket but the connection is returning HTTP 401 error when using Python requests module.\nI need help to facilitate the completion of a task.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":233,"Q_Id":65616081,"Users Score":0,"Answer":"make sure that you use basic authentication and then set your username and use the personal token as password.\nYour username still required since the token associated with it","Q_Score":0,"Tags":"python,api,python-requests,bitbucket","A_Id":65616976,"CreationDate":"2021-01-07T16:17:00.000","Title":"Bitbucket API connection using token","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a simple GitLab CI pipeline that originally only required Python (>=3.5). 
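The convexity answer above can be illustrated in 2D, where the same idea applies: an incomplete outline is convex iff every vertex *and* every segment midpoint lies on the boundary of the convex hull of the vertices (midpoints play the role of the face centroids in the 3D trimesh version). Everything below is an illustrative sketch, not the trimesh API:

```python
import math

def _cross(o, a, b):
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    hull = []
    for sweep in (pts, pts[::-1]):          # lower chain, then upper chain
        chain = []
        for p in sweep:
            while len(chain) >= 2 and _cross(chain[-2], chain[-1], p) <= 0:
                chain.pop()
            chain.append(p)
        hull += chain[:-1]
    return hull

def _dist_point_segment(p, a, b):
    dx, dy = b[0] - a[0], b[1] - a[1]
    if dx == 0 and dy == 0:
        return math.hypot(p[0] - a[0], p[1] - a[1])
    t = max(0.0, min(1.0, ((p[0] - a[0]) * dx + (p[1] - a[1]) * dy) / (dx * dx + dy * dy)))
    return math.hypot(p[0] - (a[0] + t * dx), p[1] - (a[1] + t * dy))

def _dist_to_boundary(p, hull):
    k = len(hull)
    return min(_dist_point_segment(p, hull[i], hull[(i + 1) % k]) for i in range(k))

def is_convex_incomplete(segments, tol=1e-9):
    """segments: list of ((x1, y1), (x2, y2)) edges of an incomplete outline."""
    verts = {v for seg in segments for v in seg}
    hull = convex_hull(verts)
    mids = [((a[0] + b[0]) / 2, (a[1] + b[1]) / 2) for a, b in segments]
    return all(_dist_to_boundary(p, hull) <= tol for p in list(verts) + mids)
```

The letter-Z example from the question now comes out non-convex, because the midpoint of the diagonal stroke lies strictly inside the hull even though all four vertices are on it.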
The code I've been working on is just to test a prototype, so after realising there are some R libraries that already implement some of the required steps, I began using rpy2 which means that now the CI requires both Python and R.\nFrom what I understand, GitLab CI takes the images from Docker Hub but I was unable to find an image just containing both languages (in the versions that I require). I guess my option is to use the image of an OS and run the commands to install both languages in the pipeline, but that may require a lot of space, is there a simple easy way to achieve what I want?","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":89,"Q_Id":65616421,"Users Score":1,"Answer":"Use echo \"Hello world\" to run the containers.","Q_Score":0,"Tags":"python,r,docker,gitlab","A_Id":65616765,"CreationDate":"2021-01-07T16:39:00.000","Title":"How to use Python and R in GitLab CI?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a simple GitLab CI pipeline that originally only required Python (>=3.5). The code I've been working on is just to test a prototype, so after realising there are some R libraries that already implement some of the required steps, I began using rpy2 which means that now the CI requires both Python and R.\nFrom what I understand, GitLab CI takes the images from Docker Hub but I was unable to find an image just containing both languages (in the versions that I require). 
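Both answers to the GitLab CI question amount to the same recipe: start from an image that already has one of the two languages and install the other in `before_script`. A sketch of such a `.gitlab-ci.yml`, where the image tag, package names, and entry-point script are assumptions to adapt:

```yaml
# .gitlab-ci.yml sketch -- image tag, package names, and script are assumptions
test:
  image: python:3.8-slim            # Python comes from the base image
  before_script:
    - apt-get update && apt-get install -y --no-install-recommends r-base  # add R
    - pip install rpy2
  script:
    - python run_prototype.py       # hypothetical entry point
```

The `apt-get` layer is re-run on every pipeline, so if speed matters it is usually better to bake a small custom image with both languages and push it to a registry the CI can pull from.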
I guess my option is to use the image of an OS and run the commands to install both languages in the pipeline, but that may require a lot of space, is there a simple easy way to achieve what I want?","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":89,"Q_Id":65616421,"Users Score":0,"Answer":"What I did was to use the image 'rpy2\/rpy2' which contains R and Python 2, and run apt install python3. Probably not the best option, but it works.","Q_Score":0,"Tags":"python,r,docker,gitlab","A_Id":65616700,"CreationDate":"2021-01-07T16:39:00.000","Title":"How to use Python and R in GitLab CI?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I use Jupyter Lab via pipenv installation. So I have to run it inside a pipenv virtual environment.\nI have a directory dir with two subdictories my_modules and notebooks. Inside notebooks I organize my ipynb's. Inside my_modules I put some .py code I usually use in the notebooks.\nIt seems that the default working directory of a notebook is the directory where it is located. I was wondering... is there's a way to set the default working directory of the notebooks (of this particular installation of Jupyter Lab) to be the dir directory?\nIf not, is there a way that I could relative import the modules from my_modules from ipynb's inside notebooks without resorting to cell code such as %cd .. 
or import sys; sys.path.append('some\/dir') at the start of each notebook","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":237,"Q_Id":65618163,"Users Score":1,"Answer":"You can change the working directory of a notebook during runtime by using os.chdir('\/some\/directory\/').\nTo relatively import modules without sys.path.append(), you can set an environment variable PYTHONPATH which contains the directory of your modules (you might need to restart Jupyter Lab for this to take effect). On Windows, you can set an environment variable by Start -> type \"Edit environment variables for your account\".","Q_Score":0,"Tags":"python,jupyter-notebook,python-module","A_Id":65618692,"CreationDate":"2021-01-07T18:40:00.000","Title":"Import modules in jupyterlab project","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I follow the Gilbert Tunner tutorial to Object Detection using Tensorflow 2, but I have this error during the training with model_main_ft2.py:\nTraceback (most recent call last):\n\nFile \"model_main_tf2.py\", line 113, in \ntf.compat.v1.app.run() File \"C:\\Users\\anaconda3\\envs\\tensorflow2\\lib\\site-packages\\tensorflow\\python\\platform\\app.py\",\nline 40, in run\n_run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef) File\n\"C:\\Users\\anaconda3\\envs\\tensorflow2\\lib\\site-packages\\absl\\app.py\",\nline 300, in run\n_run_main(main, args) File \"C:\\Users\\anaconda3\\envs\\tensorflow2\\lib\\site-packages\\absl\\app.py\",\nline 251, in _run_main\nsys.exit(main(argv)) File \"model_main_tf2.py\", line 110, in main\nrecord_summaries=FLAGS.record_summaries) File \"C:\\Users\\anaconda3\\envs\\tensorflow2\\lib\\site-packages\\object_detection\\model_lib_v2.py\",\nline 566, in train_loop\nunpad_groundtruth_tensors) File 
\"C:\\Users\\anaconda3\\envs\\tensorflow2\\lib\\site-packages\\object_detection\\model_lib_v2.py\",\nline 339, in load_fine_tune_checkpoint\nif not is_object_based_checkpoint(checkpoint_path): File \"C:\\Users\\anaconda3\\envs\\tensorflow2\\lib\\site-packages\\object_detection\\model_lib_v2.py\",\nline 302, in is_object_based_checkpoint\nvar_names = [var[0] for var in tf.train.list_variables(checkpoint_path)] File\n\"C:\\Users\\anaconda3\\envs\\tensorflow2\\lib\\site-packages\\tensorflow\\python\\training\\checkpoint_utils.py\",\nline 112, in list_variables\nreader = load_checkpoint(ckpt_dir_or_file) File \"C:\\Users\\anaconda3\\envs\\tensorflow2\\lib\\site-packages\\tensorflow\\python\\training\\checkpoint_utils.py\",\nline 67, in load_checkpoint\nreturn py_checkpoint_reader.NewCheckpointReader(filename) File \"C:\\Users\\anaconda3\\envs\\tensorflow2\\lib\\site-packages\\tensorflow\\python\\training\\py_checkpoint_reader.py\",\nline 99, in NewCheckpointReader\nerror_translator(e) File \"C:\\Users\\anaconda3\\envs\\tensorflow2\\lib\\site-packages\\tensorflow\\python\\training\\py_checkpoint_reader.py\",\nline 35, in error_translator\nraise errors_impl.NotFoundError(None, None, error_message) tensorflow.python.framework.errors_impl.NotFoundError: Unsuccessful\nTensorSliceReader constructor: Failed to find any matching files for\nC:\/Users\/Desktop\/Tutorial\/models\/research\/object_detection\/efficientdet_d0_coco17_tpu-32\/chechpoint\/ckpt-0\n\nI've create efficientdet_d0_coco17_tpu-32 folder inside object detection folder, downloading and unzipping my model. 
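Returning to the JupyterLab question above: besides setting `PYTHONPATH`, the project root can be added to `sys.path` once per session, which makes `import my_modules` work from any notebook under `notebooks/`. A small helper (directory names follow the question's layout):

```python
import sys
from pathlib import Path

def add_project_root(root):
    """Make packages under `root` importable (runtime alternative to PYTHONPATH)."""
    root = str(Path(root).resolve())
    if root not in sys.path:
        sys.path.insert(0, root)
    return root
```

A notebook in `dir/notebooks/` could call `add_project_root("..")` in its first cell; unlike `%cd ..`, this leaves the working directory untouched.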
I've already modify the model inside training folder adding a checkpoint PATH.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":235,"Q_Id":65618919,"Users Score":0,"Answer":"From comments\n\nThe error was an image with size over the efficentdetd0 size request (paraphrased from dons21)","Q_Score":0,"Tags":"python,tensorflow,object-detection,tensorflow2","A_Id":66354328,"CreationDate":"2021-01-07T19:41:00.000","Title":"Tensorflow2.4, model_main_tf2.py, chechpoint problem during training","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"In my code I need to ask the user to input items of a shopping list and then sort that list into descending order by price.\nFor example if the user where to enter\n\n: Butter 1.70, Coffee 4.99, Milk 0.45,\nKitchen Towel 1.75, Washing powder 6.20\n\nThe output should be:\n\nWashing powder 6.20, Coffee 4.99, Kitchen Towel\n1.75, Butter 1.70, Milk 0.45\n\nHowever I am completely stuck on how to actually do this. Any help is welcome, thanks in advance.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":43,"Q_Id":65620287,"Users Score":0,"Answer":"Here's some high level steps I'd start with, lmk if you need help with any particular step.\n\nsplit up list with something like input.split(',').strip()\n\nFor each element, split by space item.split(' ')\n\nAssume the number is the last 'word' of each element. 
Convert that word to a number.\n\nDo the sort.","Q_Score":0,"Tags":"python","A_Id":65620342,"CreationDate":"2021-01-07T21:37:00.000","Title":"Need help sorting a list with letters and numbers in each entry","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I tried to crawl a lot of data in Python, but I wondered if the item object would run out of memory during data persistence as the amount of data grew","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":71,"Q_Id":65622275,"Users Score":0,"Answer":"Item objects are not kept in memory forever, as data is persisted they are removed from memory, so unless each item object is extremely big (unlikely), a large number of item objects should never be a problem.\nYou are more likely to run out of disk, depending on the amount of data you are extracting. Make sure you have enough disk wherever you are persisting the extracted data.","Q_Score":0,"Tags":"python,scrapy","A_Id":66309899,"CreationDate":"2021-01-08T01:34:00.000","Title":"If you use the Scrapy framework to crawl large amounts of data, the item object will run out of memory\uff1f","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I tried to crawl a lot of data in Python, but I wondered if the item object would run out of memory during data persistence as the amount of data grew","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":71,"Q_Id":65622275,"Users Score":0,"Answer":"Yes it probably will. I've done large jobs and eventually it crashed. The RAM memory limit on your system will depend though. 
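The shopping-list steps above can be sketched as one small function. Note one fix to the outline: `.strip()` must be applied to each piece, not to the list that `split(',')` returns (a list has no `strip` method); and `rpartition(' ')` cleanly splits off the last word as the price even for multi-word names like "Kitchen Towel":

```python
def sort_shopping_list(raw):
    """Parse 'Name price, Name price, ...' and sort by price, descending."""
    items = []
    for part in raw.split(','):
        part = part.strip()                    # strip each piece, not the list
        if not part:
            continue
        name, _, price = part.rpartition(' ')  # price is the last 'word'
        items.append((name, float(price)))
    return sorted(items, key=lambda item: item[1], reverse=True)
```

Printing is then just `", ".join(f"{name} {price:.2f}" for name, price in result)`.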
Maybe you can solve it by saving the items into textfiles or similar and then delete the items from memory.\nWhen I need to to large scraping jobs I will usually just divide it into smaller jobs. If you're scraping the same website, like example.com?page={page_number} you can limit to say 10 000 pages and when you run the spider the second time you continue on page 10 000. You'll need to save down what page you're currently on though, in a text file through and API.","Q_Score":0,"Tags":"python,scrapy","A_Id":65625931,"CreationDate":"2021-01-08T01:34:00.000","Title":"If you use the Scrapy framework to crawl large amounts of data, the item object will run out of memory\uff1f","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I tried installing pygame and win10toast modules using pip and they got installed in the python directory\/lib\/site-packages. But in any IDE like PyCharm or VScode non of them were detected and I got ModuleNotFoundError. But when I used easy_install, the modules got installed under anaconda3\/scripts and the error was gone too.\nIs it because of some issues in my path variable? Or maybe the IDEs are not considering the python directory to search for modules?\nHow to resolve this as easy_install says that it will be removed in future versions?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":88,"Q_Id":65623520,"Users Score":1,"Answer":"In Pycharm (and I assume it's the same for VScode), you would select a Python interpreter either when you create a new project or through the settings menu afterwards.\nThe packages that you want to use should be installed in the path that belongs to that interpreter or in an environment that uses that interpreter (e.g. conda environment or virtual environment). 
If you have chosen to use a Python version installed in an Anaconda environment as your project interpreter, then your IDE is not going to look for anything in another Python path (unless it was instructed to do so).\nSo if you want to install new packages, make sure you have the right environment activated beforehand and then use pip or conda, depending on the package manager that you want to use.\nFor your situation (you seem to use the base conda environment), you can install packages by opening the anaconda prompt, make sure that there is a (base) before the prompt, and type conda install if the package exists on conda and if not you can try pip install ","Q_Score":1,"Tags":"python,pip,easy-install,anaconda3","A_Id":65623710,"CreationDate":"2021-01-08T04:36:00.000","Title":"Why are modules installed in python directory not recognised but those under anaconda3 are?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I recently did a project where I have to get the following input from the user:\nDate in DD form, Month in MM form, Year in YY form, Century (19, 20 0r 21) and the date format(MM-DD-YYYY, DD-MM-YYYY, YYYY-MM-DD).\nAnd I have to print the date in the format chosen by the user... Further I have to ask the user the number of days to be added to the date that was printed previously and then add the no. of days and then print it and if the user should be asked if he wants to continue adding days, if they choose yes.. again they'll be asked the no. of days to be added, if they select no then program ends there....\nNow I have made a HTML files for that... The problem is I don't know how to combine the Python and HTML files... 
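A common cause of the "installed but not found" symptom above is that the `pip` on the PATH belongs to a different interpreter than the one the IDE uses. Invoking pip *through* the target interpreter removes the ambiguity. The helper below only builds the command (`pygame` is just the example package from the question); running it is one `subprocess.check_call` away:

```python
import sys

def pip_install_cmd(package):
    """Build a pip command guaranteed to target *this* interpreter's environment."""
    return [sys.executable, "-m", "pip", "install", package]

# To actually run it:
# import subprocess
# subprocess.check_call(pip_install_cmd("pygame"))
```

Run from the interpreter selected in PyCharm/VScode, this installs into exactly the site-packages that the IDE indexes.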
Please help me in this regard.","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":1565,"Q_Id":65625385,"Users Score":1,"Answer":"The keywords you should be looking are a web framework to host your application such as Flask, Django, and a template language to combine python and HTML to use it via these frameworks, such as Jinja2 or Django's own template language. I suggest Flask with Jinja2 since it's a micro framework and easy to start with.","Q_Score":1,"Tags":"python,html","A_Id":65625476,"CreationDate":"2021-01-08T08:09:00.000","Title":"How can I combine a HTML and Python File?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a pdf file with tables in it and would like to read it as a dataframe using tabula. But only the first PDF page has column header. The headers of dataframes after page 1 becomes the first row on information. Is there any way that I can add the header from page 1 dataframe to the rest of the dataframes? Thanks in advance. 
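Before wiring anything into Flask templates, the date logic in the question above is plain standard-library Python: a mapping from the user-facing format names to `strftime` patterns, plus `timedelta` for the add-days loop (format names taken from the question):

```python
from datetime import date, timedelta

# User-selectable formats from the question, mapped to strftime patterns.
FORMATS = {
    "MM-DD-YYYY": "%m-%d-%Y",
    "DD-MM-YYYY": "%d-%m-%Y",
    "YYYY-MM-DD": "%Y-%m-%d",
}

def format_date(d, fmt_name):
    """Render a date in the user-chosen format."""
    return d.strftime(FORMATS[fmt_name])

def add_days(d, n):
    """Add n days; month and year rollover are handled by timedelta."""
    return d + timedelta(days=n)
```

A Flask view would then just collect `DD`, `MM`, `YY`, the century, and the format name from the HTML form and call these two functions.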
Much appreciated!","AnswerCount":1,"Available Count":1,"Score":0.537049567,"is_accepted":false,"ViewCount":4066,"Q_Id":65626278,"Users Score":3,"Answer":"One can solve this by following steps:\n\nRead the PDF:\ntables = tabula.read_pdf(filename, pages='all', pandas_options={'header': None})\n\n\nThis will create a list of dataframes, having pages as dataframe in the list.\n\npandas_options={'header': None} is used not to take first row as header in the dataframe.\n\nSo, the header of the first page will be first row of dataframe in tables list.\n\nSaving header in a variable:\ncols = tables[0].values.tolist()[0]\n\n\nThis will create a list named cols, having first row of first df in tables list which is our header.\n\nRemoving first row of first page:\ntables[0] = tables[0].iloc[1:]\n\n\nThis line will remove first row of first df(page) in tables list, as we have already stored in a variable we do not need it anymore.\n\nGiving header to all the pages:\nfor df in tables:\ndf.columns = cols\n\n\nThis loop will iterate through every dfs(pages) and give them the header we stored in cols variable.\nSo the header from page 1 dataframe will be given to the rest of the dataframes(pages).\nYou can also concat it in one dataframe with\n\nimport pandas as pd\n\nand:\n\ndf_Final = pd.concat(tables)\n\nHope this helps you, thanks for this oppurtunity.","Q_Score":1,"Tags":"python,dataframe,pdf","A_Id":65697278,"CreationDate":"2021-01-08T09:17:00.000","Title":"Using tabula.py to read table without header from PDF format","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am getting vehicle detections from YOLO with a video input. And I got some missing bounding box outputs for some frames. 
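The tabula steps above condense into one function. Only pandas is needed here; tabula appears solely in the (not executed) `read_pdf` call that produces the input list:

```python
import pandas as pd

def merge_pdf_tables(tables):
    """Give every per-page table the header found on page 1, then concatenate.

    `tables` is what tabula.read_pdf(filename, pages='all',
    pandas_options={'header': None}) returns: one headerless DataFrame per page.
    """
    cols = tables[0].iloc[0].tolist()              # real header is row 0 of page 1
    pages = [tables[0].iloc[1:]] + list(tables[1:])
    fixed = []
    for page in pages:
        page = page.copy()                         # don't mutate the caller's frames
        page.columns = cols
        fixed.append(page)
    return pd.concat(fixed, ignore_index=True)
```
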
How can I get bounding box estimations for those missing frames using other known bounding boxes and centers of that sequence just by using coordinate geometry without using any advance tracking method?\nThanks!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":150,"Q_Id":65627675,"Users Score":2,"Answer":"One way would be to just interpolate between adjacent frames. When a bounding box is missing in a frame but is present in adjacent frames, you can take the coordinates of the bounding boxes and calculate the average. If you want to make it more eleborate you can even include more frames and add a weight to weigh closer frames heavier.","Q_Score":1,"Tags":"python,object-detection,yolo,bounding-box","A_Id":65627783,"CreationDate":"2021-01-08T10:53:00.000","Title":"How can I estimate missing bounding box outputs using known bounding box data of a sequence?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have used sklearn.tree.DecisionTreeRegressor to predict a regression problem with two independables aka the features \"X\", \"Y\" and the predicted dependable variable \"Z\".\nWhen I plot the tree, the leafs do not seem to differ much from a Classification tree. The result is not a function at each leaf, but it is a single value at each leaf, just like in a classification.\nCan someone explain, why this is called a regression and why it is different to a classification tree?\nBecause I seem to have misunderstood the sklearn class, is there a tree package for python, that does a \"real\" regression and has a function as an output at each leaf? 
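The interpolation the YOLO answer describes is a few lines of coordinate arithmetic. A sketch that fills every gap by linearly interpolating each box coordinate between the nearest detected frames (boxes assumed to be `(x1, y1, x2, y2)` tuples, `None` where detection failed):

```python
def fill_missing(boxes):
    """Fill None entries by linear interpolation between nearest known frames."""
    known = [i for i, b in enumerate(boxes) if b is not None]
    out = list(boxes)
    for i, b in enumerate(boxes):
        if b is not None:
            continue
        prev = max((k for k in known if k < i), default=None)
        nxt = min((k for k in known if k > i), default=None)
        if prev is None or nxt is None:
            continue                        # can't interpolate at sequence edges
        t = (i - prev) / (nxt - prev)       # fractional position inside the gap
        out[i] = tuple((1 - t) * p + t * n
                       for p, n in zip(boxes[prev], boxes[nxt]))
    return out
```

For a single missing frame between two detections this reduces to the plain average the answer suggests; the weighting by `t` handles longer gaps.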
With X,Y and Z, this would probably be some kind of surface at each leaf.","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":120,"Q_Id":65631318,"Users Score":1,"Answer":"This is to be expected. The output at each leaf is not a function, it is a single value, representing the predicted numeric (hence regression) output for all instances in that leaf. The output is a \"function\" in the sense that you get different values depending on which leaf you would land in. Classification tree words exactly the same, but the output value represents a class probability, not the predicted value of Z.\nIn other words, regressions output functions that map to arbitrary values, but there is no rule that this function must be continuous. With trees, the function is more of a \"stair step\".","Q_Score":0,"Tags":"python,scikit-learn,regression,decision-tree","A_Id":65631380,"CreationDate":"2021-01-08T14:57:00.000","Title":"How is the result from a Decision Tree Regressor continuous?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have used sklearn.tree.DecisionTreeRegressor to predict a regression problem with two independables aka the features \"X\", \"Y\" and the predicted dependable variable \"Z\".\nWhen I plot the tree, the leafs do not seem to differ much from a Classification tree. The result is not a function at each leaf, but it is a single value at each leaf, just like in a classification.\nCan someone explain, why this is called a regression and why it is different to a classification tree?\nBecause I seem to have misunderstood the sklearn class, is there a tree package for python, that does a \"real\" regression and has a function as an output at each leaf? 
With X,Y and Z, this would probably be some kind of surface at each leaf.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":120,"Q_Id":65631318,"Users Score":0,"Answer":"For anyone searching for what I was searching for:\nThere are combination of Linear\/Log\/... Regression Models with Decision Trees, called \"Model Tree\". Unfortunately not included in any free python package, but there are some implementations on github, if you google it.It is also supposed to be available soon with scikit-learn.\nThe Model Tree is simular to the M5 algorithm.","Q_Score":0,"Tags":"python,scikit-learn,regression,decision-tree","A_Id":65668497,"CreationDate":"2021-01-08T14:57:00.000","Title":"How is the result from a Decision Tree Regressor continuous?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have trained a custom object detection model using 750 images using ImageAI on Google Colab Pro about a month ago using TensorFlowGPU 1.13 and have roughly 30min\/epoch training time. Now, when I train using the same dataset but with TensorFlowGPU 2.4.3 (ImageAI doesnt support old TF anymore), I am getting very little GPU usage (0.1GB) and 6 hour per epoch training times. I have tried training the same model on my local machine and I am getting very slow training times as well.\nI am using the following imports (based on ImageAI documentation):\n!pip install tensorflow-gpu==2.4.0 keras==2.4.3 numpy==1.19.3 pillow==7.0.0 scipy==1.4.1 h5py==2.10.0 matplotlib==3.3.2 opencv-python keras-resnet==0.2.0\n!pip install imageai --upgrade\nI am pulling my training data from Google Drive.\nIs there anything I could be missing that could speed up my object detection training times on either Google Colab or my local machine? 
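The "single value at each leaf" behaviour discussed above is easy to see in a toy depth-1 regression tree: each leaf predicts the *mean* target of the training points that fall into it, which is exactly why the prediction surface is a stair step rather than a fitted function. A self-contained sketch (not sklearn's implementation, just the same principle):

```python
def fit_stump(xs, ys):
    """Depth-1 regression 'tree': each leaf predicts the mean target of its side."""
    best = None
    for thr in sorted(set(xs))[1:]:        # candidate splits between distinct x values
        left = [y for x, y in zip(xs, ys) if x < thr]
        right = [y for x, y in zip(xs, ys) if x >= thr]
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        sse = (sum((y - lm) ** 2 for y in left)
               + sum((y - rm) ** 2 for y in right))
        if best is None or sse < best[0]:
            best = (sse, thr, lm, rm)
    _, thr, lm, rm = best
    return lambda x: lm if x < thr else rm  # piecewise-constant predictor
```

A deeper tree just recurses on each side, producing more steps; a "model tree" (M5-style) replaces the constant per leaf with a fitted linear model.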
The slow training times is slowing my research down.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":372,"Q_Id":65632510,"Users Score":0,"Answer":"If you want full GPU usage, from my experience, you must revert back to previous versions of ImageAI and it's compatible packages. Here is a list of compatible packages that I have installed that work as of now (January 2021) on my local machine and Google Colab:\n\nTF-GPU==1.13.1\nKeras==2.2.4\nImageai==2.1.0\n\nThis fixed any issue caused by the most recent patch of ImageAI. I now am back to full GPU usage. Until the issue is patched, I suggest using the old version.","Q_Score":1,"Tags":"python,tensorflow,computer-vision,object-detection,imageai","A_Id":65654555,"CreationDate":"2021-01-08T16:10:00.000","Title":"Little to No GPU Usage during Custom Object Detection Training After Recent ImageAI Update","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've tried to import a model on Kaggle:\nfrom statsmodels.tsa.arima.model import ARIMA\nbut it returned this error:\nAttributeError: module 'numpy.linalg.lapack_lite' has no attribute '_ilp64'\nThere's numpy version 1.18.5. 
Could you please tell how to fix this error?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":920,"Q_Id":65632754,"Users Score":1,"Answer":"I've just back up statmodels library to the version 0.11.0:\npip uninstall statsmodels -y\npip install statsmodels==0.11.0\nIt seems like default version 0.11.1 has a bug","Q_Score":1,"Tags":"python,statsmodels,kaggle","A_Id":65636386,"CreationDate":"2021-01-08T16:26:00.000","Title":"Numpy doesn't provide an attribute","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Have been putting together applications that include their own infrastructure\/deployment code using IntelliJ as my IDE. The total project as checked into Git includes several independent python scripts contained in sub directories under my project (including their package information and supporting python files). My IAC\/deployment code takes care of making sure these scripts have a virtualenv with desired packages when on the infrastructure it is deployed to. All of this is working fine, except for when I try to get IntelliJ to understand the python subdirectory environments.\nWhen I open the whole project tree as an IntelliJ project, there seems to be no way to explain to IntelliJ that some of the sub directories should be viewed as their own python project, such that they can have have their own virtualenv, packages, and base import directory understood. Being that IntelliJ doesn't understand these things, it sees my imports in these scripts as broken and I can't jump around in code etc.\nAs a work-a-round I have been sometimes just opening the python sub directories as their own IntelliJ project so that IntelliJ can understand them. 
But this is not ideal, as I have to have several different IntelliJ instances for the same larger project.\nSo now perhaps my question makes more sense, and I will restate it. Is there some way I am missing where I could have one instance of IntelliJ correctly understand the whole project including that some subdirectories are like python sub-projects with their own virtual env and packages etc?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":471,"Q_Id":65633545,"Users Score":0,"Answer":"I've already accepted the very detailed answer by @Steve, and found it very useful to get me on the right track. But for my specific IntelliJ version(2020.1.2 on MacOS) I had to set it up a bit differently, so am including what I did as an alternate answer:\nNote, that for my version of IntelliJ and this approach, the subdirectory to serve as a python root does NOT need to be a project itself, as is the case in Steve's answer.\nStep 1) Set up a virtualenv for the subdirectory of your project which is to serve as the root your python code. I used python poetry, whatever method you use, note the path of the virualenv's python executable as you will need to enter it in IntelliJ later.\nStep 2) Open File -> Project Structure. A window pops up (eventually took 30 seconds or so for me).\nStep 3) On the panel at the left of the Project Structure window, select \"Modules\"\nStep 4) A the top of the second column from the left is a + - and copy icon. Click the + icon. A drop down appears and you should select \"Import Module\", causing another pop up to appear.\nStep 5) In the Pop Up, navigate to the subdirectory of your project that is to serve as the root directory of your python script and select it with the Open button at bottom right. 
The pop up window disappears and after a few seconds an \"Import Module\" popup window appears.\nStep 6) In the Import Module window select \"Create module from existing sources\" then click the Next button and follow the Wizard like steps, which will give you the chance to specify the virtualenv path you setup in step 1.\nStep 7) Click Finish for the Import Module window, and you should be set up.","Q_Score":3,"Tags":"python,pycharm,intellij-14","A_Id":65706216,"CreationDate":"2021-01-08T17:21:00.000","Title":"Within a larger PyCharm or IntelliJ project is there support for subdirectories with their own virtualenvs?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is there a way to find an inspect element without opening the f12 menu?","AnswerCount":1,"Available Count":1,"Score":-0.1973753202,"is_accepted":false,"ViewCount":186,"Q_Id":65635325,"Users Score":-1,"Answer":"In chrome and a few other browsers you can hit \"Ctrl+Shift+I\"","Q_Score":1,"Tags":"python,selenium,web-inspector","A_Id":65635372,"CreationDate":"2021-01-08T19:31:00.000","Title":"How to search for an inspect element without using the F12 menu","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"In the tensorflow documentation I see the call() method defined when subclassing activations and models. 
However, in case of subclassing regularizers, initializers and constraints, they define the __call__() method instead.\nWhen playing around with both, I could not find any differences myself.\nCould someone tell me what the difference is?","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":95,"Q_Id":65635870,"Users Score":2,"Answer":"call() is just a regular method that you can call on an instance of a class, e.g. foo.call(...).\n__call__() is a special method that makes the instance itself callable. So instead of doing foo.call(...) you can just do foo(...). (You can also do foo.__call__() still.)","Q_Score":0,"Tags":"python,class,tensorflow,methods","A_Id":65636427,"CreationDate":"2021-01-08T20:19:00.000","Title":"What is the difference between call() and __call__() method in python?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"In the tensorflow documentation I see the call() method defined when subclassing activations and models. However, in case of subclassing regularizers, initializers and constraints, they define the __call__() method instead.\nWhen playing around with both, I could not find any differences myself.\nCould someone tell me what the difference is?","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":95,"Q_Id":65635870,"Users Score":1,"Answer":"__call__ is a Python magic method (or dunder method) which makes class objects callable.
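The distinction described in the answers can be shown with a tiny sketch (a hypothetical class for illustration, not Keras itself):

```python
class Layer:
    """A hypothetical class showing call() vs __call__()."""

    def call(self, x):
        # call() is just a regular user-defined method
        return x * 2

    def __call__(self, x):
        # __call__() is the dunder method that makes the instance callable;
        # a framework could do extra setup here before delegating to call()
        return self.call(x)


layer = Layer()
print(layer.call(3))  # explicit method call
print(layer(3))       # instance used as a callable via __call__
```

Both lines print 6 here; the difference is only in how the method is reached.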
On the other hand, call() is a user-defined method in Keras; in the background, the mentioned __call__ method invokes it, but before doing so __call__ does some extra things, like building weight and bias tensors based on the input tensor shape.","Q_Score":0,"Tags":"python,class,tensorflow,methods","A_Id":65636398,"CreationDate":"2021-01-08T20:19:00.000","Title":"What is the difference between call() and __call__() method in python?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have an image to threshold. But I don't want to use classical methods. I have my own min and max threshold values. How can I apply these threshold values to my image and get a binary mask? For example, min is 300 and max is 500: if my pixel value is between these I get 255, if not I get 0. Thanks for helping.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":243,"Q_Id":65636983,"Users Score":0,"Answer":"Assuming you're using opencv, there's the cv2.inRange(img, min, max) function that does exactly that. If you want a library-agnostic solution then you could iterate through your image and build the mask yourself by checking each individual pixel value against your bounds.","Q_Score":0,"Tags":"python,image-processing,binary,minmax,image-thresholding","A_Id":65646466,"CreationDate":"2021-01-08T22:01:00.000","Title":"Image thresholding with my own min-max value with python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"for time in df[col]:\ndatetime.datetime.fromtimestamp(int(time))\nWhy is it not updating the dataframe df[col] directly?
My aim is to update one column of a data frame with a new date format.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":19,"Q_Id":65641493,"Users Score":0,"Answer":"I solved it; I created a list and updated the related column of the dataframe with the list.\na_list=[datetime.datetime.fromtimestamp(int(time)) for time in df[col]]\ndf[col]=a_list","Q_Score":0,"Tags":"python,dataframe,date,updates,epoch","A_Id":65641702,"CreationDate":"2021-01-09T10:18:00.000","Title":"Changing date format of a dataframe column","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I\u2019m building a GUI in Python with tkinter. I\u2019ve made login, register and change-password screens, hosting the data on MySQL. I want to add an option so that if a user has already chosen password \u201cx\u201d, for example, he will be able to repeat that password only 1 more time; after that, the change password screen will not give him the option to choose the same password again. Any clue how to do it?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":62,"Q_Id":65649344,"Users Score":0,"Answer":"Create a table with (user, passwords_used) as columns. Each time a user changes their password, check it against this table. If that (user, password) pair isn't in the table, add it to the table and change the password.
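Both branches of that check might be sketched like this (using sqlite3 for illustration; the table and column names are hypothetical, and a real application should store salted password hashes rather than plain text):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE passwords_used (user TEXT, password TEXT, UNIQUE(user, password))"
)

def change_password(user, password):
    # Reject the change if this (user, password) pair was ever used before
    cur = conn.execute(
        "SELECT 1 FROM passwords_used WHERE user = ? AND password = ?",
        (user, password),
    )
    if cur.fetchone() is not None:
        return False  # reused: reject
    # Otherwise record it and accept the change
    conn.execute(
        "INSERT INTO passwords_used (user, password) VALUES (?, ?)",
        (user, password),
    )
    return True

print(change_password("alice", "x"))  # first use: accepted
print(change_password("alice", "x"))  # reuse: rejected
```

The first call prints True, the second False.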
Otherwise reject it as reused.","Q_Score":0,"Tags":"python,python-3.x,tkinter","A_Id":65649381,"CreationDate":"2021-01-10T01:30:00.000","Title":"Limit same password in password change","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is there a way in Visual Studio Code to clear the previous code in the terminal every time I execute the code? It's very annoying to type clear every time I want to run the code again (or to run another file).","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":460,"Q_Id":65649841,"Users Score":1,"Answer":"To clear the terminal in VS Code, press Ctrl + Shift + P (or go to View in the menu bar at the upper left corner of VS Code and open the Command Palette), then type the command Terminal: Clear.\nI think Ctrl + K should do the trick too if you are on Windows, or else you can make shortcuts for clearing the terminal using VS Code shortcuts (keybindings file).","Q_Score":1,"Tags":"python,visual-studio-code,terminal,settings","A_Id":65649874,"CreationDate":"2021-01-10T03:19:00.000","Title":"how to clear the vscode terminal when I run the code again?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"My code fails when it tries to import requests, despite it already being installed. I did a pip list and saw the requests module there. I uninstalled it and reinstalled it with both pip install and pip3 install, also adding sudo both times. Whenever I try to install it I get the message that the requirement is already satisfied.
Is there something else that I could try?\nIf it helps, I am using VSCode on a Mac. I also have Jupyter and Spyder installed and have used them before, however I\u2019ve never used the requests module on this device.\nUPDATE:\nI created a virtualenv and installed requests there; when running the script in the venv I am not getting the error anymore. However, I am still curious why it was being thrown on the base env. Anything else I could check?","AnswerCount":3,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1133,"Q_Id":65651546,"Users Score":1,"Answer":"You probably have multiple installations\/environments.\nBefore the \"import requests\" line, put \"import sys; print(sys.executable)\".\nThis prints the python executable being used - verify that it is the same one that you can successfully import requests with.","Q_Score":0,"Tags":"python,python-requests,python-module","A_Id":65651608,"CreationDate":"2021-01-10T08:53:00.000","Title":"ImportError: No module named requests but module already exists","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Long ago I did the Deep Learning specialization on Coursera and I rewrote the NST notebook's code so it would run on Spyder. The code ran well and I hadn't touched it for a year. Now I tried to run it again and I get \"NaN\" in the cost function and the generated picture is black. I tried to see the values of the variables from the Variables Explorer and they seem fine. Does someone have an idea why this happens?
I didn't change the code at all, and the only thing I did since last year is install packages.\nThanks!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":61,"Q_Id":65652302,"Users Score":0,"Answer":"Either your model's weights are too large or the learning rate is too large, so it diverges. Make sure to tune the learning rate (make it smaller than it is now) or use gradient\/weight clipping.","Q_Score":0,"Tags":"python,tensorflow,deep-learning,computer-vision,conv-neural-network","A_Id":65652451,"CreationDate":"2021-01-10T10:37:00.000","Title":"Getting \"NaN\" in Neural Style Transfer code","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a MacBook Air M1 and I can't install python modules like numpy and matplotlib.\nI installed python3.8 with homebrew, then installed virtualenv. In the venv, when I run 'pip install numpy' this error shows up:\n'... ERROR: Failed building wheel for numpy\nFailed to build numpy\nERROR: Could not build wheels for numpy which use PEP 517 and cannot be installed directly'\nI tried 'pip install --upgrade pip setuptools wheel', which doesn't work.
Please help.","AnswerCount":1,"Available Count":1,"Score":0.6640367703,"is_accepted":false,"ViewCount":3810,"Q_Id":65653536,"Users Score":4,"Answer":"Install python 3.9.1 from python.org (ARM version)\nThen, for Numpy:\npython3 -m pip install cython\npython3 -m pip install --no-binary :all: --no-use-pep517 numpy==1.20rc1\nThen, for matplotlib:\nbrew install libjpeg\npython3 -m pip install matplotlib\nThis works for me on a Mac min with Apple Silicon.\nNo luck for SciPy yet, though.","Q_Score":3,"Tags":"python-3.x,virtualenv,homebrew,macos-big-sur,apple-silicon","A_Id":65745638,"CreationDate":"2021-01-10T12:59:00.000","Title":"I can't install python modules on Apple Silicon","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm using django-channels. When an exception is raised within a consumer it will log the error, the websocket connection will be disconnected and then it will re-connect again.\nI would like to send a websocket message before it disconnects. I've tried to catch the error, send the message and then re-raise the exception. But the message still isn't sent.\nWhat's the best way of achieving this?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":460,"Q_Id":65653802,"Users Score":2,"Answer":"When you raise an error it seems like the the actual raising of the error takes precedence of sending the message which happens later.\nSo the solution I went with in the end was to catch the exception in place, append the exception and check whether there were any exceptions to be raised after a message was sent.\nIf there was an error to raise, raise it. That way errors are raised server side and any errors will get known frontend side as well.\nAlternatively, which might be a better solution. Catch the error and then log the exception. 
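The catch-append-send-then-raise pattern described in that answer might be sketched in plain Python, independent of the channels API (here `send` is just a callback standing in for the consumer's real send method, an assumption for illustration):

```python
import logging

def handle(send, action):
    # Catch the exception in place and append it, so the outgoing
    # message is sent before anything is re-raised.
    errors = []
    try:
        result = action()
    except Exception as exc:  # illustrative catch-all
        logging.exception("consumer action failed")
        errors.append(exc)
        result = None
    # Send the message first (in a real consumer this would be self.send)
    send({"ok": not errors, "result": result})
    # Then check whether there was an error to raise, and raise it
    if errors:
        raise errors[0]
```

This way the frontend always receives a message, and the server-side exception is still raised afterwards.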
Have a special method that sends the error back to frontend and then return early.\nThat way the server will never disconnect and there is no need for re-connection. Which saves some time and processing.","Q_Score":0,"Tags":"python,django,channels","A_Id":65654479,"CreationDate":"2021-01-10T13:29:00.000","Title":"Send a websocket message when exceptions is raised in django channels","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"SITUATION:\nI have created a python package where I use the following libraries:\n\nmatplotlib\nregex\nstatistics\nos\nunittest\ncoverage\n\nmy problem is that when I do pip freeze, the result only returns\nversions values for\n\nmatplotlib==3.2.1\nregex==2020.11.13\nstatistics==1.0.3.5\ncoverage==5.3.1\n\nI have read some sites where they say that this is because, for example,\nos and unittest comes already installed with Python3.X.\nQUESTION:\n\nShould I include 'os', 'unittest' in 'requirements.txt'?\nIf so, which is the version I should write?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":37362,"Q_Id":65654215,"Users Score":4,"Answer":"You shouldn't include os and unittest in requirements.txt.\nAs you read, I confirm that os and unittest are included in Python 3.X.\nos and unittest version depends on your Python 3.X version.","Q_Score":4,"Tags":"python,python-packaging,requirements.txt","A_Id":65654276,"CreationDate":"2021-01-10T14:13:00.000","Title":"Python 'requirements.txt' file in package","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've got a bluetooth RGB bulb that I want to use for notifications in discord. 
It seems that, following standard guides, I would need to get an admin to add a bot account to each discord I wanted this functionality in.\nIs there a way to get access to messages accessible in my main account without getting another account\/bot added? I only need to be able to read out the messages so I can parse them and trigger RGB stuff.\nI would prefer to do this in python if possible, but other solutions are fine if need be.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":59,"Q_Id":65657004,"Users Score":0,"Answer":"Self bots are against Discord's TOS.\nYou can try using on_message(message) and using that to parse the message.content","Q_Score":0,"Tags":"python,discord","A_Id":65657053,"CreationDate":"2021-01-10T18:34:00.000","Title":"How to access discord messages from main account instead of bot account","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have an excel spreadsheet with raw data in it:\ndemo-data:\n1 2 3\n4 5 6\n7 8 9\nHow do I combine all the numbers into one series, so I can start doing math on them?
They are all just numbers of the same \"kind\".","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":40,"Q_Id":65657590,"Users Score":0,"Answer":"Given your dataframe as df, df.values.flatten() may help.","Q_Score":0,"Tags":"python,pandas,dataframe,series","A_Id":65657647,"CreationDate":"2021-01-10T19:30:00.000","Title":"Python pandas - multiple columns to one series","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a scraping project I threw together with scrapy in python. Now I've come across certain data that are only loaded onto the page as a user scrolls down. How do I emulate this in my scrapy spiders?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":68,"Q_Id":65659131,"Users Score":0,"Answer":"Inspect the page (press F12) in Google Chrome or similar.\nClick on the Network tab.\nScroll down on the page until the new data is rendered.\nAt the same time the new data is rendered in your browser, you should see a new file pop up in the inspection panel.\nThe file could be anything depending on the site, but most of the time it's JSON.\nClick on the file in the inspection panel and copy the Request URL.\nBack in scrapy you can send a request to this URL to get the dynamically rendered data.","Q_Score":0,"Tags":"python-3.x,web-scraping,scrapy","A_Id":65664542,"CreationDate":"2021-01-10T22:19:00.000","Title":"Scraping data requiring scrolling down with scrapy in python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I installed a bazelisk exe file and included that file in my environmental Path variable.
I can now run bazelisk commands but not bazel commands, and I think I was told that was normal. Is it? If it is: when I cd into my tensorflow folder and run python .\/configure.py, because I think that is a step I need to do to build tensorflow from source, I get the message Cannot find bazel. Please install bazel. What am I supposed to do? I am using python 3.6.2 and windows 10, and bazelisk is on v1.7.4.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":106,"Q_Id":65660427,"Users Score":0,"Answer":"Try to rename bazelisk to bazel, because it is just a wrapper for bazel.","Q_Score":1,"Tags":"python,python-3.x,windows,tensorflow,bazel","A_Id":65661694,"CreationDate":"2021-01-11T01:49:00.000","Title":"I can not use bazel to build tensorflow because I installed bazel with bazelisk?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I wanted to know whether it is possible to enable IAP OAuth for App Engine but for a subdomain or a subfolder. I have already enabled it for the domain, but I don't want it to show up for the entire website. For example: I want to use IAP secured login on admin.website.com but users to website.com should be able to access it without any issues. It is also okay if this can be done for website.com\/admin (I suppose enabling on website.com\/admin is a lot easier too)\n(Website name changed for privacy)","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":146,"Q_Id":65661512,"Users Score":0,"Answer":"You can activate or deactivate IAP (deactivate means grant allUsers the IAP-Secured Web App User role) per service.
It's the finest granularity; you can't deactivate it by URL path.","Q_Score":2,"Tags":"google-app-engine,google-cloud-platform,google-app-engine-python,identity-aware-proxy","A_Id":65669282,"CreationDate":"2021-01-11T04:56:00.000","Title":"How to enable IAP on a subdomain in App Engine?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I am using 32-bit python but I want to upgrade to 64-bit to use some advanced modules.\nBut I don't want to lose my 32-bit projects; any suggestions, please?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":103,"Q_Id":65663228,"Users Score":1,"Answer":"Pure Python code is neither 32 nor 64 bits, because Python is a very high level programming language. When you run a Python program, the Python interpreter compiles your source code to bytecode and executes it.\nIt doesn't matter whether you use a 32 bit or 64 bit Python interpreter to execute your pure Python program, because the result should be the same.
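One quick way to see whether the interpreter you are currently running is a 32-bit or 64-bit build:

```python
import struct
import sys

# Size of a pointer in bits: 32 on a 32-bit build, 64 on a 64-bit build
print(struct.calcsize("P") * 8)

# sys.maxsize also differs between the two builds:
# it exceeds 2**32 only on a 64-bit interpreter
print(sys.maxsize > 2**32)
```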
However, if your program uses libraries that contain non-Python code, you might have to reinstall the 64 bit version of those libraries.\nSo to conclude, don't be afraid to download and install the 64 bit version of Python because your Python programs will all run perfectly on it!","Q_Score":0,"Tags":"python,64-bit,32-bit","A_Id":65710349,"CreationDate":"2021-01-11T08:03:00.000","Title":"how to replace python 32-bits with 64-bits without losing my projects?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have few notions of machine learning and data science and I need to make a dimensional reduction of a data set, corresponding to the TV consumption of users. I have approximately 20 columns (features) and hundreds of thousands of samples.\nThe problem is that the features are of different kinds. For example, the region, the date, the type of device, the duration of consumption, etc.\nWhat algorithms could I implement in this particular case to reduce the number of features?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":27,"Q_Id":65664628,"Users Score":0,"Answer":"Look into feature selection algorithms, there are a ton of articles and public libraries that have implementations of these. Support Vector Machine's (SVM) is one that is commonly used. Take a look at sklearn\/tensorflow\/etc. 
docs to see implementation details and pick which one is best for your problem.","Q_Score":0,"Tags":"python,machine-learning,data-science,dimensionality-reduction","A_Id":65665322,"CreationDate":"2021-01-11T09:56:00.000","Title":"Dimensionality Reduction (Heterogenous Data)","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a huge dataframe and I need to display it in an excel sheet such that every other 2 columns are colored except the 1st column.\nFor example:\nIf there are columns 1 to 100,\ncolumns 2,3 must be red\nthen 4,5 non-colored\nthen 6,7 again red\nthen 8,9 non-colored\nand it goes on and on till the last column of the dataframe.","AnswerCount":4,"Available Count":1,"Score":0.049958375,"is_accepted":false,"ViewCount":240,"Q_Id":65667142,"Users Score":1,"Answer":"In Excel, select the columns containing your data or the entire spreadsheet. Click Conditional Formatting on the Home ribbon. Click New Rule. Click Use a formula to determine which cells to format. In the formula box enter =OR(MOD(COLUMN(A1),4)=2,MOD(COLUMN(A1),4)=3). Click the Format button. Select the Fill tab. Set the fill color to what you want. Hit OK a few times and you should be done.\nThis will fill in the cells whose column number is equal to 2 or 3 mod 4.","Q_Score":1,"Tags":"python,excel,dataframe,xlsxwriter","A_Id":65668488,"CreationDate":"2021-01-11T12:45:00.000","Title":"Color every other 2 columns of a dataframe into an excel?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"So I'm new at ML and I have a task where I need to be able to identify a specific object with my phone's camera and trigger an action at that moment.
I got to the point where I'm able to train the model, hook it up with a sample Android app Google provides and run it. All of this works perfectly with a few datasets I've downloaded from different sites, things like dogs, or flowers work fine. Now, I'm trying to train the model with a set of images that contain a simple object, for this example I'm using a Sony Bluetooth speaker XB12B. I took a bunch of photos of it in different surroundings but when I train the model I always get an accuracy of 1 and when I use that model in my phone using image labeling, anything it sees is 100% that object.\nI'm training the model with only one class.\nAs I mentioned I'm new to this and I'm not sure what I'm doing wrong, if it's the shape of the object, the lack of more elements in the dataset or some other parameter I'm missing.\nAny insights you guys may have or clues are greatly appreciated\nCheers","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":38,"Q_Id":65667515,"Users Score":0,"Answer":"You would have to train the model with an existing dataset + your new set of images labeled as \"speaker\" (for the sake of this example). 
If you train a model with only 1 class, it will learn to predict \"how close is this object to a speaker?\" for every object it finds instead of \"is this a speaker?\".\nYou need to at least use 2 classes - mark speaker images as 'speaker' and the remaining images as 'other' OR you need to use more than 2 classes - mark speaker images as 'speaker' and the remaining images as per their assigned class of 'dog', 'cat', etc.","Q_Score":0,"Tags":"python,tensorflow,machine-learning,tensorflow-lite,image-classification","A_Id":65675031,"CreationDate":"2021-01-11T13:13:00.000","Title":"Issue with training an Image Classification Model with Tensorflow Lite Model Maker","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is it possible to create a conda environment from two yaml files?\nsomething like\n\nconda env create -f env1.yml -f env2.yml\n\nor create a enviornment from one yaml file, then update the environment with the second yaml file?\n\nconda env create -f env1.yml\nconda env update -f env2.yml","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":470,"Q_Id":65668913,"Users Score":1,"Answer":"Does not seem to work. conda env create only uses the final --file argument and ignores the others. 
conda create and conda update do not support yaml files.","Q_Score":0,"Tags":"python,anaconda,conda","A_Id":66260290,"CreationDate":"2021-01-11T14:40:00.000","Title":"Can I create a conda environment from multiple yaml files?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have Python 3.8.5\nHow can I install the package vtkplotter with this Python version?\nThanks a lot!","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1103,"Q_Id":65670181,"Users Score":0,"Answer":"Use pip or conda or whatever to install your package.\nIf you find that a certain version of Python is not compatible with a specific version of a library, you can make a virtualenvironment with a specific python version that is compatible with the package, activate the virtual environment, and then install your package.","Q_Score":3,"Tags":"python","A_Id":65670201,"CreationDate":"2021-01-11T15:53:00.000","Title":"How to install vtkplotter with Python 3.8.5?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to access the settings for a Twitter user using the api, specifically the notification filters.\nFor example I would like to build a code that turns on these filters or turns them off, for example the \"only see notifications from those you follow\/follow you\"\nI don't see any documentation on this or through Tweepy either. 
Is this possible?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":100,"Q_Id":65670459,"Users Score":0,"Answer":"No, there's no API feature for this.","Q_Score":0,"Tags":"twitter,tweepy,twitterapi-python","A_Id":65671288,"CreationDate":"2021-01-11T16:09:00.000","Title":"Can I access the Twitter notification filters from the api?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need to generate an executable from a python project containing multiple folders and files.\nI tried to work with library cx_Freeze, but only worked for a single file project.\nCan you tell me how to do please?","AnswerCount":3,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":431,"Q_Id":65670519,"Users Score":0,"Answer":"use pyinstaller. just run\npip install pyinstaller and then open the folder the file is located in with shell and run pyinstaller --onefile FILE.py where file is the name of the python file that should be run when the exe is run","Q_Score":3,"Tags":"python,exec,executable,cx-freeze,generate","A_Id":65670713,"CreationDate":"2021-01-11T16:13:00.000","Title":"Generate an executable from python project","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need to generate an executable from a python project containing multiple folders and files.\nI tried to work with library cx_Freeze, but only worked for a single file project.\nCan you tell me how to do please?","AnswerCount":3,"Available Count":2,"Score":0.0665680765,"is_accepted":false,"ViewCount":431,"Q_Id":65670519,"Users Score":1,"Answer":"Running pyinstaller on your \"main\" python file should work, as PyInstaller 
automatically imports any dependencies (such as other python files) that you use.","Q_Score":3,"Tags":"python,exec,executable,cx-freeze,generate","A_Id":65670833,"CreationDate":"2021-01-11T16:13:00.000","Title":"Generate an executable from python project","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am doing training for detecting objects using yolov3, but I face a problem: when I set batch_size > 1 it causes CUDA out of memory, so I searched Google for another solution and found that it depends on my GPU (GTX 1070 8G).\nMaybe the number of epochs is high and needs to be optimized; maybe the epoch number should be decreased? I am training on 1k images with 200 pictures for validation.\nWhat is the best number of epochs to set to avoid overfitting?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":51,"Q_Id":65672181,"Users Score":0,"Answer":"Your model's overfitting won't depend on the number of epochs you set.\nSince you have made a validation split in your data, make sure that your train loss - val loss OR train acc - val acc is nearly the same. This will assure that your model is not overfitting.","Q_Score":0,"Tags":"python,machine-learning,deep-learning,pytorch","A_Id":65761464,"CreationDate":"2021-01-11T17:59:00.000","Title":"how many epochs for training 1k images","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"When I'm editing .py files, certain keyboard shortcuts don't work. Copy, Cut, Paste, Indent Ctrl+[] and comment Ctrl+\/ to name a few. However, undo and redo work. I only experience this when working with .py files.
I've looked in the keyboard shortcuts and AFAIK they are global shortcuts. I'm running Ubuntu 20.04.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":35,"Q_Id":65673331,"Users Score":0,"Answer":"Possibly it's because of the extension of the file: for example, in a file.vue I use Ctrl+Shift+A to comment, but in a Ruby file that same shortcut does another thing.\nTry these in .py files:\nSingle-line comment: Ctrl + 1\nMulti-line comment (select the lines to be commented): Ctrl + 4\nUnblock multi-line comment: Ctrl + 5","Q_Score":0,"Tags":"python,visual-studio-code","A_Id":65673447,"CreationDate":"2021-01-11T19:29:00.000","Title":"VSCode keyboard shortcuts don't work in all files","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have designed and exported a URDF model of a robot using SolidWorks and the sw2urdf plugin. Now I am trying to load it using the pybullet module in python to simulate it, and I keep getting the error:\nerror: Cannot load URDF file.\nAny help would be much appreciated, as this doesn't give much information about what might be wrong.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":2007,"Q_Id":65674052,"Users Score":0,"Answer":"Firstly, you need to put the entire folder from the export in the directory you are using.
Secondly, make sure all coordinate systems match for the global frame and each joint between links.\nThis seemed to work for me.","Q_Score":1,"Tags":"python,ros","A_Id":65706833,"CreationDate":"2021-01-11T20:28:00.000","Title":"Cannot load a URDF file using pybullet","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have designed and exported a URDF model of a robot using solidworks and the sw2urdf plugin. Now I am trying to load it using the pybullet module in python to simulate it, and I keep getting the error:\nerror: Cannot load URDF file.\nAny help would be much appreciated as this doesn't give much information about what might be wrong?","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":2007,"Q_Id":65674052,"Users Score":1,"Answer":"Make sure that the urdf file that you created is connected to the obj file. If you open the urdf file there must be an object link.\n","Q_Score":1,"Tags":"python,ros","A_Id":68151122,"CreationDate":"2021-01-11T20:28:00.000","Title":"Cannot load a URDF file using pybullet","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am using python venv to create virtual environments. 
But, since I am working with several projects with different virtual environments, I don't want to manually set environment variables every time I switch to a different project.\nIs there a way to set venv environment variables automatically when activating the venv?\nWhat is the best practice for this problem?","AnswerCount":4,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":1886,"Q_Id":65674661,"Users Score":0,"Answer":"This concept is based on Two Scoops of Django. I have implemented it using venv.\n\nOpen the Windows PowerShell script in your virtual environment generated by venv.\nThe script is located at venv\/Scripts\/Activate.ps1\nAt the bottom of the file, you will see this line of code:\n$env:VIRTUAL_ENV = $VenvDir\nBelow that code, enter your environment variable as follows:\n$env:VARIABLE_NAME = 'variable_value'\n\nThe same concept applies if you are using the Command Prompt to activate the environment: you will need to place the environment variables in venv\/Scripts\/activate.bat","Q_Score":1,"Tags":"python,environment-variables,python-venv","A_Id":71641302,"CreationDate":"2021-01-11T21:17:00.000","Title":"Is there a way to automatically load environment variables when activating venv?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using python venv to create virtual environments. 
But, since I am working with several projects with different virtual environments, I don't want to manually set environment variables every time I switch to a different project.\nIs there a way to set venv environment variables automatically when activating the venv?\nWhat is the best practice for this problem?","AnswerCount":4,"Available Count":3,"Score":0.049958375,"is_accepted":false,"ViewCount":1886,"Q_Id":65674661,"Users Score":1,"Answer":"You need to write a bash script (in case you are using a bash shell) in which you specify a particular command that activates the project's python environment and adds the project-specific environment variables to the system environment, and removes those environment variables when you exit the project's python environment.\nBut I don't think this is a good\/correct way to do things. @mz's solution is the correct one: define a .env file with the env variables in it, and use load_dotenv to read the env variables when the project runs","Q_Score":1,"Tags":"python,environment-variables,python-venv","A_Id":65675081,"CreationDate":"2021-01-11T21:17:00.000","Title":"Is there a way to automatically load environment variables when activating venv?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using python venv to create virtual environments. But, since I am working with several projects with different virtual environments, I don't want to manually set environment variables every time I switch to a different project.\nIs there a way to set venv environment variables automatically when activating the venv?\nWhat is the best practice for this problem?","AnswerCount":4,"Available Count":3,"Score":0.049958375,"is_accepted":false,"ViewCount":1886,"Q_Id":65674661,"Users Score":1,"Answer":"Activating a virtual environment is nothing more than sourcing a shell script. 
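For example, a minimal sketch of what one might append to venv\/bin\/activate on a bash shell (MY_PROJECT_SETTING is a made-up example variable, not anything venv itself defines):

```shell
#!/usr/bin/env bash
# Sketch: lines one might append to venv/bin/activate.
# MY_PROJECT_SETTING is a hypothetical example variable.
export MY_PROJECT_SETTING="dev"

# Inside the script's existing deactivate() function, the matching
# cleanup line would be added so the variable disappears on deactivate:
#   unset MY_PROJECT_SETTING

echo "MY_PROJECT_SETTING=$MY_PROJECT_SETTING"
```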
You can edit that script to set whatever variables you like. You will probably also want to edit the definition of deactivate to clear or roll back whatever changes you made to the environment.","Q_Score":1,"Tags":"python,environment-variables,python-venv","A_Id":65674734,"CreationDate":"2021-01-11T21:17:00.000","Title":"Is there a way to automatically load environment variables when activating venv?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Are there any solutions to distinguish person names from organization names?\nI was thinking NER, however the data are stored in a structured table (and are not unstructured sentences). Specifically, the NAME column lists person and organization names (which I'm trying to distinguish). In the below example, I would like to produce the values listed within the PERSON column, based on the values listed within the NAME column.\n\n\n\n\nNAME\nPERSON\n\n\n\n\nTom Hanks\nTRUE\n\n\nNissan Motors\nFALSE\n\n\nRyan Reynolds\nTRUE\n\n\nTesla\nFALSE\n\n\nJeff's Cafe\nFALSE","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":30,"Q_Id":65676915,"Users Score":0,"Answer":"If you have reason to believe that all entries are sufficiently known (i.e., common brands and celebrities), you could utilize distant learning approaches with Wikipedia as a source.\nEssentially, you search for each entry on Wikipedia, and utilize results from unique search results (i.e., searching for \"Tesla\" would lead to at least two uniquely different pages, namely the auto company, and the inventor). 
Most importantly, each page has assigned category tags at the bottom, which make it trivial to classify a certain article by using string matches with a few synonyms (e.g., Apple has the category tag \"Companies listed on NASDAQ\", so you could just say all categories that have \"company\"\/\"companies\" in the name are referring to articles that are companies).\nWith the results from the unique search terms, you should be able to construct a large enough training corpus with relatively high certainty of having accurate ground truth data, which you can in turn use to train a \"conventional\" ML model on the task.\nCaveat:\nI should note that it gets significantly harder if you are doing this with names that are relatively unknown and likely won't have a Wikipedia article. In that case, I could only think of utilizing dictionaries of frequently used first\/last names in your respective country. It is certainly more prone to false positives\/negatives (e.g., \"Toyota\" is also a very common last name in Japan, although Westerners likely refer to the car maker). 
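To make the dictionary idea concrete, here is a toy sketch (FIRST_NAMES and ORG_HINTS are tiny made-up samples, not real name lists; a real system would use large frequency lists for the relevant country):

```python
# Toy sketch of the first-name / organisation-hint dictionary heuristic.
# FIRST_NAMES and ORG_HINTS are tiny made-up samples for illustration only.
FIRST_NAMES = {"tom", "ryan", "jeff"}
ORG_HINTS = {"motors", "cafe", "inc", "ltd", "company"}

def looks_like_person(name: str) -> bool:
    """Return True if the name looks like a person, False if it looks like an org."""
    tokens = name.lower().split()
    # Any organisation-sounding token vetoes the person hypothesis.
    if not tokens or any(t in ORG_HINTS for t in tokens):
        return False
    # Otherwise, require a known first name in the leading position.
    return tokens[0] in FIRST_NAMES

print(looks_like_person("Tom Hanks"))      # True
print(looks_like_person("Nissan Motors"))  # False
```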
Similarly, names that are less common (or have a different spelling) will also not be picked up and left out.","Q_Score":2,"Tags":"python,text,nlp","A_Id":65700031,"CreationDate":"2021-01-12T01:40:00.000","Title":"Distinguish Person's names from Organization names in structured table column","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to run a python script inside php file, the problem is i can't pass the file as an argument\nsomething like this\n$a =\"python \/umana\/frontend\/upload\/main.py 'filename' \";\n$output =shell_exec($a);\nThe real problem is, the file is not opening in python script.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":129,"Q_Id":65680051,"Users Score":1,"Answer":"It's solved\n$a =\"python-path C:\/python program path '$file_uploaded_path_as_param' \";\n$output =shell_exec($a);\nwe want to add the python path before python script.","Q_Score":0,"Tags":"python,php,yii","A_Id":65697404,"CreationDate":"2021-01-12T08:06:00.000","Title":"Run python script inside php and pass file as a parameter","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a Django app with a standard django.contrib.auth backend and have a lot of existing users, now I want to add login via Google account using Python Social Auth. Is there any way to allow login via Google account for existing users? 
How should I associate it with existing users?\nIs it okey to set up 'social_core.pipeline.social_auth.associate_by_email' ?\nSo when user try to log in using Google account and already have an account (created using standard registration with password) in my app then will be automatically logged in. I don't want to allow creating new accounts using Python Social Auth, only allow to login via Google for existing users.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":163,"Q_Id":65681133,"Users Score":0,"Answer":"Yes Cox, you can use this pipeline but pay attention because according to the django doc :\n\nThis pipeline entry is not 100% secure unless you know that the\nproviders enabled enforce email verification on their side, otherwise\na user can attempt to take over another user account by using the same\n(not validated) email address on some provider.","Q_Score":4,"Tags":"python,django,python-social-auth","A_Id":71218315,"CreationDate":"2021-01-12T09:27:00.000","Title":"How to associate existsing users from django.contrib.auth with Python Social Auth (Google backend)?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I want to auto run the python script when I open it from the terminal so that I won't have to press the run button\nFrom the terminal I want to open the file as :\npycharm-community main.py\nHow do I auto run it while it opens?","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":222,"Q_Id":65681422,"Users Score":0,"Answer":"Preferences > Tools > Startup Tasks, <+ Add> and select your main.py\nThen anytime you open the project the script will run and display results.\nI wanted main.py to run daily at a certain time, so used Keyboard Maestro. 
Keyboard Maestro is also able to control the running of main.py, eliminating the step above. That way my script runs only at the desired time of day, not every time I open the project.","Q_Score":2,"Tags":"python,pycharm","A_Id":71053503,"CreationDate":"2021-01-12T09:47:00.000","Title":"Auto run python scripts on pycharm while opening","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I want to auto run the python script when I open it from the terminal so that I won't have to press the run button\nFrom the terminal I want to open the file as :\npycharm-community main.py\nHow do I auto run it while it opens?","AnswerCount":2,"Available Count":2,"Score":0.1973753202,"is_accepted":false,"ViewCount":222,"Q_Id":65681422,"Users Score":2,"Answer":"Do File | Settings | Tools | Startup Tasks (or Ctrl-Alt-S).\nThen: Build, Execution, Deployment > Console > Python Console\nThis gives you a dialogue with an edit box Starting script. Put your import code there. 
That will run every time you open a new console.","Q_Score":2,"Tags":"python,pycharm","A_Id":65681531,"CreationDate":"2021-01-12T09:47:00.000","Title":"Auto run python scripts on pycharm while opening","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I would like to use a dictionary that only allows a single write of any specific key.\nIs there a (fool-proof) python subclass for a dictionary that will raise an exception when trying to overwrite a key?\nEDIT: Alternatively, is there a simple way to throw an exception that is fool-proof for any type of dictionary update?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":107,"Q_Id":65682628,"Users Score":0,"Answer":"I don't know a built-in way of achieving this, but one solution might be to just add a Boolean value to each value in the dict. When initializing the dict, you set this Boolean to True, and after the first write you change it to False.\nEvery time you try to change some value, you first check this boolean. 
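Alternatively, a small dict subclass can raise on any overwrite directly. A sketch (note it guards __setitem__ only; update() on a dict subclass can bypass __setitem__, so a truly fool-proof version would override that too):

```python
class WriteOnceDict(dict):
    """dict subclass that refuses to overwrite an existing key.

    Sketch only: this guards __setitem__; dict.update() and setdefault()
    on a dict subclass do not reliably route through __setitem__, so they
    would need overriding as well for a fool-proof version.
    """
    def __setitem__(self, key, value):
        if key in self:
            raise KeyError(f"key {key!r} already set")
        super().__setitem__(key, value)

d = WriteOnceDict()
d["a"] = 1        # first write: fine
try:
    d["a"] = 2    # second write: raises
except KeyError as e:
    print("blocked:", e)
```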
True means it's okay to change it, False means it's not.","Q_Score":0,"Tags":"python,dictionary","A_Id":65682696,"CreationDate":"2021-01-12T11:01:00.000","Title":"A python dictionary that raises an exception when trying to overwrite a key","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Hello, I am writing a program to automate mouse presses on another program for certain pixels, but I don't want a second program to get in the way of that click. My program is going to look for a green pixel on a certain part of the screen and click it, but if there is another program\/image in the way that is green I don't want it to click on that.\nI just want it to click on the process\/program I want it to click on, and not click on the screen.\nIf anyone could give me some tips on this, that would be helpful","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":108,"Q_Id":65682858,"Users Score":0,"Answer":"I'm sorry, I'm not familiar with keys interacting with programs, but I did a little research and found a library called PyWin32 that should satisfy your need. 
You can search for its documentation or try your luck by finding videos on this particular library on YouTube.\nAnyway, hopefully this helps set you in the right direction; feel free to ask any questions","Q_Score":0,"Tags":"python,automation,process","A_Id":65682986,"CreationDate":"2021-01-12T11:16:00.000","Title":"Clicking on pixel for certain process with python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need to do a or linebreak add 2 spaces at end","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":52,"Q_Id":65685117,"Users Score":1,"Answer":"Another solution is some kind of a trick without using realloc(). You can read your file twice.\n\nOpen the file\nIterate through its content and find the size of the future array\nClose the file\nAllocate memory\nOpen the file again\nRead numbers from the file and fill your array\n\nP.S. 
In the future, try to be more specific while writing questions titles.","Q_Score":0,"Tags":"python","A_Id":65685303,"CreationDate":"2021-01-12T13:41:00.000","Title":"quote by placing > at start of line in python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am a newbie to Python and working on a small opencv application.\nI need local_threshold from scikit-image but getting below error :\n\nTraceback (most recent call last): File\n\"D:\/Projects\/Python\/Document_Scanner\/main.py\", line 6, in \nfrom skimage.filters import threshold_local File \"C:\\Users\\nash2\\AppData\\Local\\Programs\\Python\\Python38-32\\lib\\site-packages\\skimage\\filters_init_.py\",\nline 4, in \nfrom .edges import (sobel, sobel_h, sobel_v, File \"C:\\Users\\nash2\\AppData\\Local\\Programs\\Python\\Python38-32\\lib\\site-packages\\skimage\\filters\\edges.py\",\nline 18, in \nfrom ..restoration.uft import laplacian File \"C:\\Users\\nash2\\AppData\\Local\\Programs\\Python\\Python38-32\\lib\\site-packages\\skimage\\restoration_init_.py\",\nline 13, in \nfrom .rolling_ball import rolling_ball, ball_kernel, ellipsoid_kernel File\n\"C:\\Users\\nash2\\AppData\\Local\\Programs\\Python\\Python38-32\\lib\\site-packages\\skimage\\restoration\\rolling_ball.py\",\nline 3, in \nfrom ._rolling_ball_cy import apply_kernel, apply_kernel_nan ImportError: DLL load failed while importing _rolling_ball_cy: The\nspecified module could not be found.\n\nI tried reverting to older version of scikit-image , but still getting error.\nMy current version of scikit-image is 0.18.0","AnswerCount":4,"Available Count":4,"Score":0.0,"is_accepted":false,"ViewCount":3097,"Q_Id":65687480,"Users Score":0,"Answer":"I was able to get it work by reinstalling python on my machine. 
Hope it helps somebody.","Q_Score":2,"Tags":"python,python-3.x,opencv,scikit-learn,scikit-image","A_Id":65713830,"CreationDate":"2021-01-12T15:59:00.000","Title":"ImportError: DLL load failed while importing _rolling_ball_cy:","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am a newbie to Python and working on a small opencv application.\nI need local_threshold from scikit-image but getting below error :\n\nTraceback (most recent call last): File\n\"D:\/Projects\/Python\/Document_Scanner\/main.py\", line 6, in \nfrom skimage.filters import threshold_local File \"C:\\Users\\nash2\\AppData\\Local\\Programs\\Python\\Python38-32\\lib\\site-packages\\skimage\\filters_init_.py\",\nline 4, in \nfrom .edges import (sobel, sobel_h, sobel_v, File \"C:\\Users\\nash2\\AppData\\Local\\Programs\\Python\\Python38-32\\lib\\site-packages\\skimage\\filters\\edges.py\",\nline 18, in \nfrom ..restoration.uft import laplacian File \"C:\\Users\\nash2\\AppData\\Local\\Programs\\Python\\Python38-32\\lib\\site-packages\\skimage\\restoration_init_.py\",\nline 13, in \nfrom .rolling_ball import rolling_ball, ball_kernel, ellipsoid_kernel File\n\"C:\\Users\\nash2\\AppData\\Local\\Programs\\Python\\Python38-32\\lib\\site-packages\\skimage\\restoration\\rolling_ball.py\",\nline 3, in \nfrom ._rolling_ball_cy import apply_kernel, apply_kernel_nan ImportError: DLL load failed while importing _rolling_ball_cy: The\nspecified module could not be found.\n\nI tried reverting to older version of scikit-image , but still getting error.\nMy current version of scikit-image is 0.18.0","AnswerCount":4,"Available Count":4,"Score":0.1973753202,"is_accepted":false,"ViewCount":3097,"Q_Id":65687480,"Users Score":4,"Answer":"Installing the old version (0.18.0rc0 version) of scikit-image I solved this 
problem","Q_Score":2,"Tags":"python,python-3.x,opencv,scikit-learn,scikit-image","A_Id":65863625,"CreationDate":"2021-01-12T15:59:00.000","Title":"ImportError: DLL load failed while importing _rolling_ball_cy:","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am a newbie to Python and working on a small opencv application.\nI need local_threshold from scikit-image but getting below error :\n\nTraceback (most recent call last): File\n\"D:\/Projects\/Python\/Document_Scanner\/main.py\", line 6, in \nfrom skimage.filters import threshold_local File \"C:\\Users\\nash2\\AppData\\Local\\Programs\\Python\\Python38-32\\lib\\site-packages\\skimage\\filters_init_.py\",\nline 4, in \nfrom .edges import (sobel, sobel_h, sobel_v, File \"C:\\Users\\nash2\\AppData\\Local\\Programs\\Python\\Python38-32\\lib\\site-packages\\skimage\\filters\\edges.py\",\nline 18, in \nfrom ..restoration.uft import laplacian File \"C:\\Users\\nash2\\AppData\\Local\\Programs\\Python\\Python38-32\\lib\\site-packages\\skimage\\restoration_init_.py\",\nline 13, in \nfrom .rolling_ball import rolling_ball, ball_kernel, ellipsoid_kernel File\n\"C:\\Users\\nash2\\AppData\\Local\\Programs\\Python\\Python38-32\\lib\\site-packages\\skimage\\restoration\\rolling_ball.py\",\nline 3, in \nfrom ._rolling_ball_cy import apply_kernel, apply_kernel_nan ImportError: DLL load failed while importing _rolling_ball_cy: The\nspecified module could not be found.\n\nI tried reverting to older version of scikit-image , but still getting error.\nMy current version of scikit-image is 0.18.0","AnswerCount":4,"Available Count":4,"Score":0.0996679946,"is_accepted":false,"ViewCount":3097,"Q_Id":65687480,"Users Score":2,"Answer":"You need to install and distribute the contents of Microsoft's vcredist_x64.exe. 
In particular you need VCOMP140.DLL.","Q_Score":2,"Tags":"python,python-3.x,opencv,scikit-learn,scikit-image","A_Id":68213823,"CreationDate":"2021-01-12T15:59:00.000","Title":"ImportError: DLL load failed while importing _rolling_ball_cy:","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am a newbie to Python and working on a small opencv application.\nI need local_threshold from scikit-image but getting below error :\n\nTraceback (most recent call last): File\n\"D:\/Projects\/Python\/Document_Scanner\/main.py\", line 6, in \nfrom skimage.filters import threshold_local File \"C:\\Users\\nash2\\AppData\\Local\\Programs\\Python\\Python38-32\\lib\\site-packages\\skimage\\filters_init_.py\",\nline 4, in \nfrom .edges import (sobel, sobel_h, sobel_v, File \"C:\\Users\\nash2\\AppData\\Local\\Programs\\Python\\Python38-32\\lib\\site-packages\\skimage\\filters\\edges.py\",\nline 18, in \nfrom ..restoration.uft import laplacian File \"C:\\Users\\nash2\\AppData\\Local\\Programs\\Python\\Python38-32\\lib\\site-packages\\skimage\\restoration_init_.py\",\nline 13, in \nfrom .rolling_ball import rolling_ball, ball_kernel, ellipsoid_kernel File\n\"C:\\Users\\nash2\\AppData\\Local\\Programs\\Python\\Python38-32\\lib\\site-packages\\skimage\\restoration\\rolling_ball.py\",\nline 3, in \nfrom ._rolling_ball_cy import apply_kernel, apply_kernel_nan ImportError: DLL load failed while importing _rolling_ball_cy: The\nspecified module could not be found.\n\nI tried reverting to older version of scikit-image , but still getting error.\nMy current version of scikit-image is 0.18.0","AnswerCount":4,"Available Count":4,"Score":0.049958375,"is_accepted":false,"ViewCount":3097,"Q_Id":65687480,"Users Score":1,"Answer":"You need to install the scikit-image 0.18.0rc0 package. 
To install this package run the command below.\n\npip install scikit-image==0.18.0rc0\n\nNote - This also takes care of any previously installed versions of it by removing it and installing the specified version. If there is any problem with the installation, try running the command in an administrator terminal window.","Q_Score":2,"Tags":"python,python-3.x,opencv,scikit-learn,scikit-image","A_Id":69097068,"CreationDate":"2021-01-12T15:59:00.000","Title":"ImportError: DLL load failed while importing _rolling_ball_cy:","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm using pip regularly\nright now i'm getting errors when trying to run\npip install numpy\nWARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ReadTimeoutError(\"HTTPSConnectionPool(host='pypi.org', port=443): Read timed out. (read timeout=15)\")': \/simple\/numpy\/ WARNING: Retrying (Retry(total=3, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ReadTimeoutError(\"HTTPSConnectionPool(host='pypi.org', port=443): Read timed out. (read timeout=15)\")': \/simple\/numpy\/ WARNING: Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ReadTimeoutError(\"HTTPSConnectionPool(host='pypi.org', port=443): Read timed out. (read timeout=15)\")': \/simple\/numpy\/ WARNING: Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ReadTimeoutError(\"HTTPSConnectionPool(host='pypi.org', port=443): Read timed out. 
(read timeout=15)\")': \/simple\/numpy\/ WARNING: Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ReadTimeoutError(\"HTTPSConnectionPool(host='pypi.org', port=443): Read timed out. (read timeout=15)\")': \/simple\/numpy\/ ERROR: Could not find a version that satisfies the requirement numpy (from versions: none) ERROR: No matching distribution found for numpy\nI get the same error when running the command from my pc and also where running it from my laptop.\nI had some internet connectivity issues the other day, also the problem seemed to occur after I installed\npip install -U databricks-connect==7.1.* ran some commands(databricks-connect configure and databricks-connect test) and then uninstalled it.\nAgain, the problem occurs on both computers connected to the same network.\nThanks\nroy","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":234,"Q_Id":65688006,"Users Score":0,"Answer":"might be a network provider related problem","Q_Score":0,"Tags":"python,networking,pip","A_Id":65747876,"CreationDate":"2021-01-12T16:31:00.000","Title":"pip timing out on multiple computers on the same network","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"Is there a way to maximize the pygame window when intializing the window with set_mode without making it completely fulsscreen. I tried to get the required window_size by printing event.size, when VIDEORESIZE is called, which is (1920, 1017). But if I use this value for my window_size when setting mode I just get a window at the same size of a maximized window. 
If I press maximize in the top right corner it just switches between a thicker and a thinner border.\nObviously I want the thinner border; is that possible from the start?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":43,"Q_Id":65689163,"Users Score":0,"Answer":"You should try to use the RESIZABLE flag when using set_mode in pygame, like:\nDISPLAYSURF = pygame.display.set_mode((1920, 1017), pygame.RESIZABLE)","Q_Score":0,"Tags":"python,pygame,window,maximize,maximize-window","A_Id":65689305,"CreationDate":"2021-01-12T17:43:00.000","Title":"Maximize the pygame window without making it fullscreen","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm using sqlalchemy currently but I can't store multiple values in a column. The only values I can put in a db are strings, int, etc. but not lists. I was thinking: what if I wanted a list of integers and I just made it a string in this format: \"1|10|91\" and then split it afterwards? Would that work, or would I run out of memory or something?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":34,"Q_Id":65690067,"Users Score":0,"Answer":"In Python, you can easily convert a dictionary to a JSON file. A dictionary consists of a bunch of variables. 
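For instance, a minimal sketch of round-tripping a Python list through JSON text, so it can live in a single string column instead of an ad-hoc \"1|10|91\" format (the values here are made up):

```python
import json

# Sketch: store a list of ints as JSON text instead of "1|10|91".
scores = [1, 10, 91]
stored = json.dumps(scores)   # text that fits a plain String column
restored = json.loads(stored)
print(restored == scores)  # True
```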
You can then easily convert the JSON file to SQL.\nJSON files are often used to pass variables from one programming language to another.","Q_Score":0,"Tags":"python,string,list","A_Id":65690207,"CreationDate":"2021-01-12T18:43:00.000","Title":"Is using a string instead of a list to store multiple values going to work?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am collaborating with a friend. We are working on the same script. The script loads a file. We both access the file in different locations (not the working directory).\nI don't want any friction when downloading his script and running it on my local computer. One way I thought about was setting the path as one of the parameters of the script. What approach would you recommend? I am using a conda environment; maybe one can set some environment variables?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":58,"Q_Id":65691163,"Users Score":0,"Answer":"If you use the file, for example, in a function, just pass the file path as a parameter. That's the simplest solution, but it works well.","Q_Score":0,"Tags":"python,path,python-os","A_Id":65691188,"CreationDate":"2021-01-12T19:55:00.000","Title":"How do I dynamically set the path of a file Python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to loop through hundreds of stock tickers, calculating the most recent Heikin-Ashi open and close value for each. However, you need the previous HA open and close to calculate the current HA open. 
Is there any way to do this without having to deal with a stock's entire history?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":248,"Q_Id":65694323,"Users Score":1,"Answer":"You do not need the entire history, but the more data you have, the smaller the effect on the most recent values.\nFor the calculation of the first HA open, you need to use the normal open. After that, you can iteratively calculate the remaining HA opens.","Q_Score":2,"Tags":"python,finance,stock,trading,technical-indicator","A_Id":65793845,"CreationDate":"2021-01-13T01:10:00.000","Title":"Heikin Ashi without a stock's entire history?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have made a Cryptography App, basically with python, and I have posted it on GitHub. Now I have decided to create a .exe file with pyinstaller for that, because the Cryptography App needs tkinter and python to work. I have created the .exe file with pyinstaller on Windows 32-bit. Now I know that it will work on Windows 64-bit also, but WILL IT WORK ON ANY OTHER OS LIKE LINUX OR MAC?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":139,"Q_Id":65696219,"Users Score":0,"Answer":"This is a good question with a very simple answer. .exe is a Windows file format. 
You will not be able to run the .exe on a Mac simply because this type of file is not supported on macOS.\nA solution, if you want to run this on Mac or Linux, would be to try running it under a Windows compatibility layer such as Wine.","Q_Score":0,"Tags":"python,operating-system,pyinstaller,exe","A_Id":65696274,"CreationDate":"2021-01-13T05:37:00.000","Title":"Will an .exe file created by PYINSTALLER on WINDOWS work on any OTHER OS?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have Python 3.8.5 installed via Anaconda, but I want to switch to Python 3.7.6. I tried to use conda install python=3.7.6, but after the command finished, I still get Python 3.8.5.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":102,"Q_Id":65697111,"Users Score":0,"Answer":"I use \"conda install python=3.7 anaconda=custom\" and it works.","Q_Score":0,"Tags":"python,python-3.x,anaconda","A_Id":65713242,"CreationDate":"2021-01-13T07:04:00.000","Title":"How can I change my version of Python to a prior version using Anaconda?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am new to anaconda and the concept of environments and have a few questions I want to clarify!\n\nDoes the Anaconda graphical installer install a new \"copy\" of python onto my Mac?\n\nSo, in the future, am I correct to say that when I update packages\/python through Conda, it will not affect my native python version? (and therefore will not affect my macOS \"dependencies\"?)\n\nShould I be creating a new environment for my learning instead of using the base environment?
(b\/c Conda documentation states that\n\n\n\nWhen you begin using conda, you already have a default environment named base. You don't want to put programs into your base environment, though. Create separate environments to keep your programs isolated from each other.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":37,"Q_Id":65697188,"Users Score":0,"Answer":"Yes, you get a new copy of python that can also have a different version than the one shipped with your OS. conda will set up your PATH environment variable so that the new python will take precedence whenever you call python\n\nYes\n\nThat could be somewhat of an opinion based answer, but I would highly encourage it. It helps you get used to the environment concept, and in case you mess something up, you can just delete an environment and create a new one\n\nWhen you do pip list it will also show you packages that are in your currently active conda environment. This is again because conda has by default also installed pip and has modified the PATH so that conda's pip is found when you do pip commands\n\n\nNote: You can always check with the which command where commands are called from. When doing which pip or which python you should see that both point to your anaconda or miniconda installation directory","Q_Score":0,"Tags":"python,anaconda","A_Id":65697290,"CreationDate":"2021-01-13T07:12:00.000","Title":"Regarding Anaconda's python and native macOS python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"So I am a beginner at python, and I was trying to install packages using pip.
But any time I try to install I keep getting the error:\n\nERROR: Could not install packages due to an EnvironmentError: [WinError 2] The system cannot find the file specified: 'c:\\python38\\Scripts\\sqlformat.exe' -> 'c:\\python38\\Scripts\\sqlformat.exe.deleteme'\n\nHow do I fix this?","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":421,"Q_Id":65697374,"Users Score":0,"Answer":"Looks like the issue is with permissions. Try running the same command in the terminal as an \"administrator\". Let me know if that fixes the issue.","Q_Score":0,"Tags":"python,pip","A_Id":65697495,"CreationDate":"2021-01-13T07:28:00.000","Title":"Python Package Installation: Pip Errors","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"So I am a beginner at python, and I was trying to install packages using pip. But any time I try to install I keep getting the error:\n\nERROR: Could not install packages due to an EnvironmentError: [WinError 2] The system cannot find the file specified: 'c:\\python38\\Scripts\\sqlformat.exe' -> 'c:\\python38\\Scripts\\sqlformat.exe.deleteme'\n\nHow do I fix this?","AnswerCount":3,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":421,"Q_Id":65697374,"Users Score":1,"Answer":"Try running the command line as administrator. The issue looks like it's about permissions. To run as administrator, type cmd in the search bar and right-click on the Command Prompt icon. There you will find the option to run as administrator.
Click the option and then try to install the package.","Q_Score":0,"Tags":"python,pip","A_Id":65697480,"CreationDate":"2021-01-13T07:28:00.000","Title":"Python Package Installation: Pip Errors","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to make an app with kivy, but am wondering if I can connect it to a Wordpress database. I want to show the wp posts within the kivy app, is there an easy way to do so?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":46,"Q_Id":65698094,"Users Score":0,"Answer":"Wordpress provides a REST API. You can use it to get posts in JSON format using an HTTP request and do whatever you want with the data.","Q_Score":0,"Tags":"python,wordpress,kivy,kivymd","A_Id":65698796,"CreationDate":"2021-01-13T08:25:00.000","Title":"Is there an easy way to show Wordpress posts in kivy\/python app?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"ERROR: mysqlclient-1.4.6-cp38-cp38-win32.whl is not a supported wheel on this platform.\n! Push rejected, failed to compile Python app.\n!
Push failed","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":25,"Q_Id":65699181,"Users Score":0,"Answer":"Try googling \"mysql build from wheel\".\nFind your operating system.\nDownload.\nTry again.","Q_Score":0,"Tags":"python,django,mysql-cli","A_Id":65702391,"CreationDate":"2021-01-13T09:36:00.000","Title":"Not Able To Deploy Django App Because of This Error How To Fix It","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I want to access some data stored on the iCloud Drive. I'm currently using Pycharm as my code interpreter. I'm able to access files stored in OneDrive by changing my working directory, but I'm not able to do the same with iCloud. One solution is to move all the data from iCloud to OneDrive, but that seems inefficient. Can somebody help me with this?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":309,"Q_Id":65700706,"Users Score":0,"Answer":"The iCloud Drive does not show up as part of the normal filesystem, therefore the Pycharm file picker dialog cannot access it. The best solution to your problem is to transfer the data from your iCloud Drive onto your computer's hard drive.","Q_Score":0,"Tags":"python,macos,pycharm,icloud","A_Id":65701211,"CreationDate":"2021-01-13T11:08:00.000","Title":"Use iCloud as working directory in Pycharm","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a special use case where I need to run a task on all workers to check if a specific process is running on the celery worker.
The problem is that I need to run this on all my workers as each worker represents a replica of this specific process.\nIn the end I want to display that 8\/20 workers are ready to process further tasks.\nBut currently I'm only able to process a task on either a randomly selected worker or just on one specific worker, which does not solve my problem at all ...\nThanks in advance","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":188,"Q_Id":65701898,"Users Score":0,"Answer":"I can't think of a good way to do this on Celery. However, a nice workaround perhaps could be to implement your own command, and then you can broadcast that command to every worker (just like you can broadcast shutdown or status commands, for example). When I think of it, this does indeed sound like some sort of monitoring\/maintenance operation, right?","Q_Score":1,"Tags":"python,celery","A_Id":65702188,"CreationDate":"2021-01-13T12:22:00.000","Title":"Celery - execute task on all nodes","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"Using tkinter we can use either widget.grid(row,col) or widget.pack() to place a widget.\nSince row,col corresponds to the row\/col-index in a given Frame\/Window - how do we know how many columns the Frame\/Window consists of? E.g. if I want to place a widget in the middle or to the very right","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":411,"Q_Id":65701967,"Users Score":2,"Answer":"Rows and columns are just concepts, not actual things. There is effectively an infinite number of rows and columns, all with a width or height of zero until they contain a widget or are configured to have a minimum width or height.
From a practical standpoint, there are as many rows and columns as there are pixels in the window.\nIn reality, the number of rows and columns is entirely up to you. A window can have a single row and column or it can have several. It all depends on what you add to the window.\nA frame starts out with nothing in it, so there are no rows and columns, just empty space. When you add a widget to a row and column, it now has one row and one column plus maybe some empty space. Even if you place your first widget at row 50 and column 20, there is still just one row (50) and one column (20).\nThere are simple techniques to put something in the middle, or along the right side. For example, because you can define how many columns there are, you can configure the master to use three columns and then place your widget in the middle column. You can use columnconfigure to cause the last column to take up all extra space with the weight option. This will move any widgets in the last column to the right edge.","Q_Score":1,"Tags":"python-3.x,tkinter","A_Id":65704715,"CreationDate":"2021-01-13T12:26:00.000","Title":"How many rows\/columns are there in a tkinter Frame?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Getting the following error when trying to import RangeParameter from ax\nImportError: cannot import name 'Bounds'","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":26,"Q_Id":65702067,"Users Score":0,"Answer":"Upgrade scipy to version 1.1 or above.","Q_Score":0,"Tags":"python,optimization","A_Id":65702104,"CreationDate":"2021-01-13T12:32:00.000","Title":"Error in importing RangeParameter from ax in python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System
Administration and DevOps":0,"Web Development":0},{"Question":"I want to replace \"-inf\" with \"NA\" in my collection containing around 500 fields and ~10 million documents. The reason why I have mentioned string in my query is because I have changed the datatype of the dataframe to string before saving in the db.\nCan someone please suggest an efficient solution to do so?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":60,"Q_Id":65703051,"Users Score":0,"Answer":"Your choices are:\n\nProgram something that will iterate every record and then every field in that record, perform any updates and save them back into the database.\n\nExport all the data in human readable format, perform a search-and-replace (using sed or similar), then load the data back in.\n\n\nThere's no magic command that will do what you want quickly and easily.","Q_Score":0,"Tags":"python-3.x,mongodb,pymongo","A_Id":65797073,"CreationDate":"2021-01-13T13:35:00.000","Title":"Replacing a string value with another string in entire Mongo collection using pymongo?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Dearpygui used to work just fine, till I upgraded Pycharm and Python from 3.6 to 3.9. Then this error occurred.\nWent back to 3.6. Abandoned Pycharm. Removed Pycharm, Pip and Python. Installed Python 3.6, 3.7, 3.9. Started in a new environment. Upgraded pip. Tried all Python versions.... for all versions pip successfully installed Dearpygui to the site-packages... but all failed to load the DLL on the Python-'import'-command.\nI've seen others with similar problems with other packages, but found no useful suggestion...\nIt looks as if something has changed in the settings\/environment of Windows 10... (by Python?)\ndearpygui-0.6.123, pip 20.3.3, python 3.7 (64-bit) a.o., windows 10 Pro 2014\n(PS.
although 'the specified module' can not be found upon the Python-'import'-command, PyCharm has somehow full access to the Dearpygui-class-information...)","AnswerCount":1,"Available Count":1,"Score":-0.1973753202,"is_accepted":false,"ViewCount":400,"Q_Id":65704181,"Users Score":-1,"Answer":"What is the name of your file? Would it happen to be dearpygui.py?","Q_Score":0,"Tags":"python,dll,pip","A_Id":65707945,"CreationDate":"2021-01-13T14:44:00.000","Title":"ImportError: DLL load failed, Dearpygui, Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"So I'm currently upgrading a Django app from 1.8 to 3.1 and came upon the following error:\nAttributeError: 'QuerySet' object has no attribute 'field_names'\nI have tried to find deprecation information on this attribute and checked the QuerySet API for information but couldn't find anything relevant.\nHas anybody gone through the same?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":32,"Q_Id":65704613,"Users Score":0,"Answer":"Unless I missed out on any of the release notes from 1.8 to 2.0 or the deprecation timeline, it's not documented that QuerySet objects have an attribute 'field_names' that's basically a list of the names of the columns of the table the QuerySet belongs to.\nLooking at the code from stable 1.8 and stable 1.9 of django\/db\/models\/query.py you can see that the attribute was removed.\nIn case you were using that attribute and need to have it work after an update, you can use the Meta API of Django's Models like this:\nfield_names = [f.name for f in Object._meta.get_fields()]","Q_Score":0,"Tags":"python,django","A_Id":65704614,"CreationDate":"2021-01-13T15:08:00.000","Title":"QuerySet has no field_names?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop 
Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"My platform is Windows 10 and Python 3.9. There is another computer(Windows server 2008R2) without Python. So I'd like to use pyinstaller in my computer and use .exe on the other computer.\nI tried simple script print(\"hello\") and used pyinstaller -F myscript.py\n.exe works on my computer, but failed on the other computer.\nError\nerror loading python dll ~ python39.dll\nShould I use Python 3.8? Or what should I do?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":3901,"Q_Id":65704901,"Users Score":5,"Answer":"The problem is that Pyinstaller does not create fully standalone executables, it creates dependencies (E.g. this python39.dll), so this python39.dll should be on the computer which is running this executable. Because python is already installed on your computer, python39.dll is already there and everything works fine. The problem is that machine that you're running this program on probably won't have it.\nTo fix this there are several solutions:\n\nInstall python 3.9 on targets' machine (But in this case you don't need to create an executable)\nInclude python39.dll with your program\n\nFor second solution just create a folder and move your executable into it as well as this python39.dll library. Windows will find it because it's in the same directory where this executable is. You can get this library from c:\\Windows\\System32 folder (Or where all DLL's are stored on your system) and then just copy it into folder with your executable. After that ship not just executable but this folder with library included.\n@Stepan wrote in comments that you can also include this library right in your executable by adding --add-binary \"path\\to\\python39.dll\" to your command when compiling. 
The final command will look like this:\npyinstaller -F --add-binary \"c:\\Windows\\System32\\python39.dll\" myscript.py","Q_Score":2,"Tags":"python","A_Id":65705677,"CreationDate":"2021-01-13T15:26:00.000","Title":"Pyinstaller exe not working on other computer(with other windows ver.)","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have X lists of elements, each list containing a different number of elements (without repetitions inside a single list). I want to generate (if possible, 500) sequences of 3 elements, where each element belongs to a different list, and sequences do not repeat. So something like:\nX (in this case 4) lists of elements: [A1,A2], [B1,B2,B3,B4], [C1], [D1,D2,D3,D4,D5]\npossible results: [A1,B2,D2], [B3,C1,D2], [A1,B2,C1]... (here 500 sequences are impossible, so would be less)\nI think I know how to do it with a nasty loop: join all the lists, random.sample(len(l),3) from the joint list, if 2 indices belong to the same list repeat, if not, check if the sequence was not found before. But that would be very slow. I am looking for a more pythonic or more mathematically clever way.\nPerhaps a better way would be to use random.sample([A,B,C,D], 3, p=[len(A), len(B), len(C), len(D)]), then for each sequence from it randomly select an element from each group in the sequence, then check if a new sequence generated in this way hasn't been generated before. 
But again, a lot of looping.\nAny better ideas?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":30,"Q_Id":65706729,"Users Score":0,"Answer":"Check the itertools module (combinations and permutations in particular).\nYou can get a random.choice() from the permutations of 3 elements from the X lists (thus selecting 3 lists), and for each of them get a random.choice() (random module).","Q_Score":0,"Tags":"python,combinations,probability","A_Id":65706871,"CreationDate":"2021-01-13T17:19:00.000","Title":"Selecting multiple random sequences of N elements from M different-length lists in Python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"So I am using a venv (virtual environment) for one of my python projects and need to install PyAudio for it. I am using PyCharm for the venv.\nWhen I try to install PyAudio, it usually comes up with 'Command errored out with exit status: 1'\nHow can I install PyAudio into my venv properly? (I know PyAudio needs to be installed differently, I just don't know how to do it in the venv)","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":127,"Q_Id":65709096,"Users Score":0,"Answer":"So I fixed my problem.
I found the PyAudio file in my normal python (Python 3.9 64-bit) and transferred it to my venv (Python 3.8 64-bit) and now it works fine!","Q_Score":0,"Tags":"python-3.x,pycharm,python-venv","A_Id":65771524,"CreationDate":"2021-01-13T20:11:00.000","Title":"How to install PyAudio via Virtual Environment in PyCharm","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am researching on Celery as background worker for my flask application. The application is hosted on a shared linux server (I am not very sure what this means) on Linode platform. The description says that the server has 1 CPU and 2GB RAM. I read that a Celery worker starts worker processes under it and their number is equal to number of cores on the machine - which is 1 in my case.\nI would have situations where I have users asking for multiple background jobs to be run. They would all be placed in a redis\/rabbitmq queue (not decided yet). So if I start Celery with concurrency greater than 1 (say --concurrency 4), then would it be of any use? Or will the other workers be useless in this case as I have a single CPU?\nThe tasks would mostly be about reading information to and from google sheets and application database. These interactions can get heavy at times taking about 5-15 minutes. Based on this, will the answer to the above question change as there might be times when cpu is not being utilized?\nAny help on this will be great as I don't want one job to keep on waiting for the previous one to finish before it can start or will the only solution be to pay money for a better machine?\nThanks","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":179,"Q_Id":65709154,"Users Score":1,"Answer":"This is a common scenario, so do not worry. 
If your tasks are not CPU heavy, you can always overutilise like you plan to do. If all they do is I\/O, then you can pick even a higher number than 4 and it will all work just fine.","Q_Score":2,"Tags":"python,celery","A_Id":65718359,"CreationDate":"2021-01-13T20:16:00.000","Title":"Run multiple processes in single celery worker on a machine with single CPU","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I'm making a python GUI with tkinter that requires drawing lots of circles and lines. I want to limit some of the canvas drawings to a certain part of the window, e.g. the right half. For lines, I can just do the tedious calculation (and have done so) but there's no obvious way to do this for circles because they can be disconnected, so I'm wondering if there's just a simpler solution (possibly a specific keyword) that allows you to limit where canvas can draw? 
Thanks","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":83,"Q_Id":65709538,"Users Score":0,"Answer":"There are a couple of options that you could try.\n\nYou could make two separate canvas', one on the right side and one on the left side and only allow the user to draw in the one on the right.\n\nGiven a center point and a radius, it is possible to calculate if the circle will extend into the left half of the canvas, in which case, you can remove it from the canvas.","Q_Score":0,"Tags":"python,tkinter","A_Id":65709782,"CreationDate":"2021-01-13T20:47:00.000","Title":"Is it possible to limit a Tkinter canvas drawing to a specific frame?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have made an alexa like program on python. Now, I want it to auto run when I start my computer and take inputs and give outputs as well. 
How do I do it?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":328,"Q_Id":65712858,"Users Score":0,"Answer":"- For Linux\nFirst make sure you add this line to the top of your python program.\n#!\/usr\/bin\/python3\n\nCopy the python file to the \/bin folder with the command:\n\nsudo cp -i \/path\/to\/your_script.py \/bin\n\nNow add a new cron job.\n\nsudo crontab -e\nThis command will open the cron file.\n\nNow paste the following line at the bottom of the file.\n\n@reboot python \/bin\/your_script.py &\n\nDone, now test it by rebooting the system.\n\nYou can add any command to run at startup in the cron file.\nCron can also be used to perform any kind of scheduling other than startup.\n- For Windows\nNavigate to C:\\Users\\username\\Appdata\\Roaming\\Microsoft\\Windows\\Start Menu\\Programs\\Startup\nPlace your compiled exe file there, it will be executed on startup.\nTo make an exe from a py file, first install the pyinstaller module with pip install pyinstaller.\nNow run the command pyinstaller --onefile your_script.py in the folder where the python file is.","Q_Score":0,"Tags":"python,autorun","A_Id":65712899,"CreationDate":"2021-01-14T03:08:00.000","Title":"Python Auto run files on startup","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am stuck on the .wav file extension. I used the \"gtts\" method to convert text into speech and save the file, and it is working perfectly, but the problem is \"gtts\" only supports the .mp3 extension and I need the output file with a .wav extension.\nSo my question is: is there any function like \"gtts\" to convert text into speech and save the file with a .wav extension?\nOr is there anyone who has already done work on this module? Please share your opinion.
Thanks","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":54,"Q_Id":65713613,"Users Score":0,"Answer":"Thank you for sharing the link, it is now working after entering the environment path of the lib.","Q_Score":1,"Tags":"python","A_Id":65714020,"CreationDate":"2021-01-14T04:54:00.000","Title":"Is there any function in which .wav file will generate","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"When running psutil.boot_time() for the first time (yesterday) on my windows computer it showed the correct boot time. But when I run it today it shows yesterday's boot time!\nWhat should I do? Am I doing something wrong?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":59,"Q_Id":65713779,"Users Score":0,"Answer":"If you haven't rebooted since then, the boot_time will be the same.","Q_Score":0,"Tags":"python,psutil","A_Id":70358266,"CreationDate":"2021-01-14T05:16:00.000","Title":"Incorrect Boot Time In `psutil.boot_time()` on windows with Python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have an interactive plotly graph which I wish to put into a Java GUI.\nI am using swing for the Java GUI.\nThe plotly graph functions and displays correctly in Chrome, but not in Java swing components.\nHow should I go about doing this?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":81,"Q_Id":65715368,"Users Score":0,"Answer":"You need to start plotly as a separate web service and call it via an embedded browser.","Q_Score":1,"Tags":"java,python,browser,plotly","A_Id":71008038,"CreationDate":"2021-01-14T08:07:00.000","Title":"Embedding a
Plotly graph to Java GUI","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Essentially, I want to send a script to my friend to go over. He has python installed on his computer, but doesn't specifically have 'pyinputplus' - a key component of the program.\nIs there a way that I can send this script to him without him installing pyinputplus? Or can I effectively insert 'pip install pyinputplus' into the code and have it execute when he runs it? I had also considered making the script an executable, but didn't think that would help.\nRelatively new to this, so apologies for my naivety.\nThanks.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":62,"Q_Id":65716394,"Users Score":0,"Answer":"Indeed, you can give him the source code of the library directly.\nUsing the command sys.path in Python, you can find all the folders where Python will search for these libraries.\nJust give him the library folder and let him place it inside one of the paths listed in his sys.path.","Q_Score":0,"Tags":"python,pip","A_Id":65716452,"CreationDate":"2021-01-14T09:27:00.000","Title":"Method of opening .py scripts without packages installed? (pip)","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've been handed a file with over 30 different neural network architectures, which I should examine regarding certain metrics. Basically, without calling every single network explicitly, i.e. \"MapNet1()\", I want to iterate over the networks and just track the metrics I'm interested in.\nHow do I do that, especially regarding even bigger numbers of networks to investigate?
Or better formulated: how can I loop over the networks defined in my file \"classification_models.py\" without calling each of the networks manually?\nAll the best.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":32,"Q_Id":65717092,"Users Score":1,"Answer":"Make a Python list whose entries are your neural networks (for example, append the trained models one by one). Then iterate over the list as usual.","Q_Score":0,"Tags":"python,automation,neural-network","A_Id":65717521,"CreationDate":"2021-01-14T10:17:00.000","Title":"How to iterate over neural network architectures?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Hi, I'm new to scraping data from twitter and I've been working on a project to collect tweets and their replies. I scraped the tweets using the twitter API, but I couldn't scrape their replies. Any suggestions?","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":980,"Q_Id":65718539,"Users Score":0,"Answer":"You can achieve this as follows. Say you want to get the replies to a tweet by @barackobama whose tweet Id is 1234 (just an example).\nSearch for tweets having @barackobama in the tweet text, then filter on in_reply_to_status_id=1234.\nBy this method you can get the replies to this tweet.","Q_Score":0,"Tags":"python,twitter","A_Id":65725531,"CreationDate":"2021-01-14T11:58:00.000","Title":"Ideas for scraping reply for each tweet","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Hi, I'm new to scraping data from twitter and I've been working on a project to collect tweets and their replies.
I scraped the tweets using the twitter API, but I couldn't scrape their replies. Any suggestions?","AnswerCount":3,"Available Count":2,"Score":0.0665680765,"is_accepted":false,"ViewCount":980,"Q_Id":65718539,"Users Score":1,"Answer":"Scraping is not the same as using an API. An API is a convenient method to get data, while scraping is doing it manually. If you want to get replies, but the API doesn't provide them, you can use selenium. It is a well-known library used for scraping data.","Q_Score":0,"Tags":"python,twitter","A_Id":65718617,"CreationDate":"2021-01-14T11:58:00.000","Title":"Ideas for scraping reply for each tweet","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I wanted to know if there is a way to import multiple modules in parallel using python, to decrease the load time of the app.\nany help would be highly appreciated. :)","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":105,"Q_Id":65718634,"Users Score":0,"Answer":"In Python there is a module called threading, which lets you execute code in parallel. For example, if you have a server with threading, then it can calculate and manage all the clients in parallel.","Q_Score":0,"Tags":"python,multiprocessing,python-import,python-module","A_Id":65718766,"CreationDate":"2021-01-14T12:05:00.000","Title":"Importing multiple modules at parallel in python(using multiprocessing)","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have to generate a matrix (propagator in physics) by ordered multiplication of many other matrices. Each matrix is about the size of (30,30), all real entries (floats), but not symmetric. 
The number of matrices to multiply varies between 1e3 and 1e5. Each matrix is only slightly different from the previous one; however, they are not commutative (and at the end I need the product of all these non-commutative multiplications). Each matrix is for a certain time slice, so I know how to generate each of them independently, wherever they are in the multiplication sequence. At the end, I have to produce many such matrix propagators, so any performance enhancement is welcomed.\nWhat is the fastest implementation of such matrix multiplication in python?\nIn particular -\n\nHow to structure it? Are there fast axes and so on? Preferable dimensions for rows \/ columns of the matrix?\nAssuming memory is not a problem, should I allocate and build all matrices before multiplication, or generate each per time step? Should I store each matrix in a dedicated variable before multiplication, or generate it when needed and directly multiply?\nWhat are the cumulative effects of function-call overhead when generating matrices?\nAs I know how to build each, should it be parallelized? For example, maybe create batch sequences from the start of the sequence and from the end, multiply them in parallel, and at the end multiply the results in the proper order?\nIs it preferable to use a module other than numpy? Can Numba be useful? Or some other efficient way to compile in place to C, or use of optimized external libraries? (please give a reference if so, I don't have experience in that)\n\nThanks in advance.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":57,"Q_Id":65722938,"Users Score":1,"Answer":"I don't think that the matrix multiplication would take much time. So, I would do it in a single loop. The assembling is probably the costly part here.\nIf you have bigger matrices, a map-reduce approach could be helpful. 
(split the set of matrices, apply matrix multiplication to each set and do the same for the resulting matrices)\nNumpy is perfectly fine for problems like this as it is pretty optimized. (and is partly in C)\nJust test how much time the matrix multiplication takes and how much the assembling. The result should indicate where you need to optimize.","Q_Score":0,"Tags":"python,performance,matrix,matrix-multiplication","A_Id":65723725,"CreationDate":"2021-01-14T16:31:00.000","Title":"Fast subsequent multiplication of many matrices in python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Using debian, seems like installed all dependencies\nsudo apt update\nsudo apt install -y git zip unzip openjdk-8-jdk python3-pip autoconf libtool pkg-config zlib1g-dev libncurses5-dev libncursesw5-dev libtinfo5 cmake libffi-dev libssl-dev\npip3 install --user --upgrade Cython==0.29.19 virtualenv # the --user should be removed if you do this in a venv\nadded the following line at the end of your ~\/.bashrc file\nexport PATH=$PATH:~\/.local\/bin\/\ndid clone git, installed","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":59,"Q_Id":65723283,"Users Score":0,"Answer":"Installed from pip not from git and it worked.","Q_Score":0,"Tags":"python,linux,debian,buildozer","A_Id":65723477,"CreationDate":"2021-01-14T16:52:00.000","Title":"pkg_resources: The 'sh' distribution was not found and is requred by buildozer","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have the current file structure in a folder with nothing else in it:\n\n(folder) crypt\n(file) run.bat\n\nI'm on Windows and I'm trying to 
execute a python.exe with run.bat that is in the crypt folder. How do I do this?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":87,"Q_Id":65724516,"Users Score":0,"Answer":"I figured it out! I just had to add a \".\/crypt\/python.exe\" argument as the thing to run.","Q_Score":0,"Tags":"python,windows,executable","A_Id":65724544,"CreationDate":"2021-01-14T18:05:00.000","Title":"Running an executable in a child directory from parent directory windows","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am building a Kivy app into an Android app using buildozer and I am running into an error. When I type buildozer -v android debug I run into the following error.\n\n#sdkmanager path \"\/home\/...\/.buildozer\/android\/platform\/android-sdk\/tools\/bin\/sdkmanager\" does not exist, sdkmanager is notinstalled\n\nHow can I solve this problem?\nNote: I am using the Windows virtual Linux terminal (Ubuntu)","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":484,"Q_Id":65724724,"Users Score":0,"Answer":"So what is basically happening is that buildozer is not finding the sdk manager. There are two ways to tackle this problem. First, you can download the SDK tools and unzip them in the folder. Second, you can start afresh. Delete all the folders created during the build via the terminal using this command rm -rf folder_name. Then build the app again. 
This will cause the packages to be downloaded again.","Q_Score":0,"Tags":"python,android,user-interface,kivy,buildozer","A_Id":65750377,"CreationDate":"2021-01-14T18:19:00.000","Title":"While building Kivy app 'sdkmanager not installed'","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to deploy a Python ML app (made using Streamlit) to a server. This app essentially loads a NN model that I previously trained and makes classification predictions using this model.\nThe problem I am running into is that because TensorFlow is such a large package (at least 150MB for the latest tensorflow-cpu version) the hosting service I am trying to use (Heroku) keeps telling me that I exceed the storage limit of 300MB.\nI was wondering if anyone else had similar problems or an idea of how to fix\/get around this issue?\nWhat I've tried so far\n\nI've already tried replacing the tensorflow requirement with tensorflow-cpu which did significantly reduce the size, but it was still too big so -\nI also tried downgrading the tensorflow-cpu version to tensorflow-cpu==2.1.0 which finally worked but then I ran into issues on model.load() (which I think might be related to the fact that I downgraded the tf version since it works fine locally)","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":144,"Q_Id":65728247,"Users Score":1,"Answer":"I've faced the same problem last year. I know this does not answer your Heroku specific question, but my solution was to use Docker with AWS Beanstalk. It worked out cheaper than Heroku and I had less issues with deployment. 
I can guide on how to do this if you are interested","Q_Score":0,"Tags":"python,tensorflow,heroku,keras,streamlit","A_Id":65780677,"CreationDate":"2021-01-14T23:11:00.000","Title":"Deploy TensorFlow model to server?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to deploy a Python ML app (made using Streamlit) to a server. This app essentially loads a NN model that I previously trained and makes classification predictions using this model.\nThe problem I am running into is that because TensorFlow is such a large package (at least 150MB for the latest tensorflow-cpu version) the hosting service I am trying to use (Heroku) keeps telling me that I exceed the storage limit of 300MB.\nI was wondering if anyone else had similar problems or an idea of how to fix\/get around this issue?\nWhat I've tried so far\n\nI've already tried replacing the tensorflow requirement with tensorflow-cpu which did significantly reduce the size, but it was still too big so -\nI also tried downgrading the tensorflow-cpu version to tensorflow-cpu==2.1.0 which finally worked but then I ran into issues on model.load() (which I think might be related to the fact that I downgraded the tf version since it works fine locally)","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":144,"Q_Id":65728247,"Users Score":0,"Answer":"You might have multiple modules downloaded. 
I would recommend you to open file explorer and see the actual directory of the downloaded modules.","Q_Score":0,"Tags":"python,tensorflow,heroku,keras,streamlit","A_Id":65728358,"CreationDate":"2021-01-14T23:11:00.000","Title":"Deploy TensorFlow model to server?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am experiencing this error every time I attempt to open a Jupyter notebook. Around 1:30pm, I did two things on my new Mac mini with the M1 chip:\n\nconda update --all\ndownloaded ChromeDriver for Selenium web scraping\n\nSince then I cannot open any of my Jupyter notebooks. I am able to start Jupyter notebook on my Safari browser from the terminal but not open any notebooks or a new Python3 notebook. I also created a new environment with conda installed and still received the same error.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":95,"Q_Id":65728571,"Users Score":0,"Answer":"Found a \"solution:\" revert back to initial installation of conda using conda install --revision 1, and I am now able to open Jupyter notebooks.\nQuestion remains whether conda update --all or conda update is problematic on the new m1 chip. Feel free to post your thoughts. Thanks!","Q_Score":0,"Tags":"python,path,jupyter-notebook,environment-variables,conda","A_Id":65728789,"CreationDate":"2021-01-14T23:52:00.000","Title":"Unreadable Notebook ... 
FileNotFoundError(2, 'No such file or directory')","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I get an error when I use this:\ndf = pd.read_csv('filename.csv', usecols=[1,5,7])\nreturn Response(df.to_json(),status=status.HTTP_200_OK)","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":65,"Q_Id":65730183,"Users Score":0,"Answer":"df.to_json(r'Path where the new JSON file will be stored\\New File Name.json')\nYou need to first save the file and then send it in the response","Q_Score":0,"Tags":"python,django,django-views","A_Id":65730219,"CreationDate":"2021-01-15T03:38:00.000","Title":"How to create response json from pandas csv in django?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I just want to invoke a grpc method and exit from the process, I don't need the response back to the client.\nMy use case is like this:\nAWS Lambda will only invoke multiple grpc requests and exit without waiting for the responses","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":456,"Q_Id":65731855,"Users Score":0,"Answer":"gRPC on-the-wire does not support fire-and-forget RPCs. 
The client is free to ignore the results and the server can respond immediately before actually processing the request.","Q_Score":1,"Tags":"architecture,grpc,grpc-python","A_Id":65816083,"CreationDate":"2021-01-15T07:11:00.000","Title":"Sending grpc Requests without waiting for response?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am working on my personal project for an oscilloscope, where I send bulk data from MCU(STM32) to PC through USB Full-Speed (12 Mbits). I would like to communicate with the device(STM32) by using PySerial. I found out that USB is half-duplex, where the host (PC) sets talking privileges with the device - \"speak when you're spoken to\". What I don't understand is how does the host set the talking privileges - Does my computer or pyserial automatically handle this, or do I have to do some handshaking protocol that needs to be implemented in code in both the host and device? I'm wondering since in the event both device and host are sending data, what happens to the data? Thank you!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":103,"Q_Id":65732165,"Users Score":0,"Answer":"On your PC you do not have to worry about the USB protocol. 
That is the responsibility of the USB stack in your OS and the associated USB device drivers.\nSo you just use PySerial to send and receive your data.","Q_Score":0,"Tags":"python,serial-port,usb,pyserial","A_Id":65733185,"CreationDate":"2021-01-15T07:37:00.000","Title":"Confused about PySerial with USB Full-speed half-duplex talking priority","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I am making a bot that is using cogs and has a couple different commands. I want this bot to only reply in one of the two bot commands channels of the server I\u2019m using it on. I have seen that i can use ctx.channel.id = whatever the Id is, but i would prefer the bot to not be able to respond in the channel at all, including to .help commands. I have seen people do this with on_message, but I\u2019m not sure how I would do that with cogs. Any help would be much appreciated. My intended result is basically to have the bot only respond in two channels, the two bot channels that i specify, to any commands including the .help command. Thanks!","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":2916,"Q_Id":65737898,"Users Score":1,"Answer":"The easiest way to do this is not actually via code but via permissions on the server. 
On your server you should find a role with the same name as your bot, whose permissions (including send messages) you can change for separate channels.","Q_Score":0,"Tags":"python,python-3.x,discord,discord.py,discord.py-rewrite","A_Id":65740071,"CreationDate":"2021-01-15T14:28:00.000","Title":"How can I restrict a bot to responding in certain channels with discord.py?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have tried every other solution to create a text file in C:\\windows but unfortunately, after confirming the user UAC permission and giving administrative access, windows still doesn't allow me to create this file; the PyCharm console outputs PermissionError: [Errno 13] Permission denied: 'C:\\textfile.log'.\nIs there a way to create a file in the C:\\windows root by entering the admin username and password in the Windows UAC?\nThank you!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":194,"Q_Id":65741250,"Users Score":0,"Answer":"Try the following:\n\nTry running it from outside of PyCharm; sometimes that can be the problem.\nRun the python file itself as administrator.","Q_Score":0,"Tags":"python,admin,root,uac,privileges","A_Id":65741441,"CreationDate":"2021-01-15T17:59:00.000","Title":"Elevate permission to create file in C:\\windows with Python 3.x","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a QTextTable and I want to insert a row at my current position.\nHow can I know which row the QTextCursor is in?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":46,"Q_Id":65741813,"Users Score":0,"Answer":"With help from @musicamente,\nthe solution for inserting a row in a 
QTextTable at the current position is:\ncursor.currentTable().insertRows(cursor.currentTable().cellAt(cursor).row()+1,1)","Q_Score":0,"Tags":"python,pyqt,pyqt5","A_Id":65742377,"CreationDate":"2021-01-15T18:38:00.000","Title":"QTextTable QTextCursor current row?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am requesting an API which sometimes gives a string that contains \"*\" characters.\nI have to post the output on discord, where ** makes the text bold.\nI want to see if a string contains any * and if so put a markdown \\ escape character in front of the *.\nHow can I accomplish this?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":48,"Q_Id":65742637,"Users Score":1,"Answer":"As @Random Davis rightly pointed out in the comments, you can use str.replace(\"*\",\"\\*\") and it will replace all the * occurrences.","Q_Score":0,"Tags":"python,python-3.x","A_Id":65742846,"CreationDate":"2021-01-15T19:45:00.000","Title":"If character in string put escape character \"\\\" in front of the found character","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have this problem when I try to import Pycryptodome.\nTraceback (most recent call last): File \"C:\\Users\\me\\Documents\\Python\\Python 3.8\\file.pyw\", line 17, in from Crypto.Cipher import AES File \"C:\\Users\\me\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\pycrypto-2.6.1py3.8-win-amd64.egg\\Crypto\\Cipher\\AES.py\", line 50, in from Crypto.Cipher import _AES\nAnd then:\nImportError: DLL load failed while importing _AES: %1 is not a valid Win32 application.\nI'm using Windows 64 bit with Python 64 bit 3.8.7. 
I installed Pycryptodome (version 3.9.9) with pip install pycryptodome. But when I tried to import AES from Pycryptodome, it errors out with the error above. Can anyone please tell me how to fix it? FYI, this is my first post on Stack Overflow, so if the post is missing anything, please tell me. Thanks!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":141,"Q_Id":65743181,"Users Score":0,"Answer":"oh silly me, need to install pycryptodome 3.8.2. Dumb mistake lol.","Q_Score":0,"Tags":"python,importerror,pycryptodome","A_Id":65744241,"CreationDate":"2021-01-15T20:33:00.000","Title":"Pycryptodome: ImportError: DLL load failed while importing _AES: %1 is not a valid Win32 application","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've been looking through the documentation, and I can't find any way to stop Ray after an answer is calculated within a tolerance level. Right now I'm writing the answer to a csv with a print('Ok to stop'), and then I kill the process manually. I'd like to stop all of the workers, and then have it automatically move on to another problem. Is there an error that I can raise that will make all of the workers stop?\nThanks.","AnswerCount":3,"Available Count":2,"Score":0.0665680765,"is_accepted":false,"ViewCount":3607,"Q_Id":65746090,"Users Score":1,"Answer":"If you start a ray cluster with ray.init, the cluster should be killed when your python program exists. If you start it with ray start, you can use the ray stop command to shutdown the cluster. 
You can also use ray stop --force to forcefully kill all processes left.","Q_Score":3,"Tags":"python,parallel-processing,ray","A_Id":65746188,"CreationDate":"2021-01-16T03:03:00.000","Title":"How do I stop a Python Ray Cluster","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've been looking through the documentation, and I can't find any way to stop Ray after an answer is calculated within a tolerance level. Right now I'm writing the answer to a csv with a print('Ok to stop'), and then I kill the process manually. I'd like to stop all of the workers, and then have it automatically move on to another problem. Is there an error that I can raise that will make all of the workers stop?\nThanks.","AnswerCount":3,"Available Count":2,"Score":0.0665680765,"is_accepted":false,"ViewCount":3607,"Q_Id":65746090,"Users Score":1,"Answer":"To stop a Ray cluster from Python, you can use ray.shutdown().","Q_Score":3,"Tags":"python,parallel-processing,ray","A_Id":65753014,"CreationDate":"2021-01-16T03:03:00.000","Title":"How do I stop a Python Ray Cluster","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to build a telegram bot. In that I need to know if anyone is registered with a email address. I checked the documentation but didn't found any answer. 
If it is possible with telegram core api please feel free to answer.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":104,"Q_Id":65746922,"Users Score":0,"Answer":"There is no possibility to register in Telegram using anything else except phone number.\nYou can NOT use email, social auth or any other kind of tokens.\nOnly phone number.","Q_Score":0,"Tags":"python,python-3.x,api,telegram,telegram-bot","A_Id":65761409,"CreationDate":"2021-01-16T06:02:00.000","Title":"How to programmatically check if a email is registered in the Telegram?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using turtle files containing biographical information for historical research. Those files are provided by a major library and most of the information in the files is not explicit. While people's professions, for instance, are sometimes stated alongside links to the library's URIs, I only have URIs in the majority of cases. This is why I will need to retrieve the information behind them at some point in my workflow, and I would appreciate some advice.\nI want to use Python's RDFLib for parsing the .ttl files. What is your recommended workflow? Should I read the prefixes I am interested in first, then store the results in .txt (?) and then write a script to retrieve the actual information from the web, replacing the URIs?\nI have also seen that there are ways to convert RDFs directly to CSV, but although CSV is nice to work with, I would get a lot of unwanted \"background noise\" by simply converting all the data.\nWhat would you recommend?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":160,"Q_Id":65748870,"Users Score":1,"Answer":"RDFlib's all about working with RDF data. 
If you have RDF data, my suggestion is to do as much RDF-native stuff that you can and then only export to CSV if you want to do something like print tabular results or load into Pandas DataFrames. Of course there are always more than one way to do things, so you could manipulate data in CSV, but RDF, by design, has far more information in it than a CSV file can so when you're manipulating RDF data, you have more things to get hold of.\n\nmost of the information in the files is not explicit\n\nBetter phrased: most of the information is indicated with objects identified by URIs, not given as literal values.\n\nI want to use Python's RDFLib for parsing the .ttl files. What is your recommended workflow? Should I read the prefixes I am interested in first, then store the results in .txt (?) and then write a script to retrieve the actual information from the web, replacing the URIs?\n\nNo! You should store the ttl files you can get and then you may indeed retrieve all the other data referred to by URI but, presumably, that data is also in RDF form so you should download it into the same graph you loaded the initial ttl files in to and then you can have the full graph with links and literal values it it as your disposal to manipulate with SPARQL queries.","Q_Score":1,"Tags":"python-3.x,rdflib,turtle-rdf","A_Id":66113979,"CreationDate":"2021-01-16T10:56:00.000","Title":"Workflow for interpreting linked data in .ttl files with Python RDFLib","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have been trying for a long time to find a solution to the scrapyd error message: pkg_resources.DistributionNotFound: The 'idna<3,>=2.5' distribution was not found and is required by requests\nWhat I have done:\n$ docker pull ceroic\/scrapyd\n$ docker build -t scrapyd .\nDockerfile:\nFROM 
ceroic\/scrapyd\nRUN pip install \"idna==2.5\"\n$ docker build -t scrapyd .\nSending build context to Docker daemon 119.3kB \nStep 1\/2 : FROM ceroic\/scrapyd \n---> 868dca3c4d94\nStep 2\/2 : RUN pip install \"idna==2.5\"\n---> Running in c0b6f6f73cf1\nDownloading\/unpacking idna==2.5\nInstalling collected packages: idna\nSuccessfully installed idna\nCleaning up...\nRemoving intermediate container c0b6f6f73cf1\n---> 849200286b7a\nSuccessfully built 849200286b7a\nSuccessfully tagged scrapyd:latest\nI run the container:\n$ docker run -d -p 6800:6800 scrapyd\nNext:\nscrapyd-deploy demo -p tutorial\nAnd get error:\npkg_resources.DistributionNotFound: The 'idna<3,>=2.5' distribution was not found and is required by requests\nI'm not a Docker expert, and I don't understand the logic. If idna==2.5 has been successfully installed inside the container, why does the error message require version 'idna<3,>=2.5'?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":95,"Q_Id":65751415,"Users Score":0,"Answer":"The answer is very simple. I finished my 3 days! torment. When I run the\nscrapyd-deploy demo -p tutorial \nthen I do it not in the created container, but outside it.\nThe problem was solved by:\npip uninstall idna\npip install \"idna == 2.5\"\nThis was to be done on a virtual server, not a container. I can't believe I didn't understand it right away.","Q_Score":0,"Tags":"python,docker,scrapy,scrapyd","A_Id":65753051,"CreationDate":"2021-01-16T15:42:00.000","Title":"scrapyd-deploy error: pkg_resources.DistributionNotFound","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Ok, I am relatively new to Python (more of a MATLAB \/ R \/Stata user). I previously installed Python on my computer from the Python website. 
Everything was running smoothly until I had to install Pytorch too. I tried installing it via pip to no avail, so I had to make a new installation of Python but this time with Anaconda.\nHowever, now I have a mess and I cannot load SciPy on Anaconda and I cannot load Pytorch in the regular Python I have. Having to run them separately is driving me insane. Is there a way that I can merge the two versions together or should I uninstall and stick to only one?","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":111,"Q_Id":65755978,"Users Score":0,"Answer":"Have you tried pip3 install pytorch?\nSometimes Python 2 is the main version. To use the Python 3 pip instead, you have to use pip3 install","Q_Score":1,"Tags":"python,anaconda,pytorch","A_Id":65756225,"CreationDate":"2021-01-16T23:58:00.000","Title":"Managing several versions of Python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"On a local machine there are two files in the same directory:\n\ntaskrunner.py\nnamefile.txt\n\ntaskrunner.py runs continuously (Python 3.x) and reads a single name from namefile.txt once per minute. I want to be able to have someone at a remote location SSH into the local machine and replace the old namefile.txt with a new namefile.txt without causing any collisions. It is entirely acceptable for taskrunner.py to work with the old namefile.txt information until the new namefile.txt is in place. 
What I do not want to have happen is:\n\nHave taskrunner.py throw an exception because namefile.txt is in the process of being replaced\n\nand\/or\n\nBe unable to insert the new namefile.txt because taskrunner.py locks out the remote access.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":37,"Q_Id":65756733,"Users Score":1,"Answer":"This is a typical situation where a lock is useful.\nYou need to create two copies of namefile.txt: let's call them namefile.txt and namefileOld.txt. You also need a locking mechanism that will allow fast updates. For complex operations, you can use Redis. For simple operations like yours, you can probably get away with an environment variable. Let's call it LOCK, which can take values True and False.\nWhen a person wants to write to namefile.txt, set LOCK to True. After the write is finished, overwrite nameFileOld.txt with the data from nameFile.txt and then set LOCK back to False.\nHow taskrunner.py should read the data:\n\nRead the LOCK value.\nIf LOCK == True, read from nameFileOld.txt\nelse, read from nameFile.txt","Q_Score":1,"Tags":"python,python-3.x","A_Id":65757068,"CreationDate":"2021-01-17T02:24:00.000","Title":"Remotely Access A Text File That Python Is Presently Accessing Locally","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a db.sql file and I tried inserting the data from it into my local MySQL using the command:\nmysql -u root -p chitfund < db.sql . It doesn't show any error and also doesn't print anything. But when I try to see the tables in my db, it shows no tables. I have the data in the form of .csv also. I tried inserting using mysql.connector, but it is not installing and throws an error. 
Is there any other way to insert the data using the sql or csv files?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":77,"Q_Id":65758287,"Users Score":0,"Answer":"The problem was with my .sql file. It didn't have any insert or create statements. I assumed it had the statements, and that was why the data was not being inserted. After adding all the insert statements for inserting the data, it worked properly","Q_Score":2,"Tags":"python,mysql,csv,mysql-connector-python","A_Id":65759059,"CreationDate":"2021-01-17T07:34:00.000","Title":"Insert data into mysql from sql file or using csv file","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am new to programming and coding, and I chose Python to be my first language.\nWhen I declare a variable, I notice that variable = \"x\" and variable = 'x' are the same. So can anyone tell me the difference between \"x\" and 'x'?","AnswerCount":2,"Available Count":1,"Score":0.2913126125,"is_accepted":false,"ViewCount":4187,"Q_Id":65758620,"Users Score":3,"Answer":"According to PEP 8 (the Python coding standard):\nIn Python, single-quoted strings and double-quoted strings are the same. This PEP does not make a recommendation for this. Pick a rule and stick to it. When a string contains single or double quote characters, however, use the other one to avoid backslashes in the string.
It improves readability.\nFor triple-quoted strings, always use double quote characters to be consistent with the docstring convention in PEP 257.\ne.g.\n\u2018single-quote\u2019 is the same as \u201csingle-quote\u201d\nYou will need a double quote when you need an apostrophe in the middle of the string,\ne.g. \u201cI\u2019m a good boy\u201d","Q_Score":0,"Tags":"python,python-3.x,string,variables,quotation-marks","A_Id":65758687,"CreationDate":"2021-01-17T08:29:00.000","Title":"Are there any differences in \" \" and ' ' in Python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"So I get this message right below the input cell when trying to fit the model:\nINFO:numexpr.utils:NumExpr defaulting to 8 threads.\nINFO:fbprophet:Disabling daily seasonality. Run prophet with daily_seasonality=True to override this.\nAfter setting daily_seasonality=True and running the cell again, a ValueError shows up saying that the parameter daily_seasonality is not recognized.\nThanks","AnswerCount":1,"Available Count":1,"Score":0.6640367703,"is_accepted":false,"ViewCount":1103,"Q_Id":65759977,"Users Score":4,"Answer":"I've had the same problem. Access the Anaconda Prompt for the environment that you are working with as admin.
It worked after I followed these steps:\n1- conda install -c conda-forge fbprophet\n2- conda remove --force fbprophet\n3- In your jupyter notebook, use pip: pip install fbprophet","Q_Score":0,"Tags":"python,valueerror,model-fitting,facebook-prophet","A_Id":65817878,"CreationDate":"2021-01-17T11:21:00.000","Title":"Value error when fitting Prophet() model - parameter not recognized","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a raspberry pi, a flask server, a flask client, and two different networks.\nWhen I connect a wifi adapter to the raspberry pi I can see that I have a new interface called \"wlan1\". Is there a way to run the server, for example, on \"wlan0\" and the client on \"wlan1\"?\nWhat I'm trying to do is run the server on a different network than the client (while both of them are on the pi).","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":87,"Q_Id":65760471,"Users Score":2,"Answer":"Server:\nFor the server part, you need to \"bind\" the listening socket to the IP address of wlan0.\n\nFind the IP address of wlan0 using ifconfig wlan0 or ip addr show dev wlan0 (e.g. 192.168.0.2)\nBind the Flask server to that IP address using app.run(host='192.168.0.2', port=80)\n\nIf you bind to 0.0.0.0, it will be reachable from all network devices.\nClient:\nA little bit more involved; take a look at how \"routing tables\" work for the theory.\n\nFind out the IP address of the server that your client will connect to (e.g. 93.184.216.34)\nFind out the IP address of the default gateway on the interface wlan1, for example with ip route (look for \"default via ... dev wlan1\"), e.g.
\"default via 192.168.1.1 dev wlan1\"\nAdd a route to that IP address via the gateway and interface, using route add 93.184.216.34 gw 192.168.1.1 dev wlan1\n\nNote that the routing table will affect all programs on the raspberry pi, not just your client application.","Q_Score":1,"Tags":"python,flask","A_Id":65761573,"CreationDate":"2021-01-17T12:17:00.000","Title":"Run flask file on wlan1 instead of wlan0","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am trying to create an AWS lambda function that help user collect data from a website (using selenium and headless-chromium). The website requires sms code verification during login so I need to get it from the user and pass it back to the AWS Lambda function\nthe flow will be:\n\nusername & password send to lambda function\nlambda function start, chromium auto login with username & password\nwaiting for sms code from user\nuser enter sms code, code pass to lambda function\nlambda function continue\n\nis it possible to do so? like the input() function when running python locally\nthanks!!\n*first question in stackoverflow! let me know if anything doesn't make sense","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":772,"Q_Id":65762905,"Users Score":0,"Answer":"We need at least two Lambdas.\nSecond Lambda:\n\nTakes OTP & UserId as input.\nWrites a record to DynamoDb with userId and OTP.\n\nFirst Lambda:\n\nLambda is invoked with user Id and password\nStart a selenium browser session.\nLogin and write a record to Dynamo.\nKeep checking Dynamo every second for an entry for userId with OTP in Dynamo.\nSet a timeout and complete Login Process.\n\nMain disadvantage of this approach is we have first Lambda running for entire time of login. 
But if we can't break the login and OTP process apart, I don't see another way.\nMaybe having an ECS Fargate task instead of a Lambda function may save some cost, as we can easily run multiple selenium browser sessions in a single ECS Task.","Q_Score":1,"Tags":"python,amazon-web-services,selenium,aws-lambda,headless-browser","A_Id":65763580,"CreationDate":"2021-01-17T16:07:00.000","Title":"AWS Lambda: Python user input possible?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"How can I modify a raw Unix timestamp so that it shows that it is 5 hours behind (as an example)? I'm looking to do this with javascript or python. The more lightweight the better. I'm basically looking for how to manually decode a given unix timestamp and change some of its numbers so that it gives me back a unix timestamp showing a different time. It would be even better if I could automatically adjust it to a user's personal time-zone using javascript\/python.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":56,"Q_Id":65765799,"Users Score":0,"Answer":"Convert the number of hours you want to offset by to seconds, and then add or subtract that from the Unix timestamp.
As far as getting the user's personal time zone, I'm not sure how you would do that without language-specific code.","Q_Score":0,"Tags":"javascript,python,timestamp,unix-timestamp","A_Id":65765871,"CreationDate":"2021-01-17T20:50:00.000","Title":"How to manually modify Unix TimeStamp so that its 5 hours behind UTC?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"How can I modify a raw Unix timestamp so that it shows that it is 5 hours behind (as an example)? I'm looking to do this with javascript or python. The more lightweight the better. I'm basically looking for how to manually decode a given unix timestamp and change some of its numbers so that it gives me back a unix timestamp showing a different time. It would be even better if I could automatically adjust it to a user's personal time-zone using javascript\/python.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":56,"Q_Id":65765799,"Users Score":0,"Answer":"How to choose a time 5 hours numerically smaller\nThis would be relevant if, for example, you were testing whether the submission of an exam answer is within 5 hours of the exam's start time.\nFor this, just subtract 5 * 3600 * 1000 from the Unix timestamp's numerical value.\nWhat you are actually proposing to do is extremely unwise\nYou seem to be planning to create a Unix timestamp of a different point in time which, when expressed as UTC but with the annotation \"UTC\" deleted, will match the local time display expected by a user who is 5 hours behind UTC. I can see why you are tempted to do this, but it is a very bad idea.\n\nUnix timestamps are not \"in UTC\" by default; they describe a point in time across all of space simultaneously.
If you shift the value of a Unix timestamp, it is no longer a Unix timestamp, just as (mass of car minus 50 kg) is no longer the mass of that car. The value is either the mass of a different car that is 50 kg lighter, or an incorrect value for the mass of the original car.\n\nUnix timestamps are unambiguous. Once you know that a variable contains a Unix timestamp, you can stop worrying about any ifs, buts or maybes. It is solid and definite. What you are creating is a horrible thing which looks like a Unix timestamp of a timepoint, but it is not. What variable name are you going to give it to prevent confusion? You might give the property a new name, such as goalieTimeStamp, which is distinguished from Unix timestamps by being displaced by 5 hours.\n\nIf a person is 5 hours behind UTC now (in January), that person will likely be a different number of hours behind UTC in summertime. This is a mess.\n\n\nI think you are doing this so that you can display a local time nicely. Choose a different, better, way to achieve this.\nYou should use the localisation system in the relevant language to obtain and display the local time, which will depend not only on the location of the user, but also on the time of year. This will also allow you to deal with languages etc., if you need to.\nAnd throughout your code you will have a clear distinction between the timepoint of your event (invariant across space) and how a local user will express that time in their timezone, time of year and language.\nA good library for this in Javascript is moment.js.
It is rather heavyweight, but this is because the task is much more heavyweight than it first seems!","Q_Score":0,"Tags":"javascript,python,timestamp,unix-timestamp","A_Id":65766210,"CreationDate":"2021-01-17T20:50:00.000","Title":"How to manually modify Unix TimeStamp so that its 5 hours behind UTC?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"The latest anaconda version is 2020.11.\nI am using anaconda python 2020.7. Will conda update --all be good enough to upgrade my existing version to 2020.11? Are there things that I am missing out on if I don't install 2020.11 directly from the installation file?","AnswerCount":3,"Available Count":1,"Score":0.1325487884,"is_accepted":false,"ViewCount":1549,"Q_Id":65767624,"Users Score":2,"Answer":"You can easily update Anaconda to the latest version.\nEnter these commands:\nconda update conda\nconda update anaconda=VersionNumber","Q_Score":3,"Tags":"python,anaconda,conda","A_Id":65867706,"CreationDate":"2021-01-18T01:06:00.000","Title":"Is \"conda update --all\" good enough to upgrade to latest anaconda version?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am new to using the terminal on a Mac.
When I type any python3 command it only checks the users folder on my PC. How can I change the directory to open a folder in the users section and check for the .py file there?","AnswerCount":4,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":41,"Q_Id":65767823,"Users Score":0,"Answer":"You have to use the cd command to change to your folder first.","Q_Score":0,"Tags":"python,python-3.x,macos,terminal","A_Id":65767862,"CreationDate":"2021-01-18T01:44:00.000","Title":"How to select a folder in users directory","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"After moving miniconda3 to another path, and after fixing all paths in:\na, ~\/.bashrc\nb, ~\/bin\/activate\nc, ~\/bin\/conda\nd, ~\/etc\/profile.d\/conda.sh\nI have a working conda and all my installations have been preserved. The only things that work abnormally are:\n\nconda env list returns y\/N and neither answer returns my env list, yet conda info --envs works perfectly and returns all my env names.\npip --version returns an error message\n\n\nbad interpreter: no such file or directory\n\nCan anyone tell me, besides the four files above, if there is any file that I need to modify?\nThe list of files that might be related:\n\n~\/miniconda3\/condabin\/conda\n~\/miniconda3\/bin\/conda\n~\/miniconda3\/bin\/conda-env\n~\/miniconda3\/bin\/activate\n~\/miniconda3\/bin\/deactivate\n~\/miniconda3\/etc\/profile.d\/conda.sh\n~\/miniconda3\/etc\/fish\/conf.d\/conda.fish\n~\/miniconda3\/shell\/condabin\/Conda.psm1\n~\/miniconda3\/shell\/condabin\/conda-hook.ps1\n~\/miniconda3\/lib\/python3.8\/site-packages\/xontrib\/conda.xsh\n~\/miniconda3\/etc\/profile.d\/conda.csh\n~\/.bashrc","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":817,"Q_Id":65771927,"Users Score":0,"Answer":"Maybe you could try to
modify your new path in .\/bin\/pip and .\/bin\/conda-env","Q_Score":7,"Tags":"python,anaconda,conda","A_Id":67412174,"CreationDate":"2021-01-18T09:34:00.000","Title":"\"conda env list\" always returns \"y\/N\" and neither answer returns anything, yet \"conda info --envs\" works perfectly?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I recently started working with Yaskawa's OPC UA Server provided on its robot's controller.\nI'm connecting to the server via Python's OPCUA library. Everything works well, but when my code crashes or when I will close terminal without disconnecting from the server I cannot connect to it once again.\nI receive an error from library, saying:\nThe server has reached its maximum number of sessions.\nAnd the only way to solve this is to restart the controller by turning it off and on again.\nDocumentation of the server is saying that max number of sessions is 2.\nIs there a way to clear the connection to the server without restarting the machine?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":221,"Q_Id":65773379,"Users Score":1,"Answer":"The server keeps track of the client session and doesn't know that your client crashed.\nBut the client can define a short enough SessionTimeout, after which the server can remove the crashed session.\nThe server may have some custom configuration where you can define the maximum number of sessions that it supports. 2 sessions is very limited, but if the hardware is very limited maybe that is the best you can get. 
See the product documentation about that.","Q_Score":1,"Tags":"python,opc-ua","A_Id":65776760,"CreationDate":"2021-01-18T11:09:00.000","Title":"OPC UA zombie connection","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm trying to make a database management program in Python and Tkinter that relies on user input for updating it. The script goes like this:\n\nUser searches for records.\nSelects an entry in treeview widget.\nSelection populates all of the Entry fields.\nUser changes one or more values in them.\nProgram updates the database with the new values\n\nMy problem is this: How do I compare the old value (that came from selecting in treeview) and the new value (that the user changed)?\nI have an idea for a loop that 'scans' the new and old values and based on that executes an UPDATE query on the changed column, but I can't get the value that is in the Entry widget.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":200,"Q_Id":65775759,"Users Score":0,"Answer":"You need to let the database do the work, i.e. you let the database search for the record and update it based on a value.
This value must be unique, an ID.\nSo what should happen:\n\nThe user chooses the record and the new value\nYou have the ID of the record and the new value\nCall your ORM's update function, which should accept the ID of the record and the new value","Q_Score":0,"Tags":"python,sql,firebird","A_Id":65776192,"CreationDate":"2021-01-18T13:48:00.000","Title":"Update database based on user input","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to make a database management program in Python and Tkinter that relies on user input for updating it. The script goes like this:\n\nUser searches for records.\nSelects an entry in treeview widget.\nSelection populates all of the Entry fields.\nUser changes one or more values in them.\nProgram updates the database with the new values\n\nMy problem is this: How do I compare the old value (that came from selecting in treeview) and the new value (that the user changed)?\nI have an idea for a loop that 'scans' the new and old values and based on that executes an UPDATE query on the changed column, but I can't get the value that is in the Entry widget.","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":200,"Q_Id":65775759,"Users Score":0,"Answer":"Your widget should do it, not the database.\nWhat would be triggering the database writing anyway? Would it be some \"save\" button in the window?
Or should it be done immediately after the node of the tree was renamed (probably the user pressed the ENTER key after keying in the new name, but different tree widgets can have different user interaction methods)?\nSo, the latter option would be like this:\nWhen the user starts renaming the node, when it enters the tree node editing mode, your tree widget notifies you about it, and then you read the value (before the user changed it) and save it into a variable. Then you wait for the user to stop editing the node.\nWhen the tree widget informs you that the user completed the editing, you check what kind of completion it was. If it was CANCEL-like completion, then you do nothing.\nIf it was OK-like completion, then read the now changed name of the node into another variable.\nYou compare the two variables and if the values are different, you command the database to UPDATE the row corresponding to the edited node and then to COMMIT the transaction.\nAs @MehdiKhlifi said earlier, you would have to have an ID column in your table (read about SQL sequences; in Firebird\/Interbase this was called a generator before SQL standardized it, it is the same thing). 
You would have to somehow store those IDs in the tree nodes (read your widget documentation for how to do it), so you would know which table row corresponds to the just-edited node.\n\nAlternatively, you may consider the whole window as a data frame, not one node.\nThen you would have to make two functions:\n\nreading the tree from some buffer (array, hashmap\/dictionary, object or something)\ncreating a new empty buffer and writing the tree to it\n\nWhen the form is opened you create one buffer and read it from the database, then you read the tree from the buffer.\nWhen the user presses the SAVE button, you write the tree into the new buffer, then you compare those two buffers, and then for every changed item you do the SQL update as described above, then you do the SQL commit for all the updated rows.\n\nNotice that usually the user can do more than renaming specific nodes: often the user can add new tree nodes, delete nodes, or move nodes to a different branch (prune-and-paste).","Q_Score":0,"Tags":"python,sql,firebird","A_Id":65798540,"CreationDate":"2021-01-18T13:48:00.000","Title":"Update database based on user input","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I can't seem to resolve the following pylint error. I checked all the usual suspects and historical Stack Overflow questions but no answer seemed to fit the bill on this one - any thoughts?:\n(value error) invalid \\x escape at position 4\nvariable = b'\\xC0\\xPR\\x89\\xE1\\xPQ\\xRP\\xB8\\x3B\\x00\\x00\\x00\\xCD\\x80'","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":47,"Q_Id":65778100,"Users Score":1,"Answer":"You can use Python hex escapes only with hexadecimal characters.\nThe problems in your variable are \\xPR, \\xPQ, \\xRP.\nYou can only escape hex values between \\x00 and \\xFF.\nAn example is
b\"\\x41\\x42\\x43\".","Q_Score":0,"Tags":"python,python-3.x,hex","A_Id":65780142,"CreationDate":"2021-01-18T16:06:00.000","Title":"Invalid \\x Escape with Byte Notation","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a pandas DataFrame database of location and postal code addresses. But postal codes have been interpreted from Excel reading. For example, in France postal addresses are a department code followed by city code, e.g. 75600 (75 = 'Paris region', 600 = 'inner city').\nBut in some first postal codes, e.g. 01200 it have interpreted like 1200. How can I search integer values below 10000 and modify them? Or how to preserve the first zero. How to search and replace in a dataframe and use the content (to modify it)?","AnswerCount":4,"Available Count":1,"Score":0.049958375,"is_accepted":false,"ViewCount":651,"Q_Id":65779595,"Users Score":1,"Answer":"Solution of df = df[0].apply(lambda x: '0' + str(x) if x < 10000 else str(x)) is perfect. It convert the code to full string and then I can find GPS coordinates correspondant to national postal infos.\nMany thanks.","Q_Score":0,"Tags":"python,pandas,dataframe,leading-zero","A_Id":65781467,"CreationDate":"2021-01-18T17:47:00.000","Title":"Pandas dataframe search and modify leading zero on postal codes","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a Google Sheet with a list of products for an online store, but no prices. Is there a script that will scrape the website and add the prices for each into my Google Sheet? 
To add to the complexity, the website is only accessible with a login.\nI'm very new to programming so any help would be appreciated.\nThanks","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":39,"Q_Id":65780501,"Users Score":0,"Answer":"There is no such ready-made script, but there are libraries that help you scrape data from websites.\nLet's say you pick Selenium.\nThe script would look something like this:\nOpen a web browser, navigate to your site, log in, and then do the stuff you need to do. It does exactly what a normal user could do: click on an element, input text, select an item, etc...","Q_Score":0,"Tags":"javascript,python,web-scraping","A_Id":65780779,"CreationDate":"2021-01-18T18:56:00.000","Title":"Scraping data off a website that requires a login","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Django newbie here.\nI have an existing Django virtual environment that was set up as part of a tutorial,\nusing Django 2.2 and Vagrant + VirtualBox on my laptop.\nThere was 1 project created for the tutorial above.\nI will now be starting another Django tutorial,\nwherein I will be creating a few more projects.\nWhat is the best way to go about this?\nShould I create a new virtual environment for tutorial #2?\nOr use the existing environment that was set up for tutorial #1?\nFYI - tutorial #2 uses Django 1.11\nThanks a lot!","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":470,"Q_Id":65784548,"Users Score":1,"Answer":"It is always good practice to create a different virtualenv for each django project.
For example, if you have multiple django projects that are using one virtualenv, and you want to host one of the django apps on a platform like Heroku, which will require you to create a requirements.txt file for python apps, then when you run pip freeze to get the requirements, you will find out that there are many packages in that env that are not required by your current project. And installing all those packages on your Heroku might make you run out of space before you know it. So try to keep a separate virtualenv for each project, and a separate requirements.txt as well.","Q_Score":0,"Tags":"python,django","A_Id":65787547,"CreationDate":"2021-01-19T02:01:00.000","Title":"Django Question - Should i start a new project in the same virtual environment?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am creating and training a TensorFlow model in Google Cloud in their JupyterLab AI Notebooks but for some reason I cannot find a way to save my model after it's been created.\nTypically, I'd use created_model.save(str('\/saved_model_file')) in Colab to save to the local directory.\nBut JupyterLab in Google Cloud responds with a \"Permission Denied\" error. I've tried giving all possible maximum permissions in IAM, and I'm the only person on the account. But the error persists.\nBut I do seem capable of saving blobs to Buckets by using blob.upload_from_filename(source_file_name) or blob.upload_from_string(source_file_name), and saving to buckets seems like a more appropriate strategy.\nBut neither one of these will take the trained model created by TensorFlow since it's more of a function and not a file type they seem to be looking for.
The tutorials seem to casually mention that you should save your model to a bucket but completely neglect to provide any simple code examples; apparently I'm the only guy on earth who wasn't born knowing how to do this.\nIt would be great if someone could provide some code examples on how to save a TensorFlow model to a bucket. I also need this to be done automatically by the python code. Thanks!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":182,"Q_Id":65786211,"Users Score":0,"Answer":"It seems you just need to create the path with the OS module first; then the TF function will work. This seems kind of odd, as other platforms I've used let the TF function create the path by itself. Apparently Google Cloud doesn't like that, perhaps there's some user permission buried in a hierarchy somewhere that is causing this problem.\n\nMODEL_NAME = 'model_directory'\nmodel_version = int(time.time())\nmodel_path = os.path.join(MODEL_NAME, str(model_version))\nos.makedirs(model_path)\ntf.saved_model.save(model, model_path)","Q_Score":0,"Tags":"python,tensorflow,google-cloud-platform,jupyter-notebook","A_Id":65803372,"CreationDate":"2021-01-19T06:00:00.000","Title":"Google Cloud can't save Tensorflow model from Jupyter Notebook","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"So, I am really confused about how nginx works with docker.\nSuppose I have a python flask container website hosted on two servers and I am using nginx to load balance from a third server, say bastion.\nSo, every time I visit the website, will a new docker flask instance\/image be created to serve the client?
Or are all requests served from the one flask image?\nIf yes, where can I find the names of the new instances which are created.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":22,"Q_Id":65789586,"Users Score":1,"Answer":"First of all, you seem to be confused about the concept of images in docker. For your flask application there should only be 1 image, and there can be any number of containers which are running instances of this image.\nYou can see all running instances (containers) with docker ps.\nAnd no, generally speaking, there will not be a new container for every request.","Q_Score":0,"Tags":"python,docker,nginx,flask,nginx-reverse-proxy","A_Id":65789804,"CreationDate":"2021-01-19T10:24:00.000","Title":"Is a new docker instance\/image created everytime a web request arrives?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I have an Android application made in Java and I need to call a Python script and pass some parameters to its function. When I make the apk of this Android app, 
how can I make the Android device execute the python script in it?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":34,"Q_Id":65789961,"Users Score":0,"Answer":"You can make a server out of python as a backend with flask or django, and then call the server whenever your app needs to run the python script.","Q_Score":1,"Tags":"java,python","A_Id":65792459,"CreationDate":"2021-01-19T10:47:00.000","Title":"Run python script in Android aplication","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I need a query for SCD2 in MySQL using:\nDATA STRUCTURE:\nid, data, start_date, end_date\nif the record_id exists:\n-update the record's end_date,\n-and create a new record with the new data\nelse:\n-insert a new record.\nCan I use MySQL CASE for this?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":33,"Q_Id":65790397,"Users Score":1,"Answer":"You don't need an end_date to store historic data;\nWhenever you want to store a new address, you simply add it with the start_date.\nYou retrieve history by\nSELECT * FROM table ORDER BY start_date\nOr just the last address by\nSELECT * FROM table ORDER BY start_date DESC LIMIT 1","Q_Score":0,"Tags":"mysql,python-3.x","A_Id":65803143,"CreationDate":"2021-01-19T11:13:00.000","Title":"Mysql do a query if case 1, do another query if case 2","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I know that detach() is used for detaching a variable from the computational graph. In that context, are the following expressions x = x - torch.mean(x, dim=0).detach() and x = x - torch.mean(x, dim=0) equivalent?
I just want to subtract the mean out, and don't want to pass gradients through the average calculation.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":98,"Q_Id":65791620,"Users Score":2,"Answer":"If you do not detach the mean, then you have lateral dependencies between all elements in the batch (dim=0) when estimating the gradient.","Q_Score":0,"Tags":"python,deep-learning,pytorch","A_Id":65792135,"CreationDate":"2021-01-19T12:34:00.000","Title":"Functionality of `detach()` in PyTorch for this specific case","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"After trying to create a requirements file with pip freeze > requirements.txt from my virtual environment, not all of the required imports get listed. Am I doing something wrong or is there another way to list them all?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":96,"Q_Id":65792054,"Users Score":1,"Answer":"Found out the problem was that PyCharm creates a virtual environment already and I was creating a new one on top of that.","Q_Score":0,"Tags":"python,pip","A_Id":65795610,"CreationDate":"2021-01-19T13:02:00.000","Title":"Pip freeze doesn't list all required packages","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am building a django app in which the user has to write a sentence in a text box. This sentence then gets sent to the server and received by it. After that the user has to click on continue and gets to another html page. Here the user has to record audio of a word he says. The word is then turned into a string and after that sent to the server. 
Now I would like the function in views.py to find out if the word the user said is in the sentence the user wrote before. But the sentence is only available in the first function, the one that receives it after it is sent. I know I could store the sentence first, but is there another way?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":27,"Q_Id":65795652,"Users Score":1,"Answer":"Yes, there are at least two ways. The first is to use a model (or maybe a file) to store the value.\nThe second uses a bit of HTML: an input with type=\"hidden\". Your first function receives the text and redirects the user to another page, passing the text as an argument; inside that template, store the text in a hidden input, and when the button is clicked, send both the voice data and the hidden text value to the new function.","Q_Score":0,"Tags":"python,python-3.x,django","A_Id":65795834,"CreationDate":"2021-01-19T16:33:00.000","Title":"django app how to get a string from a function before","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I want to subclass QtCore.QSortFilterProxyModel, and I found that it has two methods that are very similar in function:\nrowCount() and hasChildren()\nI can use rowCount()==0 to determine whether there are children. Why do I need a separate method hasChildren()?\nHow are their roles different? Is hasChildren() necessary?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":50,"Q_Id":65796722,"Users Score":2,"Answer":"rowCount() is not similar to hasChildren(), since the first indicates the number of rows and the second whether there are children. As you pointed out, only when you compare rowCount() with zero might it seem that they are equivalent. 
For example, in the case that the model does not have columns (columnCount() == 0) but has rows (rowCount() > 0), will it have children? Well, no; that's why in various models the QModelIndex is verified to be valid and the number of columns or rows is checked to be greater than zero.\nSo if you want to verify that a QModelIndex has children, it is better to use hasChildren(), which is also more readable.","Q_Score":0,"Tags":"python,qt,pyqt5,qt5,pyside2","A_Id":65796825,"CreationDate":"2021-01-19T17:43:00.000","Title":"What is the difference between rowCount() and hasChildren()?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have training data with 3961 different rows and 32 columns that I want to fit to a Random Forest and a Gradient Boosting model. While training, I need to fine-tune the hyper-parameters of the models to get the best AUC possible. To do so, I minimize the quantity 1-AUC(Y_real,Y_pred) using the Basin-Hopping algorithm described in Scipy; so my training and internal validation subsamples are the same.\nWhen the optimization is finished, I get for Random Forest an AUC=0.994, while for the Gradient Boosting I get AUC=1. Am I overfitting these models? How could I know when an overfitting is taking place during training?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":201,"Q_Id":65797412,"Users Score":0,"Answer":"To know if you are overfitting you have to compute:\n\nTraining set accuracy (or 1-AUC in your case)\nTest set accuracy (or 1-AUC in your case) (you can use a validation data set if you have it)\n\nOnce you have calculated these scores, compare them. If the training set score is much better than your test set score, then you are overfitting. 
This means that your model is \"memorizing\" your data, instead of learning from it to make future predictions.\nTo know if you are overfitting, you always need to do this process. However, if your training accuracy or score is too perfect (e.g. accuracy of 100%), you can sense that you are overfitting too.\nSo, if you don't have training and test data, you have to create it using sklearn.model_selection.train_test_split. Then you will be able to compare both accuracy. Otherwise, you won't be able to know, with confidence, if you are overfitting or not.","Q_Score":0,"Tags":"python,scikit-learn,overfitting-underfitting","A_Id":65799392,"CreationDate":"2021-01-19T18:31:00.000","Title":"How to know when an overfitting is taking place?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I run a cell in my IPython notebook and python3's memory usage goes up by ~100MB. After I run it again, python3's memory usage again goes up by ~100MB. After a few runs, my computer has run out of memory and programs are crashing.\nI can fix this by resetting the IPython kernel, but then I lose the state of my entire notebook. 
Is there a way to clear the memory used by that particular cell when I re-run the cell, so that old runs don't just accumulate until my computer crashes?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":317,"Q_Id":65799275,"Users Score":1,"Answer":"Adding import gc; gc.collect() to the beginning of my cell solves the issue.","Q_Score":1,"Tags":"python,memory-management,jupyter-notebook,out-of-memory,ipython","A_Id":65799377,"CreationDate":"2021-01-19T20:45:00.000","Title":"How to clear memory after running cell in IPython notebook","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a cypher projection that used algo.unionFind in Neo4j. However, that algorithim has been deprecated. My query was:\nCALL algo.unionFind('MATCH (n) WHERE n.dtype=\\\"VALUE\\\" RETURN id(n) AS id','MATCH p=(n)-[]-(m) WHERE n.dtype=\\\"VALUE\\\" AND m.dtype=\\\"VALUE\\\" RETURN id(n) AS source, id(m) AS target', {write:true, partitionProperty:\\\"partition\\\", graph:'cypher'}) YIELD nodes, setCount, loadMillis, computeMillis, writeMillis\nI was hoping to find an equivalent approach with the Graph Data Science Library that runs the query and writes a new property partition in my nodes.\nAny help would be greatly appreciated!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":130,"Q_Id":65799737,"Users Score":0,"Answer":"The algorithm has been renamed to gds.wcc.write in the new GDS library.","Q_Score":0,"Tags":"python,neo4j,cypher,py2neo,graph-data-science","A_Id":65807968,"CreationDate":"2021-01-19T21:24:00.000","Title":"Neo4j algo.unionFind equivalent with new Graph Data Science Library","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and 
Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am trying to create an autocomplete function for a few entry fields; however, when I enter some text and select an autocomplete suggestion, the automatic loading of text into the other entry fields causes said entry fields' auto-complete functions to fire off as well, leading to a circle of autocompletion.\nSo, I would want to detect if the text being entered into the entry field was typed by the user or loaded in by the .set() function.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":29,"Q_Id":65801576,"Users Score":0,"Answer":"There's nothing built-in. You can set up your own triggers to set a flag the first time a user presses a key, however.\nOr, if you're just trying to stop some behavior when you programmatically insert text, set a flag, insert the text, and then unset the flag.","Q_Score":0,"Tags":"python,tkinter","A_Id":65801713,"CreationDate":"2021-01-20T00:39:00.000","Title":"Is there any way to check if text in an entry field was loaded in using the .set() function or was typed by the user?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am looking to delete a photo sent by my bot (reply_photo()), I can't find any specific reference to doing so in the documentation, and have tried delete_message(), but don't know how to delete a photo. 
Is it currently possible?","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":410,"Q_Id":65803490,"Users Score":0,"Answer":"You need to have the chat_id and the message_id of that message sent by bot, then you can delete using context.bot.delete_message(chat_id, message_id).\nNote: Bot cannot delete a message if it was sent more than 48 hours ago.","Q_Score":0,"Tags":"python,telegram,telegram-bot,python-telegram-bot","A_Id":65804336,"CreationDate":"2021-01-20T05:18:00.000","Title":"Telegram Bot delete sent photos?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am looking to delete a photo sent by my bot (reply_photo()), I can't find any specific reference to doing so in the documentation, and have tried delete_message(), but don't know how to delete a photo. Is it currently possible?","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":410,"Q_Id":65803490,"Users Score":0,"Answer":"It's currently possible in Telegram API, not the Bot API unfortunately. 
It's a shame :(","Q_Score":0,"Tags":"python,telegram,telegram-bot,python-telegram-bot","A_Id":65803544,"CreationDate":"2021-01-20T05:18:00.000","Title":"Telegram Bot delete sent photos?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"How do I print the data frame after \"pycaret setup\"? That is, after applying the one-hot encoder and other changes.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":109,"Q_Id":65804163,"Users Score":0,"Answer":"get_config() can help in general.\nFor example, in the clustering module, after setup() is run, you can get the transformed data with get_config('X').","Q_Score":0,"Tags":"python,pycaret","A_Id":67589240,"CreationDate":"2021-01-20T06:29:00.000","Title":"Want to print out DataFrame after applying the pycaret setup changes","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I execute pip install cassandra-driver successfully,\nand when I type\npython -c 'import cassandra; print (cassandra.__version__)' I get 3.24.0.\nBut when I import cassandra from a Jupyter notebook I get:\nModuleNotFoundError Traceback (most recent call last)\n in \n----> 1 import cassandra\nModuleNotFoundError: No module named 'cassandra'\nI'm using Python 3 on Windows 10.\nSo why is it not able to access cassandra as in cmd?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":72,"Q_Id":65806208,"Users Score":0,"Answer":"Execute !pip install cassandra-driver in the Jupyter notebook,\nthen import cassandra\nworks fine.","Q_Score":0,"Tags":"python,cassandra,jupyter-notebook","A_Id":65808483,"CreationDate":"2021-01-20T09:00:00.000","Title":"import cassandra module 
from a Jupyter notebook","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am retokenizing some spaCy docs and then I need the dependency trees (\"parser\" pipeline component) for them.\nHowever, I do not know for certain if spaCy handles this correctly. I could not find any info about how the retokenizer works in the docs or the spaCy tutorial. The only thing I found is the original retokenizer Cython source code, and it does handle the dependencies; however, it looks like it only adjusts them rather than redoing the analysis.\nSo I need to know if I can trust it for any weird retokenizations I might make, or whether I have to build the dependency tree again.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":124,"Q_Id":65807884,"Users Score":1,"Answer":"No, retokenizing doesn't re-run any pipeline components.\nMerging preserves the dependency of the root node in the merged span by default, but you can override it, and any other attributes, if you want. For splitting, you need to provide the heads and deps in the attributes if you want them to be set. Other attributes are also unset unless you provide them, except for the first token of the split, which keeps some of its original annotation.\nIf you don't need the parse to decide what to retokenize, it would probably be easiest to put the retokenizing component before the parser in the pipeline. Otherwise you can run the parser again after retokenizing. 
Any existing sentence starts would be preserved, but everything else could potentially be modified.\nBe aware that the parser may not perform well on retokenized texts because it's only been trained with the default tokenization.","Q_Score":0,"Tags":"python,nlp,spacy","A_Id":65815695,"CreationDate":"2021-01-20T10:40:00.000","Title":"Does spaCy retokenizer do the dependency parsing again?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am currently writing a Scrapy Webcrawler that is meant to extract data from a site's page and append those data to an existing Excel (\".tmp.xlsx\") file. The file comes with prepopulated column headers like \"name\", \"country\", \"state\", \"zip code\", \"address\", \"phone number\". The sites I will be scraping most times won't have data to populate all columns. Some can have data for just \"country\", \"state\", \"zip code\" and \"phone number\".\nI need help setting up my pipelines.py in a way whereby I will be appending to the file based on the type of data I get from the site I'm crawling.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":122,"Q_Id":65808411,"Users Score":0,"Answer":"One option (which may not be what you are looking for) is to just append the data to a CSV (using Scrapy's built-in CsvItemExporter). 
Then in the close_spider method, convert it to an Excel file (using, e.g., pandas).","Q_Score":1,"Tags":"python,excel,pandas,selenium,scrapy","A_Id":68321758,"CreationDate":"2021-01-20T11:13:00.000","Title":"How to append data to an existing Excel file based on the input?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I hope you can help me. I have a static website hosted on Heroku and I would like to implement a Python Script to be executed when a button is clicked. So, just as a reference you would have:\n\nA text field\nA button\nAnother text field\n\nThe idea is that you enter some text in the first text field, you click the button calling the Python Script, and then print the result coming from the Python Script in the second text field.\nHow would you implement such technology? Which services should be used to achieve the result?\nI think that the script should be hosted somewhere and be called via an API but I do not really know how to do it. I hope you can help me.\nThanks!","AnswerCount":3,"Available Count":2,"Score":0.0665680765,"is_accepted":false,"ViewCount":49,"Q_Id":65809728,"Users Score":1,"Answer":"I would use Flask or Django. In Flask you simply add a name=\"your_variable\" attribute in your HTML code, and then you can read the value with request.form[\"your_variable\"] in your Python script.","Q_Score":1,"Tags":"python,html,heroku,interaction","A_Id":65809930,"CreationDate":"2021-01-20T12:38:00.000","Title":"How can I run a Python script inside a webpage?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I hope you can help me. 
I have a static website hosted on Heroku and I would like to implement a Python Script to be executed when a button is clicked. So, just as a reference you would have:\n\nA text field\nA button\nAnother text field\n\nThe idea is that you enter some text in the first text field, you click the button calling the Python Script, and then print the result coming from the Python Script in the second text field.\nHow would you implement such technology? Which services should be used to achieve the result?\nI think that the script should be hosted somewhere and be called via an API but I do not really know how to do it. I hope you can help me.\nThanks!","AnswerCount":3,"Available Count":2,"Score":0.0665680765,"is_accepted":false,"ViewCount":49,"Q_Id":65809728,"Users Score":1,"Answer":"You have to use backend for your purpose. When a user clicks your button some data would be collected by your backend, handled and showed to user with the help of API. You can start with learning a little bit of Flask and learning Django later for some bigger projects.","Q_Score":1,"Tags":"python,html,heroku,interaction","A_Id":65809806,"CreationDate":"2021-01-20T12:38:00.000","Title":"How can I run a Python script inside a webpage?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have an interactive notebook with a few widgets, that displays a plot (+other data) based on the information given by the widgets. The plot is only displayed if a Checkbox is checked.\nEverything works well except one thing: the Checkbox has a True value by default. 
When loading the notebook for the first time, it appears as checked but the plot is not displayed -- if I interact with the widgets in any other way (either by re-clicking this checkbox twice, or by modifying some of the other widgets), then the plot is displayed.\nIs there a way to have the plot displayed at the beginning without waiting for the user to interact?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":274,"Q_Id":65810840,"Users Score":0,"Answer":"Start by creating your checkbox with the False value:\ncb = widgets.Checkbox(value=False)\nThen, once you have set up your observe handlers, change the value of the checkbox to True in code, and this should trigger your callback:\ncb.value = True","Q_Score":1,"Tags":"jupyter-notebook,ipython,ipywidgets","A_Id":65825990,"CreationDate":"2021-01-20T13:49:00.000","Title":"Plot in a Jupyter notebook depending on a true\/false checkbox","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"If I run a Python script where I declare 6 processes using multiprocessing, but I only have 4 CPU cores, what happens to the additional 2 processes which cannot find a dedicated CPU core?\n\nHow are they executed?\nIf the two additional processes run as separate threads on the existing cores, will the GIL not stop their execution?\n\n#Edit 1 - 21st Jan 2021\nI have mixed up threads and processes in the question I asked. Since I have better clarity on the concept, I would rephrase question 2 as follows (for any future reference):\nIf the two additional processes run in parallel with two other processes on the existing cores, will the GIL not stop their execution?\nAns: The GIL does NOT affect processes; it allows only one thread to run at a time, but there is no such restriction on processes. 
The system scheduler manages how the additional two processes will run on the existing cores.","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1177,"Q_Id":65810983,"Users Score":4,"Answer":"First, you are mixing up threads and processes: in Python only threads, not processes, have to share a lock on their interpreter.\nIf you are using the multiprocessing library, then you are using Python processes, each of which has its own interpreter.\nWhen you are using Python processes, their execution is managed by your operating system scheduler, in the same manner as every other process on your computer.\nIf you have more processes than CPU cores, then the extra processes wait in the background to be scheduled.\nScheduling usually happens when another process terminates, waits on IO, or periodically with clock interrupts.","Q_Score":2,"Tags":"python,multiprocessing,cpu-cores","A_Id":65811192,"CreationDate":"2021-01-20T13:57:00.000","Title":"Multiprocessing in python vs number of cores","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to write a program for homework using recursion to search for a word in a matrix (2x2 or more); it can go from left to right or from up to down (no other directions). For example, if I am searching for ab in the matrix [['a','b'],['c','d']], the program should return the direction the word is written in (across), the starting index (0), the ending index (2), and the index of the row or column (0).\nMy problem is that I have the idea of the recursion, but I can't implement it. 
I tried to break the problem down into smaller problems, like searching for the word in a given row. I started by thinking of the smallest case, which is a 2x2 matrix: at the first row and column, I need to search one to the right and one to the bottom of the first char, check if they are equal to my given string, and then give my recursion function a smaller problem with the index+1. However, I can't think of what my function should return at the base case of the recursion. I have been trying to solve it and think of ways to do it for two days, and I can't code what I think about or draw.\nNote that I can't use any loops. I would really appreciate it if someone could push me in the right direction; any help would be much appreciated, thanks in advance.\nEdit: more examples: for the input matrix [['a','b','c'],['d','e','f'],['g','h','i']] the outputs are:\nwith the string ab : across,0,0,2\nwith the string be : down,1,0,2\nwith the string ghi: across,2,0,3","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":193,"Q_Id":65812219,"Users Score":1,"Answer":"I assume that the word we are looking for can start from any place, but we can move up-to-down or left-to-right only.\nIn that case, you should have a function that takes the start index and a direction, keeps moving in the given direction from the given index until it finds a mismatch, and returns true or false based on whether the given string matches.\nNow you need to call this function for every index of the matrix with the two directions, up-to-down and left-to-right; if at any index the function returns true, then you have found your answer.\nThis is a very basic idea; how you optimize this method is up to you.\n\nUpdate:\nTo avoid using loops:\nThe other way I can think of is that the function which we have defined now 
takes the row, the column, and the string to find. So at each call, you first check if the character at the given row and column matches the first character of the given string; if so, it makes two more calls, one in the right direction and the other in the down direction, with the first character removed from the string.\nNow, to check all the cells of the matrix, you anyway call the function in the down and right directions with the exact same string.\nThe base case will be that if you reach the end of the string, then you have found the answer and return True, otherwise False.\nOne more thing to notice here is that if any of the 4 function calls gives you a True response, then the current row\/column will also return True.\nCheers!","Q_Score":0,"Tags":"python,recursion","A_Id":65812462,"CreationDate":"2021-01-20T15:09:00.000","Title":"Recursively search word in a matrix of characters","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"A few versions ago, virtualenv featured a Lib folder where, among the site-packages, libraries were stored. I created an env with Python 3.9 today and noticed that the Lib folder is empty, except for the site-packages folder.\nAlso, there used to be the folders \"Include\" and \"tcl\".\nWhat has happened to them? 
I couldn't find anything in the virtualenv changelog.\nSpecifically, I'm searching for the locale.py which I need for bundling with pyinstaller.\nBoth environments were created with virtualenv env.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":273,"Q_Id":65812773,"Users Score":1,"Answer":"A few versions ago, virtualenv featured a Lib folder where, among the site-packages, libraries were stored.\n\nI think this was created with the command virtualenv\u2026\n\nI created an env with Python 3.9 today and noticed that the Lib folder is empty, except for the site-packages folder.\n\n\u2026and this with the command python -m venv. They create slightly different kinds of virtual environments.\n\nSpecifically, I'm searching for the locale.py\n\nI think venv (unlike virtualenv) left it in the main Lib\/ folder (in the global Python directory).","Q_Score":0,"Tags":"python,virtualenv,pyinstaller","A_Id":65814261,"CreationDate":"2021-01-20T15:41:00.000","Title":"Lib folder in virtualenv - where is the locale.py","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to use a Tkinter scale to set the magnitude of an input. Therefore I want the ticks of the scale to be 1, 10, 100, 1000, etc.\nMy initial thoughts are that I will have the scale\nmagnitudescale = Scale(window1, from_ = 0, to = 3)\nWhen the scale is moved, there would be some function that takes the scale position and alters the value by 10^x.\nIs there a clean way to do this?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":94,"Q_Id":65813540,"Users Score":0,"Answer":"magnitudescale = Scale(window1, from_ = 0, to = 3)\nI found that there are some clever ways to do this but none that do exactly what the question asks. 
The way that I ended up implementing this is to have the scale's command call a function that takes the position of the scale and calculates 10^x.\nThe code looks like this:\n10**(magnitudescale.get())\nUsing this calculation we can change the text in a label placed above the scale so that the slider position corresponds to the values \"1, 10, 100, 1000, etc.\"","Q_Score":0,"Tags":"python,tkinter,tkinter-scale","A_Id":66156051,"CreationDate":"2021-01-20T16:26:00.000","Title":"Is it possible to set Tkinter tickinterval as a magnitude?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have tried to retrieve text in a rectangle. This rectangle is retrieved from\nPage.getLinks(). When I try to get the text in the rectangle using getTextbox() or getText(\u201ctext\u201d, clip=rect), both methods return an empty string.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":152,"Q_Id":65815905,"Users Score":1,"Answer":"Try expanding your rectangle's coordinates slightly. 
The default get_textbox parameter excludes anything intersecting the bounds of the rectangle.\nI was having the exact same issue and this solved it.","Q_Score":0,"Tags":"python,pdf,pymupdf","A_Id":70807995,"CreationDate":"2021-01-20T18:59:00.000","Title":"Pymupdf getTextbox returns empty","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I want to build a Python script which should go to the web every day at 1 pm, do some job (some web scraping), and save the result to a file.\nIt will be deployed on a Linux server.\nI am not sure what technology to use to run it on schedule.\nWhat comes to mind:\n\nRun it with a cron job scheduler. Quick and dirty. Why bother with any other methods?\n\nRun it as a service with systemd \/ systemctl (I have never done this, but I know the possibility exists and I would have to google a specific implementation). Is this something to be considered best practice?\n\nOther methods?\n\n\nSince I have never done this, I don't know the pros and cons of each method. Maybe one of these is simply the proper way of doing it? Please share your experience.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":94,"Q_Id":65816103,"Users Score":0,"Answer":"I use a cron job to run scheduled tasks; it works great for me.","Q_Score":0,"Tags":"python,automation,systemd,remote-server,cron-task","A_Id":65817101,"CreationDate":"2021-01-20T19:12:00.000","Title":"Running Python on remote server on schedule","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I noticed that any exe application I make is treated like a virus upon download. This is terrible; how do I make them legitimate? 
I read something about self-signing, but I still don't get it. What is the process of self-signing and how do I do it? If it helps, I am using pygame, python, on pycharm, with pyinstaller, on Windows 10.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":195,"Q_Id":65817056,"Users Score":0,"Answer":"You can't easily fix your program being treated as a virus, because antivirus products use heuristic analysis and machine learning, which can flag your program as a false positive. You have to go to the antivirus vendors' forums and ask why your app is flagged as a false positive.","Q_Score":1,"Tags":"python,exe,signing","A_Id":65817169,"CreationDate":"2021-01-20T20:21:00.000","Title":"How to stop my exes from being treated like a virus?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"As the title explains, powershell uses the wrong python and pip executables in virtual environments.\nI created the virtual environment using python -m venv venv and activated it with .\/venv\/Scripts\/Activate.ps1\nThe activation script runs as expected and on the left-hand side of the shell I can see: (venv) P: C:\\path indicating that the virtual environment is activated. However, when I write python or pip, powershell uses the system-wide binaries instead of the ones in the virtual environment. Therefore when I try installing packages with pip they get installed system-wide instead of in the venv.\nI can use .\/venv\/Scripts\/python.exe -m pip and install packages that way in the virtual environment. Though this workaround is rather tedious, and I wonder if anyone knows a way to fix the underlying issue?\nEDIT 1: The path variable contains the C:\\path\\to\\venv\\Scripts folder as the top entry. However, using the absolute path C:\\path\\to\\venv\\Scripts\\python.exe does not work. 
Powershell says it can't find it. But the relative path as described above does work.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":166,"Q_Id":65818086,"Users Score":0,"Answer":"Found the problem!\nI had accidentally renamed the venv folder after I created it, and therefore the path was pointing to the wrong location.","Q_Score":0,"Tags":"python,powershell,virtualenv","A_Id":65823866,"CreationDate":"2021-01-20T21:41:00.000","Title":"venv using the wrong interpreter in powershell","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"As described in the title, I tried installing the Markdown2 package through pip and through PyCharm's built-in package installer, as well as with easy_install, all of which appeared to be successful, but I'm still getting a ModuleNotFoundError at runtime.\nI'm working in a Django venv, not sure if that impacts it. I checked the library and scripts folders and the markdown2 files seem to be present. Not sure what I'm doing wrong at this point but any help would be appreciated!\nAlso I checked to make sure the interpreter is in the venv\/scripts folder and it is. I'm still super new to python venvs so I could be doing something else wrong.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":44,"Q_Id":65818111,"Users Score":0,"Answer":"Turns out the configuration (top right corner) was using a different interpreter than what was in the project settings. Oops.\nIf anyone else has this problem, click the configuration dropdown in the top right (icon left of \"run\" button) --> edit configurations --> Environment --> Python interpreter. 
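A quick way to confirm which interpreter a session actually resolves to, which would have exposed the renamed-folder problem immediately (this is a generic stdlib diagnostic, not something from the original answer):

```python
import shutil
import sys

# The interpreter currently running this script:
print("python:", sys.executable)

# True inside a virtual environment (prefix differs from base prefix):
print("in venv:", sys.prefix != sys.base_prefix)

# What "python" and "pip" resolve to on PATH in this session:
print("PATH python:", shutil.which("python"))
print("PATH pip:", shutil.which("pip"))
```

If `sys.executable` and `shutil.which("python")` point at different locations, the shell's PATH entry for the venv is stale or wrong.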
I might just be dumb and didn't know it was separate from the project settings interpreter but TMYK I guess?","Q_Score":0,"Tags":"python,pip,python-venv","A_Id":65818340,"CreationDate":"2021-01-20T21:43:00.000","Title":"Python package is in venv\/lib\/site-packages folder and shows as installed in PyCharm but throws ModuleNotFoundError","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":1},{"Question":"I read that blender comes with its own python version. However I have troubles actually locating it in ubuntu. I had hoped for adding packages there. What is the current way of adding packages, like pandas to blender's python version?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":445,"Q_Id":65818610,"Users Score":0,"Answer":"Just copy the respective packages to\n\/usr\/share\/blender\/scripts\/modules\nand restart blender.","Q_Score":0,"Tags":"python,pandas,pip,blender","A_Id":65827045,"CreationDate":"2021-01-20T22:27:00.000","Title":"How to install pandas in blender 2.80 python on ubuntu?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a large memory bank of information (in a class that has a large dictionary) that needs to be loaded into memory and updated as information is received once the data has been compiled into this super large structure (up to 20GB) I need to then save this updated memory bank to disk for later loading. However with pickle I haven't been able to find a way I can pickle a file by streaming the data as it serializes it(I can't exceed 25.5 GB). 
Note that holding both the 20GB structure and its serialized pickle well exceeds my memory resources.\nIs there a way to have pickle stream the information as it is serialized, or will I have to make my own function to write the memory to file(s) myself?\nIs there a way to keep memory costs low (offloading the memory from RAM to disk as the process is completed)?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":37,"Q_Id":65819466,"Users Score":0,"Answer":"If anyone must know, I solved this problem by converting my large memory structure into three smaller memory structures (order dependent) which I could then pickle. When I load the memory structure I have to also re-concatenate the memories into a larger structure. The memory saving isn't much; however, it is a workaround I was able to do for now. This solution is of course structure dependent.","Q_Score":0,"Tags":"python-3.x,machine-learning,memory,pickle","A_Id":65820560,"CreationDate":"2021-01-21T00:11:00.000","Title":"Writing large memory structures to disk with limited Memory resources (Python)","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to create a recommendation system of sorts, and I'm trying to find playlists with a given song in them to see what other songs people listen to alongside it. Is there a way to do that using the Spotify API in Python?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":163,"Q_Id":65820060,"Users Score":1,"Answer":"Nope, there isn't. You could however use the endpoint https:\/\/api.spotify.com\/v1\/recommendations to get related songs to a song. You give a song, artist and genre to Spotify and they give a playlist with similar songs back to you. 
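The pickle workaround in the answer above (splitting one big structure into several independently pickled parts) can be sketched like this; the three-way split and the file names are illustrative, not from the original answer:

```python
import os
import pickle
import tempfile

def save_split(big_dict, directory, parts=3):
    """Pickle `big_dict` as `parts` smaller dicts so only one chunk's
    serialized bytes need to exist in memory at a time."""
    keys = sorted(big_dict)  # order-dependent split, as in the answer
    paths = []
    for i in range(parts):
        chunk = {k: big_dict[k] for k in keys[i::parts]}
        path = os.path.join(directory, f"memory_part{i}.pkl")
        with open(path, "wb") as f:
            pickle.dump(chunk, f, protocol=pickle.HIGHEST_PROTOCOL)
        paths.append(path)
    return paths

def load_split(paths):
    """Re-concatenate the chunks into one structure on load."""
    merged = {}
    for path in paths:
        with open(path, "rb") as f:
            merged.update(pickle.load(f))
    return merged

with tempfile.TemporaryDirectory() as d:
    data = {i: i * i for i in range(10)}
    restored = load_split(save_split(data, d))
    print(restored == data)  # True
```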
That could allow you to achieve what you're trying to do.","Q_Score":1,"Tags":"python,python-3.x,spotify,spotipy","A_Id":65833490,"CreationDate":"2021-01-21T01:43:00.000","Title":"Is there a way to get a list of playlists containing a particular song in Spotipy (Spotify Python Api)","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I know the function 'ord' can convert a character to a number.\nBut I just want to know how to convert without 'ord'.\nC can convert it, but is it impossible in Python?","AnswerCount":2,"Available Count":1,"Score":-0.0996679946,"is_accepted":false,"ViewCount":340,"Q_Id":65823363,"Users Score":-1,"Answer":"You can use the len() function to solve it:\ninput(\"any value?\")\nprint(len(\"value\"))","Q_Score":0,"Tags":"python","A_Id":70194250,"CreationDate":"2021-01-21T08:13:00.000","Title":"How to change character to ASCII in python without ord()","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have multiple processes (web-scrapers) running in the background (one scraper for each website). The processes are python scripts that were spawned\/forked a few weeks ago. I would like to control them (they listen on sockets to enable IPC) from one central place (kinda like a dispatcher\/manager python script), while the processes (scrapers) remain individual unrelated processes.\nI thought about using the PID to reference each process, but that would require storing the PID whenever I (re)launch one of the scrapers because there is no semantic relation between a number and my use case. 
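On the "ASCII without ord()" question, note that len() does not give a character's code point; a correct ord-free route (my suggestion, not from the posted answer) is to encode the character to bytes, since indexing a bytes object yields integers:

```python
def char_to_ascii(ch):
    """ASCII code of a single character without using ord():
    indexing a bytes object returns an int."""
    return ch.encode("ascii")[0]

print(char_to_ascii("A"))  # 65
print(char_to_ascii("z"))  # 122
```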
I just want to supply some text-tag along with the process when I launch it, so that I can reference it later on.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":14,"Q_Id":65829822,"Users Score":1,"Answer":"pgrep -f searches all processes by their name and calling pattern (including arguments).\nE.g. if you spawned a process as python myscraper --scrapernametag=uniqueid01 then you can run:\nTAG=uniqueid01; pgrep -f \"scrapernametag=$TAG\"\nto discover the PID of a process later down the line.","Q_Score":0,"Tags":"python-3.x,linux,windows","A_Id":65830416,"CreationDate":"2021-01-21T14:43:00.000","Title":"How to reference a process reliably (using a tag or something similar)?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to design a game which is similar to Plague Inc. and it is essentially where there is a deadly virus (quite ironic) which is spreading and the aim of the game is to stop the virus from spreading.\nI've split the world into 13 regions, and each region will have several key details I will need to use, such as the number of cases, the number of deaths and the population. With each of these details, I will want some of them to be dynamic, such as wanting the amount of cases and deaths to go up or down.\nI'm extremely new to python, and was hoping for some particular expertise in how to design this game. Any guidance of the best ways to represent this data would be much appreciated!","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":30,"Q_Id":65833537,"Users Score":1,"Answer":"Hello Aran Khalastchi,\nBased off of my experiences, Python is not really a graphical programming language, and more of a text based language. 
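The pgrep-by-tag idea from the answer above can be wrapped in Python with subprocess; the tag argument name (`scrapernametag`) follows the answer's example, and actually running the lookup requires a Linux-like system with pgrep installed:

```python
import subprocess

def pgrep_command(tag):
    """Build the pgrep invocation that finds a scraper by its launch tag."""
    return ["pgrep", "-f", f"scrapernametag={tag}"]

def find_pids(tag):
    """Return PIDs of processes launched with the given tag (empty if none).
    Requires pgrep, i.e. a Linux-like system."""
    result = subprocess.run(pgrep_command(tag), capture_output=True, text=True)
    return [int(line) for line in result.stdout.split()]

print(pgrep_command("uniqueid01"))  # ['pgrep', '-f', 'scrapernametag=uniqueid01']
```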
I wouldn't suggest Python as your go to unless you are using a library for graphics. If not, I definitely recommend Unity or Godot, and if you want to go fully raw code (no engines\/libraries) I recommend Java as it has its own graphics. If I am wrong, please forgive me :)","Q_Score":0,"Tags":"python,python-3.x,class,oop,object","A_Id":65834010,"CreationDate":"2021-01-21T18:22:00.000","Title":"Designing a game advice","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am asked to normalize a probability distribution P=A(x^2)(e^-x) within 0 to infinity by finding the value for A. I know the algorithms to calculate the Numerical value of Integration, but how do I deal with one of the limits being Infinity.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":233,"Q_Id":65840009,"Users Score":0,"Answer":"The only way I have been able to solve this problem with some accuracy (I got full accuracy, indeed) is by doing some math first, in order to obtain the taylor series that represents the integral of the first.\nI have been looking here for my sample code, but I don't find it. 
I'll edit my post if I get a working solution.\nThe basic idea is to calculate all the derivatives of the function exp(-(x*x)) and use the coefficients to derive the integral form (by dividing those coefficients by one more than the exponent of x of the above function) to get the Taylor series of the integral. (I recommend you use the unnormalized version described above to get the simple number coefficients, then adjust the result by multiplying by the proper constants.) You'll get a Taylor series with good convergence, giving you precise values at full precision. (A direct quadrature requires a lot of subdivision, and you cannot divide an unbounded interval into a finite number of intervals, all finite.)\nI'll edit this answer if I find the code I wrote (so stay online, and don't change the channel :) )","Q_Score":0,"Tags":"c,python-3.x,gfortran,numerical-methods,numerical-integration","A_Id":65862048,"CreationDate":"2021-01-22T05:58:00.000","Title":"Numerical Integration in fortran with infinity as one of the limits","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"When I want to run a Python script in gdb I use\nsource \/tmp\/gdb\/tmp\/parser.py\n\nCan I set an alias so that the next time I want to call this script I can use only parser.py or parser (without putting the script into the working directory)?\nHow can I pass args to the script? source \/tmp\/gdb\/tmp\/parser.py doesn't work","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":90,"Q_Id":65840478,"Users Score":0,"Answer":"These should have been asked as two separate questions, really. But:\n\nExecute the command dir \/tmp\/gdb\/tmp\/; after that you should be able to run the script as source parser.py\nYou can't when you are sourcing a script. 
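For the normalization question itself (P = A·x²·e^(−x) on [0, ∞)), an alternative to the Taylor-series route above is simple domain truncation, since the integrand decays exponentially; this stdlib-only sketch recovers ∫₀^∞ x²e^(−x) dx = Γ(3) = 2, hence A = 1/2:

```python
import math

def integrand(x):
    # Unnormalized density: x^2 * e^(-x)
    return x * x * math.exp(-x)

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n subintervals."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        total += f(a + i * h)
    return total * h

# x^2 * e^-x is negligible beyond x = 50 (~1e-18), so truncating there is safe.
integral = trapezoid(integrand, 0.0, 50.0, 50_000)
print(integral)        # ~2.0  (Gamma(3) = 2)
print(1.0 / integral)  # ~0.5, the normalization constant A
```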
Rewrite script so that it attaches itself as GDB command via class inheriting from gdb.Command. The command can accept arguments. And you will save on typing source ... too.","Q_Score":0,"Tags":"gdb,gdb-python","A_Id":66403052,"CreationDate":"2021-01-22T06:44:00.000","Title":"Run python script with gdb with alias","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"this should be dead simple but ive googled for ages and can't find anything on this. maybe too generic of a search.\nI have several vms. centos and ubuntu.\nthey both always come with python3.6 which has always been fine with me. but i gotta do some devwork on an app written in 3.7. So i installed that in ubuntu using the apt-get install python3.7 which went fine but it seems the modules I install with pip3 work on in python3.6...\npip3 install future\nimport future\nworks in 3.6 but not 3.7.\nWhat I do?\n-thx","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":49,"Q_Id":65841125,"Users Score":0,"Answer":"which pip3 points to \/usr\/bin\/pip3 hence pip3 install only installs it for python3.6.\nFor python3.7 you can use \/path\/to\/python3.7 -m pip install future to install it.","Q_Score":0,"Tags":"pip,python-3.7","A_Id":65841824,"CreationDate":"2021-01-22T07:44:00.000","Title":"using pip3 with 3.7?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"(gdb) source script.py loaded script file to GDB\nHow to unload that script? 
How to unload all loaded script or view all script that loaded ?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":136,"Q_Id":65841492,"Users Score":0,"Answer":"The script is \"sourced\", not \"loaded\". The script executed and exited. Hence you can't unload it. It may have left something after itself (pretty-printers, commands, breakpoints, changes in configuration etc). You can't unload them all as a group, you have to find them and undo one-by-one.","Q_Score":0,"Tags":"gdb,gdb-python","A_Id":66403116,"CreationDate":"2021-01-22T08:16:00.000","Title":"Unload source file with GDB","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm newbee in programming and I need some help.\nI have to work with PyCharm and Odoo, so my point is to configure PyCharm for Odoo debugging. First of all I made a module and a model, it perfectly work with database(i can see and check it).\nI want PyCharm not to highlight word 'odoo', 'models' and 'fields' by red line(unresolved reference) or green line(Package containing module 'odoo' is not listed in project requirements)\nI've read lots of tutorials and manuals but nothing helps me.\nI think the problem is that PyCharm doesn't see the odoo package (or smth like that).\nThere is no odoo in requirements.txt\nSo, I need to coonect PyCharm and Odoo, without red or green underlining. PyCharm doesn't see the Odoo module.\nMaybe the problem with the fact that my project folder located not in odoo main folder, but maybe im wrong.\nSorry for my english.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":583,"Q_Id":65842520,"Users Score":0,"Answer":"Install and create virtualenv and install odoo. 
Then add virtualenv path to pycharm.","Q_Score":0,"Tags":"python,pycharm,odoo","A_Id":65842696,"CreationDate":"2021-01-22T09:29:00.000","Title":"odoo PyCharm configuration \/ no module named 'odoo'","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm newbee in programming and I need some help.\nI have to work with PyCharm and Odoo, so my point is to configure PyCharm for Odoo debugging. First of all I made a module and a model, it perfectly work with database(i can see and check it).\nI want PyCharm not to highlight word 'odoo', 'models' and 'fields' by red line(unresolved reference) or green line(Package containing module 'odoo' is not listed in project requirements)\nI've read lots of tutorials and manuals but nothing helps me.\nI think the problem is that PyCharm doesn't see the odoo package (or smth like that).\nThere is no odoo in requirements.txt\nSo, I need to coonect PyCharm and Odoo, without red or green underlining. 
PyCharm doesn't see the Odoo module.\nMaybe the problem with the fact that my project folder located not in odoo main folder, but maybe im wrong.\nSorry for my english.","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":583,"Q_Id":65842520,"Users Score":0,"Answer":"SOLVED:\nYou just need to add odoo folder to the source folder of the project in PyCharm settings:\nFile - Settings - Project Structure - + Add content root\nThere you should pick Odoo13\\server folder","Q_Score":0,"Tags":"python,pycharm,odoo","A_Id":65934070,"CreationDate":"2021-01-22T09:29:00.000","Title":"odoo PyCharm configuration \/ no module named 'odoo'","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":1},{"Question":"While accessing live-streaming IP camera.\nVideoCapture can open a video but then after some second or minute can't read it. cap.read\u200b() keeps returning false and frame is none after some time.\nFPS rate of that camera is 180000. This is so high.\nPython-> 3.8.5 (default, Jul 28 2020, 12:59:40)\n[GCC 9.3.0] on linux\nOS- Ubuntu (18.04 or 20.04)\nOpenCV - 4.4.0\nopencv-contrib-python==4.4.0.46","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":401,"Q_Id":65843787,"Users Score":1,"Answer":"This has kinda just been an issue that everyone seems to occasionally run into while using opencv with IP cameras. 
You can sidestep the problem by checking if cap.read() returns false and closing and re-opening the stream if it happens (If you keep having problems after closing and re-opening then there's actually a connection issue and it's not just opencv).","Q_Score":1,"Tags":"python-3.x,opencv,video-streaming,live-streaming,ip-camera","A_Id":65848192,"CreationDate":"2021-01-22T10:54:00.000","Title":"VideoCapture can open a video but then after some second or minute cap.read\u200b() keeps returning false","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"When computing ordinary least squares regression either using sklearn.linear_model.LinearRegression or statsmodels.regression.linear_model.OLS, they don't seem to throw any errors when covariance matrix is exactly singular. Looks like under the hood they use Moore-Penrose pseudoinverse rather than the usual inverse which would be impossible under singular covariance matrix.\nThe question is then twofold:\n\nWhat is the point of this design? Under what circumstances it is deemed useful to compute OLS regardless of whether the covariance matrix is singular?\n\nWhat does it output as coefficients then? To my understanding since the covariance matrix is singular, there would be an infinite (in a sense of a scaling constant) number of solutions via pseudoinverse.","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":524,"Q_Id":65844656,"Users Score":0,"Answer":"I noticed the same thing, seems sklearn and statsmodel are pretty robust, a little too robust making you wondering how to interprete the results after all. Guess it is still up to the modeler to do due diligence to identify any collinearity between variables and eliminate unnecessary variables. 
Funny that sklearn won't even give you p-values, which are the most important measures from these regressions. When you play with the variables, the coefficients will change; that is why I pay much more attention to p-values.","Q_Score":2,"Tags":"python,scikit-learn,linear-regression,statsmodels","A_Id":65855743,"CreationDate":"2021-01-22T11:53:00.000","Title":"Results of sklearn\/statsmodels ordinary least squares under singular covariance matrix","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I had saved a python file after working on it for some time, but now when I open it, Python 3.9.1 opens a window then immediately closes. I had done lots of work on this and don't want it to go to waste. I'm on Windows 10.","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":754,"Q_Id":65847043,"Users Score":0,"Answer":"If you\u2019re using the Open option when you right-click or you are simply double-clicking the script, the program will run and close so fast you won\u2019t even see it.\nHere are two options I use:\n\nOpen up the Command Prompt. You can easily do this by going to the address bar of your File Explorer and entering \u2018cmd\u2019. If you\u2019re in the directory where your script is, the current working directory of the Command Prompt will be set to that. From there, run python my_script.py.\n\nEdit your script with IDLE. If you\u2019re using an IDE, it should be nearly the same process, but I don\u2019t use one so I wouldn\u2019t know. From the editor, there should be a method for running the program. 
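To the twofold OLS question above: with a singular design matrix, the pseudoinverse picks the minimum-L2-norm solution from the infinitely many least-squares solutions, which is why the libraries do not error out. A small illustration with two perfectly collinear features (assumes NumPy is installed):

```python
import numpy as np

# Two perfectly collinear features: the normal equations are singular,
# so the usual inverse does not exist and infinitely many coefficient
# vectors produce identical predictions.
X = np.array([[1.0, 1.0],
              [2.0, 2.0],
              [3.0, 3.0]])
y = np.array([1.0, 2.0, 3.0])

# lstsq works through the pseudoinverse (SVD) and returns the
# minimum-norm solution: the weight is split evenly across the clones.
w, *_ = np.linalg.lstsq(X, y, rcond=None)
print(w)                      # [0.5 0.5]
print(np.allclose(X @ w, y))  # True: the fit itself is exact
```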
In IDLE, you can just use Ctrl + F5.","Q_Score":0,"Tags":"python,windows,crash","A_Id":65847133,"CreationDate":"2021-01-22T14:29:00.000","Title":"Python IDLE 3.9.1 file not opening in windows","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I had saved a python file after working on it for some time, but now when I open it, Python 3.9.1 opens a window then immediately closes. I had done lots of work on this and don't want it to go to waste. I'm on Windows 10.","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":754,"Q_Id":65847043,"Users Score":0,"Answer":"Right-click on it and click \"open with\". Then choose Python IDLE.","Q_Score":0,"Tags":"python,windows,crash","A_Id":65847138,"CreationDate":"2021-01-22T14:29:00.000","Title":"Python IDLE 3.9.1 file not opening in windows","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I tried makemigrations, migrate, and even many methods stated on Stack Overflow but nothing is happening. Please tell me the reason why this happens and how I can solve it?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":250,"Q_Id":65847395,"Users Score":0,"Answer":"I think this can help you:\nDelete all the migrations in your app except the init.py file. 
And then again type python manage.py makemigrations and then python manage.py migrate.","Q_Score":0,"Tags":"python,django,database,django-models,error-handling","A_Id":65848539,"CreationDate":"2021-01-22T14:51:00.000","Title":"django.db.utils.ProgrammingError: (1146, \"Table 'online_examination_system.studentapp_courses' doesn't exist\")","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a NetCore3.1 server app. On my local setup I can use Process to execute python to do some dataframe crunching that I have installed in a venv.\nOn Azure, I can use site extensions to install a local copy of python and all my needed libs. (It's located in D:\/home\/python364x86\/).\nNow on my published Azure app, I want my process to execute python as on my local setup. I have configured the proper path, but I get this error: \"Unexpected character encountered while parsing value: D. Path '', line 0, position 0.\"\nWould anyone know why this is failing? Many thanks for any help.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":70,"Q_Id":65847542,"Users Score":0,"Answer":"Please notice that the python extension is in the website extension, so it should be impossible to access the python extension in code.","Q_Score":0,"Tags":"python,azure,asp.net-core,process","A_Id":65932178,"CreationDate":"2021-01-22T15:00:00.000","Title":"NetCore 3.1: How to execute python.exe using Process on Azure?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I want to integrate a control problem, i.e. 
an ODE of the form dx\/dt = A.f(t), where\nx(t) is a function in R^3, f(t) is a function in R^4 and A is a matrix 3x4. In my special case, f(t) = F'(t), i.e. a time derivative of a function F. Furthermore, F is 1-periodic. Hence, integrating the ODE over the interval [0, 1] should yield the starting position again. However, methods like solve_ivp from scipy.integrate do not respect this periodicity at all (I have tried all the possible methods like RK45, Radau, DOP853, LSODA).\nIs there a special ODE solver that respects such periodicity to a high degree of precision?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":72,"Q_Id":65848198,"Users Score":2,"Answer":"All algorithms will suffer from precision loss anyway, so you will most likely never achieve exact periodicity. Nonetheless, you can also try to increase the precision of the integration by using the parameters \"atol\" and \"rtol\", which, roughly, will keep the error of the integration below (atol + rtol*y) at each time step. You can typically go as low as atol=rtol=1e-14.","Q_Score":0,"Tags":"python,scipy,integration,ode","A_Id":65851352,"CreationDate":"2021-01-22T15:40:00.000","Title":"ODE solver for python respecting periodicity","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am a total newbie in Python and I'm just starting to explore the work handed over to me.\nMy goal is to determine the week number of the date given. The data is coming from excel and the output will also be shown in the same excel file. My only PROBLEM is that, week 1 of 2021 started on Friday but when I run Python, week 1 starts on January 4, 2021 to January 10, 2021 instead of January 1, 2021 to January 2, 2021. 
I am currently using this code\nRPD_week = (pd.to_datetime(dffinal['RPD'], errors='coerce')).dt.strftime(\"%V\")\nPlease help because I'm literally crying already...","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":273,"Q_Id":65851343,"Users Score":1,"Answer":"There are two common systems of numbering weeks in English-speaking countries.\nOne is the ISO system. Monday is the first day of the week and week 1 is the week that contains 4 January. All days in a new year preceding the first Monday are considered to be in the last week of the previous year. This is the number given by int(mydate.strftime(\"%V\")) in Python and ISOWEEKNUM() in Excel.\nThe other is the North American system which has at least 2 flavours.\nExcel flavour. Sunday is the first day of the week. Week 1 is the week that contains 1 January. Any days in that week prior to 1 January are considered to be in the last week of the previous year. This is the number given by WEEKNUM() in Excel.\nPython flavour. Sunday is the first day of the week. All days in a new year preceding the first Sunday are considered to be in week 0. This is the number given by int(mydate.strftime(\"%U\")) in Python.\nIt follows that Python and Excel frequently disagree. Excel says Thursday 2 January 2020 is in week 1 because it comes after 1 January, which marks week 1. Python says it is in week 0 because week 1 begins on the first Sunday, which was 5 January.","Q_Score":3,"Tags":"python,pandas","A_Id":65854036,"CreationDate":"2021-01-22T19:10:00.000","Title":"How can I find week numbers with 2021 year starting on Friday in Python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I tried to create some chart in Altair using a few aggregations and calculations. 
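The two week-numbering systems described in the answer above are easy to verify with the stdlib; 1 January 2021 was a Friday:

```python
from datetime import date

jan1 = date(2021, 1, 1)   # a Friday
jan4 = date(2021, 1, 4)   # the first Monday of 2021

# ISO system (%V): week 1 is the week containing 4 January, so
# 1 Jan 2021 still belongs to week 53 of 2020.
print(jan1.strftime("%V"), jan4.strftime("%V"))  # 53 01

# US/Python system (%U): weeks start on Sunday, and days before the
# first Sunday of the year fall in week 0.
print(jan1.strftime("%U"), jan4.strftime("%U"))  # 00 01
```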
I got the chart drawn, but what was displayed on the chart didn't look correct. I wondered if it is possible to look at the resulting datum to check the calculations.","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":79,"Q_Id":65851499,"Users Score":2,"Answer":"After some searching and help from my groupmate, it turned out that one can see the resulting datum object. You have to press the 3 dots in the upper right corner of the chart -> Open in Vega Editor -> Data Viewer tab (right bottom part of the screen) -> select the data_0 resource.\nI thought it might be helpful for someone, because I haven't managed to find this info on the internet.","Q_Score":1,"Tags":"python,jupyter-notebook,altair,vega","A_Id":65851500,"CreationDate":"2021-01-22T19:22:00.000","Title":"How to debug `datum` transformations and aggregations in Altair (Python)?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I encountered the Error calling sync triggers (TooManyRequests) error when running func azure functionapp publish... for a Python Function App in Azure. I encountered this error consistently after trying to publish.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1661,"Q_Id":65851673,"Users Score":3,"Answer":"Solution: This problem was caused by my stopping the Function App in the Portal. 
After re-starting the Function App, the problem disappeared!","Q_Score":1,"Tags":"python,azure,azure-functions,azure-function-app,azure-linux","A_Id":65851674,"CreationDate":"2021-01-22T19:36:00.000","Title":"Encountered the \"Error calling sync triggers (TooManyRequests)\" error when running \"func azure functionapp publish\" for a Python Function App in Azure","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"so I'm supposed to create a new list \"d\" from a list of lists \"c\", where \"d\" contains all numbers of \"c\" between the values of 5 and 45.\n c = [[1,1,12],[2,3,7,23],[54,12,17,90],[43,52,67,9]]\nd = [x for x in c if x in range(5,45)]\nprint(d)\nI tried this code, and I just get an empty output of\n[]","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":29,"Q_Id":65852555,"Users Score":0,"Answer":"x stands for each list inside your \"list of lists\". You can unpack c:\nd = [x for x in list(itertools.chain(*c)) if x in range(5, 45)]","Q_Score":0,"Tags":"python,list","A_Id":65852684,"CreationDate":"2021-01-22T20:48:00.000","Title":"Using the list of lists called c defined below. Make a new list that contains all of the numbers between 5 and 45 that appear in the list","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I built a selenium script on my computer at home, On my home computer it works great. I took it into work and it launches the web driver and then it starts clicking on the wrong buttons(if it makes a difference my home computer is a iMac and my work computer is a HP pc). 
I've double checked the id tags, name tags and I've even selected the path's thru a full xpath. Nothing changes it keeps clicking the buttons next to the correct ones. I've never seen anything like this. Does anyone have any idea what is going on??\nI can't post my code so I am sorry about that. Any advice would be greatly appreciated.","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":52,"Q_Id":65853252,"Users Score":2,"Answer":"I faced a similar problem. Make sure the browser's zoom level is at 100%","Q_Score":1,"Tags":"python,selenium","A_Id":71697945,"CreationDate":"2021-01-22T21:53:00.000","Title":"Selenium Clicking on different elements on different computers","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am on Windows Subsystem for Linux (WSL). I've defined an environment and I am trying to add the palettable package to it. This is what I tried:\n\nconda install palettable, went fine no errors, tried to do import palettable in my script and I get the error ModuleNotFoundError: No module named 'palettable'\nNext I did conda remove palettable\nThen I installed again, this time using pip by doing pip install palettable\nI get the same error\n\nDid I miss a step? Or do something wrong?\nI've added many other packages to this same environment using conda and not had any problems or encountered this error before.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":446,"Q_Id":65855564,"Users Score":0,"Answer":"I found the problem. I am using VSCode and I didn't realize the python interpreter and the notebook kernel are set independently. The interpreter correctly reflected that I was using my project environment. But the notebook kernel (top right corner of NB window) was not set to the same thing. 
Once I set it correctly and restarted the IDE, it now correctly finds the packages.","Q_Score":0,"Tags":"python","A_Id":65865025,"CreationDate":"2021-01-23T03:54:00.000","Title":"Python Can't Find Module After Installing with Pip and Conda","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am on Windows Subsystem for Linux (WSL). I've defined an environment and I am trying to add the palettable package to it. This is what I tried:\n\nconda install palettable, went fine no errors, tried to do import palettable in my script and I get the error ModuleNotFoundError: No module named 'palettable'\nNext I did conda remove palettable\nThen I installed again, this time using pip by doing pip install palettable\nI get the same error\n\nDid I miss a step? Or do something wrong?\nI've added many other packages to this same environment using conda and not had any problems or encountered this error before.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":446,"Q_Id":65855564,"Users Score":0,"Answer":"You can easily fix this problem by restarting the kernel or IDE in which you are writing the code after installing the module.","Q_Score":0,"Tags":"python","A_Id":65855600,"CreationDate":"2021-01-23T03:54:00.000","Title":"Python Can't Find Module After Installing with Pip and Conda","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to curl the top news (title, description, and image) from any news website by using a built-in (Python\/NLP\/machine learning) API. And I want to use that API in PHP to get all this data. 
My problem is I have to fetch data from any news site, so to fetch data from multiple sites, which API should I use...","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":33,"Q_Id":65855855,"Users Score":1,"Answer":"To fetch data from multiple APIs with cURL you can add multiple cURL requests. Your question is too ambiguous to answer. Please provide an example of your code, as different websites will have different APIs and requirements. To get a better answer you need to be more specific.","Q_Score":0,"Tags":"python,php","A_Id":65855915,"CreationDate":"2021-01-23T04:51:00.000","Title":"How to curl data using Python(NLP\/ML etc) built in api in php..?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have tried installing the nni packages in the Windows command prompt, and they installed successfully, but when I tried the command\n\"nnictl create --config nni\\example\/trails\\mnist-pytorch\\config_windows.yml\"\nin the Windows command prompt, it says\n'nnictl' is not recognized as an internal or external command, operable program or batch file.\nHow can I fix this error?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":149,"Q_Id":65856053,"Users Score":0,"Answer":"It is solved! 
I just installed Anaconda3 and tried installing the nni packages in it. It also recognized the 'nnictl' command in the Anaconda3 prompt automatically by setting the path variables. Thanks :)","Q_Score":0,"Tags":"python","A_Id":65866900,"CreationDate":"2021-01-23T05:29:00.000","Title":"Windows Command prompt - nnictl command not recognized","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Basically what I want to achieve is that\nI have a Django project, and I want to store the project's DB on my server (CPanel) and access it from my laptop. I tried searching about remote MySQL on Django but couldn't find anything.\nI am not using Google Cloud, PythonAnywhere, or Heroku.\nIs there a way? Please.\nThanks.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":102,"Q_Id":65856647,"Users Score":0,"Answer":"Usually, when you want to access a DB remotely on the CPanel of a virtual server, remote access should be enabled. In my experience, providers close remote connections on VServers unless you ask them to open them. 
If they open it, you can use PuTTY or similar software to connect to the server and run DB commands.","Q_Score":0,"Tags":"python,mysql,django","A_Id":65856710,"CreationDate":"2021-01-23T07:04:00.000","Title":"I am looking for a way to connect to my database from my laptop, this database is on CPANEL, and I am making a Django Project","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I want to make a desktop app. Obviously I'm a beginner, so\nif I build a GUI on Mac using Python and tkinter, will that program work on Windows?\nAlso, is tkinter the best framework?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":406,"Q_Id":65856854,"Users Score":1,"Answer":"Yes, but tkinter applications will look different on different platforms. This means that you may get different buttons on Windows 7 vs MacOS X -- Tkinter is just using whatever the OS gives it.\nAs for it being the best framework, I couldn't say if it is the best or not, but it is pretty simple and works well. I normally use tkinter.","Q_Score":0,"Tags":"python,user-interface,tkinter","A_Id":65856917,"CreationDate":"2021-01-23T07:35:00.000","Title":"Python tkinter Mac and windows","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm working on Ubuntu by remoteSSH, and I updated the Python kernel in my virtual environment named nn from 3.7.9 to 3.8.5; however, I still find the old kernel in the Jupyter kernel list. 
I want to know how to delete the old kernel name from the kernel list.\nI've replaced python 3.7.9 and python3.6.4 with python 3.8.5, but the old kernels didn't disappear, I want to delete them manually.\nMoreover, I can't select Python 3.8.5 from the kernel list.","AnswerCount":3,"Available Count":1,"Score":0.3215127375,"is_accepted":false,"ViewCount":2861,"Q_Id":65858621,"Users Score":5,"Answer":"I had the same problem and the following might help someone else encountering the issue:\n\nReload VS Code Window by Ctrl+Shift+P and selecting Reload Window.\n\nReload the Python and Jupyter extensions under the Extensions in the Side Bar.\n\nQuit and relaunch VS Code.\n\n\nIt seems that VS Code is not that quick to update the interpreter list.","Q_Score":1,"Tags":"python,visual-studio-code,jupyter","A_Id":68491421,"CreationDate":"2021-01-23T11:22:00.000","Title":"VSCode Jupyter cannot update kernels automatically","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I was trying to figure out without success how I could exclude the dev\/test dependencies of 3rd party Python modules from the BOM generated by CycloneDX. There seems to be no straightforward way to do this. Any recommendation on how to best approach this would be highly appreciated!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":61,"Q_Id":65859032,"Users Score":0,"Answer":"This is unfortunately not supported currently. 
But would make a great issue :)","Q_Score":0,"Tags":"python,external-dependencies","A_Id":67225494,"CreationDate":"2021-01-23T12:10:00.000","Title":"CycloneDX Exclude Python Dev\/Test Dependencies","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I want to code a timed ban or mute, but with that I can restart my bot. Is there a nice libary or anyone has an idea to code it?\nThank you very much!\nI code with discordpy cogs","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":75,"Q_Id":65860119,"Users Score":0,"Answer":"If it involves restarting your bot, then you cannot use your RAM to store your data but you need to use your Hard Disk for that. When your bot is running, it is storing its data inside the RAM and that's why you can re-use them while the bot is online. Once it goes offline or gets restarted, all the data are removed from the RAM because the program is shut down.\nTo store those data within the Hard Disk, you need a database. For such small projects, you can use JSON or SQLite. If the project scales, you can move to another SQL like MySQL that will handle a more complex and heavy database.\nTo make a bot that can do a timed message:\n\nYou need to store the data of when the message is going to be sent on your hard disk (database), then use that data to send that message. For example, you want to send \"hello\" in 1 day. That basically means that you want to send it at 8\/7\/2021 6:19 PM (it's 7\/7\/2021 6:19 PM right now). So, you store 8\/7\/2021 6:19 PM as a piece of data of when the bot is going to send the message.\nThen you make the bot compare the current time with the time that you saved on your database. 
If it is greater, then it will send the message and delete the data from the database.\nYou can use the same technique with a timed ban, role and everything else.\n\nFrom a technical standpoint, you can use Discordpy for all the Discord stuff, datetime to check the time, and JSON (or SQLite3) for the database.","Q_Score":1,"Tags":"python,discord,discord.py-rewrite","A_Id":68290575,"CreationDate":"2021-01-23T14:07:00.000","Title":"Discordpy timed message or ban or role, WITH bot restart | discordpy","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have two bytestreams; this data should be dumped together into a container in a compound OLE file. I checked that there are some third-party libraries to do it, but I would like to do it with pywin32: I have this library in my project and I would not like to add more third-party libraries which maybe I could not maintain in the future. If for some reason I cannot use COM objects from Windows, which is the best option or the best library?\nThanks.","AnswerCount":1,"Available Count":1,"Score":-0.1973753202,"is_accepted":false,"ViewCount":91,"Q_Id":65864425,"Users Score":-1,"Answer":"I found several libraries, all useless right now for creating an OLE object from scratch and then adding some streams into the container. The only way to do it is through pywin32 and using a COM object. The problem is, as always with pywin32: no examples, no good documentation. Nice.\nWould anyone know how to do it? 
It would help just to know how to open a com object for this purpose.\nThanks.","Q_Score":0,"Tags":"python-3.x,pywin32,ole,bytestream","A_Id":65872773,"CreationDate":"2021-01-23T21:10:00.000","Title":"Writing OLE compound files with python 3","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a 2D numpy array [1,2], [3,4], [5,6]. How can I modify the values in the second column (eg. add 1 to each value, so the result would be ([1,3], [3,5], [5,7])?","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":29,"Q_Id":65865233,"Users Score":2,"Answer":"Psidom's comment above worked thanks.\narr[:,1] += 1","Q_Score":0,"Tags":"python,numpy","A_Id":65874097,"CreationDate":"2021-01-23T22:42:00.000","Title":"Modifying a multidimensional Numpy array","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Let's say I have a dataframe with two columns, a naive date, and it's corresponding timezone. I'd like to convert the naive date to a timezone aware date using the second column.\n\n\n\n\nnaive_date\ntimezone\n\n\n\n\n24-01-2021 05:00:00\n'Europe\/London'\n\n\n24-01-2021 06:00:00\n'Europe\/Amsterdam'\n\n\n24-01-2021 00:00:00\n'US\/Eastern'\n\n\n\n\nIf I just wanted to convert to a known timezone, I would do:\ndf['naive_date'].dt.tz_localize(tz='Europe\/London')\nBut what if the value passed to tz should be taken from the timezone column for that particular row? Is that possible? 
I tried the following, which unsurprisingly didn't work:\ndf['naive_date'].dt.tz_localize(tz=df['timezone'])\nFor context, I'll be comparing the date against another timezone-aware date column, and the comparison returns false when comparing a naive to a timezone-aware datetime.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":27,"Q_Id":65866411,"Users Score":0,"Answer":"Pablo's answer was spot on; this is what I ended up using:\ndf['local_date'] = df.apply(lambda x: x['naive_date'].tz_localize(tz=x['timezone']), axis=1)","Q_Score":0,"Tags":"python,pandas","A_Id":65927361,"CreationDate":"2021-01-24T01:46:00.000","Title":"Passing a parameter taken from a dataframe column to a pandas method","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I created a simple neural network with tensorflow and I am studying how the number of epochs affects the results. I use Google Colab for this purpose.\nScenario:\n\nI download the dataset from tensorflow (built-in)\nI create a model in tensorflow\nI set the variable with how many epochs I want to train the model\nCompile and train the model\n\nI noticed that when I re-run the script, the dataset is already downloaded and I am worried the model may also be kept in session memory.\nMy question is: if I re-run the script in Google Colab using the \"Run after\" option with a different number of epochs, will this create a new instance of the model and start training from 0, or will it re-train the already trained model?\nFor example:\nI run the script and train the network for 10 epochs. 
I change the variable to 50 and re-run the script.\nWill it start training the model from 0 to 50, or will it take the already trained model and train it for 50 more epochs, so 60 in total?\nIs there any way to check for how many epochs the model was trained?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":63,"Q_Id":65866488,"Users Score":0,"Answer":"I created a new script with the network from the tensorflow tutorial, and added an evaluation function after model compilation, before training and then after training.\nAnswer: when re-running the script, the model is always trained from epoch 0.","Q_Score":0,"Tags":"python,tensorflow,keras,google-colaboratory,tensorflow2.0","A_Id":65867180,"CreationDate":"2021-01-24T02:01:00.000","Title":"Tensorflow networks in google colab, what happens when I re-run the script","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm not new to Python programming; today I tried to install Django using \"pip install Django\" on my computer.\nI'm using Python 3.9.1, pip 20.2.3, PyCharm (community) 2020.3.2.\nI've tried everything: using pip3, trying to install libraries from the PyCharm terminal and from the PyCharm interpreter configuration; I even formatted my computer (full format and the quick format) and nothing helped me.\nI searched for similar problems here but all of them didn't work.\nI get this problem over and over again (I tried to set the timeout to 100, and my internet connection is good and fast enough): (in this tryout I've tried to install the numpy library, I get the same problem with every library that I tried)\n(venv) C:\\Users\\achik\\PycharmProjects\\first>pip install numpy\nWARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 
'ReadTimeoutError(\"\nHTTPSConnectionPool(host='pypi.org', port=443): Read timed out. (read timeout=15)\")': \/simple\/numpy\/\nWARNING: Retrying (Retry(total=3, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ReadTimeoutError(\"\nHTTPSConnectionPool(host='pypi.org', port=443): Read timed out. (read timeout=15)\")': \/simple\/numpy\/\nWARNING: Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ReadTimeoutError(\"\nHTTPSConnectionPool(host='pypi.org', port=443): Read timed out. (read timeout=15)\")': \/simple\/numpy\/\nWARNING: Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ReadTimeoutError(\"\nHTTPSConnectionPool(host='pypi.org', port=443): Read timed out. (read timeout=15)\")': \/simple\/numpy\/\nWARNING: Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ReadTimeoutError(\"HTTPSConnection\nPool(host='pypi.org', port=443): Read timed out. 
(read timeout=15)\")': \/simple\/numpy\/\nERROR: Could not find a version that satisfies the requirement numpy (from versions: none)\nERROR: No matching distribution found for numpy\nPlease help me, I'm about to lose my mind\nUPDATE:\nI formatted my computer and reinstalled Windows 10, installed Python\n3.7 and an older version of PyCharm, and it didn't work.\nI formatted my PC again and installed Ubuntu 18.04 LTS, and again I have the same problem.\nPLEASE help me!","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":248,"Q_Id":65866835,"Users Score":0,"Answer":"In PyCharm this type of issue sometimes happens due to its venv file; if possible, please remove the venv file from the PyCharm directory, add Python to the environment variables\nand try to install NumPy from CMD.\nI hope this will resolve your issue; I was also facing the same issue and tried this, and fortunately it was solved.","Q_Score":0,"Tags":"python,windows,pip,pycharm","A_Id":65866892,"CreationDate":"2021-01-24T03:19:00.000","Title":"errors while using pip install on windows 10 and Ubuntu 18.04 lts","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm not new to Python programming; today I tried to install Django using \"pip install Django\" on my computer.\nI'm using Python 3.9.1, pip 20.2.3, PyCharm (community) 2020.3.2.\nI've tried everything: using pip3, trying to install libraries from the PyCharm terminal and from the PyCharm interpreter configuration; I even formatted my computer (full format and the quick format) and nothing helped me.\nI searched for similar problems here but all of them didn't work.\nI get this problem over and over again (I tried to set the timeout to 100, and my internet connection is good and fast enough): (in this tryout I've tried to install the numpy library, I get the same 
problem with every library that I tried)\n(venv) C:\\Users\\achik\\PycharmProjects\\first>pip install numpy\nWARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ReadTimeoutError(\"\nHTTPSConnectionPool(host='pypi.org', port=443): Read timed out. (read timeout=15)\")': \/simple\/numpy\/\nWARNING: Retrying (Retry(total=3, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ReadTimeoutError(\"\nHTTPSConnectionPool(host='pypi.org', port=443): Read timed out. (read timeout=15)\")': \/simple\/numpy\/\nWARNING: Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ReadTimeoutError(\"\nHTTPSConnectionPool(host='pypi.org', port=443): Read timed out. (read timeout=15)\")': \/simple\/numpy\/\nWARNING: Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ReadTimeoutError(\"\nHTTPSConnectionPool(host='pypi.org', port=443): Read timed out. (read timeout=15)\")': \/simple\/numpy\/\nWARNING: Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ReadTimeoutError(\"HTTPSConnection\nPool(host='pypi.org', port=443): Read timed out. (read timeout=15)\")': \/simple\/numpy\/\nERROR: Could not find a version that satisfies the requirement numpy (from versions: none)\nERROR: No matching distribution found for numpy\nPlease help me, I'm about to lose my mind\nUPDATE:\nI formatted my computer and reinstall windows 10, installed python\n3.7 and older version of PyCharm and it didn't work.\nI formatted again my pc and installed Ubuntu 18.04 lts and again I have the same problem.\nPLEASE help me!","AnswerCount":3,"Available Count":2,"Score":0.0665680765,"is_accepted":false,"ViewCount":248,"Q_Id":65866835,"Users Score":1,"Answer":"I had the same problem. 
Sometimes it is because of the internet connection, and sometimes cmd is working slowly.\nIf your antifilter is open, close it before executing commands in cmd.\nAnd whenever you face this problem, close cmd and run it as admin again...\nYou can install the needed packages without using pip as well.\nFor example, download a suitable version of numpy, save it in C:\\Python\\Tools\\script and run it using cmd.","Q_Score":0,"Tags":"python,windows,pip,pycharm","A_Id":65868494,"CreationDate":"2021-01-24T03:19:00.000","Title":"errors while using pip install on windows 10 and Ubuntu 18.04 lts","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am using Python 3.9.1 on my computer and when I try this command in the command window: python -- version , I come up with 2.7.12 !!! And it does not show the correct version.\nI uninstalled Python and removed all the related files on the C drive as well as the environment variables...\nNow I don't have Python but it still shows version 2.7.12 when I ask in the command window!!!\nDoes anyone know what the problem is ????","AnswerCount":3,"Available Count":3,"Score":0.1325487884,"is_accepted":false,"ViewCount":81,"Q_Id":65868650,"Users Score":2,"Answer":"Go to My Computer, right click and then Properties. Here go to Advanced System Settings\nand at the bottom of the window open Environment Variables and check any variable having Python in it. 
If there are two variables, maybe this is the problem.\nAlso go to the AppData folder on your Windows and check whether there is a file related to the older version of Python.\nGood luck.","Q_Score":1,"Tags":"python-3.x,python-2.7,pip","A_Id":65868714,"CreationDate":"2021-01-24T08:53:00.000","Title":"Showing python 2.7.12 for python 3.9.1 on Command window?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am using Python 3.9.1 on my computer and when I try this command in the command window: python -- version , I come up with 2.7.12 !!! And it does not show the correct version.\nI uninstalled Python and removed all the related files on the C drive as well as the environment variables...\nNow I don't have Python but it still shows version 2.7.12 when I ask in the command window!!!\nDoes anyone know what the problem is ????","AnswerCount":3,"Available Count":3,"Score":0.0665680765,"is_accepted":false,"ViewCount":81,"Q_Id":65868650,"Users Score":1,"Answer":"You can also use PowerShell instead of cmd; try that after checking the variables.","Q_Score":1,"Tags":"python-3.x,python-2.7,pip","A_Id":65868854,"CreationDate":"2021-01-24T08:53:00.000","Title":"Showing python 2.7.12 for python 3.9.1 on Command window?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am using Python 3.9.1 on my computer and when I try this command in the command window: python -- version , I come up with 2.7.12 !!! 
And it does not show the correct version.\nI uninstalled Python and removed all the related files on the C drive as well as the environment variables...\nNow I don't have Python but it still shows version 2.7.12 when I ask in the command window!!!\nDoes anyone know what the problem is ????","AnswerCount":3,"Available Count":3,"Score":1.2,"is_accepted":true,"ViewCount":81,"Q_Id":65868650,"Users Score":2,"Answer":"If you have both versions then you should write python2 --version o","Q_Score":1,"Tags":"python-3.x,python-2.7,pip","A_Id":66407138,"CreationDate":"2021-01-24T08:53:00.000","Title":"Showing python 2.7.12 for python 3.9.1 on Command window?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"Couldn\u2019t think of a solution yet.\nI have a scraper running currently. In the first 80% of one loop (every 3-4 hours), the entire process can be run on a headless server without any output needed, as it uses Selenium and BS4.\nHowever, for the remaining 20%, I could not program some clicking and typing actions with Selenium for that specific website. I am assuming that it is because the single-page website has many, many frames (I might be wrong.)\nSo, to combat this and to get around it, I basically used PyAutoGui to control my mouse and click and enter things in text fields repeatedly. I did this by specifying coordinates for each button.\nHow can I do this on Ubuntu 18.04 (the server) but without a monitor? 
Is there a way to fake a monitor of a certain resolution so the coordinates I select when the server is plugged into a monitor of resolution xxx,yyy still work exactly without issue when I create a fake output of resolution xxx,yyy?\nI have an extra monitor but I don\u2019t want it to be running all night and day and letting snooping eyes see (I live in a shared house).\nThanks\nEDIT: I reread this after posting it and sorry if the text seems messy; I'm very tired.\nWhat I mean is that the places where the mouse is supposed to click are determined by coordinates relative to your monitor. How can I replicate this if the monitor is unplugged?\nSorry again","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":43,"Q_Id":65872167,"Users Score":0,"Answer":"No software solution, but you could use an \"HDMI dummy\" with your current resolution. It emulates a display, so your computer will work as usual.","Q_Score":0,"Tags":"python,linux,selenium,ui-automation","A_Id":65873363,"CreationDate":"2021-01-24T15:03:00.000","Title":"How to fake output to automate UI actions on headless server?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Note this is a MacOS question not a Linux Question - They are different operating systems\nI'd like to get a meaningful mount point out of python's os.stat(\"foo\").st_dev. At the moment this is just a number and I can't find anywhere to cross reference it.\nAll my searches so far have come up with answers that work on Linux by interrogating \/proc\/... 
but \/proc doesn't exist in MacOS so any such answer will not work.","AnswerCount":2,"Available Count":1,"Score":0.2913126125,"is_accepted":false,"ViewCount":179,"Q_Id":65873319,"Users Score":3,"Answer":"I'm a Linux guy, but if I were not allowed to use \/proc, I would search the \/dev directory for an entry (i.e. filename) which has following stat data:\n\nst_mode indicates that it is a block device (helper: stat.S_ISBLK)\nst_rdev matches the given st_dev value","Q_Score":3,"Tags":"python,macos,stat","A_Id":65873732,"CreationDate":"2021-01-24T16:48:00.000","Title":"Is there a way to get a meaningful mount point for st_dev on MacOS?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"So I made an app using python and kvlang, and I was trying to get all the files into a one standalone \"exe\" file. I needed to include the \".kv\" file and my main script. I was using pyinstaller and wrote this command:\npyinstaller --onefile -w --icon=download.ico --add-data filefinder.kv;. filefinder.py\nAnd it all went well - no errors or anything but when I launch the app I just get a quick flash of a white window and then it closes. I have determined that the error must be because of some issue with the \".kv\" file but I am not able to fix it cause there's no errors, Nothing! I checked and the app works with the \"onedir\" option but I need to make it smaller in size. I also tried the \"auto-py-to-exe\" but it gives the same result. I am happy to provide any more info should you need it to help me resolve this issue. 
Cheers!\nAdditional info:\nSystem: Windows 10 pro\nPython: 3.9.1\nkivy: 2.0.0\nPyinstaller: 4.2","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1358,"Q_Id":65874054,"Users Score":2,"Answer":"Not sure why it doesn't work, but run the exe in command prompt and then when it fails the error message will not disappear.\nAdd lots of logs to your application, these can be print statements, as those will always end up on stdout.\ni.e. on the first entrypoint, print(\"Running main\")\nWhen you call your first function:\nprint('calling function_name()')\nOnce that has finished\nprint('function_name() complete')\nAnd so on and so forth until you find where exactly the program stops functioning.\nStart -> cmd -> navigate to your file using cd -> type in the name of the exe to run it.","Q_Score":1,"Tags":"python,pyinstaller,kivy-language","A_Id":65874085,"CreationDate":"2021-01-24T17:53:00.000","Title":"Python app not working after using pyinstaller but doesn't give any errors","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've been trying to update the color of a GLMeshItem in a 3d pyqtgraph plot with some success. Using the setColor() member of the GLMeshItem class, I can set the color initially, and sometimes, the face color of the mesh will update, but not always.\nI've had this working in the past, so I know it is possible to do, but can't seem to figure it out this time. According to the documentation, setColor() will Set the default color to use when no vertex or face colors are specified. I'm a bit confused as to whether this means I can dynamically change colors, or only set the color initially.\nFor context, I have a 3d plot with a mesh in it. I read date from a file, then want to set the color of the mesh based on that data. 
I can post more code if needed, but my program is several hundred lines long and it would be tricky to extract the necessary bits to make the problem reproducible. If it would help I can definitely do that, however. For now here is the line I am using to set the color:\n_mesh_model is a GLMeshItem.\nColor is a tuple of the form (R, G, B, Alpha)\nself._mesh_model.setColor(color)","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":127,"Q_Id":65875040,"Users Score":0,"Answer":"Ok, after some searching I figured it out.\nInstead of self._mesh_model.setColor(color), where color is a tuple in the form RGBA, I used this: QtGui.QColor(R, G, B) where R, G and B are values between 0 and 255.\nNow the mesh color updates as expected.","Q_Score":0,"Tags":"python,pyqtgraph","A_Id":65877351,"CreationDate":"2021-01-24T19:35:00.000","Title":"Updating the color of a GLMeshItem","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I use the points as a loop input in a vtkSelectPolyData filter, then enable generation of selection scalars, and get the enclosed surface patch using vtkClipPolyData, but sometimes vtkSelectPolyData can\u2019t select the area inside the loop and gives the wrong area, or returns that the vtkSelectPolyData filter can not follow the edge. I tried to preprocess the poly data with vtkCleanPolyData but the problem was not solved.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":106,"Q_Id":65875409,"Users Score":0,"Answer":"If you want the result to follow the edges in the input data, then stop using vtkClipPolyData.\nUse vtkSelectPolyData with GenerateSelectionScalarOff(); if you want both the data inside and outside the loop, do make sure to use GenerateUnselectedOutputOn(), which should give you a second (unselected) output. 
Remember to use the correct selection mode like \"SetSelectionModeToSmallestRegion()\" or \"SetSelectionModeToLargestRegion()\" or whatever other mode suits your needs.\nDo note that vtkSelectPolyData has two outputs: to get the selected data use \"GetOutput()\"; for the second output use \"GetUnselectedOutput()\".","Q_Score":0,"Tags":"python,vtk","A_Id":65994314,"CreationDate":"2021-01-24T20:10:00.000","Title":"Why the vtkSelectPolyData returns wrong area?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I know there are various ways to calculate the instances of a character in a string, such as using count(), collections.Counter, regex, etc., but which way would be most efficient if I only wanted to find the instances of one specific character in a string?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":37,"Q_Id":65875749,"Users Score":0,"Answer":"count() is probably the fastest and simplest option. 
You should stick to that :)","Q_Score":0,"Tags":"python,string,performance","A_Id":65875767,"CreationDate":"2021-01-24T20:44:00.000","Title":"Most efficient way to calculate ocurrences of a character in a string?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a uint8 image and a mask of it (float32).\nThe mask has ones where the object I want to keep is, and zeros for the background.\nI want to make a new image that shows the real colored object instead of the ones, and keep the background zeros.\nI tried to multiply both images but it says that the mask has 3 dimensions and the colored image 4 dimensions.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":59,"Q_Id":65875845,"Users Score":0,"Answer":"You seem to have an RGBA image, which may be converted to 3-channel rgb\/bgr via cv2.cvtColor() function. 
Also, I believe it would be clearer to do the masking with cv2.bitwise_and().","Q_Score":0,"Tags":"python-3.x,opencv,image-processing","A_Id":65877748,"CreationDate":"2021-01-24T20:53:00.000","Title":"How can I take out an object from an image to remove the background?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am implementing a system that checks the plagiarism of documents.\nOur stack is vuejs, nodejs\/express and flask for python.\nMy question is that I have a page where the user will upload his documents for checking, and the vue ui will send a request to the backend apis with the user file to check the similarity; while this process is running a loading overlay is displayed on the same page.\nI want to update this page with live steps from the backend side like \"extracting\", \"searching\", \"comparing\", \"generating report\".\nNote that the request sent with the user file has only one response.\nSo, any ideas how I can achieve this step?\nThanks","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":156,"Q_Id":65877338,"Users Score":0,"Answer":"You can return a request_id and after that you can use the id to check on the status\/stage of the request.","Q_Score":0,"Tags":"javascript,python,node.js,express,vue.js","A_Id":65877359,"CreationDate":"2021-01-25T00:01:00.000","Title":"How to send live status update from express\/flask backends to vuejs frontend","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"It says Fatal error in launcher: Unable to create process using '\"c:\\python38\\python.exe\" \"C:\\Python38\\Scripts\\pip.exe\" ': The system cannot find 
the file specified.\", when I use pip alone.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":92,"Q_Id":65878031,"Users Score":0,"Answer":"You seem to have an issue with an old Python installation that wasn't fully removed.\nAn easy way to resolve it is by overwriting the system PATH variable.\nPress Winkey+Break (or Winkey+Pause depending on keyboard), go to \"advanced system settings\" then \"environment variables\".\nOn user variables you have \"path\". Edit it and add this new path:\nC:\\Users\\\\AppData\\Local\\Programs\\Python\\Python39\\Scripts\nMove this all the way to the top and press OK.\nReopen your cmd. Should work.","Q_Score":0,"Tags":"python","A_Id":65878202,"CreationDate":"2021-01-25T01:59:00.000","Title":"Can't use pip without saying py -m pip","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"It says Fatal error in launcher: Unable to create process using '\"c:\\python38\\python.exe\" \"C:\\Python38\\Scripts\\pip.exe\" ': The system cannot find the file specified.\", when I use pip alone.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":92,"Q_Id":65878031,"Users Score":0,"Answer":"You can uninstall Python and then reinstall it, making sure to select the pip option during installation.","Q_Score":0,"Tags":"python","A_Id":65878450,"CreationDate":"2021-01-25T01:59:00.000","Title":"Can't use pip without saying py -m pip","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"vincens@VMAC: python3\ndyld: Library not\nloaded:\/System\/Library\/Frameworks\/CoreFoundation.framework\/Versions\/A\/CoreFoundation\nReferenced 
from:\n\/Library\/Frameworks\/Python.framework\/Versions\/3.6\/Resources\/Python.app\/Contents\/MacOS\/Python\nReason: image not found\n[1] 25278 abort python3\n\npython3 env is not used when I update my Mac to the latest version. How can I solve it?","AnswerCount":5,"Available Count":2,"Score":0.1586485043,"is_accepted":false,"ViewCount":26865,"Q_Id":65878141,"Users Score":4,"Answer":"That's because you have installed both Python 3.6 from the system library and Python 3.9 from another source like brew, and something is wrong with the lower-version Python. Please manually delete the Python within \/Library\/Frameworks. The command sudo rm -rf \/Library\/Frameworks\/Python.framework\/Versions\/3.6 worked for me.","Q_Score":17,"Tags":"python-3.x,macos,pycharm","A_Id":70687123,"CreationDate":"2021-01-25T02:14:00.000","Title":"dyld: Library not loaded: \/System\/Library\/Frameworks\/CoreFoundation.framework\/Versions\/A\/CoreFoundation","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"vincens@VMAC: python3\ndyld: Library not\nloaded:\/System\/Library\/Frameworks\/CoreFoundation.framework\/Versions\/A\/CoreFoundation\nReferenced from:\n\/Library\/Frameworks\/Python.framework\/Versions\/3.6\/Resources\/Python.app\/Contents\/MacOS\/Python\nReason: image not found\n[1] 25278 abort python3\n\npython3 env is not used when I update my Mac to the latest version. How can I solve it?","AnswerCount":5,"Available Count":2,"Score":1.0,"is_accepted":false,"ViewCount":26865,"Q_Id":65878141,"Users Score":23,"Answer":"This worked for me with the same issue.\nCheck if you have multiple Python3.x versions installed. In my case I had Python3.6 and Python3.9 installed. 
brew uninstall python3 did not remove Python3.6 completely.\nI was able to call Python3.9 from Terminal by explicitly running python3.9 instead of python3, which led me to believe the issue was caused by ambiguity in which Python3.x resource was to be used.\nManually deleting \/Library\/Frameworks\/Python.framework\/Versions\/3.6 resulted in Python3 running as expected.\nhint:\nIt may be sufficient to remove \/Library\/Frameworks\/Python.framework\/Versions\/3.6 from your PATH environment variable.","Q_Score":17,"Tags":"python-3.x,macos,pycharm","A_Id":65895716,"CreationDate":"2021-01-25T02:14:00.000","Title":"dyld: Library not loaded: \/System\/Library\/Frameworks\/CoreFoundation.framework\/Versions\/A\/CoreFoundation","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am trying to program a simple script and I would like to know if anyone has the answer to this question.\nWhen I have a module, for example WMI from 'pip install wmi', in the form 'import wmi' in my code, how do I get the pyinstaller module to compile the wmi module with the exe file?\nI have tried importing from the source code in a folder, for example 'from wmi import wmi', but I got no luck when launching the exe file, only in the raw python file. Also, just to note, when I compile the script I do the command 'pyinstaller vb.py --onefile'","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":75,"Q_Id":65878909,"Users Score":0,"Answer":"Remove --onefile since wmi counts as a file (or multiple files if it's a package)","Q_Score":0,"Tags":"python,module,pyinstaller,exe","A_Id":65879149,"CreationDate":"2021-01-25T04:16:00.000","Title":"need help using pyinstaller","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and 
Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am making a discord bot in python. At one point in coding the bot, it stopped updating; it kept using old code. Then I realized the bot never stopped. I kicked it and reinvited it, but the problem did not go away.\nDoes anybody know why?","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":121,"Q_Id":65879004,"Users Score":0,"Answer":"Kicking will not fix it; it's a problem some code compilers may have, or the previously launched instance is still running.\nEditors like Sublime can cause this problem if the existing program has not been stopped.\nThis can be easily fixed by going back into Discord's developer portal and regenerating the token. After this, update your code with the new token and run the bot again. This will force-terminate any running builds.","Q_Score":0,"Tags":"python,discord,discord.py","A_Id":65879050,"CreationDate":"2021-01-25T04:30:00.000","Title":"Discord.py bot stays active even after code is stopped","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am making a discord bot in python. At one point in coding the bot, it stopped updating; it kept using old code. Then I realized the bot never stopped. 
I kicked it and reinvited it, but the problem did not go away.\nDoes anybody know why?","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":121,"Q_Id":65879004,"Users Score":0,"Answer":"It's normal, after about 30 seconds or so, the bot should stop","Q_Score":0,"Tags":"python,discord,discord.py","A_Id":65879030,"CreationDate":"2021-01-25T04:30:00.000","Title":"Discord.py bot stays active even after code is stopped","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I converted an empty Python file to an exe file with the pyinstaller library, and the exe file size was 6.694 MB.\nWhy was the size of the exe file so large even though the Python file was empty and there was no reference to it?\nreally why?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":230,"Q_Id":65880874,"Users Score":1,"Answer":"I think I found the answer to my questions myself.\nIn many languages, such as C#, the script file is initially completely empty.\nIn C#, for example, to print text on a console, you must first use the system library, the Console class, WriteLine method, and then do so.\nWhile in each Python file you have access to a number of default classes and functions. 
Including the print function that you can use to easily print text.\nThe size of this raw data in Python files is not small at all.\nIn fact, before the first line of each Python file, there is a line that imports the Python default library.\nI wish Python was completely empty at first.","Q_Score":0,"Tags":"python,compilation,exe","A_Id":66008241,"CreationDate":"2021-01-25T07:57:00.000","Title":"Why are Python exe files so large?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"If code is divided into too many segments, can this make the program slow?\nFor example - Creating a separate file for just a single function.\nIn my case, I'm using Python, and suppose there are two functions that I need in the main.py file. If I placed them in different files (just containing the function).\n(Suppose) Also, If I'm using the same library for the two functions and I've divided the functions into separate files.\nHow can this affect efficiency? (Machine performance-wise and Team-wise).","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":56,"Q_Id":65881407,"Users Score":1,"Answer":"It depends on the language, the framework you use etc. However, dividing the code too much can make it unreadable, which is (most of the time) the bigger problem. Since most of the time you will (or should) be working in a team, you should consider how readable your code would be for them.\nHowever, answering this in a definite way is difficult. 
You should ask a Senior developer on your team for guidelines.","Q_Score":1,"Tags":"python,performance,optimization","A_Id":65881724,"CreationDate":"2021-01-25T08:41:00.000","Title":"Can dividing code too much make it inefficient?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a Flask app running in Google Cloud App Engine. I want the user to be able to call MATLAB functions on their local instance - if they have MATLAB installed locally and the correct license, of course.\nRunning locally the app works well using matlab.engine, however, when deployed to google cloud platform it fails during build. Looking in the logs:\n\nModuleNotFoundError: No module named 'matlabengineforpython3_7\n\nSo I suspect it is because the server cannot import the required dlls etc. for the python matlab engine package to work.\nIs there a way to pass the required files to google app engine? Is this approach even possible?\nMy users will always have a local copy of MATLAB, so I am trying to find a solution that avoids needing to pay for the MATLAB server license.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":95,"Q_Id":65882168,"Users Score":1,"Answer":"I don't believe that any solution is possible that would avoid a Matlab server license. Your server cannot access installed Matlab on the computers of your users.\nTo install non-Python software with App Engine you need to use a custom runtime with App Engine Flexible. 
Check the GAE docs for more details.","Q_Score":0,"Tags":"python,google-app-engine,flask,matlab-engine","A_Id":65886318,"CreationDate":"2021-01-25T09:37:00.000","Title":"Is it possible to call matlab.engine from Flask webapp on Google Cloud?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"This is a more general question on how the solution could be designed.\nThe goal is to have an API which would take some a text or a list of texts, process it and return transformed ones. How should I approach it so that whenever a text is requested, all the requests that contains list of texts would wait and one text would be processed first?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":93,"Q_Id":65884372,"Users Score":0,"Answer":"In general you would have a heap (priority queue) of requests in line. Your API can fetch highest-priority requests from this heap.\nIn this particular case, if all you need is strs first and lists second, define your priority function like lambda x: 2 if isinstance(x, str) else 1.\nNote that this does not take into account of potential timeouts.. if you have lots of str requests, the list ones could have to wait a while.","Q_Score":1,"Tags":"python,nginx,flask,celery,scheduled-tasks","A_Id":65884481,"CreationDate":"2021-01-25T12:02:00.000","Title":"Queuing tasks with prioritization rules","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Can anyone help me with this question? I'm fairly new to coding so help would be much appreciated\nQn: Define a function that expects a stack as an argument. 
The function builds and returns an instance of LinkedQueue that contains the elements in the stack. The function assumes that the stack has the interface described in the previous stack section. The function's postconditions are that the stack is left in the same state as it was before the function was called, and that the queue's front element is the one at the top of the stack.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":34,"Q_Id":65895845,"Users Score":0,"Answer":"Looks like school work. :D\nFor a stack, you can only push\/pop on it.\n1st, pop elements, one by one, to create the linked list, and push the popped elements onto a reversed stack.\n2nd, pop everything from the reversed stack and push it back onto the original stack.","Q_Score":0,"Tags":"python,function,data-structures,stack,queue","A_Id":65895950,"CreationDate":"2021-01-26T04:19:00.000","Title":"How to define a function that expects a stack as an argument? (full question below)","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I installed django cookiecutter in Ubuntu 20.4\nwith postgresql when I try to make migrate to the database I get this error:\n\npython manage.py migrate\nTraceback (most recent call last): File \"manage.py\", line 10, in\n\nexecute_from_command_line(sys.argv) File \"\/home\/mais\/PycharmProjects\/django_cookiecutter_task\/venv\/lib\/python3.8\/site-packages\/django\/core\/management\/init.py\",\nline 381, in execute_from_command_line\nutility.execute() File \"\/home\/mais\/PycharmProjects\/django_cookiecutter_task\/venv\/lib\/python3.8\/site-packages\/django\/core\/management\/init.py\",\nline 375, in execute\nself.fetch_command(subcommand).run_from_argv(self.argv) File 
\"\/home\/mais\/PycharmProjects\/django_cookiecutter_task\/venv\/lib\/python3.8\/site-packages\/django\/core\/management\/base.py\",\nline 323, in run_from_argv\nself.execute(*args, **cmd_options) File \"\/home\/mais\/PycharmProjects\/django_cookiecutter_task\/venv\/lib\/python3.8\/site-packages\/django\/core\/management\/base.py\",\nline 361, in execute\nself.check() File \"\/home\/mais\/PycharmProjects\/django_cookiecutter_task\/venv\/lib\/python3.8\/site-packages\/django\/core\/management\/base.py\",\nline 387, in check\nall_issues = self._run_checks( File \"\/home\/mais\/PycharmProjects\/django_cookiecutter_task\/venv\/lib\/python3.8\/site-packages\/django\/core\/management\/commands\/migrate.py\",\nline 64, in _run_checks\nissues = run_checks(tags=[Tags.database]) File \"\/home\/mais\/PycharmProjects\/django_cookiecutter_task\/venv\/lib\/python3.8\/site-packages\/django\/core\/checks\/registry.py\",\nline 72, in run_checks\nnew_errors = check(app_configs=app_configs) File \"\/home\/mais\/PycharmProjects\/django_cookiecutter_task\/venv\/lib\/python3.8\/site-packages\/django\/core\/checks\/database.py\",\nline 9, in check_database_backends\nfor conn in connections.all(): File \"\/home\/mais\/PycharmProjects\/django_cookiecutter_task\/venv\/lib\/python3.8\/site-packages\/django\/db\/utils.py\",\nline 216, in all\nreturn [self[alias] for alias in self] File \"\/home\/mais\/PycharmProjects\/django_cookiecutter_task\/venv\/lib\/python3.8\/site-packages\/django\/db\/utils.py\",\nline 213, in iter\nreturn iter(self.databases) File \"\/home\/mais\/PycharmProjects\/django_cookiecutter_task\/venv\/lib\/python3.8\/site-packages\/django\/utils\/functional.py\",\nline 80, in get\nres = instance.dict[self.name] = self.func(instance) File \"\/home\/mais\/PycharmProjects\/django_cookiecutter_task\/venv\/lib\/python3.8\/site-packages\/django\/db\/utils.py\",\nline 147, in databases\nself._databases = settings.DATABASES File 
"\/home\/mais\/PycharmProjects\/django_cookiecutter_task\/venv\/lib\/python3.8\/site-packages\/django\/conf\/init.py\",\nline 79, in getattr\nself._setup(name) File \"\/home\/mais\/PycharmProjects\/django_cookiecutter_task\/venv\/lib\/python3.8\/site-packages\/django\/conf\/init.py\",\nline 66, in _setup\nself._wrapped = Settings(settings_module) File \"\/home\/mais\/PycharmProjects\/django_cookiecutter_task\/venv\/lib\/python3.8\/site-packages\/django\/conf\/init.py\",\nline 176, in init\nraise ImproperlyConfigured(\"The SECRET_KEY setting must not be empty.\") django.core.exceptions.ImproperlyConfigured: The SECRET_KEY\nsetting must not be empty.\n\nI followed all the instructions in the cookiecutter docs and ran createdb; what is wrong?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":167,"Q_Id":65897801,"Users Score":0,"Answer":"Your main problem is very clear in the logs.\nYou need to set the SECRET_KEY environment variable and give it a value; that should get you past this error message. It might throw another error if there are some other configurations that are not set properly.","Q_Score":0,"Tags":"python-3.x,django,django-rest-framework","A_Id":65898014,"CreationDate":"2021-01-26T08:12:00.000","Title":"Django cookiecutter with postgresql setup on Ubuntu 20.4 can't migrate","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I've been working with kivymd for a while and I want to create a screen whose contents will change depending on the content that is clicked. For example, let's say I have a home screen with products; when each product is clicked, another screen will open with the image and details of that specific product, and that should be possible for every other product","AnswerCount":1,"Available 
Count":1,"Score":0.0,"is_accepted":false,"ViewCount":88,"Q_Id":65898199,"Users Score":0,"Answer":"You can use 2 screens for this.\n\nMainScreen to show all the products\nProductscreen to show details of the product.\n\nYou can add the contents on the screen according to the product being clicked. This way, you do not need to create a separate screen for each product.\nNote: Since the contents on the productscreen will be added at runtime, you will need to write it in .py file and not .kv file.","Q_Score":0,"Tags":"python,dynamic,kivy,kivymd","A_Id":65913437,"CreationDate":"2021-01-26T08:43:00.000","Title":"Dynamic screens in kivymd","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"How to change an element in np array by index without any loops and if statements.\nE.g we have array [1,2,3,4], and every second element, starting from 0, I want to change to 10. 
So as to get [10,2,10,4].","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":59,"Q_Id":65899550,"Users Score":0,"Answer":"Simply, you can use: a[::2]=10","Q_Score":0,"Tags":"python,numpy","A_Id":65899709,"CreationDate":"2021-01-26T10:23:00.000","Title":"How to change element in np.array by index","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to install jupyterlab via the command terminal and it gave me the following warning:\nWARNING: The script jupyter-server.exe is installed in 'C:\\Users\\Benedict\\AppData\\Roaming\\Python\\Python39\\Scripts' which is not on PATH.\nConsider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.\nPlease, how do I add the directory to PATH? Thank you","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":623,"Q_Id":65899561,"Users Score":0,"Answer":"As I can see, you haven't added it to PATH. To do that, follow these steps:\n\nOpen the advanced system settings.\nSelect Environment Variables.\nClick on Path and press Edit.\nClick on New and enter the path to your Python Scripts directory.\nPress OK and reopen Jupyter.\nThat's it.","Q_Score":0,"Tags":"python-3.x,pandas,numpy,pip,jupyter-lab","A_Id":65899754,"CreationDate":"2021-01-26T10:24:00.000","Title":"How to add a directory to a path?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"On django-background-tasks==1.1.11 (Django==2.2, Python 3.6.9), I have this problem where every time I run python manage.py migrate, the table 
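The slicing answer above in full, runnable form: a NumPy slice with step 2 selects every second element starting at index 0, and assigning a scalar to it broadcasts over all selected positions.

```python
import numpy as np

a = np.array([1, 2, 3, 4])
a[::2] = 10          # every second element, starting at index 0
print(a)             # [10  2 10  4]
```

This works without any loop or condition because NumPy broadcasts the scalar 10 across the sliced view.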
background_task_completedtask gets deleted. This breaks my background tasks. So far I have found a way to reverse it, as it is a separate migration from the initial one, meaning I can just run python manage.py migrate background_task 0001_initial to restore it, but this does mean it will still be removed by the next migration.\nAny ideas for a more permanent solution?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":183,"Q_Id":65900159,"Users Score":0,"Answer":"Found a (somewhat hacky) permanent solution myself:\nBy faking migrations (python manage.py migrate --fake (or python manage.py migrate appname --fake)), you make Django think the migration has been executed without actually executing it. By doing this with the migration that was bothering me, I managed to get everything working again.","Q_Score":0,"Tags":"python,django,django-migrations,background-task","A_Id":65940158,"CreationDate":"2021-01-26T11:06:00.000","Title":"Stop Django background-task from deleting completed_task model","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Now, when I run it, the error comes:\nImportError: dlopen(\/Users\/v\/Library\/Python\/3.8\/lib\/python\/site-packages\/PyQt5\/QtWidgets.abi3.so, 2): no suitable image found. 
Did find:\n\/Users\/v\/Library\/Python\/3.8\/lib\/python\/site-packages\/PyQt5\/QtWidgets.abi3.so: mach-o, but wrong architecture\n\/Users\/v\/Library\/Python\/3.8\/lib\/python\/site-packages\/PyQt5\/QtWidgets.abi3.so: mach-o, but wrong architecture","AnswerCount":8,"Available Count":2,"Score":0.0748596907,"is_accepted":false,"ViewCount":12971,"Q_Id":65901162,"Users Score":3,"Answer":"Try installing the pyqt under the ARM architecture as below\narch -arm64 brew install pyqt","Q_Score":15,"Tags":"python,macos,pyqt5,apple-silicon,apple-m1","A_Id":67355598,"CreationDate":"2021-01-26T12:12:00.000","Title":"How can i run pyqt5 on my mac with M1chip","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"now. when i run it. the error conmes\nImportError: dlopen(\/Users\/v\/Library\/Python\/3.8\/lib\/python\/site-packages\/PyQt5\/QtWidgets.abi3.so, 2): no suitable image found. 
Did find:\n\/Users\/v\/Library\/Python\/3.8\/lib\/python\/site-packages\/PyQt5\/QtWidgets.abi3.so: mach-o, but wrong architecture\n\/Users\/v\/Library\/Python\/3.8\/lib\/python\/site-packages\/PyQt5\/QtWidgets.abi3.so: mach-o, but wrong architecture","AnswerCount":8,"Available Count":2,"Score":0.024994793,"is_accepted":false,"ViewCount":12971,"Q_Id":65901162,"Users Score":1,"Answer":"In my case it's\u00a0work: arch -x86_64 brew install pyqt\nAnd all required pyqt start from arch -x86_64 or start from rosetta (through emulator).","Q_Score":15,"Tags":"python,macos,pyqt5,apple-silicon,apple-m1","A_Id":69896096,"CreationDate":"2021-01-26T12:12:00.000","Title":"How can i run pyqt5 on my mac with M1chip","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"As far as I understand it, in Bert's operating logic, he changes 50% of his sentences that he takes as input. It doesn't touch the rest.\n1-) Is the changed part the transaction made with tokenizer.encoder? And is this equal to input_ids?\nThen padding is done. Creating a matrix according to the specified Max_len. the empty part is filled with 0.\nAfter these, cls tokens are placed per sentence. Sep token is placed at the end of the sentence.\n2-) Is input_mask happening in this process?\n3 -) In addition, where do we use input_segment?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":691,"Q_Id":65901473,"Users Score":1,"Answer":"The input_mask obtained by encoding the sentences does not show the presence of [MASK] tokens. Instead, when the batch of sentences are tokenized, prepended with [CLS], and appended with [SEP] tokens, it obtains an arbitrary length.\n\nTo make all the sentences in the batch has fixed number of tokens, zero padding is performed. 
The input_mask variable shows whether a given token position contains an actual token or is a zero-padded position.\n\nThe [MASK] token is used only if you want to train on the Masked Language Model (MLM) objective.\n\nBERT is trained on two objectives, MLM and Next Sentence Prediction (NSP). In NSP, you pass two sentences and try to predict whether the second sentence follows the first. segment_id holds the information of which sentence a particular token belongs to.","Q_Score":0,"Tags":"python,word,embedding,bert-language-model","A_Id":65921539,"CreationDate":"2021-01-26T12:34:00.000","Title":"About Bert embedding (input_ids, input_mask)","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to set up my PyCharm run\/debug configuration for Robot Framework tests. I would like to create a configuration where I can run any robot file in any directory with the run\/debug button.\n$FileName$ works when set in the Parameters field.\nBut the working directory works only when the path is real. I've tried $FileDir$ and $FilePath$. Neither of those worked.\nNote: I know about the File->Settings->External Tools option, but I believe there is also a way via the run\/debug configuration","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":70,"Q_Id":65903680,"Users Score":0,"Answer":"Ok, so setting it to the directory of the script worked. If you select the script in the first field, this folder is set for you automatically. 
It worked for me like this.","Q_Score":0,"Tags":"python,pycharm,robotframework","A_Id":65934187,"CreationDate":"2021-01-26T14:56:00.000","Title":"Can I set working directory in pycharm run\/debug configuration witch something like $FileDir$?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Is it possible to define marshmallow schema for top-level lists (e.g. obj = ['John', 'Bill', 'Jack']) or dictionaries (e.g. obj = {'user1': 123, 'user2': 456, 'user3': 789} - keys are arbitrary)?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":256,"Q_Id":65904700,"Users Score":1,"Answer":"Top-level dict is the normal case. To accept arbitrary keys, see unknown=INCLUDE, but then you have no validation, unlike when using the Dict field.\nA solution to this would be either to define a default Field for unknown fields, or to extend Field to let it act as Schema. The former was already suggested (by me, I belive) and shouldn't be too hard to achieve but no one has taken the time to work on it. The latter was suggested by Jared and would be an important refactor. No one is working on it.\nTop-level list of dicts is the normal case of using a schema with many=True. Top-level list of anything is not achievable yet. 
This would work with the refactor mentioned above.","Q_Score":0,"Tags":"python,python-3.x,marshmallow","A_Id":65916171,"CreationDate":"2021-01-26T15:56:00.000","Title":"Top level lists and dictionaries with marshmallow","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a Python script.\nA particular method takes a text file and may creates0-3 files.\nI have sample text files, and the expected output files for each.\nHow would I set up this script for testing?\nI use unittest for testing functions that do not have file i\/o currently.\n\nSampleInputFile1 -> expect 0 files generated.\nSampleInputFile2 -> expect 1 files generated with specific output.\nSampleInputFile3 -> expect 3 files generated, each with specific output.\n\nI want to ensure all three sample files, expected files are generated with expected content. 
Testing script question","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":69,"Q_Id":65906695,"Users Score":0,"Answer":"Maybe I don't understand the question, but do you use a program to divide a text into different subtexts?\nWhy don't you use train_test_split from sklearn to get the test and training files:\nsklearn.model_selection.train_test_split(*arrays, test_size=None, train_size=None, random_state=None, shuffle=True, stratify=None)","Q_Score":0,"Tags":"python,testing,io","A_Id":65907109,"CreationDate":"2021-01-26T17:57:00.000","Title":"How do I test Python with test files and expected output files","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm supposing that to create a subclass, the parent class must be defined. If that is the case, the \"object\" class must be defined before the creation of the \"type\" class. So, how is \"object\" an object of the \"type\" class?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":23,"Q_Id":65908575,"Users Score":0,"Answer":"Neither object nor type can be defined in pure Python: object because it is the root of the class hierarchy (every class, including type, inherits from it) and type because it is its own metaclass. They are provided by the Python implementation.","Q_Score":0,"Tags":"python,class","A_Id":65908601,"CreationDate":"2021-01-26T20:11:00.000","Title":"Type of object class in python is defined inside itself?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm looking for an IDE that can act like RStudio to develop Python applications in. I love being able to execute code chunks ad-hoc just to see what they do, change it a bit, look at the output again, etc. 
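The circular object/type relationship described in that answer can be observed directly from the interpreter:

```python
# object is an instance of type ...
assert isinstance(object, type)
# ... while type is a subclass of object ...
assert issubclass(type, object)
# ... and type is its own metaclass.
assert type(type) is type
# object sits at the root: it has no base classes of its own.
assert object.__bases__ == ()
```

Neither of these classes could be bootstrapped with a plain `class` statement, since every `class` statement already depends on both of them existing.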
However, I also want the structure that Pycharm brings, being able to open an entire repo as a project. Does anyone have any recommendations?","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":951,"Q_Id":65908588,"Users Score":0,"Answer":"Try Jupyter notebook, I think, it's what you want. Or you can run IPython notebooks in Pycharm e.g.","Q_Score":1,"Tags":"python,pycharm,ide,rstudio","A_Id":65908612,"CreationDate":"2021-01-26T20:12:00.000","Title":"Develop Python with something like RStudio","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm looking for an IDE that can act like RStudio to develop Python applications in. I love being able to execute code chunks ad-hoc just to see what they do, change it a bit, look at the output again, etc. However, I also want the structure that Pycharm brings, being able to open an entire repo as a project. Does anyone have any recommendations?","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":951,"Q_Id":65908588,"Users Score":0,"Answer":"I personally hate Jupyter Notebooks. I'd recommend using Spyder, Pycharm, or VScode with extra emphasis on Spyder because it's python native and allows for remote connections for free (Pycharm is more sophisticated but you have to pay for the version that lets you connect to a remote kernel).\nTo execute a block of code in Spyder you just highlight what you want to run in the text editor and press f9. 
Spyder has similar repo\/ file management capabilities as Pycharm.","Q_Score":1,"Tags":"python,pycharm,ide,rstudio","A_Id":65909219,"CreationDate":"2021-01-26T20:12:00.000","Title":"Develop Python with something like RStudio","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have been trying to link a main module with another module, both need each other to perform a specific task, so I imported them to each other, but I get an error about circular importation. I've been trying to avoid this but it keeps raising the error, please how do I correct this??","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":36,"Q_Id":65908925,"Users Score":0,"Answer":"Simply create a 3rd module, from which you can then import the functions, classes and variables to the 2 other scripts. Normally issues like these are signs of poor code structuring. 2 separate scripts should never both depend on one another at the same time. Avoid such design blunders in the future.","Q_Score":0,"Tags":"python,python-3.x,string,import,module","A_Id":65908980,"CreationDate":"2021-01-26T20:37:00.000","Title":"How do you prevent a circular import exception","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Situation: A csv lands into AWS S3 every month. The vendor adds\/removes\/modifies columns from the file as they please. So the schema is not known ahead of time. The requirement is to create a table on-the-fly in Snowflake and load the data into said table. 
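The circular-import answer above can be demonstrated end to end. This sketch simulates the suggested restructuring by writing three tiny modules to a temporary directory; the module and function names (`shared`, `mod_a`, `mod_b`, `greet`) are made up for illustration.

```python
import os
import sys
import tempfile

# The shared function moves into a third module; both dependents
# import from it, so neither imports the other.
tmp = tempfile.mkdtemp()
with open(os.path.join(tmp, "shared.py"), "w") as f:
    f.write("def greet(name):\n    return 'hi ' + name\n")
with open(os.path.join(tmp, "mod_a.py"), "w") as f:
    f.write("from shared import greet\n\ndef run():\n    return greet('a')\n")
with open(os.path.join(tmp, "mod_b.py"), "w") as f:
    f.write("from shared import greet\n\ndef run():\n    return greet('b')\n")

sys.path.insert(0, tmp)
import mod_a, mod_b   # imports succeed: the dependency graph is now acyclic

print(mod_a.run(), mod_b.run())
```

The same layout applies to real files: move whatever both modules need into the third module and import it from both sides.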
Matillion is our ELT tool.\nThis is what I have done so far.\n\nSetup a Lambda to detect the arrival of the file, convert it to JSON, upload to another S3 dir and adds filename to SQS.\nMatillion detects SQS message and loads the file with the JSON Data into Variant column in a SF table.\nSF Stored proc takes the variant column and generates a table based on the number of fields in the JSON data. The VARIANT column in SF only works in this way if its JSON data. CSV is sadly not supported.\n\nThis works with 10,000 rows. The problem arises when I run this with a full file which is over 1GB, which is over 10M rows. It crashes the lambda job with an out of disk space error at runtime.\nThese are the alternatives I have thought of so far:\n\nAttach an EFS volume to the lambda and use it to store the JSON file prior to the upload to S3. JSON data files are so much larger than their CSV counterparts, I expect the json file to be around 10-20GB since the file has over 10M rows.\nMatillion has an Excel Query component where it can take the headers and create a table on the fly and load the file. I was thinking I can convert the header row from the CSV into a XLX file within the Lambda, pass it to over to Matillion, have it create the structures and then load the csv file once the structure is created.\n\nWhat are my other options here? Considerations include a nice repeatable design pattern to be used for future large CSVs or similar requirements, costs of the EFS, am I making the best use of the tools that I are avaialable to me? 
Thanks!!!","AnswerCount":4,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":859,"Q_Id":65909077,"Users Score":0,"Answer":"Why are you converting CSV into JSON; CSV is directly being loaded into table without doing any data transformation specifically required in case of JSON, the lateral flatten to convert json into relational data rows; and why not use Snowflake Snowpipe feature to load data directly into Snowflake without use of Matallion. You can split large csv files into smaller chunks before loading into Snowflake ; this will help in distributing the data processing loads across SF Warehouses.","Q_Score":1,"Tags":"python,amazon-web-services,lambda,snowflake-cloud-data-platform,matillion","A_Id":65913376,"CreationDate":"2021-01-26T20:49:00.000","Title":"Data Ingestion: Load Dynamic Files from S3 to Snowflake","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Situation: A csv lands into AWS S3 every month. The vendor adds\/removes\/modifies columns from the file as they please. So the schema is not known ahead of time. The requirement is to create a table on-the-fly in Snowflake and load the data into said table. Matillion is our ELT tool.\nThis is what I have done so far.\n\nSetup a Lambda to detect the arrival of the file, convert it to JSON, upload to another S3 dir and adds filename to SQS.\nMatillion detects SQS message and loads the file with the JSON Data into Variant column in a SF table.\nSF Stored proc takes the variant column and generates a table based on the number of fields in the JSON data. The VARIANT column in SF only works in this way if its JSON data. CSV is sadly not supported.\n\nThis works with 10,000 rows. The problem arises when I run this with a full file which is over 1GB, which is over 10M rows. 
It crashes the lambda job with an out of disk space error at runtime.\nThese are the alternatives I have thought of so far:\n\nAttach an EFS volume to the lambda and use it to store the JSON file prior to the upload to S3. JSON data files are so much larger than their CSV counterparts, I expect the json file to be around 10-20GB since the file has over 10M rows.\nMatillion has an Excel Query component where it can take the headers and create a table on the fly and load the file. I was thinking I can convert the header row from the CSV into a XLX file within the Lambda, pass it to over to Matillion, have it create the structures and then load the csv file once the structure is created.\n\nWhat are my other options here? Considerations include a nice repeatable design pattern to be used for future large CSVs or similar requirements, costs of the EFS, am I making the best use of the tools that I are avaialable to me? Thanks!!!","AnswerCount":4,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":859,"Q_Id":65909077,"Users Score":0,"Answer":"I also load CSV files from SFTP into Snowflake, using Matillion, with no idea of the schema.\nIn my process, I create a \"temp\" table in Snowflake, with 50 VARCHAR columns (Our files should never exceed 50 columns). Our data always contains text, dates or numbers, so VARCHAR isn't a problem. I can then load the .csv file into the temp table. I believe this should work for files coming from S3 as well.\nThat will at least get the data into Snowflake. How to create the \"final\" table however, given your scenario, I'm not sure.\nI can imagine being able to use the header row, and\/or doing some analysis on the 'type' of data contained in each column, to determine the column type needed.\nBut if you can get the 'final' table created, you could move the data over from temp. 
Or alter the temp table itself.","Q_Score":1,"Tags":"python,amazon-web-services,lambda,snowflake-cloud-data-platform,matillion","A_Id":66094723,"CreationDate":"2021-01-26T20:49:00.000","Title":"Data Ingestion: Load Dynamic Files from S3 to Snowflake","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Situation: A csv lands into AWS S3 every month. The vendor adds\/removes\/modifies columns from the file as they please. So the schema is not known ahead of time. The requirement is to create a table on-the-fly in Snowflake and load the data into said table. Matillion is our ELT tool.\nThis is what I have done so far.\n\nSetup a Lambda to detect the arrival of the file, convert it to JSON, upload to another S3 dir and adds filename to SQS.\nMatillion detects SQS message and loads the file with the JSON Data into Variant column in a SF table.\nSF Stored proc takes the variant column and generates a table based on the number of fields in the JSON data. The VARIANT column in SF only works in this way if its JSON data. CSV is sadly not supported.\n\nThis works with 10,000 rows. The problem arises when I run this with a full file which is over 1GB, which is over 10M rows. It crashes the lambda job with an out of disk space error at runtime.\nThese are the alternatives I have thought of so far:\n\nAttach an EFS volume to the lambda and use it to store the JSON file prior to the upload to S3. JSON data files are so much larger than their CSV counterparts, I expect the json file to be around 10-20GB since the file has over 10M rows.\nMatillion has an Excel Query component where it can take the headers and create a table on the fly and load the file. 
I was thinking I can convert the header row from the CSV into a XLX file within the Lambda, pass it to over to Matillion, have it create the structures and then load the csv file once the structure is created.\n\nWhat are my other options here? Considerations include a nice repeatable design pattern to be used for future large CSVs or similar requirements, costs of the EFS, am I making the best use of the tools that I are avaialable to me? Thanks!!!","AnswerCount":4,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":859,"Q_Id":65909077,"Users Score":0,"Answer":"Why not split the initial csv file into multiple files and then process each file in the same way you currently are?","Q_Score":1,"Tags":"python,amazon-web-services,lambda,snowflake-cloud-data-platform,matillion","A_Id":65910045,"CreationDate":"2021-01-26T20:49:00.000","Title":"Data Ingestion: Load Dynamic Files from S3 to Snowflake","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm creating a python script, that create csv files as result from sybase queries.\nWhen pandas creates the file, all numeric values inside csv file ends with \".0\" (for example, 2.0 instead of 2 )\nI can cast to integer in query, but it affects query performance.\nIn order to solve that, I setted coerce_float parameter to false inside read_sql function.\nHowever all columns are converted to string, and some columns must be floats, and the decimal separator is \".\" and I need \",\" as decimal separator.\nMy question is:\nIs there some way to change default decimal separator as a comma and keep coerce_float to False?\nObs: a simple string replace doesnt solve my problem, because the script will read several query files.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":210,"Q_Id":65910188,"Users 
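One of the answers above suggests splitting the large CSV into smaller files before loading into Snowflake. A minimal, standard-library sketch of that step (the chunk size is arbitrary; each chunk repeats the header row so it can be loaded independently):

```python
import csv
import io

def split_csv(text, rows_per_chunk):
    """Split CSV text into chunks of rows, repeating the header in each."""
    reader = csv.reader(io.StringIO(text))
    header = next(reader)
    chunks, buf = [], []
    for row in reader:
        buf.append(row)
        if len(buf) == rows_per_chunk:
            chunks.append([header] + buf)
            buf = []
    if buf:  # final partial chunk
        chunks.append([header] + buf)
    return chunks

sample = "id,name\n1,a\n2,b\n3,c\n"
chunks = split_csv(sample, 2)
```

In a Lambda this would stream from S3 rather than hold the whole file in memory, but the chunking logic is the same.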
Score":0,"Answer":"First of all, Python uses \".\" to write floats, so you can't have floats written with \",\". If you want floats written with \",\", they must be strings. However, you can save floats with \",\" as the decimal mark when you save the .csv, by passing the argument decimal=',' to df.to_csv().","Q_Score":0,"Tags":"python,pandas,dataframe,sybase","A_Id":65910643,"CreationDate":"2021-01-26T22:27:00.000","Title":"Change default decimal separator in pandas read_sql","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a small Python script that uses min(list) multiple times on the same unchanged list. This got me wondering: if I use the result of functions like min(), len(), etc. multiple times on an unchanged list, is it better for me to store those results in variables, and does it affect memory usage \/ performance at all?\nIf I had to guess, I'd say that if a function like min() gets called many times it'd be better for performance to store it in a variable, but I am not sure of this since I don't really know how Python gets the value or if Python automatically stores this value somewhere as long as the list isn't changed.","AnswerCount":5,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":71,"Q_Id":65911867,"Users Score":0,"Answer":"If you are only using it 1-5 times, it doesn't really matter. But if you are going to call it any more than that, it is best to just save it in a variable. 
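The `decimal` argument mentioned in the pandas answer, sketched with a toy frame (assumes pandas is installed; note a field separator other than ',' is needed once ',' becomes the decimal mark):

```python
import pandas as pd

df = pd.DataFrame({"qty": [2.0, 3.0], "price": [1.5, 2.25]})

# decimal=',' formats floats with a comma; sep=';' avoids clashing
# with the new decimal mark.
csv_text = df.to_csv(index=False, sep=";", decimal=",")
print(csv_text)
```

This converts only on output, so the DataFrame itself keeps real floats and `coerce_float` can stay False-free entirely.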
It will take next to no memory, and very little time to store it and to pull it from memory.","Q_Score":2,"Tags":"python,performance,optimization,min","A_Id":65911912,"CreationDate":"2021-01-27T01:57:00.000","Title":"Is it better for performance to use min() multiple times or to store it in a variable?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is there a possibility to see all pip-installed packages in PyCharm?\nBecause I have the problem: I write in PyCharm and it works fine, but now I want to move the project to a server... And now I don't know how I can quickly export this","AnswerCount":3,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":70,"Q_Id":65914621,"Users Score":1,"Answer":"Use the command pip freeze >requirements.txt locally to export the environment you need into the file,\nthen use the command pip install -r requirements.txt on the server to install the required environment","Q_Score":0,"Tags":"python,pycharm","A_Id":65914668,"CreationDate":"2021-01-27T07:28:00.000","Title":"See pip installations in a PyCharm Project?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've seen some papers providing an information criterion for SVM (e.g. Demyanov, Bailey, Ramamohanarao & Leckie (2012)). But there doesn't seem to exist any implementation of such a method in Python. 
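A tiny illustration of the min() trade-off discussed above: each `min()` call scans the whole list (O(n)), and Python does not cache the result for you, so storing it in a variable avoids repeated scans while the list is unchanged.

```python
data = [7, 3, 9, 1, 5]

# Recomputing: every call walks the whole list again.
total = min(data) + min(data) + min(data)

# Caching: one scan, then cheap reuse from the variable.
lo = min(data)
total_cached = lo + lo + lo

assert total == total_cached  # same result, fewer scans
```

For a five-element list the difference is negligible; for a list of millions of elements inside a loop it is not.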
For instance, sklearn only provides such methods for linear models and random forest\/gradient boosting algorithms.\nMy question then is: is there any implementation of a potential information criterion for SVM in Python?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":27,"Q_Id":65917525,"Users Score":1,"Answer":"You can use SVM with a different kernel for a non-linear model,\nfor example kernel='poly'","Q_Score":0,"Tags":"python,machine-learning,svm,criterion","A_Id":65920315,"CreationDate":"2021-01-27T10:45:00.000","Title":"Information Criterion SVM","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a BitBucket repo which holds code for multiple lambda functions in separate folders. In two of the folders (belonging to separate lambdas), I'm using the same Python function name with a different number of arguments in each lambda.\nThis is being identified by Sonar as a bug.\nHow do I handle my lambdas in such a scenario? Changing the name of the function in either of the lambdas is difficult to implement as I have references to this function from multiple places. Can I edit my Sonar ruleset to accommodate these cases?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":83,"Q_Id":65917883,"Users Score":0,"Answer":"Personally I would do a separate SQ project for each lambda, so that SQ does not mix up code belonging to different projects. My own best practice is that SQ should reflect the code that you build, not the code that you store (i.e. 
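The kernel suggestion in that answer, sketched with scikit-learn (assumed installed) on a toy XOR-style dataset that no linear boundary can separate; the hyperparameters are illustrative defaults, not a tuned model.

```python
from sklearn.svm import SVC

# XOR-style data: not linearly separable.
X = [[0, 0], [1, 1], [0, 1], [1, 0]]
y = [0, 0, 1, 1]

# A polynomial kernel of degree 2 can represent this boundary.
clf = SVC(kernel="poly", degree=2, coef0=1)
clf.fit(X, y)
preds = clf.predict(X)
```

Note this addresses non-linearity only; it does not by itself give an information criterion for model selection, which remains unimplemented in sklearn for SVMs as the question states.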
create SQ projects by project outcome, not by Git repositories).","Q_Score":0,"Tags":"python,python-3.x,aws-lambda,sonarqube,sonarqube-scan","A_Id":65918693,"CreationDate":"2021-01-27T11:08:00.000","Title":"Handling same function names in multiple AWS Lambdas","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"When I want to exit everything, do I first quit the terminal or the notebook in the browser, or what exactly?\nEDIT AS REQUESTED: I first launch Anaconda by searching in Spotlight for \"Anaconda\" and launching it (NOT through the terminal). After Anaconda opens, under Jupyter Notebooks I click \"launch\" (again not through the terminal). Jupyter Notebooks opens in my default browser but the terminal also opens. My question is, is the terminal also supposed to open as I mentioned, or did I do something incorrectly with my installation of Anaconda?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":73,"Q_Id":65918574,"Users Score":0,"Answer":"Yes. When you open the notebook, the terminal also opens. First quit the notebook, then the terminal.","Q_Score":0,"Tags":"python,macos,jupyter-notebook","A_Id":65918819,"CreationDate":"2021-01-27T11:51:00.000","Title":"Launching jupyter notebook in Anaconda on Mac, it also opens the terminal. Is this supposed to happen?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"In one of my interviews, the interviewer asked me about the tuple and the list in Python. 
And asked which is more efficient for finding an element in both.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":245,"Q_Id":65920260,"Users Score":1,"Answer":"The major difference between tuples and lists is that a list is mutable, whereas a tuple is immutable. This means that a list can be changed, but a tuple cannot. The contents of a list can be modified, edited or deleted, while the contents of a tuple are fixed and cannot be modified, edited or deleted.","Q_Score":0,"Tags":"python,django,list,tuples,data-science","A_Id":65920300,"CreationDate":"2021-01-27T13:33:00.000","Title":"What's the difference between a tuple and list in the python which one is more efficient","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"What is the keyboard shortcut to comment a single line of code and selected lines of code in Thonny IDE for Python?\nI.e., the Thonny equivalent of Ctrl + \/ in VS Code","AnswerCount":4,"Available Count":2,"Score":0.049958375,"is_accepted":false,"ViewCount":3015,"Q_Id":65920566,"Users Score":1,"Answer":"Single-line comment: Ctrl + 1.\nMulti-line comment (select the lines to be commented): Ctrl + 4.\nUncomment a multi-line comment: 
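The tuple/list difference in that answer is easy to demonstrate; note that for the interview question about *finding* an element, membership testing with `in` is O(n) for both, so neither is meaningfully faster there.

```python
lst = [1, 2, 3]
tup = (1, 2, 3)

lst[0] = 10              # lists are mutable
try:
    tup[0] = 10          # tuples are not: this raises TypeError
except TypeError:
    pass

# Membership testing works identically on both (linear scan).
assert (2 in lst) and (2 in tup)

import sys
# A tuple of the same items is typically a bit smaller in memory.
print(sys.getsizeof(tup), sys.getsizeof(lst))
```

The practical efficiency differences are in memory footprint and creation time, not element lookup.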
Ctrl + 5.","Q_Score":2,"Tags":"python,thonny","A_Id":65920609,"CreationDate":"2021-01-27T13:52:00.000","Title":"Keyboard shortcut to comment code in Thonny IDE","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"What is the keyboard shortcut to comment a single line of code and selected lines of code in the Thonny IDE for Python?\nI.e., the Thonny equivalent of Ctrl + \/ in VS Code","AnswerCount":4,"Available Count":2,"Score":0.049958375,"is_accepted":false,"ViewCount":3015,"Q_Id":65920566,"Users Score":1,"Answer":"To comment a single line:\nPut the cursor on the line you want to comment. Press Alt+3 to comment, Alt+4 to un-comment.\nTo comment multiple lines:\nSelect the desired lines of code. Use the same keys, i.e. press Alt+3 to comment, Alt+4 to un-comment.\nNote:\nFor further shortcuts, please click on the Edit menu next to File and you'll see the various shortcuts available in Thonny IDE.","Q_Score":2,"Tags":"python,thonny","A_Id":66843812,"CreationDate":"2021-01-27T13:52:00.000","Title":"Keyboard shortcut to comment code in Thonny IDE","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a SQL Operator that creates a simple JSON. The end goal is that JSON being sent to a REST API. I'm finding the process of sending an HTTP POST in SQL code complicated, so if I can get the JSON kicked back to Airflow I can handle it from there. Any help on either approach would be appreciated.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":102,"Q_Id":65922499,"Users Score":0,"Answer":"So thanks to a coworker I was able to figure this out.
The built-in MsSqlHook has a get_first method that returns the first row of the results of the SQL code you give it. So in my case my SQL code returns a JSON in a single row with a single field, so using get_first I can retrieve that JSON and use an HttpHook to send it to the REST API.","Q_Score":2,"Tags":"python,json,sql-server,airflow","A_Id":65925867,"CreationDate":"2021-01-27T15:42:00.000","Title":"Using Airflow, does the MsSqlOperator accept responses from SQL Server?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am faced with a challenge to move JSON files from an AWS S3 bucket to SharePoint.\nIf anyone can share inputs on whether this is doable and what's the simplest approach to accomplish this (thinking a Python script in AWS Lambda).\nThanks in advance","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1549,"Q_Id":65922580,"Users Score":0,"Answer":"Using the S3 boto API, you can download the files from any bucket to the local drive.\nThen, using the Office365 API, you can upload that local file to SharePoint. Make sure you check the local disk space before doing the download.","Q_Score":0,"Tags":"python,amazon-s3,sharepoint","A_Id":70940346,"CreationDate":"2021-01-27T15:47:00.000","Title":"Approach to move files from s3 to sharepoint","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am showing some scraped data on my Django website. The data changes a couple of times every hour, so it needs to be updated.
I am using Beautiful Soup to scrape the data, then I'm sending it to the view and passing it in a context dictionary to present it on the website.\nThe problem is that the scraping function takes some time to run, and because of that the website does not load until the function finishes. How can I make it load faster? There is no API on the data website.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":32,"Q_Id":65922805,"Users Score":0,"Answer":"It depends on your algorithm. If it iterates over many elements that may or may not have been updated, you can store an attribute so you can skip elements that haven't changed. That's a first thought; if you show the algorithm, maybe we can give you a better answer :)","Q_Score":0,"Tags":"python,django,web-scraping,beautifulsoup","A_Id":65923047,"CreationDate":"2021-01-27T16:00:00.000","Title":"How to send scraped data to a page but without waiting for the page to load?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Is there a way to have fractions (in python) with roots? When I write Fraction(np.sqrt(2), 2), it gives me 1\/2, because Fraction takes ints as arguments. I want to have the square root of 2 divided by 2, and keep the 2 under the root so as not to lose precision while calculating.\nEdit: I am using the package \"fraction\". I couldn't find \"fractions\" anywhere when I searched for the package to install.","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":402,"Q_Id":65923059,"Users Score":0,"Answer":"As a comment suggested, I found sqrt in the sympy package.
For example sympy.sqrt(8) produces 2*sqrt(2), so it doesn't just keep the number under the root as not to lose precision, it also lets me make operations with it and simplifies it.","Q_Score":2,"Tags":"python","A_Id":66076912,"CreationDate":"2021-01-27T16:15:00.000","Title":"Is there a way to have fractions with roots?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a pretty basic Heroku app made in Python with Jupyter and Voila. The app consists of a few ipywidgets that need to be filled in order to run a piece of code, which then goes to show a graph.\nThe app is working just fine for me, but is not working for a colleague of mine. The app does load for him, but the widgets (circled in red in the picture below) are not showing. He only sees the texts.\nAny idea what might cause this? Since the app is working fine for me, I was thinking there isn't something wrong with the app, but maybe it's something on his machine. But it's not working on both his computer and phone though...\nEDIT: Solved","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":161,"Q_Id":65923200,"Users Score":1,"Answer":"Disabling adblocker did the trick!","Q_Score":1,"Tags":"python,heroku,ipywidgets,voila","A_Id":65924762,"CreationDate":"2021-01-27T16:23:00.000","Title":"Ipywidgets not showing in Voila app deployed with Heroku for some (not all) users","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm using Sphinx to generate HTML documentation. Everything works great but for some reason, the source code and comments in the generated HTML file are really outdated. 
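The sympy behavior that the accepted answer above describes can be seen directly. A short sketch, assuming sympy is installed:

```python
import sympy
from sympy import sqrt

# sqrt keeps the radical exact and simplifies it instead of
# evaluating to a float, as the answer describes
expr = sqrt(8)
print(expr)  # → 2*sqrt(2)

# Exact arithmetic: sqrt(2)/2 stays symbolic, so no precision is lost
half_root_two = sqrt(2) / 2
print(sympy.simplify(half_root_two * 2))  # → sqrt(2)

# Evaluate numerically only at the very end, if needed at all
print(expr.evalf())
```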
I don't even understand how it's possible. I've deleted all files multiple times and generated it again and still the same issue.","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":243,"Q_Id":65925179,"Users Score":1,"Answer":"I had the exact same problem. In my case, I had an older version of the package that I was documenting installed via pip and that was the source that sphinx was using to build the docs. Removing the old version with\npip uninstall solved the issue.","Q_Score":1,"Tags":"python,documentation,python-sphinx","A_Id":68017878,"CreationDate":"2021-01-27T18:27:00.000","Title":"Sphinx generates HTML output from outdated source code","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm new to Python and I just wanted to know if there is any connector for Python 3.9\nI've looked at MySQL page but the last Python connector on the page (version 8.0.22) isn't compatible with the 3.9 version.\nAny help? 
Am I not finding it or does it not exist for now?\nThanks in advance.","AnswerCount":2,"Available Count":1,"Score":-0.1973753202,"is_accepted":false,"ViewCount":4679,"Q_Id":65925851,"Users Score":-2,"Answer":"Don't call your python script \"mysql.py\"; rename it and it will work.","Q_Score":1,"Tags":"python,python-3.x,mysql-python,mysql-connector","A_Id":68149729,"CreationDate":"2021-01-27T19:14:00.000","Title":"Is there a version of MySQL connector for Python 3.9?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Simple question (I hope): Can Godot files be implemented in Python3.7.6?\nI see that there are modules for accessing Python from Godot, but is there a binding for the opposite usage?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":34,"Q_Id":65926023,"Users Score":0,"Answer":"Logic (and searching) forces me to conclude that there would be no rational basis for importing the Godot touch-screen functionality (from Godot files) into Python. Godot represents the 'end-user' interface. We've accomplished our goal of numerical input in Qt with Python based on our Godot work.
Q.E.D.","Q_Score":0,"Tags":"python-3.x,godot","A_Id":65946702,"CreationDate":"2021-01-27T19:26:00.000","Title":"Can Godot files be implemented in Python3.7.6","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I wrote a program that automatically navigates me through a website, but how do I copy my current URL?\nContext: I am attempting to code a watch2gether bot that automatically creates a watch2gether room","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":30,"Q_Id":65926560,"Users Score":1,"Answer":"I don't know which programming language you use, but in Python 3 it's simply driver.current_url","Q_Score":0,"Tags":"python-3.x,selenium,discord","A_Id":65926716,"CreationDate":"2021-01-27T20:03:00.000","Title":"How do i copy a URL from searchbar?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm a newbie to the django framework and trying to make a watchlist for stocks. I've already made the crux of the webapp, wherein a user can search for a quote and add it to their watchlist, along with relevant data about that quote.\nWhat I want to do now is, to save the separate watchlists that different users are creating (after creating an account on my site) and upon logging in to my site, they can view their personalized watchlist and edit it.\nI'm using a model for storing the data for the watchlist quotes and looking for a way to provide the different personalized watchlists depending upon the logged-in user.\nCan anyone give me a lead on how to employ the logic for this?
Do I need to use two data bases - one for the data of the users and the other one for storing the respective user watchlists? If yes, how do I connect everything?\nEDIT: Ever used a stock investment app? The way every user\/customer can log in to their account and make\/edit and save their watchlists in the app - that is the functionality I want to implement. How\/Where do I store so many watchlists?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":44,"Q_Id":65927879,"Users Score":1,"Answer":"use 'request.user' from your view, to know the user who sent the request and return the corresponding watchlist","Q_Score":0,"Tags":"python,django,django-models","A_Id":65928436,"CreationDate":"2021-01-27T21:46:00.000","Title":"How to provide different sets of data to different users in django?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm trying to deploy timer trigger function that extracts data from web. I'm using playwright to access. My code runs as expected on my local machine. However when I tried to deploy on cloud it says:\n Result: Failure Exception: Exception: ================================================================================ \"chromium\" browser was not found. 
Please complete Playwright installation via running \"python -m playwright install\" ================================================================================ Stack: File \"\/azure-functions-host\/workers\/python\/3.8\/LINUX\/X64\/azure_functions_worker\/dispatcher.py\", line 353, in _handle__invocation_request call_result = await fi.func(**args) File \"\/home\/site\/wwwroot\/AsyncFlight\/__init__.py\", line 21, in main browser = await p.chromium.launch() File \"\/home\/site\/wwwroot\/.python_packages\/lib\/site-packages\/playwright\/async_api\/_generated.py\", line 9943, in launch raise e File \"\/home\/site\/wwwroot\/.python_packages\/lib\/site-packages\/playwright\/async_api\/_generated.py\", line 9921, in launch await self._impl_obj.launch( File \"\/home\/site\/wwwroot\/.python_packages\/lib\/site-packages\/playwright\/_impl\/_browser_type.py\", line 73, in launch raise not_installed_error(f'\"{self.name}\" browser was not found.')\nI have checked my consumption plan and my os on cloud is Linux and\"azureFunctions.scmDoBuildDuringDeployment\" is set to true.\nI have included playwright in my requirements.txt. Don't know what I'm missing. 
Please help!!\nThankyou","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":244,"Q_Id":65929770,"Users Score":0,"Answer":"Have you tried doing what the error instructs you to do:\nPlease complete Playwright installation via running \"python -m playwright install\"","Q_Score":0,"Tags":"python,azure,azure-cloud-services,playwright,playwright-python","A_Id":65929785,"CreationDate":"2021-01-28T01:29:00.000","Title":"How to run python playwright on Azure Cloud using Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm moving away from WordPress and into bespoke Python apps.\nI've settled on Django as my Python framework, my only problems at the moment are concerning hosting. My current shared hosting environment is great for WordPress (WHM on CloudLinux), but serving Django on Apache\/cPanel appears to be hit and miss, although I haven't tried it as yet with my new hosting company. - who have Python enabled in cPanel.\nWhat is the easiest way for me to set up a VPS to run a hosting environment for say, twenty websites? I develop everything in a virtualenv, but I have no experience in running Django in a production environment as yet. I would assume that venv isn't secure enough or has scalability issues? I've read some things about people using Docker to set up separate Django instances on a VPS, but I'm not sure whether they wrote their own management system.\nIt's my understanding that each instance Python\/Django needs uWSGI and Nginx residing within that virtual container? I'm looking for a simple and robust solution to host 20 Django sites on a VPS - is there an out of the box solution? 
I'm also happy to develop one and set up a VPS if I'm pointed in the right direction.\nAny wisdom would be gratefully accepted.\nAndy :)","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":421,"Q_Id":65930682,"Users Score":3,"Answer":"Traditional approach\n\nVirtualenv is good enough and perfectly ready for production use. You can have multiple virtualenv for multiple projects on the same VM.\nIf you have multiple database engines for multiple projects. Like, MySQL for one, PostgreSQL for another something like this then you just need to set up each individually.\nInstall Nginx and configure each according to project.\nInstall supervisor to manage(restart\/start\/stop) each project individually.\nAnything that required by the project.\nHere it has a huge drawback. Because you can't use different versions on your database engine for a different project in an easy way. So, containerization is highly recommended.\n\nFor simple and robust solution,\n\nUse Docker(docker-compose) for local and production deployment.\nConfigure uWsgi with Nginx(Available on docker.)\nCreate a CI\/CD pipeline with any tool like Jenkins.\nMonitor your projects using any good tool like Raygun.\n\nThat's it.","Q_Score":1,"Tags":"python,django,nginx,web-hosting,uwsgi","A_Id":65930983,"CreationDate":"2021-01-28T03:35:00.000","Title":"Hosting multiple Django instances on a VPS","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am new to machine learning & python. I found a predictive machine learning program on jupyter notebook. Is it possible to convert that jupyter project into a standalone web application? Do I need any libraries for it ? I want to demonstrate the chart & prediction formally. 
Suggestions?","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":329,"Q_Id":65932478,"Users Score":0,"Answer":"Yeah it is possible. If you want to convert it to web application, you should use Flask, Django, etc. Flask is easy and light one, try it.","Q_Score":0,"Tags":"python,python-3.x,machine-learning,jupyter-notebook,prediction","A_Id":65932552,"CreationDate":"2021-01-28T07:08:00.000","Title":"How do I convert a jupyter notebook project into a standalone web application?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am new to machine learning & python. I found a predictive machine learning program on jupyter notebook. Is it possible to convert that jupyter project into a standalone web application? Do I need any libraries for it ? I want to demonstrate the chart & prediction formally. Suggestions?","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":329,"Q_Id":65932478,"Users Score":1,"Answer":"You need to use django or pyramid or flask and use the same codes in a reorganized way.","Q_Score":0,"Tags":"python,python-3.x,machine-learning,jupyter-notebook,prediction","A_Id":65932539,"CreationDate":"2021-01-28T07:08:00.000","Title":"How do I convert a jupyter notebook project into a standalone web application?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":1},{"Question":"Really weird problem here. I have a Python Application running inside a Docker Container which makes requests in different threads to a http restapi. When I run the Container, I get the error:\nERROR - host not reachable abc on thread abc. 
Stopping thread because of HTTPConnectionPool(host='corporate.proxy.com', port=111111): Max retries exceeded with url: http:\/\/abc:8080\/xyz (Caused by ProxyError('Cannot connect to proxy.', RemoteDisconnected('Remote end closed connection without response')))\nWhen I log in to the docker host and make the request with curl, then it works.\nWhen I execute the request inside the docker container (docker exec ....), then it works.\nWhen I start the python interpreter inside the container and make the request with the requests module (like the application does), then it works.\nThe container is attached to the host network of the Docker host machine.\nHas anyone had an issue like this as well?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":312,"Q_Id":65933402,"Users Score":1,"Answer":"Thanks to @Tarique and others I've found the solution:\nI've added a startup delay of 30 seconds to the container so it connects to the docker host network correctly, then started the requests session. Additionally, I removed the http_proxy and https_proxy env vars from the container.","Q_Score":1,"Tags":"python,docker,proxy,python-requests","A_Id":65934707,"CreationDate":"2021-01-28T08:22:00.000","Title":"Python Docker Container gets ProxyError, despite I can connect to Server manually","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am working on my python project, for which I need to import the package called \"boto3\". However, I get this error tooltip: \"Import \"boto3\" could not be resolved\". I tried out \"pip install boto3\" in the VS code terminal and reloaded the window but still the error tooltip does not go away :\/\nWhat am I doing wrong?
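The accepted fix above removed the http_proxy and https_proxy env vars from the container. As an alternative inside the application itself (an assumption on my part, not what the author did), requests can be told to ignore proxy environment variables entirely:

```python
import requests

# A Session with trust_env disabled ignores http_proxy/https_proxy/no_proxy
# environment variables, so requests go directly to the target host.
session = requests.Session()
session.trust_env = False
# response = session.get("http://abc:8080/xyz")  # hypothetical endpoint from the error above
```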
Thanks in advance","AnswerCount":4,"Available Count":1,"Score":0.1488850336,"is_accepted":false,"ViewCount":11667,"Q_Id":65933570,"Users Score":3,"Answer":"I had the same issue on VS Code.\nTo solve it, I did these steps:\n\nInstall pip, sudo apt install python3-pip\nInstall boto3, pip install boto3\nRestart VS Code","Q_Score":6,"Tags":"python,import,boto3,importerror","A_Id":69724770,"CreationDate":"2021-01-28T08:34:00.000","Title":"Import \"boto3\" could not be resolved\/Python, VS Code","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I created an API, deployed it on the server on port 8443, and set up Cloudflare's SSL certificates.\nEverything works perfectly, but I have a problem: the URLs in the api-root still have the http scheme. Also, I set the X-Forwarded-Proto $scheme header in nginx.conf. What could be the problem?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":176,"Q_Id":65935111,"Users Score":2,"Answer":"I solved it. I had to add SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https') to my settings.py","Q_Score":0,"Tags":"python,django,django-rest-framework","A_Id":65936226,"CreationDate":"2021-01-28T10:20:00.000","Title":"Change urls' scheme from http to https in DRF api-root","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"There are a few modules that are only visible in developer mode.\nI need to make them visible in non-developer mode.
How can I do it?\nMy findings:\nI have a few XML views in a common folder that contain the views alone, without menu items, and at a later point, in another folder, I have listed all the menu items from the common folder as well as the current folder's menu items in the order I want.\nWhy do I need to place the menu items in the other folder?\nIf I place the menu items in the common folder, Odoo shows them first, as per the default menu sequences, but I need them later, so I combined all the menu items in the order I want in the current folder.\nIt works in developer mode without any issues, but in non-developer mode it doesn't.\nI have also checked whether any groups are causing this, but no.\nI hope I made some sense.","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":344,"Q_Id":65937191,"Users Score":0,"Answer":"In the menuitem tag, groups may be declared. If you remove the groups in your custom module, the menu item will be visible without turning on developer mode.\nNOTE: Understand why it was hidden in the first place before making it visible for all.","Q_Score":1,"Tags":"python,odoo,odoo-13","A_Id":65946557,"CreationDate":"2021-01-28T12:34:00.000","Title":"Menu items not visible in developer mode odoo V13","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am doing binary classification for a time series with a Keras LSTM. How could I extract the final output from the model?
By this I mean, how can I get a list containing zero and one values from the final model?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":242,"Q_Id":65939879,"Users Score":0,"Answer":"You should attach, right after the LSTM layer, a Dense layer with as many neurons as you consider appropriate (that depends upon the LSTM output itself), and on top of that add a final Dense classification layer with a single neuron; that'd be the binary output.","Q_Score":0,"Tags":"python,tensorflow,keras,deep-learning,lstm","A_Id":65944376,"CreationDate":"2021-01-28T15:09:00.000","Title":"Keras LSTM Binary Classification Output","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Currently I have a Django blog website (fully functional).\nThis blog website is something like a social media website...\nI have read that Django REST framework helps you serialize your data.\nCould I just check a few things about DRF:\n\nIs serializing data important? Meaning to say, what benefits would I get by adding Django REST framework, and are there any possible disadvantages?\n\nWould it be very difficult to \"add\" the framework into my project? E.g. does it require me to download a lot of things and change a lot of my code?\n\nWhere does DRF sit: is it in the backend or frontend, or is it more like in the middle?
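On getting the 0/1 list the Keras question above asks about: with a final single-neuron sigmoid Dense layer, as the answer suggests, model.predict returns probabilities that you then threshold. A minimal NumPy sketch, using made-up probabilities in place of real model output:

```python
import numpy as np

# Hypothetical sigmoid outputs, standing in for model.predict(X)
# on 6 time-series windows (shape (6, 1), values in [0, 1])
probs = np.array([[0.1], [0.8], [0.47], [0.92], [0.5], [0.03]])

# Threshold at 0.5 to obtain the final binary labels
labels = (probs.ravel() > 0.5).astype(int).tolist()
print(labels)  # → [0, 1, 0, 1, 0, 0]
```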
How does it integrate into my current IT architecture, with Django as the backend, HTML\/CSS\/JS as the frontend and PostgreSQL as the database?\n\n\nThank you!\nAlso, if anyone is kind enough to share resources about DRF \/ is open to a short Zoom sharing session on DRF, please feel free to contact kimjoowon777@gmail.com","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":133,"Q_Id":65941254,"Users Score":0,"Answer":"The idea of Django Rest Framework (DRF) is to simplify and cut down a lot of code from your backend (aka API).\nDjango (standalone Django, not DRF) is a framework that allows you to build a backend (and also a frontend).\nDRF is a cool library that allows you to create a backend with Django, but a lot more easily.\nDRF allows you to create endpoints that let you manipulate the data (i.e. models) very quickly and with very little code.\nThe \"serialization part\" of DRF just automatically (also magically) transforms your models' data into a response (usually JSON) for the client, and vice versa.\nIt does take some learning and coding to transform your current backend into DRF, but in my opinion, unless your backend is really special, DRF can allow you to create an awesome API with very little code.","Q_Score":0,"Tags":"python,django,django-models,django-rest-framework,django-views","A_Id":65955100,"CreationDate":"2021-01-28T16:28:00.000","Title":"Should I implement the django rest framework above django?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Okay, so I am trying to create a website where the user would have to pay before registration.\nAny help please?
The question might be weird but I am actually a beginner","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":212,"Q_Id":65941432,"Users Score":1,"Answer":"Here is what I think you can do.\nFirst, make sure your registration form actually uses the default django_auth user model for registration.\nThen you can put in place measures that verify the user first: you create a payment model within the application, and as soon as the user pays up, a reference number for the payment is stored in the database and marked unused. Then, when the user is registering within the Django application, you can provide a form field to verify the reference code from the payment.\nIt's quite a long process that requires you to play around with the Django configuration files and modify them. My best recommendation, though, is this:\nLet the users register and restrict some of the functionality within the application for all unpaid users. That way you save yourself a lot of time.","Q_Score":0,"Tags":"python,django,django-models,django-forms,payment","A_Id":65941618,"CreationDate":"2021-01-28T16:38:00.000","Title":"How do I make a user pay before creating user account Django","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a set of points (x, y), which could be set up as an array or a list.\nI wish to calculate the distance between subsequent points.\nI then need to do some calculations on the between-distances to set a threshold value T.\nI then wish to process the array\/list of points such that when a between-distance exceeds the threshold T, I call function_A passing all the preceding points and then function_B passing the current and preceding points, before continuing to test points against the threshold value.\ni.e.
If I have the between-distance list [1, 1.5, 2, 1.7, 7, 2, 3, 8, 4]\nThreshold calculated as 7.\nI wish to call function_A with the points that correspond to [1, 1.5, 2, 1.7] and function_B with the points that correspond to 1.7 and 7; then, the next time the threshold is exceeded, i.e. 8, call function_A with the points corresponding to [2, 3], etc.\nI need to process several hundred points as above, so should I use NumPy?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":19,"Q_Id":65942780,"Users Score":0,"Answer":"NumPy is optimized for numerical calculation, so you should use it. But in this case I would use Pandas instead, since it \"preserves the index\" when you do calculations, and it's built on top of NumPy, so you can take advantage of NumPy's optimizations.\nTo apply the functions you could use conditional selection on your DataFrame and then assign the results to a new column; here is where you take advantage of the index preservation in Pandas, since you don't want to lose the information about which distances the functions were applied to.","Q_Score":0,"Tags":"python-3.x,numpy","A_Id":65944314,"CreationDate":"2021-01-28T18:02:00.000","Title":"Python : Processing sub List based on a condition","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to get the Jenkins host details.\nIt seems on Windows it's present as the COMPUTERNAME env variable at the \"http:\/\/\/systemInfo\" URL.\nBut for Linux hosts, I don't see this variable present.\nIs there any way I can fetch the Jenkins host (where Jenkins is running) using Python?\nI don't want to use a Groovy script, as I want to do it w\/o running any job.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":86,"Q_Id":65944648,"Users Score":1,"Answer":"You can use HOSTNAME from
Environment Variables from http:\/\/\/systemInfo.","Q_Score":0,"Tags":"python,jenkins","A_Id":65945568,"CreationDate":"2021-01-28T20:21:00.000","Title":"How to get jenkins host detail?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am new on Medical Imaging. I am dealing with MRI images, namely T2 and DWI.\nI uploaded both images with nib.load, yet each sequence image has a different number of slices (volume, depth?). If I select one slice (z coordinate), how can get the corresponding slice on the other sequence image? ITK does it correctly, so maybe something in the NIFTI header could help?\nThank you so much for reading! I also tried interpolation, but it did not work.","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":76,"Q_Id":65946097,"Users Score":1,"Answer":"Reading the NIFTI, namely affine, and extracting the translation Transformation Matrix to create a mapping from the T2 to the DWI voxels.\nHint: nibabel.","Q_Score":1,"Tags":"python,registration,coordinate-systems,itk,medical-imaging","A_Id":65971759,"CreationDate":"2021-01-28T22:25:00.000","Title":"How to correlate different MRI sequences images in NIFTI format?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"This is my first Django app, and also my first time building an app, and I am seeking some DB and API guidance. I have an app that functions as an online directory and marketplace. 
What I aim to do is to provide the app to many different organizations such that the organizations have their own online directory's and marketplace's where they can manage it fully on their own, but that their data is still linked to my database so that I can then operate on the data for machine learning purposes. This is a pretty standard\/routine practice, I realize, but I am trying to wrap my head around how best to make it work. For each organization, would there just be an instance of the app which would then be a separate app or interface that connects to the original? From my newbie understanding, that is essentially what an API is, correct?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":32,"Q_Id":65948213,"Users Score":0,"Answer":"Since it's an API, I wouldn't separate the DBs or the instances, neither make several Django apps, but I would have a dedicated clients model\/table in the DB, which would store encoded API keys for each of your organizations.\nEach client makes its request to your API by authenticating and using their API key, and the Django views answer only with the data linked to the client, as other DB tables have a foreign key to clients.\nSounds pretty standard and RESTful to me.\nSome Django \/ Django REST framework modules like \"Django REST Framework API Key\" could probably do most of this work for you.","Q_Score":0,"Tags":"python,django,api,django-models,django-rest-framework","A_Id":65948582,"CreationDate":"2021-01-29T03:01:00.000","Title":"Django DB Design and API Guidance","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I've just gotten started with Datasette and found that while I have hundreds of .sqlite databases, only one was able to load (it was empty). 
Every other one has had this sort of error:\nError: Invalid value for '[FILES]...': Path '\/Users\/mercury\/Pictures\/Photos' does not exist.\nAny suggestions?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":989,"Q_Id":65949007,"Users Score":0,"Answer":"It turns out this was a simple error. The file path is required to be in quotes.","Q_Score":0,"Tags":"python,datasette","A_Id":65949008,"CreationDate":"2021-01-29T04:57:00.000","Title":"Error: Invalid value for '[FILES]...': Path '{path\/to\/data}' does not exist","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"While installing modules with pip, I am always getting the following error:\nFatal error in launcher: Unable to create process using '\"c:\\users\\agniva roy\\python.exe\" \"C:\\Python39\\Scripts\\pip.exe\" install os': The system cannot find the file specified. 
\nNote: I had recently updated from Python 3.8 to Python 3.9.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":149,"Q_Id":65949031,"Users Score":0,"Answer":"Actually, I recommend you uninstall this version of Python and install it again, this time choosing the option that sets the Python path in your operating system automatically.","Q_Score":0,"Tags":"python,windows,pip","A_Id":65949265,"CreationDate":"2021-01-29T05:00:00.000","Title":"Errors while installing modules with Pip","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using an Excel template which has 6 tabs (All unprotected) and writing the data on each worksheet using the openpyxl module.\nOnce the Excel file is created and I try to open the generated file, it does not show all the data until I click the \"Enable editing\" pop up.\nIs there any attribute to disable this in openpyxl?","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":470,"Q_Id":65950058,"Users Score":1,"Answer":"This sounds like Windows has quarantined files received over a network.
As this is done when the files are received, there is no way to avoid this when creating the files.","Q_Score":0,"Tags":"python,python-3.x,openpyxl","A_Id":65952722,"CreationDate":"2021-01-29T06:56:00.000","Title":"How to ignore \"Enable Editing\" in excel after writing data using openpyxl","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am new to Django REST Api testing and I am running into an error like this: raise ValueError('Related model %r cannot be resolved' % self.remote_field.model) ValueError: Related model 'auth.Group' cannot be resolved when running a test, and I'm not sure why this happens","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":116,"Q_Id":65950279,"Users Score":0,"Answer":"You can solve it by removing the migration files.","Q_Score":2,"Tags":"python,django,django-rest-framework","A_Id":72297270,"CreationDate":"2021-01-29T07:20:00.000","Title":"ValueError: Related model 'auth.Group' cannot be resolved when running django test","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm trying to reinstall Jupyter-Lab with conda completely. I mean, when I run uninstall jupyterlab and install it again, the system already comes with configuration I had previously, such as extensions installed.
Therefore, there is something that is still present after the uninstall.\nHence, how do I completely remove jupyter-lab and install it again from scratch?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":2217,"Q_Id":65952807,"Users Score":1,"Answer":"When jupyterlab is installed use jupyter --paths to see where the configuration, data and runtime is stored. After removing the corresponding files and directories you will be able to perform a clean install without any traces of the old extensions.\nRemember to use it in the right environment.","Q_Score":0,"Tags":"python,installation,conda,jupyter-lab","A_Id":65961985,"CreationDate":"2021-01-29T10:32:00.000","Title":"Uninstall Jupyter-Lab with conda completely","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"i want to search from large no of data of phone numbers in python which data set shall i use to search from the data type - Set or Dictionaries","AnswerCount":4,"Available Count":4,"Score":0.0,"is_accepted":false,"ViewCount":41,"Q_Id":65954036,"Users Score":0,"Answer":"If your use case could be accomplished by either, then the set will give you all you need without wasting memory on values and should be easier to read as you don't have to account for an unused value.\nUse a set.","Q_Score":0,"Tags":"python,algorithm,dictionary,data-structures,set","A_Id":65954767,"CreationDate":"2021-01-29T11:58:00.000","Title":"Data Structure in python - Which data type is best from set and dictionary if i want to search large amount of phone numbers in python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"i want to search from large no 
of data of phone numbers in python which data set shall i use to search from the data type - Set or Dictionaries","AnswerCount":4,"Available Count":4,"Score":0.0,"is_accepted":false,"ViewCount":41,"Q_Id":65954036,"Users Score":0,"Answer":"My view is that if you have to choose between set and dictionary you should use dictionaries because they are easier to maintain and to update and you can also search using keys.\nOn the other hand a set can contain only one copy of something and as a consequence you will never have problems due to duplicate elements.","Q_Score":0,"Tags":"python,algorithm,dictionary,data-structures,set","A_Id":65954067,"CreationDate":"2021-01-29T11:58:00.000","Title":"Data Structure in python - Which data type is best from set and dictionary if i want to search large amount of phone numbers in python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"i want to search from large no of data of phone numbers in python which data set shall i use to search from the data type - Set or Dictionaries","AnswerCount":4,"Available Count":4,"Score":1.2,"is_accepted":true,"ViewCount":41,"Q_Id":65954036,"Users Score":0,"Answer":"This is a very requirement-dependent question. First of all, if this is not an algorithmic problem and you need to run this frequently, then either of these would not work.\nNow, coming to your question. Sets and Dictionaries have different use cases. If you need to store something that references the key, then obviously using a Set would not make sense. And if you don't have anything to store, storing a dummy value in a Dictionary just to check if the key exists is a bad idea.\nConsidering your question, have you thought about implementing a Trie?
Given the fact that phone numbers are fixed in size and character set, you could do lookups pretty quickly using a trie compared to a Set or Dictionary.","Q_Score":0,"Tags":"python,algorithm,dictionary,data-structures,set","A_Id":65954746,"CreationDate":"2021-01-29T11:58:00.000","Title":"Data Structure in python - Which data type is best from set and dictionary if i want to search large amount of phone numbers in python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"i want to search from large no of data of phone numbers in python which data set shall i use to search from the data type - Set or Dictionaries","AnswerCount":4,"Available Count":4,"Score":0.0,"is_accepted":false,"ViewCount":41,"Q_Id":65954036,"Users Score":0,"Answer":"If all you want to do is check if a certain number is in some container of numbers, you should use a set. A dict is just a set in which every element is linked to some other value. 
If you don't use this functionality, then a dict is just a set with some overhead.\nYou will want to use a dict if, for example, you want to map phone numbers to names of people.","Q_Score":0,"Tags":"python,algorithm,dictionary,data-structures,set","A_Id":65954733,"CreationDate":"2021-01-29T11:58:00.000","Title":"Data Structure in python - Which data type is best from set and dictionary if i want to search large amount of phone numbers in python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have built a Socket TCP \/ IP server that listens on a specific port and then, with that data, makes a rest query to another server, and that response is returned through the port where it received it.\nAll Socket server is made in Python 3.8 and works great.\nI need to know how to implement this code from my Socket server to an Azure Functions, so that it provides permanent service?\nI appreciate the goodwill of anyone who can offer an answer.\nThanks Total.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":68,"Q_Id":65954851,"Users Score":0,"Answer":"Simple answer: you cannot do that. Azure Functions are Event-based (such as an HTTP call). If you need to provide TCP socket, maybe hosting your python code in a container, e.g. 
Azure Container Instances, might be a good way to go.","Q_Score":0,"Tags":"python,sockets","A_Id":65956535,"CreationDate":"2021-01-29T12:52:00.000","Title":"How to implement a python code to work as Azure Function?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm hoping this is a fairly simple question with a simple answer.\nIn PostgreSQL I have a table with a Answer column that is a jsonb.\nAs of right now, the data that can be stored in the column can be empty or quite varied. Some examples include:\n\n{\"Answer\":\"My name is Fred\"}\n{\"Answer\":[{\"text\": \"choice 1\", \"isActive\": true}, {\"text\": \"choice 2\", \"isActive\": false}]}\n\nYes, we store a field called Answer in our column called Answer. Not sure why, but that is how it is.\nI want to be able to test if the JSON attribute Answer contains a string or an array. But I don't know how, and I must be wording my searches incorrectly. I'm not finding anything concrete. I already know how to check if Answer exists. Just can't tell what it contains.\nDoes anyone know how I would do this? 
Or if there isn't a way, what do I need to do instead to query this data?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":66,"Q_Id":65956645,"Users Score":0,"Answer":"func.jsonb_typeof(.answer.op('->')('Answer')) == \"string\" seems to do the job.","Q_Score":0,"Tags":"python,postgresql,flask-sqlalchemy,jsonb","A_Id":65957361,"CreationDate":"2021-01-29T14:48:00.000","Title":"How do I check the type of a field contained in a JSONB?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Long story short, I need to call a python script from a Celery worker using subprocess. This script interacts with a REST API. I would like to avoid hard-coding the URLs and django reverse seems like a nice way to do that.\nIs there a way to use reverse outside of Django while avoiding the following error?\n\ndjango.core.exceptions.AppRegistryNotReady: Apps aren't loaded yet.\n\nI would prefer something with low-overhead.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":101,"Q_Id":65959394,"Users Score":0,"Answer":"I am using a custom management command to boot my Django app from external scripts. It is like smashing a screw with a hammer, but the setup is fast and it takes care of pretty much everything.","Q_Score":1,"Tags":"python,django","A_Id":65959971,"CreationDate":"2021-01-29T17:49:00.000","Title":"Is there a way to use reverse outside of a Django App?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I am new to Python and PyCharm and I am having trouble using my Matlab knowledge here on PyCharm.
The thing is that in Matlab you have: \n\nYour script where you write your Matlab code \nWorkspace where the results or variables are saved \nCommand Window where you call your function or you call your variables or do something more. \n\nNow, I am trying to understand how I can do these things on PyCharm but I don't even understand where files are saved or how I can call a function without it being in the main file where I write down my code. My question is \nIs there something similar to a command window in PyCharm?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":138,"Q_Id":65962978,"Users Score":0,"Answer":"The equivalent of the command window is the Python Console; you can find it in the lower left side of PyCharm.","Q_Score":0,"Tags":"python,matlab,pycharm,command-window","A_Id":65963018,"CreationDate":"2021-01-29T23:15:00.000","Title":"PyCharm environment vs Matlab environment?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"For my frameless qml app i made close, minimise and maximize icons in ms paint (28\u00d728 pixel art) and then added transparent background with photoshop.\nI put them as icons on qml buttons.\nBut they are looking blurry.\nI tried disabling antialiasing, mipmaps, smoothing but still blurry.\nAny help ?\nI want them to look pixely like minecraft text.\nEdit: It appears that QML uses linear scaling for images.\nBut for pixely look i need \"nearest neighbour\" scaling.\nHow can i use nearest neighbour in qml ?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":403,"Q_Id":65964638,"Users Score":0,"Answer":"NVM Solved It !\nIt turns out that for nearest neighbour upscaling, you need to set smooth: false (and for downscaling you need mipmaps as well) + remove sourceWidth & sourceHeight from your qml code.
(Sometimes qt-creator turns it on by itself and this option is basically compressing image resolution)\nBut in my case the problem was that MS Paint in Windows 10 doesn't use solid colours. It uses solid colours in the middle and increases transparency towards the edges of strokes.\nAnd that creates the blurry effect when viewed on a small icon.\nSimple solution: Install Windows 7 and use its MS Paint. Or just use some other program.","Q_Score":0,"Tags":"python,qml,qt-creator,pyside2,blurry","A_Id":65977408,"CreationDate":"2021-01-30T04:29:00.000","Title":"QML icons and images looking Blurry","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"If I change the name of the main python file in my project directory from main.py, will it make any difference? For example, if a project of mine has an image and two scripts main.py and side.py, and if I rename main.py to myfile.py, will it impact my project in any way? If so, how?\nI don't think this is a duplicate since I researched a lot before asking. Sorry if I sound like an idiot, I am a beginner.","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":388,"Q_Id":65965668,"Users Score":0,"Answer":"If you are referring to any function (e.g.
function1, function2) in myfile.py from side.py, pl change your import in side.py to from myfile import function1,function2.","Q_Score":0,"Tags":"python,python-3.x,windows,pycharm,project","A_Id":65965818,"CreationDate":"2021-01-30T07:31:00.000","Title":"If the name of the main file in my project is changed from main.py to something else, will it make any difference?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"In a python script or IDLE shell, we can print texts using double quotes \" or single quote ' or a combination of three single or double quotes (mostly used in docstrings) '''.\nI was working with some text and tried out the following:\n''''4'''' there was an EOL while scanning text.\nThen I tried it this time using 5 quotes, i.e. '''''4''''' and the output was \"''4\".\nFinally, I tried the same with a large number of quotes:\n\nINPUT\n\n>>> '''''''''''''''''''''''''''''ff'''''''''''''''''''''''''''\n\nOUTPUT\n\n\"''ff\"\n\nI cannot understand why python returns such an output given such a large number of '.\n\nQuestion: How does it show such an anomalous output, what is the logic behind it?","AnswerCount":2,"Available Count":2,"Score":0.1973753202,"is_accepted":false,"ViewCount":53,"Q_Id":65966443,"Users Score":2,"Answer":"'''''4''''' is parsed as\n\n''' (open string literal)\n''4 (content of the string)\n''' (close string literal)\n'' (an empty string literal)\n\nI'm not doing the whole long one. 
But each '''''' is an empty triple-quoted string literal, so it's along the same lines.","Q_Score":1,"Tags":"python,string,quotes,double-quotes","A_Id":65966532,"CreationDate":"2021-01-30T09:28:00.000","Title":"Logic of string termination literal in python (namely the single and double quotes)","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"In a python script or IDLE shell, we can print texts using double quotes \" or single quote ' or a combination of three single or double quotes (mostly used in docstrings) '''.\nI was working with some text and tried out the following:\n''''4'''' there was an EOL while scanning text.\nThen I tried it this time using 5 quotes, i.e. '''''4''''' and the output was \"''4\".\nFinally, I tried the same with a large number of quotes:\n\nINPUT\n\n>>> '''''''''''''''''''''''''''''ff'''''''''''''''''''''''''''\n\nOUTPUT\n\n\"''ff\"\n\nI cannot understand why python returns such an output given such a large number of '.\n\nQuestion: How does it show such an anomalous output, what is the logic behind it?","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":53,"Q_Id":65966443,"Users Score":2,"Answer":"Consider a simple FSM (three primary states: in code, in string, in docstring) consuming the input to handle single quotes. When it is in code and it encounters three quotes in a row, it enters the docstring state. It stays in the docstring state until it encounters three quotes of the same type. 
When it is in code and encounters fewer than three quotes in a row, it enters a string state, and stays in that state until it encounters an (unescaped) quote (which may be immediately).\nWhen this FSM is in the code state and encounters a long sequence of quotes (matching the RE \/'{3,}\/), it enters and exits the docstring state (no non-delimiters are encountered, so the strings are all empty) until the last few quote characters, at which point it's either still in the docstring state (and any remaining quotes are in the string) or it's in the code state, and any remaining quotes are interpreted as string delimiters. If it's in the string or docstring state when it encounters a long sequence of quotes, it will first transition to the code state, then every three quotes will transition between code and docstring states. Any remaining quotes at the end are interpreted as previously mentioned.\nAn actual FSM will require additional states beyond the 3 primary ones, but they're an implementation detail and not conceptually significant. The FSM can be easily extended to handle double-quotes as well by duplicating the single-quote states & transitions and modifying as appropriate for double-quotes.\nWriting a formal description of the FSM and calculating final states from initial states based on the count of consecutive quotes (i.e. write a function end_state(initial_state, consecutive_quote_count)) are left as exercises.","Q_Score":1,"Tags":"python,string,quotes,double-quotes","A_Id":65966552,"CreationDate":"2021-01-30T09:28:00.000","Title":"Logic of string termination literal in python (namely the single and double quotes)","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am building a Django app that has a list of words.
The app now tells the user the first word of the list via a speech function. Then the user can record audio of a word he says. This word gets turned into a string in the front end. Now I want to compare the word from the user with the first string of a second list that has the same amount of characters as the first list. And this whole process should be repeated with all characters of the first list.\nCan I do this kind of loop in my views.py or would it work better in the frontend in javascript?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":51,"Q_Id":65967351,"Users Score":0,"Answer":"Yes, you can loop inside your views.py: inside your view function, make a loop and compare your words.","Q_Score":0,"Tags":"javascript,python,django,frontend","A_Id":65967695,"CreationDate":"2021-01-30T11:18:00.000","Title":"django app for loop: views.py or frontend?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a Raspberry Pi 4 running a NodeRed server.\nThis Pi has no mouse or keyboard, but it does have a regular HDMI display.\nIt runs a minimal Xorg setup and a Midori browser connects as a client to the NodeRed server itself.\nThe user can interact with NodeRed through some buttons wired to the GPIO.\nSo far so good.\nI set up a little python script which starts a screensaver (feh) when the user is idle for a while (xprintidle).\nNow I would like to stop the screensaver when the user presses a button.\nI tried to bind those GPIO pins with the RPi.GPIO library, but it says that they're already bound to something else (NodeRed) and the event won't fire when the button is pressed.\nI tried to look at \/sys\/class\/gpio\/ but I don't see those exports changing when I click the button, and besides, I would have to use a bash script which constantly polls
those sys-files. I'd rather use events\/interrupts.\nHow would you go about achieving this?\nIs there some lower system way of getting interrupts from the GPIO?\nMaybe is it possible to have NodeRed kill Feh (or feed a fake user input to xorg), somehow?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":46,"Q_Id":65969177,"Users Score":0,"Answer":"maybe I gave up too soon. I later realized that it's not that difficult to manage the host system processes with NodeRed.\nThat way I was also able to intercept all the button pushes and make sure that the screensaver is not running, before performing the required action.","Q_Score":0,"Tags":"python,interrupt,node-red,gpio,xorg","A_Id":65971138,"CreationDate":"2021-01-30T14:36:00.000","Title":"Reset Xorg idle time upon RPi GPIO event, maybe through NodeRed","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to implement a simple source code that DROPs all RST packets that come into the computer using Python. What should I do?\nLinux servers can be easily set up using the iptables command, but I want to make it Python for use on Mac, Linux, and Windows systems.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":31,"Q_Id":65970822,"Users Score":0,"Answer":"Dropping RST packets is a function of the networking firewall built into your operating system.\nThere is only one way to do it on Linux: with iptables. You could use Python to instruct iptables.\nWindows has its own way to add firewall rules. MacOS also has its own way, and each of them is different from the other.\nThere is no single common way to do this. 
Therefore, there is no single common way to do this with Python.","Q_Score":0,"Tags":"python,packet","A_Id":65971659,"CreationDate":"2021-01-30T17:10:00.000","Title":"how to RST packet drop using python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"My website has a button to check notifications as a drop down while still remaining on the same page. I would like to update the field unread of every notification from Trueto False of that user when the button is clicked, without having to change or update the page.\nI have been looking into Celery to solve this, but before digging to deep I would like to ask the community on what the best practice is to solve this type of functionality.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":32,"Q_Id":65971098,"Users Score":0,"Answer":"As long as the actual process of getting the notification does not take long you could just make it an ajax request. That would be the easiest way to do it.\nYou could also use Django channels and websockets on the front end but that is a bit more involved.\nJust to add to that, Celery is more suitable for long running background tasks or scheduled tasks with beat.","Q_Score":0,"Tags":"python,html,django,asynchronous,celery","A_Id":65971195,"CreationDate":"2021-01-30T17:33:00.000","Title":"Django update model instance asynchronously","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm trying to loop through a table that contains covid-19 data. My table has 4 columns: month, day, location, and cases. The values of each column in the table is stored in its own list, so each list has the same length. 
(I.e. there is a month list, day list, location list, and cases list). There are 12 months, with up to 31 days in a month. Cases are recorded for many locations around the world. I would like to figure out what day of the year had the most total combined global cases. I'm not sure how to structure my loops appropriately. An oversimplified sample version of the table represented by the lists is shown below.\nIn this small example, the result would be month 1, day 3 with 709 cases (257 + 452).\n\n\n\n\nMonth\nDay\nLocation\nCases\n\n\n\n\n1\n1\nCAN\n124\n\n\n1\n1\nUSA\n563\n\n\n1\n2\nCAN\n242\n\n\n1\n2\nUSA\n156\n\n\n1\n3\nCAN\n257\n\n\n1\n3\nUSA\n452\n\n\n.\n.\n...\n...\n\n\n12\n31\n...\n...","AnswerCount":4,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":232,"Q_Id":65972079,"Users Score":0,"Answer":"You can check the max value in your cases list first, then map the max case's index to the other three lists and obtain their values.\nex: caseList = [1,2,3,52,1,0]\nThe maximum is 52; its index is 3. In your case you can get monthList[3], dayList[3],\nlocationList[3] respectively. Then you get the relevant day, month and country with the most total global cases.\nCheck whether this will help in your scenario.","Q_Score":0,"Tags":"python,list,nested-loops","A_Id":65972201,"CreationDate":"2021-01-30T19:10:00.000","Title":"Looping through multiple columns in a table in Python","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using Selenium with Python and want to access the plain HTML source code before it is parsed and the DOM is modified by the browser. I do not want to use \"driver.page_source\" as it is giving me back the DOM after parsing and for example dynamically created elements are included.
I know I could do a second request with for example requests but I am looking for a way to extract it without doing an additional request. Any ideas?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":55,"Q_Id":65973698,"Users Score":0,"Answer":"You can get the plain HTML source by using driver.get(f\"view-source:{url}\"). Then get the body of the source using driver.find_element_by_tag_name('body').text","Q_Score":0,"Tags":"python,selenium,selenium-chromedriver","A_Id":65974783,"CreationDate":"2021-01-30T22:17:00.000","Title":"How to get unparsed HTML source code with Python and Selenium","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Is there a way to perform an exact mirror of local vs S3, i.e. If I rename a file locally, is there a way to apply that to S3 as well?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":57,"Q_Id":65974119,"Users Score":0,"Answer":"If two locations are synchronized and then a local file is renamed, then next time that aws s3 sync is run, the file will be treated as a new file and it will be copied to the destination.\nThe original file in the destination will remain untouched. However, if the --delete option is used, then the original file in the destination will be deleted.\nThe sync command does not rename remote files. It either copies them or deletes them.\nThere are some utilities that can mount Amazon S3 as a virtual disk, such that changes are updates on Amazon S3. 
This is great for copying data, but is not recommended for production usage at high volumes.","Q_Score":0,"Tags":"python,amazon-web-services,amazon-s3,boto3","A_Id":65975700,"CreationDate":"2021-01-30T23:18:00.000","Title":"AWS S3 Sync renamed files","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"When editing the following:\nline_ar = fflistline.split\nif line_ar[0] == \"file\":\nMy python IDE reported\n\nCannot find reference '[' in 'function in the second line\n\nThe problem is that fflistline.split is assigning the function fflistline.split to line_ar and not calling the function fflistline.split() and assigning that list to line_ar.\nI took turns staring at that and searching for that error message for 10 minutes before I took a break and I still had to work on something else for a while before the missing empty () flashed in to my brain. Maybe IDE's should come with a setting that warns about this. 
If I had typed it in directly it would have autocompleted the () so I guess it must have been a cut and paste or an edit error somewhere...","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":119,"Q_Id":65974919,"Users Score":0,"Answer":"The problem is that fflistline.split without the () resolves to the function fflistline.split and not to the array that results from calling the function fflistline.split().","Q_Score":0,"Tags":"python-3.x,ide","A_Id":65974920,"CreationDate":"2021-01-31T01:29:00.000","Title":"What causes python IDE to report \"Cannot find reference '[' in 'function'\"","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using PyEMD (Empirical Mode Decomposition) on signal (train & test) data. All seems to be working fine for all signals (datasets), but in one of my datasets the number of IMFs it decomposes into is different for the train & test datasets.\nI have tried the (max_imf: ) argument, but by limiting the number to the minimum value so that both (train & test) have the same number of IMFs, the decomposition is not correct (it does not decompose down to the final trend).\nAny suggestion will be appreciated.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":103,"Q_Id":65976652,"Users Score":0,"Answer":"You can decompose the data first, and then divide the training set and the test set for each component","Q_Score":1,"Tags":"python,decomposition","A_Id":68176650,"CreationDate":"2021-01-31T07:16:00.000","Title":"PyEMD returning different number of IMFs for train, test data","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm using Gensim for building
W2V models and I didn't find a way to add a vector for unknown words or padding parts in Gensim, so I have to do it manually.\nI also checked the index 0 in the created embedding, and it is also used for a specific word. This could cause a problem for padding words because they have the same index.\nAm I missing something here? Does Gensim handle this problem?\nP.S: For handling this issue, I always append two vectors to the model weights after I train the model.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":556,"Q_Id":65978214,"Users Score":1,"Answer":"A Gensim Word2Vec model only learns, and reports, vectors for words that it learned during training.\nIf you want it to learn some vector for any synthetic 'unknown' or 'padding' symbols, you need to include them in the training data. (They may not be very interesting\/useful vector-values, though, and having such synthetic token vectors may not outperform simply ignoring unknown-tokens or avoiding artificial padding entirely.)","Q_Score":0,"Tags":"python,gensim,word2vec","A_Id":65983452,"CreationDate":"2021-01-31T10:49:00.000","Title":"Does Gensim handle the pad index and UNK index in W2V models?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I try to use pip to install sklearn, and I receive the following error message:\n\nERROR: Could not install packages due to an OSError: [Errno 2] No such file or directory: 'C:\\Users\\13434\\AppData\\Local\\Packages\\PythonSoftwareFoundation.Python.3.7_qbz5n2kfra8p0\\LocalCache\\local-packages\\Python37\\site-packages\\sklearn\\datasets\\tests\\data\\openml\\292\\api-v1-json-data-list-data_name-australian-limit-2-data_version-1-status-deactivated.json.gz'**.\n\nCan anyone help?
Thanks.","AnswerCount":7,"Available Count":4,"Score":0.0,"is_accepted":false,"ViewCount":91318,"Q_Id":65980952,"Users Score":0,"Answer":"Keep packages up to date. I upgraded my pip and the problem was gone.","Q_Score":17,"Tags":"python,scikit-learn,pip","A_Id":72364545,"CreationDate":"2021-01-31T15:36:00.000","Title":"Python: Could not install packages due to an OSError: [Errno 2] No such file or directory","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I try to use pip to install sklearn, and I receive the following error message:\n\nERROR: Could not install packages due to an OSError: [Errno 2] No such file or directory: 'C:\\Users\\13434\\AppData\\Local\\Packages\\PythonSoftwareFoundation.Python.3.7_qbz5n2kfra8p0\\LocalCache\\local-packages\\Python37\\site-packages\\sklearn\\datasets\\tests\\data\\openml\\292\\api-v1-json-data-list-data_name-australian-limit-2-data_version-1-status-deactivated.json.gz'**.\n\nCan anyone help?
Thanks.","AnswerCount":7,"Available Count":4,"Score":0.0,"is_accepted":false,"ViewCount":91318,"Q_Id":65980952,"Users Score":0,"Answer":"If Windows is 8.\nYou need to change your access policy inside Python3 repository where all packages, including PIP.\n\"Properties\" -> \"Security\" -> select a user -> select \"Change\" and everything below.\nNext, update PIP (py -m pip install --upgrade pip) and install the packages inside ENV","Q_Score":17,"Tags":"python,scikit-learn,pip","A_Id":69944689,"CreationDate":"2021-01-31T15:36:00.000","Title":"Python: Could not install packages due to an OSError: [Errno 2] No such file or directory","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I try to use pip to install sklearn, and I receive the following error message:\n\nERROR: Could not install packages due to an OSError: [Errno 2] No such file or directory: 'C:\\Users\\13434\\AppData\\Local\\Packages\\PythonSoftwareFoundation.Python.3.7_qbz5n2kfra8p0\\LocalCache\\local-packages\\Python37\\site-packages\\sklearn\\datasets\\tests\\data\\openml\\292\\api-v1-json-data-list-data_name-australian-limit-2-data_version-1-status-deactivated.json.gz'**.\n\nCan anyone help? 
Thanks.","AnswerCount":7,"Available Count":4,"Score":0.057080742,"is_accepted":false,"ViewCount":91318,"Q_Id":65980952,"Users Score":2,"Answer":"I removed and reinstalled Python and then entered this into my terminal command line:\npip install --trusted-host pypi.org --trusted-host files.pythonhosted.org pip install --upgrade pip\nand it fixed my issue.\npip install --trusted-host pypi.org --trusted-host files.pythonhosted.org pip install\nand it should work.","Q_Score":17,"Tags":"python,scikit-learn,pip","A_Id":69421859,"CreationDate":"2021-01-31T15:36:00.000","Title":"Python: Could not install packages due to an OSError: [Errno 2] No such file or directory","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I try to use pip to install sklearn, and I receive the following error message:\n\nERROR: Could not install packages due to an OSError: [Errno 2] No such file or directory: 'C:\\Users\\13434\\AppData\\Local\\Packages\\PythonSoftwareFoundation.Python.3.7_qbz5n2kfra8p0\\LocalCache\\local-packages\\Python37\\site-packages\\sklearn\\datasets\\tests\\data\\openml\\292\\api-v1-json-data-list-data_name-australian-limit-2-data_version-1-status-deactivated.json.gz'**.\n\nCan anyone help? 
Thanks.","AnswerCount":7,"Available Count":4,"Score":0.1418931938,"is_accepted":false,"ViewCount":91318,"Q_Id":65980952,"Users Score":5,"Answer":"Try sudo pip install 'package name' --user","Q_Score":17,"Tags":"python,scikit-learn,pip","A_Id":66918871,"CreationDate":"2021-01-31T15:36:00.000","Title":"Python: Could not install packages due to an OSError: [Errno 2] No such file or directory","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"The file got corrupted its missing some code from everywhere.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":54,"Q_Id":65981501,"Users Score":0,"Answer":"Are you sure you saved the edits to the file?\nVSC has a buffer for changes, so you can pick up where you left off on a file without actually saving your modifications to it. If you open the file by folder, you might skip over that cache and see what looks like an outdated version.","Q_Score":0,"Tags":"python,visual-studio-code","A_Id":65981851,"CreationDate":"2021-01-31T16:30:00.000","Title":"Problems with visual studio code","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have been trying for a while and have not been able to figure this out. 
It sounds like a simple problem and I'm sure it's just me...so sure you will be able to help.\nI basically want to only have an inline model display when I am in Create View (Creating a new entry) and NOT when I'm in Edit View (Editing an existing entry).","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":70,"Q_Id":65981585,"Users Score":0,"Answer":"Override the form_edit_rules and remove the field of the inline model.","Q_Score":0,"Tags":"python,flask,flask-admin","A_Id":70128367,"CreationDate":"2021-01-31T16:38:00.000","Title":"Flask Admin - Only include inline_models in Create View and NOT Edit View","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have imported an Excel file as a dataframe using pandas.\nI now need to delete all rows from row 41,504 (index 41,505) and below.\nI have tried df.drop(df.index[41504]), although that only catches the one row. 
How do I tell Pandas to delete onwards from that row?\nI did not want to delete by an index range as the dataset has tens of thousands of rows, and I would prefer not to scroll through the whole thing.\nThank you for your help.\nKind regards","AnswerCount":4,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":45,"Q_Id":65983380,"Users Score":0,"Answer":"You can reassign the range you do want back into the variable instead of removing the range you do not want.","Q_Score":0,"Tags":"python,python-3.x,pandas","A_Id":65983446,"CreationDate":"2021-01-31T19:34:00.000","Title":"Ask Pandas to delete all rows beneath a certain row","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Just want to understand the logic of using - range(len()) for the purpose of indexing\nfor eg. i have mylist=[1,2,3,4,5]\nso here -> len(mylist) would be 5\nand range(len(mylist)) would be (0,5)\nhowever the computer starts reading from 0 as position : making mylist 0 to 4\nwhere mylist[4]= 5\nso here do we use range(len(mylist)-1) to set the range correctly for the purpose of indexing ?\napologies if i am not clear (beginner)\nThanks.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":76,"Q_Id":65986403,"Users Score":0,"Answer":"Remember that the length of the list is always one greater than its last index. In your example, the length of the list is 5, whereas the final index is 4. So when you say range(len(my_list)), the range that stops one short compensates for the len that is one too long.
It all works out nicely so that you iterate through all items of the list by range(len(my_list)).","Q_Score":1,"Tags":"python-3.x,range","A_Id":65986580,"CreationDate":"2021-02-01T02:25:00.000","Title":"Using range and length function together in python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm learning python and use VS Code as the editor and when I try to run the .py file I get the following message ,\n\nPython was not found; run without arguments to install from the Microsoft Store, or disable this shortcut from Settings > Manage App Execution Aliases.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1352,"Q_Id":65987114,"Users Score":1,"Answer":"3 Ways to solve this :-\n\nIf Python is not installed,then install it from python.org\n\nIf its already installed then it might not have been added to path.\nTo add python to path, search for environment variables in search bar, then edit the path option and add the python installation directory location there.\n\nOR you may just re-install python from python installer and tick the \"add python to path\" option\n\n\n\nPlus I would not recommend using windows store version of python. Just use normal python installer from python.org","Q_Score":0,"Tags":"python,visual-studio-code","A_Id":65987265,"CreationDate":"2021-02-01T04:22:00.000","Title":"Unable to run .py files from Powershell or VS code","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I made a release apk using buildozer android release command. 
Then I sign and zipalign my apk using the following commands.\ngenerate keystore file [before] - keytool -genkey -v -keystore myapp.keystore -alias myalias -keyalg RSA -keysize 2048 -validity 10000\ngenerate keystore file [after] - keytool -importkeystore -srckeystore myapp.keystore -destkeystore myapp.keystore -deststoretype pkcs12\nsign apk - jarsigner -verbose -sigalg SHA1withRSA -digestalg SHA1 -keystore myapp.keystore myapp-0.1-arm64-v8a-release-unsigned.apk myalias\nzipalign apk - zipalign -v 4 myapp-0.1-arm64-v8a-release-unsigned.apk playstore-release.apk\nAfter everything is done, the Play Store gives this error - You uploaded an APK with an invalid signature (learn more about signing). Error from apksigner: ERROR: MIN_SIG_SCHEME_FOR_TARGET_SDK_NOT_MET: Target SDK version 30 requires a minimum of signature scheme v2; the APK is not signed with this or a later signature scheme\ntarget max api 30 and min api 21, sdk 30 used\nHow do I upload my apk to the Play Store in 2021?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":187,"Q_Id":65988193,"Users Score":0,"Answer":"First, your project must be able to run on Android as an .apk in debug, unsigned mode.\nIf that's the case, you can read these 2 steps.\nFirst you must sign your application to be sure that you are the only one who can update it (preventing malicious code insertion).\nChange this in your buildozer.spec file:\n#The app store asks us to have this architecture and the latest api available (actually 29)\nandroid.arch = arm64-v8a\nandroid.api = 29\n#Each time you update your apk on the app store, you have to increment this variable by 1.
It's 8211 by default BUT BE AWARE, you have to change it from the start (I begin with 1):\nandroid.numeric_version = 1\n\nNow that your key is signed, go on the terminal and:\n---- FULFILL FOLLOWING VARIABLES ----\nproject_path=~\/MY\/PATH\nkey_filename=mykeyfilename\nkey_alias=mykeyaliasname\npassword=\"turlututu\"\n---- TO DO ONLY ONE TIME ----\nmkdir -p ~\/keystores\/\nkeytool -genkey -v -keystore ~\/keystores\/$key_filename.keystore -alias $key_alias -keyalg RSA -keysize 2048 -validity 10000\n---- a warning is printed and advice us to migrate to PKCS12 so we do it ----\nkeytool -importkeystore -srckeystore ~\/keystores\/$key_filename.keystore -destkeystore ~\/keystores\/$key_filename.keystore -deststoretype pkcs12\n---- ENDING OF THE PART TO DO ONLY ONE TIME ----\nexport P4A_RELEASE_KEYSTORE=~\/keystores\/$key_filename.keystore\nexport P4A_RELEASE_KEYSTORE_PASSWD=$password\nexport P4A_RELEASE_KEYALIAS_PASSWD=$password\nexport P4A_RELEASE_KEYALIAS=$key_alias\ncd $project_path\nbuildozer -v android release","Q_Score":0,"Tags":"python,android,kivy,kivymd","A_Id":65994765,"CreationDate":"2021-02-01T06:42:00.000","Title":"Error while signing an kivy app for the play store","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have an incident number column that I'm trying to make consistent in terms of the format that it's in. Some are printed as '15-0019651' and others are '18490531'. 
All I'm trying to do is remove the hyphen for the Inci_no's that have them.\nWhen I run df.Inci_no.dtype it returns it as object.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":26,"Q_Id":65988666,"Users Score":1,"Answer":"df['incident_no'] = df['incident_no'].str.replace('-','')","Q_Score":0,"Tags":"python","A_Id":65988793,"CreationDate":"2021-02-01T07:27:00.000","Title":"Convert column into same format","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have python function which receives and store live tick data from a server(By API requestes). And another function which fetch the data to candle bars of 1minute and appends it to pandas Data frame. Then i want to call another function which apply some mathematical computations and manages order execution in live market.\nBut i am confused in which method to use between Multi-threading, Multi-processing or AsyncIO. What i want is uninterrupted flow of tick data which receives data in fractions of milliseconds to my system so that i donot miss any realtime data, And at the same time able to manage orders and perform mathematical computations.\nPlease advise me which option will be better to choose from the above?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":162,"Q_Id":65989090,"Users Score":1,"Answer":"I think you should use multi-threading and\/or asyncIO since getting processes to work together can be a pain in the neck. Since you need to do computations, you should store the data\/use a queue and use a second process to make the calculations,and you can add more math processes if one couldn't catch up with data inflow.\nOn second thought. 
You'll still have to carefully pack data fast enough so you don't spend most of your time transferring the data (pickle or process Queues don't work 'cause they're too slow); you'll need some custom way, say structs, to quickly pack and unpack the data.\nBut at that point, you might as well use C\/C++ as your second (math processing) process :)\nTL;DR: Use asyncio and\/or threading to receive data, custom structs to quickly pack\/unpack data, and a few other Python\/C\/C++ etc. processes to retrieve and process the data.","Q_Score":0,"Tags":"python-3.x,multithreading,time-series,multiprocessing,python-asyncio","A_Id":65989180,"CreationDate":"2021-02-01T08:05:00.000","Title":"Is multi-threading or asyncio suitable for receiving real-time ticks with API requests while performing other tasks","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using bulk_create to upload some data from Excel to the Django db. Since the data is huge I had to use bulk_create instead of .create and .save. But the problem is that I need to show the user how many duplicate records have been found and not uploaded due to integrity errors.
Is there a way to get the number of errors or duplicate data while using bulk upload?","AnswerCount":1,"Available Count":1,"Score":-0.537049567,"is_accepted":false,"ViewCount":117,"Q_Id":65989590,"Users Score":-3,"Answer":"After reading data from the csv file:\nFirst create a list before inserting the data into the system.\nThen convert that list to a set, and sort the data in the set.\nThis way, you get each item exactly once, in sorted order.","Q_Score":0,"Tags":"python,django,postgresql","A_Id":65990514,"CreationDate":"2021-02-01T08:48:00.000","Title":"is there a way to get the count of conflicts while using Django ...bulk_create(.., ignore_conflicts=True)?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I deployed my Django website to PythonAnywhere last week and in the meantime, I made some changes to texts on the website. Now, I am trying to translate these pieces of text using the internationalization package in PythonAnywhere, but somehow it does not work.\nWhen I run python manage.py makemessages -l en, my django.po file is updated and I am able to add the translations, but once I run python manage.py compilemessages -l en, the English translations do not show up on the website.\nThe first day, I did get the translations to work, but now they don't anymore. What could be the cause of this?
And could anybody help me find a way to solve the issue?\nThanks!","AnswerCount":1,"Available Count":1,"Score":0.6640367703,"is_accepted":false,"ViewCount":89,"Q_Id":65989971,"Users Score":4,"Answer":"Apparently I was just being stupid, as I used some commands from my dev environment in PythonAnywhere...\nWhat solved my issues:\n\nDo not run python manage.py runserver in PythonAnywhere, as it will stop your site from updating these kinds of things.\nKeep an eye on fuzzy translations.","Q_Score":0,"Tags":"django,pythonanywhere,django-i18n","A_Id":66006149,"CreationDate":"2021-02-01T09:15:00.000","Title":"Internationalization does not work in pythonanywhere","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I recently worked on a keylogger script which sends keystrokes to email. I have 2 python files:\nThe main one which is the keylogger and the second which has the functions to send the keystrokes by email.\nI tried to convert both of the files to exe using the pyinstaller command: \"pyinstaller main.py email.py --onefile\" and it worked perfectly, but when I went to the exe file that was created and opened it, the command prompt opened and closed and the code doesn't work.
I tried to combine both files but it didn't work.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":45,"Q_Id":65992506,"Users Score":0,"Answer":"I believe the command is like this: pyinstaller --onefile 1stfile.py 2ndfile.py","Q_Score":0,"Tags":"python,pyinstaller,exe,converters,keylogger","A_Id":65992886,"CreationDate":"2021-02-01T12:11:00.000","Title":"Converting and combining 2 Python files into one exe doesn't work","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have Django model with Image field, but sometimes I don't need to actually upload file and store path to it, but I need to store only path. Especially url. I mean, I have web client, that receives both foreign urls like sun1-17.userapi.com and url of my own server, so sometimes I don't need to download but need to store url. Is it possible, to store url in ImageField, or I need to make CharField and save files via python? If its impossible, how do I save file in python3, having one, sent me via multipart?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":254,"Q_Id":65994385,"Users Score":0,"Answer":"The URL field in the ImageField is ReadOnly, so you cannot write it. 
You should probably use, in addition, a URLField (better than a CharField) to save the URLs.\nYou can allow null values on both and use only the appropriate one according to your scenario.","Q_Score":0,"Tags":"python-3.x,django,imagefield","A_Id":65994666,"CreationDate":"2021-02-01T14:22:00.000","Title":"How to save URL to a Django ImageField","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm trying to run an automation job in Python that restarts a deployment in a Kubernetes cluster. I cannot install kubectl on the box due to limited permissions. Does anyone have a suggestion or solution for this?\nThank you.","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":1450,"Q_Id":65996468,"Users Score":1,"Answer":"There is no atomic operation corresponding to kubectl rollout restart in the Kubernetes clients. This is an operation that is composed of multiple API calls.\nWhat to do depends on what you want. To just get a new Pod of the same Deployment you can delete a Pod; alternatively, you could add or change an annotation on the Deployment to trigger a new rolling deployment.","Q_Score":0,"Tags":"python,kubernetes","A_Id":65996667,"CreationDate":"2021-02-01T16:32:00.000","Title":"Python client equivalent of `kubectl rollout restart`","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a huge issue with VS Code since many weeks. One day VS Code didn't manage to run any python file.
I have the message :\n\nbash: C:\/Users\/rapha\/AppData\/Local\/Programs\/Python\/Python38\/python.exe: No such file or directory\n\nI have uninstall Python and VS CODE many times to add properly python 3.8 to my windows path but I have always the error.\nHave you got any idea ?\nThank you very much","AnswerCount":5,"Available Count":3,"Score":0.0399786803,"is_accepted":false,"ViewCount":14834,"Q_Id":65999975,"Users Score":1,"Answer":"Go to the VS Code preferences, and under interpreter, you'll find Interpreter Path, so set that to the path of your python installation, restart VS Code, and you should be good.","Q_Score":4,"Tags":"python,python-3.x,bash,visual-studio-code","A_Id":65999997,"CreationDate":"2021-02-01T20:53:00.000","Title":"VS Code can't find Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a huge issue with VS Code since many weeks. One day VS Code didn't manage to run any python file. 
I have the message :\n\nbash: C:\/Users\/rapha\/AppData\/Local\/Programs\/Python\/Python38\/python.exe: No such file or directory\n\nI have uninstall Python and VS CODE many times to add properly python 3.8 to my windows path but I have always the error.\nHave you got any idea ?\nThank you very much","AnswerCount":5,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":14834,"Q_Id":65999975,"Users Score":0,"Answer":"I had this same problem, but I found a different solution;\nin settings.json I had\n\"python.defaultInterpreterPath\": \"D:\\Program Files\\Python310\\python.exe\"\nbut even this was getting ignored for some reason!\nSo, I looked at $ENV:path in the powershell loaded in vscode, and the $ENV:path in the standard commandline powershell in windows, and they were different!\nIt seems that if you have a terminal open in VSCode, it remembers the $ENV from that terminal, even if you completely restart vscode or even if you reboot your computer.\nWhat worked for me (by accident) is, close all terminal windows (and possibly anything else terminal\/powershell related that's open) and give it another try!\nIf it still doesn't work, compare the $ENV:Path values again, and see if they're still different!","Q_Score":4,"Tags":"python,python-3.x,bash,visual-studio-code","A_Id":70574397,"CreationDate":"2021-02-01T20:53:00.000","Title":"VS Code can't find Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a huge issue with VS Code since many weeks. One day VS Code didn't manage to run any python file. 
I get the message:\n\nbash: C:\/Users\/rapha\/AppData\/Local\/Programs\/Python\/Python38\/python.exe: No such file or directory\n\nI have uninstalled Python and VS Code many times to properly add Python 3.8 to my Windows path, but I still always get the error.\nHave you got any idea?\nThank you very much","AnswerCount":5,"Available Count":3,"Score":0.0399786803,"is_accepted":false,"ViewCount":14834,"Q_Id":65999975,"Users Score":1,"Answer":"I have installed VS Code Insiders and it works perfectly. I'm happy. It doesn't fix the issue, but it's a great alternative.\nEdit: The issue came back","Q_Score":4,"Tags":"python,python-3.x,bash,visual-studio-code","A_Id":66000514,"CreationDate":"2021-02-01T20:53:00.000","Title":"VS Code can't find Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have previously been using miniconda and installing needed packages on an ad hoc basis, usually in specific environments tailored to the task at hand. I'm now constantly running into error messages about inconsistencies and failed install commands even when I try to create a new environment from scratch. So I'd like to try to make a fresh start and install the entire clean anaconda distribution, ideally without clobbering the existing environments I have that do still work.\nI tried simply using conda install -c anaconda anaconda at the root level (no virtual environment) but even that returned:\n\nCollecting package metadata (current_repodata.json): done Solving\nenvironment: \\ The environment is inconsistent, please check the\npackage plan carefully The following packages are causing the\ninconsistency:\ndefaults\/linux-64::asn1crypto==0.24.0=py37_0 failed with initial frozen solve.\nRetrying with flexible solve. 
Solving environment:\nfailed with repodata from current_repodata.json, will retry with next\nrepodata source. Collecting package metadata (repodata.json): done\nSolving environment: | failed with initial frozen solve. Retrying with\nflexible solve.\n\nAt that point I aborted and decided to seek expert advice.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":183,"Q_Id":66000074,"Users Score":2,"Answer":"Scorched-earth: remove the entire Miniconda install by removing the folder everything is in, which is probably $CONDA_PREFIX. Replace with a fresh install (Miniconda, Anaconda, or your favorite replacement) and then re-build your environments. In my opinion, since environments are disposable, a fresh install is preferred over trying to get several broken-looking environments to work. This will obviously take some time, but can be done on a scale of minutes rather than the hours it can take trying to fix broken environments.\nCareful, but time-consuming: uninstall a bunch of programs and then re-install one by one. For example, conda install numpy will likely remove a ton of packages if you're working with scientific software. This has the benefit of keeping other installation configurations, but I don't really think it's worth the time and headache (again, with environments being disposable and designed to quickly be recreated).","Q_Score":2,"Tags":"python,anaconda,conda,miniconda","A_Id":66011388,"CreationDate":"2021-02-01T21:02:00.000","Title":"What is the correct way to upgrade from broken miniconda to a clean and complete anaconda distribution?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"How can I set an image as the background in my ursina project? I know I can change the color of the background by using window.color = color.light_gray, for example. 
But how do I use an image?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":704,"Q_Id":66001243,"Users Score":0,"Answer":"Try\nSky(texture=\"texture_name\")","Q_Score":0,"Tags":"python,background","A_Id":66217213,"CreationDate":"2021-02-01T22:49:00.000","Title":"How can I set a .jpg as window background in python ursina?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"So, I did a bit of research on this, and the only way I can find to define a for loop in Python is to use for iteratorVariable in someArray... And this doesn't make sense to me? What if I just want to run a specific block of code x number of times, but I don't want all the extra overhead of having to call range() and storing a ton of numbers in a list that I'm never even going to use again...\nFor example, in JavaScript I could simply say for(let i = 0;i < 10;i++){} and it would define a SINGLE variable and increment it by one every time the loop runs, whereas in Python it seems that I'm forced to call a method which returns a massive array of numbers just so that the for in loop can iterate over them... And that just doesn't make sense to me. Why would they design their language to only have a single type of for loop, and then force the user to call methods and create variables and lists just to get it to run the way they want?\nSo my question is, is there a way to run a loop in Python without giving it an array of numbers to iterate over? Can I simply define a variable and increment it by one every frame? Am I forced to use a while loop for that purpose?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":57,"Q_Id":66001865,"Users Score":0,"Answer":"@DeepSpace\n'I'm sure that someone can, but no one will. SO is not a solve-my-HW site. 
Try it yourself and ask specific questions' \u2013 DeepSpace Nov 19 '20 at 20:47\nThank you for not helping me that time. You were right, I need to do things for myself (especially homework). After two months I finally decided to give the code another go and I completely aced it and got it working.\nThank you :))","Q_Score":0,"Tags":"python,loops","A_Id":66023983,"CreationDate":"2021-02-02T00:03:00.000","Title":"Is there a way to run a for loop in Python without having to pass a list to it?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using a regular expression to extract words between '[' and ']'.\nFor example,\nif the source is Select [dataset\\XYIblqF13F79A4163724A73.png] to [colorful],\nthe output I want is dataset\\XYIblqF13F79A4163724A73.png and colorful.\nI tried ^\\[.\\]$ but it doesn't work.\nCan I get some ideas?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":27,"Q_Id":66005181,"Users Score":0,"Answer":"I tried \\[(.*?)\\] and it works well.","Q_Score":0,"Tags":"python,regex,python-re","A_Id":66005247,"CreationDate":"2021-02-02T07:20:00.000","Title":"what regular expression to extract between bracket?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to make an app with kivy and kivymd but I can't figure out how I can make the setup screen show up only the first time. 
This is how the application is going to work: the user launches the application after installation and is shown the sign up\/log in screen, and once the user is done with the setup, the setup screens will never appear again unless the user reinstalls the application.\nHow can I make this happen?\nPlease help, and thanks SO much in advance!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":63,"Q_Id":66005886,"Users Score":1,"Answer":"I fixed this problem by creating and reading a \"text\" file. My \"text\" file has '0' as a boolean variable. Once the user is done signing up \/ logging in, I change that \"text\" file to '1', and in the __init__ func, I check whether that file equals '0' or '1'.\nI'm not sure if this is the correct way or not, but this worked for me.","Q_Score":0,"Tags":"python,kivy,kivy-language,python-3.8,kivymd","A_Id":66173541,"CreationDate":"2021-02-02T08:16:00.000","Title":"Showing the setup screen only on first launch in kivy","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I would like to merge two lists:\nlon: [14.21055347, 14.21055347, 16.39356558, 16.39356558, 14.21055347]\nlat: [48.22824817, 48.22824817, 48.18617251, 48.18617251, 47.65823679]\nto get:\ncoordinates: [[14.21055347, 48.22824817], [14.21055347, 48.22824817], [16.39356558, 48.18617251], [16.39356558, 48.18617251], [14.21055347, 47.65823679]]\nHow is this done efficiently for very long lists of lon\/lat?\nThanks for your help!","AnswerCount":5,"Available Count":1,"Score":-0.0399786803,"is_accepted":false,"ViewCount":61,"Q_Id":66007041,"Users Score":-1,"Answer":"I'm not quite sure if I'm understanding your question correctly, but if you want to match the two arrays, just do:\nname = [array1, 
array2]","Q_Score":0,"Tags":"python,list,merge,coordinates","A_Id":66007125,"CreationDate":"2021-02-02T09:39:00.000","Title":"Merge two lists to create a list which contains items of the two lists as lists","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am experimenting with building an app with buildozer, and yesterday I built my app into an APK and it installed well.\nBut today I built the same app, except that I deleted Window.size from the main.py code. And I installed the app, but then it says the app is not installed. Just that. Without any warning or error.\nUnless I typed some characters by mistake in the buildozer.spec file, my spec file was the same, too.\nWhy does this happen? And is this about the window size?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":267,"Q_Id":66007123,"Users Score":0,"Answer":"I solved this after some testing.\nI disabled Play Protect in the Play Store and deleted the previous version of the app. Then it installed correctly.","Q_Score":0,"Tags":"python,kivy,python-3.8,buildozer,kivymd","A_Id":66008556,"CreationDate":"2021-02-02T09:43:00.000","Title":"Why my same kivymd app says app is not installed?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am wondering how best to feed the changes my DQN agent makes to its environment back to itself.\nI have a battery model whereby an agent can observe a time-series forecast of 17 steps and 5 features. It then makes a decision on whether to charge or discharge.\nI want to include its current state of charge (empty, half full, full, etc.) in its observation space (i.e. 
somewhere within the (17,5) dataframes I am feeding it).\nI have several options: I can either set a whole column to the state of charge value, or a whole row, or I can flatten the whole dataframe and set one value to the state of charge value.\nAre any of these unwise? It seems a little rudimentary to me to set a whole column to a single value, but should it actually impact performance? I am wary of flattening the whole thing, as I plan to use either conv or lstm layers (although the current model is just dense layers).","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":40,"Q_Id":66008062,"Users Score":1,"Answer":"You would not want to add unnecessary features which are repetitive in the state representation, as it might hamper your RL agent's convergence later when you want to scale your model to larger input sizes (if that is in your plan).\nAlso, the decision of how much information you want to give in the state representation is mostly experimental. The best way to start would be to just give a single value as the battery state. 
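For instance, here is a rough NumPy sketch of the single-value option (the shapes and the soc value are made up for illustration; note that flattening suits dense layers rather than conv\/lstm ones):

```python
import numpy as np

# Hypothetical (17, 5) observation window: 17 timesteps, 5 features
forecast = np.zeros((17, 5))
soc = 0.5  # battery state of charge in [0, 1]; illustrative value

# One way to pass a single SoC value: flatten the window and append the
# scalar, giving a flat observation vector of length 17*5 + 1 = 86
obs = np.append(forecast.flatten(), soc)
print(obs.shape)  # (86,)
```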
But if the model does not converge, then maybe you could try out the other options you have mentioned in your question.","Q_Score":0,"Tags":"python,deep-learning,reinforcement-learning,dqn","A_Id":66030093,"CreationDate":"2021-02-02T10:41:00.000","Title":"Reinforcement learning DQN environment structure","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using a barcode reader connected to a Raspberry Pi with a Python script to read barcodes and send them to be processed in PHP, by including the IP of file.php in the Python script.\nMy Python script is on a different physical machine than the PHP script.\nI want to start up the Python script from the PHP code to enable users to scan barcodes automatically.\nI am new to Python and have not used it before.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":33,"Q_Id":66009724,"Users Score":0,"Answer":"You must run your Python script as a daemon and contact it from PHP using a socket or a REST call","Q_Score":0,"Tags":"python,php,raspberry-pi","A_Id":66012307,"CreationDate":"2021-02-02T12:27:00.000","Title":"Run Python script at start-up from php when python script is stored on a raspberrypi not in the same computer I have my php script on?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I was reading a paper about disparity, and came across the following phrase:\n\"We use the deep unary features to compute the stereo matching cost by forming a cost volume.\"\nI looked in the literature for definitions of 'unary features' and 'cost volume', yet struggled to find anything. 
Could someone clarify what these terms mean in the context of computer vision?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":348,"Q_Id":66014793,"Users Score":0,"Answer":"For a single 2D patch (wxwx1), if you're looking for its most similar sibling in another image, each pixel is a candidate, so if you write each candidate's similarity into another image, you get a 2D image of similarities. You can call it a similarity surface, or a cost surface if you put, say, distances in it.\nIn the paper, which I can't seem to access properly (I did see the archived HTML version of it), for WxH images, they store the cost, or distance, between a feature in one image and all the pixels in a window around it. Since we have WxH pixels and the window is DXxDY, the full array is WxHxDXxDY costs. So it's 4D, but they call it a \"cost volume\" by analogy.\nYou also find cost volumes in stereo: for WxH images and D possible depths or disparities, we can build a WxHxD cost volume. If you were to find the smallest cost for each pixel, you wouldn't need a full volume, but if you also consider the pixels together (two neighbours probably have the same depth) then you look at the full cost volume instead of just small slices of it.","Q_Score":0,"Tags":"python,machine-learning,computer-vision,feature-extraction,vision","A_Id":66521350,"CreationDate":"2021-02-02T17:36:00.000","Title":"Unary Features and Cost Volume (Computer Vision)","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have downloaded Pandas library with pip install pandas through the command prompt, when I try to import pandas as pd PyCharm returns an error : ModuleNotFoundError: No module named 'pandas'\nI have tried to uninstall and install again many times but nothing seems to work. 
Does anybody know a solution to this?","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":922,"Q_Id":66016461,"Users Score":0,"Answer":"You likely have multiple copies of Python installed on your system. PyCharm can be configured to use any version of Python on your system, including any virtual environments you've defined. The solution is to match up the version of Python you've installed Pandas into with the version of Python that PyCharm is using to run your code.\nThere are two places where you specify a Python version. First of all, your Project has a version associated with it. Check the \"Python Interpreter\" section of the \"Project\" section of your Preferences for that. That version is used for syntax highlighting, code completion, etc.\nBy default, the abovementioned Python version will also be used to run your code. But you can change the version of Python that your code is run with by creating or modifying a Run Configuration. To do this, check the menu next to the Run and Debug toolbar buttons near the top-left of your PyCharm window.\nWhen you do get into the Python Interpreter section of the Preferences, you'll find that you can see all of the modules installed for each Python version that PyCharm knows about. You can use this to check to see if Pandas is installed for a particular Python version.\nI would suggest you get comfortable with all that I've said above. 
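One quick sanity check (standard library only, not specific to PyCharm) is to print which interpreter is actually executing your script, then compare it with the interpreter pip used when it installed pandas:

```python
import sys

# The interpreter currently running this script; if it differs from the
# one pip installed pandas into, the import fails with ModuleNotFoundError
print(sys.executable)
print(sys.version)
```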
It will save you many headaches in the future.","Q_Score":0,"Tags":"python,pycharm","A_Id":66016499,"CreationDate":"2021-02-02T19:33:00.000","Title":"How do I import Pandas library into PyCharm?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have downloaded Pandas library with pip install pandas through the command prompt, when I try to import pandas as pd PyCharm returns an error : ModuleNotFoundError: No module named 'pandas'\nI have tried to uninstall and install again many times but nothing seems to work. Does anybody know a solution to this?","AnswerCount":3,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":922,"Q_Id":66016461,"Users Score":0,"Answer":"You can try downloading the library from PyCharm settings:\n\nFile -> Settings\nthen, Project: -> Python Interpreter\nClick a + sign to the right,\nSearch for the pandas library,\nand finally, press 'Install Package'","Q_Score":0,"Tags":"python,pycharm","A_Id":66016514,"CreationDate":"2021-02-02T19:33:00.000","Title":"How do I import Pandas library into PyCharm?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Does ,_,_ have any specific meaning in this?\nfaces,_,_ = detector.run(image = imgR, upsample_num_times = 0, adjust_threshold = 0.0)\nIs it possible to code it like this?\nfaces = detector.run(image = imgR, upsample_num_times = 0, adjust_threshold = 0.0)","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":25,"Q_Id":66020807,"Users Score":1,"Answer":"detector.run might be returning three values. So you need three placeholders to read them. 
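To illustrate the unpacking itself (with a stand-in function, since detector.run belongs to dlib and is not reproduced here):

```python
def run():
    # Stand-in for detector.run, which returns three values
    return ["rectangle"], [0.97], [0]

# Keep the first return value; read the unused scores and indices into _
faces, _, _ = run()
print(faces)  # ['rectangle']
```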
Since the code might not be using the other two return values, they have been read into _, which is a common convention.","Q_Score":1,"Tags":"python,python-3.x","A_Id":66020816,"CreationDate":"2021-02-03T03:25:00.000","Title":"What does the ,_,_ mean in faces,_,_ = detector.run(image = imgR, upsample_num_times = 0, adjust_threshold = 0.0)?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have Python code that uses watchdog and pandas to automatically upload a newly added Excel file once it has been pasted into a given path.\nThe code works well on my local machine, but when I run it to access files on Windows Server 2012 R2, I am getting a file permission error. What could be the best solution?\nNB: I am able to access the same files using pandas read_excel() without using watchdog, but I want to automate the process so that it auto-reads the files every time files are uploaded","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":53,"Q_Id":66021822,"Users Score":0,"Answer":"A few possible reasons that you get a permission denial:\n\nThe file has been locked because someone has it open.\nYour account doesn't have the permission to read\/write\/execute","Q_Score":0,"Tags":"python,watchdog,python-watchdog","A_Id":66021964,"CreationDate":"2021-02-03T05:32:00.000","Title":"windows server file permission error when using watchdog","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to use Python to monitor the complete network, so that if a route or link goes up\/down I will get a notification.\nI found a few packages like lanscan, but I\u2019m not sure whether that will work fine or 
not.\nBasically, I want to use Python as an NMS (network management system). Please suggest some good frameworks or packages.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":63,"Q_Id":66022670,"Users Score":0,"Answer":"Pure Python-based monitoring solutions are not really scalable, as Python at the core is pretty slow compared to something like C, and multiprocessing is not native. Your best bet would be to use an open-source solution like Zabbix or Cacti and use their Python APIs to interact with the data","Q_Score":0,"Tags":"python,python-3.x,network-programming,nms,opennms","A_Id":68525850,"CreationDate":"2021-02-03T07:01:00.000","Title":"How to monitor the complete network through Python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am looking for ideas\/thoughts on the following problem:\nI am working with food ingredient data such as: milk, sugar, eggs, flour, may contain nuts\nFrom such a piece of text I want to be able to identify and extract phrases like may contain nuts, to preprocess them separately.\nThese kinds of phrases can change quite a lot in terms of length and content. I thought of using NER taggers, but I don't know if they will do the job correctly, as they are mainly used for identifying single-word entities...\nAny ideas on what to use as a phrase-entity-recognition system? Also, which package would you use? Cheers","AnswerCount":3,"Available Count":1,"Score":0.1325487884,"is_accepted":false,"ViewCount":389,"Q_Id":66024941,"Users Score":2,"Answer":"It looks like your ingredient list is easy to split into a list. In that case you don't really need a sequence tagger; I wouldn't treat this problem as phrase extraction or NER. 
What I would do is train a classifier on different items in the list to label them as \"food\" or \"non-food\". You should be able to start with rules and train a basic classifier using anything really.\nBefore training a model, an even simpler step would be to run each list item through a PoS tagger (say spaCy), and if there's a verb you can guess that it's not a food item.","Q_Score":2,"Tags":"python,nlp,text-processing,named-entity-recognition,information-extraction","A_Id":66060338,"CreationDate":"2021-02-03T09:47:00.000","Title":"Alternatives to NER taggers for long, heterogeneous phrases?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"With this question I would like to gain some insights\/verify that I'm on the right track with my thinking.\nThe request is as follows: I would like to create a database on a server. This database should be updated periodically by adding information that is present in a certain folder, on a different computer. Both the server and the computer will be within the same network (I may be running into some firewall issues).\nSo the method I am thinking of using is as follows. Create a tunnel between the two systems. I will run a script that periodically (hourly or daily) searches through the specified directory, convert the files to data and add it to the database. I am planning to use python, which I am fairly familiar with.\nNote: I dont think I will be able to install python on the pc with the files.\nIs this at all doable? Is my approach solid? 
Please let me know if additional information is required.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":19,"Q_Id":66026560,"Users Score":1,"Answer":"Create a tunnel between the two systems.\n\nIf you mean setup the firewall between the two machines to allow connection, then yeah. Just open the postgresql port. Check postgresql.conf for the port number in case it isn't the default. Also put the correct permissions in pg_hba.conf so the computer's ip can connect to it.\n\nI will run a script that periodically (hourly or daily) searches through the specified directory, convert the files to data and add it to the database. I am planning to use python, which I am fairly familiar with.\n\nYeah, that's pretty standard. No problem.\n\nNote: I dont think I will be able to install python on the pc with the files.\n\nOn Windows you can install anaconda for all users or just the current user. The latter doesn't require admin privileges, so that may help.\nIf you can't install python, then you can use some python tools to turn your python program into an executable that contains all the libraries, so you just have to drop that into a folder on the computer and execute it.\nIf you absolutely cannot install anything or execute any program, then you'll have to create a scheduled task to copy the data to a computer that has python over the network, and run the python script there, but that's extra complication.\nIf the source computer is automatically backed up to a server, you can also use the backup as a data source, but there will be a delay depending on how often it runs.","Q_Score":0,"Tags":"python,database,scheduled-tasks","A_Id":66026756,"CreationDate":"2021-02-03T11:26:00.000","Title":"What strategy should I use to periodically extract information from a specific folder","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and 
Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a question regarding cvxpy capability to systematically choose one of the solutions which result in the same value of an objective function.\nLet us consider a typical supply chain optimization problem as an example:\n\nThere is a product which is ordered by customers A, B, C.\n\nThe demand for this product is 100, 200, and 100 pcs, respectively (total demand is 400 pcs).\n\nThe available supply is 250 pcs (hence, there is 150 pcs shortage).\n\nEach customer pays the same price for the product ($10\/item).\n\nThe objective is to allocate this product among the customers in such a way that the revenue is maximized.\n\nSince unit prices are identical, there are multiple possible solutions \/ allocation alternatives resulting in the optimal value of the objective function of $2500 (i.e. the total allocation multiplied by the unit price).\n\n\nIs there a way to pass as a parameter to the solver (e.g. to CBC or cvxpy) which of the allocation alternatives should be chosen? 
By default, the solver does the allocation on a first-come, first-served basis, whereas the intended allocation is the one proportional to the demand.\nYour help and assistance would be much appreciated.","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":137,"Q_Id":66027371,"Users Score":2,"Answer":"I think this can be formulated as a multiple-objective problem:\n\nMinimize cost.\nTry to be as close as possible to a single fraction of demand being met.\n\nThis can be solved in two steps:\n\nSolve for objective 1.\nAdd objective 1 as a constraint to the problem and solve for objective 2.\n\nWe need to allow deviations from the fraction of demand being met to let objective 1 stay optimal, so I would do that by adding slacks and minimizing those.\nThis is similar to what @sascha suggested in the comments.","Q_Score":0,"Tags":"python,optimization,data-science,cvxpy,coin-or-cbc","A_Id":66028480,"CreationDate":"2021-02-03T12:15:00.000","Title":"Supply Chain Optimization Problem with CVXPY and CBC","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Sorry for my bad English.\nI use a PLC with the MERVIS software, and I use a BACnet server to get my variables into my HMI (a Weintek panel PC with EasyBuilder Pro).\nEverything I have built works, but I'm not happy with EasyBuilder Pro and I want to make my own HMI. I decided to make my application with Qt in C++.\nBut I'm a physicist originally, so I am learning little by little (I have some basics of Python, C++, and structured text). I know nothing about how to build a BACnet client; do you have any idea where I can find some simple examples of communicating with my PLC? I have found nothing, and I need to learn this for my project.\nSo I have my PLC, linked over Ethernet to the PC where I am making my HMI. 
In the future I want to put this application on a touch panel PC running Windows and link it to my PLC with the MERVIS software.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":55,"Q_Id":66029193,"Users Score":0,"Answer":"If I'm clear on the question, you could check out the 'BACnet Stack' project source code or even the 'VTS' source code too - for a C\/C++ (language) reference.\nOtherwise YABE is a good project in C# (language), but there's also a BACnet NuGet package available for C# too - along with the mechanics that underpin the YABE tool.","Q_Score":0,"Tags":"python,c++,client,bacnet,human-interface","A_Id":67459182,"CreationDate":"2021-02-03T14:09:00.000","Title":"Create Bacnet client variable automate","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"Hello guys, I'm trying to let someone else access my Django website. I host it on my localhost by typing\npython manage.py runserver 0.0.0.0:8000\nand I set ALLOWED_HOSTS = ['*'].\nWhen I try to connect with my IP, it says the connection was refused.\nCan someone help me?","AnswerCount":1,"Available Count":1,"Score":-0.1973753202,"is_accepted":false,"ViewCount":78,"Q_Id":66032193,"Users Score":-1,"Answer":"You are only hosting your server on your local network; therefore, no-one outside of this network can access your server. 
To let them access it, you would have to make it accessible over the internet, for example by hosting it on AWS or another cloud provider.","Q_Score":0,"Tags":"python,django","A_Id":66032246,"CreationDate":"2021-02-03T17:00:00.000","Title":"how can i make someone on another network access my website with django","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Let's imagine I have a simple tkinter program: only a tkinter.Entry(), where I can write down some text. The main goal I have set for this tkinter.Entry() is the following: when I input a symbol there, it is immediately deleted from the tkinter.Entry(). So the question is how to make tkinter.Entry() delete every symbol as soon as it has been input there?\nI hope the problem is fully described. Thanks in advance for your help.\n\nI apologize, but it seems to me that this question has lost its former relevance for me. Sorry for taking up your precious time. I took all the answers and tips into account. I will delete the question soon. 
Thank you for your attention.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":45,"Q_Id":66033049,"Users Score":1,"Answer":"From what I deduced, you're trying to delete the content from the entry widget.\nentry.delete(0, END)\nThis should do it.","Q_Score":0,"Tags":"python-3.x,tkinter,tkinter-entry","A_Id":66033184,"CreationDate":"2021-02-03T17:56:00.000","Title":"How to immediately delete just input symbol in tkinter.Entry()","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a Tkinter app which I have converted to .app and .exe, but after giving this app to others, if I have to update the app, how should I do it (like a Play Store update)? And also, if I package this app and distribute it with an installer, how do I send updates to the app?","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":44,"Q_Id":66033193,"Users Score":2,"Answer":"I don't think that it's possible to update the app like that. Android apps are usually made with Java, and iOS apps are made with Xcode, Swift, and Objective-C. I don't usually make apps with Python, unless they are for myself, because once they are made into apps, they cannot be updated (as far as I know). If I wanted to update my Python app, I would remove the first app, then use Pyinstaller to make the updated app.\nHope this helps, and have a good day. 
:)","Q_Score":1,"Tags":"python,python-3.x,windows,macos,tkinter","A_Id":66034171,"CreationDate":"2021-02-03T18:05:00.000","Title":"how can i create a tkinter app updater which is included with the app","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"My input csv file is already 1-hot encoded (it's exported from another system):\n\nid | vehicle_1(car) | vehicle_1(truck) | vehicle_1(other)\n1 | 0 | 1 | 0\n2 | 1 | 0 | 0\n\nIs there a way to tell pandas to treat the 'vehicle_' columns as 1-hot encoded? Perhaps during the construction of the dataframe? I'm assuming libraries like seaborn, which can plot data based on categories, would need to know to treat the set of columns as 1-hot encoded values.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":40,"Q_Id":66035776,"Users Score":1,"Answer":"I don't think there's a way to tell pandas that the imported columns are already encoded (whichever encoder was used before importing).\nThe advantage is you don't have to encode again.\nThe disadvantage is the imported DF treats your encoded columns as new columns rather than encoded values of the same column.","Q_Score":0,"Tags":"python,pandas","A_Id":66041052,"CreationDate":"2021-02-03T21:15:00.000","Title":"Creating a pandas dataframe from a csv file with 1-hot encoded set of columns","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have this path on my Mac:\n\/usr\/local\/lib\/python3.8\/site-packages but I don't have \/usr\/local\/bin\/python3.8, which should be the interpreter.\nCurrently, my pip3 install command would install packages into 
\/usr\/local\/lib\/python3.8\/site-packages, but I can't use python3.8 since I don't have the interpreter. I don't care which version of python I use. I just want to install packages into a directory that I can use.\nSo please help me with one of these questions:\n\nInstall the Python 3.8 interpreter so I can use packages installed by pip3.\n\nOR\n\nChange the default pip3 installation path to another directory such as \/usr\/local\/lib\/python3.7\/site-packages which I already have.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":880,"Q_Id":66038066,"Users Score":0,"Answer":"You could do either of them.\nInstalling Python 3.8 should solve your problem,\nor\nhave you tried renaming the python3.8 directory to python3.7?\n(Make sure you have Python 3.7 by running \"python3 --version\" in a terminal)\n\n\/usr\/local\/lib\/python3.8\/site-packages ->\n\/usr\/local\/lib\/python3.7\/site-packages","Q_Score":1,"Tags":"python,python-3.x,pip,interpreter,site-packages","A_Id":66038244,"CreationDate":"2021-02-04T01:18:00.000","Title":"I have the site-packages for Python 3.8 but not the interpreter","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I want to ask if there is any way to make the regression really generalize for my dataset.\nMy problem is that after I train on the data with a Random Forest or SVM regressor, it works kinda well on the training dataset but it shows very bad results on the test dataset, even though they have the same underlying equations.\nI really don't have an idea how to improve this. Does it mean that I should keep training my regression with more data?\nCould anybody help me? :(","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":51,"Q_Id":66038420,"Users Score":1,"Answer":"We are not able to answer your question. 
You don't even provide data or your code, so how could one tell why your problem appears?\nJust my two cents:\n\nIs the train or test data unbalanced?\n-> This is the main reason for bad test results\n\nIs the sample reasonably large?","Q_Score":1,"Tags":"python,matlab,regression,svm,random-forest","A_Id":66044215,"CreationDate":"2021-02-04T02:11:00.000","Title":"How can I improve regression?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm new to Machine Learning and I'm a bit confused about how data is being read for the training\/testing process. Assuming my data is date-based and I want the model to read the older dates first before getting to the newer dates, the data is saved with the earliest date on line 1 and the latest date on line n. I'm assuming that naturally data is being read from line 1 down to line n, but I just need to be sure about it. 
And is there any way to make the model (e.g. Logistic Regression) read data in whichever direction I want it to?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":39,"Q_Id":66038977,"Users Score":2,"Answer":"A machine learning model in the supervised learning setting learns from all samples in no particular order; it is encouraged to shuffle the samples during training.\nMost of the time, the model won't get fed all samples at once; the training set is split into batches, either batches of random samples or just batches in whatever order the training set is in.","Q_Score":0,"Tags":"python,machine-learning,scikit-learn,logistic-regression,random-forest","A_Id":66039014,"CreationDate":"2021-02-04T03:29:00.000","Title":"Do Machine Learning Algorithms read data top-down or bottom up?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am working on a project where I need to generate almost 40 reports every month from SSRS reporting services. I have been trying to automate it using Python but I am missing something in the URL that I am passing. 
Any insights would be much appreciated!","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":199,"Q_Id":66039000,"Users Score":1,"Answer":"Your URL should consist of the following:\nhttp:\/\/<server name>\/ReportServer?\/<folder>\/<report name>&<parameter>=<value>&rs:Format=PDF\nYou'd have to replace all '<...>' placeholders with your server and report information.\nHere's an example: https:\/\/servername\/reportserver?\/SampleReports\/Employee Sales Summary&EmployeeID=38&rs:Format=PDF\nThis sample URL simply tells SSRS to get the Employee Sales Summary report in folder \/SampleReports, passing the parameter EmployeeID=38, and render the report as a PDF file.","Q_Score":0,"Tags":"python,visual-studio,reporting-services","A_Id":66041896,"CreationDate":"2021-02-04T03:33:00.000","Title":"How to automate SSRS reports using python in VS Code?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am working on a project where I have to detect moving objects from a moving camera, for example: detecting hanging apples on trees using a drone, detecting animals on a farm using a drone, and detecting flowers in a field. The main thing is I am using a moving camera and I don't have fixed lighting conditions, as the video is captured outdoors, so the lighting may vary. I have to use OpenCV and Python; please suggest a reliable method that can be used for examples like those mentioned above. I know some basic methods like background subtraction and motion detection, but as my lighting conditions are not stationary I am not getting proper output","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":414,"Q_Id":66041548,"Users Score":0,"Answer":"You can try optical flow. 
Since your platform is moving, it's difficult to differentiate stationary vs. dynamic objects with typical background subtraction techniques. With optical flow, objects at the same distance from the camera should be moving with the same direction and magnitude. You can detect moving objects because they have different velocities relative to the area around them. This isn't trivial though; be ready to do a lot of tweaking to get the detection to work well.","Q_Score":0,"Tags":"python,opencv","A_Id":66049951,"CreationDate":"2021-02-04T08:03:00.000","Title":"Extract moving object from video taken by moving camera(drone) using opencv python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I searched a lot but can't find any solution on the internet.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":536,"Q_Id":66042048,"Users Score":2,"Answer":"label=QLabel(\"hello\",self)\nlabel.setStyleSheet(\"QLabel{font-size:50px;font-family:'Orbitron'}\")\n\nOR\n\nfrom PyQt5.QtGui import QFontDatabase\nQFontDatabase.addApplicationFont('file_name.otf or .ttf')\nlabel=QLabel(\"hello\",self)\nlabel.setStyleSheet(\"QLabel{font-size:50px;font-family:'Orbitron'}\")","Q_Score":1,"Tags":"python,python-3.x,user-interface,pyqt,pyqt5","A_Id":66042094,"CreationDate":"2021-02-04T08:41:00.000","Title":"How to import font family in pyqt5?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Normal syntax for calling a function is func() but I have noticed that loc[] in pandas is without parentheses and still treated as a function. 
Is loc[] really a function in pandas?","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":311,"Q_Id":66043313,"Users Score":0,"Answer":"loc[] is a property that allows pandas to query data within a dataframe in a standard format. Basically, you're providing the index of the data you want.","Q_Score":1,"Tags":"python,pandas","A_Id":66043424,"CreationDate":"2021-02-04T09:58:00.000","Title":"Is loc[ ] a function in Pandas","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is there anywhere in Firebase Auth where you can access public and private RSA keys for users? This would be really helpful for my project instead of having to generate some and store them securely","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":372,"Q_Id":66043713,"Users Score":1,"Answer":"There is nothing built into Firebase Authentication for storing key pairs for the user. 
You'll typically want to use a secondary data store (like Firebase's Realtime Database, or Cloud Firestore) for that and associate the keys with the user's UID.","Q_Score":0,"Tags":"python,ios,swift,firebase-authentication,rsa","A_Id":66049160,"CreationDate":"2021-02-04T10:25:00.000","Title":"Getting Public \/ Private Keys from Firebase Auth","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"From Python script I'd like to login to Azure account and check if I have assigned a given role in a given subscription.\nThe important thing is that the MFA is configured (with Authenticator app but also with TOTP codes).\nI will have username, password and TOTP code in the script.\nI found Microsoft Authentication Library (MSAL) for Python but so far I don't see a way to use it in this scenario.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":158,"Q_Id":66049189,"Users Score":1,"Answer":"As far as I know this is not a supported scenario with MSAL, as MFA is meant as user interaction (not machine interaction).\nI would recommend not using a personal account for this kind of activity but to use a Service Principal that has the permissions to view roles.","Q_Score":0,"Tags":"python,azure,oauth-2.0,azure-active-directory","A_Id":66049486,"CreationDate":"2021-02-04T15:57:00.000","Title":"Programmatically authenticate in Azure with MFA","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I\u00b4m using the python jira package to create and update issues in our self hosted Jira. Sometimes it happens, that issues are created without filling in all information. 
Because of that, custom fields are empty after creation.\nLater, people want to update some fields that were empty before, because now they have new information.\nThis is not working; I get the error:\n\"errorMessages\":[],\"errors\":{\"customfield_100233\":\"Field 'customfield_100233' cannot be set. It is not on the appropriate screen, or unknown.\"}}\nI guess this happens because the field is hidden and therefore can't be updated, because it currently has an empty value.\nDoes somebody have an idea how to solve this?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":83,"Q_Id":66049531,"Users Score":0,"Answer":"I found the solution.\nJira dropdown fields can't be updated to None via a value; you have to set the id to -1.","Q_Score":0,"Tags":"jira-rest-api,python-jira","A_Id":66122501,"CreationDate":"2021-02-04T16:15:00.000","Title":"Populating hidden customfield via python jira","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"On the advice of TheZadok42 I installed PyCharm 2020.3.3 on both my Windows machine and my Raspberry Pi 4. I have also bought and installed the FreeNove Ultimate Starter Kit for Raspberry Pi on both. The first tutorial lesson is Blink.py, which just blinks an LED. It works fine if I just run \"python Blink.py\". However, when trying to run it from PyCharm it complains: \"No module named 'RPI'\" in reference to the line that says \"import RPi.GPIO as GPIO\". How do I get PyCharm to find it?\nPlease note that I am not well-versed in Linux, having grown up in the MS-DOS then Windows world, so please make installation instructions or configuration file edit instructions complete.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1327,"Q_Id":66052931,"Users Score":0,"Answer":"I had the same problem. 
Solution by Arseniy seems to be broken now, but the RPI.GPIO-def package seems to have exactly the purpose of enabling IntelliSense on PyCharm, without needing to install the full package.","Q_Score":2,"Tags":"python,pycharm,gpio,raspberry-pi4","A_Id":67810347,"CreationDate":"2021-02-04T19:56:00.000","Title":"PyCharm IDE can't find RPI.GPIO module","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm using Task Scheduler to execute python run.py. It works, but the Python interpreter pops up. I want to run run.py in the background without any interpreter popping up.\nHow can I do this? In Linux I'd just do python run.py & to get it to run in the background silently, but I'm not sure how to achieve the same in Windows with Task Scheduler.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":655,"Q_Id":66055158,"Users Score":0,"Answer":"You can just change .py extension to .pyw\nand the python file will run in background.\nAnd if you want to terminate or check if it actually running in background,\nsimply open Task manager and go to Processes you will see python32 running\nthere.\nEDIT\nAs you mentioned, this doesn't seem like working from command line because changing the file's .extension simply tells your system to open the file with pythonw application instead of python.\nSo when you are running this via command line as python .\\run.pyw even with the .pyw this will run with python.exe instead of pythonw.exe.\nSolution:\nAs you mentioned in the comments, run the file as pythonw .\\run.pyw or .\\run.py\nor just double click the run.pyw file.","Q_Score":0,"Tags":"python,windows,scheduled-tasks","A_Id":66055185,"CreationDate":"2021-02-04T23:02:00.000","Title":"How to hide window when running a Task Scheduler task","Data Science and Machine Learning":0,"Database 
and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"Not using a virtual environment,on my command prompt, \"python manage.py command\" works but when I activate the virtual environment and use \"python manage.py command\" it just goes to the next line without doing anything","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":55,"Q_Id":66056082,"Users Score":0,"Answer":"Try installing the virtual environment again using this command :\npip3 install virtualenvwrapper-win\nthen make a new virtual env using:\nmkvirtualenv django_env_name\nthen install your django in this env using : pip install django\n(if env is not activated , then run this command : workon django_env_name)\nThen simply go to project dir and run command : python manage.py runserver","Q_Score":0,"Tags":"python,django,virtualenv","A_Id":66057684,"CreationDate":"2021-02-05T00:53:00.000","Title":"Why does my CMD prompt do nothing when I use the \"python manage.py runserver\" command after activating the virtual environment for my Django project","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":1},{"Question":"I had developed a standalone application on Windows for Deep Learning \/ Computer vision in Python 3.X using various standard python libraries such as pandas, numpy, TensorFlow, Keras, Yolo, PyQt ...etc. 
I want to deliver this application to my client but without source code.\nCan you please help me how to do this?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":102,"Q_Id":66056606,"Users Score":0,"Answer":"\"I want to deliver this application to my client but without source code.\"\nCan you clarify this process?\nIf you just want to deliver this service to your client you can just use HTTP\/POST to let users upload their data to you, then you run these data on your network on the server, and finally, just return the prediction result to them.","Q_Score":0,"Tags":"python,python-3.x,windows,deep-learning","A_Id":66056643,"CreationDate":"2021-02-05T02:07:00.000","Title":"How to hide \/ encrypt source code written in Python?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Getting below error when trying to upgrade jupyter lab using pip. 
I have tried pip3 install --user jupyter lab --upgrade but it is still not working\nERROR: Could not install packages due to an EnvironmentError: [WinError 5] Access is denied: C:\\\\Users\\\\ADITHYA\\\\AppData\\\\Local\\\\Temp\\\\pip-uninstall-low949lo\\\\jupyter.exe\nConsider using the --user option or check the permissions.","AnswerCount":1,"Available Count":1,"Score":-0.1973753202,"is_accepted":false,"ViewCount":74,"Q_Id":66057265,"Users Score":-1,"Answer":"Right-click and check the user privileges of the folder C:\\Users\\ADITHYA\\, and open up write permissions for your account.\nIf this fails, find cmd.exe in C:\\Windows\\System32\\, right-click on it, then \"Run as administrator\" - now you can execute any command in it without worrying about permissions.","Q_Score":0,"Tags":"python,jupyter-lab","A_Id":66057613,"CreationDate":"2021-02-05T03:42:00.000","Title":"Is there any workaround for EnvironmentError: [WinError 5] Access is denied:","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Hello everyone,\nCan Healpy compute a bispectrum of the CMB map?\nIt looks like there is no such built-in function in the Healpy library.\nThanks!\nAll the best,","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":51,"Q_Id":66057826,"Users Score":2,"Answer":"No, there is no support for computing a bispectrum; if anyone implements it, it would be nice to have a pull request contribution to healpy.","Q_Score":2,"Tags":"python,healpy","A_Id":66067928,"CreationDate":"2021-02-05T05:04:00.000","Title":"Can Healpy compute a bispectrum to study non-gaussianity of the CMB map?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web 
Development":0},{"Question":"Our application processes a large audio file into a number of smaller segments and displays them in a table in the GUI. The user can listen, label and comment on each segment in the table. So is there a way to save the progress so that it can be resumed from where the user left off, at the last accessed row in the table?\nFor example, if the table has 700 rows and the user has worked with 100 and closes the application, the next time they open the application they must be able to start working with the 101st row, and the previous work must be saved.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":118,"Q_Id":66059335,"Users Score":0,"Answer":"pickle (import pickle) allows you to write\/read Python objects and therefore save progress status from one session to another.","Q_Score":0,"Tags":"python-3.x,user-interface,pyqt5,progress-bar,desktop-application","A_Id":66060530,"CreationDate":"2021-02-05T07:38:00.000","Title":"Is there a way to save progress in a desktop application developed in python and pyqt5?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using TF2.4 and when I start training the model I get this in my terminal:\n2021-02-05 07:44:03.982579: I tensorflow\/compiler\/mlir\/mlir_graph_optimization_pass.cc:116] None of the MLIR optimization passes are enabled (registered 2)\nI know this is not an error and it is benign, saying MLIR was not being used, but my training doesn't start and stays at this point without progressing. Therefore, after a couple of hours I just quit the program.\nHow can I proceed with training?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":856,"Q_Id":66059582,"Users Score":0,"Answer":"I found the error: this happens only when there is an issue in the tf.record file. 
So I re-did the tf.record and realized there was a missing XML file, even though there was no error when creating the tf.record file. I corrected that file, recreated the tf.record, and the problem was solved.","Q_Score":1,"Tags":"python-3.x,tensorflow","A_Id":67303523,"CreationDate":"2021-02-05T07:59:00.000","Title":"TF2 object detection training doesnt start with no error","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a piece of functionality which is made to transfer funds to a connected Stripe account. In our account the payouts setting is configured to “automatic”, so I use the source_transaction parameter to tie a transfer to a transaction to avoid potential problems with insufficient available balance. Now I need to transfer an amount of such value that I don’t have any “pending” transactions to cover an amount of that size, and any attempt to perform a transfer ends up with this “insufficient balance” exception.\nWhat I’ve tried:\n\npassing a source_transaction charge with “available” status (got an “insufficient funds” error)\nwandering through the Stripe documentation and Stack Overflow posts to figure out a potential solution (still can’t find one)\n\nIt would be great if someone could help me answer these questions:\n\nRetrieving the balance using the API gives me a negative amount in the “available” section. 
But as far as I understand it\u2019s not important since I use an existing charge as source_transaction, right?\nConsidering things I mentioned above is there any way for me to perform a transfer without setting payouts to \u201cmanual\u201d configuration with accumulating funds in \u201cavailable\u201d balance?\n\nHad an idea to split this transfer onto a number of smaller ones, but that doesn\u2019t seem like a good one for me.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":226,"Q_Id":66060300,"Users Score":1,"Answer":"Retrieving balance using API gives me a negative amount in the\n\u201cavailable\u201d section. But as far as I understand it\u2019s not important\nsince I use an existing charge as source_transaction, right?\n\nYou can't transfer money you don't have or that is incoming \u2014 I'm a little confused because you say you \"don\u2019t have any \u201cpending\u201d transactions\" but then you also mention using source_transaction to point to a charge that is not available yet which is contradictory.\nMy guess here is that the payment you're using as source_transaction was already paid out and thus those funds aren't available to be connected to a transfer. 
Ultimately you can't transfer $x unless you have a payment of at least $x that you received recently and hasn't been settled and paid out to your bank account already.\nI tested on a Stripe account(creating a large negative available balance by issuing refunds, then creating a new charge and transferring it with source_transaction) and it does work as long as the charge is not already used for another transfer or was paid out already.","Q_Score":1,"Tags":"python,stripe-payments,payment-gateway","A_Id":66061085,"CreationDate":"2021-02-05T08:56:00.000","Title":"Stripe Transfers: create transfer without having appropriate charge to use as a source_transaction","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Alice, Bob, Charlie, David, and Eve are friends trying to decide whether to go skiing or study next weekend. Each casts a vote. Let their votes be de- noted by predicates a, b, c, d, and e where each is True if the preference is for skiing, and False if the preference is for studying. Write a python formula (using and, or, not, and parentheses \u2013 no \u201dif\u201d statements or other Python operators allowed) that is True if the majority wants to ski, and False if the majority wants to study. Note that you can break long lines in Python with a backslash.\nHint: Do not make a function. 
The answer should be a single expression using variables a, b, c, d and e, plus operators and, or and not.\nExample:\n(a or b or c) and (a or b or d)","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":37,"Q_Id":66061377,"Users Score":0,"Answer":"I'm going to assume this is a homework question and suggest you enumerate (in English, using the words and, or, not) all possible situations in which a majority want to ski, then translate that sentence into python code.\nTo start you out, consider the situation where Alice, Bob and Charlie want to ski - this is a majority - represented by the predicate (a and b and c)","Q_Score":0,"Tags":"python,operators","A_Id":66066206,"CreationDate":"2021-02-05T10:14:00.000","Title":"Developing expressions using and or not","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Im using AWS SageMaker Notebooks.\nWhat is the best way to execute notebook from sagemaker?\n\nMy idea is to have an S3 bucket.\nWhen a new file is putted there i want to execute a notebook that reads from S3 and puts the output in other bucket.\n\nThe only way i have from now is to start an S3 event, execute a lambda function that starts a sagemaker instance and execute the notebook. But is getting too much time to start and it doesnt work yet for me with a big notebook.\nMaybe is better to export the notebook and execute it from another place in aws (in order to be faster), but i dont know where.\nThanks in advance","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":96,"Q_Id":66061721,"Users Score":0,"Answer":"I have limited experience, but..If you are talking about training jobs, the only other way to launch one is to create you own container, push to ECR and launch directly the training job without dealing with notebooks. 
I am working on a similar project where an S3 upload triggers a lambda function which starts a container (it's not SageMaker but the concept is more or less the same). The problem with this approach is that AWS takes time to launch an instance, minutes I would say. Another approach could be to have a permanently running endpoint and trigger the processing in some way.","Q_Score":0,"Tags":"python,amazon-web-services,amazon-s3,aws-lambda,amazon-sagemaker","A_Id":66139522,"CreationDate":"2021-02-05T10:37:00.000","Title":"In AWS, Execute notebook from sagemaker","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm trying to build a path in Python (Windows) and frustratingly enough it gives me the wrong path every time. The path I'm trying to build is C:\\Users\\abc\\Downloads\\directory\\[log file name].\nSo when I use print(os.getcwd()) it returns C:\\Users\\abc\\Downloads\\directory which is fine. But when I try to use os.path.join in Python, (os.path.join(os.path.abspath(os.getcwd()),GetServiceConfigData.getConfigData('logfilepath')))\nit returns only C:\\Logs\\LogMain.log and not the desired output. 
(Path.cwd().joinpath(GetServiceConfigData.getConfigData('logfilepath'))) also returns the same result.\nlogfilepath is an XML string ","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":192,"Q_Id":66063345,"Users Score":1,"Answer":"Thanks for all the help, in the end it was solved by removing 1 backslash.\n\nto\n","Q_Score":0,"Tags":"python","A_Id":66078582,"CreationDate":"2021-02-05T12:31:00.000","Title":"Python windows path","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm currently doing an internship. I have a Windows program (visual components) that I need to use to control robots in an virtual environment.\nYou can add to some elements in the program a Python script to do more advanced things. These Python scripts are stored in a few specific folders on the C drive.\nIf you are in the editor of the program it's just like notepad. 
I'm new to programming so I would like to use an IDE so that I can import a module and see which functions I can use, so that when I type \"system.\" the IDE shows me what is possible after the dot.\nThe problem is that when I use PyCharm to open the file directly, the modules aren't recognized; however, the files need to be in one of those underlying folders, because the software's built-in editor recognizes the modules without problems.\nHow can I configure PyCharm to recognize the modules?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":43,"Q_Id":66065014,"Users Score":1,"Answer":"Specifically for PyCharm: File -> Settings -> Project -> Project structure.\nFrom there, you can 'Add Content Root' and point to the other folders that contain the data you want to be able to edit.\nThese should then appear on the Project Tab.\nHOWEVER, you should be aware that doing this appends these folders to your PATH (where python looks for modules). Depending on your project structure, you may end up importing some files you weren't expecting to.","Q_Score":0,"Tags":"python,windows,ide","A_Id":69434489,"CreationDate":"2021-02-05T14:26:00.000","Title":"edit python file with ide software directly from c folder","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using Tensorflow to train my model. I am routinely saving my model every 10 epochs.
I have a limited number of samples to train, so I am augmenting my dataset to make a larger training dataset.\nIf I need to use my saved model to resume training after a power outage would it be best to resume training using the same dataset or to make a new dataset?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":51,"Q_Id":66070990,"Users Score":1,"Answer":"Your question very much depends on how you're augmenting your dataset. If your augmentation skews the statistical distribution of the underlying dataset then you should resume training with the pre-power outage dataset. Otherwise, you're assuming that your augmentation has not changed the distribution of the dataset.\nIt is a fairly safe assumption to make (assuming your augmentations do not change the data in an extremely significant way) that you are safe to resume training on a new dataset or the old dataset without significant change in accuracy.","Q_Score":0,"Tags":"python,tensorflow","A_Id":66071586,"CreationDate":"2021-02-05T21:41:00.000","Title":"Tensorflow stop and resume training","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a single camera that I can move around. I have the intrinsic parameter matrix and the extrinsic parameter matrix for each camera orientation. For object detection, I use YOLO and I get 2D bounding boxes in the image plane. My plan is to use a temporal pair of images, with the detected object in it, to triangulate the midpoint of the resulting 2D bounding box around the object.\nRight now, I use two images that are 5 frames apart. That means, the first frame has the object in it and the second frame has the same object in it after a few milliseconds. 
I use cv2.triangulatePoints to get the corresponding 3D point for the 2D midpoint of the bounding box.\nMy main problem is that when the camera is more or less steady, the resulting distance value is accurate (within a few centimeters). However, when I move the camera around, the resulting distance value for the object starts varying quite a bit (the object is static and never moves, only the camera looking at it moves). I can't seem to understand why this is the case.\nFor cv2.triangulatePoints, I get the relative rotation matrix between the two temporal camera orientations (R = R2R1) and then get the relative translation (t = t2 - Rt1). P1 and P2 are the final projection matrices (P1 for the camera at an earlier position and P2 for the camera at a later position). P1 = K[I|0] and P2 = K[R|t], where K is the 3x3 intrinsic parameter matrix, I is a 3x3 identity matrix, and 0 is 3x1 vector of zeros.\nShould I use a temporal gap of 10 frames or is using this method to localize objects using a single camera never accurate?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":164,"Q_Id":66071507,"Users Score":0,"Answer":"The centers of the bounding boxes are not guaranteed to be the projections of a single scene (3d) point, even with a perfect track, unless additional constraints are added. For example, that the tracked object is planar, or that the vertexes of the bounding boxes track points that are on a plane. Things get more complicated when tracking errors are present.\nIf you really need to triangulate the box centers (do you? are you sure you can't achieve your goals using only well-matched projections?), you could use a small area around the center in one box as a pattern, and track it using a point tracker (e.g. 
one based on the Lucas-Kanade algorithm, or one based on normalized cross-correlation) in the second image, using the second box to constrain the tracker search window.\nThen you may need to address the accuracy of your camera motion estimation - if errors are significant your triangulations will go nowhere. Bundle adjustment may need to become your friend.","Q_Score":0,"Tags":"python,deep-learning,computer-vision,opencv-python,3d-reconstruction","A_Id":66096269,"CreationDate":"2021-02-05T22:34:00.000","Title":"How to use cv2.triangulatePoints with a single moving camera","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I just uninstalled and reinstalled python on my Windows machine. Before I uninstalled my previous version I was able to just double-click on a python script and it would open the command prompt, run the script, and close automatically. After re-installing with the newest version (3.9), I am no longer able to execute the script like that with a double-click.\nClearly I had done something special last time to set that up for myself, but I don't remember what it was. Any idea how I can get that double-click deal going again?","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1269,"Q_Id":66073004,"Users Score":0,"Answer":"There will be an option of \"Open With\" after right-click on the file go and choose CMD. I hope it helps if not then sorry. 
Because I use Parrot OS","Q_Score":2,"Tags":"python,python-3.x,windows","A_Id":66073027,"CreationDate":"2021-02-06T02:26:00.000","Title":"How to execute .py file with double-click","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have built a snake game using Turtle graphics module of Python, and now I wish to convert it into an apk.\nI have tried kivy. It builds the apk, but the app crashes as soon as I open it in android. When using adb logact -s python, it says that the tkinter module is not available.\nOn further researching, I got to know that turtle graphics is based upon tkinter module and tkinter is not supported by python-for-android. The solutions suggest to rewrite my code in Kivy, but I don't know how to do so.\nAny suggestions on how can I run my turtle graphics game on android?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":277,"Q_Id":66080212,"Users Score":0,"Answer":"The solutions suggest to rewrite my code in Kivy, but I don't know how to do so.\n\n\nAny suggestions on how can I run my turtle graphics game on android?\n\nIt looks like you've already found the solution: rewrite your graphics in Kivy, or another python module that works on android. Recent pygame releases might.\nIf you don't know how to do so, you need to learn. 
If you try to do so but have problems with any specific question, that would be a better target for a stackoverflow question.","Q_Score":0,"Tags":"android,python-3.x,kivy,python-turtle,python-for-android","A_Id":66081362,"CreationDate":"2021-02-06T18:05:00.000","Title":"Turtle Graphics APK","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm new to Micropython and microcontrollers in general. I'm trying to create a script to run on a Raspberry Pi Pico that takes two time variables time1 = utime.time_ns() and time2 = utime.time_ns() and then subtracts time2 from time1 to give me the difference between the two times with nanosecond precision. When attempting to do this it prints out the value in nanoseconds rounded up to the second... for example, If there is 5 seconds between the two times the value returned is 5000000000... Is there a way that I can get a more accurate time? Am I going about this the wrong way? Thank you!!!","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":3515,"Q_Id":66083913,"Users Score":1,"Answer":"The processor crystal is not going to be accurate enough to get nanosecond precision. You would have to replace it with a TCXO\/OCXO crystal to get microsecond precision. Another problem is crystal drift with temperatures. The OCXO is a heated crystal. A TCXO is a temperature compensated crystal. As long as the temperature change a small a TCXO will likely get you in the microsecond ballpark. Then comes the firmware issues. Python is too slow for precise timing. you would have to pick a compiled language to minimize the jitter issues. 
I hope this helped.","Q_Score":0,"Tags":"time,microcontroller,micropython,raspberry-pi-pico","A_Id":68249989,"CreationDate":"2021-02-07T02:08:00.000","Title":"Raspberry Pi Pico - Nanosecond Timer","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a simple question:\nI have a 2D Array, with each 'x' and 'y' coordinate having a height 'z'. Can OpenCV be used on NumPy arrays to extract contours in Python? I've seen numerous examples with JPG, PNG etc but can't find examples with an input array.\nI just want to compare the quality\/suitability of the contour on DICOM arrays in my research.\nThanks in advance!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":61,"Q_Id":66084636,"Users Score":0,"Answer":"Image is simply an array. So, if you can do the same with images then obviously it's possible with the array as well. So, the answer to your question is yes. You can use the OpenCV to extract the contours from the NumPy array.","Q_Score":0,"Tags":"python,arrays,opencv","A_Id":66085297,"CreationDate":"2021-02-07T04:43:00.000","Title":"Contouring 2D Array with OpenCV (Python)","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"The Current problem I am looking into is described below:\nMy computer is Win10, I installed only one anaconda 3.5.3 on it. 
Running where python shows there is only one Python on my computer.\nI downloaded an rpy2 Python wheel file from the uefi website and installed it using pip install.\nWhen I import rpy2 from the C drive, everything is fine: import rpy2 and import rpy2.robjects are both OK.\nBut when I import rpy2 in my own project, only import rpy2 succeeds; when I import rpy2.robjects, the program says it cannot find the rpy2 module.\nFinally I found the problem: in my project I had accidentally created an rpy2.py file. When I first import rpy2, a pycache folder for that file is automatically created, so when I then import rpy2.robjects, of course the computer cannot find rpy2.robjects.\nJust keeping a record of my problem here.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":55,"Q_Id":66084811,"Users Score":0,"Answer":"You'll want to check the Python documentation about import rules for modules. By default, having a file called rpy2.py in the working directory for your Python code will cause import rpy2 to find this one rather than the rpy2 package.\nThe easiest fix is probably to rename your module rpy2.py to something else.","Q_Score":0,"Tags":"python,import,anaconda,rpy2","A_Id":66410958,"CreationDate":"2021-02-07T05:18:00.000","Title":"import rpy2 but can not import rpy2.robjects when changing start folders","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a VRP problem. I have the vehicles' starting positions and a distance matrix. I want the solution to be terminated\/finished when certain locations are visited.\nSo I don't want it to actually visit each index of location_matrix, but if visiting a different index besides the \"MUST VISITS\" makes for a better solution then I have no problem. Because you know, sometimes going directly from 1 to 3 is slower than 1-2-3.
(visiting 2 which is not necessary but make it for shortcut)\nI defined a dummy depot which cost 0 , I used this for end because if you use starts you have to define ends. And I put ends 000 which are basically ending position. You might ask why you didnt put your \"JOB\" locations. But this means they have to end there. So it doesn't seem optimal because example one vehicle could be really near to both of \"JOB\" locations but if it terminates \/ ends because it has END defined vehicle would stop.\nI have no idea how to make this work. Basically what I want that if certain locations are visited once just terminate - that's the solution. So if jobs are (1,3,5) and Vehicle 1 visited 1,3 and Vehicle 2 just visited 2 it should be finished.\nIf I use solver in ortools It will be like TSP problem which will try to visit each location in distance_matrix. I don't exactly want this. It could visit if results better solution(does it make sense?) but it should be focusing on \"JOB\" locations and how to go them faster","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":112,"Q_Id":66087248,"Users Score":1,"Answer":"Potential approach: Compute a new distance matrix with only the \"MUST VISIT\" locations, and run a VRP with this matrix.\n\nCompute a distance matrix with only the \"MUST VISIT\" locations.\nEach cell contains the shortest path between two \"MUST VISIT\" locations.\nStore all the pairwise shortest paths found.\nRun a regular VRP on this distance matrix.\nReconstruct the full path using the shortest paths found earlier.","Q_Score":2,"Tags":"python,python-3.x,algorithm,or-tools,vehicle-routing","A_Id":66105127,"CreationDate":"2021-02-07T11:16:00.000","Title":"Vehicle Routing Problem - How to finish\/determine when certain locations are visited?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and 
DevOps":0,"Web Development":0},{"Question":"Error : pytesseract.pytesseract.TesseractError: (127, 'tesseract: error while loading shared libraries: libarchive.so.13: cannot open shared object file: No such\nMy apt file looks like this :\nlibgl1 libsm6 libxrender1 libfontconfig1 libarchive-dev libtesseract-dev tesseract-ocr tesseract-ocr-eng\nMy requirements file has pytesseract mentioned.\nI added a buildpack, set the TESSDATA_PREFIX config variable path.\nThe issue persists.","AnswerCount":2,"Available Count":1,"Score":0.2913126125,"is_accepted":false,"ViewCount":1329,"Q_Id":66087588,"Users Score":3,"Answer":"I have faced same issue recently. But i fixed by adding the following library in Aptfile\n\n*libarchive13*\n\nand then redeployed my app and everything works fine...","Q_Score":1,"Tags":"heroku,deployment,tesseract,python-tesseract","A_Id":70569434,"CreationDate":"2021-02-07T11:53:00.000","Title":"'tesseract: error while loading shared libraries: libarchive.so.13: python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a dataset with token and entity columns. In the token column there is a word and also a URL. I want to get the number of URL in token column. but I didn't find suitable source code. What I found is a way to remove the URL. Is there a way to calculate the number of URLs in the dataset? 
How do I calculate the number of URLs in a dataset?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":43,"Q_Id":66089460,"Users Score":0,"Answer":"I Think you can use the count() fun in the DataFrame class in the Pandas library.\nFirst try importing the Pandas as pd\ncreate a object for Dataframe class\nUse the count() with object with necessary parameters.","Q_Score":0,"Tags":"python,python-3.x","A_Id":66089562,"CreationDate":"2021-02-07T15:12:00.000","Title":"Counting URL in dataset","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"How can I select the rows with null values in respect of columns name?\nWhat I have:\n\n\n\n\nID\nA\nB\n\n\n\n\n1\na\nb\n\n\n2\n\nv\n\n\n3\ny\n\n\n\n4\nw\nj\n\n\n5\nw\n\n\n\n\nWhat I want:\nSelect rows with null in respect with e.g. column B:\n\n\n\n\nID\nB\n\n\n\n\n3\ny\n\n\n5\nw","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":818,"Q_Id":66089946,"Users Score":1,"Answer":"I guess you can use isna() or isnull() functions.\ndf[df['column name'].isna()]\n(or)\ndf[df['column name'].isnull()]","Q_Score":1,"Tags":"python,pandas,dataframe","A_Id":66090038,"CreationDate":"2021-02-07T16:00:00.000","Title":"select rows with null value python-pandas","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is it possible to write text in a Blender mesh plane from python? 
I need to change characters very quickly and the knife is not ideal.\nBest","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":242,"Q_Id":66091560,"Users Score":0,"Answer":"Text object seems like the thing you are looking for Add -> Text. You can then change its content via python like this: bpy.data.objects['Text'].data.body = 'some text'.","Q_Score":0,"Tags":"python,blender,blender-2.67,blender-2.76","A_Id":66123392,"CreationDate":"2021-02-07T18:25:00.000","Title":"Blender Python Write text to plane","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"So, I'm working on a project where I'm storing passwords in a mongoDB and using Python. Python do has bcrypt build-in module which allows us to hash a plaintext. Now, I can hash a password and store the hashed password in database. Cool. And If I want to know if saved (hashed password saved in database) is same as a given password or not. (i.e. If I hashed a password 'Password' and saved it in database, bcrypt allows us to compare this hashed password and a plain password to check if they are same or not) I can check it with some built in functions.\nBut, what I really want is, I want to take that hashed password and want to print original plaintext.\n(e.g. If I hashed a password (say Plain password is 'Password' and hashed password is 'Hashed_Password' ) and saved it in database along with UserID and email for a specific website, now at some point I want to know what was the UserID and Password. So I can get UserID (since I'm not gonna hash it) but I'll only be able to get hashed password (i.e. 'Hashed_Password) and not the real one (i.e. 'Password') I saved.)\nI hope you can Understand my problem and give me a solution. In summary, is there a way to get plaintext (i.e. 
original text) from hashed text, or should I use any other method to do so (like encryption or something)?","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":2671,"Q_Id":66091635,"Users Score":1,"Answer":"The whole purpose of hashing passwords before saving them in a database is that others should not be able to see (or calculate) the original password from the database.\nYou simply cannot get the original value back from a hashed value.","Q_Score":0,"Tags":"python,hash,bcrypt","A_Id":66091779,"CreationDate":"2021-02-07T18:33:00.000","Title":"How to get plaintext from hashed text?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"So, I'm working on a project where I'm storing passwords in a mongoDB and using Python. Python do has bcrypt build-in module which allows us to hash a plaintext. Now, I can hash a password and store the hashed password in database. Cool. And If I want to know if saved (hashed password saved in database) is same as a given password or not. (i.e. If I hashed a password 'Password' and saved it in database, bcrypt allows us to compare this hashed password and a plain password to check if they are same or not) I can check it with some built in functions.\nBut, what I really want is, I want to take that hashed password and want to print original plaintext.\n(e.g. If I hashed a password (say Plain password is 'Password' and hashed password is 'Hashed_Password' ) and saved it in database along with UserID and email for a specific website, now at some point I want to know what was the UserID and Password. So I can get UserID (since I'm not gonna hash it) but I'll only be able to get hashed password (i.e. 'Hashed_Password) and not the real one (i.e. 'Password') I saved.)\nI hope you can Understand my problem and give me a solution.
In summary, is there a way to get plaintext (i.e. original text) from hashed text, or should I use any other method to do so (like encryption or something)?","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":2671,"Q_Id":66091635,"Users Score":0,"Answer":"From what I know,\nWe hash passwords so that even if an attacker gets access to the database, the attacker will not be able to obtain the passwords without using a technique like brute force. I.e., assuming the attacker knows how you hash the password, a dictionary of passwords can be hashed and compared with the database to see which passwords match.\nNow, if you want to reverse the hash, I am pretty sure that can't be done other than by trying the brute-force method explained above. We can't reverse the hash, only guess by supplying passwords.\nEncryption is often used as another layer. For example, you could encrypt the password and then hash the encrypted password. This way, even if the attacker inputs the correct password and only hashes that password, the attacker will simply not get a matching hash.","Q_Score":0,"Tags":"python,hash,bcrypt","A_Id":66091801,"CreationDate":"2021-02-07T18:33:00.000","Title":"How to get plaintext from hashed text?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm taking a MOOC on Tensorflow 2 and in the class the assignments insist that we need to use tf datasets; however, it seems like all the hoops you have to jump through to do anything with datasets means that everything is way more difficult than using a Pandas dataframe, or a NumPy array...
So why use it?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":38,"Q_Id":66093323,"Users Score":0,"Answer":"The things you mention are generally meant for small data, when a dataset can all fit in RAM (which is usually a few GB).\nIn practice, datasets are often much bigger than that. One way of dealing with this is to store the dataset on a hard drive. There are other more complicated storage solutions. TF DataSets allow you to interface with these storage solutions easily. You create the DataSet object, which represents the storage, in your script, and then as far as you're concerned you can train a model on it as usual. But behind the scenes, TF is repeatedly reading the data into RAM, using it, and then discarding it.\nTF Datasets provide many helpful methods for handling big data, such as prefetching (doing the storage reading and data preprocessing at the same time as other stuff), multithreading (doing calculations like preprocessing on several examples at the same time), shuffling (which is harder to do when you can't just reorder a dataset each time in RAM), and batching (preparing sets of several examples for feeding to a model as a batch). All of this stuff would be a pain to do yourself in an optimised way with Pandas or NumPy.","Q_Score":0,"Tags":"python,tensorflow,conceptual","A_Id":66093460,"CreationDate":"2021-02-07T21:32:00.000","Title":"(Conceptual question) Tensorflow dataset... 
why use it?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm dealing with a highly imbalanced dataset for my project rn, for the simplicity, I will give a simple example here: a dataset has a number of 20 '0's and 80 '1's so the total is 100.\nSuppose I have already used X_train, X_test,y_train,y_test = train_test_split(X, y,stratify=y,random_state=42) to make a stratified split (X_train.shape is 80 and X_test.shape is 20), so my question is how to achieve under-sampling with K-fold validation in the train dataset at the same time.\nMy initial thought is use from imblearn.under_sampling import RandomUnderSampler to get 16 '0's and 16 '1's (total is 32) to make equal distributed dataset, and do the K-fold cross-validation on that 32 dataset and discard the rest of 48 in the X_train. Use the model to predict the X_test. So I was wondering if this is correct procedure to deal with.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":99,"Q_Id":66093809,"Users Score":0,"Answer":"You can use RandomUnderSampler method to achieve it. 
Put random states and ratio value in the arguments and try to see if this works.","Q_Score":0,"Tags":"python,machine-learning","A_Id":66102683,"CreationDate":"2021-02-07T22:32:00.000","Title":"How to do under-sampling with K-fold validation in machine learning","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have 3 small ints (under 255) which can, therefore, be represented by a single byte.\nI would like to create a larger int from that 3 bytes sequence.\nFor example, if those 3 ints where 1, 2 and 3, I would have:\n00000001, 00000010 and 00000011\nI then would like to get the int corresponding to:\n000000010000001000000011, which according to a calculator would be the integer 66051 when converted to a decimal.\nHow can I go from 3 small ints to that final larger int in Python?","AnswerCount":3,"Available Count":1,"Score":0.3215127375,"is_accepted":false,"ViewCount":46,"Q_Id":66093963,"Users Score":5,"Answer":"Bit shift operators?\nFor your example (1 << 16) + (2 << 8) + 3 gives 66051.","Q_Score":2,"Tags":"python,python-3.8","A_Id":66093993,"CreationDate":"2021-02-07T22:53:00.000","Title":"How to concatenate bytes from 3 small ints to make a larger number that is represented by those bytes in Python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have installed the NewsApi via my Mac OS terminal using pip3 install newsapi-python, and when I check my anaconda environment packages I see newsapi installed. 
However when I try to use the following command I get an error:\nfrom newsapi import NewsApiClient\nerror:\nImportError: cannot import name 'NewsApiClient' from 'newsapi' (\/opt\/anaconda3\/lib\/python3.8\/site-packages\/newsapi\/__init__.py)\nHas anybody else experienced this?","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":581,"Q_Id":66103645,"Users Score":0,"Answer":"If you have saved your python filename as \"news-api.py\" or the same as the package name you might face this error. Try renaming your file.","Q_Score":1,"Tags":"python,anaconda","A_Id":69688926,"CreationDate":"2021-02-08T14:35:00.000","Title":"Having a problem installing Newsapi on Anaconda","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"This is really throwing me for a loop. In a pandas dataframe (df) I have the following:\n\n\n\n\ndate\nNews\n\n\n\n\n2021-02-03\nSome random event occurred today.\n\n\n2021-02-03\nWe asked a question on Stack Overflow.\n\n\n2021-02-02\nThe weather is nice.\n\n\n2021-02-02\nHello. World.\n\n\n\n\nThe date column is the index which is of the date format, and the News column is a string. What I want to do is to combine the duplicate dates and join or concatenate the News column, for example:\n\n\n\n\ndate\nNews\n\n\n\n\n2021-02-03\nSome random event occurred today. We asked a question on Stack Overflow.\n\n\n2021-02-02\nThe weather is nice. Hello. World.\n\n\n\n\nSo far, I have:\ndf = df.groupby(['date']).agg({'News': list})\nHowever, while this does combine the duplicated dates, it puts the string values in a list, or rather according to the errors I've been getting while trying to join them, into a series. 
At this point, I am completely lost and any hint\/tip to lead me to the right pythonic way of doing this would be greatly appreciated!\nPS: I would like to avoid using a loop if at all possible since this will need to parse through roughly 200k records multiple times (as a function). If it makes any difference, I'll be using TextBlob on the News column to perform sentiment analysis on.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":39,"Q_Id":66104180,"Users Score":0,"Answer":"Quang Hoang answered the question perfectly! Although I'm not able to mark it as the answer sadly =(\n\ndf.groupby('date')['News'].agg(' '.join). \u2013 Quang Hoang Feb 8 at 15:08","Q_Score":1,"Tags":"python,pandas,string,dataframe,concatenation","A_Id":66169392,"CreationDate":"2021-02-08T15:07:00.000","Title":"Concatenating a series of strings into a single string within a Pandas Dataframe column (for each row)?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"After creating a new build system and saving it, every subsequent time I start Sublime Text 3 I get an error message that says:\nError loading syntax file \"Packages\/JavaScript\/JSON.sublime-syntax\": Unable to read Packages\/JavaScript\/JSON.sublime-syntax","AnswerCount":1,"Available Count":1,"Score":1.0,"is_accepted":false,"ViewCount":1246,"Q_Id":66112510,"Users Score":7,"Answer":"I found that if I click on the OK button on the alert dialog box and then close all the open tabs in Sublime Text 3 and then close ST3 and reopen it, the error message no longer comes up.","Q_Score":3,"Tags":"python,sublimetext3","A_Id":66112511,"CreationDate":"2021-02-09T03:19:00.000","Title":"Sublime Text 3 - Error loading syntax file \"Packages\/JavaScript\/JSON.sublime-syntax\"","Data Science and Machine Learning":0,"Database and 
SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm not a Python programmer and i rarely work with linux but im forced to use it for a project. The project is fairly straightforward, one task constantly gathers information as a single often updating numpy float32 value in a class, the other task that is also running constantly needs to occasionally grab the data in that variable asynchronously but not in very time critical way, and entirely on linux. My default for such tasks is the creation of a thread but after doing some research it appears as if python threading might not be the best solution for this from what i'm reading.\nSo my question is this, do I use multithreading, multiprocessing, concurrent.futures, asyncio, or (just thinking out loud here) some stdin triggering \/ stdout reading trickery or something similar to DDS on linux that I don't know about, on 2 seperate running scripts?\nJust to append, both tasks do a lot of IO, task 1 does a lot of USB IO, the other task does a bit serial and file IO. I'm not sure if this is useful. I also think the importance of resetting the data once pulled and having as little downtime in task 1 as possible should be stated. Having 2 programs talk via a file probably won't satisfy this.\nAny help would be appreciated, this has proven to be a difficult use case to google for.","AnswerCount":2,"Available Count":1,"Score":0.2913126125,"is_accepted":false,"ViewCount":32,"Q_Id":66115598,"Users Score":3,"Answer":"Threading will probably work fine, a lot of the problems with the BKL (big kernel lock) are overhyped. You just need to make sure both threads provide active opportunities for the scheduler to switch contexts on a regular basis, typically by calling sleep(0). 
Most threads run in a loop and if the loop body is fairly short then calling sleep(0) at the top or bottom of it on every iteration is usually enough. If the loop body is long you might want to put a few more in along the way. It\u2019s just a hint to the scheduler that this would be a good time to switch if other threads want to run.","Q_Score":0,"Tags":"python,python-3.x,multithreading,multiprocessing","A_Id":66115896,"CreationDate":"2021-02-09T08:45:00.000","Title":"Python3 help, 2 concurrently running tasks, one needs data from the other","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I've built an IB TWS application in python. All seems to work fine, but I'm struggling with one last element.\nTWS requires a daily logout or restart. I've opted for the daily restart at a set time so I could easily anticipate a restart of my application at certain times (at least, so I thought.)\nMy program has one class, called InteractiveBrokersAPI, which subclasses EClient and EWrapper. Upon the start of my program, I create this instance and it successfully connects to and works with TWS. Now, say that TWS restarts daily at 23:00. I have implemented logic in my program that creates a new instance of my InteractiveBrokersAPI and calls run() on it at 23:15. This too seems to work. I know this because upon creation, InteractiveBrokersAPI calls reqAccountUpdates() and I can see these updates coming in after my restart. When I try to actually commit a trade the next day, I get an error that it's not connected.\nDoes anyone else have experience in how to handle this? I am wondering how others have fixed this issue. 
Any guidance would be highly appreciated.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":198,"Q_Id":66117983,"Users Score":0,"Answer":"Well, this doesnt exactly answer your question, but have you looked at ib_insync","Q_Score":0,"Tags":"python,interactive-brokers,tws","A_Id":66525038,"CreationDate":"2021-02-09T11:19:00.000","Title":"Interactive Brokers TWS: How to handle daily restart in python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have an assignment to build an ANN for regression problems using Python from scratch without using any ML library. You might have guessed I am a true beginner at this and the process is a little confusing so I would really appreciate some help in answering the few questions that I have.\nThis is the basic training algorithm that I understand for training an ANN:\n\nForward prop for prediction\nBackward prop to calculate errors\nCalculate deltas for each weight using the errors\nAccumulate deltas over a dataset iteration and calculate the partial gradient for each weight\nOptimize weights using gradient descent\n\nI hope the steps make sense and are okay. Here are a few questions that I have:\n\nWhat activation function should I use? Sigmoid is probably not the answer.\nShould an activation function be used on the single output node?\nThe formula to calculate errors for hidden layers in back-prop is \u03b4(l) = Transpose[\u03f4(l)] x \u03b4(l+1) .* g`[z(l)] where l is the layer number and g`[z(l)] I believe is the derivative of the activation function usually taken as a(l) .* (1 - a(l)). Will this change as we use an activation function other than sigmoid?\nAny errors I made or any important tip?\n\nApologies if the questions are very basic. 
I am a raw beginner at this.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":49,"Q_Id":66118673,"Users Score":0,"Answer":"If anyone stumbles upon this with any of the same queries, here are the answers:\n1 - Sigmoid should work but it restricts the output between 0 and 1 and since this was a regression problem, you could design your problem to get a normalized output and scale it to the right number. Or just use ReLU.\n2 - Yes. The type of activation on the output node changes the type of output you get.\n3 - Of course it will. The derivative of the activation gradient is multiplied with the backpropagating global gradient to give the local gradient. Changing the activation function will change the derivative too.\n4 - Maybe try to understand the Math behind backpropagation. It's basic chain rule and once you get it, you won't get confused like this.","Q_Score":0,"Tags":"python,neural-network,linear-regression","A_Id":67556437,"CreationDate":"2021-02-09T12:06:00.000","Title":"Making an ANN for Polynomial Regression","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"We created a Bot using Python SDK V4. Now we want to create the bot with REST API in Azure.\nAs per Microsoft site's suggestion, we need to create API service in C# or Node.js.\nQuestions\n\nDo the Bot Framework and the REST API need to be in the same language like in Python?\nIf the Bot Framework is in Python language and the Bot API service is in C# will it work? If yes, how will they connect with each other?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":164,"Q_Id":66119587,"Users Score":1,"Answer":"A REST API receives and responds to HTTP requests. A Microsoft Bot Framework bot is a web app service and can be considered a REST API. 
There is also the Bot Framework REST API that exists separately from the individual bots, and Bot Framework bots send requests to the Bot Framework REST API in order to send messages to various channels. Any rest API can communicate with any other REST API regardless of what language they're written in. They all use HTTP so the protocol is the same.","Q_Score":0,"Tags":"python,c#,botframework,rest,azure-bot-service","A_Id":66165122,"CreationDate":"2021-02-09T13:05:00.000","Title":"BOT Framework in Python and Bot service is in C# will it work?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to import paramiko library to AWS Lambda. I have tried to do so on lambda using Python version 2.7, 3.6, 3.8. I upload the zip file (created on ec2 machine using cmd, containing all dependencies) by creating a layer on Lambda function, however it keeps giving me the error-No module named Paramiko. Could you please suggest me how to successfully import paramiko to establish an sftp connection.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":358,"Q_Id":66124600,"Users Score":1,"Answer":"I have imported paramiko to Lambda Python 3.8 runtime as a layer. The point is you have to pip install it and pack it into a zip file in amznlinux2 x86 EC2 instance with Python3.8 installed. 
And make sure all content in the zip file is in a folder named python.","Q_Score":0,"Tags":"python,amazon-web-services,aws-lambda,paramiko,pysftp","A_Id":68801883,"CreationDate":"2021-02-09T18:07:00.000","Title":"Problem in importing Paramiko in AWS Lambda","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to know more about this as this is new for me..\nI am trying to query InfluxDB with python to fetch data in 5 min time interval. I used a simple for-loop to get my data in small chunks and appended the chunks into another empty dataframe inside for loop one after another. This worked out pretty smoothly and I see my output. But while I try to perform mathematical operations on this large dataframe , it gives me a Memory error stated below:\n\"Memory Error : Unable to allocate 6.95GiB for an array with shape (993407736) and datatype int64\"\nMy system has these info 8.00GB RAM, 64 bit OS x64 based processor.\nCould my system be not supporting this ?\nIs there an alternate way I can append small dataframes into another dataframe without these memory issues. I am new to this data stuff with python and I need to work with this large chunk of data.... may be an year","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":30,"Q_Id":66130596,"Users Score":0,"Answer":"Even though, your system has 8GB memory, it will be used by OS and other applications running in your system. Hence it is not able to allocate 6.95GiB only for this program. 
In case you are building a ML model & trying to run with huge data, You need to consider any of the below options\n\nUse GPU machines offered by any of the cloud provider.\nProcess the data in small chunks (If it is not ML)","Q_Score":0,"Tags":"python-3.x,pandas,dataframe,time-series,influxdb","A_Id":66130700,"CreationDate":"2021-02-10T03:40:00.000","Title":"Python Memory Error (After Appending DataFrame)","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have one data frame with time in columns but not sorted order, I want to sort in ascending order, can some one suggest any direct function or code for data frames sort time.\nMy input data frame:\n\n\n\n\nTime\ndata1\n\n\n\n\n1 month\n43.391588\n\n\n13 h\n31.548372\n\n\n14 months\n41.956652\n\n\n3.5 h\n31.847388\n\n\n\n\nExpected data frame:\n\n\n\n\nTime\ndata1\n\n\n\n\n3.5 h\n31.847388\n\n\n13 h\n31.847388\n\n\n1 month\n43.391588\n\n\n14 months\n41.956652","AnswerCount":2,"Available Count":1,"Score":-0.0996679946,"is_accepted":false,"ViewCount":41,"Q_Id":66134616,"Users Score":-1,"Answer":"Firstly you have to assert the type of data you have in your dataframe.\nThis will indicate how you may proceed.\ndf.dtypes or at your case df.index.dtypes .\nPreferred option for sorting dataframes is df.sort_values()","Q_Score":0,"Tags":"python-3.x,pandas,dataframe,rows","A_Id":66134737,"CreationDate":"2021-02-10T09:57:00.000","Title":"Sorting data frames with time in hours days months format","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"My code (test.py) looks like this (simplified):\nfrom app.utils import conversion \n(code) \nWhen I try to make an executable 
using PyInstaller, the .exe works when I import generic modules. However, I get the following error message when I use ''from app.utils import conversion'' at the beginning of my code:\nModuleNotFoundError: No module named 'app' \nand the .exe won't run.\nMy project is structured this way (simplified):\nproject\/app\/test.py \nproject\/app\/utils\/conversion.py \nThe instruction I put in the console is:\npyinstaller --onefile test.py \nAny idea why and how to overcome this? Thanks!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":39,"Q_Id":66144821,"Users Score":0,"Answer":"Here is how I fixed my problem:\nIn the .spec file, I added the missing app module in hiddenimports:\nhiddenimports=[\"app\"]\nThen, to compile the executable, I run the .spec file instead of the .py file:\npyinstaller --onefile test.spec","Q_Score":0,"Tags":"python,module,compiler-errors,pyinstaller,exe","A_Id":67639113,"CreationDate":"2021-02-10T20:52:00.000","Title":"ModuleNotFoundError: No module named 'x' because can't find the module folder, PyInstaller","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"My graduation project is to use transfer learning on a CNN model that can diagnose Covid-19 from Chest X-ray images. After spending days fine-tuning the hyper parameters such as the number of fully connected layers, the number of nodes in the layers, the learning rate, and the drop rate using the Keras tuner library with Bayesian Optimizer, I got some very good results: a test accuracy of 98% for multi-class classification and 99% for binary classification. However, I froze all the layers in the original base model. I only fine-tuned the last fully connected layers after exhaustive hyper parameter optimization. 
Most articles and papers out there say that they fine-tune the fully connected layers as well as some of the convolutional layers. Am I doing something wrong? I am afraid that this is too good to be true.\nMy data set is not that big, only 7000 images taken from the Kaggle Covid-19 competition.\nI used image enhancement techniques such as N-CLAHE on the images before the training and the classification, which improved the accuracy significantly compared to not enhancing the images.\nI did the same for multiple state-of-the-art models, such as VGG-16 and ResNet50, and they all gave me superb results.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":157,"Q_Id":66145265,"Users Score":0,"Answer":"If you mean by \"only fine tuned the last Fully connected layers\" then NO, you did not.\nYou can choose to fine-tune any layer of your choice but most importantly the final layers of the model, which is what you did, so you're good to go.","Q_Score":0,"Tags":"python,machine-learning,keras,deep-learning,transfer-learning","A_Id":66148625,"CreationDate":"2021-02-10T21:25:00.000","Title":"Do I need to fine-tune the last convolutional layers in state-of-the-art CNN models like ResNet50?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have 100 images, I applied horizontal and vertical augmentation with a probability of 1 using the Pytorch functions RandomHorizontalFlip and RandomVerticalFlip. 
After this, would my total number of images be 300, or something else?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":272,"Q_Id":66147184,"Users Score":0,"Answer":"The above illustration is accurate if you want to increase the dataset size.\nHowever, when you use the transforms.Compose function, it augments images at runtime and discards them after the operation, because storing redundant data is only a storage overhead.","Q_Score":2,"Tags":"python,pytorch","A_Id":66148651,"CreationDate":"2021-02-11T00:36:00.000","Title":"Number of images after image augmentation in pytorch","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a task which requires downloading the data from \"Who has seen this view\" from multiple dashboards each week, to then prepare a report on weekly activity.\nI am trying to find a way to automate this using Python to avoid having to manually download it each week, but I can't find a way to do this programmatically without being the organisation's administrator (I am not the administrator; I only own the dashboards).\nMy thoughts were to use a web scraper, but I am also encountering a hurdle with the company's Okta SSO.\nIs there a way that I can use my open browser (or something that already contains my SSO credentials) to access this data and then download it as CSV? 
I hope this makes sense, appreciate any help you can give.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":258,"Q_Id":66147814,"Users Score":0,"Answer":"I had the same problem a week ago.\nI tried to do it in a lot of ways with Python and the API, and I think that for the moment there's no way to grab it with Python without using scraping tools...\nIn the Postgres database you might find a table with view stats, and it holds everything (and even more) that is visible in 'Who has seen this view' in Tableau.","Q_Score":0,"Tags":"python,web-scraping,tableau-api","A_Id":71228672,"CreationDate":"2021-02-11T02:06:00.000","Title":"Tableau \"Who has seen this view\" data automation","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I've used TERR to make calculated columns and other types of data functions in Spotfire and am happy to hear you can now use Python. I did a simple test to ensure things are working (x3 = x2*2) - that's literally the script I wrote in the Python data function window - and then set up the input parameters (x2 as a column) and the output parameter (x3) to be a new column. The values come out fine, but the newly calculated column comes out named x2(2). I looked into the input\/output parameters and all the names are correct, yet the column still comes out named that way. My concern is that this is uber simple, yet the column is not being named what is in the script even though everything is set up correctly. There is even a YouTube video by a Spotfire employee where the same thing happens to them and they don't mention it at all.\nHas anybody else run across this?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":91,"Q_Id":66148445,"Users Score":1,"Answer":"It does seem to differ from how the equivalent TERR data function works. 
I consulted with the Spotfire engineering team, and here is what they suggest. It has to do with how a Column input is handled internally in Python vs TERR. In both Python and TERR, inputs (and outputs) are passed over as a table. In TERR's case a data.frame, and in Python's case a pandas.DataFrame. In TERR's case though, if the Data Function says the input is a Column, this is actually converted from a 1-column data.frame to a vector of the equivalent type; similarly, for a Value it is converted from its 1x1 data.frame to a scalar type. In Python, Value inputs are treated the same, but Column inputs are left as a pandas.Series, which retains the column name from the original input column.\nMaybe you can try something different. You wouldn't want to convert it to a standard Python list, because in that case, x2*2 would actually make the column twice as long, rather than a vectorised arithmetic operation. But you could make it a straight numpy array instead. You can try adding \"x2 = x2.to_numpy()\" at the top of your example, and see if the result matches what you expected.","Q_Score":0,"Tags":"python,spotfire","A_Id":66174808,"CreationDate":"2021-02-11T03:40:00.000","Title":"Why isn't my new column being named correctly when using a Python Data Function in Spotfire","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"We have Azure http triggered function app(f1) which talks to another http triggered function app(f2) that has a prediction algorithm.\nDepending upon input request size from function(f1), the response time of function(f2) increase a lot.\nWhen the response time of function(f2) is more, the functions get timed out at 320 seconds.\n\nOur requirement is to provide prediction algorithm as a\nservice(f2)\n\nAn orchestration API(f1) which will be called by the client and\nbased on 
the client's input request, (f1) will collect the\ndata from the database, do data validation, and pass the data to\n(f2) for prediction\n\nAfter prediction, (f2) would respond with the predicted result to\n(f1)\n\nOnce (f1) receives the response from (f2), (f1) would respond\nback to the client.\n\n\n\nWe are searching for an alternative Azure approach or solution which will\nreduce the latency of the API; the condition is to have f2\nas a service.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":270,"Q_Id":66150680,"Users Score":0,"Answer":"If it takes more than 5 minutes in total to validate user input, retrieve additional data, feed it to the model and run the model itself, you might want to look at something different than APIs that return responses synchronously.\nWith these kinds of running times, I would recommend an asynchronous pattern, for example: F1 stores all data on an Azure Queue, and F2 (queue triggered) runs the model and stores data in the database. The requestor can monitor the database for updates. If F1 takes the most time, then create an F0 that stores the request on a Queue and make F1 queue triggered as well.","Q_Score":1,"Tags":"python,azure,rest,azure-functions,azure-api-management","A_Id":66150785,"CreationDate":"2021-02-11T07:49:00.000","Title":"Need better approach for azure api to process large amount of data","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I want to fetch CloudWatch logs for different Lambda functions and trace a particular string, just like Insights, because we have a set of Lambda functions and applications which need to run for a particular job. 
Many times some of the services stop running due to issues, so I want to trace them with a particular string and get an error report showing where they errored.\nE.g.: Lambda1 will call Lambda2 and Lambda2 will add an entry in DynamoDB.\nNow I want to create a Lambda function which will trace Lambda1 and Lambda2 and give me a report if both Lambdas ran successfully for a particular JOBID.\nSo far I have tried to use AWS CloudWatch as a trigger, but it gives logs only for one particular function, not for all functions.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":38,"Q_Id":66151320,"Users Score":0,"Answer":"There is a Python library called Boto3 which helped me resolve this issue. The Lambda function should have proper read access to fetch the logs of the corresponding Lambda functions we are tracking.","Q_Score":0,"Tags":"java,python,amazon-web-services,aws-lambda","A_Id":71016126,"CreationDate":"2021-02-11T08:40:00.000","Title":"Get CloudWatch logs from different lambda functions or applications","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I\u2019m trying to make a bot to play a video game for me, but neither pyautogui nor directinput can make the character move in another direction. What is going on? 
Is there any alternative?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":167,"Q_Id":66153533,"Users Score":0,"Answer":"Try running the python script as an administrator.","Q_Score":0,"Tags":"python,pyautogui,directinput","A_Id":66158637,"CreationDate":"2021-02-11T11:06:00.000","Title":"Why doesn\u2019t pyautogui and directinput work in video game?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a glue job that parses csv file uploaded to S3 and persist data to rds instance. It was working fine. But one day there occurred an error\n\nAn error occurred while calling\nz:com.amazonaws.services.glue.util.Job.commit. Not initialized.\n\nHow can I resolve this? I haven't made any changes in the script or anywhere. The python version used is 3, glue version 2. Somebody please help.","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":668,"Q_Id":66153577,"Users Score":1,"Answer":"Resetting the Job Bookmark seemed to have fixed this for me. In the Console, select the Glue Job -> Action -> Reset job bookmark.","Q_Score":1,"Tags":"python-3.x,amazon-s3,amazon-rds,aws-glue","A_Id":69762043,"CreationDate":"2021-02-11T11:09:00.000","Title":"An error occurred while calling z:com.amazonaws.services.glue.util.Job.commit. Not initialized","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am trying to activate the Python virtual environment using workon command in Visual Studio Code. 
Typing the command workon lists all the virtual environments already available, but when I type the command workon env-name to activate the environment, nothing happens and I am also not getting any errors. Can someone help me with this problem?","AnswerCount":2,"Available Count":1,"Score":0.2913126125,"is_accepted":false,"ViewCount":2717,"Q_Id":66155756,"Users Score":3,"Answer":"If you have already created an environment outside Visual Studio Code (via the command prompt) and are trying to activate it from VS Code, the most common cause is PowerShell:\n\nCheck the terminal window and see which shell is being used; by default it will be PowerShell.\nChange it to cmd and try the command again.\nThis may work for you; I had success after correcting it. Thanks.","Q_Score":3,"Tags":"python,visual-studio-code,virtualenv","A_Id":67098146,"CreationDate":"2021-02-11T13:32:00.000","Title":"Virtual Environment Setup in Visual Studio Code -- Workon command","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Let me describe it as briefly and clearly as possible:\nI have 10 different copies of a node JS based program running on 10 different desktops. 
I want to create a Node JS based (or any other technology) web app deployed on a server which will check if these 10 programs are online or not.\nAny suggestions as to how I can implement this?\nNote: The Node JS based desktop apps are running on Electron.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":38,"Q_Id":66156309,"Users Score":0,"Answer":"There are 2 likely approaches.\n\nIf you want to know immediately when any of the 10 programs goes offline, you should use Socket.io.\n\n\nYour Node.js server program will act as the server and your 10 desktop programs will act as clients. The 10 client socket connections will connect to the server socket connection, and the server socket can check whether each client socket is still connected based on the Ping\/Pong concept.\n\nIn brief, in the Ping\/Pong technique the server sends a Ping event on the socket connection, the client receives the server's Ping event, and it sends a Pong event back to the server.\nIf a client does not send the Pong event back within a predefined time interval after getting a Ping event, then that client is offline or disconnected.\n\nAlternatively, you can periodically (say every 1\/5\/10 minutes) make a simple HTTP request and check whether the response status is 200. If any of the 10 desktop programs is offline, the response status will tell you.","Q_Score":0,"Tags":"javascript,python,node.js,electron,backend","A_Id":68328337,"CreationDate":"2021-02-11T14:08:00.000","Title":"Node JS detect connectivity to all Node JS programs","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Let me describe it as briefly and clearly as possible:\nI have 10 different copies of a node JS based program running on 10 different desktops. 
I want to create a Node JS based (or any other technology) web app deployed on a server which will check if these 10 programs are online or not.\nAny suggestions as to how I can implement this?\nNote: The node JS based desktop apps are running on electron.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":38,"Q_Id":66156309,"Users Score":0,"Answer":"While you can use socket.io for this, there may also be a simpler way, and that is to just use a post request \/ cron to check every X minutes if the server is reachable from the 'checking' server (that would just be the server that is doing the check).\nSo why not use socket.io? Well, without knowing how your node servers are set up, it's hard to say if socket.io would be a good fit. This is simply because socket.io uses WSS to connect, so unless you are running it from the browser it will need additional configurations \/ modules set up on the server to actually use WSS (if you do go this route, you will need the socket.io-client module on each system; this is important because it will allow you to connect to the socket.io server. Also make sure the version of socket.io matches the socket.io-client build).\nAll in all, if I were building this out, I would probably just do a simple ping of each server and log it to a DB or what not, but your requirements will really dictate the direction you go.","Q_Score":0,"Tags":"javascript,python,node.js,electron,backend","A_Id":68328746,"CreationDate":"2021-02-11T14:08:00.000","Title":"Node JS detect connectivity to all Node JS programs","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Please note that I am using pure Python and not something like Anaconda. Running on a new, updated Windows 10 machine with no virtual environment. 
After removing all previous Python installations, rebooting and performing a fresh install of Python 3.9.1, I run the python console and type:\nimport sqlite3\nI receive the following error:\nPython 3.9.1 (tags\/v3.9.1:1e5d33e, Dec 7 2020, 17:08:21) [MSC v.1927 64 bit (AMD64)] on win32\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n\n\n\nimport sqlite3\nTraceback (most recent call last):\nFile \"\", line 1, in \nFile \"C:\\Program Files\\Python39\\lib\\sqlite3\\__init__.py\", line 23, in \nfrom sqlite3.dbapi2 import *\nFile \"C:\\Program Files\\Python39\\lib\\sqlite3\\dbapi2.py\", line 27, in \nfrom _sqlite3 import *\nImportError: DLL load failed while importing _sqlite3: The specified module could not be found.\n\n\n\nI verified that the sqlite3.dll file does exist in C:\\Program Files\\Python39\\DLLs","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":493,"Q_Id":66162281,"Users Score":0,"Answer":"It was an environment issue. Previous builds (using Kivy) had left .pyd and .pyc files in the application directory, and when we ran the application, Python would try to load those and the files they reference rather than using the proper files in the Python39 directory. As soon as we deleted those artifacts, the app ran fine (and sqlite loaded fine).","Q_Score":0,"Tags":"python,windows,sqlite","A_Id":66162703,"CreationDate":"2021-02-11T20:25:00.000","Title":"Python 3.9.1 errors when attempting to import sqlite3","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to convert a CSV file to JSON. When I type in my code, it is functioning fine but I am getting all these random letters:\u00ef\u00bb\u00bf. 
Is this supposed to be happening or do I need to check the file?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":72,"Q_Id":66163141,"Users Score":0,"Answer":"In json.dump, add a named parameter ensure_ascii=False.","Q_Score":1,"Tags":"python","A_Id":66163271,"CreationDate":"2021-02-11T21:26:00.000","Title":"Output is printing something weird","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Couldn't find any answers other than one that didn't actually zoom out the same way, because when i tried grabbing elements they just came out as ' ' when it was too far zoomed out (Which Doesn't Happen With Manual Zoom) the code I tried then is:\n\ndriver.execute_script(\"document.body.style.zoom='25%'\")\n\nThe reason I need this is there are elements a need to access that can only be seen by scrolling, but by zooming out it shows all of them. If there's another way to do that then that'll be fine","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":33,"Q_Id":66165179,"Users Score":0,"Answer":"Figured it out, basically when I was looping through all elements with the classname of the things in the list, I did the command you suggested and it scrolled down a bit each one, showing more of them, then those scroll it down, etc till it reaches the bottom, thanks, stumped me for a while","Q_Score":0,"Tags":"python,selenium","A_Id":66168175,"CreationDate":"2021-02-12T01:10:00.000","Title":"How Do You Zoom In\/Out Using Chromes Zoom","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have just started with AWS Serverless and I am having some doubts. 
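Regarding the CSV-to-JSON question above: the `\u00ef\u00bb\u00bf` characters are a UTF-8 byte-order mark read back with the wrong encoding. Besides the `ensure_ascii=False` fix the answer gives, decoding the CSV with `utf-8-sig` strips the BOM at the source. A minimal stdlib sketch (the sample data is illustrative):

```python
import csv
import io
import json

# Raw CSV bytes as written by e.g. Excel: UTF-8 with a leading BOM.
raw = b"\xef\xbb\xbfname,city\r\nAnna,Kyiv\r\n"

# 'utf-8-sig' consumes the BOM; plain 'utf-8' would leave \ufeff in the data.
text = raw.decode("utf-8-sig")
rows = list(csv.DictReader(io.StringIO(text)))

out = json.dumps(rows, ensure_ascii=False)
print(out)  # [{"name": "Anna", "city": "Kyiv"}]
```

With plain `utf-8` the first header would come back as `\ufeffname`, which then surfaces as `ï»¿` when the JSON is viewed as Latin-1.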
Here is my use case and what I have tried and done so far:\nUse Case:\nMake multiple GET and POST requests to an API using HTTP API(not REST API) in AWS using lambda function.\nWhat I have done:\nCreated an HTTP API. Using $default stage currently. Created a POST route. Created a function(in python) with POST request. Attached the function integration with my POST route. I am successfully able to call this route using my frontend code(written in vanilla js). Using the data that I receive from frontend, I call an external API using it's URL in my python lambda function.\nProblem:\nI want to make a GET request to another API using it's URL. Will I have to make another lambda function to do so?\nAny help will be great. Pardon me if I have asked a silly question. It's just that I am new to AWS and HTTP API. Thank You for your time!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":495,"Q_Id":66167053,"Users Score":1,"Answer":"Based on the comments.\nA single lambda function can be used for both POST and GET requests. For this, you can have either two routes, one for POST and one for GET. Both can be integrated with the same function.\nAlternatively, you can have one ANY route to route everything into a single function.\nThe function can have same file and same handler. 
However, its logic will probably need to be modified to handle POST and GET events differently, depending on your use case.","Q_Score":1,"Tags":"python,amazon-web-services,http,aws-lambda,aws-http-api","A_Id":66167317,"CreationDate":"2021-02-12T06:02:00.000","Title":"How to make multiple HTTP method calls in AWS using HTTP API?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am writing a web app with Django in which I have to share some data with another Python socket server running in parallel.\nI have tried using a common Python file to store a global variable,\nbut since those servers are two different processes it is not working.\n(I am not interested in using any database)","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":118,"Q_Id":66167177,"Users Score":1,"Answer":"You could use a main file that starts up both services with either the threading or multiprocessing library and just communicate with normal Python data types.\nIf these two services have to be separate programs, then you would need to design an API for them to communicate and use sockets, or some library built on top of sockets, to communicate.","Q_Score":1,"Tags":"python,django,memory,python-sockets,shared-variable","A_Id":66168291,"CreationDate":"2021-02-12T06:17:00.000","Title":"how to share data between django server and python websocket server","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am planning to acquire position in 3D cartesian coordinates from an IMU (Inertial Sensor) containing Accelerometer and Gyroscope. 
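The accepted answer's single-function setup for the HTTP API can be sketched as a Lambda handler that branches on the event's `routeKey` (the field HTTP APIs use in payload format 2.0). The route names and response bodies here are illustrative assumptions, not part of the original question:

```python
import json

def handler(event, context=None):
    """One Lambda behind both a GET and a POST route (or a single ANY route)."""
    method = event.get("routeKey", "").split(" ")[0]  # e.g. "GET /items" -> "GET"
    if method == "GET":
        return {"statusCode": 200, "body": json.dumps({"items": []})}
    if method == "POST":
        payload = json.loads(event.get("body") or "{}")
        return {"statusCode": 201, "body": json.dumps({"received": payload})}
    return {"statusCode": 405, "body": "method not allowed"}

print(handler({"routeKey": "GET /items"})["statusCode"])                       # 200
print(handler({"routeKey": "POST /items", "body": '{"x": 1}'})["statusCode"])  # 201
```

Both routes in the API console then point at this one function, which is the "same file and same handler" arrangement the answer mentions.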
I'm using this to track the object's position and trajectory in 3D.\n1- From my limited knowledge I was under the assumption that the Accelerometer alone would be enough, resulting in acceleration in the xyz axes A(Ax,Ay,Az), which would need to be integrated twice to get velocity and then position; but integrating adds an unknown constant value, and this error, called drift, increases with time. How do I remove this error?\n2- Furthermore, why is there a need for a gyroscope in the first place? Can't we just translate the x-y-z axis acceleration to displacement? If the accelerometer tells the axis of motion, then why check orientation from gyroscopes? Sorry, this is a very basic question; everywhere I checked both Gyro+Accel were used, but I don't know why.\n3- Even when stationary and not in any motion there is earth's gravitational force acting on the sensor, which will always give values larger than those attributable to the motion of the sensor. How do you remove the gravity?\nOnce this has been done I'll apply Kalman Filters to them to fuse them and to smooth the values. How accurate is this method for trajectory estimation of an object in environments where GPS is not an option? I'm getting the Accelerometer and Gyroscope values from an Arduino and then importing them into Python, where they will be plotted on a 3D graph updating in real time. Any help would be highly appreciated, especially links to similar codes.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":3421,"Q_Id":66167733,"Users Score":3,"Answer":"1 - An accelerometer can be calibrated to account for some of this drift, but in the end no sensor is perfect and inaccuracy will inevitably cause drift. To fix this you would need some filter such as the Kalman filter to use the accelerometer for short high frequency data, and a secondary sensor such as a camera to periodically get the absolute position and update the internal position. 
This is the fundamental idea behind the Kalman filter.\n2 - Accelerometers aren't very good for high frequency rotational data. Just using the accelerometer's data would mean the system could not differentiate between a horizontal linear acceleration and rotational position. The gyroscope is used for the high frequency data while the accelerometer is used for low frequency data to adjust and counteract the rotational drift. A Kalman filter is one possible solution to this problem and there are many great online resources explaining this.\n3 - You would have to use the methods including gyro \/ accel sensor fusion to get the 3D orientation of the sensor and then use vector math to subtract 1g from that orientation.\nYou would most likely be better off looking at some online resources to get the gist of it and then using a pre-built sensor fusion system, whether it be a library or a fusion system on the accelerometer (available on most accelerometers today, including the mpu6050). These onboard systems typically do a better job than a simple Kalman filter and can combine other sensors such as magnetometers to gain even more accuracy.","Q_Score":4,"Tags":"python,position,accelerometer,imu,pykalman","A_Id":66168101,"CreationDate":"2021-02-12T07:20:00.000","Title":"Getting 3D Position Coordinates from an IMU Sensor on Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have recently started web development using Django. What confuses me the most is why we create tests for scenarios like \"Check if password is provided\", \"Check if password is correct\", \"Check if a password is at least 8 characters long\" etc. when we can do the same thing using validators in Django. Is there a specific advantage of using tests over validators? 
Thanks.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":31,"Q_Id":66167740,"Users Score":1,"Answer":"Try writing a big project and manually testing all the functionality to make sure there are NO bugs; then write tests for the entire thing, and I guarantee you will find at least 5.","Q_Score":0,"Tags":"python-3.x,django,unit-testing,django-rest-framework","A_Id":66168477,"CreationDate":"2021-02-12T07:20:00.000","Title":"Unit\/Functional Tests Vs Validators","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have already deleted the .py but in the configuration the file still shows in the run tab in PyCharm.","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":160,"Q_Id":66168762,"Users Score":1,"Answer":"Go to \"Edit configurations\" and select the configuration you want to remove. Then at the top you see + - ... ; the - key will delete the selected configuration.\nThe interface style of JetBrains is not so obvious to many people: it uses the macOS paradigm and not the Microsoft paradigm. 
But once you get used to it, you will also find some advantages (being powerful and compact is one).","Q_Score":3,"Tags":"python,pycharm","A_Id":66168857,"CreationDate":"2021-02-12T08:49:00.000","Title":"How to delete run configuration file for a py file that was already deleted in PyCharm?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am able to call cnxn.commit() whenever a cursor has data; however, when the cursor is empty, cnxn.commit() throws mariadb.InterfaceError: Commands out of sync; you can't run this command now\nusing\ncursor.execute(\"call getNames\")","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":378,"Q_Id":66169037,"Users Score":0,"Answer":"The solution is that I should not call the stored procedure but rather pass the query manually, like cursor.execute(\"SELECT * FROM names\")","Q_Score":0,"Tags":"python,mariadb,database-cursor","A_Id":66169755,"CreationDate":"2021-02-12T09:09:00.000","Title":"mariadb.InterfaceError: Commands out of sync; you can't run this command now","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to split text at all punctuation for English and Russian. This works except for spaces; for some reason \\s is not working, and allRussianWords ends up containing spaces, but I do not want it to.\nallRussianWords = re.split(\"[\u2014\u2026();\u00ab\u00bb!?.:,%\\s\\n]\",words)\nthis is the string that I am attempting to split:\nwords = \"\u043f\u0440\u0438\u0432\u0435\u0442, \u043c\u043e\u0451 \u0438\u043c\u044f \u041c\u044d\u0442\u0442. 
\u041a\u0430\u043a \u0442\u044b?\"\nthe punctuation is in russian","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":59,"Q_Id":66177595,"Users Score":1,"Answer":"Seems like you need a + after the closing square bracket, to match consecutive characters. One of the other answers points this out, too.\nThe \\n is also redundant, as \\s contains the line return character.","Q_Score":0,"Tags":"python,split,space,spaces,python-re","A_Id":66178948,"CreationDate":"2021-02-12T19:00:00.000","Title":"why is my python re pattern not working for splitting at spaces?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am currently building a neural network to predict features such as temperature. So the output for this could be a positive or negative value. I am normalizing my input data and using the tanh activation function in each hidden layer.\nShould I use a linear activation function for the output layer to get an unbounded continuous output OR should I use tanh for the output layer and then inverse normalize the output? 
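The fix the regex answer describes can be shown concretely: quantify the character class with `+` so a run of punctuation and whitespace counts as one delimiter, and drop the empty string that `re.split` produces after the trailing `?`. A short sketch of that fix:

```python
import re

words = "привет, моё имя Мэтт. Как ты?"
# '+' collapses consecutive delimiters; '\s' already covers '\n', so listing
# '\n' separately in the class is redundant.
parts = re.split(r"[—…();«»!?.:,%\s]+", words)
all_words = [w for w in parts if w]  # drop the empty piece after the trailing '?'
print(all_words)  # ['привет', 'моё', 'имя', 'Мэтт', 'Как', 'ты']
```

Without the `+`, the split between "Мэтт." and "Как" yields an empty string for the space that follows the period, which is exactly the stray "space" entry the question observes.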
Could someone explain this? I don't think my understanding of it is correct.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":186,"Q_Id":66178463,"Users Score":1,"Answer":"You are actually headed in the correct direction.\nOption 1:\nYou need to normalize the temperatures first and then feed them to the model. Let's say your temperature ranges over [-100,100]; convert it to the scale [-1,1], then use this scaled version of the temperature as your target variable.\nAt prediction time, just inverse-transform the output and you will get your desired result.\nOption 2:\nYou create a regression kind of neural network and don't apply any activation function to the output layer (meaning no bounds on the values; they can be +ve or -ve).\nIn this case you are not required to normalize the values.\nSample NN spec:\nInput layer ==> # neurons: one per feature\nHidden layers ==> relu\/selu as activation function | # of neurons\/hidden layers is as per your requirement\nOutput layer ==> 1 neuron \/ no activation function required","Q_Score":0,"Tags":"python,machine-learning","A_Id":66185049,"CreationDate":"2021-02-12T20:13:00.000","Title":"How to get an unbounded continuous output from a neural network?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to solve a large MIP scheduling problem. Since it will take a long time to solve the problem, I want to run the same model with fewer event points and find its n-th solution, use that solution as an initial solution\/seed for a bigger (more event points) model to find its n-th solution, and cascade this up to the desired number of event points.\nUsing the solution from the small problem, I use its binary values in the MIP start and leave the newly added event points untouched. 
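Option 1 in the answer above (scale targets into tanh's output range, then invert at prediction time) is just a linear min-max map. A plain-Python sketch, where the range [-100, 100] is the answer's illustrative assumption, not a fixed rule:

```python
LO, HI = -100.0, 100.0  # assumed physical temperature range

def normalize(t):
    """Map [LO, HI] -> [-1, 1], the output range of tanh."""
    return 2 * (t - LO) / (HI - LO) - 1

def denormalize(y):
    """Invert the map: [-1, 1] -> [LO, HI]."""
    return (y + 1) / 2 * (HI - LO) + LO

temps = [-40.0, 0.0, 21.5, 100.0]
scaled = [normalize(t) for t in temps]
print([round(v, 4) for v in scaled])  # [-0.4, 0.0, 0.215, 1.0]
print([denormalize(y) for y in scaled])  # round-trips to the original temps
```

Train against `normalize(t)` with a tanh output, then call `denormalize` on predictions; Option 2 (a linear output layer) skips both steps at the cost of an unbounded output.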
I save these values in a dictionary named seed_sol, where the key is the binary variable (obtained when creating the variable) and the value is the 0\/1 from the previous solve.\nm.add_mip_start(SolveSolution(m, seed_sol))\nUsing the above code, I warm start my larger runs. However, when I look at the output log I realised that the solution rarely improves and the gap is very low (I know for a fact that the actual optimal solution is much much higher). I suspect that the 'add_mip_start' function forces the solution values to my initial seed solution and tries to improve the solution by only adjusting the newly added binary variables.\nHow do I fix this to get the desired outcome?\nUsing:\n\nPython 3.6.8\ncplex 12.10.0.0\ndocplex 2.19.202","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":401,"Q_Id":66179181,"Users Score":0,"Answer":"The MIP start is used as a starting point, but its start values might be changed in the search, as opposed to a fixed start, where fixed values are considered as hard constraints.\nBTW, you can also implement fixed starts with constraints; this facilitates adding or removing these fixed starts.\nHowever, the interest of a MIP start resides in the quality of the initial solution. In your case, it seems the initial solution is much smaller than the large problem, so it might not help much.\nTo assess your MIP performance issues, can you indicate the size of the problem (as printed by Model.print_information())\nand also the CPLEX log (at least the part where CPLEX stalls)?","Q_Score":0,"Tags":"python,cplex,docplex","A_Id":66198320,"CreationDate":"2021-02-12T21:20:00.000","Title":"DOCPLEX MIP warmstart","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm a new learner. Just installed Python on my Windows PC. 
Instead of going to the start menu to get to the python command line to select python 64 bit, I typed python in the search field and selected IDLE python 64 bit. When I realized my mistake, I closed it using exit(). The system prompted me that it would \"kill\" the running program and I clicked OK without knowing the implications. I am worried that my action may affect the system performance. Is it serious? How do I correct this? Also, what is the right way to close IDLE python next time? Thanks!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":35,"Q_Id":66179373,"Users Score":0,"Answer":"IDLE is just the standard Python editor.\n\"Killing\" a program just means that it will be terminated, or in other words stopped.\nThe \"correct\" way to close IDLE would just be to close the window like you would any other window when your code has finished running.\nThere isn't any damage done to your system or anything, so you don't have to worry.","Q_Score":0,"Tags":"python-3.x","A_Id":66180855,"CreationDate":"2021-02-12T21:38:00.000","Title":"How do I close IDLE python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I do not have administrative privileges on my Windows 10 workstation. The IT department installed Python 2.7 at my request, but I performed a pip upgrade without the \"--user\" flag, and now the already installed pip is corrupted and I do not know how to recover it.\nThe corrupted pip always returns a syntax error on lib\\site-packages\\pip\\_internal\\cli\\main.py\", line 60\nsys.stderr.write(f\"ERROR: {exc}\")\nI cannot run --upgrade or get-pip again.\nI can write in the Python folder so I can change the main.py file.\nIs there a way to manually recover the installation (without sudo)? 
Do I need to reinstall Python?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":226,"Q_Id":66180184,"Users Score":0,"Answer":"It would be better to reinstall Python, yes.\nIt would be better to install a version of Python that is actually still supported, such as 3.6 or newer.","Q_Score":0,"Tags":"python,installation,pip,failed-installation","A_Id":66180204,"CreationDate":"2021-02-12T23:08:00.000","Title":"How to manually recover a corrupted pip installation?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have two versions of Python on my MacBook, python3.7 and python3.8. I've been using python3.7 to do my data analysis. I used $python3.7 -m jupyter notebook to open Jupyter Notebook all the time. I stopped for a while; last week I typed the same command in the terminal, and the notebook was opened by python3.8. I tried different ways; it seems python3.7 isn't functional at all, and Jupyter Notebook can only be opened by python3.8.\nI have two questions:\n\nHow to open Jupyter Notebook again with python3.7?\nCan I use python3.8 to keep analyzing the data generated in python3.7? I tried to do some analysis in python3.8 and it seems fine so far, but I'm not sure about it as I'm still a beginner.\n\nThank you in advance for your nice help!\nYi","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":72,"Q_Id":66180647,"Users Score":0,"Answer":"Get rid of Python 3.7; it had known issues. I do not know why your Jupyter Notebook stopped working suddenly, but you can use python3.8. 
I doubt that you are using any processes that got deprecated between the two major releases.","Q_Score":0,"Tags":"python,jupyter-notebook","A_Id":66180783,"CreationDate":"2021-02-13T00:10:00.000","Title":"python3.7 is not able to open jupyter notebook","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I installed the dlib package on my Mac M1 through anaconda, and when I type \"conda list\" in the terminal, I can see \"dlib\" being installed. But when I type \"dlib --version\" in the Terminal, I have this message \"zsh: command not found: dlib\". Furthermore, when I open Spyder, or Jupyter, and when I try to import dlib, the Kernel crashes.\nCan anyone help me with this? I have been struggling with this issue for so long...\nThanks for the support!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":885,"Q_Id":66180798,"Users Score":0,"Answer":"I had the same problem. I uninstalled anaconda, reinstalled it, and I used pip install (cmake, boost, dlib) instead of Conda install. Both dlib and face_recognition worked.","Q_Score":0,"Tags":"python,dlib","A_Id":67425559,"CreationDate":"2021-02-13T00:34:00.000","Title":"Dlib installed but cannot open it- Mac M1","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have used the software Thonny to send programs to my raspberry pi pico. I am trying to make a specific program auto run when my pico is plugged in. 
At the moment another program on the Pico auto-runs, but I want a different program to run instead.","AnswerCount":1,"Available Count":1,"Score":0.537049567,"is_accepted":false,"ViewCount":846,"Q_Id":66183596,"Users Score":3,"Answer":"Name the program you want to run main.py","Q_Score":3,"Tags":"micropython,thonny,raspberry-pi-pico","A_Id":67217062,"CreationDate":"2021-02-13T09:27:00.000","Title":"How can you make a micropython program on a raspberry pi pico autorun?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am following a tutorial for Django and at some point it says to run .\/manage.py migrate and says that if I get an error, I should fix it.\nI get an error, but when I search for it I can't find .\/manage.py specifically; I keep finding mixes of that and python manage.py.\nBut when I run python3 manage.py everything works fine and I don't get errors.\nSo are .\/manage.py and python(3) manage.py different or not?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":119,"Q_Id":66184235,"Users Score":2,"Answer":"No, they are the same.\nThe error might be caused by your system configuration, for example when the system does not know how to run a Python script, or when the script does not have the \"executable\" permission set (chmod a+x manage.py might fix this).","Q_Score":1,"Tags":"python,django","A_Id":66184264,"CreationDate":"2021-02-13T10:47:00.000","Title":"Django: What is the difference between .\/manage.py [...] 
and python manage.py [...]","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have been working on a project which requires me to get the icon from the icon theme used on Linux, so that I can use it with the Gtk Pixbuf, like how gnome-system-monitor displays an icon for every process. Any ideas about how to do this?\nI am using Python with Gtk on PopOS 20.10.","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":200,"Q_Id":66184610,"Users Score":1,"Answer":"Gio.AppInfo in the Gtk library stack is a good point to start.\nIf you are looking for the approach used by gnome-system-monitor, then the prettytable.c file will be the one you need to check.\nThere is one more approach: scanning the \/usr\/share\/applications\/ directory and creating a file monitor for it. The icons of all the applications that are in the menu can be found here.","Q_Score":0,"Tags":"python,linux,user-interface,process,gtk","A_Id":69208718,"CreationDate":"2021-02-13T11:32:00.000","Title":"Get icon of a process (like in gnome-system-monitor) to be used with Gtk and python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm training a model with a 4 day look back and a 4 day future forecast. The last 4 days act as features for the next days.\nIn that case, if I have x_test as [[1,2,3,4],[5,6,7,8]] and y_test as [[0.1,0.2,0.3,0.4],[0.5,0.6,0.7,0.8]],\nand if we do model.predict(x_test[0]), the result y(hat) needs to be compared with y[1].\nSo how is model.evaluate() doing this comparison? 
If we compare y(hat)[0] with y[0], that would be wrong, right?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":562,"Q_Id":66184612,"Users Score":0,"Answer":"As you mentioned, if we give values for consecutive days, then we can use the second set of 4 values to evaluate the model's predictions for the first set of 4 (this is called the rolling-window validation method).\nBut since your dataset is in the form input = [first 4 values] \/ label = [next 4 values], we do not need to evaluate the result of x_test[0] against y[1], because the actual y value for x_test[0] is in y[0].","Q_Score":0,"Tags":"python,tensorflow,deep-learning,time-series,lstm","A_Id":66185823,"CreationDate":"2021-02-13T11:32:00.000","Title":"How does model.evaluate() work in tensorflow?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"So I'm starting to study RNNs, particularly LSTMs, and there is a part of the theory that I just don't understand.\nWhen you stack LSTM cells, I see how everybody detaches the hidden state from history, but this makes no sense to me; aren't LSTMs supposed to use hidden states from history to make better predictions?\nI read the documentation but it is still not clear to me, so any explanation is welcome.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":853,"Q_Id":66187443,"Users Score":3,"Answer":"You got it right: the hidden state in LSTMs is there to serve as memory. But the question arises: are we supposed to learn it? 
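The point of the model.evaluate() answer above is that evaluation pairs x_test[i] with y_test[i]: the dataset already encodes the look-back\/forecast alignment, so no shifting is needed. A pure-Python sketch of a metric computed over aligned pairs (MAE is used here purely as an illustration; Keras actually uses whatever loss and metrics the model was compiled with):

```python
x_test = [[1, 2, 3, 4], [5, 6, 7, 8]]
y_test = [[0.1, 0.2, 0.3, 0.4], [0.5, 0.6, 0.7, 0.8]]

def evaluate(predict, xs, ys):
    """Mean absolute error over pairs (xs[i], ys[i]) -- never (xs[i], ys[i+1])."""
    total, count = 0.0, 0
    for x, y_true in zip(xs, ys):  # index-aligned pairing
        y_pred = predict(x)
        total += sum(abs(p - t) for p, t in zip(y_pred, y_true))
        count += len(y_true)
    return total / count

# A fake "model" that predicts each aligned target exactly, so the error is 0.
perfect = dict(zip(map(tuple, x_test), y_test))
print(evaluate(lambda x: perfect[tuple(x)], x_test, y_test))  # 0.0
```

If evaluation really compared y(hat)[0] against y[1], the perfect predictor above would score a nonzero error, which is the contradiction the answer is pointing at.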
No, the hidden state isn\u2019t supposed to be learned, so we detach it to let the model use those values but not compute gradients through them.\nIf you don't detach, the gradients will get really big.","Q_Score":0,"Tags":"python,lstm,recurrent-neural-network","A_Id":66187921,"CreationDate":"2021-02-13T16:37:00.000","Title":"LSTM- detach the hidden state","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm watching a video on YouTube that is telling me to type cd documents and then, within that folder, type cd python to connect Python, but every time I try this it says \"the system cannot find the path specified\". I don't understand why it is saying this, because I can see that python is in my documents folder.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":27,"Q_Id":66188745,"Users Score":0,"Answer":"If you are using Windows CMD or Powershell, use the command py instead of python.\nExample: py myscript.py (or py.exe .\\myscript.py also works)","Q_Score":0,"Tags":"python-3.x","A_Id":66188811,"CreationDate":"2021-02-13T18:54:00.000","Title":"connecting python to the command prompt","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm watching a video on YouTube that is telling me to type cd documents and then, within that folder, type cd python to connect Python, but every time I try this it says \"the system cannot find the path specified\". I don't understand why it is saying this, because I can see that python is in my documents folder.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":27,"Q_Id":66188745,"Users Score":0,"Answer":"if you want to run the 
python code which is located in your documents folder, then open cmd and type \"cd documents\"; if your python code is in the same folder, then type \"python 'filename'.py\" and press Enter to run it","Q_Score":0,"Tags":"python-3.x","A_Id":66188829,"CreationDate":"2021-02-13T18:54:00.000","Title":"connecting python to the command prompt","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"Thanks for your time.\nBasically, what I'm trying to do is to set an object from a database list (.csv) and if I get a ValueError I would like to set that field value and keep adding data\n\nValueError: Field 'age' expected a number but got 'M'\n\nI'm quite sure there's a doc for this, but I've been reading for some time and haven't found it.","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":45,"Q_Id":66191418,"Users Score":1,"Answer":"How about filtering the data once you receive it? For example, let's say the age field expects an Integer; before you save it, you could check whether the data is an Integer.
But I also think the most efficient way is to use try\/except.","Q_Score":0,"Tags":"python,python-3.x,django,django-models,django-rest-framework","A_Id":66193705,"CreationDate":"2021-02-14T00:46:00.000","Title":"Django how to change a value of a field that received an error","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Thanks for your time.\nBasically, what I'm trying to do is to set an object from a database list (.csv) and if I get a ValueError I would like to set that field value and keep adding data\n\nValueError: Field 'age' expected a number but got 'M'\n\nI'm quite sure there's a doc for this, but I've been reading for some time and haven't found it.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":45,"Q_Id":66191418,"Users Score":0,"Answer":"Using a simple 'try', 'except' block should work where you have a default value to use in 'except' before saving.","Q_Score":0,"Tags":"python,python-3.x,django,django-models,django-rest-framework","A_Id":66191475,"CreationDate":"2021-02-14T00:46:00.000","Title":"Django how to change a value of a field that received an error","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"N.B.: Please do not lock\/delete this question as I did not find any relevant answer.\nI need to convert a .py file to .exe. I have tried pyinstaller, py2exe, and auto_py_to_exe. Unfortunately, the output files are very big. For example, if I simply convert a Python file that just contains print('Hello world!'), the output folder becomes 22 MB.
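Both answers above describe the same pattern: wrap the conversion in try\/except and fall back to a default value, so one bad row doesn't abort the whole import. A minimal stand-alone sketch of that idea (the CSV layout, the `age` field, and the default of 0 are assumptions, since the asker's model isn't shown):

```python
import csv
import io

DEFAULT_AGE = 0  # assumed fallback when the CSV value is not a number

def load_rows(csv_text):
    """Parse CSV rows, replacing unparseable 'age' values with a default
    instead of raising, so loading continues past bad rows."""
    rows = []
    for record in csv.DictReader(io.StringIO(csv_text)):
        try:
            record["age"] = int(record["age"])
        except ValueError:
            record["age"] = DEFAULT_AGE  # keep adding data despite the bad value
        rows.append(record)
    return rows

data = "name,age\nAlice,34\nBob,M\n"
print(load_rows(data))
# → [{'name': 'Alice', 'age': 34}, {'name': 'Bob', 'age': 0}]
```

In the Django context, each cleaned `record` could then be handed to the model (e.g. something like `Model.objects.create(**record)`), but that part depends on the asker's actual schema.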
The --exclude-module option does not reduce the size much.\nIf I write the same code in C and compile it with Dev-C++, the file size will be below 1 MB.\nSo, is there any way to convert the .py file to a .exe file with a smaller file size?","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":614,"Q_Id":66191542,"Users Score":2,"Answer":"As mentioned by Jan in the comments, pyinstaller bundles everything together that is needed to run the program. So realistically the only way to make the file smaller is to make sure the target computer has a python environment. Long story short, if you must use a .exe then it is not going to get any smaller unless you re-write it so it needs fewer external libraries etc.","Q_Score":0,"Tags":"python,python-3.x,pyinstaller,py2exe","A_Id":66193322,"CreationDate":"2021-02-14T01:11:00.000","Title":"How to convert .py file to .exe file with a smaller file size?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"How to import postgresql database (.sql) file from AmazonS3 to AWS RDS?\nI am very new to AWS, and Postgresql.\nI have created a database using PgAdmin4 and added my data to the database.\nI have created a backup file of my database, i.e. a .SQL file.\nI have created a database instance on AWS RDS.\nI have uploaded my database file and several documents to an S3 bucket.\nI tried to integrate AWS S3 and RDS database using AWS Glue, but nothing is working for me.
I am not able to figure out how to integrate S3 and RDS for importing and exporting data from S3 to RDS and vice versa.\nCan you please tell me how I can set up RDS and S3?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":510,"Q_Id":66192696,"Users Score":0,"Answer":"What you can do is install a pure Python library to interact with RDS and run the commands via that library, just like you would do with any normal Python program. It is possible for you to add libraries like this to run in your Glue job. In your case pg8000 would work like a charm","Q_Score":0,"Tags":"python-3.x,postgresql,amazon-s3,amazon-rds,aws-glue","A_Id":66200464,"CreationDate":"2021-02-14T05:24:00.000","Title":"How to import postgresql database (.sql) file from AmazonS3 to AWS RDS?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am solving a stochastic differential equation and I have a function that contains an algorithm to solve it. So I have to call that function at each time step (it is similar to Runge-Kutta's method but with a random variable), then I have to solve the equation many times (since the solution is random) to be able to make averages with all the solutions.
That is why I want to know how to call this function in each iteration in the most efficient way possible.","AnswerCount":4,"Available Count":2,"Score":0.049958375,"is_accepted":false,"ViewCount":647,"Q_Id":66196167,"Users Score":1,"Answer":"The best way to implement a function on an iterable is to use the map function.\nSince map is written in C and is highly optimized, its internal implied loop can be more efficient than a regular Python for loop.","Q_Score":2,"Tags":"python,performance,function,loops,call","A_Id":66196437,"CreationDate":"2021-02-14T14:03:00.000","Title":"How can I efficiently call a function in a loop with python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am solving a stochastic differential equation and I have a function that contains an algorithm to solve it. So I have to call that function at each time step (it is similar to Runge Kutta's method but with a random variable), then I have to solve the equation many times (since the solution is random) to be able to make averages with all the solutions . 
That is why I want to know how to call this function in each iteration in the most efficient way possible.","AnswerCount":4,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":647,"Q_Id":66196167,"Users Score":3,"Answer":"Some ways to optimize function calls:\n\nif the function arguments and results are always the same, move the function call out of the loop\nif some function arguments are repeated and the results for a given set of arguments are the same, use memoize or lru_cache\n\nHowever, since you say that your application is a variation on Runge-Kutta, then neither of these is likely to work; you are going to have varying values of t and the modeled state vector, so you must call the function within the loop, and the values are constantly changing.\nIf your algorithm is slow, then it won't matter how efficient you make the function calls. Look at optimizing the function to make it run faster (or convert to Cython) - the actual call itself is not the bottleneck.\nEDIT: I see that you are running this multiple times, to determine a range of values given the stochastic nature of this simulation. In that case, you should use multiprocessing to run multiple simulations on separate CPU cores - this will speed things up some.","Q_Score":2,"Tags":"python,performance,function,loops,call","A_Id":66198269,"CreationDate":"2021-02-14T14:03:00.000","Title":"How can I efficiently call a function in a loop with python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am developing a robotics project which sends a livestream of images. The images are all published to the same topic. 
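The memoization suggestion from the accepted answer can be illustrated with functools.lru_cache; the quadratic `expensive` function below is a made-up stand-in for a costly per-step computation whose result depends only on its argument:

```python
from functools import lru_cache

calls = 0  # counts how many times the function body actually runs

@lru_cache(maxsize=None)
def expensive(x):
    """Stand-in for a costly step whose result depends only on x."""
    global calls
    calls += 1
    return x * x

# Repeated arguments hit the cache instead of re-running the body.
results = [expensive(t % 10) for t in range(1000)]
print(calls)  # → 10 (only ten distinct arguments were ever computed)
```

As the answer notes, this only pays off when arguments repeat; for a stochastic solver where `t` and the state change every step, optimizing the function body itself (or distributing independent simulation runs across processes with `multiprocessing`) is the more effective lever.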
What I am finding is that a backlog is created and a delay starts to form between images being sent for publish and them actually being published.\nI am assuming there is some form of internal buffer \/ queueing system in PAHO MQTT which is causing this.\nGiven the nature of the project I am not precious about each image being published, ideally I'd be able to drop any messages that are waiting to be published to a certain topic and re-publish new content. Does anyone know if this is possible, and if so how?\nThank you","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":246,"Q_Id":66198205,"Users Score":2,"Answer":"No, this is not possible.\nThe only thing that would cause messages to back up on the client is if you are publishing them quicker than the client can send them to the broker, which under normal circumstances will be a product of the network speed between the client and the broker.\nThe only other thing that might have an impact would be if you are manually running the network loop and you are not calling it often enough.","Q_Score":1,"Tags":"python,mqtt,communication,paho","A_Id":66198616,"CreationDate":"2021-02-14T17:36:00.000","Title":"Paho MQTT Python - Clear topic queue if new message published","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have an array of numbers\na = [440, 320, 650]\nI am trying to write a .wav file that writes those frequencies in order. But I don't know if scipy.io.wavfile is able to write an array of frequencies to a wav file. 
All I can do right now is something like\nwavfile.write('Sine.wav', rate, a[0])\nI am thinking to do something like this though.\nfor x in range(0, len(a)):\n#wavfile.addFrequency('Sine.wav', rate, a[x])","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":178,"Q_Id":66200524,"Users Score":0,"Answer":"\"In order\" doesn't sound very clear to me. I guess you would like to mix those sine waves. For that you must make a single wave with the 3 sines by summation of their amplitude values. For each sample, three values must be added (one for each sine) taking care that the result never overflows -1.0 or 1.0 (for float values, for 16 bit integers the range would be -32768 to 32767 instead).\nNow if you plane to render the three waves successively instead, you have to determine the duration of each segment and take care that the junction between two waves is done at amplitude zero to prevent numeric clicks.","Q_Score":2,"Tags":"python,scipy,wav","A_Id":66200647,"CreationDate":"2021-02-14T21:56:00.000","Title":"Is there a way to write multiple frequencies to a .wav file?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"My configuration (very basic):\n\n\n settings.py\n AWS_S3_REGION_NAME = 'eu-west-3'\n AWS_S3_FILE_OVERWRITE = False\n # S3_USE_SIGV4 = True # if used, nothing changes\n # AWS_S3_SIGNATURE_VERSION = \"s3v4\" # if used, nothing changes\n AWS_ACCESS_KEY_ID = \"xxx\"\n AWS_SECRET_ACCESS_KEY = \"xxx\"\n AWS_STORAGE_BUCKET_NAME = 'xxx'\n # AWS_S3_CUSTOM_DOMAIN = f'{AWS_STORAGE_BUCKET_NAME}.s3.amazonaws.com' # if used, no pre-signed urls\n AWS_DEFAULT_ACL = 'private'\n AWS_S3_OBJECT_PARAMETERS = {'CacheControl': 'max-age=86400'}\n AWS_LOCATION = 'xxx'\n DEFAULT_FILE_STORAGE = 'storages.backends.s3boto3.S3Boto3Storage'\n \n INSTALLED_APPS = [\n ...,\n 
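The "render the waves successively" option from the answer can be sketched with NumPy; the 0.5 s segment duration, 0.8 amplitude, and fade length are arbitrary choices, and the final write call is left commented out:

```python
import numpy as np

rate = 44100
freqs = [440, 320, 650]   # the frequencies from the question
duration = 0.5            # seconds per tone (assumed)

segments = []
for f in freqs:
    t = np.arange(int(rate * duration)) / rate
    tone = 0.8 * np.sin(2 * np.pi * f * t)
    # Short linear fade-in/out so each junction sits at zero amplitude,
    # avoiding the clicks mentioned in the answer.
    fade = min(256, tone.size // 2)
    tone[:fade] *= np.linspace(0, 1, fade)
    tone[-fade:] *= np.linspace(1, 0, fade)
    segments.append(tone)

signal = np.concatenate(segments)
# scipy.io.wavfile.write('Sine.wav', rate, signal.astype(np.float32))
print(signal.size, float(np.abs(signal).max()))
```

Because the samples are floats in [-1.0, 1.0], `wavfile.write` would store them as a 32-bit float WAV; converting to int16 would instead require scaling by 32767.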
'storages'\n ]\n \n models.py\n class ProcessStep(models.Model):\n icon = models.FileField(upload_to=\"photos\/process_icons\/\")\n\n\nWhat I get:\n\nPre-signed url is generated (both in icon.url and automatically on admin page)\nPre-signed url response status code = 403 (Forbidden)\nIf opened, SignatureDoesNotMatch error. With text: The request signature we calculated does not match the signature you provided. Check your key and signing method.\n\nTried:\n\nchanging access keys (both root and IAM)\nchanging bucket region\ncreating separate storage object for icon field (same error SignatureDoesNotMatch)\nchanging django-storages package version (currently using the latest 1.11.1)\n\nOpinion:\n\nboto3 client generate_presigned_url returns url with invalid signature\n\nQuestions:\n\nWhat should I do?\nWhy do I get the error?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":313,"Q_Id":66200605,"Users Score":-2,"Answer":"Patience is a virtue!\nOne might wait for 1 day for everything to work","Q_Score":0,"Tags":"django,amazon-s3,acl,python-django-storages","A_Id":66224777,"CreationDate":"2021-02-14T22:09:00.000","Title":"django storages AWS S3 SigVer4: SignatureDoesNotMatch","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Is it possible to upload and manipulate a photo in the browser with GitHub-pages? The photo doesn't need to be stored else than just for that session.\nPS. I'm new to this area and I am using python to manipulate the photo.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":25,"Q_Id":66200998,"Users Score":1,"Answer":"GitHub pages allows users to create static HTML sites. 
This means you have no control over the server which hosts the HTML files - it is essentially a file server.\nEven if you did have full control over the server (e.g. if you hosted your own website), it would not be possible to allow the client to run Python code in the browser since the browser only interprets JavaScript.\nTherefore the most easy solution is to re-write your code in JavaScript.\nFailing this, you could offer a download link to your Python script, and have users trust you enough to run it on their computer.","Q_Score":1,"Tags":"python,github-pages","A_Id":66201055,"CreationDate":"2021-02-14T23:08:00.000","Title":"Can I manipulate an image in the browser with github pages?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I\u2019m working on a small program that generates animations and, for most parts, it\u2019s working as expected. The only place where I\u2019m facing a problem is when the midi onset\u2019s are of incredibly small duration and my animation then goes extremely out of sync.\nA basic outline of my process is this:\n\nFind the difference between the current onset and the onset that follows it (in seconds).\n\nGenerate n frames for the current onset where n is round(difference * frame rate)\n\n\nBut when too many small duration onset\u2019s are played together, the entire animation that follows it goes out of sync because all the minimal time lags caused rounding n in step 2 sum up.\nIs there a better way to tackle this problem where my animation would be in sync regardless of the changes in onset\u2019s?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":26,"Q_Id":66203042,"Users Score":0,"Answer":"In step 1, the difference is computed based on unrounded times. 
But you have to use the actual time when the current onset becomes visible. This is the sum of all previous n, divided by the frame rate.","Q_Score":0,"Tags":"python-3.x,midi,midi-instrument","A_Id":66203159,"CreationDate":"2021-02-15T05:16:00.000","Title":"Synchronising animation with midi data","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a python gui project which having a database. i want to convert my project into .exe\nHow the end user will get database or how can convert my entire project along with database(mysql).\nMy requirement is end user want to have all the things along with db in a single .exe file.\nNote: application is for windows and database will be located locally.\nThanks in advance for your response.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":155,"Q_Id":66203343,"Users Score":0,"Answer":"Use py2exe or pyinstaller.\n\n$ pip install pyinstaller\n\n\n$ pip install py2exe","Q_Score":0,"Tags":"python,user-interface,tkinter","A_Id":67026187,"CreationDate":"2021-02-15T05:57:00.000","Title":"How to convert my Python Project (GUI With Tkinter) into .exe where it having database dependence?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Step 1:\nI create a Pycharm project and I create a Github repository from it.\nOR\nI create a Github repository and I pull it to create a Pycharm project.\n.\nStep 2:\nI copy\/paste files that I want to use from another project into my new PyCharm project.\nTheses files are available from PyCharm but when I commit and push the project, they aren't push to Github.\nPS: if I create theses files from the PyCharm 
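The answer's fix (derive each onset's frame count from the cumulative unrounded time, so per-onset rounding errors cannot accumulate) might look like this; the 30 fps rate and the onset times are made-up values:

```python
fps = 30
onset_times = [0.0, 0.034, 0.051, 0.07, 1.0, 2.5]  # hypothetical onsets (seconds)

frames_emitted = 0
frame_counts = []
for start, end in zip(onset_times, onset_times[1:]):
    # Round the *cumulative* time, not each individual difference:
    # the drift from rounding one onset is absorbed by the next.
    target_total = round(end * fps)
    n = target_total - frames_emitted
    frame_counts.append(n)
    frames_emitted += n

print(frame_counts, frames_emitted)  # → [1, 1, 0, 28, 45] 75
```

Note the very short onsets may legitimately get 0 frames, but the running total always matches `round(elapsed_time * fps)`, so the animation stays in sync afterwards.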
interface and copy\/paste the code into the file, it works like a charm.\n.\nQuestion: How do I add these \"manually added files\" to the Github repository?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":184,"Q_Id":66205746,"Users Score":0,"Answer":"Right click on the file -> Git -> Add\nCtrl+Alt+A","Q_Score":0,"Tags":"python,pycharm","A_Id":66205796,"CreationDate":"2021-02-15T09:44:00.000","Title":"how to push from Pycharm to Github files that I copy\/paste on the project?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"C:\\Users\\Dell>pip install git-review\nFatal error in launcher: Unable to create process using '\"c:\\python39\\python.exe\" \"C:\\Python39\\Scripts\\pip.exe\" install git-review': The system cannot find the file specified\nI am getting this error. I have tried many ways to resolve it:\nby installing pip and Python again,\nand by trying older questions about this error, but I was unable to solve it.","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":381,"Q_Id":66206292,"Users Score":0,"Answer":"As @programandoconro mentioned in comment\npython -m pip install --upgrade --force-reinstall pip and then python -m pip install git-review\nworked for me","Q_Score":1,"Tags":"python,python-3.x,pip,git-review","A_Id":66206611,"CreationDate":"2021-02-15T10:25:00.000","Title":"Git review is not installing using PIP","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I was originally running Tensorflow using PyCharm.\nIn PyCharm, the same phrase as the title did not appear.\nBut after I switched to VS Code and installed the Python extension,\nwhen I write and
execute import tensorflow as tf, the error like the title appears repeatedly.\n\nImportError: Could not load dynamic library 'cudart64_110.dll'\n\nConsidering that there was no problem in PyCharm, it does not seem to be an environmental variable problem.\nWhen I type the same command that was executed in VS Code in the command prompt window, another phrase appears,\n\n\"Connection failed because the target computer refused to connect.\"\n\nMy OS: Windows 10\nI am using Anaconda, and I created a virtual environment.\nvscode ver : 1.53.2\ntensorflow ver : 2.4.1\nCUDA : 11.2\ncudnn : 8.1","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":12553,"Q_Id":66206835,"Users Score":0,"Answer":"I copied \"cudart64_110.dll\" to the CUDA\/v11.2\/bin folder and it was resolved.","Q_Score":1,"Tags":"python,tensorflow,visual-studio-code","A_Id":66288429,"CreationDate":"2021-02-15T11:02:00.000","Title":"ImportError: Could not load dynamic library 'cudart64_110.dll'","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have installed black Python code formatter using 'pip install black' in my virtual environment. But when I run '''python -m black output.py --check''', output is like this ''' \/usr\/bin\/python: No module named black. '''. 
How can I correct this error?\nI'm getting the same error outside the virtual environment as well.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":2134,"Q_Id":66208353,"Users Score":1,"Answer":"As you have mentioned, you installed black into a virtual env.\nBut your output says \/usr\/bin\/python, which probably means that the virtual environment is not activated.\nTry activating it with source {YOUR_VENV_ROOT}\/bin\/activate, if you are using python-virtualenv, or activate it by other means, and try again.\nYou can also access your venv by executing your local python executable: {YOUR_VENV_ROOT}\/bin\/python -m black output.py --check","Q_Score":1,"Tags":"python,formatter","A_Id":66208471,"CreationDate":"2021-02-15T12:49:00.000","Title":"I have installed \"black\" in virtual environment using pip, but when I run python -m black my_module.py , it says No module named black","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I started looking into Numpy using 'Python for Data Analysis'. Why is the array dimension for arr2d \"2\" instead of \"3\"? Also, why is the dimension for arr3d \"3\" instead of \"2\"?\nI thought the dimension of the array is based on the number of rows?
Or does this not apply to higher-dimensional and multidimensional arrays?\narr2d = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])\narr2d.shape\nOutput: (3, 3)\narr2d.ndim\nOutput: 2\narr3d = np.array([[[1, 2, 3], [4, 5, 6]], [[7, 8, 9], [10, 11, 12]]])\narr3d.shape\nOutput: (2, 2, 3)\narr3d.ndim\nOutput: 3","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":107,"Q_Id":66210485,"Users Score":1,"Answer":"The dimension of the array is not based on the number of rows; it is based on the nesting depth of the brackets, i.e. the [] that you pass to the np.array() method.\nSee:\narr2d = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])\nIn arr2d there are 2 levels of brackets: 2 opening brackets ([[) and 2 closing brackets (]]), so it is a 2D array of shape (3,3), i.e. 3 rows and 3 columns.\nSimilarly:\narr3d = np.array([[[1, 2, 3], [4, 5, 6]], [[7, 8, 9], [10, 11, 12]]])\nIn arr3d there are 3 levels of brackets: 3 opening brackets ([[[) and 3 closing brackets (]]]), so it is a 3D array of shape (2,2,3), i.e. it has 2 blocks of 2 rows and 3 columns.","Q_Score":0,"Tags":"python,arrays,numpy","A_Id":66212428,"CreationDate":"2021-02-15T15:08:00.000","Title":"Understanding Numpy dimensions of arrays","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a csv file which has a column named population. In this CSV file the values of this column are shown as decimal (float), e.g. 12345.00. I have converted the whole of this file to ttl RDF format, and the population literal is shown the same way, i.e. 12345.0, in the ttl file. I want it to show as an integer (whole number), i.e. 12345. Do I need to convert the data type of this column, or what should I do?
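A quick check of the answer's point that ndim counts nesting levels (the number of axes), not rows, and that shape lists the length of each axis:

```python
import numpy as np

# ndim is the number of axes; shape has one entry per axis,
# so ndim is always len(shape).
arr2d = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
arr3d = np.array([[[1, 2, 3], [4, 5, 6]], [[7, 8, 9], [10, 11, 12]]])

print(arr2d.shape, arr2d.ndim)  # → (3, 3) 2
print(arr3d.shape, arr3d.ndim)  # → (2, 2, 3) 3
```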
Also, I would ask how can I check the data type of a column of a dataFrame in python?\n(A beginner in python)- Thanks","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":66,"Q_Id":66212917,"Users Score":0,"Answer":"csv_data['theColName'] = csv_data['theColName'].fillna(0)\ncsv_data['theColName'] = csv_data['theColName'].astype('int64') worked and the column is successfully converted to int64. Thanks everybody","Q_Score":0,"Tags":"python,pandas,dataframe,type-conversion","A_Id":66215372,"CreationDate":"2021-02-15T17:45:00.000","Title":"data type conversion in dataFrame","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Suppose I have two data frames. One with employee IDs. And one with employee IDs and a Boolean value that indicates if they were given a raise when they asked (they can ask multiple times).\n\n\n\n\nID\n\n\n\n\n5\n\n\n8\n\n\n9\n\n\n22\n\n\n\n\n\n\n\nID\nRaise\n\n\n\n\n5\nTrue\n\n\n5\nFalse\n\n\n5\nTrue\n\n\n8\nTrue\n\n\n9\nTrue\n\n\n22\nFalse\n\n\n\n\nHow can I create a dataframe that merges employee IDs and whether they were given a raise (regardless of how many times they asked)? Like the following.\n\n\n\n\nID\nRaise\n\n\n\n\n5\nTrue\n\n\n8\nTrue\n\n\n9\nTrue\n\n\n22\nFalse\n\n\n\n\nWhenever I try to merge normally, extra rows are created due to multiple of the same ID.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":62,"Q_Id":66214679,"Users Score":0,"Answer":"As @Paul H said, you need an aggregation on the second dataframe\ndf2.groupby(\"ID\")[\"Raise\"].any()\nThen you can merge with the first one using ID.","Q_Score":0,"Tags":"python,pandas,dataframe","A_Id":66214965,"CreationDate":"2021-02-15T20:02:00.000","Title":"Merging Dataframes and getting extra rows. 
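The fillna + astype recipe from the answer, as a small self-contained example; the column values here are invented stand-ins for the population data:

```python
import pandas as pd

# A column read from CSV often ends up as float, e.g. because of NaNs.
csv_data = pd.DataFrame({"population": [12345.0, 678.0, None]})

# NaN cannot be represented in an int64 column, so fill it first,
# then cast the whole column to integer.
csv_data["population"] = csv_data["population"].fillna(0)
csv_data["population"] = csv_data["population"].astype("int64")

print(csv_data["population"].dtype)     # → int64
print(csv_data["population"].tolist())  # → [12345, 678, 0]
```

Checking a column's dtype (the other part of the question) is just `df["col"].dtype`, or `df.dtypes` for the whole frame.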
(Python\/Pandas)","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've created Python script which makes a GET request to infrastructure monitoring tool to fetch a json object of problems that occurred in the last 30 days. All these problems have unique ids.\nAfter that, it makes a POST request to push this json object record by record to another API end-point.\nI want to attach this script to cron job to execute it every 5 minutes, but I could not figure out a way to only get or post previously not sent events. So it always fetches a list of problems in the last 30 days and it pushes all of them.\nI thought about writing this json to local file and comparing it with the latest request, but then the previously fetched records also becomes considered new. So I am stuck and couldn't find a similarly asked question. 
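The aggregation-then-merge approach from the answer, using the question's own data: collapsing to one row per ID first means the merge cannot create duplicate rows.

```python
import pandas as pd

ids = pd.DataFrame({"ID": [5, 8, 9, 22]})
raises = pd.DataFrame({"ID": [5, 5, 5, 8, 9, 22],
                       "Raise": [True, False, True, True, True, False]})

# One row per ID: True if *any* of that employee's requests succeeded.
per_id = raises.groupby("ID", as_index=False)["Raise"].any()

# Now each ID matches at most one row, so no extra rows appear.
merged = ids.merge(per_id, on="ID", how="left")
print(merged)
```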
I am open to all suggestions as long as things do not get too complicated :)","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":51,"Q_Id":66215476,"Users Score":0,"Answer":"You need some sort of persistence.\nYou can save a file with the timestamp of the most recent event.\nThen, the next time the script executes, it only POSTs events with a timestamp later than the one in the saved file, and then updates the saved file.\nIf the API to which you're POSTing has a way to fetch the last event you sent to it (NOT the last event it received from any client), you don't even need the file, the API is your persistence.","Q_Score":0,"Tags":"python,python-3.x,api,http,python-requests","A_Id":66218203,"CreationDate":"2021-02-15T21:14:00.000","Title":"Python API GET\/POST Unique JSON Records","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've created Python script which makes a GET request to infrastructure monitoring tool to fetch a json object of problems that occurred in the last 30 days. All these problems have unique ids.\nAfter that, it makes a POST request to push this json object record by record to another API end-point.\nI want to attach this script to cron job to execute it every 5 minutes, but I could not figure out a way to only get or post previously not sent events. So it always fetches a list of problems in the last 30 days and it pushes all of them.\nI thought about writing this json to local file and comparing it with the latest request, but then the previously fetched records also becomes considered new. So I am stuck and couldn't find a similarly asked question. 
I am open to all suggestions as long as things do not get too complicated :)","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":51,"Q_Id":66215476,"Users Score":0,"Answer":"Simplest way would be to keep a local file in your preferred format that tracks the unique IDs of all items you've pushed, ever. Load that in with your cron'd script, and then do your pull. Compare each item in the pull against your cached values, and generate a list of new items to push. Add these items to your cached values variable, push the items, and then write the entirety of your cached values out to file. A potentially faster (in terms of I\/O, if your file gets large) way of doing this would be to keep all the values in the file, one per line, and then just open the file with \"append\" mode and write out only the new values to it, one per line.","Q_Score":0,"Tags":"python,python-3.x,api,http,python-requests","A_Id":66215565,"CreationDate":"2021-02-15T21:14:00.000","Title":"Python API GET\/POST Unique JSON Records","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to build a simple project where I need to access a public website, input a single string of text, and get the output from the result of \"submitting\" the string of text.\nI know this can be done with Selenium but I was wondering if it's possible to do this silently. 
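The file-cache idea from both answers, sketched with the standard library; the `sent_ids.json` filename and the `{"id": ...}` record shape are assumptions about the monitoring tool's output:

```python
import json
import os

STATE_FILE = "sent_ids.json"  # hypothetical cache of already-pushed IDs

if os.path.exists(STATE_FILE):
    os.remove(STATE_FILE)  # start clean for this demo run

def load_sent_ids():
    """Read the set of IDs pushed on previous cron runs (empty on first run)."""
    if os.path.exists(STATE_FILE):
        with open(STATE_FILE) as f:
            return set(json.load(f))
    return set()

def filter_and_record(problems):
    """Return only never-pushed problems, then persist the updated ID set."""
    sent = load_sent_ids()
    new = [p for p in problems if p["id"] not in sent]
    sent.update(p["id"] for p in new)
    with open(STATE_FILE, "w") as f:
        json.dump(sorted(sent), f)
    return new

first = filter_and_record([{"id": 1}, {"id": 2}])
second = filter_and_record([{"id": 2}, {"id": 3}])  # id 2 already sent
print(len(first), len(second))  # → 2 1
```

Each 5-minute cron invocation would call `filter_and_record` on the fresh 30-day pull and POST only what it returns; because the state lives on disk, it survives between runs.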
The webpage does not have an API, it's just a database where you input a single text of string, query the result and display it.\nIs this possible to do at all with Python?, again hopefully silently where when this runs it won't take over the monitor and can potentially and eventually be done on an arduino?\nThanks in advance,","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":31,"Q_Id":66217790,"Users Score":0,"Answer":"Resolution:\nI was able to do this with Selenium using headless (previously: phantom) options.\nWorks like a charm, no UI being invaded. Querying what I need takes 2.5 seconds to get results.","Q_Score":0,"Tags":"python,python-3.x,web-scraping,automation","A_Id":66220715,"CreationDate":"2021-02-16T02:03:00.000","Title":"Py - Possible to Input Text into Web page and Extract Result Silently?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I want to create a file in a python3 program, in which I want to store some information that I don't want users to see or manipulate. However, in execution time I need to read and modify its content, being sure that users cannot modify the content of the file. 
How can I achieve this?","AnswerCount":3,"Available Count":1,"Score":0.1325487884,"is_accepted":false,"ViewCount":2296,"Q_Id":66218337,"Users Score":2,"Answer":"All information stored on end-user devices can eventually be manipulated.\nOne option is storing the data you don't want users to manipulate on a web-server and retrieve\/update it when needed.\nThe downside of this approach is that your code cannot work offline.","Q_Score":1,"Tags":"python,encryption,cryptography,pycrypto","A_Id":66218426,"CreationDate":"2021-02-16T03:30:00.000","Title":"Encrypt and protect file with python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I used Pyinstaller to make an exe. The exe works perfectly fine, but when I use NSSM to launch it on startup, the script runs but does nothing from the part where it is supposed to take an input from the user. I also tried moving that part of the code to a new python file and calling that py file from the compiled exe, but it isn't working either. What should I do to solve this?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":102,"Q_Id":66219754,"Users Score":0,"Answer":"This is about an nssm Environment variables issue:\n\nnssm edit servicename\nselect Environment\nEnvironment variables, KEY=Value","Q_Score":0,"Tags":"python,windows,pyinstaller,nssm","A_Id":69704018,"CreationDate":"2021-02-16T06:41:00.000","Title":"How to run a Pyinstaller compiled exe with NSSM","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to create a GUI in python, but I don't want it to open in a window. But is it possible? 
A GUI that runs just like a print(\"hello world\") program would run and doesn't open in a new window of its own? When you run the hello world program it prints hello world in the console. Is anything at least close to that possible for a GUI? If it is possible, how would I go about implementing it in python?","AnswerCount":3,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":84,"Q_Id":66219965,"Users Score":0,"Answer":"I think you want a GUI without the console. It's simple: first you need to convert it to an exe.\n\npip install pyinstaller\nrun cmd\ntype pyinstaller --onefile --noconsole 'path' where path is like d:\/something\/file.py\n\nI would recommend the eel library for nearly framework-free GUI development as an initial phase.","Q_Score":0,"Tags":"python,python-3.x,user-interface,console","A_Id":66220789,"CreationDate":"2021-02-16T07:02:00.000","Title":"Is it possible to create a GUI that doesn't run in an external window?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to create a GUI in python, but I don't want it to open in a window. But is it possible? A GUI that runs just like a print(\"hello world\") program would run and doesn't open in a new window of its own? When you run the hello world program it prints hello world in the console. Is anything at least close to that possible for a GUI? 
If it is possible, how would I go about implementing it in python?","AnswerCount":3,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":84,"Q_Id":66219965,"Users Score":0,"Answer":"Not sure that you can do a GUI without a window...\nBut the main purpose of a GUI is that you have something window-like that you can interact with...\nAlso you can start a GUI from the terminal, but it still opens in its own window.\nBut if you want to implement a GUI in python, check Kivy and Tkinter.","Q_Score":0,"Tags":"python,python-3.x,user-interface,console","A_Id":66220091,"CreationDate":"2021-02-16T07:02:00.000","Title":"Is it possible to create a GUI that doesn't run in an external window?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to create a GUI in python, but I don't want it to open in a window. But is it possible? A GUI that runs just like a print(\"hello world\") program would run and doesn't open in a new window of its own? When you run the hello world program it prints hello world in the console. Is anything at least close to that possible for a GUI? If it is possible, how would I go about implementing it in python?","AnswerCount":3,"Available Count":3,"Score":1.2,"is_accepted":true,"ViewCount":84,"Q_Id":66219965,"Users Score":1,"Answer":"This is one of the few things that's not possible -- at least, not practical. The output window of your Python interpreter is preconfigured by the run-time system with its characteristics; those include being the recipient of the data channels stdout and stderr. 
These channels have character-oriented data handlers, such that the bytes coming down that line are taken as a simple character stream, rather than RGB values, or positioning commands.\nDepending on your interpreter's implementation, it might be possible to reach back into the run-time definitions and reconfigure the streams to your will, but you'd have a lot of work to do, to handle both the graphics you want and the expected characteristics of those streams.","Q_Score":0,"Tags":"python,python-3.x,user-interface,console","A_Id":66220049,"CreationDate":"2021-02-16T07:02:00.000","Title":"Is it possible to create a GUI that doesn't run in an external window?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"When using Azure Bobs the path separator is a slash (\/). As Blobs are flat there are no actual directories. However, often there are prefixes that should be treated as directories.\nWhat are the right methods to deal with such paths? os.path is os dependent and will assume backslashes e.g. on Windows machines.\nSimply using str.split('\/') and similar does not feel right, as I would like to have the features from os.path to combine paths and I don't want to care about trailing and leading slashes and so on.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":94,"Q_Id":66222401,"Users Score":0,"Answer":"I normally do use str.split('\/'). Don't know if its the \"right\" method but it works fine for me. 
Later on I can use os.path.join() to combine the resulting strings again (when needed).","Q_Score":0,"Tags":"python,azure,path,blob","A_Id":66222454,"CreationDate":"2021-02-16T10:10:00.000","Title":"What are correct python path splitting methods for use with Azure Blob","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":1},{"Question":"I would like to send DirectInput keys to an inactive window without interfering with my actual mouse. I tried using PostMessage, SendInput and SendMessage but pywin32 uses virtual keycodes while ctypes does work with DirectInput. I have no idea how I can make it send in an inactive window.","AnswerCount":1,"Available Count":1,"Score":-0.1973753202,"is_accepted":false,"ViewCount":552,"Q_Id":66222584,"Users Score":-1,"Answer":"Try using this, it manages to work for me send the keystrokes to the inactive window,\nUse (but add error checking) hwndMain = win32gui.FindWindow(\"notepad\", \"\u200bprueba.txt: log keys\") hwndEdit = win32gui.FindWindowEx","Q_Score":3,"Tags":"python,ctypes,pywin32","A_Id":66222675,"CreationDate":"2021-02-16T10:24:00.000","Title":"How to send DirectInput keys to an inactive window in Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using the ResNet18 pre-trained model which will be used for a simple binary image classification task. However, all the tutorials including PyTorch itself use nn.Linear(num_of_features, classes) for the final fully connected layer. What I fail to understand is where is the activation function for that module? 
Also what if I want to use sigmoid\/softmax, how do I go about that?\nThanks for your help in advance, I am kinda new to Pytorch","AnswerCount":3,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":854,"Q_Id":66222699,"Users Score":3,"Answer":"No, you do not use an activation in the last layer if your loss function is CrossEntropyLoss, because pytorch's CrossEntropyLoss combines nn.LogSoftmax() and nn.NLLLoss() in one single class.\nWhy do they do that?\nYou actually need logits (the raw outputs before sigmoid\/softmax) for the loss calculation, so it is a correct design to not have it as part of the forward pass. Moreover, for predictions you don't need softmax because argmax(linear(x)) == argmax(softmax(linear(x))), i.e. softmax does not change the ordering but only changes the magnitudes (a squashing function which converts an arbitrary value into the [0,1] range, but preserves the partial ordering).\nIf you want to use activation functions to add some sort of non-linearity, you normally do that by using a multi-layer NN and having the activation functions in all layers but the last.\nFinally, if you are using other loss functions like NLLLoss, PoissonNLLLoss, BCELoss then you have to calculate the sigmoid yourself. Again, on the same note, if you are using BCEWithLogitsLoss you don't need to calculate the sigmoid because this loss combines a Sigmoid layer and the BCELoss in one single class.\nCheck the pytorch docs to see how to use the loss.","Q_Score":1,"Tags":"python,pytorch,resnet","A_Id":66222964,"CreationDate":"2021-02-16T10:32:00.000","Title":"ResNet family classification layer activation function","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I find my created python file in the windows folder. I double click to run where I only get the window. I enter the information and never see the output result. 
The window closes too fast. What is a good way to stop this or pause the window?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":19,"Q_Id":66224855,"Users Score":0,"Answer":"Add input(\"Please press Enter to exit\"). This will display the line and close when Enter is pressed.\nI hope it helps.","Q_Score":0,"Tags":"python,terminal","A_Id":66224927,"CreationDate":"2021-02-16T13:02:00.000","Title":"Running Program from Folder File","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I can log in to a certain website using Postman. Doing so will set a \"bearer-token\" Cookie. When I export this to Python code in Postman, it generates a request that already explicitly supplies the token. However, if I just post the login data with requests without giving that token, I get a 403.\nHow does Postman negotiate that token, and can you give me a Python snippet that will?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":59,"Q_Id":66226855,"Users Score":0,"Answer":"Solved this: the login page was redirecting right after login.\nSo in the requests.post\/get function, supply the allow_redirects=True argument and then look for a Set-Cookie entry in the response headers.","Q_Score":0,"Tags":"python-requests,postman","A_Id":66328943,"CreationDate":"2021-02-16T15:00:00.000","Title":"How does Postman get the bearer Token","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am converting code from Matlab to python.\nIn Matlab an s4p file (s-parameters of a 4 port network) is read and then the s-parameters of 4 single ports are converted to s-parameters of 2 
differential ports using the s2sdd method.\nI am using skrt (scikit-rf) in python to read in the s4p file but I am stuck when converting the s-parameters. Is there a method doing this?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":107,"Q_Id":66226893,"Users Score":0,"Answer":"Eventually I ported s2sdd from matlab to python. I could not find a suitable existing method.","Q_Score":0,"Tags":"python,matlab","A_Id":67063153,"CreationDate":"2021-02-16T15:02:00.000","Title":"Matlab network s2sdd for s-parameters in python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Good evening all. I know that we can groupby multiple columns by just df.groupBy('col1','col2','col3')\nI think that this grouping means that it first groups by col1 and for each member of col1 it groups by col2 and so on. If this is wrong just correct me, I basically started yesterday with PySpark because of a university project.\nI need to group the data by 4 members: 2 string columns and 2 time windows.\ndf.groupBy('col1','col2','1HourTimeWindow','15MinTimeWindow')\nI'm aware that I can do a groupBy with a window like this\ndf.groupBy(window(\"timeCol\", \"1 hour\")) but I can't have more than 1 window in the same groupBy.\nAny solution you can recommend to me would be awesome. 
Thanks in advance","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":177,"Q_Id":66229442,"Users Score":0,"Answer":"Solved by aggregating groupBy(col1, col2, 15Min) and after that grouping by 1 hour in subsequent tasks.","Q_Score":1,"Tags":"python,dataframe,apache-spark,pyspark,apache-spark-sql","A_Id":66262724,"CreationDate":"2021-02-16T17:42:00.000","Title":"PySpark groupby multiple time window","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"i have around 2000 tweet id's for which i have to extract respective tweets. first and foremost thing excel doesnt allow me to save those tweet id in the same format as it is supposed to be and last four digits of the tweet id is truncated to 0000 .\nex : tweet id 572330170108922545 is truncated to 572330170108920000 .\ni wanted to use twitter's tweepy library to extract tweets .. but seems that initial problem is not letting me start of the work. can i still use .txt file format to read each tweet id's line by line by using with open(filename.txt, 'r) as tweet_id : ?\nPlease let me know if there is any work around for this.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":144,"Q_Id":66229536,"Users Score":0,"Answer":"If your question is simply if this can be done, the answer is certainly yes. You just have to figure out how this data is structured. 
If you go line by line, you can use .split() or re (regular expression operations) to find the relevant parts of the lines.","Q_Score":0,"Tags":"python,excel,twitter,twitterapi-python","A_Id":66229734,"CreationDate":"2021-02-16T17:48:00.000","Title":"how to avoid tweet id's getting truncated when saved in excel as csv","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a 3D CT image of a car engine in raw form with no header information. When loaded into numpy array as 16 bit unsigned integer, I noticed that the values range between 0 to 52000. Is this normal for a CT Image? When viewing the image, I also noticed a lot of cloud like noise in every slice. I'm trying to extract features using a deep learning method. This is my first time working with CT Images. What pre processing is neccesary for such CT Images?","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":129,"Q_Id":66229988,"Users Score":0,"Answer":"From my experience in working with soft tissue CT medical images, I have seen CT having -1024 intensity for air and up to +3000 intensity for cancellous bone. When it comes to metal-like electrodes and even tooth images, it reaches up to +9000. So I guess for heavy metals like car engines the intensity value of 52000 is not aberration(though I never heard CT images of car engines so far!).\nPre-processing can include windowing or normalization operations and noise removals. 
Windowing is something already mentioned here to restructure the intensity range to a more workable one.","Q_Score":0,"Tags":"python,image-processing,medical-imaging","A_Id":68568420,"CreationDate":"2021-02-16T18:18:00.000","Title":"CT image preprocessing in python?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a 3D CT image of a car engine in raw form with no header information. When loaded into numpy array as 16 bit unsigned integer, I noticed that the values range between 0 to 52000. Is this normal for a CT Image? When viewing the image, I also noticed a lot of cloud like noise in every slice. I'm trying to extract features using a deep learning method. This is my first time working with CT Images. What pre processing is neccesary for such CT Images?","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":129,"Q_Id":66229988,"Users Score":0,"Answer":"Since it is 16 uint format, the value range could be 0 ~ 2 ^ 16 which is up to 65535, so 0 to 52000 is very reasonable. If it is a car engine instead of soft tissue, chances are you don't need that big range, you could reduce your data to 12 bit or even 8 bit without losing much details, by applying windowing and leveling to it.\nFor the \"cloud like noise\", please upload some image to show exact problem. If my guess is correct, it is probably due to the endian-ness. 
Try opening the raw data using ImageJ and try 16 uint little-endian and big-endian, one of these might give you normal image.","Q_Score":0,"Tags":"python,image-processing,medical-imaging","A_Id":67696094,"CreationDate":"2021-02-16T18:18:00.000","Title":"CT image preprocessing in python?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am a beginner in python and I am trying to make a program using PyQt5 that detects mouse click events that are outside of the program window. I was unable to find anything online, is there any way of do this?\nThanks in advance!","AnswerCount":1,"Available Count":1,"Score":-0.3799489623,"is_accepted":false,"ViewCount":87,"Q_Id":66230647,"Users Score":-2,"Answer":"Easy: Make your program fullscreen.","Q_Score":0,"Tags":"python,pyqt5,mouseevent,mouse","A_Id":66230667,"CreationDate":"2021-02-16T19:06:00.000","Title":"PyQt5 - Detecting mouse clicks outside of window","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Correct me if I'm wrong but PyCharm used to auto-complete the path for files located in the project's tree. For example when trying to open a file placed in your projects directory, the first character typed at open function's first argument would lead to the specified file.\n\nNowadays PyCharm seems to have another path as the default one for file searching. 
Is there a way to reconfigure PyCharm so I won't need to look up the absolute path of a file?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":44,"Q_Id":66232937,"Users Score":0,"Answer":"There are Project settings and there are IDE settings for PyCharm.\nProject settings are stored with each specific project as a set of .xml files under the .idea folder. If you specify the template project settings, these settings will be automatically used for each newly created project.\nIDE settings are stored in dedicated directories under the PyCharm home directory. The PyCharm directory name is composed of the product name and version.\nAll of these directories default to standards for Windows, Mac and Linux. It sounds like you have some settings in the .idea folder that should be changed. Have a look and keep a safe copy of the files (or a source code controlled version) before you mess with them.","Q_Score":1,"Tags":"python,pycharm","A_Id":66233044,"CreationDate":"2021-02-16T22:14:00.000","Title":"Making PyCharm search for a file at project's paths","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"After running pip install python-telegram-bot, I'm getting this error that the 'telegram' module is not found.\nUnder pip list I see that my python-telegram-bot package is installed, version 13.2\nIs anyone else getting this error?","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":1597,"Q_Id":66236885,"Users Score":0,"Answer":"pip3 install python-telegram-bot\nInstall it outside of the virtual environment in the terminal. Also uninstall telegram; python-telegram-bot is sufficient for a Telegram bot. 
In my case it resolved my issue.","Q_Score":3,"Tags":"installation,pip,python-telegram-bot","A_Id":72194900,"CreationDate":"2021-02-17T06:41:00.000","Title":"ModuleNotFoundError: No module named 'telegram' after pip install","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"After running pip install python-telegram-bot, I'm getting this error that the 'telegram' module is not found.\nUnder pip list I see that my python-telegram-bot package is installed version 13.2\nIs anyone else getting this error?","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":1597,"Q_Id":66236885,"Users Score":0,"Answer":"I also had this problem - for me the issue was that I was trying to run my code from a module called telegram.py. Newbie mistake I know...","Q_Score":3,"Tags":"installation,pip,python-telegram-bot","A_Id":71301249,"CreationDate":"2021-02-17T06:41:00.000","Title":"ModuleNotFoundError: No module named 'telegram' after pip install","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'd like to use the Google Error Reporting Client library (from google.cloud import error_reporting).\nBasically, you instantiate a client:\nclient = error_reporting.Client(service=\"my_script\", version=\"my_version\")\nand then you can raise error using:\n\nclient.report(\"my message\") or\nclient.report_exception() when an exception is caught\n\nI have 3 environments (prod, staging and dev). They are each setup on their own Kubernetes cluster (with their own namespace). 
When I look at the Google Cloud Error Reporting dashboard, I would like to quickly locate on which environment and in which class\/script the error was raised.\nUsing service is a natural choice to describe the class\/script, but what about the environment?\nWhat is the best practice? Should I use the version to store that, e.g. version=\"staging_0.0.2\"?\nMany thanks in advance\nCheers,\nLamp'","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":123,"Q_Id":66238791,"Users Score":0,"Answer":"I think the Error Reporting service is deficient (see comment above).\nSince you're using Kubernetes, how about naming your Error Reporting services to reflect Kubernetes Service names: ${service}.${namespace}.svc.cluster.local?\nYou could|should replace the internal cluster.local domain part with some unique external specifier (FQDN) to your cluster: ${service}.${namespace}.${cluster}\n\nNOTE These needn't be actual Kubernetes Services but some way for you to uniquely identify the thing within a Kubernetes cluster my_script.errorreporting.${namespace}.${cluster}","Q_Score":0,"Tags":"python,kubernetes,error-reporting,google-cloud-logging,google-cloud-error-reporting","A_Id":66374446,"CreationDate":"2021-02-17T09:06:00.000","Title":"Google Cloud - Error Reporting Client Libraries","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"Good day,\nWhile downloading my Jupyter notebook ipynb file, the default saving format changes to PDF Adobe Acrobat and this makes the file unreadable.\nI have tried changing the name of the file but still this doesn't work...\nThe name for the file would be: abcdefg.ipynb and the type of doc \"Adobe Acrobat Document\".\nI hope you can help me, thank you very much","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":443,"Q_Id":66239161,"Users Score":0,"Answer":"In Jupyter Lab you should be able to save your notebook by clicking on File > Save Notebook As... If you prefer to save your notebook in a different format than .ipynb, click on File > Export Notebook As...","Q_Score":0,"Tags":"python,python-3.x,jupyter-notebook,anaconda,jupyter","A_Id":66239401,"CreationDate":"2021-02-17T09:31:00.000","Title":"Changing format from Adobe Acrobat pdf to Jupyter Notebook ipynb","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":1},{"Question":"Good day,\nWhile downloading my Jupyter notebook ipynb file, the default saving format changes to PDF Adobe Acrobat and this makes the file unreadable.\nI have tried changing the name of the file but still this doesn't work...\nThe name for the file would be: abcdefg.ipynb and the type of doc \"Adobe Acrobat Document\".\nI hope you can help me, thank you very much","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":443,"Q_Id":66239161,"Users Score":0,"Answer":"I might be misunderstanding it, but if you want to change a file's extension and you're on Windows (8 or 10):\n\nWhen saving the file, you should be able to set your own extension if you can set the \"save as type\" to 'All'.\nTo rename an existing file's extension, in Explorer go to \"View\" and make sure \"File name extensions\" is checked, then rename the file normally.\n\nNote that if the file was an actual PDF document, you can't convert it just by renaming the extension. 
(The same also applies to most other file types)","Q_Score":0,"Tags":"python,python-3.x,jupyter-notebook,anaconda,jupyter","A_Id":66239633,"CreationDate":"2021-02-17T09:31:00.000","Title":"Changing format from Adobe Acrobat pdf to Jupyter Notebook ipynb","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm using ubuntu 16.04 and trying to use vim plugin that requires python3.6 (YouCompleteMe). I used update-alternatives to set python3.6 as the default python and python3 interpreter but vim still using python3.5.\nIs there a way to tell vim to use python3.6 interpreter?\nThanks in advance","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":103,"Q_Id":66241307,"Users Score":3,"Answer":"Vim uses the Python interpreters it was compiled with. No setting will affect it. If you can't find a Vim binary with the desired Python support, the only way to make Vim use Python3.6 is to compile it with Python3.6 yourself. 
See --enable-python3interp, --with-python3-command and --with-python3-config-dir options to Vim's configure.","Q_Score":1,"Tags":"python,vim,ubuntu-16.04","A_Id":66241359,"CreationDate":"2021-02-17T11:44:00.000","Title":"force VIM to use python3.6 in ubuntu 16.04","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I\u00b4m trying to load a .csv in to my rocksdb database, but it fails and show me this error:\n Got error 10 'Operation aborted:Failed to acquire lock due to rocksdb_max_row_locks limit' from ROCKSDB\nI've tried with SET SESSION rocksdb_max_row_locks=1073741824; but same error always.\nAnyone can help me?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":540,"Q_Id":66241696,"Users Score":6,"Answer":"This should do the trick (before starting the insert)\n\nSET session rocksdb_bulk_load=1;","Q_Score":3,"Tags":"python,mariadb,rocksdb","A_Id":66831678,"CreationDate":"2021-02-17T12:10:00.000","Title":"ROCKSDB Failed to acquire lock due to rocksdb_max_row_locks","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"How do I remove the red underline in just one file of Pycharm.\nI'm writing a kivy application and the code works but there are the whole time these red underlines. 
How do I stop that?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":55,"Q_Id":66246097,"Users Score":0,"Answer":"Perhaps you could use the # noqa comment, like:\nblabla.do_something() # noqa\nBut it works only for one line.","Q_Score":2,"Tags":"python,pycharm,kivy","A_Id":66246156,"CreationDate":"2021-02-17T16:33:00.000","Title":"Turn off the red underline in just one file in pycharm","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am running a very simple Locust script that is using the standard requests module and Python 3.7.7\nThe error is:\n'in get_adapter\nraise InvalidSchema(\"No connection adapters were found for {!r}\".format(url))\nrequests.exceptions.InvalidSchema: No connection adapters were found for '=https:\/\/.....'\nNot sure where the '=' sign is coming from in the request url?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":74,"Q_Id":66246334,"Users Score":0,"Answer":"Sorry, found the answer: I had mistyped the cmd line for launching locust and had an extra = in there.","Q_Score":0,"Tags":"python-3.x,locust","A_Id":66246806,"CreationDate":"2021-02-17T16:44:00.000","Title":"running a simple locust script and getting No connection adapters error","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've been trying to install the chatterbot library. 
I've tried by using pip install and downloading from git but its giving me an error that I cant resolve.\nERROR: Package 'chatterbot' requires a different Python: 3.9.1 not in '<=3.8,>=3.4'","AnswerCount":2,"Available Count":1,"Score":-0.0996679946,"is_accepted":false,"ViewCount":94,"Q_Id":66248146,"Users Score":-1,"Answer":"try installing Chatterbot v1.0.2\npip install chatterbot==1.0.2\nor download a python version 3.4 to 3.8","Q_Score":0,"Tags":"python,chatterbot","A_Id":66248284,"CreationDate":"2021-02-17T18:46:00.000","Title":"Error while installing chatterbot library through command prompt","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Newbie here... 2 days into learning this.\nIn a learning management system, there is an element (a plus mark icon) to click which adds a form field upon each click.\u00a0 The goal is to click the icon, which generates a new field, and then put text into the new field.\u00a0 This field does NOT exist when the page loads... it's added dynamically based on the clicking of the icon.\nWhen I try to use \"driver.find_element_by_*\" (have tried ID, Name and xpath), I get an error that it can't be found. I'm assuming it's because it wasn't there when the page loaded. Any way to resolve this?\nBy the way, I've been successful in scripting the login process and navigating through the site to get to this point. So, I have actually learned how to find other elements that are static.\nLet me know if I need to provide more info or a better description.\nThanks,\nBill","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":45,"Q_Id":66250310,"Users Score":0,"Answer":"Apparently I needed to have patience and let something catch up...\nI added:\nimport time\nand then:\ntime.sleep(3)\nafter the click on the icon to add the field. 
It's working!","Q_Score":0,"Tags":"python,selenium,selenium-chromedriver","A_Id":66250513,"CreationDate":"2021-02-17T21:38:00.000","Title":"Selenium\/Python - Finding Dynamically Created Fields","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to run a project in atom using a virtual environment and I keep getting an error in regards to the manage.py file. File \"manage.py\", line 17 ) from exc ^ SyntaxError: invalid syntax\nI've searched for solutions all over and most people seem to resolve it by using python3 manage.py runserver in the virtual environment instead of python manage.py runserver but that did not work for me. Any suggestions on this?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":545,"Q_Id":66252213,"Users Score":0,"Answer":"This is happening because you have tried to run your Django app but you have not activated the virtual environment which you installed Django in to.\ntry to activate your virtual env by making sure you are in the directory of your django app in terminal\/command prompt and type:\nLinux:\nsource nameofenv\/bin\/activate\nWindows:\nnameofenv\\Scripts\\activate","Q_Score":0,"Tags":"python,django,atom-editor,python-venv","A_Id":66645558,"CreationDate":"2021-02-18T01:14:00.000","Title":"Django issue running manage.py on Mac","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"When you import a library in Python, where do you import it from? For example, if you do import math, where does it come from? 
Why couldn't it be included in the first place?","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":68,"Q_Id":66253675,"Users Score":1,"Answer":"\"When you import a library in Python, where do you import it from?\"\n\nPython has the standard library. All the modules are located there.\n\n\"Why couldn't it be included in the first place?\"\n\nBecause it is the syntax of Python.","Q_Score":0,"Tags":"python,import","A_Id":66253759,"CreationDate":"2021-02-18T04:34:00.000","Title":"When you import a library in Python, where do you import it from?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"When you import a library in Python, where do you import it from? For example, if you do import math, where does it come from? Why couldn't it be included in the first place?","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":68,"Q_Id":66253675,"Users Score":1,"Answer":"Is a module that comes with the standard library that comes by default from the python packages.\nLibraries not needed by the default don\u2019t need to be imported by default, they makes your program heavy and slow for processing.\nimport math - module from standard library, not needed in all programming project.","Q_Score":0,"Tags":"python,import","A_Id":66253795,"CreationDate":"2021-02-18T04:34:00.000","Title":"When you import a library in Python, where do you import it from?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Gist of the problem is, I'm not developing an SPA, I'm developing a mobile app, with a backend in Flask. 
FlaskSecurityToo has provided me with some great features, and I'm now trying to use their password reset functionality. Here's my gripe.\nI want to have the email send a deeplink, which users on the mobile app will click and get sent to the password reset form on the app. There's no UI view for this. But FlaskSecurityToo has logic that requires the server is first hit to validate the token, then redirects them to whatever has REDIRECT_HOST set. Which works great when I set the REDIRECT_BEHAVIOR as spa\nIs there a way to tell Flask \"Hey, don't worry about the need to validate the token from the initially provided password reset email, let the UI\/Mobile app make the call to determine that whenever they want\" from the provided configuration? Thus, relaxing the constraint on the host name \/ details of the url for a password reset, as long as a token exists? Or is this abusing some of the principles of FlaskSecurity that I don't grasp yet?\nMy current plan is to let it open a mobile browser, and hopefully the redirect forces the app open? I have little experience with deeplinks, so I'm testing and probing things as I learn.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":266,"Q_Id":66255958,"Users Score":1,"Answer":"You are correct about current FS behavior - here is a suggestion (not clean but it would be interesting if it was all you need) - the POST \/reset\/ endpoint is stand-alone - you don't have to call GET first - the POST will ALSO verify the token is valid. So the issue becomes how to generate the link for the email that has what you want. FS currently doesn't allow to configure this (that could be an enhancement) - but in 4.0.0 you can easily replace the MailUtil class and have your implementation of send_mail look for template='reset_instructions'. Now - at this point the email has already been rendered - so you would have to parse the body and rewrite the url (keeping the token intact). 
Ugly but doable - is this sufficient? If so I can see a few simple improvements in FS to allow more flexibility around emails.","Q_Score":0,"Tags":"python,redirect,single-page-application,deep-linking,flask-security","A_Id":66263733,"CreationDate":"2021-02-18T08:11:00.000","Title":"How to set password reset url to a mobile deeplink using Flask Security?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"i am new in this field and i need a small help.\ni just want to know that, what is the best way to append multiple data arrays in a variable of a xarray dataset?\neach data array has a different time and value but has the same x,y coordinates same as dataset.\ni tried ds[variable_name] = da but it works only for the first data array .\ni want to create a function that gets data arrays and put them into one variable of the dataset and updating the time dimension of the dataset.\nthanks for your help","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":223,"Q_Id":66258827,"Users Score":0,"Answer":"The best way for doing that is first to convert data arrays to datasets separately then merge datasets together (using xr.merge).\nHope it helps the others.","Q_Score":0,"Tags":"python,numpy,python-xarray,rasterio","A_Id":66500507,"CreationDate":"2021-02-18T11:14:00.000","Title":"How to append multiple data arrays into one varibale of xarray dataset?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I create a Todo web application in Django and i deploy it on Heroku. 
I want to know how can i push the notification in my browser for upcoming task.Thanks in advance.","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":823,"Q_Id":66259563,"Users Score":2,"Answer":"You should use websockets and async functionality of Django to be able to push realtime notifications as they occur.\nBasic http protocol does not give you such functionality.","Q_Score":1,"Tags":"python,django,heroku","A_Id":66259742,"CreationDate":"2021-02-18T12:02:00.000","Title":"How to push the notification on web browser using django","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"When I run\nsudo apt install mysql-workbench-community\nI get the error\nThe following packages have unmet dependencies:\nmysql-workbench-community : Depends: libpython3.7 (>= 3.7.0) but it is not installable\nE: Unable to correct problems, you have held broken packages.\nI then ran\nsudo dpkg --get-selections | grep hold\nWhich did not return anything\ntyping\npython3 -v\nProduces an error\nif I type\npython3 --version\nI get\nPython 3.8.5\nIf I try to run\nsudo apt install libpython3.7\nI get the error\nE: Package 'libpython3.7' has no installation candidate\nI cannot come up with a way to fix this I have recently updated from 19\nHelp much appreciated","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":507,"Q_Id":66260846,"Users Score":0,"Answer":"This was caused due to running an older version of MYSQL.\nfix was to remove the mysql repository for tools and install the work bench via snap.","Q_Score":0,"Tags":"python-3.x,mysql-workbench","A_Id":66271946,"CreationDate":"2021-02-18T13:22:00.000","Title":"cannot install mysql-workbench-community on ubuntu 20.04","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop 
Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm trying to install python 3.7 env for miniconda on my raspberry pi 4 model B.\n\nBut when I'm doing conda install python 3.7\nI get Error: No packages found in current Linux-armv7l channels matching: 3.7.\nhow can I install python 3.7 in some way on that miniconda?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":2058,"Q_Id":66261176,"Users Score":0,"Answer":"You might want to try with this command : conda install -c anaconda python=3.7\nTell us if this works","Q_Score":4,"Tags":"python,raspberry-pi,anaconda,python-3.7,miniconda","A_Id":66261381,"CreationDate":"2021-02-18T13:41:00.000","Title":"How to install python 3.7 on miniconda","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"We are using python 3.5.1 and requests 2.25. I am using request.post to get a token. It fails when we ran the first time, the same thing when we ran 2nd time it is running successfully. Did anyone face the same issue before?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":48,"Q_Id":66264366,"Users Score":0,"Answer":"I found the solution, it is due to dependency packages. 
When we downgraded the below packages, it started to work\n\nPrevious Packages which are not working: cachetools==4.2.1 google-auth==1.27.0 pytz==2021.1 rsa==4.5\nWorking packages cachetools==4.2.0 rsa==4.6 pytz==2020.4 google-auth==1.23.0","Q_Score":0,"Tags":"python-3.x,python-requests","A_Id":66279038,"CreationDate":"2021-02-18T16:48:00.000","Title":"python3 request.post is failing intermittently","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to create a chatbot using gupshup, but I don't have much experience with JS, and for my implementation, it will be easier to code in python, but I'm not finding any material about it.\nIs it possible to develop a ChatBot with python using GupShup?\nThanks","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":199,"Q_Id":66264967,"Users Score":0,"Answer":"Right Now Gupshup Only Provides JS Language Support, but we are looking to have python as a development language also for chatbots.","Q_Score":1,"Tags":"python,gupshup","A_Id":67054515,"CreationDate":"2021-02-18T17:22:00.000","Title":"Using python to create a ChatBot in GupShup Bot Builder","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Python has a class named \"pyautogui\" to perform some task automatically on system interfere(Control mouse and keyboard strokes by method of that class). Is there any similar class available in C++ and Java? Help me out dudes by writing down names of this class. 
Thanks in advance, \u2764.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":360,"Q_Id":66272066,"Users Score":2,"Answer":"Your question: Is there any similar class available in Java?\nAnswer: Yes.\nMore info: Read the java documentation of java.awt.Robot","Q_Score":0,"Tags":"java,c++,python-3.x,pyautogui","A_Id":66272171,"CreationDate":"2021-02-19T05:06:00.000","Title":"Is there any similar class available in C++ and Java as \"pyautogui\"?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"In my template, I want to show the join date of a user, so I am using {{ user.date_joined }} which shows the date and time (in local time zone - same as what is shown in the admin panel). To just show the date, I use {{ user.date_joined.date }}, but it seems to be converting the date and time to UTC before showing the date (I am in EST\/EDT - I never remember which is which).\nFor example:\n{{ user.date_joined }} ==> Feb. 18, 2021, 7 p.m.\n{{ user.date_joined.date }} ==> Feb. 
19, 2021\nIs there a way for me to change this so that it shows the same date as the full date and time?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":153,"Q_Id":66272633,"Users Score":1,"Answer":"Found a solution\/workaround for anyone else with a similar question.\nInstead of using {{ user.date_joined.date }} like a traditional datetime object, I used {{ user.date_joined|date }}","Q_Score":0,"Tags":"python,python-3.x,django,django-templates","A_Id":66272941,"CreationDate":"2021-02-19T06:19:00.000","Title":"Django User.date_joined.date using UTC time?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"So I am using AWS SAM to build and deploy some function to AWS Lambda.\nBecause of my slow connection speed uploading functions is very slow, so I decided to create a Layer with requirements in it. So the next time when I try to deploy function I will not have to upload all 50 mb of requirements, and I can just use already uploaded layer.\nProblem is that I could not find any parameter which lets me to just ignore requirements file and just deploy the source code.\nIs it even possible?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":746,"Q_Id":66280432,"Users Score":0,"Answer":"I hope I understand your question correctly, but if you'd like to deploy a lambda without any dependencies you can try two things:\n\nnot running sam build before running sam deploy\nhaving an empty requirements.txt file. Then sam build simply does not include any dependencies for that lambda function.\n\nOf course here I assume the layer is already present in AWS and is not included in the same template. If they are defined in the same template, you'd have to split them into two stacks. 
One with the layer which can be deployed once and one with the lambda referencing that layer.\nUnfortunately sam build has no flag to ignore requirements.txt as far as I know, since the core purpose of the command is to build dependencies.","Q_Score":0,"Tags":"python,amazon-web-services,aws-lambda,requirements.txt,sam","A_Id":66290011,"CreationDate":"2021-02-19T15:30:00.000","Title":"make sam to IGNORE requirements.txt","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm performing some (binary)text classification with two different classifiers on the same unbalanced data. i want to compare the results of the two classifiers.\nWhen using sklearns logistic regression, I have the option of setting the class_weight = 'balanced' for sklearn naive bayes, there is no such parameter available.\nI know, that I can just randomly sample from the bigger class in order to end up with equal sizes for both classes, but then the data is lost.\nWhy is there no such parameter for naive bayes? I guess it has something to do with the nature of the algorithm, but cant find anything about this specific matter. I also would like to know what the equivalent would be? How to achieve a similar effect (that the classifier is aware of the imbalanced data and gives more weight to the minority class and less to the majority class)?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1376,"Q_Id":66280588,"Users Score":2,"Answer":"I'm writing this partially in response to the other answer here.\nLogistic regression and naive Bayes are both linear models that produce linear decision boundaries.\nLogistic regression is the discriminative counterpart to naive Bayes (a generative model). You decode each model to find the best label according to p(label | data). 
What sets Naive Bayes apart is that it does this via Bayes' rule: p(label | data) \u221d p(data | label) * p(label).\n(The other answer is right to say that the Naive Bayes features are independent of each other (given the class), by the Naive Bayes assumption. With collinear features, this can sometimes lead to bad probability estimates for Naive Bayes\u2014though the classification is still quite good.)\nThe factoring here is how Naive Bayes handles class imbalance so well: it's keeping separate books for each class. There's a parameter for each (feature, label) pair. This means that the super-common class can't mess up the super-rare class, and vice versa.\nThere is one place that the imbalance might seep in: the p(labels) distribution. It's going to match the empirical distribution in your training set: if it's 90% label A, then p(A) will be 0.9.\nIf you think that the training distribution of labels isn't representative of the testing distribution, you can manually alter the p(labels) values to match your prior belief about how frequent label A or label B, etc., will be in the wild.","Q_Score":4,"Tags":"python,scikit-learn,logistic-regression,naivebayes","A_Id":66310760,"CreationDate":"2021-02-19T15:40:00.000","Title":"class_weight = 'balanced' equivalent for naive bayes","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I just tried to install package WoE using pip which works fine. Then in Jupyter Notebook when I try to run the command:\nfrom WoE import WoE\nI receive an error that there is no module named \"WoE\"\nI keep trying to figure out how to use sys.path.append to make this module work but I cannot figure it out. 
Any help or advice would be appreciated!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":44,"Q_Id":66281276,"Users Score":0,"Answer":"Try running command prompt as admin and then doing the command py -m pip install WoE. If that still doesn't work try restarting your computer, it could just be an issue with Jupyter not seeing the module yet. You can also do py -m pip show WoE and if that gives you a file location then that means it did install correctly.","Q_Score":0,"Tags":"python,jupyter-notebook,sys.path","A_Id":66282893,"CreationDate":"2021-02-19T16:21:00.000","Title":"Solving ModuleNotFoundError: Importing the module WoE and manipulating sys.path.append to allow my notebook to identify the new module","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"The command is:\ndocker run -v \"$PWD\":\/var\/task \"lambci\/lambda:build-python3.6\" \/bin\/sh -c \"pip install -r \/var\/task\/requirements.txt -t python\/lib\/python3.6\/site-packages\/; exit\"\nAnd I am running it from the same folder as the requirements.txt file.\nI get the following error: ERROR: Could not open requirements file: [Errno 2] No such file or directory: '\/var\/task\/requirements.txt'","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":90,"Q_Id":66283683,"Users Score":0,"Answer":"This seems to be a \"Docker on WSL2\" issue, not a Docker issue.","Q_Score":0,"Tags":"python,docker,pip,windows-subsystem-for-linux","A_Id":66284304,"CreationDate":"2021-02-19T19:12:00.000","Title":"Why is Docker saying that the requirements.txt file doesn't exist?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web 
Development":0},{"Question":"i got this message when i wanted run a beysian personalized ranking by GPU in colab, How can i resolve this problem ?\nmessage is :\nGPU training requires factor size to be a multiple of 32 - 1. Increasing factors from 100 to 127.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":337,"Q_Id":66284758,"Users Score":0,"Answer":"It could be that google colab is running out of ram\nwhy?\nbecause we are loading all data at once.or generating all data at once.\nexample :\ngoogle colab having 12 GB of ram. and it running out of ram.\nSo what i would suggest is:\nwe can process that data in chunks. if the total size of the data is 12 GB. than we can divide it into chunk(file) of 1 Gb.\n12 GB data = 12 chunks(files) of 1 Gb\nso now we have to load only 1 GB file into ram. which won't crash our notebook.","Q_Score":0,"Tags":"python,google-colaboratory","A_Id":66284829,"CreationDate":"2021-02-19T20:44:00.000","Title":"my colab notebook crash, how can i resolve it?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"i got this message when i wanted run a beysian personalized ranking by GPU in colab, How can i resolve this problem ?\nmessage is :\nGPU training requires factor size to be a multiple of 32 - 1. Increasing factors from 100 to 127.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":337,"Q_Id":66284758,"Users Score":0,"Answer":"On Colab a multitude of things could lead to crash. 
It's likely that you ran out of RAM or out of GPU memory.","Q_Score":0,"Tags":"python,google-colaboratory","A_Id":66284848,"CreationDate":"2021-02-19T20:44:00.000","Title":"my colab notebook crash, how can i resolve it?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am just wondering what the difference is between Python2, Python3, PyPy2, PyPy3 is.\nI already understand that Python3 is the latest version of python, however I have no clue as to what PyPy2 and PyPy3 are, apart from the fact that some syntax is different. Thanks.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":633,"Q_Id":66284784,"Users Score":1,"Answer":"Python is the language - Python 2 and Python 3 are different major versions.\nPyPy is an implementation of that language - it happens to be implemented in Python itself. This is in contrast to something like CPython (the de-facto \"standard\" implementation), which is written in C instead. PyPy 2 and PyPy 3 are implmentations of Python 2 and 3, respectively.","Q_Score":0,"Tags":"python,version,pypy","A_Id":66284837,"CreationDate":"2021-02-19T20:46:00.000","Title":"Difference Between Python2, Python3, PyPy2, PyPy3","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm running a Ubuntu server on Azure's cloud. I run the command nohup python3 scanner.py to allow me to run my script and close the putty terminal and let it keep running. 
The problem is now I have no way to give input to the process, and if I want to terminate it I have to use the kill command.\nWhat's the best way to disconnect\/connect to a running process on ubuntu server command line","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":28,"Q_Id":66285212,"Users Score":0,"Answer":"There are a couple of ways, but none are particularly fantastic. First, you could rework the script such that it has some form of socket connection, and expects input on that, and then write yet another script to send information to that socket. This is very work-heavy, but I've done it before. Second, you could use something like screen or tmux instead of nohup so that you can reconnect to that terminal session later, and have direct access to stdout\/stdin.","Q_Score":1,"Tags":"python,azure,ubuntu,server,remote-server","A_Id":66285508,"CreationDate":"2021-02-19T21:27:00.000","Title":"Communicate to a running process ubuntu python server","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am using the discord.ext.commands Bot class. I need to get user info from id's I have in a dictionary, so that I can send them direct messages. I know there is a method to get user info from id using the client class (the .get_user_info() ) function, however I am not using the client class, only the Bot class. Is there a way to get user info using the bot class?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":81,"Q_Id":66286627,"Users Score":0,"Answer":"Use user = bot.fetch_user(id). 
After this, you can access all the user's attributes through user, such as user.display_name.","Q_Score":0,"Tags":"python,discord,discord.py","A_Id":66286810,"CreationDate":"2021-02-20T00:13:00.000","Title":"How to get user info given discord id using the discord Bot class?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am relatively new to oop and had a quick question about the best way to approach coding a certain type of calculation. I'm curious if there's an established design pattern to approach this sort of problem.\nConsider a chemical process flow where you convert materials (a,b) with attributes such as temperature, pressure, flow rate, etc. into a final product c. To get there, I need unit operations D,E,F... each with its own set of attributes (cost, size, etc.). I would only need information flow in one direction as closed loops will probably increase the complexity (if not, I would really appreciate insight into how closed loops would work).\na,b --> D --> E --> F --> c\nUltimately I would like to be able to do a system cost analysis, where I would sum up the cost attributes of D,E,F.\nMy current thought process to approach this is to define a \"materials\" object, then have D inherit materials, E inherit D... c inherit F then lastly a \"system\" object inherit c to analyze the system variables. Since I would like to be able to swap out D,E,F for say G,H,I, there also needs to be code for conditional inheritance where D must be able to accept inputs a,b (based on defined attributes) and E be able to inherit D for the same reason. 
One of the things I'm unsure of is how object c would be able to understand how to sum up attributes of all the inherited objects (probably based on some consistent naming convention of objects\/attributes?).\nSorry for the somewhat lengthy question - if you are aware of AspenPlus, I'm looking to replicate a smaller scale version of this (ie no solvers) in Python. Thank you for reading through this!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":71,"Q_Id":66287297,"Users Score":0,"Answer":"I would argue that in your case functional programming is actually more suited than OOP since what it boils down to is a set of operations process on \"blank\" materials that results in a new material, well actually the same with different properties.\nIf I was restrained to OOP I would create different classes :\n\nMaterialType which is basically a string or enum (of a, b & c)\nExternalProperties for temperature\/pressure, etc.\nMaterial which contains the Material_Type and various properties\/functions aimed at transforming the material type so for instance it could contain a transform function with an unbounded list of ExternalProperties\nLaboratory to do all the operations\n\nHere object c would be the MaterialType which the Material can calculate without inheriting of everything else.\nIt's hard to propose an accurate concretisation since your example is very abstract but I think inheritance brings more problems than solutions here.","Q_Score":1,"Tags":"python,oop,design-patterns","A_Id":66299410,"CreationDate":"2021-02-20T02:07:00.000","Title":"Python OOP Design Pattern for Calculation Flows","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":1},{"Question":"The elementary functions in numpy, like mean() and std() returns np.nan when encounter np.nan. 
Can I make them ignore it?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":320,"Q_Id":66289714,"Users Score":2,"Answer":"The \"normal\" functions like np.mean and np.std evaluate the NaN, i.e. the result you've provided evaluates to NaN.\nIf you want to avoid that, use np.nanmean and np.nanstd. Note that since you have only one non-NaN element the std is 0, thus you are dividing by zero.","Q_Score":0,"Tags":"python,numpy","A_Id":66289750,"CreationDate":"2021-02-20T08:57:00.000","Title":"'ignore nan' in numpy functions","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I don't see any problem when I run simple codes such as creating a turtle. However, when I try OOP via turtle library code and specific classes, the code output neither works nor responds to my commands. Then the kernel dies and Python restarts.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":33,"Q_Id":66290200,"Users Score":0,"Answer":"I use both the Anaconda-based Spyder and the PC install. The PC uses Python 3.9, Spyder uses 3.8.5. Probably I have to delete Spyder.","Q_Score":0,"Tags":"python,screen,spyder,turtle-graphics","A_Id":66302803,"CreationDate":"2021-02-20T09:56:00.000","Title":"I run the python's turtle module then I want to see the output on the sypder's output screen, my computer doesnot respond to me?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Suppose I have a list of 100 numbers. I can find the mean by summing and dividing by the number of elements.
But how can I find two values, one that gravitates towards the left of the list (assuming the list is ordered) and one towards the right, so that the list is equally divided into three blocks?\nSorting the array and taking the 33rd and the 66th elements doesn't work because I could have all 1's before the 33rd position and bigger values after, so the 33rd position would be too early in the array. Those two 'means' depend on the values of the array and not solely on the indices.\nI'm sure what I'm trying to do has a proper name but I can't really remember it now.","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":77,"Q_Id":66290266,"Users Score":0,"Answer":"You could try numpy.quantile, for example np.quantile(your_list, [0.33, 0.66]); I think that should do the trick.","Q_Score":0,"Tags":"python,arrays,list,numpy,mean","A_Id":66290439,"CreationDate":"2021-02-20T10:01:00.000","Title":"How to find multiple equally distributed means of a numpy array?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I can't get my Python program (which runs in Terminal with no problem) to run through cron.\nHere's the crontab command I use:\n38 11 * * * \/usr\/bin\/python3 \/home\/pi\/Cascades2\/03_face_recognition.py >> \/home\/pi\/Cascades2\/cron.log 2>&1\nThe error message that appears in the cron.log file is:\n\n: cannot connect to X server\n\nWhat's the problem?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":160,"Q_Id":66290650,"Users Score":0,"Answer":"Like tocode suggested, adding export DISPLAY=:0.0 && to the script works perfectly.\n38 11 * * * export DISPLAY=:0.0 && \/usr\/bin\/python3 \/home\/pi\/Cascades2\/03_face_recognition.py >> \/home\/pi\/Cascades2\/cron.log
2>&1","Q_Score":0,"Tags":"python,cron","A_Id":66292030,"CreationDate":"2021-02-20T10:47:00.000","Title":"Unable to run a Python program with cron: cannot connect to X server","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am currently writing a Python server to deploy on AWS Lambda. I want to use the firebase-admin package to send notifications with FCM and read data from cloud firestore. however when I try deploying my function to AWS Lambda with the .zip file archives, I get this error on execution:\n[ERROR] Runtime.ImportModuleError: Unable to import module 'lambda_function': Failed to import the Cloud Firestore library for Python. Make sure to install the \"google-cloud-firestore\" module.\nI installed the module with this: pip install --target . firebase-admin into a folder, added my code files (to the root as instructed), zipped it recursively and uploaded it with the aws-cli, I can clearly see that there is a google-cloud-firestore folder inside the .zip so i'm not sure whats going on. any help is appreciated!","AnswerCount":4,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1208,"Q_Id":66291208,"Users Score":0,"Answer":"From the look of it you have bundled your code correctly and deployed successfully. The error occurs because Firestore relies on a C-based implementation of GRPC. By default this does not work on AWS Lambda. 
I'm currently creating a work-around and will update this post with my results.","Q_Score":0,"Tags":"python,amazon-web-services,firebase,aws-lambda,firebase-admin","A_Id":66393235,"CreationDate":"2021-02-20T11:56:00.000","Title":"Firebase-Admin with AWS Lambda Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am calculating similarity between 2 texts using universal sentence encoder\nMy question is whether embedding text at sentence level (which yields no of vectors equal to the no of sentences) and then average out scores instead of just creating a vector per text is a right way to do it?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":84,"Q_Id":66292655,"Users Score":0,"Answer":"As always, it depends on your data set. You can try it both ways and see which one gives the scores useful for your use case. In general, I have found that just feeding the whole text at one time to USE for text up to 100 words works just fine or even better. 
There is no need to break the text into sentences and then average.","Q_Score":0,"Tags":"python,tensorflow,sentence-similarity","A_Id":66956358,"CreationDate":"2021-02-20T14:29:00.000","Title":"Universal sentence encoder for multi sentence text similarity","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"For example: I want to know whether 5 and 500 have a 1:100 ratio, and I also want to know how I can see if two numbers roughly have the same ratio or not. How do I do this?","AnswerCount":4,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":406,"Q_Id":66293480,"Users Score":1,"Answer":"If you need to know whether a\/b and c\/d are roughly the same ratio, then (in Python 3, where \/ is true division) you can do abs(a\/b - c\/d) < margin. The smaller the positive number margin is, the closer the ratios have to be for the expression to return True. margin = 1\/100 would be within a percentage point.","Q_Score":1,"Tags":"python,python-3.x","A_Id":66293671,"CreationDate":"2021-02-20T15:45:00.000","Title":"How do I check if 2 numbers are equal to a specific ratio in Python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have two different django projects on the same computer and I run both projects with the commands python manage.py runserver 8000 and python manage.py runserver 8001.
They both run without any error, but when I open them in the same Chrome browser in two different tabs and try to log in to both projects, they are reset and return to the login page again.\nWhat I am trying to say is that when I log in to the first project it logs in successfully, and then when I log in to the second project it also logs in successfully, but the first project gets reset and returns to the login page again. It seems that the browser allows only one login at a time.\nIs there anything about Django sessions, either cookie sessions or authentication sessions?\nAnd do I have to add or change something in the settings.py file?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":162,"Q_Id":66294489,"Users Score":0,"Answer":"What you should know is that when you log in, the session cookie is stored in the browser and is then sent with every request to the server. The cookie is set for the domain on which you are logging in, which in your case is localhost.\nBecause the domain name is the same (\"localhost\") for both of your applications, the cookie is getting overridden by the next project you start.\nTo resolve this you can run one of your projects on 127.0.0.1:8000 and the other one on localhost:8001.\nThis way the cookie won't get overridden because the domain names are different.\nYou can also use your system's IP instead of localhost to run one of your projects.\nYou can view what cookie is being set and on what domain by opening the inspect panel in Chrome and then navigating to the Application section there -> Cookies -> yourdomain","Q_Score":0,"Tags":"python,django,django-authentication,django-settings,django-sessions","A_Id":66294744,"CreationDate":"2021-02-20T17:24:00.000","Title":"Authentication login problem in running two django projects at the same time in same PC machine","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration
and DevOps":0,"Web Development":1},{"Question":"I have a conceptual question and am hoping someone can clarify. When running, let's say, CV=10 in GridSearchCV, the model is getting trained on 9 partitions and tested on the remaining 1 partition.\nThe question is what's more relevant here? The avg AUC results coming from the 9 partitions or the avg AUC of the testing partitions. What if the AUC's on these 2 (9 vs 1 partition) are far apart, let's say more than 20% apart. What does that say about the efficacy of the trained model? Any rule of thumb on how far apart the AUC's could be? What is generally reported as the measure of model performance, the 9 partition AUC (train) or the testing partitions?\nThank you!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":30,"Q_Id":66295030,"Users Score":0,"Answer":"I assume it is a machine learning model (e.g. a neural net).\n\nWhen running, let's say, CV=10 in GridSearchCV, the model is getting\ntrained on 9 partitions and tested on the remaining 1 partition.\nthe avg AUC results coming from the 9 partitions or the avg\n\nUsually the model is trained on 1 partition (the train set) composed of 9 arbitrary partitions. Therefore there is no such thing as an avg AUC from 9 partitions; there is only one train AUC. This is not true if you are sure that you train on 1 partition, calculate the metric, train on the 2nd, calculate the metric, and so on, until you have metric results from 9 partitions and average them.\nThe key question:\n\nThe question is what is more relevant here?\n\nIt depends on which question you are answering.\nResults from the test partitions should tell you roughly what performance you can expect when you release the model to the world (make predictions on unseen data). However, it is easy to introduce some sort of data leakage when you use CV, and data leakage makes the results less trustworthy.\nA comparison between training and test should tell you whether you are overfitting your model or whether you should fit it further. I do not have a rule of thumb for how much difference is fine (I suggest further reading about overfitting), but I have never seen anyone accepting a 20% difference.","Q_Score":0,"Tags":"python,gridsearchcv","A_Id":66295285,"CreationDate":"2021-02-20T18:18:00.000","Title":"GridsearchCV in Python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I was just randomly trying out different float to string conversions, in which I found the following issue:\nstr(123456789.123456789)\nwhich is returning\n'123456789.12345679'\nwhere the 8 is missing. Is this a memory issue or what might I be not understanding here?","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":34,"Q_Id":66296389,"Users Score":2,"Answer":"The default floating-point formatting behavior in some Python implementations is now to produce just as many digits as needed to uniquely distinguish the number represented. In the floating-point format most commonly used, 123456789.123456789 is not exactly representable.
When this numeral is encountered in source text, it is converted to the nearest representable value, 123456789.12345679104328155517578125.\nThen, when that is formatted as a string, \u201c123456789.12345679\u201d is printed because fewer digits fail to distinguish it from nearby representable values, like 123456789.123456776142120361328125, and more digits are not necessary, as the \u201c\u202679\u201d suffices to distinguish it from its two neighbors \u201c\u202677\u2026\u201d and \u201c\u202680\u2026\u201d.","Q_Score":0,"Tags":"python,string,floating-point","A_Id":66296500,"CreationDate":"2021-02-20T20:48:00.000","Title":"Conversion of float to str is removing a character in the process","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I was just randomly trying out different float to string conversions, in which I found the following issue:\nstr(123456789.123456789)\nwhich is returning\n'123456789.12345679'\nwhere the 8 is missing. Is this a memory issue or what might I be not understanding here?","AnswerCount":2,"Available Count":2,"Score":0.1973753202,"is_accepted":false,"ViewCount":34,"Q_Id":66296389,"Users Score":2,"Answer":"Try typing the float that you used as an example directly in your Python console: you will see that it prints out as 123456789.12345679 instead of 123456789.123456789. 
This is simply a rounding issue.","Q_Score":0,"Tags":"python,string,floating-point","A_Id":66296410,"CreationDate":"2021-02-20T20:48:00.000","Title":"Conversion of float to str is removing a character in the process","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"i need to create random numbers in python, i have been using the random library, but is this library really random or is it just pseudo random? and if it is pseudo random how can I get real random numbers in python?","AnswerCount":1,"Available Count":1,"Score":0.537049567,"is_accepted":false,"ViewCount":58,"Q_Id":66298189,"Users Score":3,"Answer":"All computer generated random numbers are pseudo-random. If you want a more \"randomized\" version, you can use the secrets module instead of the random module.","Q_Score":0,"Tags":"python,random","A_Id":66298209,"CreationDate":"2021-02-21T01:11:00.000","Title":"Real random python numbers","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm having a hard time connecting my facial recognition system (realtime) to the database.\nI am using python language here. Try to imagine, when the system is doing REAL-TIME face detection and recognition, it will certainly form frame by frame during the process (looping logic), and I want if that face is recognized then the system will write 'known face' in the database. 
But this is the problem: what if the upload to the database is done repeatedly because the same frame is continuously formed?\nThe question is, how do you make the system upload only 1 record to the database, so that if the other frames contain the same image, the system doesn't need to upload data to the database again?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":97,"Q_Id":66298575,"Users Score":0,"Answer":"You don't show any code, but to do what you're asking you want a flag that is set when a face is found. Then clear the flag once the face leaves the frame. To account for false positives you can wait 4-5 frames before clearing the flag and check if the face is still in the frame (i.e. someone turns their head and the tracking loses the face).","Q_Score":0,"Tags":"python,face-recognition,yolo","A_Id":66298603,"CreationDate":"2021-02-21T02:28:00.000","Title":"Connect face recognition model to database efficiently","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Code:\nfrom matplotlib import animation\nOutput:\nImportError: cannot import name 'animation' from partially initialized module 'matplotlib' (most likely due to a circular import)\nThe matplotlib version is 3.3.4","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":2132,"Q_Id":66298746,"Users Score":0,"Answer":"I finally solved the problem after tracking down the file path. I've always been using Spyder from Anaconda, so I uninstalled and reinstalled matplotlib using the Anaconda prompt over and over again; however, the problem was not an error in Anaconda but in the Python 3.7 IDLE install: it seems that if a package is installed there, Spyder will run that one first (which was a bad one in my case). Just by deleting matplotlib in the Python 3.7 install, the problem is
solved at last.","Q_Score":1,"Tags":"python,matplotlib,matplotlib-animation","A_Id":70274777,"CreationDate":"2021-02-21T03:03:00.000","Title":"cannot import name 'animation' from partially initialized module 'matplotlib'","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Code:\nfrom matplotlib import animation\nOutput:\nImportError: cannot import name 'animation' from partially initialized module 'matplotlib' (most likely due to a circular import)\nThe matplotlib version is 3.3.4","AnswerCount":3,"Available Count":2,"Score":0.1325487884,"is_accepted":false,"ViewCount":2132,"Q_Id":66298746,"Users Score":2,"Answer":"Simply updating matplotlib using pip install --upgrade matplotlib worked for me.","Q_Score":1,"Tags":"python,matplotlib,matplotlib-animation","A_Id":67559932,"CreationDate":"2021-02-21T03:03:00.000","Title":"cannot import name 'animation' from partially initialized module 'matplotlib'","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I know that when you use numpy.random.seed(0) you get the same result on your own computer every time. I am wondering if it is also true for different computers and different installations of numpy.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1038,"Q_Id":66299283,"Users Score":0,"Answer":"It all depends upon the type of algorithm implemented internally by the numpy random function. numpy uses a pseudo-random number generator (PRNG) algorithm. What this means is that if you provide the same seed (as the starting input), you will get the same output, and if you change the seed, you will get a different output. So this kind of algorithm is not system dependent.\nBut a true random number generator (TRNG) often relies on some kind of specialized hardware that takes a physical measurement of something unpredictable in the environment, such as light, temperature, electrical noise, or radioactive material. So if a module implements this kind of algorithm then it will be system dependent.","Q_Score":0,"Tags":"python,numpy","A_Id":66299426,"CreationDate":"2021-02-21T04:57:00.000","Title":"Does numpy.random.seed make results fixed on different computers?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"This is a natural language processing related question.\nSuppose I have a labelled train and unlabelled test set. After I have cleaned my train data (stopwords, stemming, punctuation, etc.), I use this cleaned data to build my model.\nWhen fitting it on my test data, will I also have to clean the test data text in the same manner as I did with my train set? Or should I not touch the test data completely?\nThanks!","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":352,"Q_Id":66301306,"Users Score":0,"Answer":"Yes, you should do the same exact preprocessing on your training and testing dataset.","Q_Score":0,"Tags":"python,nlp,data-science,text-processing,train-test-split","A_Id":66302584,"CreationDate":"2021-02-21T10:29:00.000","Title":"Do you have to clean your test data before feeding into an NLP model?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"This is a natural language processing related question.\nSuppose I have a labelled train and unlabelled test set.
After I have cleaned my train data (stopwords, stemming, punctuation, etc.), I use this cleaned data to build my model.\nWhen fitting it on my test data, will I also have to clean the test data text in the same manner as I did with my train set? Or should I not touch the test data completely?\nThanks!","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":352,"Q_Id":66301306,"Users Score":0,"Answer":"Yes, data cleaning is a mandatory step in any machine learning or NLP problem.\nSo you always have to clean your data first and only then feed it to the model.\nRegarding test and train data cleaning: you can clean both; there is no harm in doing this.","Q_Score":0,"Tags":"python,nlp,data-science,text-processing,train-test-split","A_Id":68775998,"CreationDate":"2021-02-21T10:29:00.000","Title":"Do you have to clean your test data before feeding into an NLP model?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I recently wrote my first application in Python and it works quite well.\nA bit later my anti-virus program flagged and uninstalled it as a trojan.\nAfter a bit of research, I believe it is because I haven't signed my code, but it made me think. Couldn't a compiler modify my program by basically adding a piece of malware into otherwise legit code? And how would I figure out if it does?\nThis might be a really dumb question but I couldn't find any answers by googling and am new and trying to learn and understand.","AnswerCount":2,"Available Count":1,"Score":-0.0996679946,"is_accepted":false,"ViewCount":289,"Q_Id":66302117,"Users Score":-1,"Answer":"I work for an anti-malware company and I am really surprised by your post.\n\nAnti-malware programs do not \"uninstall\" malware. They remove files. Though they can do some cleanup, it is not \"uninstalling\".\nYour Python code cannot be \"signed\", as signing applies to an .EXE file or a driver.\nThe 99% likelihood is that you have hit a \"false positive\" - some heuristic\/Machine Learning detection found that your code is similar to known malware.\n\nThe name of the malware and the name of the antivirus product could help. You can report this problem to the antivirus company's support.","Q_Score":2,"Tags":"python,security","A_Id":66303363,"CreationDate":"2021-02-21T12:06:00.000","Title":"How do i know a compiler isn't injecting malware into my program","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am in a really weird problem. I am cleaning a dataset that contains a lot of ISO formatted datetime objects. However, a lot of them are throwing errors. Let me clarify with an example:\nExample 1: date_str1 = '2019-09-18T07:52:53.167-04:00 . I used datetime.datetime.fromisoformat(date_str1) on it and that works.\nExample 2: date_str2 = 2019-09-18T07:52:50.69-04:00 . I used datetime.datetime.fromisoformat(date_str2) on it and that does not work.\nExample 3: date_str2 = 2019-09-18T07:52:50.690-04:00 . I used datetime.datetime.fromisoformat(date_str2) on it and that works.\nApparently, if the seconds value has 3 digits after the decimal point, it works. If there are any fewer it does not work, which is odd because syntactically they are the same.\nAny help would be much appreciated. Thanks in advance for your time.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":465,"Q_Id":66302741,"Users Score":1,"Answer":"I solved it finally. I needed to use the strptime() function for it. The code snippet that worked is: dt.datetime.strptime(date_str2,'%Y-%m-%dT%H:%M:%S.%f%z').
Note two things:\n\nThe timezone directive should be lowercase %z, not uppercase %Z.\nYou need to account for fractional seconds by adding %f to the format string.","Q_Score":1,"Tags":"python,datetime,time,python-dateutil","A_Id":66330219,"CreationDate":"2021-02-21T13:20:00.000","Title":"Python YYYY-MM-DD hh:mm:ss.fff-zz:xx formatted datetime issue","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":1},{"Question":"New to learning code: I started an online learning program for Machine Learning and Data Science. I completed the first project, linear regression, using a Jupyter Notebook and Python 3 with Anaconda. It was saved as an ipynb file.\nI downloaded and saved it on my laptop (PC) but now it will not open. I think maybe I need to download something on my laptop that recognizes the notebook application.\nOk thank you to those who responded to my question. The file opens from Jupyter Notebook just fine; I was trying to open it from the documents folder. Thanks again!!","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":543,"Q_Id":66304161,"Users Score":0,"Answer":"Have you installed it? ;-)\nIf you\u2019re using a menu shortcut or Anaconda launcher to start it, try opening a terminal or command prompt and running the command jupyter notebook.\nIf it can\u2019t find jupyter, you may need to configure your PATH environment variable. If you don\u2019t know what that means, and don\u2019t want to find out, just (re)install Anaconda with the default settings, and it should set up PATH correctly.\nIf Jupyter gives an error that it can\u2019t find notebook, check with pip or conda that the notebook package is installed.\nTry running jupyter-notebook (with a hyphen).
This should normally be the same as jupyter notebook (with a space), but if there\u2019s any difference, the version with the hyphen is the \u2018real\u2019 launcher, and the other one wraps that.","Q_Score":0,"Tags":"python,jupyter-notebook","A_Id":66304323,"CreationDate":"2021-02-21T15:54:00.000","Title":"downloaded a jupyter notebook but it won't open","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"print (\"The number is\", (roll),'.')\nThe number is 5 .\nI want to remove the space between '5' and '.'\nI have tried a few different methods to remove the space before the period, but I am either getting errors or the same result.","AnswerCount":3,"Available Count":2,"Score":0.0665680765,"is_accepted":false,"ViewCount":59,"Q_Id":66305703,"Users Score":1,"Answer":"There are multiple solutions to your question one of the solution you can do by using .format function.\nprint (\"The number is {}.\".format(roll))","Q_Score":1,"Tags":"python","A_Id":66305837,"CreationDate":"2021-02-21T18:33:00.000","Title":"Python: Removing a space before a string","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"print (\"The number is\", (roll),'.')\nThe number is 5 .\nI want to remove the space between '5' and '.'\nI have tried a few different methods to remove the space before the period, but I am either getting errors or the same result.","AnswerCount":3,"Available Count":2,"Score":0.0665680765,"is_accepted":false,"ViewCount":59,"Q_Id":66305703,"Users Score":1,"Answer":"There are multiple solutions to your problem. 
I can recommend f-strings.\nprint (\"The number is\", (roll),'.')\nbecomes\nprint(f\"The number is {roll}.\")","Q_Score":1,"Tags":"python","A_Id":66305760,"CreationDate":"2021-02-21T18:33:00.000","Title":"Python: Removing a space before a string","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using django-email-verification for sending verification link to email of the user but it tales time to send email , i want to send mail with celery for the same to speed it up , please guide me how can i add celery configs?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":127,"Q_Id":66306295,"Users Score":0,"Answer":"Celery isn't going to make this run any faster. What Celery will do for you is make the task asynchronous.","Q_Score":0,"Tags":"python,django,email,celery,email-verification","A_Id":66306824,"CreationDate":"2021-02-21T19:35:00.000","Title":"send email verfication link using django-email-verification with celery","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have 2019a version of MATLAB and I am trying to explore the usage of Python from within MATLAB environment. I have Anaconda 3 installed for Python. In MATLAB, when I issue, pyenv, I get 'Undefined function or variable 'pyenv''\nThe documentation says that Python is supported, but I am not sure why this doesn't work. Any suggestions?\nEdit:\nThanks. 
Solution is to use pyversion, but also set the path with the entire path\npyversion 'C:\\Users\\newuser\\AppData\\Local\\Continuum\\anaconda3\\python.exe';","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":276,"Q_Id":66307112,"Users Score":3,"Answer":"pyenv was introduced in R2019b. In R2019a and older, you need to use the older pyversion function.","Q_Score":0,"Tags":"python,matlab","A_Id":66307421,"CreationDate":"2021-02-21T21:05:00.000","Title":"Use Python within MATLAB environment (2019a)","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I tried everything that I found. Tried to connect with extension but it was unsuccessful (I didn't find a ext config). Tried to internal settings (about:config). Tried connect with JS inside chrome.\nCan I just use proxy for entire process (WebDriver)?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":62,"Q_Id":66313304,"Users Score":0,"Answer":"Chrome cant work with socks5, but you can use it in Firefox with addon (Proxyfoxy)\n\nCreate WEbDrive\nInstall addon\nSetup proxy and select it\nDone!","Q_Score":1,"Tags":"python,python-3.x,selenium-webdriver,socks","A_Id":66313481,"CreationDate":"2021-02-22T09:47:00.000","Title":"How to work with SOCKS5 in selenium Firefox\/Chrome","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"In python, after I set some filter with warnings.filterwarning (or some package I import does), how can I access the list of active filters? 
I tried sys.warnoptions but it always gives me an empty list.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":34,"Q_Id":66316703,"Users Score":1,"Answer":"I found it looking at the source code. It's warnings.filters.","Q_Score":2,"Tags":"python,warnings","A_Id":66316874,"CreationDate":"2021-02-22T13:32:00.000","Title":"How to access active warning filters list?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a Flask server that will fetch keys from Redis. How would I maintain a list of already deleted keys? Every time I delete a key, it needs to be added to this list within Redis. Every time I try to check for a key in the cache, if it cannot be found I want to check if it is already deleted. This would allow me to distinguish in my API between requests for unknown keys and requests for deleted keys.\nI am using the standard redis python library.","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":261,"Q_Id":66321777,"Users Score":1,"Answer":"Instead of deleting the key, you can set the to-be-deleted key with a special value, say, empty string.\nWhen you get key-value pair from Redis, you can check if the value equals to the special value. If it does, then the key has been deleted. 
Otherwise, serve the client with the value.\nAlso, as @Mihai mentioned, you might also need to set a TTL with those deleted keys to avoid Out-Of-Memory problem.","Q_Score":0,"Tags":"python,redis","A_Id":66326358,"CreationDate":"2021-02-22T18:57:00.000","Title":"How would you maintain a list of previously deleted keys in Redis?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm using a Raspberry Pi to dim a light, using python to get 0%....50%...100% and various % brightnesses in between.\nTaking a PWM approach in the code means a flickering light, which won't work for the lower % brightness, as the flicker will become more apparent. I can't seem to find a method to code brightness level without PWM, but there must be! Any suggestions gratefully received!\n(this is my first go at coding with python and trying to see what is possible)","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":114,"Q_Id":66322118,"Users Score":0,"Answer":"If you want that many level of brightness PWM is your only option. Don't worry you won't notice the flicker anyway. If you don't want to use PWM then you will have to construct external circuit, for each brightness level you will have to calculate the voltage and add voltage divider according to it. Then Switch between them from raspberry pi code. you can do this for 100 brightness level if you want but it will be a big mess. Use this if you are okay with 3-4 levels of brightness","Q_Score":0,"Tags":"python,raspberry-pi,home-automation","A_Id":66323057,"CreationDate":"2021-02-22T19:21:00.000","Title":"Dimming Lights without PWM. Possible? 
Using Raspberry Pi","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Say you create your own custom word embeddings in the process of some arbitrary task, say text classification. How do you get a dictionary like structure of {word: vector} back from Keras?\nembeddings_layer.get_weights() gives you the raw embeddings...but it's unclear which word corresponds to what vector element.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":162,"Q_Id":66324507,"Users Score":1,"Answer":"This dictionary is not a part of keras model. It should be kept separately as a normal python dictionary. It should be in your code - you use it to convert text to integer indices (to feed them to Embedding layer).","Q_Score":1,"Tags":"python,keras,nlp,word-embedding","A_Id":66327719,"CreationDate":"2021-02-22T22:42:00.000","Title":"How to get word embeddings back from Keras?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am building a web application that has some API calls to other services and currently I am just putting these API keys and secrets in variables which is not very secure.\nMy objective:\nTo store\/secure these API credentials either in the code or store it into the database in encrypted form maybe.\nI am currently coding in PHP and I have scripts in python to call these API services. I am planning to do up an API page where users can enter API credentials and it will be encrypted\/hashed and stored into the database. But I am not sure if this is the right way or how to go about it.\nAny help on this is welcomed. 
Thank you.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1902,"Q_Id":66327518,"Users Score":0,"Answer":"At some levels of security you can't be sure the key stays private. First of all, hashing is not suitable because you can't recover the API key from its hash, so you couldn't use that API! If you encrypt the API key, there will be a key and an algorithm that you use for your encryption; if that information is leaked, you can do nothing to stop it. In general, though, you trust some layers of security, and you could, say, store the application key in an .env file. I think encryption can be a good idea for storing keys in the database in case data is leaked through database attacks.","Q_Score":0,"Tags":"python,php,api","A_Id":66327651,"CreationDate":"2021-02-23T05:26:00.000","Title":"How to securely store API keys and secrets?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I run the training phase of a TF2 model (based on object detection pre-trained models from the TF2 Model Zoo) on GPU (Nvidia 3070).\nIs there some way to run the evaluation phase (for checkpoints created by training) on CPU?\n\nBecause the training phase allocates almost all of the GPU memory, I can't run both of them (train and eval) on the GPU.\n\nOS - Ubuntu 20.04\n\nGPU - Nvidia 3070 (driver 460)\n\nTF - 2.4.1\n\nPython - 3.8.5\n\n\nThank you.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":144,"Q_Id":66328975,"Users Score":0,"Answer":"In my case, the solution is to define, inside the evaluation function:\nos.environ['CUDA_VISIBLE_DEVICES'] = '-1'","Q_Score":0,"Tags":"python-3.x,tensorflow,tensorflow2.0,cpu","A_Id":66329827,"CreationDate":"2021-02-23T07:44:00.000","Title":"TF2 Model: How to run training on GPU, and evaluation on CPU","Data Science and Machine Learning":1,"Database and 
SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have C++ app where it does many calculations, but the most of time it needs to execute simple scripts by PyRun_SimpleString() so I'm curious if is there something in Python\/C API, what would do faster following tasks:\n\ndo simple assignents like x = 0.1 (changing over time)?\nexecute script storaged in string (maybe something like compile once to run faster)?\n\nFor now I do all these tasks by executing PyRun_SimpleString(). In loop performance loss is significatn.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":83,"Q_Id":66332450,"Users Score":1,"Answer":"As you suspect, there's indeed two parts to running Python code: interpretation and execution. See Py_CompileString and PyEval_EvalCode. But for simple statements like x=0.1, you might even consider modifying locals yourself via PyLong_FromDouble(0.1)","Q_Score":0,"Tags":"c++,optimization,python-c-api","A_Id":66333553,"CreationDate":"2021-02-23T11:41:00.000","Title":"Is PyRun_SimpleString() function slow?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a python script that runs a data migration script through a transaction interfacing with a MySQL DB. 
I am in the process of moving this script over to NodeJS, which is accessible through an API endpoint.\nThe problem I am having is that, since my Python data migration is wrapped in a transaction, my Node process cannot interact with the new data.\nFor now I have started to collect relevant information in my Python script and then send it over in the POST body to my Node script, but this strategy has its own complications with keeping data in sync and then responding with the new information that I need to make sure to insert back in my Python process.\nIs there a better way that I can share the transaction data between my Python and my Node process?\nThank you","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":47,"Q_Id":66334825,"Users Score":1,"Answer":"Here are some ideas:\n\nStore data in a cache server that both the Python app and the Node.js app have access to. Popular software for this is Memcached or Redis.\n\nUse a message queue to send data back and forth between the apps. Some examples are RabbitMQ or ActiveMQ.\n\nCommit the data in the database using your Python app. Then make an HTTP POST request to the Node.js app to signal that the data is ready (the POST request doesn't need to contain the data). The Node.js app does what it's going to do with the data before sending the HTTP response. 
So the Python app knows that once it receives the response, the data has been updated by Node.js.","Q_Score":1,"Tags":"python,mysql,node.js,api","A_Id":66335631,"CreationDate":"2021-02-23T14:13:00.000","Title":"Sharing transaction between processes MySQL","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"when i do command prompt on windows for the django version it doesn't come up even though its successful installed on pycharm help please.\ni am new to python as well.\nwhen i go on to command prompt on windows to check django version after adding python -m django --version\ni get this result:\n\nC:\\Users\\User\\AppData\\Local\\Programs\\Python\\Python39\\python.exe: No\nmodule named django\n\ni am using pycharm as IDE and it seems when i did install django on pycharm it was successful should i worry about the django version coming up on command prompt or not.\nI am using windows 10 to do my coding for Uni so its not on any other OS :)","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":162,"Q_Id":66336320,"Users Score":0,"Answer":"try this command after reopening your cmd\ndjango-admin --version","Q_Score":0,"Tags":"python,django,pycharm,command-prompt","A_Id":66338995,"CreationDate":"2021-02-23T15:42:00.000","Title":"when i do command prompt for the django version it doesnt come up even though its successful on pycharm help please","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am on data science. I have a .csv file with 5kk records and 3.9gigas of size. Whats the best pratice to deal with it? 
I normally use vscode or jupyter and even when i set max-memory to 10gigas the operations like load etc are taking too much time to complete.\nWhat do you recommend to improve my work?\nnotebook lenovo S145 20gigas ram i7-8565U - Ubuntu\nThanks","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":18,"Q_Id":66342822,"Users Score":1,"Answer":"If you want to bring a CSV into a database for reporting, one fairly quick and easy option is to use an external table. It uses syntax similar to SQLLDR in the create table definition. Once established, the latest saved CSV data will immediately be available as a table in the database.","Q_Score":1,"Tags":"sql,python-3.x,pandas,visual-studio-code,data-science","A_Id":66342911,"CreationDate":"2021-02-24T00:06:00.000","Title":"Best approach to work with .csv files with 4 gigas","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"In PYPI SpeechRecognition, it states that the Package is only supported up to Python version 3.6\nI have successfully got it working in Python 3.6. in the past. But now upgraded to Python3.9.1. SpeechRecognition does not work as it is not supported.\nDoes anyone know a good workaround (in Python) to handle SpeechRecognition that returns the text not an audio file?\nMany thanks!!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":948,"Q_Id":66343073,"Users Score":0,"Answer":"Both Google and AWS will work, from my experience, Google is faster and more accurate but does not work as well with compressed file types, so you'd best go with a wav. 
If these solutions aren't applicable, you always can use a virtual environment with an older version of python.\nI also read through their documentation and it said that versions 3.3+ are all supported, so maybe updating the library will fix your issue.\n(for the Google and AWS transcription services I have created some handy toolkits which would make the process much easier)\nHope this helped!","Q_Score":2,"Tags":"python,speech-recognition,python-3.9","A_Id":66497639,"CreationDate":"2021-02-24T00:42:00.000","Title":"SpeechRecognition available in Python 3.9+","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm adding this question because the internet had no answer. I'll answer it here.\n(google searching the error message led to only 2 pages, neither helpful, hopefully this is 3)\nWhen trying to\npython -m spacy download en_core_web_sm\nI got the following error message:\nnumpy.ndarray size changed, may indicate binary incompatibility. 
Expected 88 from C header, got 80 from PyObject\nAll subsequent error messages were misleading.\nthe solution that solved it was to downgrade spacy\n pip install spacy==2.3.0","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":130,"Q_Id":66343200,"Users Score":1,"Answer":"Downgrade spacy\n pip install spacy==2.3.0","Q_Score":0,"Tags":"python,pandas,numpy,spacy,pycaret","A_Id":66343201,"CreationDate":"2021-02-24T00:57:00.000","Title":"spacy pandas pycaret compatibility issues","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I did pip freeze, and found requests, therefore I have requests, but I am getting an error saying\n\nModuleNotFoundError: No module named 'requests'\n\nI just installed Python 3.9.2, and Python 3.8 is still on my computer. (Stating this in case it's a contributing factor to my problem.)\nPlease help!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":91,"Q_Id":66344764,"Users Score":0,"Answer":"Are you sure requests was installed in the correct python install, you can do python3.9 -mpip list | grep requests to check if requests is installed for python3.9, if it doesn't show up you can use python3.9 -mpip install requests to install it. 
(the error might be that the pip command is the wrong Python instance's pip command)","Q_Score":0,"Tags":"python","A_Id":66344793,"CreationDate":"2021-02-24T04:34:00.000","Title":"I have requests installed, but I am still getting the ModuleNotFoundError: No module named 'requests' error","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have set up a telegram bot to fetch data from my mysql db.\nIt was running well until after about 1 day, and then it just cannot connect:\nFile \"\/usr\/local\/lib\/python3.8\/site-packages\/mysql\/connector\/connection.py\", line 809, in cursor\nraise errors.OperationalError(\"MySQL Connection not available.\")\nI have checked that the script is fine and I can even run it perfectly on the server, while at the same time, if it is run through the bot, it throws the above errors.\nEven so, it will return to normal after I reboot the apache server. Can anyone help? Thanks first.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":349,"Q_Id":66344843,"Users Score":0,"Answer":"It turns out that it's not related to my bot, but to the SQL connection called by my Django server (not the ORM, but mysql.connector).\nI didn't close the connection properly (I only closed the cursor). After I closed the connection with conn.close() immediately after the fetch, the problem vanished.\nYet I still don't understand why it doesn't cause any problem when I run the script manually. I feel it's something about connection time. I am no expert in MySQL; in fact, I am just an amateur at programming. Let's see if anyone can give a further solution. 
(I have changed the title in order to make my problem more relevant.)","Q_Score":0,"Tags":"python,mysql,telegram-bot","A_Id":66410549,"CreationDate":"2021-02-24T04:44:00.000","Title":"OperationalError(\"MySQL Connection not available.\")","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"By default scipy.io.wavfile.read(file) set sample rate 8000, where I in Librosa by default is 22.05k.\nI want to set a 22.05k sample rate in scipy.io.wavfile.read.\nIn the documentation, there is no way to define the sample rate explicitly.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":80,"Q_Id":66345606,"Users Score":0,"Answer":"No, that's not how it works. Each wave file HAS a sample rate. scipy.io.wavfile.read tells you what that rate is. If you want to change it, then you have to do a sample rate conversion.","Q_Score":0,"Tags":"python,scipy","A_Id":66346965,"CreationDate":"2021-02-24T06:15:00.000","Title":"How to define sample rate explicitly in scipy.io.wavfile.read?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am currently running 8 Random Forest models(each with grid search) on 8 different datasets for 1000 times. To save time, I opened up 8 different terminals and am running each models in parallel. Now, my question is\nIs it faster to train each random forest models with the argument n_jobs ==-1 or is it better to assign number of cores such as n_jobs=3? I have 26 cores available. Thus I may be able to assign at most 3 cores to each model.\nLooking at the cpu usage with htop, I can see that running with n_jobs=-1 already makes full use of all cpus. 
Would it be possible that setting the n_jobs=-1 actually results in bottleneck when distributing the data to each core?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":429,"Q_Id":66345860,"Users Score":0,"Answer":"The fastest in my opinion would be to use n_jobs=-1 in all 8 terminals and your computer will internally allocate as needed the necessary CPU resources to each worker.","Q_Score":1,"Tags":"python,scikit-learn,parallel-processing,cpu","A_Id":66348426,"CreationDate":"2021-02-24T06:39:00.000","Title":"Scikit Learn) is n_jobs=-1 still faster than n_jobs= c when we are running multiple Random Forest in Paralllel?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to run some python code from my c program by using the python.h header but I keep getting\nfatal error: Python.h: No such file or directory\nAfter checking my anaconda environment, I have found a Python.h file in ...Anaconda\/envs\/myenv\/include\nI tried adding this path to my system variable's PATH but it didn't work. How can I include the Python.h file from my anaconda env to my program?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":89,"Q_Id":66349253,"Users Score":1,"Answer":"PATH is only used for executables, not C headers. Tell your compiler or build system to add this path. 
On gcc and clang, for example, it's the -I flag.","Q_Score":0,"Tags":"python,c","A_Id":66349443,"CreationDate":"2021-02-24T10:44:00.000","Title":"Missing Python.h in Windows","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am looking for a solution to find the index (number) of the chosen OptionMenu option.\nPython 3.8\nValues can be the same.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":115,"Q_Id":66354502,"Users Score":0,"Answer":"I just threw out tk.OptionMenu and used ttk.Combobox instead","Q_Score":0,"Tags":"python,user-interface,tkinter","A_Id":66355531,"CreationDate":"2021-02-24T16:05:00.000","Title":"How to find the index of Python tkinter's OptionMenu option?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I tried using a '.txt' file located in the same folder as the '.py' file which is supposed to manipulate it, and it never works with just the filename and extension in Python 3.8. I am on Windows 10 Pro.\nI am just sharing in case you run into the same thing.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":22,"Q_Id":66356240,"Users Score":0,"Answer":"I have to use the absolute path all the time, which is the only working approach, i.e.: ('C:\\Users\\Carlos\\alice.txt'). 
Make sure to use the double backslashes otherwise you get an error for the escape character in windows.","Q_Score":0,"Tags":"python","A_Id":66356241,"CreationDate":"2021-02-24T17:51:00.000","Title":"Python 3.8+ Does Not Recognize File In Same Folder Except Using Absolute Path","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've been working inside a conda environment (python==3.6). I've been trying to make a requirements.txt using\npip freeze > requirements.txt\nThe file shows the following:\npandas @ file:\/\/\/C:\/ci\/pandas_1602088210462\/work\nand\nPillow @ file:\/\/\/C:\/ci\/pillow_1609842634117\/work\nI was expecting to see:\npandas==1.1.3\nDoes anyone has a resolution.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":45,"Q_Id":66358668,"Users Score":0,"Answer":"You can use conda list --export > requirements.txt to export a conda-compliant list of package you can then import using:\nconda install --file requirements.txt\nOr, as others said, you can also use: pip list --format=freeze > requirements.txt","Q_Score":1,"Tags":"python,pip,conda,environment,requirements.txt","A_Id":66358741,"CreationDate":"2021-02-24T20:51:00.000","Title":"Creating requriements.txt in conda doesn't show versions","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"My team use VSCode to edit a Python project and I'd like to keep some of the project settings under version control (we use Git). But I'd also like to leave some project settings free to be customized by each developer.\nI've already commited the .vscode\/settings.json configuration. 
It has the project standard configuration:\n\ncode formatter to be used\nlevel of type checking\nlinter selection\ntest framework used\n\nBut some configurations should be left to the developer. Our greatest problem is with the Python interpreter. We are using the standard venv module for our virtual environments. So we must tell VSCode which interpreter it must use.\nOur envs are created in the same directory of the project and put in .gitignore. The main problem is that venv has a different structure if you are in Windows or in Linux. In Windows, we must use the interpreter located in Scripts\/python.exe and in Linux we must use bin\/python.\nBeyond the interpreter setting, it would be nice if each developer could have his\/her own preferences not commited in the repository.\nIs it possible to have more than one VSCode project settings file and just one of them commited?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":222,"Q_Id":66361255,"Users Score":0,"Answer":"This isn't really an answer, but a solution of my problem with a portable path for the Python Interpreter.\nI started to create my virtual environments inside my repository in a directory called .venv. The interpreter path is configured with a path relative to the workspace root dir. I also add the path to my .gitignore file.\nIt works fine for everybody and also has the benefit of forcing everybody to use a virtual environment.","Q_Score":0,"Tags":"git,visual-studio-code,vscode-settings,python-venv","A_Id":68840103,"CreationDate":"2021-02-25T01:43:00.000","Title":"VSCode: How to keep some, but not all, project settings under version control?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"do I convert it a txt file? how do I inject the new line in between the other lines? 
I'm trying to inject a wallet address to a simple mining batch file without needing to physically open it prior.\npretty much the last step to automating my mining rigs for full self sufficiency.\nif anyone has any way of doing this, please describe in full detail or show an example, as I am self taught and in way over my head for a project that's exceeding expectations before release lol","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":49,"Q_Id":66362462,"Users Score":0,"Answer":"I would read in the entire file with f.readlines() so you get a list of strings (where each string represents a line in the file), write some logic that determines where the new string should go in between, and then re-write that to a file after.","Q_Score":1,"Tags":"python,html,batch-file,helper","A_Id":66363316,"CreationDate":"2021-02-25T04:35:00.000","Title":"how to inject code with a batch into a batch that has existing code","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is there a minimum value for the WebdriverWait in Python Selenium? By default it is set as 0.5s\ndef __init__(self, driver, timeout, poll_frequency=POLL_FREQUENCY, ignored_exceptions=None):\nAlso, is there any substantial drawback to reducing the polling time?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":927,"Q_Id":66363499,"Users Score":1,"Answer":"Parameter POLL_FREQUENCY is expected to be float which minimal value can be queried with sys.float_info.min. In my system it returns 2.2250738585072014e-308.\nThere is no \"algorithmic\" drawback since WebDriver is actually a REST service and you use it in synchronous way. 
However, too short a period would result in too many useless calls to the driver, which could impact the performance of your system, introduce noise into your logs, consume more network traffic, etc.","Q_Score":0,"Tags":"python,selenium,selenium-webdriver,webdriverwait","A_Id":66366686,"CreationDate":"2021-02-25T06:34:00.000","Title":"Python Selenium Webdriver Wait Poll Frequency","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"When I ran $ snakeviz code_profile.prof from CLI :\nsnakeviz web server started on 127.0.0.1:8080; enter Ctrl-C to exit http:\/\/127.0.0.1:8080\/snakeviz\/%2Fhome%2Fatmadeep%2FProjects%2FtrafficAI-master%2Fprofile_info-60.prof snakeviz: error: no web browser found: could not locate runnable browser\nAfter some search, I tried running it in server only mode and got this error using --server argument :\nTraceback (most recent call last): File \"\/home\/atmadeep\/anaconda3\/envs\/work-env\/lib\/python3.7\/site-packages\/tornado\/web.py\", line 1681, in _execute result = self.prepare() File \"\/home\/atmadeep\/anaconda3\/envs\/work-env\/lib\/python3.7\/site-packages\/tornado\/web.py\", line 2430, in prepare raise HTTPError(self._status_code) tornado.web.HTTPError: HTTP 404: Not Found \nWhat might be the problem here? Is the profile generated corrupt or am I not understanding something related to snakeviz.\nNote: I'm not running this code in jupyter-notebook.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":734,"Q_Id":66366029,"Users Score":0,"Answer":"So, here's the solution. From the console, run:\n$ python -m snakeviz profile.prof --server\nWhen running from the console, snakeviz needs to be called as shown above.","Q_Score":0,"Tags":"python,snakeviz","A_Id":66499234,"CreationDate":"2021-02-25T09:42:00.000","Title":"Can't run snakeviz. 
What might be the problem here?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"When I ran $ snakeviz code_profile.prof from CLI :\nsnakeviz web server started on 127.0.0.1:8080; enter Ctrl-C to exit http:\/\/127.0.0.1:8080\/snakeviz\/%2Fhome%2Fatmadeep%2FProjects%2FtrafficAI-master%2Fprofile_info-60.prof snakeviz: error: no web browser found: could not locate runnable browser\nAfter some search, I tried running it in server only mode and got this error using --server argument :\nTraceback (most recent call last): File \"\/home\/atmadeep\/anaconda3\/envs\/work-env\/lib\/python3.7\/site-packages\/tornado\/web.py\", line 1681, in _execute result = self.prepare() File \"\/home\/atmadeep\/anaconda3\/envs\/work-env\/lib\/python3.7\/site-packages\/tornado\/web.py\", line 2430, in prepare raise HTTPError(self._status_code) tornado.web.HTTPError: HTTP 404: Not Found \nWhat might be the problem here? Is the profile generated corrupt or am I not understanding something related to snakeviz.\nNote: I'm not running this code in jupyter-notebook.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":734,"Q_Id":66366029,"Users Score":0,"Answer":"on a server you have to add --server argument and to click on the link in the terminal. Putting a threshold at 1\/100 helps also.","Q_Score":0,"Tags":"python,snakeviz","A_Id":71391855,"CreationDate":"2021-02-25T09:42:00.000","Title":"Can't run snakeviz. 
What might be the problem here?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I've tried to print a value using this code.\n\nprint(sh1.cell(2,1).value)\n\nwhere sh1 is a valid worksheet of the workbook, and the cell (2,1) contains a string.\nBy the way, I'm using PyCharm.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":26,"Q_Id":66368595,"Users Score":0,"Answer":"I figured it out: I was trying to retrieve a value that was added to the workbook after the last time I saved the file; once I saved it again, Python retrieved it.","Q_Score":0,"Tags":"python-3.x,excel,cell,return-value,openpyxl","A_Id":66374330,"CreationDate":"2021-02-25T12:35:00.000","Title":"How do I retrieve a value from an Excel cell through Python using the Cell method, using Python 3.9 and openpyxl?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I made an API for my AI model, but I would like to avoid any downtime when I update the model. I'm looking for a way to load the new model in the background and, once it's loaded, swap it in for the old one. I tried passing values between subprocesses, but it didn't work well. Do you have any idea how I can do that?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":44,"Q_Id":66370017,"Users Score":0,"Answer":"You can place the serialized model in a raw storage, like an S3 bucket if you're on AWS. In S3's case, you can use bucket versioning which might prove helpful. Then set up some sort of trigger. You can definitely get creative here, and I've thought about this a lot. 
In practice, the best options I've tried are:\n\nSet up an endpoint that when called will go open the new model at whatever location you store it at. Set up a webhook on the storage\/S3 bucket that will send a quick automated call to the given endpoint and auto-load that new item\nSame thing as #1, but instead you just manually load it. In both cases you'll really want some security on that endpoint or anyone that finds your site can just absolutely abuse your stack.\nSet a timer at startup that calls a given function nightly, internally running within the application itself. The function is invoked and then goes and reloads.\n\nCould be other ideas I'm not smart enough (yet!) to use, just trying to start some dialogue.","Q_Score":0,"Tags":"python,fastapi","A_Id":66394917,"CreationDate":"2021-02-25T14:02:00.000","Title":"Load new model in background","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I was able to convert a .py file to and exe file,\nhowever when I try to send it via Gmail, it detects as a virus.\nAlso, when trying to transfer the file on a USB flash drive, the computer says it's a virus.\nAny ideas on how to fix this?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":118,"Q_Id":66370571,"Users Score":0,"Answer":"Apart from getting your exe signed (not really a viable option unless you're working on a big and important project) or writing the program in a natively compiled programming language like C, no, there is no way to avoid the detection since the Py2Exe converter you're using embeds the Python interpreter and all needed dependencies into the binary, which is a technique often used by viruses.\nEDIT FOR:\nI didn't actually get the fact that Gmail is the thing blocking the exe, not your AV. 
Well, as said by other comments, Gmail blocks certain files by default. Try adding the exe to a zip or rar archive and send that instead of the plain .exe.","Q_Score":0,"Tags":"python,exe,converters,antivirus,virus","A_Id":66370775,"CreationDate":"2021-02-25T14:34:00.000","Title":"Created an EXE file from .py and it's detected as virus","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I was running a deep learning program on my Linux server and I suddenly got this error.\nUserWarning: CUDA initialization: Unexpected error from cudaGetDeviceCount(). Did you run some cuda functions before calling NumCudaDevices() that might have already set an error? Error 804: forward compatibility was attempted on non supported HW (Triggered internally at \/opt\/conda\/conda-bld\/pytorch_1603729096996\/work\/c10\/cuda\/CUDAFunctions.cpp:100.)\nEarlier when I just created this conda environment, torch.cuda.is_available() returned true and I could use CUDA & GPU. But all of a sudden I could not use CUDA and torch.cuda.is_available()returned false. What should I do?\nps. I use GeForce RTX 3080 and cuda 11.0 + pytorch 1.7.0. It worked before but now it doesn't.","AnswerCount":2,"Available Count":1,"Score":1.0,"is_accepted":false,"ViewCount":24066,"Q_Id":66371130,"Users Score":24,"Answer":"I just tried rebooting. Problem solved. 
Turned out that it was caused by NVIDIA NVML Driver\/library version mismatch.","Q_Score":26,"Tags":"python,linux,pytorch","A_Id":66385167,"CreationDate":"2021-02-25T15:04:00.000","Title":"CUDA initialization: Unexpected error from cudaGetDeviceCount()","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I was recently trying to solve a data science test. Part of the test was to get the number of observations in a dataset for which the variable X is less than the 4th 5-quantile of this variable X.\nI don't realy understand what they meant by the 4th 5-quantile! I tried using pandas df.quantile function but I wasn't able to figure out how to use it in my case","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1216,"Q_Id":66372010,"Users Score":0,"Answer":"4th 5-quantile translates value = data.quantile(4\/5)","Q_Score":0,"Tags":"python,pandas,dataframe,statistics,data-science","A_Id":66394110,"CreationDate":"2021-02-25T16:00:00.000","Title":"Is there a pandas method to find the 4th 5-quantile of a dataset?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have two applications that access the same DB. One application inserts data into a table. The other sits in a loop and waits for the data to be available. If I add a new connection and close the connection before I run the SELECT query I find the data in the table without issues. I am trying to reduce the number of connections. I tried to leave the connection open then just loop through and send the query. When I do this, I do not get any of the updated data that was inserted into the table since the original connection was made. 
I get that I can just re-connect and close, but this is a lot of overhead if I am connecting and closing every second or two. Any ideas how to get data that was added to a DB from an external source with a SELECT query, without having to connect and close every time in a loop?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":40,"Q_Id":66372325,"Users Score":0,"Answer":"Do you commit your insert?\nNormally the best approach is to close your connection; opening a new connection for the select query does not generate much overhead.","Q_Score":0,"Tags":"python,mysql,sql,pymysql","A_Id":66372679,"CreationDate":"2021-02-25T16:17:00.000","Title":"pymysql SELECT * only detecting changes made externally after instantiating a new connection","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Can I use the discord API in python to harvest messages from a server (not my own server)? 
Assuming you have an invite link.\nThanks","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":661,"Q_Id":66372939,"Users Score":0,"Answer":"Well, if you're using a Discord bot, you need them to invite your bot to their server.\nOther than that, you could theoretically listen with a bot on your own account, but that would be against the Discord TOS.","Q_Score":0,"Tags":"python,discord,discord.py","A_Id":66373023,"CreationDate":"2021-02-25T16:53:00.000","Title":"Is it possible to harvest messages from other people's Discord servers?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm new to pipenv and it might be some very newbie problem I'm facing.\nI'm using Python 3.9 and installed pipenv using python3 -m pip install pipenv.\nI have a project with a requirements.txt, and after running pipenv install -r requirements.txt it was supposed to create a virtual environment, but after running pipenv shell and pipenv run src\/manage.py runserver it says:\nError: the command src\/manage.py could not be found within PATH or Pipfile's [scripts]\nThe virtual environment was created at \/Users\/myuser\/.local\/share\/virtualenvs\/project1-iLzXCwVe and not in the workspace. Is it possible it has something to do with that? 
Any way this can be solved?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":508,"Q_Id":66373902,"Users Score":0,"Answer":"If you want to run src\/manage.py using the syntax pipenv run you will need to be in the root directory and either need to change your command to pipenv run python src\/manage.py or make manage.py executable to leave it as pipenv run src\/manage.py\nAlso note that you don\u2019t need to use pipenv run if you are actively utilizing your virtual environment (which is activated by pipenv shell).","Q_Score":0,"Tags":"python,django,pipenv","A_Id":66417523,"CreationDate":"2021-02-25T17:53:00.000","Title":"Pipenv doesn't recognize virtual environment that was created by itself","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am working with a data file of customer addresses. 
The source data contains foreign characters and is UTF-8 encoded.\nI import the data thus:\ncolumns = ['userid','email','firstname','lastname','phonenumber','fax','address','unit','city','province','postalcode','listing_geopoint','website','tier']\ndata = pd.read_csv(file, delimiter=',', usecols=columns, encoding='utf-8')\n...execute some manipulation (de-duping mainly)\nand then export the data thus:\nclinic.to_excel('clinic-'+revision+'.xlsx',index=False)\n(and I have tried this too)\nclinic.to_csv('clinic-'+revision+'.csv',sep=seperator,index=False, encoding='utf-8')\nIn both cases when I open the export file I get the raw unicode value for foreign chars and not the foreign character.\ne.g.\n3031 boul de la Gare Cliniques Sp\\u00e9cialis\\u00e9es 3eme \\u00e9tage\ninstead of the correct output\n3031 boul de la Gare Cliniques Sp\u00e9cialis\u00e9es 3eme \u00e9tage\nWhat am I missing?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":56,"Q_Id":66374325,"Users Score":1,"Answer":"Are you sure your initial data is encoded in UTF-8?\nI've encountered European characters in latin-1 encoding before so I would try reading in and exporting the csv with (... encoding='latin1')","Q_Score":1,"Tags":"python,pandas,csv,utf-8","A_Id":66374415,"CreationDate":"2021-02-25T18:22:00.000","Title":"Exporting Data to CSV Pandas - issue with Foreign Characters","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a table and I want to find the lowest number associated with each 'leadId' in my table. 
Here is a snapshot of it below:\n\n\n\n\nIndex\nleadId\nrepId\nhoursSinceContacted\n\n\n\n\n1\n261\n1111\n10391\n\n\n2\n261\n2222\n10247\n\n\n3\n261\n3333\n1149\n\n\n4\n261\n4444\n10247\n\n\n5\n262\n5555\n551\n\n\n6\n262\n6666\n982\n\n\n6\n262\n3333\n214\n\n\n\n\nIs there a groupby statement I can use to get a table that looks like this?:\n\n\n\n\nIndex\nleadId\nrepId\nhoursSinceContacted\n\n\n\n\n3\n261\n3333\n1149\n\n\n6\n262\n3333\n214\n\n\n\n\nAny suggestion will be much appreciated.","AnswerCount":3,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":36,"Q_Id":66374908,"Users Score":1,"Answer":"You can do:\ndf.groupby('leadid').agg({'hoursSinceContacted' : 'min'}).reset_index()","Q_Score":1,"Tags":"python,python-3.x,pandas,dataframe,pandas-groupby","A_Id":66375321,"CreationDate":"2021-02-25T19:07:00.000","Title":"How to find the lowest number of a grouped dataframe in Python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is there a difference between ContextVar and Global Var within Google Cloud Functions?\nI noticed that as Google tries to re-use GCF instances some global vars classes are reused from one GCF invocation to another and not init at the start of each GCF invocation. I need each of those global var classes to be unique for each GCF invocation.\nAre ContextVars unique for each GCF invocation?\nCurrently I assign those global vars to None and re-init afterwards to ensure fresh init of each class","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":59,"Q_Id":66375148,"Users Score":0,"Answer":"You have your Cloud Functions, I assume, an HTTP cloud functions. 
(the same applies to background functions; it's just for my example).\nNow, test your HTTP Cloud Function: create a webserver (with Flask, for example, since you seem to be using Python).\nStart your webserver.\n\nThat's the context. Now my explanation:\n\nWhen a Cloud Function is created, the platform runs a webserver (Flask) for it.\nWhen a request comes in, the webserver gets it and calls the \"function\" to process it (i.e. the Cloud Function).\n\nSo GlobalVars and ContextVars in Cloud Functions have exactly the same lifecycle as in your local webserver. There is nothing magic or strange going on.\nEven more useful: you can test this locally, which is quicker and easier!","Q_Score":0,"Tags":"python,google-cloud-functions,python-3.7,python-contextvars","A_Id":66376936,"CreationDate":"2021-02-25T19:24:00.000","Title":"Google CloudFunctions: ContextVar vs Global Var","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"With the Google Cloud CLI you can specify a local jar with the --jars flag. However, I want to submit a job using the Python API. I have that working, but when I specify the jar, if I use the file: prefix, it looks on the Dataproc master cluster rather than on my local workstation.\nThere is an easy workaround, which is to just upload the jar using the GCS library first, but I wanted to check if the Dataproc client libraries already support this convenience feature.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":72,"Q_Id":66376451,"Users Score":3,"Answer":"Not at the moment. 
As you mentioned, the most convenient way to do this strictly using the Python client libraries would be to use the GCS client first and then point to your job file in GCS.","Q_Score":2,"Tags":"python,google-cloud-dataproc","A_Id":66378101,"CreationDate":"2021-02-25T21:08:00.000","Title":"Can I Upload a Jar from My Local System using Cloud Dataproc Python API?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I have a Python project where the main file has spaces in the name. I understand that this is not encouraged, but in this case, I think it's necessary. The file is being compiled into a stand alone executable using py2app--which uses the file name for the application name when building the executable (app name, menu references, etc.). This works fine because the base file is not imported anywhere within the project and py2app handles the spaces gracefully. Let's call the file Application Name.py.\nIn order to run unit tests against Application Name.py, however, I have to eliminate the spaces in order to import the file into unittest. I'm unable to use importlib or __import__ because the file is not constructed as a package, so both approaches fail. 
My workflow has been to refactor the file name to application_name.py, run the unit tests, and then refactor the name to Application Name.py before compiling it into Application Name.app.\nSo the options appear to be:\n\nKeep doing what I'm doing (workable, but not ideal),\nCreate a wrapper called Application Name.py that imports application_name.py where the wrapper doesn't need to be unit tested (seems silly),\nConvert Application Name into a package so I can use importlib (seems like overkill), or\nSomething else entirely.\n\nIs there some way to gracefully handle file names with spaces in unit testing that I'm not seeing or should I just suck it up?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":53,"Q_Id":66379066,"Users Score":1,"Answer":"Seems like option 2 probably works best.\nOption 1 and 2 are your best bets (yes 3 is a bit overkill), and although 2 seems excessive, it does isolate your python logic from your py2app requirements - now Application Name.py is a \"py2app wrapper file\", and application_name.py contains your actual logic.\nThis works better than Option 1 because separation of responsibilities is generally preferred. 
If you come up with other requirements for your application name, you'd want to have to deal with just the \"py2app wrapper file\", and not change anything related to actual logic.\nYour current workflow works too, but it does mean more manual renaming when you want to run unit tests - what if you want to automate the unit testing process?","Q_Score":0,"Tags":"python-3.x,python-unittest,py2app","A_Id":66379114,"CreationDate":"2021-02-26T01:59:00.000","Title":"Unit Testing File with Space in the Name","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"When trying to use requests in my function, it only timeout without additional error. I was trying to use rapidapi for amazon. I already have the host and key, but when I test my function it always timeout. My zip file and my function were on the same directory and I knew that my code was correct.","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":55,"Q_Id":66382131,"Users Score":2,"Answer":"I just figured out that my VPC configuration in Lambda was can only access within the resources of the VPC. I just removed the VPC and it now runs. But when your lambda function will connect to your database, you need to add and configure your VPC.","Q_Score":0,"Tags":"python,amazon-web-services,aws-lambda,python-requests","A_Id":66382132,"CreationDate":"2021-02-26T08:03:00.000","Title":"python requests in AWS Lambda Timeout","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am trying to install PyQt5 on my Windows 10 python 3.7. 
I tried several things suggested by different users, like:\n\nInstalling using pip install from cmd.\nA \"pip not found\" error is shown.\n\nInstalling PyQt5 from downloaded files.\nNot happening.\n\npip3 not working.\n\nEven my cmd does not recognize that my PC has Python installed.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":189,"Q_Id":66385399,"Users Score":0,"Answer":"I suggest trying a few things, and then testing if you can use pip to install PyQt.\n\nTry using pip3 instead of pip, or vice versa.\nTry uninstalling and reinstalling Python, making sure that you tick the box to add Python to PATH.\nRestart your computer.\n\nIf this doesn't work, then I don't know what else to do.","Q_Score":0,"Tags":"python,windows,installation,pyqt5,spyder","A_Id":66386333,"CreationDate":"2021-02-26T11:54:00.000","Title":"How to install PyQt5 on Windows Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I wrote a pybind11 wrapper over a shared C++ library (.dll) on Windows.\nI wanted to make a distributable package using setuptools.\nI wrote a setup.py file and it generated the .pyd file for the wrapper.\nBut when I try to run a script which imports the wrapper package, Python crashes.\nIt only succeeds if I place all the DLL dependencies in the script folder.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":74,"Q_Id":66385458,"Users Score":0,"Answer":"I found a temporary solution to the above problem.\nI created an init.py file which adds the DLL directory at the beginning of the environment PATH variable.","Q_Score":0,"Tags":"dll,python-3.6,setuptools","A_Id":66407546,"CreationDate":"2021-02-26T11:59:00.000","Title":"Python setuptools help on windows. 
Dll dependencies not identified","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Let's say I have two complex images Z_1 and Z_2. I want to make a relative-phase map of the second image with respect to the first. This means:\nZ_2_relative = Z_2 * np.exp(-1j * np.angle(Z_1))\nThis creates a new complex-valued matrix where the complex phase should now be given by\nnp.angle(Z_2_relative) == np.angle(Z_2) - np.angle(Z_1)\nBut according to Python these two are not equal. I bet it has something to do with the np.angle function, but I can't pinpoint it or know how to fix it.\nPS: Sorry, can't make a reproducible piece of code at the moment. Can do it later today.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":18,"Q_Id":66386086,"Users Score":0,"Answer":"Bah, stupid question. Sorry to anyone that read it. If you take everything modulo 2*pi, then the two are the same.","Q_Score":0,"Tags":"python","A_Id":66386806,"CreationDate":"2021-02-26T12:47:00.000","Title":"Subtracting angles of complex valued matrix","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a DAC which can be used with a 50 MHz SPI interface. It's a 16-bit DAC with an 8-bit address, hence I need to send 24 bits of data. I want to use the Pico to send data to the DAC so as to produce a 1 kHz sine wave with 20 samples (hence a sampling rate of not more than 20 ksps). I used MicroPython to program the Pico but I am unable to get more than a 500 Hz wave. What am I doing wrong? Is there a way to use DMA to speed up this process? Also, the DAC requires a chip select which is not in the machine module, so I had to use GPIO. 
whether that is slowing down the process?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":1047,"Q_Id":66388451,"Users Score":1,"Answer":"Aside from any other issues, the SPI hardware implementation in the RP2040 only provides automatic control of CSn for transfers of up to 16 bits.\nFor your case, implementing a simple, 24-bit fixed-format, output-only SPI in the PIO subsystem is quite straightforward, and has the advantage of only requiring a single DMA channel for fully-DMA operation (compared to at least 2 chained DMA channels for a fully-DMA SPI\/GPIO approach). The example in the RP2040 datasheet already provides most of the implementation.","Q_Score":0,"Tags":"spi,dma,micropython,dac,raspberry-pi-pico","A_Id":68366110,"CreationDate":"2021-02-26T15:24:00.000","Title":"How to use Raspberry Pi Pico with DAC with SPI to generate sine wave of 1 kHz with 20 k samples per cycle","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm attempting to thread a function call in my Python catastr^H^H^H^H^H^Hreation, and I've read up on how to use the threading.Thread() call. My function takes a simple string argument, so theoretically it should be as easy as:\nthread = threading.Thread(target = my_func, args = (string_var, ))\nbearing in mind that the args() needs to be a tuple. Got it. However, it appears as though I'm still doing something wrong because I continually get the barffage from Python:\nTypeError: my_func() takes 1 positional argument but 2 were given\nI'm a bit stumped here. 
Any guidance?\nThanks!","AnswerCount":3,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":55,"Q_Id":66389451,"Users Score":0,"Answer":"Seems the issue is that because it's a method (thanks gribvirus74 for the idea) and I'm attempting to thread it, it won't inherit the self. And that appears to be the issue. I moved the function outside of the class and called it with the Thread(). Works fine now.","Q_Score":0,"Tags":"python,multithreading,tuples","A_Id":66389727,"CreationDate":"2021-02-26T16:27:00.000","Title":"Python Thread() - Function Arg Tuple Not Working?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"so if I do something like this:\nname = input(\"Enter your name: \")\nthen when I put in a name like bob for example it will just go to the next line.\nHow do I fix this?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":26,"Q_Id":66390445,"Users Score":0,"Answer":"I was trying to do something similar the other day. If you run your code from the command prompt instead of within sublime it should correctly place the input in the variable and continue with the program.","Q_Score":0,"Tags":"python,python-3.x,input","A_Id":66390725,"CreationDate":"2021-02-26T17:39:00.000","Title":"How can I fix sublime text input?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've been looking everywhere for a solution, but I found nothing.\nhow would I invert something like this?\npow(4, 4, 91). pow(4, 4, 91) returns 74.\nand I'm trying to get 4 from the number 74\nI've tried using gmpy, yet no luck. 
(I'm probably going to get some answers that include brute force, and brute forcing is the last thing I want to do)\nTo clarify, I want to solve for x in the equation y = x^a mod N, where y, a, and N are all known.","AnswerCount":1,"Available Count":1,"Score":0.6640367703,"is_accepted":false,"ViewCount":139,"Q_Id":66390530,"Users Score":4,"Answer":"Because you're using a modulus argument, you can't get back the exact same values as before, since there is more than one answer. For example pow(2, 3, 5) == pow(2, 7, 5). In this case, should you get 3 or 7? The answer isn't clear at all. This is why what you want to achieve is not really possible. It may be possible if you add additional constraints.","Q_Score":0,"Tags":"python,math","A_Id":66390618,"CreationDate":"2021-02-26T17:46:00.000","Title":"How do I invert pow?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to install the pptk module in PyCharm for visualising 3D point clouds.\nInitially I tried installing in PyCharm via File > Settings > Python Interpreter > Install > pptk\nHowever, it couldn't be found and advised I use pip instead.\nSo on my command prompt I navigated to the folder containing Python 3.9 and tried pip install pptk\nAnd I got the following error ERROR: Could not find a version that satisfies the requirement pptk (from versions: none) ERROR: No matching distribution found for pptk\nI'm not sure what I'm doing wrong or why no version can be found? Am I missing something obvious in the installation?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1610,"Q_Id":66393368,"Users Score":4,"Answer":"According to the requirements of the pptk module, the minimum Python version required is 3.6. 
Judging by the fact that there's no distribution found for 3.9 I believe that it was missing a PyPi repo for it, therefore downgrading to Python 3.6 should fix the problem. Judging by your response to my comment, that did work.","Q_Score":2,"Tags":"python,pip,pptk","A_Id":66394030,"CreationDate":"2021-02-26T22:06:00.000","Title":"can't install pptk in python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to find information on how to populate database in sql server express using python and pyodbc. Most searches describe methods using sql server and NOT the express version. Any suggestions would be appreciated","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":70,"Q_Id":66394387,"Users Score":0,"Answer":"There is no difference. SQL Express is just SQL Server and has exactly the same connection methods as any other SQL Server instance.\nYou should check that remote connections are enabled though -\n\nEnsure your local Windows Firewall allows Port 1433 inbound\nOpen SQL Configuration Manager on your computer and under Network Configuration, click on protocols for SQLEXPRESS and ensure that TCP\/IP is enabled.\nRight-click on TCP\/IP and selecvt Properties. 
On the IP Address tab under IPAll, set TCP Dynamic Ports to 0 and TCP Port to 1433 (assumes that you don't have any other SQL Server instance on your computer)\n\nYour instance will need a restart and you can check in the error log to ensure that it is listening on TCP\/IP and Port 1433","Q_Score":0,"Tags":"python,sql-server-express","A_Id":66395286,"CreationDate":"2021-02-27T00:21:00.000","Title":"python and sql server express data storage","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to use aquatone from python. It works fine when I run it from VS Code or terminal using either os or subprocess. But When it is start from a parent program which is started on startup as a service. It doesn't work anymore. My guess is that it is due to the parent program being run as root.\nThe parent program requires root privileges.\nSo is there any way to I can start the aquatone as a non-root user from within python??","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":45,"Q_Id":66395449,"Users Score":1,"Answer":"It depends where you've installed aquatone. By default, if you're using pip, aquatone will be installed to python\/site-packages so in order to access the package and Python interpreter any app that runs Python will need to be granted root privileges. 
This is the simplest way to solve the problem.","Q_Score":1,"Tags":"python,service","A_Id":66395523,"CreationDate":"2021-02-27T03:55:00.000","Title":"How to execute bash commands as non root user from Python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am streaming certain keywords from the Twitter API and I wanted to understand what percentage of the total tweets I get from using the streaming function in Tweepy? (I am using the free developer account - Is there any benefit in terms of volume if I upgrade to a pro account?)\nThanks","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":42,"Q_Id":66396291,"Users Score":0,"Answer":"Approximately 500,000,000 tweets occur every day. The software you refer to limits you to collect only 500,000 requests per month where each request can contain 120 tweets.","Q_Score":0,"Tags":"twitter,tweepy,twitterapi-python","A_Id":66450332,"CreationDate":"2021-02-27T06:43:00.000","Title":"What percentage of tweets do I get from a free twitter API?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"The problem is when I hit TAB after tf.nn., some options are shown. But for example tf.nn.layers does not appear. 
Or when I want to complete tf.cont to get tf.contrib, the only thing that appears is tf.control_dependencies.\nHow do I get all the options in auto-completion?\nI am using windows 10, python 3.7, pip 10.0.1, jedi = 0.17.2, tensorflow = 2.41, notebook 6.2.0\nThank you in advance","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":51,"Q_Id":66397405,"Users Score":0,"Answer":"Tf.contrib is removed in Tensorflow 2.x versions. It is expected behaviour.\nTo use tf.contrib APIs, downgrade your Tensorflow version to TF==1.15","Q_Score":0,"Tags":"python,tensorflow,jupyter-notebook","A_Id":70809731,"CreationDate":"2021-02-27T09:40:00.000","Title":"Tab completion not working for some functions of tensorflow","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"For example if I have a string abc%12341%%c%9876 I would like to substitute from the last % in the string to the end with an empty string, the final output that I'm trying to get is abc%12341%%c.\nI created a regular expression '.*%' to search for the last %, meaning abc%12341%%c%, and then getting the index of the last % and then just replacing it with an empty string.\nI was wondering if it can be done in one line using re.sub(..)","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":39,"Q_Id":66399811,"Users Score":1,"Answer":"I think it is called lookahead matching - I will look it up if I am not too slow :-)\n(?=...)\nMatches if ... matches next, but doesn\u2019t consume any of the string. This is called a lookahead assertion. 
For example, Isaac (?=Asimov) will match 'Isaac ' only if it\u2019s followed by 'Asimov'","Q_Score":0,"Tags":"python,regex","A_Id":66399832,"CreationDate":"2021-02-27T14:29:00.000","Title":"How to match anything in a regular expression up to a character and not including it?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to deploy a Django REST API on Heroku. Normally I wouldn't have any issues with this but for this app, I am using a legacy database that exists on AWS. Is it possible for me to continue to use this remote database after deploying Django to Heroku? I have the database credentials all set up in settings.py so I would assume that it should work but I am not sure.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":124,"Q_Id":66403645,"Users Score":0,"Answer":"Will it work? I think yes, provided you configure your project and database for external access.\nShould you want it? How many queries does an average page execute? Some applications may make tens of queries for every endpoint, and the added wait can combine into seconds of waiting for every request.","Q_Score":0,"Tags":"python,django,heroku","A_Id":66403788,"CreationDate":"2021-02-27T21:29:00.000","Title":"Use an external database in Django app on Heroku","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am trying to deploy a Django REST API on Heroku. Normally I wouldn't have any issues with this but for this app, I am using a legacy database that exists on AWS. Is it possible for me to continue to use this remote database after deploying Django to Heroku? 
I have the database credentials all set up in settings.py so I would assume that it should work but I am not sure.","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":124,"Q_Id":66403645,"Users Score":0,"Answer":"It should not pose any problem to connect with a database on AWS.\nBut be sure that the database on AWS is configured to accept external access, so that Heroku can connect.\nAnd I would suggest that you take the credentials out of the source code and put them in the Config Vars that Heroku provides (environment variables).","Q_Score":0,"Tags":"python,django,heroku","A_Id":66403755,"CreationDate":"2021-02-27T21:29:00.000","Title":"Use an external database in Django app on Heroku","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm sorry for the title of my question if it doesn't make my problem clear.\nI'm trying to get information from an image of a document using tesseract, but it doesn't work well on pictures (on print screens of text it works very well). I want to ask if somebody knows a technique that can help me. I think that making the image black and white, where the information I want is in black, would help a lot, but I don't know how to do that.\nI will be glad if somebody knows how to help me. 
(:","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":74,"Q_Id":66403978,"Users Score":0,"Answer":"Using opencv might help to preprocess the image before passing it to tesseract.\nI usually follow these steps:\n\nConvert the image to grayscale\nIf the texts in the image are small, resize the image using cv2.resize()\nBlur the image (GaussianBlur or MedianBlur)\nApply a threshold to make the text prominent (cv2.threshold)\nUse tesseract config to instruct tesseract to look for specific characters.\nFor example, if the image contains only alphanumeric upper case English text, then passing\nconfig='-c tessedit_char_whitelist=0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ' would help.","Q_Score":1,"Tags":"python,image,image-processing,tesseract,python-tesseract","A_Id":66404196,"CreationDate":"2021-02-27T22:20:00.000","Title":"How to get information from an image of a document, like name, CPF, RG, on python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've trained a binary classification model that takes a (128x128x3) image and then gives a binary value of 0 or 1. I then want to take a larger image, say (nxmx3), and apply a windowing function and have the model run a prediction on each window.\nI used skimage.util.view_as_windows to convert a (1024x1024x3) image, into a (897,897,128,128,3) numpy array. I now want to run each (i, j, 128,128,3) window through my model, and then place it in the same location. In the end, I'd like a (897,897) array containing the probability of that class existing.\nThe way I'm doing it now requires a for-loop that takes nearly 1-2 minutes to run through, while slowing down as the list containing the model predictions gets larger.\nIs there a way to vectorize this process? 
Perhaps flattening the numpy array, running model.predict() on it, and then creating a 2d-array with the same previous dimensions?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":94,"Q_Id":66404904,"Users Score":0,"Answer":"You can use fully convolutional networks, which use a sliding window to predict the output and are not dependent on the input shape. Replace your fully connected layers with convolutional layers with the same output_shape and train on (128x128x3) datasets.\nIf you predict on a 1024x1024 input image, the network predicts one label for each 128x128 region.","Q_Score":1,"Tags":"python,numpy,tensorflow,keras,scikit-image","A_Id":66406974,"CreationDate":"2021-02-28T00:50:00.000","Title":"How to run model.predict() on a 5 dimensional array in Keras?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I tried to use python's zmq lib. And now I have two questions:\n\nIs there a way to check socket connection state?\nI'd like to know if the connection is established after calling connect\n\nI want a one-to-one communication model.\nI tried to use the PAIR zmq socket type.\nIn that case if one client is already connected, the server will not receive any messages from a secondary connected client.\nBut I'd like the second client to get info that there is another client and the server is busy.","AnswerCount":1,"Available Count":1,"Score":-0.1973753202,"is_accepted":false,"ViewCount":284,"Q_Id":66405910,"Users Score":-1,"Answer":"You'd get an error if connect fails.\nBut I guess the real question is how often do you want to check this? 
Once at startup, before each message, or periodically, using some heartbeat?\n\nThat does not make sense, as you cannot send info without connecting first.\nHowever, some socket types might give some more info.\n\n\nBut the best way would be to use multiple sockets: one for such status information, and another one for sending data.\nZMQ is made to use multiple sockets.","Q_Score":0,"Tags":"python,zeromq","A_Id":66409386,"CreationDate":"2021-02-28T04:48:00.000","Title":"Python zmq connections","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I would like to validate the following expressions:\n\n\"CODE1:123\/CODE2:3467\/CODE1:7686\"\n\"CODE1:9090\"\n\"CODE2:078\/CODE1:7788\/CODE1:333\"\n\"CODE2:77\"\n\nIn my case, the patterns 'CODE1:xx' or 'CODE2:xx' are given in any different orders.\nI can sort the patterns to make them like 'CODE1:XX\/CODE1:YY\/CODE2:ZZ'\nand check if it matches something like\n\nr'[CODE1:\\d+]*[CODE2:\\d+]*'\n\nCould we make it shorter: is it possible to solve this with one regex matcher?\nThanks","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":54,"Q_Id":66408498,"Users Score":0,"Answer":"CODE is static but after it the digit is dynamic, so to make it shorter just use CODE\\d:\\d+\nIf you want to match only two digits after : use CODE\\d:\\d{2}","Q_Score":0,"Tags":"python,regex","A_Id":66408744,"CreationDate":"2021-02-28T11:18:00.000","Title":"Check if expression matches a regex","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to find the mean of each column, across all rows, given a set of data. 
I've opened the csv file using sata = np.genfromtxt('movieRatingsDeidentified.csv',delimiter=','), and am currently trying to find the mean of each column across all rows using D1 = numpy.nanmean(sata,axis=0), however I'm getting an error message saying that sata is not defined, and it is also not showing up in my variable explorer. It's incredibly frustrating as I have relatively little experience in programming and have only started using Spyder a few weeks ago.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":56,"Q_Id":66410691,"Users Score":0,"Answer":"Well, if it says that sata is not defined, you are most likely not opening the file properly. My guess is that you are not in the proper folder path. Use the package os to find your current working folder and change to the appropriate folder. The function os.getcwd() will show you in which folder you are and os.chdir(\"your\/target\/folder\") will lead you to the appropriate folder. Alternatively, instead of passing only the file name to np.genfromtxt, you can pass the full path.\nAlso, unless you have missing values (NaN) in your dataset, you should probably use the regular np.mean instead of np.nanmean.","Q_Score":0,"Tags":"python,numpy","A_Id":66410766,"CreationDate":"2021-02-28T15:29:00.000","Title":"How to find mean of a numpy array?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a dataframe imported from Excel. After importing I have checked there are lots of NAN values in the dataframe. When I convert the dataframe columns to Str Object, no NAN values remain. I mean that the dataframe can no longer count the NAN values. Those NAN values are shown as nan in the dataframe. I actually want those NAN values to be empty cells in the dataframe, like in Excel. 
Any suggestion?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":43,"Q_Id":66414167,"Users Score":0,"Answer":"Look at .fillna()\nYou can use that to edit the NaNs before converting the column values to strings.","Q_Score":0,"Tags":"python,pandas","A_Id":66414280,"CreationDate":"2021-02-28T21:47:00.000","Title":"Python Pandas: After converting dataframe to Str NAN is no more NAN","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"In VS Code, many extensions, such as Tab Nine and Lint, rely on specific Python packages to function. On the other hand, the code I develop may need a different set of packages. Because there is the potential for package conflict and because we want the environment that we develop code to mimic the production environment, it is convenient to have the dev environment\/extensions use one Anaconda Environment and the code I develop to use a different Anaconda Environment. But I am not sure how to configure this. Can someone help?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":27,"Q_Id":66414367,"Users Score":0,"Answer":"Down in the bottom left corner of VS code you can manually select the python environment depending on which codebase you are working with. 
The selection can be saved in the settings.json file so you don't have to manually reselect each time.","Q_Score":0,"Tags":"python,visual-studio-code,anaconda,vscode-settings","A_Id":66414999,"CreationDate":"2021-02-28T22:12:00.000","Title":"VSCode and Anaconda Environments: How to have dev extensions\/environment and my code under development use different Anaconda environments","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"For speed of upload, I have a multiprocessing Python program that splits a CSV into many parts, and uploads each in a different process. Also for speed, I'm putting 3000 inserts together into each insert_many.\nThe trick is that I have some bad data in some rows, and I haven't yet figured out where it is. So what I did was a Try\/Except around the insert_many, then I try to insert again the 3000 documents, but one at a time, inside another Try\/Except. Then I can do a pprint.pprint on just the rows that have errors.\nHowever, I'm wondering if when the update of 3000 documents fails because of an error, in for example the 1000th row, does the entire 3000 fail? Or do the first 999 rows get stored and the rest fail? 
Or do the 2999 rows get stored, and only the one bad-data row fails?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":138,"Q_Id":66414955,"Users Score":2,"Answer":"When you do inserts via a bulk write, you can set the ordered option.\nWhen ordered is true, inserts stop at the first error.\nWhen ordered is false, inserts continue past each document that failed for any reason.\nIn either case, bulk writes are not transactional in the sense that a failure doesn't remove any previously performed writes (inserts or otherwise).\nIf you would like a transaction, MongoDB provides those as an explicit feature.","Q_Score":0,"Tags":"python-3.x,mongodb,pymongo","A_Id":66415007,"CreationDate":"2021-02-28T23:36:00.000","Title":"is PyMongo \/ MongoDB insert_many transactional?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I made a selenium script that would go to a website and buy a certain product that is currently out of stock. I have an email alert that I would get when the product goes back in stock, however when I receive the email I will probably not be at the computer with the script.\nWhat would be the simplest way to run the program from my phone. Would it be to set up a remote desktop to access my computer from my phone? 
Also, everything would still run correctly if the monitor was off and if the selenium window was in the background, correct?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":22,"Q_Id":66415039,"Users Score":0,"Answer":"What you should do to make the entire process automated is link the email alert to your program; when the email comes in, it will trigger the program to visit the webpage and automatically buy the item.\nI am assuming everything is already saved (payment information and other important info, etc.), and you are just keeping the program running in the background on a computer at all times.\nI would obviously encourage this to be hardwired with an ethernet cord too, in case there is an issue or someone disconnects your WiFi and you miss the item.\nHave fun!","Q_Score":1,"Tags":"python,selenium","A_Id":66415380,"CreationDate":"2021-02-28T23:48:00.000","Title":"Is there a way to trigger a selenium program remotely? (Like from my phone or from another laptop","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"If I were to use the Requests library in a python kivy app and compile the app for android and iOS, would it still be able to perform its requests, or will it just crash because it's on the wrong operating system or something?\nIf it would crash, how could I solve this problem or go about finding a solution?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":38,"Q_Id":66415044,"Users Score":0,"Answer":"Well, any python library that you use in kivy will work on both android and ios or any other platform (windows & mac). As all the python libraries are compiled and handled by kivy, you need not worry about the platform for that. 
If you are using any platform-specific thing then that might not work on other platforms, but python libraries like requests work fine on any platform you use.","Q_Score":0,"Tags":"python,android,ios,python-requests,kivy","A_Id":66422121,"CreationDate":"2021-02-28T23:49:00.000","Title":"Compiling the Requests library","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm new to digdag. Below is an example workflow to illustrate my question:\n_export:\nsh:\nshell: [\"powershell.exe\"]\n_parallel: false\n+step1:\nsh>: py \"C:\/a.py\"\n+step2:\nsh>: py \"C:\/b.py\"\nThe second task runs right after the first task starts. However, I want the second task to wait for the first task to complete successfully.\nI modified the first task a.py to just raise ValueError, but the second task still runs right after the first task starts.\nThis is not consistent with my understanding of the digdag documentation. But I don't know what is going wrong with my workflow. Could someone please advise?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":98,"Q_Id":66415245,"Users Score":0,"Answer":"No. 
This can not be solved by redownloading.","Q_Score":0,"Tags":"python,workflow,etl,pipeline,directed-acyclic-graphs","A_Id":67528412,"CreationDate":"2021-03-01T00:26:00.000","Title":"digdag shell script tasks complete instantaneously","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"import deathbycaptcha as dbc\nclient = dbc.SocketClient(username, password)\nDBC version - 0.1.1","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":231,"Q_Id":66417464,"Users Score":0,"Answer":"just uninstall that library and reinstall it with\npip install git+https:\/\/github.com\/codevance\/python-deathbycaptcha.git\nit works for me","Q_Score":1,"Tags":"python-3.x,api","A_Id":67968838,"CreationDate":"2021-03-01T06:16:00.000","Title":"AttributeError: module 'deathbycaptcha' has no attribute 'SocketClient'","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is it possible to import a variable from a running python script into another script. 
Something like notebook cells, but with scripts.\nFor example, load a neural network model and use it in another script so as not to load it every time it starts.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":51,"Q_Id":66418689,"Users Score":0,"Answer":"There are a lot of ways of doing it; here are some of the most popular, from easy to harder:\n\nSave the variable into a file, read it from the other script.\nSave the variable into some kind of environment variable, use it from the second script.\nOpen a socket between the scripts and communicate through it.","Q_Score":0,"Tags":"python","A_Id":66418801,"CreationDate":"2021-03-01T08:19:00.000","Title":"Import variable from running python script","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to crawl a site [URL: https:\/\/www.khaasfood.com\/shop\/]\nFirst I found I have to get categories with a hierarchy.\n.container has a list of li tags that are the parent categories.\nEach parent category may have child li tags.\nFirst I have to take the parent categories. 
but how?\n'''\nresponse.css('.container li .cat-item')\n'''\nthis code returns all li tags, which means both parent and child categories.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":72,"Q_Id":66419315,"Users Score":1,"Answer":".container > li.cat-item would only select li tags which are children of the .container element\nThe .container li.cat-item css selector without > will select all descendant li tags","Q_Score":0,"Tags":"python,html,css,scrapy","A_Id":66420986,"CreationDate":"2021-03-01T09:10:00.000","Title":"Scrapy, how to extract parent li only except child li","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"When I run my main.py file from the console like python main.py everything works just fine. However when I package the app with zipapp, it opens up a window and apparently shows some error which I am unable to read because it immediately closes.\nHow to debug\/resolve this? Is it possible to somehow stop that so I can see the error?\nI have a folder which contains a data folder and an app folder; in the app folder there is main.py and there is my_function() which is being run. 
The zipapp command:\npython -m zipapp Entire_package -m app.main:my_function","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":155,"Q_Id":66419791,"Users Score":0,"Answer":"Lulz.\nOne just has to run the .pyz file from the terminal\/command line with python like this:\npython my_executable_pyz_file.pyz\nand then you get the error printed straight into the terminal\/cmd window and you can read it.","Q_Score":0,"Tags":"python,package,pyz","A_Id":66443625,"CreationDate":"2021-03-01T09:45:00.000","Title":"Debug .pyz executable","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm looking at some tensorflow stuff and I understand for loops, or at least I think I do, however I came across for _ in range(20) and was wondering what is the meaning of the _ in this case. I am used to for x in range or for i in range stuff and understand those, but haven't been able to understand what I've read on the underscore","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":20818,"Q_Id":66425508,"Users Score":19,"Answer":"When you are not interested in some values returned by a function, we use an underscore in place of the variable name. 
Basically it means you are not interested in how many times the loop has run so far, just that it should run some specific number of times overall.","Q_Score":15,"Tags":"python,python-3.x,machine-learning","A_Id":66425545,"CreationDate":"2021-03-01T16:02:00.000","Title":"What is the meaning of 'for _ in range()","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"So I am trying to build a login dialog for my application using qt\/python.\nI got confused by QT's view\/model architecture. QT provides models and views for tables, lists etc., but how do I go about implementing a login dialog using this architecture?\nI use a ui file that I created with QtDesigner.\nShould I create a user model that interfaces with the DB and retrieves user data, handling the login process, and returns this result to the view? (view and controller combined, as per QT's terminology)\nI would like to use the same architecture throughout the application, but I got confused with this one. Do I even need a model for this?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":24,"Q_Id":66425819,"Users Score":0,"Answer":"Models are for binding data to views. 
You can create a list model with user\/password fields and use QDataWidgetMapper to bind it to two QLineEdits, but there's not much use in it since there's not much data and no complicated interactions.","Q_Score":0,"Tags":"python,pyqt","A_Id":66425984,"CreationDate":"2021-03-01T16:23:00.000","Title":"QT View\/Model Login dialog","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am doing a project where we run into a problem, where we have N number of tickets available at a certain time. And we are serving them from the backend REST API and people are buying them in FIFO fashion. It's like a race between the buyers.\nProblem:\nCurrently we handle it by first taking the money from the user (we take the money first because it is possible that the customer's bank transaction fails for some reason, so we make sure we do not lose anything) and then proceeding to the confirmation email, but we faced an overbooking issue. We had already taken the money but there were no tickets available, because in the meanwhile someone else had bought them. So we had to give that customer tickets, which led us to overbooking.\nI want to solve it. I have put some thought into it.\nFirst Solution:\nThe first solution I came up with is an API that will initially check whether enough tickets are available to sell; if so, it subtracts the number of tickets from the available tickets and asks the customer for payment. Once the payment gets confirmed we send the customer a confirmation email. If it fails we add those subtracted tickets back to the number of available tickets. 
It apparently seems to work, but it has some flaws.\nFor example we may lose the tickets permanently, in case the user decides not to make any bank transaction, or his browser crashes, etc.\nSecond Solution:\nThe second solution is to use a Django session that will save the information temporarily by subtracting the number of tickets the user is buying from the tickets available; once we get the confirmation we will commit the database transaction, and if not we can roll back. I believe it will break the RESTful API stateless property.\nQuestion:\nDo we have any standard way to solve this problem? I have searched over the internet and did not find any solution; do let me know if this problem is known by another name and already has a standard solution that can be used as is. If not then please share your knowledge and experience.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":141,"Q_Id":66426371,"Users Score":0,"Answer":"I am answering this late, but I managed to solve this problem. To solve the issue, I used django signals. It was a two step procedure.\nFirst of all I added transaction states, which were confirmed, cancelled, reserved, expired.\n\nThe user will call the \/reserve endpoint; this will create a transaction which will subtract tickets, but in the reserved state. It will subtract the number of reserved tickets from the available tickets of an event. The on_create signal will initiate a celery task, which will run, let's say, after 5 minutes, so after 5 minutes it will add those tickets back if they have expired.\n\nThe user will come to the next step where he will confirm the transaction from the endpoint \/purchase. 
So now we will mark the transaction in the confirmed state.\n\n\nThere was a \/cancel endpoint which will explicitly cancel the tickets.\nThis had the downside that someone may abuse the system by keeping the tickets in the reserved state (obviously that's a rare case).","Q_Score":0,"Tags":"python,django,rest,django-rest-framework","A_Id":70805207,"CreationDate":"2021-03-01T16:59:00.000","Title":"Ticket Selling Problem in REST with django","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have installed the odoo9 server with workers, and also run the flask server; both need werkzeug, but the odoo9 werkzeug version is 0.9.6 and flask needs a 1.1.X version. While doing this I needed to uninstall the odoo werkzeug and install the 0.9 version. Can we run flask too on the same werkzeug version (0.9.6), not the 1.1.X?\nI followed these commands\npip uninstall Werkzeug\nsudo pip install Werkzeug==0.9.6\nSERVER RESTART\npip install -U flask-cors\nFLASK SERVER RESTART","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":121,"Q_Id":66426575,"Users Score":0,"Answer":"In python you could prepare different environments according to the needs of each application using virtualenv or the newer pyenv.\npyenv gives the ability to run different python versions as well, 
so you could have python 2 for odoo & python 3 for flask.","Q_Score":0,"Tags":"python,api,server,odoo","A_Id":66429707,"CreationDate":"2021-03-01T17:14:00.000","Title":"Regarding odoo and flask server installations?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"This is my first project that involves parallel programming so forgive me if I'm not using the proper terminology.\nI want to interface a RaspberryPi 4 with a peripheral board using an SPI serial interface. In order to completely understand the serial communication I want to code the SPI communication without using an external library.\nThe purpose of the program is to send data to the peripheral and to read data from it, while plotting the received data \"in real time\".\nIn order to easily manage the communication I need to run a thread that will generate the sclk and chip select signals, and another thread that will read\/write data and plot them.\nMy question is: given that I will use a sclk frequency around 1MHz, is it a problem that I'm threading the functions instead of making them really parallel (using multiprocessing)?\nI'd say that the clock frequency of the Rpi4 is much higher than the sclk frequency, so the time delay due to the \"fake\" parallelism is not a problem (considering the fact that all the threads are made of few instructions), but I want to know if there are other factors to consider. Thank you!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":180,"Q_Id":66427892,"Users Score":1,"Answer":"You absolutely do not want separate threads generating clock and data. This is a SERIAL protocol, so those two things have to be synchronous. The 1MHz number is just a maximum limit. The clock doesn't have to be exact, nor does it have to be regular.
You, as the master, are in full control of that. Everything is based on the transitions. In this order, you set the output pin, assert the clock, read the input pin, deassert the clock, rinse and repeat. One function, easy as pie. You might need to add some stalls if that process takes less than a microsecond.","Q_Score":0,"Tags":"python,raspberry-pi,spi","A_Id":66428055,"CreationDate":"2021-03-01T18:50:00.000","Title":"RaspberryPi and peripheral SPI interface using python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"The problem is, suppose I pass 'N' = 9 to the function,\nSo the list will be\nlist = [1,2,3,4,5,6,7,8,9,8,7,6,5,4,3,2,1].\nAt first, the list elements increase to (1-9), then they decrease in reversed order (8-1).\nPlease suggest the easiest way to achieve this.\nThank you in advance.","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":47,"Q_Id":66428178,"Users Score":0,"Answer":"You can use a list comprehension to achieve this in one line. First we have a list comprehension with a simple loop that appends the numbers from 1 to n. The second is a list comprehension where the numbers in the for loop are passed through the expression (n+1)-i. This calculates the difference between the current value of i and n+1, which gives us the pattern of descending numbers.
Finally, both lists are added and stored in the variable r.\nr = [x for x in range(1,n+1)] + [n+1-i for i in range(2, n+1)]\nWhen r is printed out it produces the following output.\n[1, 2, 3, 4, 5, 6, 7, 8, 9, 8, 7, 6, 5, 4, 3, 2, 1]","Q_Score":0,"Tags":"python,python-3.x,list","A_Id":66428436,"CreationDate":"2021-03-01T19:09:00.000","Title":"A Function take 'N' as argument, performs an Increase N times of Elements then decrease (N-1) times and then return as a list in the Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a question regarding the usage of EFS as an additional memory location for lambda. I am using python along with pandas to perform some tests on my files. And it works great if the files are not that large, but if the files exceed 2-3 GB lambda dies because of the memory limitation (using both max memory and time of lambda). Files are originally located at S3 and I was wondering whether it would be possible to use EFS in this scenario? If so, what would be required for this solution? Would I need to transfer files from S3 to EFS in order to open them, or is there a better solution where I can directly load the files from S3 to EFS and open them with pandas?
And also there is the timeout limitation, but I hope that won't be an issue if the lambda is faster with EFS.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":392,"Q_Id":66428845,"Users Score":1,"Answer":"As far as I know pandas requires the whole file to fit into memory.\nIn principle you can fit larger files into memory in Lambda, since you can now configure Lambda functions with up to 10GB of RAM.\nThat doesn't translate to you being able to read a 10GB file from S3 and create a dataframe out of it, because in order for pandas to parse the data, it needs to either be stored on disk (of which there's only 500MB available to you) or in memory.\nIf you download the file into memory, it also takes up its size in system memory and then you can create a pandas data frame from that. Pandas data structures are probably larger than the raw bytes of the file, so my guess is that you can load a file from S3 into memory and turn that into a data frame that is about 30-40% of the memory capacity of the lambda function.\nIf you store that file on EFS, you'll be able to fit more into memory, because pandas can read the bytes from disk, so you could probably squeeze out a few more gigabytes. These I\/O operations take time however and Lambda is limited to at most 15 minutes of runtime. You probably also need to write that data somewhere which takes time as well.\nBottom line: Loading larger files than that into Lambda is probably not a good idea.
If you can, break up the dataset into smaller chunks and have lambda functions work on them in parallel or choose a service like Athena or EMR or Glue ETL, which are built to handle that stuff.","Q_Score":2,"Tags":"python,pandas,amazon-web-services,aws-lambda,amazon-efs","A_Id":66429091,"CreationDate":"2021-03-01T20:00:00.000","Title":"Using EFS with AWS Lambda (memory issue)","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"A specific column of my dataframe does not get converted and it stays like this\n[{'self': 'https:\/\/servicedesk.com\/rest\/api\/2\/c', 'value': 'EXANDAS', 'id': '10120'}]\nHow can I grab only the value, or convert that column into three more columns in the existing dataframe?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":66,"Q_Id":66429044,"Users Score":0,"Answer":"Assuming your column is a list of dicts you can do\ndf[col].map(lambda x: x[0]['value'])\nto get the values","Q_Score":0,"Tags":"json,python-3.x,dataframe","A_Id":66429193,"CreationDate":"2021-03-01T20:15:00.000","Title":"Jira Json to dataframe with Python. Issue with one column which stays as json","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"So I'm simply trying to make a directory with Kivy and I can't get it to work.
I am using the os.mkdir() function.\nI have tried just os.mkdir(\"a\"), but I couldn't find any directory named a (after doing a full phone search, with the phone plugged in to my PC).\nI have also tried os.getcwd(), as in os.mkdir(os.getcwd()+\"a\"), but to no avail.\nTo put it simply I am lost; I can't find anything about it online either... So if you know how, I would greatly appreciate being enlightened on the subject, thanks in advance.\nAnd I am importing os; I also tried to run os.mkdir before importing kivy.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":354,"Q_Id":66429880,"Users Score":0,"Answer":"I have tried to just simply io.mkdir(\"a\")\n\nThere is no io.mkdir - do you mean os.mkdir?\nThe directory will be created inside the current directory of the script, which is a folder inside the app's private directory as defined by Android.\nThat means you should see your new directory in e.g. os.listdir(\".\").","Q_Score":0,"Tags":"python,android,kivy,buildozer","A_Id":66430097,"CreationDate":"2021-03-01T21:24:00.000","Title":"How would one go about making a directory (mkdir) in python with kivy on android(buildozer)?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am working on a python script that is designed to process some data, create a table if it does not exist, and truncate the table before inserting a refreshed dataset. I am using a role that has usage, read, write, and create table permissions, as well as stage permissions set as follows:\ngrant usage, read, write on future stages in schema to role \nI am using the write_pandas function in python via the snowflake connector.
The documentation says that this function uses PUT and Copy Into commands:\nTo write the data to the table, the function saves the data to Parquet files, uses the PUT command to upload these files to a temporary stage, and uses the COPY INTO command to copy the data from the files to the table. You can use some of the function parameters to control how the PUT and COPY INTO
statements are executed.\nI still get the error message that I am unable to operate on the schema, and I am not sure what else I need to add. Does someone have the list of permissions that are required to run the write_pandas command?","AnswerCount":5,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":7614,"Q_Id":66431601,"Users Score":5,"Answer":"write_pandas() does not create the table automatically. You need to create the table by yourself if the table does not exist beforehand. For each time you run write_pandas(), it will just append the dataframe to the table you specified.\nOn the other hand, if you use df.to_sql(..., method=pd_writer) to write pandas dataframe into snowflake, it will create the table automatically for you, and you can use if_exists in to_sql() to specify different behaviors - append, replace, or fail - if the table already exists.","Q_Score":3,"Tags":"python,permissions,snowflake-cloud-data-platform,database-schema,connector","A_Id":67380436,"CreationDate":"2021-03-02T00:27:00.000","Title":"write_pandas snowflake connector function is not able to operate on table","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have been trying to utilise mutual_info_regression method from sklearn, I have updated sklearn to latest build which is 0.24.1 and when I checked the source code inside my conda env path there is folder and files for feature_selection.mutual_info_regression, but when I try to import it in my Jupiter notebook it throws this error ImportError: cannot import name 'mutual_info_regression' from 'sklearn.model_selection' (\/opt\/anaconda3\/envs\/\/lib\/python3.8\/site-packages\/sklearn\/model_selection\/__init__.py)\nI tried restarting kernel as well, but it is still not working, has anyone else faced this issue? 
I'm using macOS 11.2.1 and conda 4.8.3 with Python3\nThanks","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":256,"Q_Id":66432634,"Users Score":0,"Answer":"I found the solution,\nI just had to restart my terminal and then it started working for some reason.\nI hope this helps anyone facing such a problem in the future.\nThanks SO!","Q_Score":0,"Tags":"python,scikit-learn,pip,anaconda,lib","A_Id":66432686,"CreationDate":"2021-03-02T02:52:00.000","Title":"sklearn.feature_selection.mutual_info_regression not found","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"In Settings > Advanced Settings Editor > Keyboard Shortcuts, I am trying to re-map some of the keys to use with the Command key on a MacBook. I tried \"Cmd\" and \"Command\", neither of which worked. Is there a key defined for the Command key in JupyterLab?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":179,"Q_Id":66432651,"Users Score":0,"Answer":"Yes, I had the same problem. The key \"Command\" is called \"Accel\" in Jupyterlab.","Q_Score":0,"Tags":"python,jupyter,jupyter-lab","A_Id":67377528,"CreationDate":"2021-03-02T02:54:00.000","Title":"How to use Mac command key in Jupyter lab","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"In Python, if there are two variables being assigned a value on the same line, for example here\nimg_tensor, label = dataset[0]\nwhere dataset[0] is an array, what is actually going on?
What does this do?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":37,"Q_Id":66433401,"Users Score":0,"Answer":"There can be two major cases while executing the line img_tensor, label = dataset[0].\nCase 1: If dataset[0] has a length of 2. As per comments it would assign the first index of dataset[0] to img_tensor and second index to label.\nCase 2: If len(dataset[0]) > 2 or len(dataset[0]) < 2. The line would produce ValueError yelling either \"too many values to unpack\" or \"not enough values to unpack\".\nIt can also result in a TypeError if dataset[0] is not an iterable\nThere might be other scenarios that are possible depending upon the type of dataset[0].","Q_Score":0,"Tags":"python,arrays","A_Id":66435635,"CreationDate":"2021-03-02T04:34:00.000","Title":"Two variables being assigned at the same time","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm a bit of a noob when it comes to large datasets and was hoping someone could point me in the right direction.\nI have two large data frames that i need to merge based on datetime.\nDataframe 1:\n\n250 million rows of data\nDatetime index\nColums containing motion sensor values\n\nDataframe 2:\n\n50 million row of data\nDatetime index\nColumns containing easting and northings\n\nI want to add easting and northings from dataframe2 to dataframe 1 based on the closest datetime. I have tried a few different methods (i.e. df.index.get_loc, df.interpolate) but the processing time is huge and memory becomes unstable very quickly. Is there a way to process this without iterating through the dataframes? 
Any help would be great.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":39,"Q_Id":66434101,"Users Score":1,"Answer":"pd.merge_asof will help match based on the closest time.","Q_Score":1,"Tags":"python,python-3.x,pandas,dataframe","A_Id":66434301,"CreationDate":"2021-03-02T06:02:00.000","Title":"How to efficiently combine two large Pandas dataframes by datetime","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to operate on (sum) two 2D vectors (NumPy.array) in Python 3.\nI know I can use functions in NumPy, but I still want to know: is there any package that supports SSE instruction operations in Python 3, or any existing package with high efficiency to do that?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":134,"Q_Id":66435407,"Users Score":1,"Answer":"There's numpy-mkl which is Numpy compiled against Intel's Math Kernel Library.","Q_Score":0,"Tags":"python-3.x,sse","A_Id":66435487,"CreationDate":"2021-03-02T07:57:00.000","Title":"How to use SSE instruction in Python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a problem with updating Django from version 1.11.29 to 2.0.13.
When updating the library django-oauth-toolkit to version 1.2.0 (the version supporting Django 2.0) I receive an error: __version__ = pkg_resources.require(\"django-oauth-toolkit\")[0].version pkg_resources.ContextualVersionConflict: (urllib3 1.25.11 (\/.virtualenvs\/django-oauth-tookit-conflict\/lib\/python3.6\/site-packages), Requirement.parse('urllib3<1.25,>=1.21.1'), {'requests'})","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":153,"Q_Id":66436010,"Users Score":0,"Answer":"It's because this was changed in \/oauth2_provider\/__init__.py from:\n__version__ = '0.11.0'\n__author__ = \"Massimiliano Pippi & Federico Frenguelli\"\ndefault_app_config = 'oauth2_provider.apps.DOTConfig'\nVERSION = __version__ # synonym\nTo:\nimport pkg_resources\n__version__ = pkg_resources.require(\"django-oauth-toolkit\")[0].version\ndefault_app_config = \"oauth2_provider.apps.DOTConfig\"","Q_Score":0,"Tags":"python,django,urllib3,django-oauth-toolkit","A_Id":66470352,"CreationDate":"2021-03-02T08:44:00.000","Title":"Version conflict django-oauth-toolkit>0.12.0 and urllib3==1.25.11","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"file = open('hello.txt','w')\nfile = open('hello.txt','wt')\n\"Both of these lines create a file hello.txt.
Are these lines handled differently in the background?\"","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":45,"Q_Id":66436612,"Users Score":0,"Answer":"'w' means open for writing. The 't' in 'wt' selects text mode, which is already the default, so 'w' and 'wt' are handled identically in the background.","Q_Score":0,"Tags":"python-3.x","A_Id":66436719,"CreationDate":"2021-03-02T09:24:00.000","Title":"I was wondering if there is a difference between file = open('hello.txt','w') and file = open('hello.txt','wt') because both are working","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to deploy a flask app with http.server (nginx not installed by admin). I want any user who logs into the cluster to access it. Is it possible?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":26,"Q_Id":66436982,"Users Score":1,"Answer":"HTTP server interfaces are visible to all users that are connected to a machine that has direct network access to the machine your server is running on.\nIf you need them to access the interface just provide the ip address and port where the server is running and they will be able to access it as users of the Flask app you are running. Just make sure you allow the users to access the needed resources.","Q_Score":0,"Tags":"python,linux,http,server","A_Id":66437057,"CreationDate":"2021-03-02T09:49:00.000","Title":"If I run an http server in my user account (linux cluster) how to enable other users to access it?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I am using the trimesh python library. I want to overlay face normals on top of face centroids.
Is there any easy way to do that?\nPerhaps if there is an external library or helper script that can do this quickly, that would also help. Thanks.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":152,"Q_Id":66437422,"Users Score":1,"Answer":"I'm not sure what you mean by overlaying face normals. If it's just for visualization, use libraries that can visualize normals for you, like meshlab or open3d.\nIf the question is how to get the face centroids in trimesh, there's mesh.triangles_center.","Q_Score":0,"Tags":"python,graphics,mesh,trimesh","A_Id":70830739,"CreationDate":"2021-03-02T10:18:00.000","Title":"Trimesh get faces centroids \/ centers","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a list in python where certain elements contain substrings like 'Page 1 of 67', 'Page 2 of 67' and so on till 'Page 67 of 67'.\nlist1 = [\"24\/02\/2021| Page 1 of 67|\", \"Wealth Protect Assure 2 - 100| 100| 500,000.00| 9,000.00| 750.00|\", \"Proposed By: Sample Agent 012| Ver 7.14.0| Page 2 of 67|\", \"Deduct fees & charges\"]\noutput = [\"Wealth Protect Assure 2 - 100| 100| 500,000.00| 9,000.00| 750.00|\",\"Deduct fees & charges\"]","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":26,"Q_Id":66439566,"Users Score":0,"Answer":"import re\npattern = re.compile(r\"Page \\d+ of \\d+\")\nlist1 = [\"24\/02\/2021| Page 10 of 67|\", \"Wealth Protect Assure 2 - 100| 100| 500,000.00| 9,000.00| 750.00|\",\n\"Proposed By: Sample Agent 012| Ver 7.14.0| Page 20 of 67|\", \"Deduct fees & charges\"]\nfiltered = [i for i in list1 if not pattern.search(i)]\nprint(filtered)","Q_Score":0,"Tags":"python-3.x,list","A_Id":66440280,"CreationDate":"2021-03-02T12:39:00.000","Title":"Removal of elements from list containing substring like 'Page 1 
of 67' or 'Page 2 of 67'","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I was on my computer and I was trying to write some tkinter code, but when I put the starters in PyCharm, there was an error saying that Tkinter is not a module, and then I tested it as tkinter and it was also not found. Can someone please help me get this sorted out, as I really want to do some coding and this is slowing it down.\nThanks!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1523,"Q_Id":66441291,"Users Score":3,"Answer":"Is it installed on your OS? You can try sudo apt install python3-tk or sudo apt install python-tk.\nYou could also try installing it via pip: pip install python-tk","Q_Score":1,"Tags":"python,ubuntu,tkinter,pycharm,ubuntu-20.04","A_Id":66441620,"CreationDate":"2021-03-02T14:27:00.000","Title":"Tkinter is not a module in Ubuntu 20.04","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a Windows AD and would like to let my users run Python. With administrative credentials, the user installs Python and installs the libraries needed, but when they attempt to run the code, the libraries aren't being found. You run a cmd with elevated permissions, pip install the package, and you get that the package has already been installed.\nWhat would be the correct way to install Python for Windows domain users so that they can run code, preferably without forcing them to be administrators :) ?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":36,"Q_Id":66442263,"Users Score":0,"Answer":"Installing Python from the MS Store can be done by restricted users.
Installing packages, as @Raspy said, can be done without running an elevated command prompt.","Q_Score":0,"Tags":"python-3.x,active-directory","A_Id":66535701,"CreationDate":"2021-03-02T15:28:00.000","Title":"Installing Python for AD users","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"For establishing many-to-many relationships, django admin has a nice widget called 'filter_horizontal'.\nThe problem is - it only displays the primary key of the related table.\nHow do I add other fields from the related table?\nFor example, if I have a many-to-many relationship in my Order model with the User model, in the 'Order' django admin I can only see the User's primary key (id). How do I add their name into the widget?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":52,"Q_Id":66443136,"Users Score":0,"Answer":"Turns out it is changed in the __str__ method of the User model","Q_Score":0,"Tags":"python,django,admin","A_Id":66510005,"CreationDate":"2021-03-02T16:20:00.000","Title":"Django admin many-to-many - how to show additional fields?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Hey everyone, I installed Python 3.9 on my Mac using the Homebrew package manager, but now I do not know how to install packages for it.\nCan anyone please tell me? Thanks!","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":642,"Q_Id":66444227,"Users Score":0,"Answer":"You should first do some research on the Python virtual environment, but the final answer to your question is to use pip install for installing Python packages.
Be aware that there are other options out there, but pip is the most prevalent.","Q_Score":1,"Tags":"python,macos,homebrew","A_Id":66444366,"CreationDate":"2021-03-02T17:28:00.000","Title":"Installing packages in python installed using home-brew","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am training a unet model. I started training with my pc, but it was too slow as it didn't run with GPU. Therefore I started using Google colab for faster training (using GPU).\nI have the same dataset saved locally and in Google Drive and I also have the exact same code in colab and in my pc, except paths as I need to change them to read from my Google Drive files.\nMy problem is that the results of training the unet in my computer differ a lot from those obtained training the unet with Google colab.\nI don't know why this happens (I use the same seed in both and I have tested that I always obtain the same results locally if I use that seed).\nWhen I train the unet in my pc I obtain more or less 90% accuracy. However, when I train it using colab with GPU I only obtain 65%. I've also tried to use CPU in colab and I get the same issue.\nThanks","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1959,"Q_Id":66446056,"Users Score":0,"Answer":"I also went through the exact same problem. For me the problem was with random split into train and test, It was just a coincidence that random split on colab got the good performance while bad on the local machine. 
Then just using random_state fixed the issues for me.\nE.g.:\ntrain_test_split(data, target, test_size=0.1, random_state=30, stratify=target)","Q_Score":1,"Tags":"python,tensorflow,keras,google-colaboratory,unity3d-unet","A_Id":70819902,"CreationDate":"2021-03-02T19:40:00.000","Title":"Different results on Google colab than local","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am most likely missing something obvious, but what approach\/model was used to train the Token vectors in spacy's english medium model? Was it word2vec? A deep learning architecture? Just curious on what was used to estimate those embeddings.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":57,"Q_Id":66446435,"Users Score":2,"Answer":"The English vectors are GloVe Common Crawl vectors. Most other languages have custom fastText vectors from OSCAR Common Crawl + Wikipedia. 
These sources should be included in the model metadata, but it looks like the vector information has been accidentally left out in the 3.0.0 model releases.","Q_Score":1,"Tags":"python,spacy","A_Id":66452704,"CreationDate":"2021-03-02T20:11:00.000","Title":"What is the model architecture used in spacy's token vectors (english)","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm new to python coding and I'm using OSX.\nI installed Python 3.9 and openpyxl using Brew (if I understood correctly Brew puts everything in \/usr\/local).\nWith Brew Cask I also installed Spyder 4.\nIn Sypder 4 I didn't find openpyxl so, following a guide, I tried to change the interpreter selecting the Python 3.9 installed under \/usr\/local (here the path \"\/usr\/local\/Cellar\/python@3.9\/3.9.1_8\/Frameworks\/Python.framework\/Versions\/3.9\/bin\/python3.9\").\nI get this error\n\n\"Your Python environment or installation doesn't have the spyder\u2011kernels module or the right version of it installed (>= 1.10.0 and < 1.11.0). Without this module is not possible for Spyder to create a console for you.\"\n\nI'm stuck and I need help. 
thanks.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":129,"Q_Id":66446558,"Users Score":0,"Answer":"Solved.\nI installed spyder-kernels using pip3.","Q_Score":0,"Tags":"python,macos,spyder,openpyxl","A_Id":66458912,"CreationDate":"2021-03-02T20:20:00.000","Title":"How to set Spyder 4 for my existing python environment (OSX)","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I need to align the text in a KivyMD button to the left, and I found solutions for default Kivy like text_size: self.size and halign='left'. Is a button still a label in KivyMD like in Kivy? If not, then how do you align the text inside it?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":213,"Q_Id":66447022,"Users Score":0,"Answer":"Try the following:\ntext_align:'center'","Q_Score":0,"Tags":"python,kivy,kivymd","A_Id":67648878,"CreationDate":"2021-03-02T20:56:00.000","Title":"how do i align text in kivymd button?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I created a twitter bot using the Tweepy module in Python. I've been looking through the documentation for Tweepy and cannot seem to find anything related to this. I just need to get the tweet id of tweets that reply to any of my tweets. I think maybe you could use API.search(), but there are no parameters related to replies to your own tweet.\nThanks in advance","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":219,"Q_Id":66448191,"Users Score":1,"Answer":"Use the api to get the most recent tweets from your account. Then, get your tweet ids.
Then get tweets that tag you with api.search(q='to:@yourhandle'). Now, for each tweet you searched, you can see if it has the attribute in_reply_to_status_id_str. If it does, you can get the tweet id from there and match with your current tweets.","Q_Score":0,"Tags":"python,twitter,tweepy","A_Id":66450251,"CreationDate":"2021-03-02T22:41:00.000","Title":"Tweepy get reply","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a string that is 5 GB in size, I would like to get the last 30 characters of the string. Is using the slice function the best way to get that substring, will it cause memory problem? Is it that another 5 GB will be created because a 4.99 GB and a 0.1 kb substring are created during the splitting process?","AnswerCount":4,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":127,"Q_Id":66449996,"Users Score":0,"Answer":"You can get the last 30 characters using string slicing e.g. name_of_string[-30:] to slice the last 30 characters. This won't create a new object for the rest of the string.","Q_Score":1,"Tags":"python,slice","A_Id":66450049,"CreationDate":"2021-03-03T02:38:00.000","Title":"Does python's slice function uses a lot of memory in this case?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I thought I understood python's pass-by-reference and pass-by-value processing...\ncan anyone explain what is the difference between pass-by-reference and pass-by-value in python","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":393,"Q_Id":66450894,"Users Score":0,"Answer":"Python doesn't really do either. Python does \"pass by assignment\". 
Python names are always references to objects. When you pass an object, the receiving function's parameter name gets another reference to the object.","Q_Score":0,"Tags":"python","A_Id":66450926,"CreationDate":"2021-03-03T04:40:00.000","Title":"how pass-by-reference and pass-by-value works in python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to use OR-Tools to solve the CVRP problem, and I know I can use the routing.IsVehicleUsed(assignment, vehicle_id) method to know whether a vehicle is used or not.\nCan I reuse a used vehicle?\nBecause I have a problem: when I set data['num_vehicles'] = 1, or-tools returns no result, but when I set data['num_vehicles'] = 4 I get a solution.\nThe data['vehicle_capacities'] cannot be modified, so I want the used vehicle to be able to start again when it returns to the start point","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":65,"Q_Id":66451830,"Users Score":0,"Answer":"Once a vehicle reaches its end node, it's over. End nodes are always the last node of a vehicle's route.\nYou should create some dummy nodes (duplicates of the depot) to simulate an unload; please take a look at the refuel station example in the sample directory on GitHub...","Q_Score":1,"Tags":"python-3.x,or-tools","A_Id":66471423,"CreationDate":"2021-03-03T06:25:00.000","Title":"Ortools CVRP reuse the used vehicle","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying\u00a0to connect C++ (server) and Python (client) using a socket and want to send shared data using shared memory as well as message sending.
The data is in the format of a CSV file which is created by C++.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":256,"Q_Id":66455259,"Users Score":0,"Answer":"If you are on Windows you can try Memurai. It's a really fast data store. It works like a charm","Q_Score":0,"Tags":"python,c++,sockets,shared-memory","A_Id":66494014,"CreationDate":"2021-03-03T10:33:00.000","Title":"shared memory between c++ and python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have around 30 machines and I plan to use them all to run a distributed training python script. They all have the same username, in case this helps. I am on Windows and need to use the GPU, in case this helps too. They all have the same python version and the necessary software installations, but I would need them to have the same versions of modules installed (pandas, matplotlib, etc).\nMy approach: I initially used one machine to run python -m venv myenv and then did pip install -r requirements.txt. I put the folder on a network drive and had all machines change directory to that network drive. Since they all have the same username I thought it would be ok. It worked on a couple of them but not all of them. The alternative solution would be to have all machines run the commands python -m venv myenv and pip install -r requirements.txt, but wouldn't this be less than ideal? What if I have to add a module?
Does anyone have any suggestions?\nEDIT: I am hoping for an alternative solution to Docker.","AnswerCount":3,"Available Count":2,"Score":0.0665680765,"is_accepted":false,"ViewCount":64,"Q_Id":66459834,"Users Score":1,"Answer":"Maybe you could use a conda environment and then export it:\nconda-env export -n myenvname > myenvfile.yml\nand then import it on the other machines:\nconda-env create -n venv -f=myenvfile.yml\nAlternatively, you could use Docker and share the image on all the machines","Q_Score":1,"Tags":"python,python-venv","A_Id":66459973,"CreationDate":"2021-03-03T15:14:00.000","Title":"Alternatives to copying the folder created with venv for many machines to use the environment","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have around 30 machines and I plan to use them all to run a distributed training python script. They all have the same username, in case this helps. I am on Windows and need to use the GPU, in case this helps too. They all have the same python version and the necessary software installations, but I would need them to have the same versions of modules installed (pandas, matplotlib, etc).\nMy approach: I initially used one machine to run python -m venv myenv and then did pip install -r requirements.txt. I put the folder on a network drive and had all machines change directory to that network drive. Since they all have the same username I thought it would be ok. It worked on a couple of them but not all of them. The alternative solution would be to have all machines run the commands python -m venv myenv and pip install -r requirements.txt, but wouldn't this be less than ideal? What if I have to add a module?
Does anyone have any suggestions?\nEDIT: I am hoping for an alternative solution to Docker.","AnswerCount":3,"Available Count":2,"Score":0.0665680765,"is_accepted":false,"ViewCount":64,"Q_Id":66459834,"Users Score":1,"Answer":"Although this is slightly outside the realm of pure-python, creating a docker container for the execution, completing a setup with build instructions, would allow complete isolation of that venv from its surrounding machine setup.\nDocker allows for multiple instances running on the same box to share the same definitions, so if you have a situation where you're running 5 copies of the same code, it can re-use the same memory IIRC for the shared in-mem code runtimes. I've read this, haven't tried the memory spec though.","Q_Score":1,"Tags":"python,python-venv","A_Id":66460022,"CreationDate":"2021-03-03T15:14:00.000","Title":"Alternatives to copying the folder created with venv for many machines to use the environment","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've been exploring how async works in Python. So far, I've made a few simple MQTT-based async mini-services (get a message, do something, maybe retrieve something, send a message).\nFor my next mini-project, I'm trying to tie Discord and MQTT together. The goal is to have discord messages appear over MQTT, and mqtt messages on discord. I've got an async discord-client object, and an async mqtt-client object. Both work fine, but connecting them is a bit of an issue.\nMy current approach is to have the Discord object be 'leading', while I put the MQTT object in the Discord object (discord-client.mqtt-client = mqtt-client, meaning I can do things like await self.mqtt-client.publish(). This appears to work, so far.\nMy problem is that this approach feels a bit wrong. Is this a normal approach? 
Are there other approaches?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":37,"Q_Id":66461157,"Users Score":0,"Answer":"Answering my own question, with help.\nHaving nested async objects works, but 'smells wrong'. It also makes for odd looking (and therefore difficult to maintain) code, and likely introduces weird bugs and such later on.\nA better approach is to make use of asyncio.Queue, where each of the two client-objects has one queue to write to, and one to read from. Like this:\ndiscord-client -> discord-to-mqtt-queue -> mqtt-client\nmqtt-client -> mqtt-to-discord-queue -> discord-client\n(Not the actual naming of the objects)\nThis works, makes sense, and the resulting code is readable.","Q_Score":1,"Tags":"python,asynchronous,python-asyncio","A_Id":66478637,"CreationDate":"2021-03-03T16:31:00.000","Title":"Two async objects interacting","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using Tensorflow to solve a regression problem with known dynamic components, that is, we are aware that the (singular) label at time t depends on some dynamic state of the environment, but this feature is unknown. The initial attempt to solve the problem via simple regression has, understandably, failed, confirming our assumption that there is some kind of dynamic influence by a feature we have no access to.\nHowever, the state of the environment at time t should be reflected somewhere in the features and labels (and in particular their interplay) known at times t0-n, where n > 0. Unfortunately, because of the nature of the problem, the output at time t heavily depends on the input at time t, about as much as it depends on the dynamic state of the environment. 
I am worried that this renders the approach I wanted to try ineffective - time series forecasting, in my understanding, would consider features from previous timesteps, but no inputs on the current timestep. Additionally, I know labels from previous timesteps, but not at the time at which I want to make my prediction.\nHere is a table to illustrate the problem:\n\n\n\n\nt\ninput\noutput\n\n\n\n\n0\nx(t=0)\ny(t=0)\n\n\n...\n...\n...\n\n\nt0-1\nx(t=t0-1)\ny(t=t0-1)\n\n\nt0\nx(t=t0)\ny(t=t0)=?\n\n\n\n\n\nHow can I use all the information at my disposal to predict the value of y(t=t0), using x(t=t0) (where x is the array of input features) and a defined window of features and labels at previous timesteps?\nIs there an established method for solving a problem like this, either using a neural net or perhaps even a different model?\nDoes this problem require a combination of methods, and if so, which ones might be suitable for tackling it?\n\nThe final model is meant to be deployed and continue working for future time windows as well. We know the size of the relevant time window to be roughly 100 time steps into the past.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":39,"Q_Id":66461368,"Users Score":0,"Answer":"The kind of problem I have described is, as I have since learned, linked to so-called exogenous variables. In my case, I require something called NNARX, which is similar to the ARMAX model at its core, but (as a neural net) can take non-linearity into account.\nThe general idea is to introduce an LSTM layer which acts as an Encoder for the historical input, which is then coupled to another input layer with the exogenous variables. 
Both are coupled at the so-called Decoder - the rest of the NN architecture.","Q_Score":1,"Tags":"python,tensorflow,machine-learning,neural-network,time-series","A_Id":66527744,"CreationDate":"2021-03-03T16:45:00.000","Title":"Neural Network Regression - Considering a dynamic state","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I manually uploaded a CSV to S3 and then copied it into redshift and ran the queries. I want to build a website where you can enter data and have it automatically run the queries when the data is entered and show the results of the queries.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":16,"Q_Id":66462945,"Users Score":1,"Answer":"Amazon Redshift does not have Triggers. Therefore, it is not possible to 'trigger' an action when data is loaded into Redshift.\nInstead, whatever process you use to load the data will also need to run the queries.","Q_Score":0,"Tags":"python,amazon-web-services,amazon-redshift","A_Id":66464775,"CreationDate":"2021-03-03T18:38:00.000","Title":"Is it possible to upload a CSV to redshift and have it automatically run and export the saved queries?","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I try several ways to handle ElementClickInterceptedException\n\nlike wait the driver\nget the focus and click.\ntry to exec a javascript click\n\nDo you have any ideas how to manage this kind of error ?\nI can put a github repo in attachement","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":51,"Q_Id":66463195,"Users Score":0,"Answer":"I simply delete a blocking element in the 
DOM.","Q_Score":0,"Tags":"python-3.x,selenium,webdriver","A_Id":66600218,"CreationDate":"2021-03-03T18:54:00.000","Title":"How to handle ElementClickInterceptedException selenium python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"My machine learning model dataset is the Cleveland database with 300 rows and 14 attributes--predicting whether a person has heart disease or not.\nThe aim is to create a classification model using logistic regression...\nI preprocessed the data and ran the model with x_train,Y_train,X_test,Y_test.. and received an average of 82 % accuracy...\nSo to improve the accuracy I removed features that are highly correlated to each other [as they would give the same information]\nAnd I did RFE [recursive feature elimination]\nfollowed by PCA [principal component analysis] for dimensionality reduction...\nStill I didn't find the dataset to be better in accuracy..\nWhy is that?\nAlso why does my model show different accuracy each time? Is it because of taking different x_train,Y_train,X_test,Y_test each time?\nShould I change my model for better accuracy?\nIs 80 % average good or bad accuracy?","AnswerCount":3,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":990,"Q_Id":66473248,"Users Score":1,"Answer":"Should I change my model for better accuracy?\n\nAt least you could try to. The selection of the right model is highly dependent on the concrete use case. Trying out other approaches is never a bad idea :)\nAnother idea would be to get the two features with the highest variance via PCA.
Then you could plot this in 2D space to get a better feel for whether your data is linearly separable.\n\nAlso why does my model show different accuracy each time?\n\nI am assuming you are using the train_test_split method of scikit-learn to split your data?\nBy default, this method shuffles your data randomly. You could set the random_state parameter to a fixed value to obtain reproducible results.","Q_Score":1,"Tags":"python,machine-learning,logistic-regression","A_Id":66473508,"CreationDate":"2021-03-04T10:32:00.000","Title":"How to Increase accuracy and precision for my logistic regression model?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using Windows Subsystem for Linux (WSL) with the Ubuntu app (Ubuntu 20.04 LTS). I have installed Anaconda (Anaconda3-2020.11-Linux-x86_64) on my Windows 10 Education 1909. I have Jupyter notebook, and can run this in Firefox on my computer and it seems to be working properly.
However, when I try to install packages such as:\nUbuntu console: pip install scrapy\nthen the Jupyter notebook cannot find it.\nJupyter notebook: import scrapy\nI am currently working in the base environment, but I believe that Jupyter is actually running python from a different source (I also have Anaconda on my Windows).\nI confirmed this by running:\nimport sys and sys.version both in the WSL and in the Jupyter notebook.\nJupyter notebook returns: '3.6.6 |Anaconda, Inc.| (default, Oct 9 2018, 12:34:16) \\n[GCC 7.3.0]'\nWSL returns: '3.8.5 (default, Sep 4 2020, 07:30:14) \\n[GCC 7.3.0]'\nconfirming that the \"wrong python is used\".\nI am hesitant to delete my Windows Anaconda since I have my precious environments all set up there and am using them constantly.\nThe specific package that forces me to Linux can be found at \"http:\/\/www.nupack.org\/downloads\" but requires registration for downloads.\nI do not have Anaconda or python in my Windows environment variables.\nI would be happy if I either knew where to install my packages (as long as they are in Linux), or if someone knew how to force Jupyter to use the Anaconda from WSL.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":558,"Q_Id":66474736,"Users Score":0,"Answer":"Thanks to Panagiotis Kanavos I found out that I had both Anaconda3 and Miniconda3 installed and that the WSL command line used the Miniconda3 version while Jupyter Notebook used Anaconda3.\nThere is probably a way of specifying which version to use, but for me I simply deleted Miniconda and it now works.","Q_Score":1,"Tags":"python,jupyter-notebook,windows-subsystem-for-linux,anaconda3","A_Id":66489815,"CreationDate":"2021-03-04T12:09:00.000","Title":"Anaconda on Windows Subsystem for Linux (WSL) uses the \"wrong\" Anaconda when creating a Jupyter Notebook","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":1},
{"Question":"I am trying to detect anomalies in a time series that controls battery voltage output. I find that my original dataset has some outliers. In this case, do I need to remove those points using the interquartile range (IQR) or z-score (before using the LSTM Keras model, of course)?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":224,"Q_Id":66477338,"Users Score":1,"Answer":"Removing or not removing outliers all depends on what you are trying to achieve. You write here that your goal is anomaly detection...so at first glance it seems like a poor idea to remove points or values that you are trying to detect. However, if you detect values that are of such a nature that they cannot even be due to plausible anomalies, then yes, you should remove them. In all other cases you should consider keeping them.","Q_Score":0,"Tags":"python,pandas,jupyter-notebook,statistics,outliers","A_Id":66479588,"CreationDate":"2021-03-04T14:44:00.000","Title":"Is it necessary to discard outliers before applying LSTM on time series","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is there a feasible package that helps encrypt an xlsx\/xls file without having to call the win32 COM API (such as pywin32 and xlwings)?\nThe goal is to protect the data from being viewed without the password.\nThe reason not to use pywin32 is that it'll trigger an Excel instance to manipulate Excel files.
For my use cases, all scripts are centrally executed on a server, and the server has issues with Excel instances or is very slow when opening Excel.\nPreviously I was stuck with reading password-protected Excel files, but this has been resolved by the msoffcrypto-tool package, which doesn't depend on the win32 COM API.\nPackages like openpyxl only provide workbook\/worksheet protection, which doesn't really stop others from viewing the data, so unfortunately this is a no-go.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":135,"Q_Id":66480173,"Users Score":0,"Answer":"Basically there's no effective workaround now. You still have to use a win32 API call to make it happen","Q_Score":0,"Tags":"python-3.x,excel,pywin32,win32com,password-encryption","A_Id":67614510,"CreationDate":"2021-03-04T17:35:00.000","Title":"Python Encrypt xlsx without calling win32 COM API","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Windows 10, Anaconda Spyder, Python\nTrying to convert a Word '.doc' file to PDF\nThe first step fails:\nimport comtypes.client\nword = comtypes.client.CreateObject('Word.Application')\n\nGet Error:\nTraceback (most recent call last):\nFile \"\", line 1, in \nword = comtypes.client.CreateObject('Word.Application')\nFile \"C:\\ProgramData\\Anaconda3\\lib\\site-packages\\comtypes\\client_init_.py\", line 250, in CreateObject\nreturn _manage(obj, clsid, interface=interface)\nFile \"C:\\ProgramData\\Anaconda3\\lib\\site-packages\\comtypes\\client_init_.py\", line 188, in _manage\nobj = GetBestInterface(obj)\nFile \"C:\\ProgramData\\Anaconda3\\lib\\site-packages\\comtypes\\client_init_.py\", line 112, in GetBestInterface\ninterface = getattr(mod, itf_name)\nAttributeError: module 'comtypes.gen.Word' has no attribute '_Application'\n\nMost websites seem to state that this should not happen???
","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":755,"Q_Id":66482132,"Users Score":0,"Answer":"Problem:\nThis problem is caused by incorrect COM Interop settings in the Windows registry.\n(not Python or its libraries)\nI've tested this using \"comtypes\" and \"win32api\", and multiple MS Office versions.\nThere seem to be issues with calls to COM objects regarding some MS Office versions.\n\nSolution 1:\n\nClick on your Start menu and open the Control Panel\n\nClick on Uninstall a Program (or Add\/Remove Programs in Windows XP)\n\nLocate the entry for Microsoft Office and click on it. After you click on it, you should see a button labelled Change appear either next to it, or at the top of the list (depending on what version of Windows you have). Click this Change button.\n\nOnce the Microsoft Office setup appears, choose the Repair option and click Next to have Microsoft Office repair itself. You may need to reboot your computer once this process is complete; Microsoft Office setup will tell you if you need to do this once it is done.\n\n\n\nSolution 2:\nInstall MS Office versions that are tested and functional with COM calls.\nHere are the results of the MS Office versions that I have tested:\nWorking MS Office versions: 2010, 2019, 365.\nNon-working MS Office versions: 2007, 2013.\n\nUseful COM registry paths to check:\nMS Word x64:\n\"HKEY_CLASSES_ROOT\\WOW6432Node\\Interface{00020970-0000-0000-C000-000000000046}\\TypeLib\"\nMS Word x32:\n\"HKEY_CLASSES_ROOT\\Interface{00020970-0000-0000-C000-000000000046}\\TypeLib\"\nBoth:\n\"HKEY_CLASSES_ROOT\\WOW6432Node\\TypeLib{00020970-0000-0000-C000-000000000046}\"\n\n8.5 is for Office 2010\n8.6 is for Office 2013\n8.7 is for Office 2016\n\nCOM Interfaces:\n\"HKEY_CLASSES_ROOT\\WOW6432Node\\Interface{000C033A-0000-0000-C000-000000000046}\\TypeLib\\Version\"\n\n2.5 is for Office 2010\n2.7 is for Office 2013\n2.8 is for Office 2016
\n\n\"HKEY_CLASSES_ROOT\\WOW6432Node\\Interface{000C0339-0000-0000-C000-000000000046}\\TypeLib\\Version\"\n\n2.5 is for Office 2010\n2.7 is for Office 2013\n2.8 is for Office 2016","Q_Score":0,"Tags":"python,anaconda,windows-10,spyder,comtypes","A_Id":66571184,"CreationDate":"2021-03-04T19:55:00.000","Title":"word = comtypes.client.CreateObject('Word.Application') generates error","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am training a pretty intensive ML model using a GPU, and what will often happen is that if I start training the model, then let it train for a couple of epochs and notice that my changes have not made a significant difference in the loss\/accuracy, I will make edits, re-initialize the model and re-start training from epoch 0. In this case, I often get OOM errors.\nMy guess is that despite me overriding all the model variables, something is still taking up space in memory.\nIs there a way to clear the memory of the GPU in Tensorflow 1.15 so that I don't have to keep restarting the kernel each time I want to start training from scratch?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":161,"Q_Id":66482194,"Users Score":0,"Answer":"It depends on exactly what GPUs you're using.
I'm assuming you're using NVIDIA, but even then, depending on the exact GPU, there are three ways to do this:\n\nnvidia-smi -r works on TESLA and other modern variants.\nnvidia-smi --gpu-reset works on a variety of older GPUs.\nRebooting is the only option for the rest, unfortunately.","Q_Score":0,"Tags":"python-3.x,tensorflow,gpu","A_Id":66482805,"CreationDate":"2021-03-04T20:00:00.000","Title":"Clearing memory when training Machine Learning models with Tensorflow 1.15 on GPU","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"An analytic task has been given to me to solve with Python and return the result to the technical staff. I was asked to prepare the result in a Jupyter notebook, such that the resulting code would be fully runnable and documented.\nHonestly, I just started using Jupyter notebooks and generally found them pretty useful and convenient for generating reports integrated with code and figures. But I ran into some difficulty when I wanted to use specific packages like graphviz and dtreeviz, where setup was beyond doing a simple pip install xxx.\nSo, how should I make sure that my code is runnable when I do not know what packages are available in the destination Jupyter notebook of the next person who wants to run it, or when they want to run it using JupyterLab? Especially regarding these particular packages!","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":31,"Q_Id":66483484,"Users Score":1,"Answer":"One solution for your problem would be to use Docker to develop and deploy your project.\nYou can define all your dependencies, create your project and build a Docker image with them.
With this image, you can be sure that anyone who uses it will have the same infrastructure as yours.\nIt shouldn't take you a lot of time to learn Docker, and it will help you in the future.","Q_Score":0,"Tags":"jupyter-notebook,ipython,graphviz,runnable,jupyter-lab","A_Id":66759240,"CreationDate":"2021-03-04T21:42:00.000","Title":"How to make sure my jupyter notebook is runnable on any other computer or on any jupyter Lab?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have Python code that I want to run on a GPU server. Running it is time-consuming, and sometimes I get disconnected from the Internet or the server and I have to re-run it. So, I need to let it run and shut down my computer. Is there any way?","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":270,"Q_Id":66484631,"Users Score":0,"Answer":"If it is a Windows server, create a bat file.\nRun the python script and then the shutdown command.
You will have to be an admin to shutdown the computer from a script.\nbat file\nc:\\python27\\python.exe c:\\somescript.py %*\nshutdown \/s \/f \/t 0","Q_Score":0,"Tags":"python,server,gpu,taskscheduler","A_Id":66484766,"CreationDate":"2021-03-04T23:37:00.000","Title":"Executing a code on server while shutdown the computer","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to plot the surface : W = (X*sin(Y)+Z)^2\nIs there a way to do it in Python?\nIn general tests I did on Google I did not find anything relevant.\nI would appreciate your help.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":128,"Q_Id":66491189,"Users Score":0,"Answer":"It depends on what you want to see. If you want to see samples of the function, you could go for a parallel coordinates plot. You could also fix x,y or z, so you get a 3-dimensional surface. You definitely need some interaction or constraint to plot this function, since it has 3 degrees-of-freedom.","Q_Score":0,"Tags":"python","A_Id":66491373,"CreationDate":"2021-03-05T10:50:00.000","Title":"How to plot 4-dimensional surface in python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have created python desktop software. Now I want to market that as a product. But my problem is, anyone can decompile my exe file and they will get the actual code.\nSo is there any way to encrypt my code and convert it to exe before deployment. I have tried different ways.\nBut nothing is working. 
Is there any way to do that? Thanks in advance","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":528,"Q_Id":66491254,"Users Score":0,"Answer":"You can install PyInstaller via pip install pyinstaller (make sure to also add it to your environment variables) and then open a shell in the folder where your file is (shift+right-click somewhere where no file is and \"open PowerShell here\") and then do \"pyinstaller --onefile YOUR_FILE\".\nIf a dist folder is created, take the exe file out of it and delete the build folder and the .spec file.\nAnd there you go with your standalone exe file.","Q_Score":2,"Tags":"python,exe","A_Id":66491565,"CreationDate":"2021-03-05T10:55:00.000","Title":"How to encrypt and convert my python project to exe","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm trying to create a Twitter bot, but when I install the tweepy package it gives me the following error:\nModuleNotFoundError: No module named 'tweepy'\nI've tried uninstalling and reinstalling tweepy and it still doesn't work. I'm running Python 3.9.2","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":55,"Q_Id":66492918,"Users Score":0,"Answer":"Found a solution: just type pip install tweepy into the terminal next to the console","Q_Score":0,"Tags":"python-3.x","A_Id":66495391,"CreationDate":"2021-03-05T12:50:00.000","Title":"No module named 'tweepy'","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I updated my R to the most recent version 4.0 and tried to install the 'umx' package, which worked fine when I had the 3.6.3 version.
I then changed my R back to 3.6.3 version and the 'umx' package still won't install. I get the below error:\n\ninstalling source package \u2018kableExtra\u2019 ...\n** package \u2018kableExtra\u2019 successfully unpacked and MD5 sums checked\n** using staged installation\n** R\n** inst\n** byte-compile and prepare package for lazy loading\nError in dyn.load(file, DLLpath = DLLpath, ...) :\nunable to load shared object '\/Library\/Frameworks\/R.framework\/Versions\/3.6\/Resources\/library\/gdtools\/libs\/gdtools.so':\ndlopen(\/Library\/Frameworks\/R.framework\/Versions\/3.6\/Resources\/library\/gdtools\/libs\/gdtools.so, 6): Library not loaded: \/opt\/X11\/lib\/libcairo.2.dylib\nReferenced from: \/Library\/Frameworks\/R.framework\/Versions\/3.6\/Resources\/library\/gdtools\/libs\/gdtools.so\nReason: image not found\nCalls: ... asNamespace -> loadNamespace -> library.dynam -> dyn.load\nExecution halted\nERROR: lazy loading failed for package \u2018kableExtra\u2019\nremoving \u2018\/Library\/Frameworks\/R.framework\/Versions\/3.6\/Resources\/library\/kableExtra\u2019\nWarning in install.packages :\ninstallation of package \u2018kableExtra\u2019 had non-zero exit status\n\nThe downloaded source packages are in\n\u2018\/private\/var\/folders\/88\/d4_sv_l174vcbkn5f8r6ytjc0000gn\/T\/RtmpSb59yk\/downloaded_packages\u2019\nNot sure why this is as everything was fine before I updated my R, and my expectation was that it would be fine again when going back to my original R version but this is not the case. Any help with this would be greatly appreciated!\nThanks,\nRionagh","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":162,"Q_Id":66494953,"Users Score":0,"Answer":"I was getting the same error for the last 2 days and then I wrote an issue in GitHub to the person who made this package. One of people answered my question by suggesting to update the version of R. 
Like you, I was using the 3.6.2 version of R, and then I followed his suggestion and updated my R version to 4.0.4. He also suggested installing the \"systemfonts\" package before installing kableExtra, by running install.packages('systemfonts', dependencies = TRUE). Yet I think it was because of the old version that I was using. So the bottom line is, if you update your R to 4.0.4, and run these 2 commands:\ninstall.packages('systemfonts', dependencies = TRUE)\ninstall.packages('kableExtra', repos='https:\/\/cran.ma.imperial.ac.uk\/')\nit is just going to work I guess. I mean it worked for me.","Q_Score":2,"Tags":"python,r,installation","A_Id":66537632,"CreationDate":"2021-03-05T15:08:00.000","Title":"Not able to install umx packaging in R 3.6.3","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a Dataframe with the following structure\n\n\n\n\ntime_start\ntime_end\nlabel\n\n\n\n\ntime\ntime + 1\naction\n\n\ntime + 1\ntime + 2\nsome_other_action\n\n\n\n\nI would like to see the diff of time_start and the previous row's time_end. 
In this case, (time + 1) - (time + 1) = 0\nI have tried df.diff, but that only yields the diff within either columns or rows.\nAny ideas of how I can perform this \"jagged diff\"?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":89,"Q_Id":66494998,"Users Score":1,"Answer":"The solution was what @Quang Hoang mentioned in a comment.\ndf['time_start'] - df['time_end'].shift()","Q_Score":0,"Tags":"python,pandas,datetime","A_Id":66495298,"CreationDate":"2021-03-05T15:10:00.000","Title":"Pandas dataframe diff between rows with column offset","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Pretty much what the title says - no matter what I run, even if it's just a print(\"hello\"), it will get stuck executing it.\nI'm using the most recent Spyder version, updated today. I turned off my antivirus, started as administrator, started from the anaconda prompt, tried to run the file, run a cell, run a selection - it always gets stuck.\nThe only way I managed to run some code was to first run Debug then exit Debug. After that, I could execute everything normally, but only until I restarted Spyder. Now not even the Debug trick will work.\nIt's a new PC with Windows 10. I also use Avira and Malwarebytes, but I turned them off while testing this.\nI really have no clue what to do; I spent a bunch of time Googling it and found some people with similar issues but none of them got answers. It doesn't even print an error I could look up...","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":778,"Q_Id":66495958,"Users Score":1,"Answer":"I've just run into the same issue on a new computer (spyder 5.0.5, spyder-kernels 2.0.5, ipython 7.27.0). 
Trying to run a file within the current IPython console causes the console to hang.\nThe workaround I found is to change the run configuration to \"execute in a dedicated console\".","Q_Score":3,"Tags":"python,console,spyder","A_Id":69446181,"CreationDate":"2021-03-05T16:15:00.000","Title":"iPython console in Spyder IDE hangs on any code execution","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have one API; it is a flask application with python deployed on AWS EC2. Some endpoints need to connect to AWS Keyspaces to make a query. But the method cluster.connect() is too slow; it takes 5 seconds to connect and then run the query.\nWhat I did to solve it was to start a connection when the application starts (when a commit is done on the master branch; I'm using CodePipeline), and then the connection is open all the time.\nI didn't find anything in the python cassandra driver documentation against this; is there any potential problem with this solution that I found?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":273,"Q_Id":66496938,"Users Score":2,"Answer":"It's the recommended way - open the connection at start and keep it (and have one connection per application). Opening a connection to a Cassandra cluster is an expensive operation, because besides the connection itself, the driver discovers the topology of the cluster, calculates token ranges, and does many other things. 
Usually, for \"normal\" Cassandra this shouldn't take very long (but is still expensive), and AWS's emulation may add additional latency on top of it.","Q_Score":2,"Tags":"python,amazon-web-services,cassandra,amazon-keyspaces","A_Id":66497975,"CreationDate":"2021-03-05T17:24:00.000","Title":"Why is connecting to AWS keyspaces so slow with python's cassandra-driver?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am currently using an offline Windows 10 environment and I need to use pip to install a package.\nThe traditional pip install \"myPackage.tar.gz\" is not working because pip makes a network request, which fails because my machine has no internet.\nWith this in mind I tried the following command to ignore dependency checking: pip install myPackage.tar.gz -f.\/ --no-index --no-deps. The command did install \u201csuccessfully\u201d but when I tried to use the package I got a ModuleNotFoundError.\nMy question is does pip work offline? 
If not, what would be a workaround?\nThank you,\nMarco","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":117,"Q_Id":66496944,"Users Score":1,"Answer":"Do you have those packages already downloaded on your computer?\nIf not, at some point you will need an internet connection.\nThere's the pip download command which lets you download packages without installing them:\npip download -r requirements.txt\n(In previous versions of pip, this was spelled pip install --download -r requirements.txt.)\nWhen you have your packages downloaded, use pip install --no-index --find-links \/path\/to\/download\/dir\/ -r requirements.txt to install what you have previously downloaded, without needing internet access","Q_Score":1,"Tags":"python,python-3.x,windows,pip,package","A_Id":66497037,"CreationDate":"2021-03-05T17:25:00.000","Title":"Does Python pip work in offline environments?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to install a module with pip, in my case pydub.\nHowever pip install pydub installs pydub in \/home\/\/.local\/lib\/python2.7\/\nSo when I run my script\npython3 myScript.py\nit tells me\nModuleNotFoundError: No module named 'pydub'\nHow do I get pip to install pydub for 3.x rather than 2.7?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":178,"Q_Id":66498764,"Users Score":1,"Answer":"Use pip3 install xxx, or better yet, python3 -m pip install xxx. 
The problem here is that by default pip is aliased to python2's installation.","Q_Score":0,"Tags":"python-3.x,python-2.7,pip,pydub","A_Id":66499060,"CreationDate":"2021-03-05T19:47:00.000","Title":"pip installs modules for python 2.7","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"So I had to create a linear regression in python, but this dataset has over 800 columns. Is there any way to see which columns are contributing most to the linear regression model? Thank you.","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":107,"Q_Id":66499686,"Users Score":0,"Answer":"Look at the coefficients for each of the features. Ignore the sign of the coefficient:\n\nA large absolute value means the feature is heavily contributing.\nA value close to zero means the feature is not contributing much.\nA value of zero means the feature is not contributing at all.","Q_Score":0,"Tags":"python","A_Id":66499816,"CreationDate":"2021-03-05T21:11:00.000","Title":"Top features of linear regression in python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"When I loaded a Jupyter notebook in VS Code, I usually saw the message \"Connecting to Jupyter kernel: connecting to kernel\".\nThen all of a sudden VS Code stopped connecting to the Jupyter server.\nI installed notebook again using pip install notebook\nNow my Jupyter notebooks seem to work, however, I do not see the message \"Connecting to Jupyter kernel: connecting to kernel\"\nInstead, a message appears \"Couldn't find kernel 'Python 3.7.0 64-bit' that the notebook was created with. Using the current interpreter.\"\nIt is annoying. 
How can I get rid of it?\nP.S. I did not update Python itself and it is the same 3.7.0 64-bit version as before. The Python extension for VS Code is also unchanged.\nThe notebook seems to run correctly, though.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":79,"Q_Id":66502427,"Users Score":1,"Answer":"It appears that there is some conflict with the initial creation of the notebook. If you save the notebook as a new file, it should clear the conflict.\nYou can Save as... by using the shortcut Cmd + Shift + S or going to File and clicking Save as... Once you do that, you can change the name to a new file and try opening it again.","Q_Score":0,"Tags":"python,visual-studio-code,jupyter-notebook","A_Id":66502498,"CreationDate":"2021-03-06T04:38:00.000","Title":"Couldn't find kernel 'Python 3.7.0 64-bit' that the notebook was created with","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am creating an ERP web application using Django. How can I connect multiple apps inside a project to one database? I am using the PostgreSQL database, and also how can I centralize the database for all modules of the ERP? 
How can I perform operations in another module and check whether the user is authenticated or not?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":220,"Q_Id":66504952,"Users Score":0,"Answer":"Your apps use only the database(s) set up in your settings.py file.","Q_Score":3,"Tags":"python-3.x,django,postgresql","A_Id":66511623,"CreationDate":"2021-03-06T11:01:00.000","Title":"Multiple applications inside a Django project","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Let's say I have a matrix M which looks like:\n[a b c l]\n[d e f k]\n[g h i o]\nI have to pick one element from every column such that I can minimise the maximum value of all those elements.\nFor eg. If I pick a,e,i,k my answer would be max(a,e,i,k)\nTo find the most optimum solution, is there any approach which is better than O(n^3)?\nI would appreciate some sort of pseudocode\/snippet of code if possible.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":43,"Q_Id":66505219,"Users Score":0,"Answer":"Pick the minimum value of each column and compute the maximum (M) of those. Although there would be other combinations that produce the same maximum value, M cannot be further reduced because any value in M's column would be larger, and picking (smaller) values in other columns would not change M, which is the smallest in its own column.\nIf you were looking to minimize the total of these values, then there would be a need for some additional optimization, perhaps using dynamic programming. 
As it is, the problem is simple math.","Q_Score":0,"Tags":"python,matrix,dynamic-programming","A_Id":66506270,"CreationDate":"2021-03-06T11:38:00.000","Title":"Pick optimal combination from matrix","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"So I want to get all the messages from the telegram servers I'm in and want to display the messages in the terminal.\nI used the telethon python library but the delay is too high; it takes 2 to 3 seconds to fetch a message after it has already appeared in the browser.\nThanks.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":337,"Q_Id":66505327,"Users Score":1,"Answer":"The best way I found is to use the pyrogram library; it displays messages faster than the Telegram App itself, which is what I wanted.","Q_Score":1,"Tags":"python,api,python-requests,telegram","A_Id":71085998,"CreationDate":"2021-03-06T11:50:00.000","Title":"How to get telegram messages without any delay from any server using the API or is there a get request that I can use?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am looking for a method to get information about a \"trend\" regarding some hashtag\/key word on Twitter. Let's say I want to measure how often the hashtag\/key word \"Python\" is tweeted over time. For instance, today, \"Python\" is tweeted on average every 1 minute but yesterday it was tweeted on average every 2 minutes.\nI have tried various options but I am always bouncing off the twitter API limitations, i.e. 
if I try to download all tweets for a hashtag during the last (for example) day, only a certain fraction of the tweets is downloaded (via tweepy.cursor).\nDo you have any ideas \/ script examples of achieving similar results? Libraries or guides to recommend? I did not find any help searching on the internet. Thank you.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":388,"Q_Id":66505663,"Users Score":0,"Answer":"Try a library called:\nGetOldTweets or GetOldTweets3\nTwitter Search, and by extension its API, are not meant to be an exhaustive source of tweets. The Twitter Streaming API places a limit of just one week on how far back tweets that match the input parameters can be extracted. So in order to extract all historical tweets relevant to a set of search parameters for analysis, the Twitter Official API needs to be bypassed and custom libraries that mimic the Twitter Search Engine need to be used.","Q_Score":0,"Tags":"python,web-scraping,twitter,tweepy","A_Id":66505759,"CreationDate":"2021-03-06T12:33:00.000","Title":"Tweets scraping - how to measure tweeting intensity?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a long list of urls and some of them are \"parked-free\" by godaddy\nIs there any technical way to recognize such pages without opening them in a browser?\nThe page is technically live and\nrequests.head('url').status_code\nreturns 200 so it doesn't help\nTrying to get the content, I only receive an \"Enable Javascript...\" message\nI also tried to use some metatags but they are not visible in beautiful soup\nSelenium could probably help, but I would like to avoid it for this specific problem\nIs there any simpler solution for this?","AnswerCount":1,"Available 
Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":186,"Q_Id":66507046,"Users Score":2,"Answer":"If it's just godaddy, you can try either resolving the domain or trying to make a request with a random path (say, \/dkfiwifhe). The few domains I tested all resolve to 34.102.136.180, and return HTTP 200 for any path.\nOf course, this can change anytime and likely won't work on other parking sites; sedo resolves to 91.195.241.137, including all the subdomains (godaddy returns nxdomain for a random subdomain and the canonical naked domain for www), but returns 403 for any path.\nDepending on how many unique parking sites are in your list, you might as well just look up a list of parking site providers and craft a special script for each of them.\nAnother alternative is, some DNS providers allow filtering parked domains, so you can just attempt to resolve against them. Service recommendation is off topic though, so you can just google them yourself.","Q_Score":2,"Tags":"python,beautifulsoup,python-requests,urllib2","A_Id":66507468,"CreationDate":"2021-03-06T14:59:00.000","Title":"How can I recognize a \"parked free\" website?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a dataset that I need to process batchwise (due to API restrictions).\nThe sum of the column text_length of a batch cannot exceed 1000. And the maximum number of rows in a batch cannot be greater than 5.\nFor that I would like to add batch numbers to the single batches in order to process the data based on batch_numbers later.\nHow can I achieve that in pyspark (in Databricks)? 
I am so new to all of this that I don't even know what to look for online.\nI really appreciate your help.\nThe tables below illustrate what I am trying to achieve:\nOriginal table\n\n\n\n\nid\ntext_length\n\n\n\n\n1\n500\n\n\n2\n400\n\n\n3\n200\n\n\n4\n300\n\n\n5\n100\n\n\n6\n100\n\n\n7\n100\n\n\n8\n100\n\n\n9\n100\n\n\n10\n300\n\n\n\n\nResulting table\n\n\n\n\nid\ntext_length\nbatch_number\n\n\n\n\n1\n500\n1\n\n\n2\n400\n1\n\n\n3\n200\n2\n\n\n4\n300\n2\n\n\n5\n100\n2\n\n\n6\n100\n2\n\n\n7\n100\n2\n\n\n8\n100\n3\n\n\n9\n100\n3\n\n\n10\n300\n3","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":411,"Q_Id":66507942,"Users Score":1,"Answer":"Unlike what mck states, this is not the \"partition problem\".\nThe issues are that 1) Spark works with partitions - not just 1 such partition - in order to be effective, and 2) there is no grouping attribute to ensure that 'batches' can form naturally or be distilled naturally within a partition only. Moreover, can we have negative numbers or fractions? - this is not stated. Max 5 entries is stated, however.\n\nThis means that processing would need to be based on just one partition, but it may not be big enough, aka OOM.\n\nTrying to process per partition is pointless as all work would need to be done per partition N, N+1, and so on due to offset effects in partition N-1. I have worked out a solution here on SO that took partition boundaries into account, but this is against the principle of Spark and the use case was more simplistic.\n\nActually not a Spark use case. 
It is a sequential algorithm as opposed to a parallel algorithm; use PL\/SQL, Scala, JAVA, C++.\n\nThe only way would be:\n\nlooping over a fixed size partition that has had zipWithIndex applied globally (for safety)\n\nprocess with Scala into batches - temp result\ntake all items from last created batch and union with next partition\nremove last batch from temp results\nrepeat cycle\n\n\n\n\nNB: Approximations to get around partitioning boundary aspects of data seem not to work --> the other answer proves that in fact. You get a compromise result, not the actual answer. And to correct it is not in fact that easy, as batches have gaps and may be in other partitions as a result of grouping.","Q_Score":3,"Tags":"python,dataframe,apache-spark,pyspark","A_Id":66517929,"CreationDate":"2021-03-06T16:25:00.000","Title":"Add batch number to DataFrame based on moving sum in spark","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"When I'm calling --onefile on my pyinstaller to get a file into an exe, I don't want to have to delete all these useless folders and files after I got the exe I wanted. Is there a way to tell it to not make these folders?","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":37,"Q_Id":66511429,"Users Score":0,"Answer":"I don't think you can. It needs all those folders and stuff to package the executable. You'll either have to suck it up or make a script that deletes them for you.\nEdit: I haven't used pyinstaller, but I do have experience with python and compiled languages, so I would guess that packaging a non-compiled language like python would need a lot of \"junk\" files. 
Also, if it ain't broke, don't fix it.","Q_Score":1,"Tags":"python,pyinstaller,exe","A_Id":66511529,"CreationDate":"2021-03-06T22:44:00.000","Title":"How do I get pyinstaller to not generate those extra folders","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"When I'm calling --onefile on my pyinstaller to get a file into an exe, I don't want to have to delete all these useless folders and files after I got the exe I wanted. Is there a way to tell it to not make these folders?","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":37,"Q_Id":66511429,"Users Score":0,"Answer":"Try -F to create a one-file bundled executable.","Q_Score":1,"Tags":"python,pyinstaller,exe","A_Id":66524894,"CreationDate":"2021-03-06T22:44:00.000","Title":"How do I get pyinstaller to not generate those extra folders","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Why does the numpy.histogram return values of hist and bin_edges not have the same size? Instead bin_edges has (length(hist)+1). This becomes an issue when attempting a best fit line for the histogram because then the two operates have different sizes. What can I do to make the two match? Do I just trim off the last value from bin_edges? 
Which value from bin_edges doesn't correspond to its respective hist value?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":280,"Q_Id":66511554,"Users Score":0,"Answer":"If I ask you to count the number of people in a room by age ranges, you might tell me:\n\n10-17 years: one person\n18-29 years: three people\n30-50 years: two people\n\nThat's three bins but four edges (10, 18, 30, 50 in the way NumPy reports edges).\nIf you want to convert those four edges to three values which somehow identify the bins, you could:\n\nUse the lower value to represent each range (i.e. discard the last edge returned by NumPy).\nUse the upper value.\nUse the midpoint of each range.\nUse the mean, median or mode of each group's values.\n\nIt's up to you, NumPy isn't making this choice for you.","Q_Score":0,"Tags":"python,numpy","A_Id":66511605,"CreationDate":"2021-03-06T22:59:00.000","Title":"numpy.histogram return values aren't the same size","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm working on a 2d game in pygame, and I want it to be available on the web. In my head, a fun way to challenge myself is to use web assembly to make the game available on the web. 
So I'm wondering if there is any way to take an executable binary (.exe) and somehow \"compile\" it to .wasm (if that's still sayable at this point)","AnswerCount":2,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":732,"Q_Id":66511854,"Users Score":2,"Answer":"Running a .exe via WebAssembly (WASM) in the browser will be extremely slow and inefficient but is still possible with a Wasm PC emulator (eg v86) + an OS (eg ReactOS).\nBut if you can gain access to the game sourcecode, then what you could do is use CPython 3.11, which supports the WebAssembly target (with the help of Emscripten, a C\/C++ compiler targeting WASM).\nAdd to that the latest development version of PyGame and minor changes to the game main loop source code.\nYou can now run the game directly if you pack both sourcecode and assets with the emscripten File Packager; instead of a .exe you will have a .wasm (cpython) + one .data (assets+code) and some javascript glue to load them.\nPlease note that currently there is no straightforward tool to do that at once, but the situation will most likely evolve very quickly after the release of CPython 3.11.","Q_Score":1,"Tags":"python,google-chrome,web,pygame,webassembly","A_Id":71694919,"CreationDate":"2021-03-06T23:50:00.000","Title":"Is there any way to take executable binary (.exe) and somehow \"compile\" .wasm","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have cloned a random project and am trying to run it in my local system. I am using the visual studio code editor. 
After opening the extracted folder in the editor, I clicked on run.\nAt this point, a new chrome browser is opened and it's showing that the server is down.\nI tried setting up a virtual environment and using the command \"python manage.py runserver\".\nEven so, I am unable to test that project.\nCould I get some insights into getting rid of this issue as early as possible please?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":636,"Q_Id":66512924,"Users Score":0,"Answer":"First of all, check if you have already installed django with pip install django and try running python manage.py makemigrations and python manage.py migrate\nSecond, check for any config or env file that you need to complete to run the project.\nIf none of this works, share the error from your command prompt.","Q_Score":0,"Tags":"python,django,testing","A_Id":66512991,"CreationDate":"2021-03-07T03:27:00.000","Title":"How to run and test a django cloned project from github in visual studio code?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a python script whose dependencies are in a python environment. I want to convert the python script to an exe, but before that I want to run the python script without activating the environment, or automatically activate the environment when the script starts. Is there any way I can achieve this so it will be easier for other people to use the python script?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":55,"Q_Id":66514222,"Users Score":0,"Answer":"Well, I have found the answer. 
This can be done by adding just two lines at the start of your script.\nFor Python 3:\nactivate_this = '\/path\/to\/env\/bin\/activate_this.py'\nexec(compile(open(activate_this, \"rb\").read(), activate_this, 'exec'), dict(__file__=activate_this))\nFor Python 2:\nactivate_this = '\/path\/to\/env\/bin\/activate_this.py'\nexecfile(activate_this, dict(__file__=activate_this))","Q_Score":1,"Tags":"python","A_Id":66519795,"CreationDate":"2021-03-07T07:38:00.000","Title":"How can I run python script without activating python environment everytime?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"Is the module trax.fastmath of the TRAX package deprecated? I am using this module, but ModuleNotFoundError is returned.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":179,"Q_Id":66514636,"Users Score":0,"Answer":"I replaced trax.fastmath with trax.math and things worked perfectly fine. 
I am still not sure if this has been updated in the Trax documentation.","Q_Score":0,"Tags":"python,trax","A_Id":66515661,"CreationDate":"2021-03-07T08:39:00.000","Title":"Is the module trax.fastmath deprecated?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am new to scrapy.\nI get a list of li tags from an xpath selector: categories = categories_container.xpath('li')\nNow I have to find out which category element has a particular css class that is not in the other li tags.\nWhat can I do to find that li?\nThanks in advance.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":33,"Q_Id":66516838,"Users Score":0,"Answer":"categories_container.xpath('\/\/li[contains(@class, \"particular class\")]').getall()\nor css\ncategories_container.css('li.classname').getall()","Q_Score":0,"Tags":"python,scrapy","A_Id":66532875,"CreationDate":"2021-03-07T13:07:00.000","Title":"find element has the class or not in scrapy","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am looking for a solution that will allow me to use a py file as a little library.\nShort description:\nI have a py script and there are a lot of common functions in it.\nInstead of using a big file each time, I want to throw all common functions into a separate py file, put it into a Tools folder and use it when I need it.\nMy problem is that I cannot import this file from Tools, because my script does not see it.\nMy folder structure:\nC:\\some\\folders\\here\\my\\folder\\script.py\nC:\\some\\folders\\here\\Tools\\Library\\library.py\nAlso, it is not good for me to use init.py, because I don't have any python project; it is just one file 
without any other things.\nAre there any normal solutions?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":108,"Q_Id":66517385,"Users Score":1,"Answer":"The Python interpreter searches for modules in 3 places:\n\nthe current directory, meaning the directory from which you run the script with your import statement\nthe list of directories in PYTHONPATH\nan installation-dependent list of directories, which is configured when Python is installed\n\nYou can also modify sys.path at runtime and include the directory where your module is located, but this is the worst solution and it's usually discouraged.\nSetting PYTHONPATH will most likely be the solution in your situation.","Q_Score":1,"Tags":"python","A_Id":66517471,"CreationDate":"2021-03-07T14:07:00.000","Title":"How to use py file as external module without init","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have been trying to do some coding with sockets recently and found out that I often get broken pipe errors, especially when working with bad connections.\nIn an attempt to solve this problem I made sure to sleep after every socket operation. It works but it is slow and ugly.\nIs there any other way of making a socket connection more stable?","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":124,"Q_Id":66517476,"Users Score":1,"Answer":"...server and client getting out of sync\n\nBasically you are saying that your application is buggy. And the way to make the connection more stable is therefore to fix these bugs, not to work around them with some explicit sleep.\nWhile you don't show any code, a common cause of \"getting out of sync\" is the assumption that a send on one side is matched exactly by a recv on the other side.
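The framing fix this answer is pointing at - explicit message boundaries plus looping until every byte is sent or received - can be sketched like this (the function names are invented for illustration):

```python
import socket
import struct

def send_msg(sock: socket.socket, payload: bytes) -> None:
    # Prefix each message with a 4-byte big-endian length; sendall loops
    # internally until every byte has been handed to the kernel.
    sock.sendall(struct.pack(">I", len(payload)) + payload)

def recv_exact(sock: socket.socket, n: int) -> bytes:
    # recv(n) may return fewer than n bytes, so keep reading until we
    # have exactly n (or the peer closes the connection).
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("socket closed mid-message")
        buf += chunk
    return buf

def recv_msg(sock: socket.socket) -> bytes:
    # Read the length prefix first, then exactly that many payload bytes.
    (length,) = struct.unpack(">I", recv_exact(sock, 4))
    return recv_exact(sock, length)
```

With framing like this there is no need for sleeps: the receiver always knows where one message ends and the next begins.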
Another common assumption is that send will actually send all the data given and that recv(n) will receive exactly n bytes.\nAll of these assumptions are wrong. TCP is not a message-based protocol but a byte stream. Any message semantics need to be explicitly added on top of this byte stream, for example by prefixing messages with a length, by using a unique message separator, or by using a fixed message size. And the results of send and recv need to be checked to be sure that all data have been sent or all expected data have been received - and if not, more send or recv calls are needed until all data are processed.\nAdding some sleep often seems to \"fix\" some of these problems by essentially using \"time\" as a message separator. But it is not a real fix: it hurts performance and it is still not 100% reliable.","Q_Score":0,"Tags":"python,python-3.x,sockets","A_Id":66519467,"CreationDate":"2021-03-07T14:16:00.000","Title":"Python sockets really unreliable","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I had a working setup where I'd type pip install some-library and then I could import it into my projects. Then I decided to install miniconda, which installed another version of python (3.8) that my system started defaulting to.\nBy running this command in terminal (I'm on a mac): alias python=\/usr\/local\/bin\/python3 I managed to revert so that when I type python [something], my system uses the python located there (not the newly created one).\nIt seems that it's not as straightforward to get pip to do the same though.
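Since a shell alias only changes what python runs, not what pip runs, a common pattern (a sketch, not taken from the answers here) is to invoke pip through the exact interpreter you want:

```python
import sys

# sys.executable is the interpreter running this script; "<interpreter> -m pip"
# always installs into that interpreter's site-packages. So instead of a bare
# "pip install some-library", run:
#     /usr/local/bin/python3 -m pip install some-library
# and the package is guaranteed to land where /usr/local/bin/python3 looks.
print(sys.executable)
```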
pip install some-library just installs stuff for the wrong python version.\nHow can one make pip install some-library install some-library to the python version located in \/usr\/local\/bin\/python3?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":133,"Q_Id":66518181,"Users Score":0,"Answer":"You can try pip3 install some-library for python 3. I hope that works fine!","Q_Score":0,"Tags":"python,python-3.x,macos,pip,version","A_Id":66526254,"CreationDate":"2021-03-07T15:31:00.000","Title":"How to make pip install stuff for another version of python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I know it might sound stupid, but I genuinely tried my best to understand if pip installs packages from the internet every single time or does it just clone and use the already globally installed packages when I am creating a venv?\nWhat exactly is the difference between pip install and pip download?\nWhat does it mean by\nCollecting package ...\nUsing cached ...\nand\nDownloading \nCan someone help me out...","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":78,"Q_Id":66520575,"Users Score":0,"Answer":"pip download replaces the --download option to pip install, which is now deprecated and was removed in pip 10.\n\npip download does the same resolution and downloading as pip install, but instead of installing the dependencies, it collects the downloaded distributions into the directory provided (defaulting to the current directory). This directory can later be passed as the value to pip install --find-links to facilitate offline or locked down package installation.\n\n\nThe idea behind the pip cache is simple, when you install a Python package using pip for the first time, it gets saved on the cache. 
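The download-then-install-offline flow that pip download enables can be scripted; a hedged sketch (the package name and wheel directory are arbitrary, and nothing is executed here):

```python
import sys

def pip_offline_commands(package: str, wheel_dir: str = "./wheels"):
    """Build the two command lines for an offline install of `package`."""
    # Step 1: resolve and download the distribution (plus dependencies)
    # into wheel_dir without installing anything.
    download = [sys.executable, "-m", "pip", "download", "-d", wheel_dir, package]
    # Step 2: later, possibly offline, install purely from that directory.
    install = [sys.executable, "-m", "pip", "install",
               "--no-index", "--find-links", wheel_dir, package]
    return download, install  # run each via subprocess.check_call(...)
```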
If you try to download\/install the same version of the package a second time, pip will just use the local cached copy instead of retrieving it from the remote registry.\n\nIf you plan to use the same version of the package in another project then using cached packages is much faster.\nBut if pip installs the cached version of the package and you want to upgrade to the newest version, then you can simply upgrade with: pip install --upgrade <package-name>","Q_Score":0,"Tags":"python-3.x,installation,pip","A_Id":66525250,"CreationDate":"2021-03-07T19:33:00.000","Title":"does pip reinstall libraries everytime when creating a virtual environment?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"There is a class named State that has several attributes, which I need to write unit tests for. The test will need to perform certain actions that change attribute values of the State instance. Expected values are located inside a dictionary. The unit test will compare the attributes of the State instance with the values of the dictionary for equality.\nThe question is: what is the best place to keep the comparison logic? I was thinking about 2 options:\n\nAdd an __eq__ method to the State class that contains the comparison logic.\nAdd a helper function inside the test module that contains the comparison logic.\n\nWhich one of the options is better and why?","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":279,"Q_Id":66522130,"Users Score":0,"Answer":"You should probably take the __eq__ approach.
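Both options from the question can be sketched in a few lines (the State attributes here are invented for illustration):

```python
class State:
    def __init__(self, name: str, count: int):
        self.name = name
        self.count = count

    # Option 1: equality defined on the class itself (State vs State,
    # not State vs dict).
    def __eq__(self, other):
        if not isinstance(other, State):
            return NotImplemented
        return (self.name, self.count) == (other.name, other.count)

# Option 2: a small helper kept next to the tests, comparing attributes
# against an expected dict.
def state_matches(state: State, expected: dict) -> bool:
    return all(getattr(state, key) == value for key, value in expected.items())
```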
This way you can also test for equality in the actual code in the future if necessary, while also avoiding a potentially messy situation with importing the helper function if you plan to or might use type annotations in the future.\nGenerally you do not want your testing code to contain much logic at all, since tests are meant to test code, not implement more logic which may itself need to be tested. Beyond the actual test functions, your test suites should be very simple.\nThe only reason I can think of not to take this approach is if the class you are testing has some unique use case where an __eq__ is not applicable and you want to avoid implementing one and potentially confusing yourself or future developers later on.","Q_Score":4,"Tags":"python,python-3.x,unit-testing,testing,python-unittest","A_Id":66522206,"CreationDate":"2021-03-07T22:25:00.000","Title":"What is the correct way to test classes in Python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"There is a class named State that has several attributes, which I need to write unit tests for. The test will need to perform certain actions that change attribute values of the State instance. Expected values are located inside a dictionary. The unit test will compare the attributes of the State instance with the values of the dictionary for equality.\nThe question is: what is the best place to keep the comparison logic? I was thinking about 2 options:\n\nAdd an __eq__ method to the State class that contains the comparison logic.\nAdd a helper function inside the test module that contains the comparison logic.\n\nWhich one of the options is better and why?","AnswerCount":2,"Available Count":2,"Score":0.1973753202,"is_accepted":false,"ViewCount":279,"Q_Id":66522130,"Users Score":2,"Answer":"Outside.
__eq__ should (in most cases) not be used to compare a specific object to a dict and report equality. The expected behaviour is to enable comparison between objects of the same (or inherited) type. If you're looking for a way to compare two State objects, it could be useful - but that doesn't seem to be the case here, according to your description.\nI'd also be careful about using __eq__ for specific tests if these tests do not explicitly test for equality, but for certain properties. A future change in __eq__ - i.e. in the comparison requirements between objects of the same class - may not have the same semantic meaning as what you're actually testing in your test. For example, a future change to __eq__ could introduce more similarity requirements than what your tests require (for example, are they actually the same object and not just similar). Since the expected behaviour for __eq__ is \"this represents exactly the same thing\", that may not be the same as what you're testing.\nKeep the comparison outside of your class - and if it's something you want to re-use in different contexts, either add it as a utility function in your project or add it as a specific function to your object. For now I'd just go with keeping it in your tests, and then move it inside your project when it becomes necessary.\nThis all assumes that the comparison is simple. If there is actual logic and calculations involved - that does not belong in the test. Instead, add logic to your class that exposes the values directly in a properly testable format.\nA test should just check that the value returned matches what was expected.
However, comparing a returned dict against expected values for that dict is perfectly valid.","Q_Score":4,"Tags":"python,python-3.x,unit-testing,testing,python-unittest","A_Id":66522307,"CreationDate":"2021-03-07T22:25:00.000","Title":"What is the correct way to test classes in Python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am working on a plan for a mobile app which is in the very early stages of development. Background: The mobile app will be available on Apple and Android. It will allow the user to log in and see which local taxi companies will arrive the quickest, payment will be made through the app, and a map will show the driver's location, in a nutshell.\nNow the tech part: I am currently studying Python and still consider myself a junior. Could anyone advise if Kivy with Python would be able to handle the kinds of functionality that the app will need?\nFurthermore, could anyone shed some light on what kind of back-end technology and functionality I will need to incorporate?\nI have never created a mobile app before, so any advice or direction would be greatly appreciated.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":29,"Q_Id":66522422,"Users Score":0,"Answer":"You can technically achieve this with Kivy, but you'd be touching lots of features that are not well supported with existing helper modules - that includes payment processing, map display, and to some extent login functionality (although this doesn't need anything special, it just adds to the pile).\nIf you aren't very experienced, I think you'll probably find it easier to use a more widely used framework with more examples.","Q_Score":0,"Tags":"python,mobile,kivy","A_Id":66522585,"CreationDate":"2021-03-07T23:04:00.000","Title":"Building Apps with Kivy","Data Science and
Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to know how to get the most recent message sent in a channel of a discord server with discord webhooks in python? I have not tried anything yet.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":206,"Q_Id":66523058,"Users Score":0,"Answer":"Webhooks are only meant for sending messages, not reading messages in a channel. The only way to get the last message from the channel is if you have a bot user in the server that can read message history in that channel.","Q_Score":0,"Tags":"python,discord,discord.py,webhooks","A_Id":66523831,"CreationDate":"2021-03-08T00:48:00.000","Title":"How to use dhooks to find the most recent message sent in a channel","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm developing an app which uses Pytesseract and I'm hosting it on PA. Tesseract is preinstalled\nbut apparently the version is old (3.04) when I run my code I get error:\n\"TSV output not supported. Tesseract >= 3.05 required\"\nHow can I upgrade it since I can't use sudo apt ?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":65,"Q_Id":66525998,"Users Score":0,"Answer":"The latest version of Tesseract is not available on PythonAnywhere yet. 
It should be available with the next system image later this spring.","Q_Score":0,"Tags":"tesseract,python-tesseract,pythonanywhere","A_Id":66528060,"CreationDate":"2021-03-08T07:48:00.000","Title":"How do I update Tesseract on PA?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a dict like {'key1': fun1(...), 'key2': fun2(...), ...} and these functions could raise errors (undefined), is there a way to use try\/except to create the dictionary?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":50,"Q_Id":66527963,"Users Score":0,"Answer":"You would have to either wrap the whole dictionary creation in a try\/except block, or, if you can modify the functions, put the code parts of the functions that could raise errors in try\/except blocks.","Q_Score":1,"Tags":"python","A_Id":66527995,"CreationDate":"2021-03-08T10:16:00.000","Title":"python, use try\/except in dictionary to avoid errors","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"What is wrong with my code when I get a ModuleNotFoundError for pyttsx3?\nHere is my code: import pyttsx3 engine = pyttsx3.init() engine.say(\"hello\") engine.runAndWait() \nI am coding with Python 3.9","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":15,"Q_Id":66529353,"Users Score":0,"Answer":"Did you run pip install pyttsx3?\nThe pyttsx3 library is probably not installed.","Q_Score":1,"Tags":"python","A_Id":66529452,"CreationDate":"2021-03-08T11:50:00.000","Title":"What does ModuleNotFoundErrorDer do","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop
Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using Androguard to do the analysis of all Malware APKs (Ransomware, Adware, Trojan etc.) but I am curious to know something:\nWhenever I analyze the apk using the androguard library in python, I keep getting malware alert notifications from Windows Defender. So my question is if it's safe to turn off the antivirus when I am doing analysis of an infected APK. I am comparing its values and saving them inside a csv file. Will my computer still be safe?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":37,"Q_Id":66530342,"Users Score":1,"Answer":"It depends on what exactly you are doing with the malware. I would suggest using some kind of emulator to put an additional layer of protection between you and the sample, like Windows Sandbox or something similar.","Q_Score":0,"Tags":"python,android,apk,malware,androguard","A_Id":66530664,"CreationDate":"2021-03-08T13:01:00.000","Title":"Should I turn off Antivirus when analyzing Malware APK?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a requirement to run a performance test suite, where each script will consist of a set of REST APIs for one particular functionality on a given platform. In LoadRunner or JMeter we can run multiple scripts together or in parallel, either using a Thread Group in JMeter or a Controller in LoadRunner. Is the same possible in Locust?","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":505,"Q_Id":66531063,"Users Score":0,"Answer":"In Locust, every test group\/scenario\/case is represented by a Locust User.
You could develop several Users with all the test scripts that you need to run and execute all of them in a single parallel Locust run configuration.","Q_Score":2,"Tags":"python,python-3.x,locust","A_Id":66533912,"CreationDate":"2021-03-08T13:49:00.000","Title":"Option in Locust to run multiple Locustfile together like in Jmeter running multiple Thread Groups in Parallel","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a requirement to run a performance test suite, where each script will consist of a set of REST APIs for one particular functionality on a given platform. In LoadRunner or JMeter we can run multiple scripts together or in parallel, either using a Thread Group in JMeter or a Controller in LoadRunner. Is the same possible in Locust?","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":505,"Q_Id":66531063,"Users Score":0,"Answer":"No it's not. We achieved it by running parallel Jenkins builds next to each other.","Q_Score":2,"Tags":"python,python-3.x,locust","A_Id":66533220,"CreationDate":"2021-03-08T13:49:00.000","Title":"Option in Locust to run multiple Locustfile together like in Jmeter running multiple Thread Groups in Parallel","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am looking for a way to print out the raw HTTP query string when I use the requests library in python3. It's for troubleshooting purposes. Does anyone have an idea how to do this?
I tried to use prepared requests, but it is not what I'm looking for. Any suggestions?\nthanks","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":77,"Q_Id":66531827,"Users Score":0,"Answer":"import requests\nresponse = requests.get('https:\/\/api.github.com')\nYou can use:\nresponse.text or response.content or response.raw","Q_Score":0,"Tags":"http,python-requests","A_Id":67973369,"CreationDate":"2021-03-08T14:41:00.000","Title":"output raw query string with python requests library","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I wanted to import the albumentations package to run a deep learning task, but it had conflicts and failed to install in the current environment, so I used conda create --name to create a new one. In the new environment the albumentations package installed successfully, but I cannot find it in the python interpreter setting, and the project keeps showing \"No module named 'albumentations' \", so how do I fix this problem?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":18,"Q_Id":66533908,"Users Score":0,"Answer":"I just tried creating a new base environment in Settings --> Python Interpreter --> add --> Virtualenv Environment, and the newly created interpreter then appears below the Existing environment option.","Q_Score":0,"Tags":"python,development-environment","A_Id":66537417,"CreationDate":"2021-03-08T16:53:00.000","Title":"how to add the newly created conda interpreter to a specific project","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Hello, I have designed some
algorithms that we would like to implement in our company's software (start-up) but some of them take too long (10-15 min) as it is handling big datasets.\nI am wondering if using for example Google Cloud to run my scripts, as it would use more nodes, it would make my algorithm to run faster.\nIs it the same to run a script locally in Jupyter for instance than running it within Cloud?\nThinking of using Spark too.\nThank you","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":35,"Q_Id":66535075,"Users Score":1,"Answer":"I think the only applicable answer is \"it depends\". The cloud is just \"someone else's computer\", so if it runs faster or not depends on the cloud server it's running on. For example if it is a data-intensive task with a lot of I\/O it might run faster on a server with a SSD than on your local machine with a HDD. If it's a processor intensive task, it might run faster if the server has a faster CPU than your local machine has. You get the point.","Q_Score":0,"Tags":"python,cloud","A_Id":66535186,"CreationDate":"2021-03-08T18:17:00.000","Title":"Does running scripts from Cloud (AWS\/Google\/Azure) make my algorithms faster?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"Is there any monitor utils that can monitor the cpu usage of every code line use in python program.\nI know profile,cProfile and line_profiler. 
but they only measure the time each line or function spends on the CPU.\nIf my program is IO-intensive, it may take a long time without really doing much CPU computation.\nSo I want to find a util which could monitor the real CPU computation.\nDoes anybody have an idea?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":166,"Q_Id":66540808,"Users Score":0,"Answer":"cProfile and profile both accept a timer function as a parameter. Just pass a CPU-time clock such as time.process_time instead of the default time.time (time.clock served this purpose before it was removed in Python 3.8).","Q_Score":1,"Tags":"python,cpu,monitor","A_Id":66541169,"CreationDate":"2021-03-09T04:27:00.000","Title":"how to monitor the cpu usage of every code line in python program","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to use a scroll bar in a pyqt5 gui. I need it to increment by 0.01 (the total length should be around 390) for a single click on the arrow, but it seems like setSingleStep only takes integers. Is there a way to make a single step 0.01?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":63,"Q_Id":66541282,"Users Score":2,"Answer":"I think you must scale these values by 100 so they become integers: 390x100 and 0.01x100, i.e. setRange(0, 39000) and setSingleStep(1), then divide the scrollbar's value by 100 when reading it :)","Q_Score":1,"Tags":"python,pyqt5,scrollbar,increment","A_Id":66543009,"CreationDate":"2021-03-09T05:27:00.000","Title":"Is there a way to set a float for the single step of a pyqt5 scrollbar?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"The solution I found is to install tensorflow packages in PyCharm's interpreter for each new project.
The file is incredibly huge and I think it is a waste of space. Is there any solution to import a terminal-installed tensorflow into PyCharm?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":36,"Q_Id":66541789,"Users Score":0,"Answer":"I guess you mean that you want to reuse a virtual Python environment in PyCharm where TensorFlow is installed.\nFor example you have a conda environment created:\nconda create -n tensorflow python=3.7.5\nconda activate tensorflow\npip install tensorflow\nAnd you create a new project in PyCharm. You can select an existing interpreter. Select the conda environment called tensorflow. Et voila.\nIf this is not the answer you needed, please give a more elaborate explanation of your question.","Q_Score":0,"Tags":"python,tensorflow,pip,pycharm","A_Id":66544562,"CreationDate":"2021-03-09T06:19:00.000","Title":"How to import terminal pip installed tensorflow into pycharm","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to install PyQT5 on my Raspberry Pi and used the command sudo pip3 install pyqt5.\nBut it has been stuck on that for over an hour now and I'm starting to get frustrated. It still moves, so it didn't crash or anything. Is there a workaround for that or am I missing something?\nThanks in advance","AnswerCount":2,"Available Count":2,"Score":1.0,"is_accepted":false,"ViewCount":4501,"Q_Id":66546886,"Users Score":9,"Answer":"I had the same problem and got impatient after a few dozen minutes...\nThen I tried running the command with:\npip3 install --verbose PyQt5\nso this way I could always be sure that it didn't crash in the background.\nIt completed after almost 2 hours.
The compilation takes some time...","Q_Score":8,"Tags":"python,pip,raspberry-pi,pyqt,pyqt5","A_Id":68454206,"CreationDate":"2021-03-09T12:28:00.000","Title":"Pip Install stuck on \"Preparing Wheel metadata...\" when trying to install PyQT5","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to install PyQT5 on my Raspberry Pi and used the command sudo pip3 install pyqt5.\nBut it has been stuck on that for over an hour now and I'm starting to get frustrated. It still moves, so it didn't crash or anything. Is there a workaround for that or am I missing something?\nThanks in advance","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":4501,"Q_Id":66546886,"Users Score":0,"Answer":"First upgrade your pip: python -m pip install --upgrade pip\nThen install PyQt5: pip install PyQt5","Q_Score":8,"Tags":"python,pip,raspberry-pi,pyqt,pyqt5","A_Id":70784351,"CreationDate":"2021-03-09T12:28:00.000","Title":"Pip Install stuck on \"Preparing Wheel metadata...\" when trying to install PyQT5","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"When I run the test example .\/bin\/run-example SparkPi 10 I get the error below.\nEDIT:\nThe problem is due to the fact that I switched to WiFi instead of Ethernet, and this changed localhost.
@mck direction to previous solutionhelped.\nSolution:\nAdd SPARK_LOCAL_IP in load-spark-env.sh file located at spark\/bin directory\nexport SPARK_LOCAL_IP=\"127.0.0.1\"\nI get error:\n\n WARNING: An illegal reflective access operation has occurred\n WARNING: Illegal reflective access by org.apache.spark.unsafe.Platform (file:\/home\/d\/spark\/jars\/spark-unsafe_2.12-3.1.1.jar) to constructor java.nio.DirectByteBuffer(long,int)\n WARNING: Please consider reporting this to the maintainers of org.apache.spark.unsafe.Platform\n WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations\n WARNING: All illegal access operations will be denied in a future release\n 2021-03-09 15:37:39,164 INFO spark.SparkContext: Running Spark version 3.1.1\n 2021-03-09 15:37:39,214 INFO resource.ResourceUtils: ==============================================================\n 2021-03-09 15:37:39,215 INFO resource.ResourceUtils: No custom resources configured for spark.driver.\n 2021-03-09 15:37:39,215 INFO resource.ResourceUtils: ==============================================================\n 2021-03-09 15:37:39,216 INFO spark.SparkContext: Submitted application: Spark Pi\n 2021-03-09 15:37:39,240 INFO resource.ResourceProfile: Default ResourceProfile created, executor resources: Map(cores -> name: cores, amount: 1, script: , vendor: , memory -> name: memory, amount: 1024, script: , vendor: , offHeap -> name: offHeap, amount: 0, script: , vendor: ), task resources: Map(cpus -> name: cpus, amount: 1.0)\n 2021-03-09 15:37:39,257 INFO resource.ResourceProfile: Limiting resource is cpus at 1 tasks per executor\n 2021-03-09 15:37:39,259 INFO resource.ResourceProfileManager: Added ResourceProfile id: 0\n 2021-03-09 15:37:39,335 INFO spark.SecurityManager: Changing view acls to: d\n 2021-03-09 15:37:39,335 INFO spark.SecurityManager: Changing modify acls to: d\n 2021-03-09 15:37:39,335 INFO spark.SecurityManager: Changing view acls groups to: \n 
2021-03-09 15:37:39,335 INFO spark.SecurityManager: Changing modify acls groups to: \n 2021-03-09 15:37:39,335 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(d); groups with view permissions: Set(); users with modify permissions: Set(d); groups with modify permissions: Set()\n 2021-03-09 15:37:39,545 WARN util.Utils: Service 'sparkDriver' could not bind on a random free port. You may check whether configuring an appropriate binding address.\n (the same WARN line repeats 15 more times, timestamps 15:37:39,557 through 15:37:39,682)\n 2021-03-09 15:37:39,705 ERROR spark.SparkContext: Error initializing SparkContext.\n java.net.BindException: Cannot assign requested address: Service 'sparkDriver' failed after 16 retries (on a random free port)!
Consider explicitly setting the appropriate binding address for the service 'sparkDriver' (for example spark.driver.bindAddress for SparkDriver) to the correct binding address.\n at java.base\/sun.nio.ch.Net.bind0(Native Method)\n at java.base\/sun.nio.ch.Net.bind(Net.java:455)\n at java.base\/sun.nio.ch.Net.bind(Net.java:447)\n at java.base\/sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:227)\n at io.netty.channel.socket.nio.NioServerSocketChannel.doBind(NioServerSocketChannel.java:134)\n at io.netty.channel.AbstractChannel$AbstractUnsafe.bind(AbstractChannel.java:550)\n at io.netty.channel.DefaultChannelPipeline$HeadContext.bind(DefaultChannelPipeline.java:1334)\n at io.netty.channel.AbstractChannelHandlerContext.invokeBind(AbstractChannelHandlerContext.java:506)\n at io.netty.channel.AbstractChannelHandlerContext.bind(AbstractChannelHandlerContext.java:491)\n at io.netty.channel.DefaultChannelPipeline.bind(DefaultChannelPipeline.java:973)\n at io.netty.channel.AbstractChannel.bind(AbstractChannel.java:248)\n at io.netty.bootstrap.AbstractBootstrap$2.run(AbstractBootstrap.java:356)\n at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164)\n at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:472)\n at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:500)\n at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)\n at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)\n at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)\n at java.base\/java.lang.Thread.run(Thread.java:834)\n 2021-03-09 15:37:39,723 INFO spark.SparkContext: Successfully stopped SparkContext\n Exception in thread \"main\" java.net.BindException: Cannot assign requested address: Service 'sparkDriver' failed after 16 retries (on a random free port)! 
Consider explicitly setting the appropriate binding address for the service 'sparkDriver' (for example spark.driver.bindAddress for SparkDriver) to the correct binding address.\n at java.base\/sun.nio.ch.Net.bind0(Native Method)\n at java.base\/sun.nio.ch.Net.bind(Net.java:455)\n at java.base\/sun.nio.ch.Net.bind(Net.java:447)\n at java.base\/sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:227)\n at io.netty.channel.socket.nio.NioServerSocketChannel.doBind(NioServerSocketChannel.java:134)\n at io.netty.channel.AbstractChannel$AbstractUnsafe.bind(AbstractChannel.java:550)\n at io.netty.channel.DefaultChannelPipeline$HeadContext.bind(DefaultChannelPipeline.java:1334)\n at io.netty.channel.AbstractChannelHandlerContext.invokeBind(AbstractChannelHandlerContext.java:506)\n at io.netty.channel.AbstractChannelHandlerContext.bind(AbstractChannelHandlerContext.java:491)\n at io.netty.channel.DefaultChannelPipeline.bind(DefaultChannelPipeline.java:973)\n at io.netty.channel.AbstractChannel.bind(AbstractChannel.java:248)\n at io.netty.bootstrap.AbstractBootstrap$2.run(AbstractBootstrap.java:356)\n at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164)\n at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:472)\n at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:500)\n at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)\n at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)\n at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)\n at java.base\/java.lang.Thread.run(Thread.java:834)\n 2021-03-09 15:37:39,730 INFO util.ShutdownHookManager: Shutdown hook called\n 2021-03-09 15:37:39,731 INFO util.ShutdownHookManager: Deleting directory \/tmp\/spark-b53dc8d9-adc8-454b-83f5-bd2826004dee","AnswerCount":1,"Available 
Count":1,"Score":0.0,"is_accepted":false,"ViewCount":284,"Q_Id":66548256,"Users Score":0,"Answer":"Solution: Add SPARK_LOCAL_IP in load-spark-env.sh file located at spark\/bin directory export SPARK_LOCAL_IP=\"127.0.0.1\"","Q_Score":0,"Tags":"python,apache-spark,pyspark,apache-spark-sql","A_Id":66551389,"CreationDate":"2021-03-09T13:56:00.000","Title":"spark : Cannot assign requested address","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I want to run the tkinter mainloop in the background because I\u00b4m following the MVC Pattern, so the Controller gets an Interface object to handle all concerns. So I tried to thread it. RuntimeError: Calling Tcl from different apartment occurs. Saw no further answers on this topic.\nDoes anyone have a clue what to do?\nThanks, Max","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":290,"Q_Id":66550695,"Users Score":0,"Answer":"Thanks @Tom\u00e1s Gomez Pizarro for one solution. I myself came up with a solution: my Controller etc. are set up with a reference (object) to my Gui, and the Gui itself, i.e. the mainloop function, is not called until everything is ready. At runtime every GuiEvent is passed to the Controller via a thread so that the main loop is not interfered with.\nRegards, Max","Q_Score":0,"Tags":"python,tkinter,model-view-controller","A_Id":66581298,"CreationDate":"2021-03-09T16:17:00.000","Title":"Tkinter mainloop in background","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to include a machine learning component in my Django project. I have the method written in python. 
How do I make it work on the website using Django? Can I simply drag the \".py\" file into the file structure and call it from an HTML page? I am new to Django; any help would be greatly appreciated.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":178,"Q_Id":66551026,"Users Score":0,"Answer":"Yes, you can directly copy the file into your Django directory structure. Let's say you have a file test.py and a function written in it as def print(). And you have copied the file into the app directory. Then you can call it in views.py as from app.test import print. The print function will be imported in views.py and you can use it to serve html as you want.","Q_Score":0,"Tags":"python,django,opencv","A_Id":66551229,"CreationDate":"2021-03-09T16:36:00.000","Title":"How to integrate a python file onto a Django project?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Didn't find an answer regarding my particular question, so I'm sorry if this has been asked already.\nI've created a Python program that scrapes the articles posted on news websites by certain keywords. On average, when running it once in the evening, it would be searching through 2000 articles of the day. Now I obviously want this program to run in a loop 24\/7 looking for new articles in realtime (or every 5 minutes). When it hits something based on my keywords, I get notified.\nTherefore, I wanted to know whether you guys have any good recommendations on hosting? I've heard about AWS Lambda but wanted to get a second opinion. 
Anything that costs below ~$250 a month is possible :) Maybe someone has a similar project running or can confirm my idea with AWS.\nThanks in advance!","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":526,"Q_Id":66555271,"Users Score":1,"Answer":"Great question, once your script starts do you ever run new scripts or can you just leave the terminal running?\nIn the latter case, you need Amazon ec2, not Lambda. Lambda is for running functions, an Ec2 is the \"cloud computer\" that you are looking for to \"host\" and run your program.\nLook into Ec2, and use EBS or EFS for storage. S3 is good for storing images, or links, or objects, but if you are using an Ec2 instance (cloud computer) and don't need to store your data as an object and don't need to use a dedicated MySQL or NoSQL database, just store the info in your EBS or EFS. Remember, EBS and EFS are the hard drive of the computer (your ec2), Amazon RDS is the database service, Amazon Aurora is inside RDS and is for MySQL and PostgreSQL, and S3 is like an image \/ object drive. For example, if you had an ebook you were going to distribute, you would store your ebook in S3.\nYou can set up an Ec2 and EBS for free too. Just use the free tier and use the t2.micro for ec2 instance. 
See how it runs for a few days and then go bigger when necessary.","Q_Score":0,"Tags":"python,amazon-web-services,web-scraping,aws-lambda,hosting","A_Id":66556908,"CreationDate":"2021-03-09T21:48:00.000","Title":"Python Web Scraping - How to scrape a News website 24\/7 for new articles?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"First I run command pip install virtualenv then after I run python -m virtualenv venv, I get the following error msg\n\"\/System\/Library\/Frameworks\/Python.framework\/Versions\/2.7\/Resources\/Python.app\/Contents\/MacOS\/Python: No module named virtualenv\"\nCurrently, I'm using python v2.7.16 and when I run pip freeze | grep virtualenv, I get virtualenv==20.4.2 so virtualenv is there. When I run which python I get \/usr\/bin\/python and I don't have .bash_profile when I run ls -a. I am using mac. What could be the reason for python not recognizing virtualenv when it's there?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":60,"Q_Id":66556604,"Users Score":0,"Answer":"You may create .bash_profile and it is auto-recognised by the macintosh machine.\n\nPlease also run which pip and make sure the pip is in the same bin as your python (\/usr\/bin\/python)\n\n\nThe bottom line is: the pip used to install a package will by default install the packages in the bin directory that also stores your python executable.","Q_Score":2,"Tags":"python,python-2.7,virtualenv","A_Id":66556829,"CreationDate":"2021-03-10T00:17:00.000","Title":"Python not recognizing virtualenv","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I forgot the terminology of this. 
In Python, if you let a=1, later you can re-assign a=\"letter\".\nBut in some languages, once you let a=1, a has to stay as an integer forever.\nWhat do we call this in textbook?","AnswerCount":2,"Available Count":1,"Score":-0.1973753202,"is_accepted":false,"ViewCount":76,"Q_Id":66558557,"Users Score":-2,"Answer":"Dynamic typing. Also related, in Python, to weak typing.\nAnswer is wrong, see comments. Leaving it here as it brings value in being corrected.","Q_Score":1,"Tags":"python","A_Id":66558595,"CreationDate":"2021-03-10T04:50:00.000","Title":"Terminology of changing the variable type","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am looking to test embedded system product using Python scripting and I need some guidance.\nThe setup: A Raspberry pi have been connected to the embedded system PCB and I access that Raspberry pi remotely using ssh or vncserver.\nThe issue: I am using PyCharm on my laptop to access the Python scripts. Since the embedded system is connected to Raspberry pi, I am not able to debug using my laptop as I would not be able to get the values of variables etc while debugging. So I installed PyCharm on RPi but it keeps on crashing frequently which I guess is because RPi is not able to take that much load as it also connected to VNCServer while using PyCharm.\nWhat I am looking for: Some guidance how to debug in such a case so I can test whether scripts have issue or device is faulty or something else. A better and efficient method for debugging in such a scenario where there are multiple layers.\nI have limited exposure so I may be missing out something. 
Please feel free to correct me.\nThanks.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":43,"Q_Id":66559317,"Users Score":0,"Answer":"Update: Just in case someone is looking for the answer to this question, Pycharm paid version allows remote debugging over ssh. That worked for me.","Q_Score":0,"Tags":"python,testing,raspberry-pi,automated-tests,embedded","A_Id":68292245,"CreationDate":"2021-03-10T06:14:00.000","Title":"Debugging Python scripting for testing Embedded system connected with Rasbperry pi (RPi)","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a problem parsing DAG with error:\nBroken DAG: [\/usr\/local\/airflow\/dags\/test.py] No module named 'airflow.providers'\nI added apache-airflow-providers-databricks to requirements.txt, and see from the log that:\nSuccessfully installed apache-airflow-2.0.1 apache-airflow-providers-databricks-1.0.1 apache-airflow-providers-ftp-1.0.1 apache-airflow-providers-http-1.1.1 apache-airflow-providers-imap-1.0.1 apache-airflow-providers-sqlite-1.0.2 apispec-3.3.2 attrs-20.3.0 cattrs-1.3.0 clickclick-20.10.2 commonmark-0.9.1 connexion-2.7.0 flask-appbuilder-3.1.1 flask-caching-1.10.0 gunicorn-19.10.0 importlib-resources-1.5.0 inflection-0.5.1 isodate-0.6.0 marshmallow-3.10.0 marshmallow-oneofschema-2.1.0 openapi-schema-validator-0.1.4 openapi-spec-validator-0.3.0 pendulum-2.1.2 python-daemon-2.3.0 rich-9.2.0 sqlalchemy-jsonfield-1.0.0 swagger-ui-bundle-0.0.8 tenacity-6.2.0 termcolor-1.1.0 werkzeug-1.0.1\nBut the scheduler seems to be stuck:\nThe scheduler does not appear to be running. 
Last heartbeat was received 19 hours ago.\nHow can I restart it?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":2000,"Q_Id":66561217,"Users Score":0,"Answer":"Well, after removing all deps from the requirements, the worker in MWAA ran normally; now I can try to test the bad deps","Q_Score":3,"Tags":"python,airflow,scheduler,directed-acyclic-graphs,mwaa","A_Id":66655747,"CreationDate":"2021-03-10T08:45:00.000","Title":"AWS Managed Airflow - how to restart scheduler?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am working on Google Colaboratory, and I have to implement OCL (Object Constraint Language). I searched a lot, but I didn't find how to implement it. Can someone give me an idea please?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":221,"Q_Id":66562115,"Users Score":0,"Answer":"It is surely possible for you to implement OCL, duplicating the efforts of one of the existing Open Source implementations such as Eclipse OCL or USE. There is an official OMG specification that will define what you need to do, however it has many deficiencies that will require research to solve and design around. 
I would be surprised if you can implement a 'full' implementation of OCL from scratch with plausible accuracy in less than a person year.\nI suspect that you have mis-stated what you want to do or have misunderstood what someone has instructed you to do.","Q_Score":0,"Tags":"python,google-colaboratory,ocl","A_Id":66567998,"CreationDate":"2021-03-10T09:46:00.000","Title":"How to implement OCL (Object Constraint Language) in Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I cannot open any .py file: when I run in the command prompt either \"python test.py\" or \"python3 test.py\" or \"py test.py\", it just says can't open file 'C:\\Users\\Ciela\\Desktop\\test.py': [Errno 2] No such file or directory.\n\nPython is installed, latest version\nAll other versions are uninstalled\nPython was automatically added to PATH during installation, I can see it in both User and System paths and the version is correct\nthe files can be opened in Python just by double-clicking them, although they shut off immediately (I know they work because the \"turtle module\" screen persists on the screen)\nThe OS is Windows 10 and I am a total noob trying to learn\n\nWhat could it be??","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":745,"Q_Id":66575046,"Users Score":0,"Answer":"I sorted it out. For anyone struggling with the same issue, the problem might be OneDrive. Windows10 automatically creates 2 desktops: the one in User, and the one in User\/OneDrive, where files are stored by default. 
Essentially I was looking for the files in the wrong desktop folder.","Q_Score":1,"Tags":"python,installation,windows-10","A_Id":66580075,"CreationDate":"2021-03-11T00:58:00.000","Title":"Cannot open .py files: \"[Errno 2] No such file or directory\"","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"How can you tell in general when a website is rendering content in javascript? I typically use bs4 to scrape, and when I can't find a tag, I'm not sure if it's because it's javascript rendered (which bs4 can't detect) or if I did something wrong.","AnswerCount":1,"Available Count":1,"Score":0.537049567,"is_accepted":false,"ViewCount":130,"Q_Id":66575569,"Users Score":3,"Answer":"Compare the output of your request with the html returned from a browser request. In Chrome and Firefox, press F12 and the console will appear. Under the network tab you can see all the requests that have been made. If the Network tab is empty, refresh the page. The response from the first request in the Network tab should match the response you received from the Python request. 
If it doesn't match, either your request differs from the browser request, or javascript is doing some post processing.\nSubsequent requests in the Network tab may be from javascript running, from iframes, images, or much more.","Q_Score":3,"Tags":"python-3.x,web-scraping,beautifulsoup","A_Id":66575613,"CreationDate":"2021-03-11T02:16:00.000","Title":"Web scraping: How to tell in general if a page has content rendered in javascript","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"So I'm making a discord bot, and I want it to be able to obtain the guild ID of the guild where a command was issued. How would I go about implementing this? I tried using bot.guilds, but I need a method of nailing down the exact id.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":43,"Q_Id":66575715,"Users Score":1,"Answer":"Have you tried doing message.guild.id?\nThat should return the guild id for the guild the command was executed in.","Q_Score":0,"Tags":"python-3.x,discord.py","A_Id":66576105,"CreationDate":"2021-03-11T02:39:00.000","Title":"Getting current guild id for instance","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I was using TensorFlow to train a machine learning model.\nI use the command model.save('my_model.h5') to save my model.\nWhere is the exact location path that the model file is saved?\nWill it simply overwrite the old one if I run the code again?\nMany thanks.","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":506,"Q_Id":66576166,"Users Score":0,"Answer":"Yes. The model will be saved in the current directory and will overwrite the old one 
when saving the same model with the same name. If you need multiple models, just change the name of the model each time you save.","Q_Score":0,"Tags":"python,tensorflow","A_Id":66579179,"CreationDate":"2021-03-11T03:45:00.000","Title":"Where is the file that is saved by model.save() command","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have two tables like:\nUser:\nId | Name | Age |\n1 | Pankhuri | 24\n2 | Neha | 23\n3 | Mona | 25\nAnd another\nPrice log:\nId | type | user_id | price | created_at|\n1 | credit | 1 | 100 | 2021-03-05 12:39:43.895345\n2 | credit | 2 | 50 | 2021-03-05 12:39:43.895345\n3 | debit | 1 | 100 | 2021-03-04 12:39:43.895345\n4 | credit | 1 | 100 | 2021-03-05 12:39:43.895345\nThese are my two tables, from which I need to get the highest credit price user with their total price count according to last week's dates.\nI want a result like this, e.g. if I want to get the result for user id 1 then the result should be:\npankhuri on date 04\/03 price 100\nand on date 5\npankhuri on date 05\/03 price 200\nI want only the highest price user in return with their price total on a date basis.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":34,"Q_Id":66576629,"Users Score":0,"Answer":"You can either use GROUP BY, ORDER BY data DESC\/ASC, MAX, or use a sub query.","Q_Score":0,"Tags":"python,django,lis","A_Id":66577080,"CreationDate":"2021-03-11T04:47:00.000","Title":"Python query to get highest price user with count","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"list = [['100', '88', '', ''], ['100', '', '68', ''], ['100', '', '', '58'],['102', '28', '', ''], ['102', '', 
'2', ''], ['104', '11', '', ''], ['104', '', '2', ''], ['110', '2', '', ''], ['202', '', '14', ''], ['202', '37429', '', '']]\nI need to merge the sublists on the first index value.\noutput = [['100', 88, '68', '58'], ['102', 28, '2', ''],['104', 11, '2', ''],['110', 2, '', ''],['202', '37429', 14, '']]","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":136,"Q_Id":66576680,"Users Score":0,"Answer":"The more basic way is to make two nested loops and write a condition: if the first index values match, move them to a new list.\nYou can also try converting them to a numpy array, which will give you a 2d array; then you can compare more easily","Q_Score":0,"Tags":"python,data-structures","A_Id":66576735,"CreationDate":"2021-03-11T04:53:00.000","Title":"Merge sublists of nested list if first index element match","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to know how I can send messages in discord, without creating a bot.\nLike I want the program to send messages through my own account. Most of the results I got when I searched this up were to create a bot. But I would like to know if there's a way to do it without creating the bot. Thanks :)","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":90,"Q_Id":66577819,"Users Score":0,"Answer":"To access discord through your own account, you can't use a discord bot. What you could do is automate \"your input\" in discord. Imagine a google sheet, for example: record your input to copy the first line, delete it afterwards, then paste it in discord and send the message. Now you could repeat this for every line in the file.\n(You can find such a program using google)\nBUT this solution restricts you to your input. 
Any events discord provides, like on_member_join for example, aren't usable for this approach. It's more a user bot than a discord bot","Q_Score":0,"Tags":"python,python-3.x,list,discord,bots","A_Id":66578384,"CreationDate":"2021-03-11T07:02:00.000","Title":"How to send messages in discord without using bot application","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I made a telegram bot using telepot, one of the issues I have is that groups can still invite and make the bot join their channels, even with \/setjoingroups disabled. Is there a way to list these groups and leave them from the code or from @BotFather?\nThanks","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":595,"Q_Id":66577829,"Users Score":0,"Answer":"You can look at the bot settings for inline mode and group privacy","Q_Score":0,"Tags":"python,bots,telegram,telepot","A_Id":67948892,"CreationDate":"2021-03-11T07:04:00.000","Title":"Leaving group channels as a bot","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have array:\nx = np.array([1, 41, 32, 2, -8, 0, -97, 11])\nIf x[i] > 0, x[i] = 1\nIf x[i] < 0, x[i] = -1\nIf x[i] == 0, x[i] = 0\nSo expected output:\nx = np.array([1, 1, 1, 1, -1, 0, -1, 1])\nIs there a way to do this with a one liner in numpy without any loops? I wanted to use np.where but it only takes 2 conditions whereas I have 3. 
Thanks.","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":93,"Q_Id":66581053,"Users Score":1,"Answer":"Indeed it takes 3 parameters:\nx = np.where(x > 0 , x, 1)","Q_Score":2,"Tags":"python,arrays,numpy","A_Id":66581174,"CreationDate":"2021-03-11T10:50:00.000","Title":"Replacing values in numpy array based on multiple conditions without using any loops","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have some csv files which i read using tfiledelimited component and want to pass those rows along into python script on tsystem component for some manipulations.\nI have connected the output from tfiledelimited into tsystem and have set the schema but how to read those columns from csv reader into python script , please help..","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":207,"Q_Id":66581634,"Users Score":0,"Answer":"You can create a dummy shell wrapper and call python script inside that. 
Make sure a proper python instance is available on the host executing the script and accessible (\/bin\/python) and that PATH resolves to this directory.\nAnd call that wrapper from tSystem.","Q_Score":0,"Tags":"python,talend","A_Id":66610989,"CreationDate":"2021-03-11T11:27:00.000","Title":"Input from talend into tsystem python script","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Could anyone:\n\nwrite a very short example of a Python package created in PyCharm (under Windows)\n\nthen import it into a script\/module\n\nand finally run it from PyCharm Terminal?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":46,"Q_Id":66582358,"Users Score":0,"Answer":"pycharm imports modules from the root folder (e.g. src\/module) with folders being separated by dots: import src.module\npycharm will run any code by selecting the file and doing ctrl + shift + f10\npackages in python are folders with an __init__.py which imports all methods\/classes\/variables into them","Q_Score":0,"Tags":"python,pycharm,python-import,python-packaging","A_Id":66582424,"CreationDate":"2021-03-11T12:13:00.000","Title":"Example of creating and importing a package in PyCharm (for Windows)","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have successfully installed OpenMDAO that I just found on GitHub. However, the Microsoft MPI does not appear, and the same goes for the mpi4py and petsc4py, when all of these have supposedly been installed already...\nI used the pip install commands for the python related dependencies, and the Microsoft MPI is within the environment variables as well. 
Has anyone experienced this before, or can perhaps guide a little? I would really appreciate it, as a computer newbie.\nThank you beforehand!","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":78,"Q_Id":66584895,"Users Score":0,"Answer":"Thanks for your answer! It is valuable information. However, it would be advantageous to be able to use the parallel mode, as I plan on simulating and optimizing big study cases. Is it that difficult that the time saved in parallel mode would not be worth it either? Unfortunately I must do this on Windows itself, have you knowledge of any other links that show the process for Windows 10 by any chance?\nMillion Thanks! :-)","Q_Score":1,"Tags":"python,mpi,openmdao","A_Id":66603624,"CreationDate":"2021-03-11T14:49:00.000","Title":"OpenMDAO dependencies error with Microsoft MPI, and python mpi4py, petsc4py","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I recently updated my spyder version to 4, and I really miss a feature that I used to have. In the past, I was able to comment out a line of code that was creating a variable, and still run the code as long as that variable was in memory. This was extremely practical to test things on the fly.\nNow, this doesn't work. Is this on purpose or is there a way to reactivate this feature?\nThanks for your help","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":155,"Q_Id":66585269,"Users Score":2,"Answer":"OK, after being confused because I saw this on two versions on two platforms, I tried checking Tools > Preferences > Run > Run in console's namespace instead of an empty one under Run Settings. I had to repeat this in Run > Configuration per file\nas well. 
This fixed the issue, which I was seeing in both version 3.9 on Arch Linux and 4.2.3 on Win10.\nI think a lot of people who are migrating from MATLAB to an environment like Spyder might find this to be a serious impediment. A lot of MATLAB use involves tweaking and running small scripts by hand, essentially using the environment as a mainly script-driven interactive analysis environment.","Q_Score":1,"Tags":"python,spyder","A_Id":66699453,"CreationDate":"2021-03-11T15:11:00.000","Title":"spyder not keeping variables in memory anymore?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a TXT and CSV files with delimiter as \" ^@ \" (was able to view this in VIM editor, in notepad++ it shows as null )\nI want to use this as column separator for dataframe in Python(pandas). What should i use?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":176,"Q_Id":66592081,"Users Score":0,"Answer":"df = pd.read_csv(fl, sep='\\^\\@',engine='python') \nTry this if it works!","Q_Score":1,"Tags":"python,pandas,dataframe","A_Id":66592172,"CreationDate":"2021-03-11T23:32:00.000","Title":"read TXT file with delimiter as ^@ using pandas","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to check if a specific key has a specific value in a dynamodb table with\/without retrieving the entire item. 
Is there a way to do this in Python using boto3?\nNote: I am looking to match a sort key with its value and check if that specific key value pair exists in the table,","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":1886,"Q_Id":66592148,"Users Score":1,"Answer":"It sounds like you want to fetch an item by its sort key alone. While this is possible with the scan operation, it's not ideal.\nDynamoDB gives us three ways to fetch data: getItem, query and scan.\nThe getItem operation allows you to fetch a single item using its primary key. The query operation can fetch multiple items within the same partition, but requires you to specify the partition key (and optionally the sort key). The scan operation lets you fetch items by specifying any attribute.\nTherefore, if you want to fetch data from DynamoDB without using the full primary key or partition key, you can use the scan operation. However, be careful when using scan. From the docs:\n\nThe Scan operation returns one or more items and item attributes by accessing every item in a table or a secondary index.\n\nThe scan operation can be horribly inefficient if not used carefully. If you find yourself using scans frequently in your application or in a highly trafficked area of your app, you probably want to reorganize your data model.","Q_Score":0,"Tags":"python,amazon-dynamodb","A_Id":66593453,"CreationDate":"2021-03-11T23:41:00.000","Title":"Is there a way to check if a key=value exists in a DynamoDB table?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I want to check if a specific key has a specific value in a dynamodb table with\/without retrieving the entire item. 
Is there a way to do this in Python using boto3?\nNote: I am looking to match a sort key with its value and check if that specific key value pair exists in the table,","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":1886,"Q_Id":66592148,"Users Score":0,"Answer":"What Seth said is 100% accurate, however, if you can add a GSI you can use the query option on the GSI. You could create a GSI that is just the value of the sort key, allowing you to query for records that match that sort key. You can even use the same field, and if you don't need any of the data you can just project the keys, keeping the cost relatively low.","Q_Score":0,"Tags":"python,amazon-dynamodb","A_Id":66608130,"CreationDate":"2021-03-11T23:41:00.000","Title":"Is there a way to check if a key=value exists in a DynamoDB table?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"This is my scenario:\nI have a python project that runs in cPython.\nand I have some .pyc, .so files in this project, and I don't have these files' source code.\nThis project runs well in cPython.\nBut if I change the interpreter to pypy, it can't import these modules which are contained in the .pyc files and .so files.\nIs there any way that I can solve this problem?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":126,"Q_Id":66593459,"Users Score":0,"Answer":"You would need to decompile the code to get back some semblance of *.py files. There are various projects out there to do this: search for \"python decompile\". 
Sponsoring one of the efforts would probably go a long way towards getting a working decompiler.","Q_Score":0,"Tags":"python,module,cpython,pypy","A_Id":66594607,"CreationDate":"2021-03-12T02:45:00.000","Title":"How can I import a module from a pyc file or so file in Pypy?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"My project deals with sharing screen of one computer on another, similar to TeamViewer.\nI use Python with Pygame to show the screen share, as my GUI.\nOne of my features is dragging a file\/directory on the screen share to send this file or the directory to the other computer. It's easier to be done as Pygame has the ability to detect a DROP event and to get the dropped item's path.\nWhen I try to run this project on my computers, it works well. But when I'm trying to run it in the lab, as I'm trying to drop the file or directory on Pygame's screen, the cursor turns into a \"block\" sign and eventually what I dropped onto the screen is not detected, which also means that the DROP event is not triggered.\nI assume that the operating system could be the reason for this. Maybe a setting on the computers that causes a rejection of the \"drag and drop\". I use windows 10 on both computers. What should I do?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":77,"Q_Id":66595395,"Users Score":0,"Answer":"I tried to uninstall the old version I had of Pygame (1.9.6) and then to reinstall it to the most recent version (2.0.1 as of today). Now the Drag and Drop eventually works perfectly. I've come to a conclusion that this problem occurred because of the old version of Pygame that actually didn't allow me to use this feature. 
Now it's fixed.","Q_Score":2,"Tags":"python,python-3.x,drag-and-drop,screensharing","A_Id":67810816,"CreationDate":"2021-03-12T06:43:00.000","Title":"Problem with dragging files on Pygame screen","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am building a web scraper that has to retrieve quickly the text of a web page, from HTML only. I'm using Python, requests and BeautifulSoup.\nI would like to detect if the web page content is pure HTML or if it's rendered from Javascript. In this last case, I would just return an error message saying that this cannot be done.\nI know about headless browsers to render the Javascript but in this case I really just need to detect it the fastest way possible without having to render it.\nIt's not really possible to detect script tag as there are many in every webpage and that doesn't mean the text content is rendered in Javascript necessarily.\nIs there something I could check in the HTML that tells me accurately that the body content will be rendered from Javascript?\nThank you","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":43,"Q_Id":66598057,"Users Score":1,"Answer":"There is nothing in the initial DOM that shows beforehand that the site is rendered with js. 
These are some stuff you could try:\n\nAnalyzing several websites and make a guess on where the site\nis rendered with js based on the page's content size.\nYou could also get the html of different pages of the site\nand compare the content length (for a js-rendered site, the contents\nof different pages are likely to be the same\/similar before any code is executed).\nCheck the content size of the scripts or detect the scripts names of\nfamous technologies like react, vue and angular","Q_Score":0,"Tags":"javascript,python,html,web-scraping,python-requests","A_Id":66615160,"CreationDate":"2021-03-12T10:18:00.000","Title":"How do I detect if a web page is rendered dynamically from Javascript in Python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I don't know if this is a problem with Heroku or not, but I hope you can help me guys.\nI am building a bot in python using the praw library where I use the submission.reply(\"something\") to comment some posts, but the thing is that it only works in my local machine.\nWhen I upload it to Heroku, it does not work, except for a post in a certain subreddit and I am not entirely sure why. After that one, it simply does not comment anything. I tested it with try\/except and the error is here for sure, but I can\u00b4t find the problem.\nHere is the error:\n\npraw.exceptions.RedditAPIException: RATELIMIT: \"Looks like you've been doing that a lot. Take a break for 3 minutes before trying again.\" on field 'ratelimit'","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":169,"Q_Id":66601371,"Users Score":0,"Answer":"Looks like your bot is commenting way too fast for the reddit API guidelines. 
Try adding a delay timer of about 10 seconds after every comment.","Q_Score":0,"Tags":"python,heroku,reddit,praw","A_Id":68469428,"CreationDate":"2021-03-12T14:03:00.000","Title":"Heroku and Praw (Reddit API) submission.reply()","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am building a Python 3.6 application which distributes specific jobs between available nodes over network. There is one server which builds jobs. Clients connect to the server and are assigned a job, which they return as finished after computation completes.\nA job consists of a dict object with instructions, which can get kind of large (> 65536 bytes, probably < 30 MB).\nIn my first attempt I used the Twisted library to exchange messages via a basic Protocol derived from twisted.internet.protocol. When sending a serialized object using self.transport.write() and receiving it on the other hand over the callback function dataReceived() only 65536 bytes are received. Probably that's the buffer size.\nIs there a \"simple\" protocol which allows me to exchange larger messages between a server and multiple clients in Python 3.6, without adding too much coding overhead?\nThanks a lot in advance!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":68,"Q_Id":66602702,"Users Score":0,"Answer":"Finally I used websockets. 
It works like a charm, even for large messages.","Q_Score":0,"Tags":"python,python-3.x,networking,tcp,twisted","A_Id":67690254,"CreationDate":"2021-03-12T15:25:00.000","Title":"Simple network protocol to send large dict\/JSON messages between python nodes","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm new to application development and decided to use AWS services for this project. however, I am having difficulty deploying chalice. every time I run \"chalice deploy\", I get an error.\nhere are the steps I followed along with commands for Windows:\n\nupgraded my powershell\n\"virtualenv enve\" : then \".\\venv\\Scripts\\activate\" # install and run virtual environment\n\"pip install aws cli\" : # install aws command line interface\n\"aws configure\" : # configure my AWS_KEY and AWS_SCERET\n\"pip install chalice\" : # install chalice\n\"chalice new-project\": # created a new project\n\"chalice deploy\" # deploy\n\nI get\n\nAn error occurred (InvalidClientTokenId) when calling the GetRole\noperation: The security token included in the request is invalid.\n\nI'm able to use localhost and run my application but not able to deploy to the server. I don't know what i'm doing wrong. someone, please help!\nadditional info:\nmy operating system is windows 10. I upgraded my PowerShell to 7","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":118,"Q_Id":66605637,"Users Score":0,"Answer":"I somehow figured it out!. The error occurred because the command \"\n\nchalice deploy\n\n\" was used in the wrong directory. 
Make sure you are in the directory where your chalice file is before initializing it to deploy.","Q_Score":0,"Tags":"python,amazon-web-services,amazon-ec2,aws-lambda,serverless","A_Id":67134224,"CreationDate":"2021-03-12T18:51:00.000","Title":"how to solve \"An error occurred (InvalidClientTokenId)\" AWS Chalice deployment error","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm creating a python library that acts as library and not an app. It's a wrapper around a REST API that has complex responses and I want to abstract the parsing. The library can then be used by apps.\nThe REST API is around a SaaS offering so one configuration part would be the domain and another the api key. I'm aware I can create an ini file and use configparser. But that means I still need to pass the path of the config file to the library. A dict would however be preferable as that way the app can decide how to store the config. I also want to initialize the library exactly once and not have the options as method or constructor arguments (except for an initialization method).\nHow can I achieve that? Is it sensible to have the app require to call specific method like config(options: dict) before the library can be used?\nEDIT:\nBest analogy I can come up with is ORM for REST. Let's say there are Rooms, shelves and Books each backed by a set of API end points (GET, PUT etc). So I will want to create a class that wraps these calls for each item and each class like Book, Room etc needs access to the config.\nAn important thing is that these items are configurable in the system, eg. admins can add whatever properties to them they want and a GET call for example returns a list of properties with metadata about them (id, name, type etc). 
Hence the need for the wrapper.","AnswerCount":2,"Available Count":1,"Score":-0.0996679946,"is_accepted":false,"ViewCount":53,"Q_Id":66605725,"Users Score":-1,"Answer":"Well, here's an idea. How about writing your own config parser that uses dictionaries instead? You could write your config in JSON format and load it as a dictionary using json.loads. This way you can retrieve the configuration parameters from anywhere, including the web since JSON is web-friendly.","Q_Score":1,"Tags":"python,configuration","A_Id":66605834,"CreationDate":"2021-03-12T18:59:00.000","Title":"How do I make a python library configurable? (Initialization)","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to sort a array in place like [3,2,1,2,4,5,3] so that it is in sorted order ascending but with all duplicates grouped at the end in sorted order. So the result for the array would be [1,4,5,2,2,3,3]. How can I do this without using python built in sort()?","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":84,"Q_Id":66606418,"Users Score":1,"Answer":"Do it in two phases.\n\nJust sort the array in-place with your favorite in-place sorting algorithm\nScanning sorted array right to left, find the first DU subarray (where D is a bunch of duplicate values, and U is a tail of unique elements). Make it into UD. 
Keep going.\n\nThe second phase completes in O(n).","Q_Score":0,"Tags":"python,arrays,algorithm,sorting,data-structures","A_Id":66606869,"CreationDate":"2021-03-12T19:58:00.000","Title":"Sorting array ascending with duplicates at the end also sorted ascending","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Quoting w3schools,\n\nLike many other popular programming languages, strings in Python are arrays of bytes representing unicode characters.\n\nIs there any difference between 'abc' and ['a', 'b', 'c']? Is there a way to tell the difference between the first and the second example without using type()?","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":145,"Q_Id":66606870,"Users Score":1,"Answer":"Strings in Python are not \"arrays of Bytes\".\nThey are first class objects - yes, the underlying object do\nhave a buffer where the bytes representing the unicode\ncharacters are stored - but that is opaque.\nNow, strings are sequences of one-character strings\n(not bytes, not other type). As thus, most code that also accepts\nor expects sequences of one-character strings will work with\na string of any-size, as well as any other such sequence,\nsuch as a list of one-character strings as in your example.\nOther than that, strings are fundamentally different from\n\"lists of strings\". 
Probably the most used way to visualize\nstrings is a simple \"print\", and a print of a string and\nsuch a list will differ enormously.\nThen, if you need to find the difference in code, using\ntype, or calling isinstance(myvar, str) should be enough.","Q_Score":0,"Tags":"python,arrays,string,types","A_Id":66606944,"CreationDate":"2021-03-12T20:35:00.000","Title":"Is there any difference between string and array of one-digit strings in Python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"#Adjust Date Info\nTSLA['date'] = TSLA['date'].astype(str)\nTSLA['date'] = pd.to_datetime(TSLA['date'])\nThe datatype of both columns is object.\nI've tried using .astype(str) on the date column then using a lambda function to extract the YYYY-MM-DD but the datatype doesn't change. It doesn't throw up an error either when applying the .astype(str)\n.to_datetime doesn't work either.\nThere are no missing values in either column. I'd appreciate any opinions as to what i'm doing incorrectly?\nSince i am unable to add images for now, the date column has the following values: YYYY-MM-DD HH-MM-SS-HH-MM-SS","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":50,"Q_Id":66611315,"Users Score":1,"Answer":"Alright, it seems that\nTSLA['date'] = pd.to_datetime(TSLA['date'],utc = True)\nfollowed by:\nTSLA['date'] = TSLA['date'].dt.date\ngot me the values i wanted i.e. 
YYYY-MM-DD.","Q_Score":0,"Tags":"python,pandas,dataframe,data-preprocessing","A_Id":66611900,"CreationDate":"2021-03-13T08:03:00.000","Title":"I'm having trouble extracting the year from the date column of this particular dataset","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I can start python from windows cmd just typing \"python\", but it doesn't work from pycharm terminal - it writes, that \"python\" is not internal or external command, executable program, or batch file.\nSo, os.system('python file.py') or os.popen('python file.py') also doesn't work, but I have to start another python program in my project. How can I fix it?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":34,"Q_Id":66611446,"Users Score":0,"Answer":"I found that there is the sys module, that has the executable variable, that is the path to python.exe, so I'll use the full path. 
This solved the problem on my computer, so, I think, this will work on other computers.","Q_Score":1,"Tags":"python,cmd,path,environment-variables,python-os","A_Id":66612602,"CreationDate":"2021-03-13T08:20:00.000","Title":"python can't be started from pycharm windows terminal without full path","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I would like to create a \"back to previous\" button after each response for users who realize that they inputted the wrong answer and would like to start over","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":86,"Q_Id":66611628,"Users Score":1,"Answer":"To do this in the last version of Watson Assistant you need to create an Intent (\"go_back\" for example) and then use the Jump functionality to go back to the previous node if the go_back intent is recognized -- a Jump must be manually configured in all nodes where you want \"go_back\" to be possible in your Dialog.\nI hope this answer helps you.","Q_Score":2,"Tags":"json,python-3.x,watson-assistant","A_Id":66620052,"CreationDate":"2021-03-13T08:47:00.000","Title":"Watson Assistant creating a go back option for all nodes","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am learning django channels in pythonanywhere and pythonanywhere support redislite instead of redis.\nso i want to setup redislite to port 6379\nexactly like in redis command:redis-server --port 6379\nI don't seem to find exact answer.\nAny answer would be appreciated.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":90,"Q_Id":66612723,"Users Score":0,"Answer":"That's not 
redislite, that's redis. redislite is just a module that you import and use.","Q_Score":0,"Tags":"pythonanywhere,stackexchange.redis,node-redis,redislabs","A_Id":66636483,"CreationDate":"2021-03-13T11:07:00.000","Title":"How to setup redislite in specific port:6379","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"How do I download QtDesigner for PyQt6? If there's no QtDesigner for PyQt6, I can also use QtDesigner of PyQt5, but how do I convert this .ui file to .py file which uses PyQt6 library instead of PyQt5?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":3695,"Q_Id":66613380,"Users Score":2,"Answer":"You can install QtCreator or use command line pyuic6 -o main.py main.ui","Q_Score":0,"Tags":"python,pyqt5,qt-designer,pyqt6","A_Id":66614227,"CreationDate":"2021-03-13T12:13:00.000","Title":"Downloading QtDesigner for PyQt6 and converting .ui file to .py file with pyuic6","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've been making simple \"voice assistant\". I imported pyttx3 using pip install pyttx3(and yes i added import pyttsx3 in the code), which successfully installed due to terminal outprint Successfully installed pyttsx3-2.90 but then when i try to run the code i get File \"c:\\Users\\teeki\\voiceassistant\\va.py\", line 1, in import pyttsx3 ImportError: No module named pyttsx3\nI already tried to lookup the problem, found some solutions, which didnt do anything for me. 
Some of the things I tried:\n\nreinstalling pyttsx3 with pip uninstall pyttsx3 then installing it again\nInstalling it with pipenv\nChanging python interpreter back and forth to 2.7, 3.8, 3.9\n\nEDIT:im using visual studio code","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":111,"Q_Id":66613794,"Users Score":0,"Answer":"You can try py -m pip install pyttsx3. Also, make sure that it is installed into your PATH.\nIf that doesn't work, try pip3 install pyttsx3.","Q_Score":0,"Tags":"python,pyttsx3","A_Id":66613877,"CreationDate":"2021-03-13T12:58:00.000","Title":"I get import error after importing pyttsx3","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am a PhD student, and I have been creating Python programs to handle massive scientific calculations.\nEven after using optimal Computer Science algorithms, my scripts often take hours to complete.\nI recently tried to implement some of the heavier functions in JavaScript to compare its performance, and it was 10x faster right away.\nThis left me wondering why is JavaScript so much faster than Python, if both are interpreted languages. Could Python ever catch up with this performance? (Perhaps restricting a few minor operations, or adding optional declarations to improve speed).\nPS. 
I have read that the performance improvements that I noticed in JavaScript are powered by advanced Google Chrome technology, so I guess my question could be rephrased as asking whether these technologies could also be applied to speed up standard Python.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":240,"Q_Id":66614587,"Users Score":2,"Answer":"The reason for python to be slow is that python runs C code at its backend.\nWhat I mean by that is that every variable\/object you create in python has a C struct defined at its backend to track that variable's size, datatype, and other parameters. Hence, each time you run the python code it first runs that C code in the backend and shows you the result.\nHence, python is much slower compared to javascript or java.","Q_Score":0,"Tags":"javascript,python-3.x,performance","A_Id":66614653,"CreationDate":"2021-03-13T14:24:00.000","Title":"Why is Python so much slower than JavaScript? Could it ever catch up?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":1},{"Question":"What is the difference between - and -- in Python and in Linux, I know about operators but in cmd line we will use python -m pip install --upgrade pip so that's the doubt.\nhope some can clear my doubt as soon as possible.\nthanks in advance! :-)","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":359,"Q_Id":66614767,"Users Score":0,"Answer":"Using -- instead of - is just a convention in naming\/invoking options on the command line: the single dash (-) is usually used on short options, the double dash (--) on long options. 
Many commands will accept either form but you must check the man page and\/or manual first.\nIn your example 'python -m pip install --upgrade pip' means:\n\npython -m pip means to run a module, in this case pip\ninstall --upgrade pip means you are telling pip to install an update for the package called pip, which will bring it to the latest version available.\n\nIf you are on Linux, you can see a summary of options for most commands by typing -h or --help after the command, for example python -h.\n-- is not a valid operator in Python, but - is.\nI hope this clears things up for you.","Q_Score":0,"Tags":"python,linux,windows,cmd","A_Id":66614946,"CreationDate":"2021-03-13T14:44:00.000","Title":"What is the difference between - and -- in python, linux and in windows cmd?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"What is the difference between - and -- in Python and in Linux, I know about operators but in cmd line we will use python -m pip install --upgrade pip so that's the doubt.\nhope some can clear my doubt as soon as possible.\nthanks in advance! :-)","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":359,"Q_Id":66614767,"Users Score":0,"Answer":"Python is a programming language. Linux is an operating system kernel.\nMy guess is that by \"in Linux\" you mean using a command shell like bash. Yes, the language that bash processes might reasonably be called a \"language.\" The command shell in Microsoft Windows is cmd.\nIf bash is what you mean, then Python and bash are two different languages; in the same way that Python, bash, Java, PHP, C++, and others are all different languages. Each may have its own meaning for the use of - and --.\nIt is always important to read the documentation. 
It is common practice at this time for executable programs to have command line options using - for single letter options and -- for long name options. When using bash, see the output of ls --help to see the short (single letter) and long options. -a and --all are equivalent.\nMost programs from Microsoft to be run in the cmd shell, and those designed for it, typically use \/ to specify options. See DIR \/? for a list of options that can be used with the DIR command.\nPowerShell uses - like bash to indicate options. However, the options can be long names. In a PowerShell console, use the command help Get-ChildItem -ShowWindow to see the options (called parameters) that can be used with the Get-ChildItem command.\nWhen in doubt, read the doc.","Q_Score":0,"Tags":"python,linux,windows,cmd","A_Id":66616893,"CreationDate":"2021-03-13T14:44:00.000","Title":"What is the difference between - and -- in python, linux and in windows cmd?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am trying to open a notebook file that I was working on 3 days ago, however, I get the following error Unreadable Notebook: C:\\file path UnicodeDecodeError('utf-8) for Jupyter Notebook. How can I get this file to work again.\nI've reinstalled Anaconda and tried opening the file on different computers but it still doesn't work","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":255,"Q_Id":66617719,"Users Score":0,"Answer":"I just had this problem and this is what worked for me.\nOpen the problem notebook in a text editor and copy all the text. Open a new file in your text editor and paste in the text, then save the file. In my case the text editor did not have the option to save as a Jupyter Notebook, so I saved as .txt.\nMove the file into your Jupyter area. 
Rename it so the ending is .ipynb and not .txt. Then open the file as a notebook.\nAs I read through the now-working notebook I saw some comments had what looked like Chinese characters in them. I don't know exactly what happened there to make those appear but if you used Ctrl+\/ to comment like I do then it's possible that is where the issue is in your notebook. So if my janky method of converting file types does not work for you and you still really want to save your notebook enough to manually dig through all the markup and text then you can try checking your comments in the file to see if there are odd characters in there.","Q_Score":1,"Tags":"python,jupyter-notebook","A_Id":71560242,"CreationDate":"2021-03-13T19:32:00.000","Title":"Unreadable Notebook: C:\\file path UnicodeDecodeError('utf-8) for Jupyter Notebook","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"with tokenize I know you can split text into individual words, but I am confused on how to add characters to indicate the beginning and end of sentences after tokenizing. In my case I want to put ^ to indicate the beginning of the sentence and $ to indicate the end of the sentence. I am asking because I am trying to implement bigram probability models and this is for a school assignment, which is why this is a reinvent the wheel problem.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":41,"Q_Id":66619452,"Users Score":0,"Answer":"tokenize is a part of python distribution, and intended to parse python source code. Is this actually a good tool for your problem? 
Have you tried nltk?","Q_Score":0,"Tags":"python,tokenize","A_Id":66619616,"CreationDate":"2021-03-13T23:07:00.000","Title":"Is there a Python function to mark the beginning and end of sentences with a specific character after tokenizing?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"list_=[1,3,5,7]\nfor i in list_:\nfor j in range(i):\nprint(i)\noutput-1,2,2,3,3,3,5,5,5,5,5 .","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":32,"Q_Id":66620156,"Users Score":1,"Answer":"The output you're providing is wrong.\nThe loop you're doing goes like this:\nIterate over provided list and then print out the element of that list (that you're currently iterating over) as many times as the value of the element.\nSo in your case you'd have: 1, then 3 times 3, 5 times 5 and 7 times 7.\nI've simplified your logic here, because for j in range(i) is actually iterating over: 0, 1, 2, ... 
i (where i equals 1, 3, 5 or 7 depending of the outer loop iteration).","Q_Score":1,"Tags":"python-3.x","A_Id":66620224,"CreationDate":"2021-03-14T01:15:00.000","Title":"Can you help me to understand the loop?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have the below columns in excel file i need to find the total runtime using python pandas.\n\n\n\n\nStage\nJobName\nBaseLineStartTime\nBaseLineEndTime\nStartTime-2Mar21\nEndTime-2Mar21\n\n\n\n\nApp1\nJobName1\n20:00:00\n20:11:45\n20:05:31\n20:18:43\n\n\nApp2\nJobName2\n20:00:00\n20:12:11\n20:05:31\n20:23:11\n\n\nApp9\nJobNamex\n20:11:46\n20:25:41\n20:23:12\n20:43:33\n\n\nDay1\nJobName1\n20:25:42\n20:30:42\n20:43:44\n20:48:44\n\n\nDay2\nJobName2\n20:30:43\n20:31:43\n20:48:45\n20:49:50\n\n\nDay2\nJobName3\n20:30:43\n20:40:43\n20:48:45\n20:58:45\n\n\n\n\nNote: I will have more columns based on the runtime dates.\nTo find the total run time using the logic (App9(EndTime) - App1 (StartTime) & (Day2(EndTime of jobname2 or jobname3 which runs later) - Day1(StartTime)\nI need to print the result in below format\n\n\n\n\nStage\nBaseLineRunTime\nRuntime-2Mar21\n\n\n\n\nApp\n00:25:41\n00:38:02\n\n\nDay\n00:15:01\n00:15:01","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":40,"Q_Id":66621093,"Users Score":0,"Answer":"I tried this using the below option not sure if its the best\nNewElapsedTime = time_csv[time_csv['Stage'].str.contains('App')] MaxEndTime = datetime.strptime(max(NewElapsedTime['EndTime'],'%H:%M:%S') MinStartTime = datetime.strptime(max(NewElapsedTime['StartTime'],'%H:%M:%S') print(MaxEndTime - MinStartTime)\nNext trying to loop the columns and store these results.","Q_Score":0,"Tags":"python,pandas,numpy","A_Id":66631198,"CreationDate":"2021-03-14T04:29:00.000","Title":"To determine total runtime 
from excel using python pandas","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm currently learning Python and I am just wondering in what situation one would use .remove() rather than .discard() when removing values from a set. Since .discard() does not raise an error when removing an element from a set if the element isn't present wouldn't it be the better one to use?","AnswerCount":3,"Available Count":3,"Score":0.1325487884,"is_accepted":false,"ViewCount":914,"Q_Id":66621918,"Users Score":2,"Answer":"Errors are raised to be caught and processed. They are not annoyances or hurdles. They are tools to identify conceptual errors, or to indicate unexpected behavior that one needs to pay attention to, or to deal with parts of the system that one does not have control over, or to use to control the flow of the code where the python doctrine says \u201efail rather than test\u201c i.e. let the code raise exceptions you expect rather than testing with if statements.\nIn the case of .discard() and .remove(): .discard() calls .remove(), silently catches the exception in case the value was not there, and silently returns. It\u2019s a shortcut for a silent .remove(). It might be suitable for your special use-case. 
Other use-cases might require an exception to be raised when the value does not exist.\nSo .remove() is the general case that gives the developer control over the exception and .discard() is just a special use case where the developer does not need to catch that exception.","Q_Score":0,"Tags":"python,set,operation","A_Id":66622000,"CreationDate":"2021-03-14T07:01:00.000","Title":"Why would someone use set.remove() instead of set.discard()","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm currently learning Python and I am just wondering in what situation one would use .remove() rather than .discard() when removing values from a set. Since .discard() does not raise an error when removing an element from a set if the element isn't present wouldn't it be the better one to use?","AnswerCount":3,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":914,"Q_Id":66621918,"Users Score":0,"Answer":"Well, each of them has its own use in different cases.\nFor example, say you want to create a program that takes input from the user and can add, remove or change data. 
Then when the user chooses the remove option and enters a value that is not in the data, you would want to tell them that it is not in the data (most probably using try - except); in that case you would like to use the remove() function rather than .discard()","Q_Score":0,"Tags":"python,set,operation","A_Id":66621973,"CreationDate":"2021-03-14T07:01:00.000","Title":"Why would someone use set.remove() instead of set.discard()","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm currently learning Python and I am just wondering in what situation one would use .remove() rather than .discard() when removing values from a set. Since .discard() does not raise an error when removing an element from a set if the element isn't present wouldn't it be the better one to use?","AnswerCount":3,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":914,"Q_Id":66621918,"Users Score":0,"Answer":"Like you have said, the .discard() method does not raise any error; on the other hand, the .remove() method does. So it depends on the situation. You can use the discard method if you do not want to catch the KeyError to do something else with try-except blocks.","Q_Score":0,"Tags":"python,set,operation","A_Id":66621971,"CreationDate":"2021-03-14T07:01:00.000","Title":"Why would someone use set.remove() instead of set.discard()","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm working on Flutter project in Android Studio platform and I faced a problem with how to write and run python API code inside my Flutter project without letting it as a backend code in another platform? 
since when I run my Flutter project that connected with python API code in another platform as a backend using post method, it's worked with the emulator but it does not work with my physical android device.\nSo is there any recommend solution for either the first problem or the second.\nThanks.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":59,"Q_Id":66627916,"Users Score":0,"Answer":"No it's not possible to write python code Inside Flutter code\nBut you can write your api in different framework like Django ,mongodb and use it in your Flutter app","Q_Score":0,"Tags":"python,android,api,flutter,backend","A_Id":66627972,"CreationDate":"2021-03-14T18:07:00.000","Title":"Is it possible to write python code inside Flutter using Android Studio?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am currently starting a kind of larger project in python and I am unsure about how to best structure it. Or to put it in different terms, how to build it in the most \"pythonic\" way. Let me try to explain the main functionality:\nIt is supposed to be a tool or toolset by which to extract data from different sources, at the moment mainly SQL-databases, in the future maybe also data from files stored on some network locations. It will probably consist of three main parts:\n\nA data model which will hold all the data extracted from files \/ SQL. This will be some combination of classes \/ instances thereof. No big deal here\n\nOne or more scripts, which will control everything (Should the data be displayed? Outputted in another file? Which data exactly needs to be fetched? etc) Also pretty straightforward\n\nAnd some module\/class (or multiple modules) which will handle the data extraction of data. 
This is where I struggle mainly\n\n\nSo for the actual questions:\n\nShould I place the classes of the data model and the \"extractor\" into one folder\/package and access them from outside the package via my \"control script\"? Or should I place everything together?\n\nHow should I build the \"extractor\"? I already tried three different approaches for a SqlReader module\/class: I tried making it just a simple module, not a class, but I didn't really find a clean way on how and where to initialize it. (Sql-connection needs to be set up) I tried making it a class and creating one instance, but then I need to pass around this instance into the different classes of the data model, because each needs to be able to extract data. And I tried making it a static class (defining\neverything as a@classmethod) but again, I didn't like setting it up and it also kind of felt wrong.\n\nShould the main script \"know\" about the extractor-module? Or should it just interact with the data model itself? If not, again the question, where, when and how to initialize the SqlReader\n\nAnd last but not least, how do I make sure, I close the SQL-connection whenever my script ends? Meaning, even if it ends through an error. I am using cx_oracle by the way\n\n\nI am happy about any hints \/ suggestions \/ answers etc. :)","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":36,"Q_Id":66628729,"Users Score":0,"Answer":"For this project you will need the basic Data Science Toolkit: Pandas, Matplotlib, and maybe numpy. 
Also you will need SQLite3(built-in) or another SQL module to work with the databases.\nPandas: Used to extract, manipulate, analyze data.\nMatplotlib: Visualize data, make human readable graphs for further data analyzation.\nNumpy: Build fast, stable arrays of data that work much faster than python's lists.\nNow, this is just a guideline, you will need to dig deeper in their documentation, then use what you need in your project.\nHope that this is what you were looking for!\nCheers","Q_Score":1,"Tags":"python,python-3.x","A_Id":66628813,"CreationDate":"2021-03-14T19:32:00.000","Title":"How do I include a data-extraction module into my python project?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have started a project and to test if it works, I used a Pi zero. It has buttons and then sends MIDI Messages to my PC, depending on what buttons are pressed. 
So a simple MIDI Controller.\nNow I think that a Microcontroller, like the Pico would be better suitable for such a task, but it can only run MicroPython.\nSo my question is, wether you can import most or all of the python libaries into microPython or if I should use another MicroController that can run python.","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":303,"Q_Id":66629306,"Users Score":1,"Answer":"Usually \"no.\" It won't fit.","Q_Score":0,"Tags":"python,microcontroller,micropython","A_Id":66629358,"CreationDate":"2021-03-14T20:38:00.000","Title":"Can you import python libaries into microPython?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have started a project and to test if it works, I used a Pi zero. It has buttons and then sends MIDI Messages to my PC, depending on what buttons are pressed. So a simple MIDI Controller.\nNow I think that a Microcontroller, like the Pico would be better suitable for such a task, but it can only run MicroPython.\nSo my question is, wether you can import most or all of the python libaries into microPython or if I should use another MicroController that can run python.","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":303,"Q_Id":66629306,"Users Score":0,"Answer":"So my question is, wether you can import most or all of the python libaries into microPython or if I should use another MicroController that can run python.\n\nWhile MicroPython shares much of the syntax of Python (3.4), it is different enough that anything but the most trivial Python code will not run under MicroPython. 
In general, you should only expect to run code developed explicitly for MicroPython on a MicroPython capable device.\n\nSo my question is, wether you can import most or all of the python\nlibaries into microPython or if I should use another MicroController\nthat can run python.\n\nI don't believe there are any microcontrollers available that can run standard Python. The smallest device you're going to find is probably something like the Raspberry Pi Zero.","Q_Score":0,"Tags":"python,microcontroller,micropython","A_Id":66629360,"CreationDate":"2021-03-14T20:38:00.000","Title":"Can you import python libaries into microPython?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Am new to fastapi. I have worked with multiple web frameworks in other languages and found the common pattern of middlewares for various purposes. e.g. If I have an API route that I want to authenticate then I would use a middleware that does the authentication. If I want to augment the incoming request I would use a middleware. FastAPI does have middlewares (A very small section in docs) but also has dependencies. I was looking to authenticate my API routes and started looking for examples and all the examples I find use dependencies. What (dependency or middleware) would be recommended way to authenticate an API route and why?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":3124,"Q_Id":66632841,"Users Score":13,"Answer":"The way I see it:\n\nDependency: you use it to run code for preparing variables, authentication and so on.\nMiddleware: you need to check some stuff first and reject or forward the request to your logic.\n\nThe middleware can be seen as a superset of a Dependency, as the latter is a sort of middleware that returns a value which can be used in the request. 
Though, in the middleware, you can log your requests or cache the results and access the response of the request (or even forward the request, call some other API and so on).\nTL;DR\nA Dependency is a sort of common logic that is required before processing the request (e.g. I need the user id associated to this token), while a Middleware can do that, it can also access the response to that request. Dependencies are the preferred way to create a middleware for authentication","Q_Score":9,"Tags":"python,fastapi","A_Id":66634433,"CreationDate":"2021-03-15T05:34:00.000","Title":"fastapi dependency vs middleware","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"When I try to install h5py with \"pip install h5py\"\n, I get this error :\n\nLoading library to get build settings and version: hdf5.dll error:\nUnable to load dependency HDF5, make sure HDF5 is installed properly\nerror: Could not find module 'hdf5.dll' (or one of its dependencies).\nTry using the full path with constructor syntax.\nERROR: Failed building wheel for h5py Failed to build h5py ERROR:\nCould not build wheels for h5py which use PEP 517 and cannot be\ninstalled directly\n\n\nI tried \"pip install --upgrade setuptools\" and also \"pip install --upgrade setuptools --ignore-installed\" , and it's do not reslove my problem\nI tried also to downgrade the pip, but it didn't solve the problem though.\nI use python 3.8\n\nThank you in advance !","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":977,"Q_Id":66639073,"Users Score":0,"Answer":"Try sudo apt install build-essential python3.8-dev libhdf5-dev if on Ubuntu or Debian.","Q_Score":0,"Tags":"python,pip,hdf5,h5py","A_Id":68492778,"CreationDate":"2021-03-15T13:37:00.000","Title":"Problem while installing h5py : Unable to load dependency HDF5, Could 
not find module 'hdf5.dll'","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Created a Flask Restful API with various end points for my website. Some endpoints need the user's username and password to get user specific data. I\u2019m currently sending these as parameters in the API call but I\u2019m assuming this isn\u2019t secure so how does one do this safely?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":71,"Q_Id":66639116,"Users Score":0,"Answer":"you can make a separate api route that acts as a login and returns a sessionID\/token on a successful login that can be used for authenticating to those endpoints you mentioned.","Q_Score":0,"Tags":"python,api,flask-restful","A_Id":66639816,"CreationDate":"2021-03-15T13:40:00.000","Title":"How to safely send a user data in API call","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have created an instance on GCP to run some machine learning model for an app I am working on for a little project. I want to be able to call one of the methods in one of the files from my app and I thought a Cloud Function would be suitable.\nTo make this question simpler, let's just imagine I have a file in my instance called hello.py and a method in this file called foo(sentence). And foo(sentence) simply returns the sentence parameter.\nSo how do I call this method foo in python.py and capture the output?\nThanks","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":330,"Q_Id":66639983,"Users Score":2,"Answer":"At Google Cloud (And Google also), \"all is API\". 
Thus, if you need to access from a product to another one, you need to do this through API calls.\nIf your Cloud Functions needs to invoke a script hosted on a Compute Engine VM instance, you need to expose an API on this Compute Engine instance. A simple flask server is enough, and expose it only on the private IP is also enough. But you can't directly access from your Cloud Functions code to the Compute Engine instance code.\nYou can also deploy a Cloud Functions (or a Cloud Run if you need more system packages\/libraries) with the model loaded in it, and like this perform all the computation on the same product.","Q_Score":0,"Tags":"javascript,python,node.js,google-cloud-platform,google-cloud-functions","A_Id":66657007,"CreationDate":"2021-03-15T14:34:00.000","Title":"How to call function in GCP Instance from Cloud Function","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I was wondering if there is a way the command prompt could run a python file when I click the run button on visual code studio. Is that possible? 
Thanks in advance.","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":31,"Q_Id":66642183,"Users Score":1,"Answer":"Yes, that is possible (in fact, a lot of developers and learners do this) and it can be done by following these steps:-\n\nDownload and install Python on your system and add the bin folder to the path, environment variables.\nInstall Code Runner Extension from Extension tab in VSCode\nSet the configuration of terminal in VSCode to Python\nCtrl+Shift+N is the hotkey to run the Python program in terminal in VSCode.","Q_Score":1,"Tags":"python,cmd","A_Id":66642263,"CreationDate":"2021-03-15T16:48:00.000","Title":"Is there a way that command prompt could run a python file from Visual Code Studio?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I was wondering if there is a way the command prompt could run a python file when I click the run button on visual code studio. Is that possible? Thanks in advance.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":31,"Q_Id":66642183,"Users Score":0,"Answer":"All you do is enter this into command prompt: PATH OF FILE. 
For example, if my file was hello.py in Documents, then I would type in C:\/Documents\/hello.py","Q_Score":1,"Tags":"python,cmd","A_Id":66642253,"CreationDate":"2021-03-15T16:48:00.000","Title":"Is there a way that command prompt could run a python file from Visual Code Studio?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I currently have a bucket in Google Cloud Storage with .pdf files, and I want to split each .pdf file into a multiple one-page .pdf files.\nI can only load the files as BLOB's (), and I can't find a good answer on how to read as a PdfFileReader object.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":314,"Q_Id":66642586,"Users Score":1,"Answer":"Upon \"fetching\" the object\/file from the bucket, you can \"keep\" it in the cloud function memory as a string (of bytes) or save it into a temp \"directory\" (\/tmp) local to your cloud function (the memory fo that temp directory is allocated form the total memory available for the cloud function). After that, you may be able to process the data either as a string, or as a file. When you finish with processing, you probably would like to upload those files into some other storage bucket.","Q_Score":0,"Tags":"python,pdf,google-cloud-functions,google-cloud-storage","A_Id":66643268,"CreationDate":"2021-03-15T17:15:00.000","Title":"Reading PDF from Google Cloud Storage","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I have 2 databases on different servers DB1 and DB2. 
DBlink is created on DB1 for DB2.\nI have only 1 table to be used which is present on DB1 for which I have to use dblink otherwise I can directly hit DB2. Is there any way to get exclude DB1 Dblink and have DB1 table data too?\nAlso, I don't have the right to create anything on DB2.\ne.g.\nselect * from tb1\njoin tb2\non tb1.col = tb2.col\nInstead of going through DB1 where dblink is present for DB2. I want to directly connect DB2 by getting of one table in DB2 or using python sqllite or sqlalchemy","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":130,"Q_Id":66644363,"Users Score":0,"Answer":"Is there any way to get exclude DB1 Dblink and have DB1 table data too?\n\nNo, there's not. Especially as you can't create anything on DB2 where all other tables reside. You have to access DB1, and the only way - as they are in different databases - is to use a database link.\nHowever! Some people say \"database\" while - in Oracle - it is a \"schema\" (which resides in the same database). I'd say that you don't have that \"problem\", i.e. that you know which is which. 
Though, if you don't then: if they are really just two different schemas in the same database, then you don't need a database link and can access DB1 data by other means (DB1 grants you select privilege; then reference its table by preceding its name with the owner name).","Q_Score":0,"Tags":"python,sql,oracle,sqlite","A_Id":66644530,"CreationDate":"2021-03-15T19:20:00.000","Title":"How to exclude DBLINK","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm looking to keep my mouse events in sync with my objects after resizing the screen.\nI'm told I need to create a data structure to keep track of:\n\nResizing events\n\nNew coordinates to match the resize\n\n\nHow can I accomplish this using simple algebraic equations and integrate it into a resize event for accurate updating?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":64,"Q_Id":66645103,"Users Score":1,"Answer":"Do it the other way around create a virtual game map, scale to the size of the window when drawing the scene and scale to the size of the virtual map when receiving an event.","Q_Score":2,"Tags":"python,python-3.x,pygame,pygame-surface,pygame2","A_Id":67031573,"CreationDate":"2021-03-15T20:19:00.000","Title":"In pygame how can i create a data struct that keeps track of resizing events and the coordinates of objects?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"After a click, a mini banner (or a container drawer) opens, which I should click on it, but I can't interact with it.\nI was trying the \"driver.switchTo\" command but since there is no Iframe, I don't know how to do it.\nthe body is this:\n Go 
<\/a>","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":93,"Q_Id":66645314,"Users Score":1,"Answer":"I've had issues with this in the past, and largely I couldn't figure out what to do either. If you can just leave your computer alone during the process you can always try to use pyautogui to kind of cheat your way through it.","Q_Score":1,"Tags":"python,selenium,switch-statement","A_Id":66645596,"CreationDate":"2021-03-15T20:36:00.000","Title":"Switch to panel - Python Selenium","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Hi I want to know If I am able to work directly with queries inside the \"models.py\" file to make results the default values of some fields. For example:\nclass NewModel(models.Model):\nSeats = models.IntegerField(default=)\n...\nThanks! :)","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":228,"Q_Id":66645337,"Users Score":0,"Answer":"You can define a method on the model that returns the value you want and point the default of your field to that method.","Q_Score":0,"Tags":"python,django,django-models,django-queryset","A_Id":66646817,"CreationDate":"2021-03-15T20:38:00.000","Title":"How to make queries directly on the models.py file in django, to fill the fileds with the results?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am working on a Python Turtle Snake Program, and I want to make my Turtle longer.\nBy that, I mean that I increase the 1 Cube to 2 Cubes, then 3, etcetera, for my Snake Game.\nCan you please inform how I could do that with turtle?\nThanks.","AnswerCount":1,"Available 
Count":1,"Score":0.0,"is_accepted":false,"ViewCount":36,"Q_Id":66647935,"Users Score":0,"Answer":"I once made a game like that. You can just create more turtles and use them as the body. Make the first turtle as the head of the snake. Then whenever it eats food,run a function that will create another turtle as a body component and put it in the previous position of head component. Repeat this everytime the snake eats a food.\nI am not so good in explaining,so sorry for any inconvenience : )","Q_Score":0,"Tags":"python-3.x,python-turtle","A_Id":69835659,"CreationDate":"2021-03-16T01:33:00.000","Title":"How to make a Turtle Longer","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a Kafka topic with 4 partitions, and I am creating an application written with python that consumes data from the topic.\nMy ultimate goal is to have 4 Kafka consumers within the application. So, I have used the class KafkaClient to get the number of partitions right after the application starts, then, I have created 4 threads, each one has the responsibility to create the consumer and to process messages.\nAs I am new to Kafka (and python as well), I don't know if my approach is right, or it needs enhancements (e.g. 
what if a consumer fails).","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":127,"Q_Id":66651832,"Users Score":1,"Answer":"If a consumer thread dies, then you'll need logic to handle that.\nThreads would work (aiokafka or Faust might be a better library for this), or you could use supervisor or Docker orchestration to run multiple consumer processes.","Q_Score":0,"Tags":"python,apache-kafka,kafka-consumer-api","A_Id":66656387,"CreationDate":"2021-03-16T08:51:00.000","Title":"Having same number of consumers as the number of partitions","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"What is the password for the command \"safaridriver --enable\"?\nI tried to run the command \"safaridriver --enable\" in the terminal, but my Mac asks me for a password. I put in the password for the sudo command but it doesn't seem to be correct.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":84,"Q_Id":66652103,"Users Score":0,"Answer":"If you're using Safari 10, the command \"safaridriver --enable\" won't work and there is no known workaround; you could upgrade to a higher version, where \"safaridriver --enable\" should work.","Q_Score":1,"Tags":"python,selenium,safari","A_Id":66652198,"CreationDate":"2021-03-16T09:10:00.000","Title":"What is password for command \"safaridriver --enable\"","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"My question is how to connect two BBB (Beagle Bone Black) boards to one PC using USB and communicate with them at the same time.\nI am trying to write a python script that allows two BBB boards to communicate with each other. 
The idea is that these boards should communicate with each other using iperf and an external cape (OpenVLC).\nThe problem is that I need to use the iperf on the server-side and then read the data on the client-side.\nFor this purpose, I need to connect them to one PC to be able to read commands and write the results.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":151,"Q_Id":66652533,"Users Score":0,"Answer":"That's going to be a struggle. Both BBB grab the same IPs (192.168.6.2 and\/or 192.168.7.2 depending on your PC's operating system) on their virtual Ethernet adapters and assign the same address to the PC side (192.168.6.1 and\/or 192.168.7.1). You can change that in the startup scripts (google for details). Then you'd need to set your PC up to route traffic between the two which depends on your operating system. If you haven't done networking before, it's going to be hard.\nInstead of USB, I'd strongly recommend connecting all devices to a simple router box using Ethernet. It just works.","Q_Score":0,"Tags":"python,beagleboneblack,serial-communication","A_Id":66656103,"CreationDate":"2021-03-16T09:37:00.000","Title":"Connect two BBB to one PC using USB","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am very new to python and pygame and I'm currently making a very basic experiment with it. So far, I know how to draw an image on the screen, move it around, add a background, stuff like that. 
But what I have always been wondering is, how do I switch between screens in a game and how do I keep the code tidy when doing so?\nEvery time I try to add a new thing to my already unstable code, it takes only a few lines to break it and it becomes a confusing mess.\nIf for example, I want a start screen that shows the title and a \"press any key to continue\" kinda thing, how do I do it?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":86,"Q_Id":66653278,"Users Score":0,"Answer":"A quick fix:\n\nMake the entire screen white and then draw the second screen onto the first one.\nThen when you need the other screen, just refill the screen with black and then continue.\nThis can be achieved by putting the screens in their own separate functions.\n\nLet me know if this helps out","Q_Score":0,"Tags":"python,pygame","A_Id":66653375,"CreationDate":"2021-03-16T10:24:00.000","Title":"How to make multiple screens in pygame? (As in, a \"menu\" screen, a \"main\" screen, a \"start\" screen, etc.)","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to create Jira issues with data populated in a row in google sheet, I plan to put a button to read the contents of the row and create Jira issues, I have figured the Jira API wrote the script for it and also the Google sheets API to read the row values to put in the Jira API.\nHow do I link the button to the python script in my local machine in a simple manner, I went through other similar asks here, but they are quite old and hoping now some new way might be available.\nPlease help me achieve this in a simple way, any help is greatly appreciated.\nThank You and Stay Safe.","AnswerCount":3,"Available Count":1,"Score":0.2605204458,"is_accepted":false,"ViewCount":808,"Q_Id":66655415,"Users Score":4,"Answer":"Google 
sheets cannot run code on your local machine. That means you have a few options:\nClick the button locally\nInstead of clicking a button on the google sheet, you can run the script yourself from the command line. This is probably how you tested it, and not what you want.\nWatch the spreadsheet\nYou could have your python script set up to run every few minutes. This has the benefit of being very straightforward to set up (google for cron jobs), but does not have a button, and may be slower to update. Also, it stops working if you turn off your computer.\nMake script available for remote execution\nYou can make it so that your script can be run remotely, but it requires extra work. You could buy a website domain, and point it towards your computer (using dynamic dns), and then make the google sheet request your new url. This is a lot of work, and costs real money. This is probably not the best way.\nMove the script into the cloud\nThis is probably what you want: cut your machine out of the loop. You can use Google Apps Script, and rewrite your Jira code there. You can then configure the Apps Script to run on a button click.","Q_Score":3,"Tags":"python,google-sheets,google-sheets-api","A_Id":66772025,"CreationDate":"2021-03-16T12:39:00.000","Title":"Running a python script saved in local machine from google sheets on a button press","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I need a method to get mouse position without mouse movement or mouse click using OpenCV in python.\nEVENT_MOUSEMOVE returns the position only when there is movement, but as I mentioned above, I want to get position without movement or click.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":67,"Q_Id":66656165,"Users Score":1,"Answer":"Not sure I understand this. 
If you register a callback for EVENT_MOUSEMOVE you can get the current position and store it in a global variable. Then you can get the position as often as you like, say every 50ms on a timer, even without further movement, by looking at the global variable. By definition, the mouse must be still at the last place it was when it last moved if it hasn't moved since.","Q_Score":0,"Tags":"python,opencv","A_Id":66657719,"CreationDate":"2021-03-16T13:27:00.000","Title":"Is there a way to get mouse position in opencv without mouse movement or click?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"(This is a mix between code and 'user' issue, but since i suspect the issue is code, i opted to post in StackOverflow instead of SuperUser Exchange).\nI generated a .csv file with pandas.DataFrame.to_csv() method. This file consists in 2 columns: one is a label (text) and another is a numeric value called accuracy (float). The delimiter used to separate columns is comma (,) and all float values are stored with dot ponctuation like this: 0.9438245862\nEven saving this column as float, Excel and Google Sheets infer its type as text. And when i try to format this column as number, they ignore \"0.\" and return a very high value instead of decimals like:\n(text) 0.9438245862 => (number) 9438245862,00\nI double-checked my .csv file reimporting it again with pandas.read_csv() and printing dataframe.dtypes and the column is imported as float succesfully.\nI'd thank for some guidance on what am i missing.\nThanks,","AnswerCount":3,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":204,"Q_Id":66656487,"Users Score":0,"Answer":"By itself, the csv file should be correct. Both you and Pandas know what delimiter and floating point format are. 
But Excel might not agree with you, depending on your locale. A simple way to make sure is to write a tiny Excel sheet containing, on the first row, one text value and one floating-point one. You then export the file as csv and check what the delimiter and floating-point formats are.\nAFAIK, it is much easier to change your Python code to follow what your Excel expects than to try to explain to Excel that the format of CSV files can vary...\n\nI know that you can change the delimiter and floating-point format in the current locale in a Windows system. However, it is a global setting...","Q_Score":0,"Tags":"python,pandas,csv,floating-point","A_Id":66657153,"CreationDate":"2021-03-16T13:47:00.000","Title":"CSV cannot be interpreted by numeric values","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"(This is a mix between code and 'user' issue, but since i suspect the issue is code, i opted to post in StackOverflow instead of SuperUser Exchange).\nI generated a .csv file with pandas.DataFrame.to_csv() method. This file consists in 2 columns: one is a label (text) and another is a numeric value called accuracy (float). The delimiter used to separate columns is comma (,) and all float values are stored with dot ponctuation like this: 0.9438245862\nEven saving this column as float, Excel and Google Sheets infer its type as text. 
And when i try to format this column as number, they ignore \"0.\" and return a very high value instead of decimals like:\n(text) 0.9438245862 => (number) 9438245862,00\nI double-checked my .csv file reimporting it again with pandas.read_csv() and printing dataframe.dtypes and the column is imported as float succesfully.\nI'd thank for some guidance on what am i missing.\nThanks,","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":204,"Q_Id":66656487,"Users Score":0,"Answer":"A short example of data would be most useful here. Otherwise we have no idea what you're actually writing\/reading. But I'll hazard a guess based on the information you've provided.\nThe pandas dataframe will have column names. These column names will be text. Unless you tell Excel\/Sheets to use the first row as the column name, it will have to treat the column as text. If this isn't the case, could you perhaps save the head of the dataframe to a csv, check it in a text editor, and see how Excel\/Sheets imports it. Then include those five rows and two columns in your follow-up.","Q_Score":0,"Tags":"python,pandas,csv,floating-point","A_Id":66656683,"CreationDate":"2021-03-16T13:47:00.000","Title":"CSV cannot be interpreted by numeric values","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to find out how I can run and control different instances of Firefox using selenium in python. I'm using the geckodriver to run Firefox. I know that I can have multiple Profiles in Firefox and that I can run a new Profile separately under about:profiles. This would solve my problem if I knew how I can switch between the windows corresponding to the different profiles. 
I'd be happy if someone could tell me how to do that, and even more so if someone knows of a thread where I can read up on it. Thanks for your time.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":105,"Q_Id":66658967,"Users Score":0,"Answer":"Thanks for the answer. I'll read into it more. For now, I've found a simpler solution to my problem. I can say driver1 = webdriver.Firefox(PATH) and driver2 = webdriver.Firefox(PATH) and then control them separately. Sorry that I bothered you when it was so easy, and thanks for the help.","Q_Score":0,"Tags":"python,selenium","A_Id":66664866,"CreationDate":"2021-03-16T16:05:00.000","Title":"Selenium python, multiple geckodriver instances","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am building an application that uses two legged authentication. I got an API key and API Secret, but now I am confused.\nI am currently storing my API keys and secrets in a .yml file. But I would like to distribute the .app code, which will end up having the .yml file.\nBut the .app file will contain the .yml file, which is bad since everyone will be able to see the API key and Secret.\nHow can I store the API key and Secret such that my application can access the key and secret without the users seeing it?","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":578,"Q_Id":66660180,"Users Score":1,"Answer":"The answer depends on a few variables:\n\nIs your source included?\nIs it possible to use a server to call the API for you? If so, can you also apply restrictions to the call that the server makes?\nIs using compiled code for where you store the key an option? 
If so, is it possible to obfuscate it?\n\nHere are my suggestions for different scenarios from experience:\nThe source is not included and using a server is an option, and restrictions can be applied, however using compiled code is not an option\nThen use a server to make requests. Let's say you need to make a call to example.com\/api\/v1, and you want to call a specific function with a specific set of arguments, then you can only allow requests to that specific API, with that specific set of arguments, and that specific function. This way, it means nothing to a potential attacker since it can only call that one function and nothing else.\nThe source is not included, using a server is not an option, and compiled code is not an option either\nWell, there's not much you can do; obfuscation is your best shot. The best way to do something like this is to hide it deep within your code, and make it obscure, etc.\nThe source is included, using a server is not an option, but you can use compiled code\nUse really obfuscated assembly and don't share the source for that if you can. For instance, you can have red herring instructions, and just like before, you should hide it deep in your code.\nThe source is not included, using a server is not an option, but you can use compiled code\nFor this it's the same as above, since the source for the assembly wouldn't be included.\nIf I didn't list your scenario here, then feel free to comment and I'll edit my answer.","Q_Score":4,"Tags":"python,api,api-key","A_Id":67026216,"CreationDate":"2021-03-16T17:19:00.000","Title":"How to secure API keys in applications that will be distributed to clients","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am building an application that uses two legged authentication. 
I got an API key and API Secret, but now I am confused.\nI am currently storing my API keys and secrets in a .yml file. But I would like to distribute the .app code, which will end up having the .yml file.\nBut the .app file will contain the .yml file, which is bad since everyone will be able to see the API key and Secret.\nHow can I store the API key and Secret such that my application can access the key and secret without the users seeing it?","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":578,"Q_Id":66660180,"Users Score":0,"Answer":"While I consider the existing answer technically correct, it may be worth pointing out that there are some security issues with hardcoding API keys in distributed software.\nAn API key is volatile by nature; it is not designed to last forever.\nWhat would happen if the API key is invalidated? Wouldn't that render all distributed software useless then?\nAnd what would happen if the API key has write privileges and is compromised? 
How could you distinguish between legit and malicious writes?\nEven though I understand the overhead, a scenario where the end user can set dedicated keys, obtained by the end user themselves, together with a way to replace that initial key, would help with the above two questions.\nOne of the API key's features is to be used by a machine that acts on behalf of a user, but if all the users are the same, this feature becomes meaningless.","Q_Score":4,"Tags":"python,api,api-key","A_Id":67091736,"CreationDate":"2021-03-16T17:19:00.000","Title":"How to secure API keys in applications that will be distributed to clients","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"So, I am using AWS athena where I have Data Source set to AwsDataCatalog, database set to test_db, under which I have a table named debaprc.\nNow, I have superset installed on an EC2 instance (in virtual environment). On the Instance, I have installed PyAthenaJDBC and PyAthena. Now, when I launch Superset and try to add a database, the syntax given is this:\nawsathena+rest:\/\/{aws_access_key_id}:{aws_secret_access_key}@athena.{region_name}.amazonaws.com\/{schema_name}?s3_staging_dir={s3_staging_dir}\nNow I have 2 questions -\n\nWhat do I provide for schema_name?\nI tried putting test_db as schema_name but it couldn't connect for some reason. 
Am I doing this right or do I need to do stuff differently?","AnswerCount":3,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":481,"Q_Id":66660946,"Users Score":1,"Answer":"It worked for me adding port 443 to the connection string as below and you can use test_db as schema_name:\nawsathena+rest:\/\/{aws_access_key_id}:{aws_secret_access_key}@athena.{region_name}.amazonaws.com:443\/{schema_name}?s3_staging_dir={s3_staging_dir}","Q_Score":0,"Tags":"python,amazon-web-services,sqlalchemy,amazon-athena,apache-superset","A_Id":66715212,"CreationDate":"2021-03-16T18:09:00.000","Title":"Connecting athena to superset","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm sorry for the vague wording in this question.\nI've created a login system with Tkinter but am not sure how to create a new window for the main menu of the app I want to make.\nI've tried opening a new window but that still leaves the login window open and if I attempt to delete it with the .destroy command it shuts down the whole thing.\nis there a way to completely refresh\/open a new window?","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":98,"Q_Id":66662493,"Users Score":1,"Answer":"is there a way to completely refresh\/open a new window?\n\nThe simplest solution is to put all of your \"windows\" inside frames -- login window is all inside a single frame, the main window is all inside a single frame, etc.\nThen, when you're ready to switch from one to the other, just destroy the login frame and create the main window frame, all without having to destroy the main window.\nA similar solution is to simply delete all of the children in the root window and then add the new widgets. 
It's easiest if everything is in a single frame, but you can destroy all of the widgets one by one by calling winfo_children on the root window and iterating over the results.","Q_Score":0,"Tags":"python,tkinter","A_Id":66663155,"CreationDate":"2021-03-16T19:56:00.000","Title":"How to progress to next window in tkinter","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm sorry for the vague wording in this question.\nI've created a login system with Tkinter but am not sure how to create a new window for the main menu of the app I want to make.\nI've tried opening a new window but that still leaves the login window open and if I attempt to delete it with the .destroy command it shuts down the whole thing.\nis there a way to completely refresh\/open a new window?","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":98,"Q_Id":66662493,"Users Score":1,"Answer":"You can call old_root.destroy() and that will close the old window. It will also reset all of the global variables in tkinter so it will be as if you just imported tkinter for the first time. So you can create a new tkinter window using tk.Tk(). I find this approach more intuitive and easier to understand\/implement.\nEdit: You need to first pull the data out of your entries\/other widgets before you call old_root.destroy() otherwise the data will be deleted as well.","Q_Score":0,"Tags":"python,tkinter","A_Id":66663478,"CreationDate":"2021-03-16T19:56:00.000","Title":"How to progress to next window in tkinter","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have been playing around with django channels + angular. 
I have created an app that simply sends notifications to the front end with a counter 1,2,3,4. It works fine, except if I open the page in multiple tabs. I am also not able to disconnect from the websocket; I can use unsubscribe but it does not really close the connection, but that's more of an angular question. Anyway, how can I make my socket multithreaded, so that if I make multiple requests from the same computer but from different tabs, two different instances of the consumer will be created? If I load 2 pages on the same computer, each should have its own independently increasing counter. Do I need redis for that?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":465,"Q_Id":66662639,"Users Score":0,"Answer":"my url router was missing .as_asgi()\nthis worked:\nURLRouter([path('wscrawpredict', CrawPredictConsume.as_asgi(),name=\"wscraw\")])","Q_Score":0,"Tags":"python,django,angular,websocket,django-channels","A_Id":66829614,"CreationDate":"2021-03-16T20:08:00.000","Title":"multiple independent websockets connections","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I mean, this very formula can mean different things whether there is a package named x, whether there is a module named x, whether there is a variable named x in one of them etc.\nI did not find an easy-to-understand, concise answer to this question, not even in the python documentation.\nThe answer would shed light on how python imports work.","AnswerCount":3,"Available Count":1,"Score":-0.0665680765,"is_accepted":false,"ViewCount":89,"Q_Id":66663798,"Users Score":-1,"Answer":"When you use import, the whole file is imported as a module. 
If you have a module (file) named sum and inside this file you have a function called sum, using from sum import sum you get only the function.","Q_Score":0,"Tags":"python,import","A_Id":66663952,"CreationDate":"2021-03-16T21:41:00.000","Title":"What does `from x import x` actually mean?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am having trouble getting task scheduler to send an email via python. The particular python code I am using creates a monthly export, grabbing data from a SQL database, and then sends an email notifying the team where the export is located. For the email, I am using smtplib.\nHere is the issue. I can run the export from pycharm, and both the export and the email work fine. I can run the export from task scheduler and JUST the export runs. In other words, when I run the .bat file of the code from task scheduler, the email doesn't send. Does anyone happen to know the solution? I've tried so many things from searches and nothing works. :(","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":102,"Q_Id":66664296,"Users Score":0,"Answer":"The answer really depends on how you've packaged your code and how you are choosing to execute it.\nTo me it seems like you are executing it as a script that your task scheduler is pointed to. If this script calls your system interpreter, my first suggestion would be to make sure you have all of the dependencies installed for the version of python that your script is calling. So if you are calling python 3.9.1, then make sure smtplib is installed on your system's python for 3.9.1.\nFor why it's working in pycharm: code in pycharm is run in an isolated environment, so packages that you have installed for your project will only work on that specific project. 
Not outside of it or on other systems.","Q_Score":1,"Tags":"python,email,smtp,smtplib,taskscheduler","A_Id":66664435,"CreationDate":"2021-03-16T22:24:00.000","Title":"Running python export that includes a send email trigger through task scheduler not working?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Lets say I have an Apache webserver that hosts a webpage, the webpage contains a button, how can I trigger the execution of a python script on my machine (the server machine) when clicking on the button?","AnswerCount":3,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":76,"Q_Id":66664790,"Users Score":0,"Answer":"One of the solutions out there is:\n\nMake an endpoint that invoke the python script (depending on your server, if it's python you will just call the script and if it's another function you can make a system call to invoke the python script)\nUse xml\/Ajax or any other call when you click the button to this endpoint\n\nPlease note that you need to secure your endpoint through JWT for maximum security","Q_Score":0,"Tags":"javascript,python,html","A_Id":66665096,"CreationDate":"2021-03-16T23:14:00.000","Title":"Is it possible to execute a python script on my machine from an HTML button?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Lets say I have an Apache webserver that hosts a webpage, the webpage contains a button, how can I trigger the execution of a python script on my machine (the server machine) when clicking on the button?","AnswerCount":3,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":76,"Q_Id":66664790,"Users Score":0,"Answer":"One solution, that 
would work, is by using a web-request. I can't really give you any code since your request is very general, but I can give you a sort of procedure.\n\nPython WebApp is running on the same server\n\nWebsite makes XmlHttprequest with javascript to your set location\n\nThe defined function executes \/ executes your Python script\n\n\nAll this can be done easily by using a FrameWork like Flask.","Q_Score":0,"Tags":"javascript,python,html","A_Id":66664859,"CreationDate":"2021-03-16T23:14:00.000","Title":"Is it possible to execute a python script on my machine from an HTML button?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Lets say I have an Apache webserver that hosts a webpage, the webpage contains a button, how can I trigger the execution of a python script on my machine (the server machine) when clicking on the button?","AnswerCount":3,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":76,"Q_Id":66664790,"Users Score":0,"Answer":"Easiest solution:\nCreate an endpoint on the webserver that executes the Python process every time someone accesses it (via GET\/POST, anything really), and then use AJAX in the browser (jQuery, VueJS+axios, native XmlHttpRequest) to connect the button to that endpoint.\nThis has a number of problems: anyone could GET the endpoint and trigger the process multiple times, which could crash your server. You'll need to password protect it unless you code the endpoint so that it can't be spammed.\nThis also spawns a process for every click. 
Another solution is to keep the Python process running and open a socket, which you can then send commands to from the endpoint on the local machine.\nThere are many ways this can be done.","Q_Score":0,"Tags":"javascript,python,html","A_Id":66664982,"CreationDate":"2021-03-16T23:14:00.000","Title":"Is it possible to execute a python script on my machine from an HTML button?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have my first ever hello_world.py file saved in documents\\python_work. It runs fine in the Sublime text editor but when I try to navigate to it through a command prompt window, it can't seem to find it. I open a command prompt and type cd documents and it works. I type dir and it shows me the files, but when I type cd python_work (which is the folder my file is in) I get:\n\nThe system cannot find the path specified.\n\nThe textbook had me add C:\\Python and C:\\Python\\Scripts to the PATH environment variables (not too sure why, just following directions), so perhaps I made a mistake during this process?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":57,"Q_Id":66664949,"Users Score":2,"Answer":"If you are in the same folder as the file, type python file.py (or py file.py, depending on your python version) to run it.\nA quick way to get to the folder is to find it in the file explorer, and where it shows the file path, click there and type 'cmd' and hit enter. 
It will open up the Command Prompt from that folder so you don't have to manually navigate to it.","Q_Score":1,"Tags":"python","A_Id":66664978,"CreationDate":"2021-03-16T23:31:00.000","Title":"How do I run my newly created hello_world.py file from a command prompt window?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I want to download the current 32bit version of WinPython. The 32bit version is needed for compatibility with pypyodbc and a 32bit MS Access database. I have had a 32bit version of 3.2.7 installed and running correctly for a while, but would like to upgrade to say 3.9 for a few reasons.\nBut on the usual download sites the WinPython 32 bit versions are all stripped back bundles, without the rich array of packages (pyQT,pyqtgraph, pyserial etc). What is the standard process to get a fully featured WinPython 32bit (v3.9.2)? Perhaps download an older version of the 32 bit version and then overwrite with the new version? Or download the current 64 bit version and install the minimal 32 bit version over the top? Download the minimum and install each needed package via pip?\nI know I am missing something, it can't be too hard... But have spent the day googling and not found the way forward.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":120,"Q_Id":66665417,"Users Score":0,"Answer":"Without having any better idea, I proceeded with just downloading the 'minimal 32 bit' installation, and then pip'ing the nine or ten packages I needed for my current suite of projects. 
Only took 15 minutes, and has resulted in a much smaller footprint on my hard disk.\nIf anyone has a different solution to my original question though, happy to hear it!\nMark","Q_Score":0,"Tags":"python-3.x,download,32-bit","A_Id":66729776,"CreationDate":"2021-03-17T00:33:00.000","Title":"WinPython 32bit download","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"In VS Code, is there any way to run a Python source line in the terminal (shift+Enter), ignoring the leading >>>?\nI heard some other editors have such functionality as one of their built-in features (e.g., IPython). In the case of VS Code, do you have any suggestions for set-ups or extensions?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":73,"Q_Id":66665578,"Users Score":0,"Answer":"I think what you mean is to run it in interactive mode, correct me if I am wrong.\n CTRL + SHIFT + P and then type\n run current file in interactive window.","Q_Score":0,"Tags":"python,visual-studio-code","A_Id":66665655,"CreationDate":"2021-03-17T00:59:00.000","Title":"Visual studio code: ignore leading \">>>\" in python examples","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"There is an OR operator | in regular expressions (re module), but why is there no AND operator?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":43,"Q_Id":66666634,"Users Score":0,"Answer":"Because you can't have one section of a string be both 'abc' and 'def'.
It is either one or the other.","Q_Score":0,"Tags":"python,python-re","A_Id":66666652,"CreationDate":"2021-03-17T03:34:00.000","Title":"why there is no AND operator in regular expression?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Let's say I have a python file named myfile.py in a certain directory.\nHow do I just call the python file directly without invoking python?\n\nmyfile.py\n\nand not\n\npython myfile.py","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1003,"Q_Id":66670581,"Users Score":0,"Answer":"Edit:\nTo be more precise: just typing the filename in the command line will not work.\nTyping start program.py, however, should work.\nWindows normally has information telling it which program is the default program for a given suffix.\nConcerning @go2nirvan's comment: Though Windows is not an oracle, it can have information for each file suffix to know what the related default application is.\nEven many Linux desktop applications associate a default application with certain MIME types.\nIf you click on a .xls file, (depending on what is installed) either Excel, OpenOffice Calc or LibreOffice will be opened.\nWindows associates file suffixes to file types, and file types to the applications that are supposed to start them.\nIf you open a CMD window and you type\nassoc .py\nyou should get an output similar to (I don't have a Windows machine nearby, so I can't tell you the exact output):\n.py=Python.File\nThen type\nftype Python.File (or whatever the previous command gave you) and you should see which executable shall be used.\nThis should be something like\nc:\\System32\\py.exe\nwhich is a wrapper program, calling the real python executable according to some rules.\nIf this doesn't work, then please tell us which version of Python you installed and how you installed it
(for all users, for current user, ...)\nFrom command line you have to call (If I recall correctly)\nstart test.py and it will execute the file with the associated executable","Q_Score":1,"Tags":"python,windows,command-prompt","A_Id":66670849,"CreationDate":"2021-03-17T09:47:00.000","Title":"How to run python scripts without typing python in windows","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"So the default of anaconda is python 3.8, but you can invoke python2 by running python2 a_py_script.py. The issue comes from the fact that you'll need to import things (say biopython), and you can't import them as any conda install -c conda-forge biopython or pip install biopython will be understood to automatically slot it into python3.8 packages exclusively.\nI ask this because I have some python2.7 scripts that demand packages outside the default install scope and ideally I'd like to do this without having to create a new python=2.7 env and track down everything I need.\nI've tried pip2.7 install biopython and python2.7 -m pip install biopython to no avail. Could it be that I technically don't have python 2.7 even though I'm able to invoke it from command line via python2 because python3 just naturally has some special limited backwards compatibility to run my python2 scripts? (I did notice that conda list includes only 3.8 and no mention of 2.7)\nI've tried cloning my env but I don't know how to do it in such a way that swapts just the version of python. conda create --name py27test --clone base python=2.7 says too many arguments. 
I'd like to know if this is even advisable, as I presume my base environment is built entirely on v3.8, so swapping out the Python version would just be a bad time; is that why this seems impossible?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":493,"Q_Id":66670918,"Users Score":0,"Answer":"You can't mix Python versions in a conda environment. You can call Python executables from outside your environment, but that's inadvisable for anything requiring dependencies. If you must use Python 2.7 and need dependencies installed, that needs to be done in a contained environment, one that does not mix Python 3 packages into it.\nIf you care about using your Python 2.7 scripts long-term, you should consider migrating them now; using unsupported software is only going to get harder over time.","Q_Score":0,"Tags":"python,pip,anaconda,conda,biopython","A_Id":66676641,"CreationDate":"2021-03-17T10:08:00.000","Title":"How do I install python2.7 packages under anaconda with python3.8 already in place?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Hi, so I have been working on a notebook for the last few days and showed it to my advisor yesterday, and we walked through it together. I tried to start working on the project this morning and cannot find the file that I was working on. What is strange is that the directory that I was working in says it was last modified yesterday, but when I look through the directory the file I am looking for cannot be found.
I know that you are probably thinking \"this ding deleted the file on accident\" and although I really don't know how that could have happened, that is one suspicion of mine; but looking at https:\/\/stackoverflow.com\/questions\/38819322\/how-to-recover-deleted-ipython-notebooks, they mention that it should go to the trash for my version of Jupyter Notebook upon deletion.\nIs there any way to possibly get the file back, or anywhere else I can look for the file? I have looked in my trash can but it is not there.\nmacOS Big Sur 11.2.3\nJupyter Notebook 6.1.5\nConda version: 4.9.2\nConda-build version: 3.20.5\nPython: 3.8.5.final.0","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":41,"Q_Id":66672653,"Users Score":0,"Answer":"I do not know why this was the case, but here is what happened:\nI could not find the file of interest. I tried all the methods from the link above; once I looked in the iCloud Desktop in Finder, the file suddenly appeared in the normal Desktop directory. I don't know why, but if this happens to you, check the iCloud directory corresponding to the directory you are in, and the file may then appear in the corresponding normal directory. Lesson learned: do some version control.","Q_Score":0,"Tags":"python,macos,jupyter-notebook,anaconda,storage","A_Id":66673358,"CreationDate":"2021-03-17T11:59:00.000","Title":"Cannot find .ipynb in directory that says it was last modified recently but has no recently modified files","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm making a simple project where I will have a downloadable scraper on an HTML website. The scraper is made in Python and is converted to a .exe file for downloading purposes.
Inside the python code, however, I included a Google app password to an email account, because the scraper sends an email and I need the server to login with an available Google account. Whilst .exe files are hard to get source code for, I've seen that there are ways to do so, and I'm wondering, how could I make it so that anyone who has downloaded the scraper.exe file cannot see the email login details that I will be using to send them an email when the scraper needs to? If possible, maybe even block them from accessing any of the .exe source code or bytecode altogether? I'm using the Python libraries bs4 and requests.\nAdditionally, this is off-topic, however, as it is my first time developing a downloadable file, even whilst converting the Python file to a .exe file, my antivirus picked it up as a suspicious file. This is like a 50 line web scraper and obviously doesn't have any malicious code within it. How can I make the code be less suspicious to antivirus programs?","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":91,"Q_Id":66672822,"Users Score":0,"Answer":"Firstly, why is it even sending them an email? Since they'll be running the .exe, it can pop up a window and offer to save the file. If an email must be sent, it can be from the user's gmail rather than yours.\n\nSecondly, using your gmail account in this way may be against the terms of service. You could get your account suspended, and it may technically be a felony in the US. Consult a lawyer if this is a concern.\n\nTo your question, there's basically no way to obfuscate the password that will be more than a mild annoyance to anyone with the least interest. At the end of the day, (a) the script runs under the control of the user, potentially in a VM or a container, potentially with network communications captured; and (b) at some point it has to decrypt and send the password. 
Decoding and following either the script, or the network communications that it makes will be relatively straightforward for anyone who wants to put in quite modest effort.","Q_Score":1,"Tags":"python,html,download,exe,source-code-protection","A_Id":66673330,"CreationDate":"2021-03-17T12:08:00.000","Title":"Python exe - how can I restrict viewing source and byte code?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm making a simple project where I will have a downloadable scraper on an HTML website. The scraper is made in Python and is converted to a .exe file for downloading purposes. Inside the python code, however, I included a Google app password to an email account, because the scraper sends an email and I need the server to login with an available Google account. Whilst .exe files are hard to get source code for, I've seen that there are ways to do so, and I'm wondering, how could I make it so that anyone who has downloaded the scraper.exe file cannot see the email login details that I will be using to send them an email when the scraper needs to? If possible, maybe even block them from accessing any of the .exe source code or bytecode altogether? I'm using the Python libraries bs4 and requests.\nAdditionally, this is off-topic, however, as it is my first time developing a downloadable file, even whilst converting the Python file to a .exe file, my antivirus picked it up as a suspicious file. This is like a 50 line web scraper and obviously doesn't have any malicious code within it. 
How can I make the code be less suspicious to antivirus programs?","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":91,"Q_Id":66672822,"Users Score":1,"Answer":"Sadly, even today there is no perfect solution to this problem.\n\nThe ideal use case is to provide this secret_password from a web application, but in your case that seems unlikely since you are building a rather small desktop app.\nThe best and easiest way is to create a function providing this secret_password in a separate file, and compile this file with Cython, which will obfuscate your script (and your secret_password) to a fair extent. Will this protect you from, let's say, Anonymous or a state security agency? No. Here comes the reasonable thinking about how secret and important your password really is, and by whom you can mainly be harmed.\nFinally, before compiling you can 'salt' your script or further obfuscate it with bcrypt or other libraries.\n\nAs for your second question, antiviruses (and specifically Windows) don't like programs that run without installers and are unsigned.\nYou can use Inno Setup to create a real-life program installer.\nIf you want to deal with UAC or other issues related to unsigned programs, you can sign your program (this will cost money).","Q_Score":1,"Tags":"python,html,download,exe,source-code-protection","A_Id":66673053,"CreationDate":"2021-03-17T12:08:00.000","Title":"Python exe - how can I restrict viewing source and byte code?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm in a text2voice (Indonesian language) project. I installed g2p-seq2seq for text2phoneme; it contains some code from tf.contrib, so it only runs with tf1.\nRecently I got a new phoneme2voice model which only supports tf2.
Is there any way to make them run in one project?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":517,"Q_Id":66672932,"Users Score":0,"Answer":"This depends on your ultimate goal. If you want to be able to run the Tensorflow 2 model eagerly in the same python instance or tensorflow instance as a Tensorflow 1 model running in compatibility mode, you will be in a world of trouble. Once you turn on compatibility mode or turn off eager execution, you cannot turn it back on.\nI tried to do this for my own project. My quick fix was making a temporary untrainable copy of the model using the weights and biases, which have to be extracted and stored in some format (I suggest pickle files) that can be opened in the script which makes temporary models, without causing an instantiation of tensorflow 1 or tensorflow 2 running in compat mode.\nUltimately, I had to completely rebuild the Tensorflow 1.X model creation and training script in Tensorflow 2.\nIf you don't need to run them in the same exact script with eager execution for the Tensorflow 2 model, then it might... might work to just use compat mode. I know that hearing this sucks, but there really isn't much you can do if the conditions I stated apply to you.\nTLDR - It depends on exactly what you want or need, but the foolproof method is just completely rebuilding the model in Tensorflow 2.","Q_Score":0,"Tags":"python,tensorflow,pip,tensorflow2.0","A_Id":66752785,"CreationDate":"2021-03-17T12:16:00.000","Title":"Is there anyway to run tensorflow1 and tensorflow2 in one project?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have already installed the opencv-python library, but when I import it into the compiler it does not work.
I have also tried to install it using pip install opencv-python, but it says Requirement already satisfied, which means the library is installed but still does not work. I am using PyCharm Community Edition and Python 3.9.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":181,"Q_Id":66673284,"Users Score":0,"Answer":"Try installing this:\n\npip install opencv-python-headless","Q_Score":0,"Tags":"python,python-3.x,python-import,cv2,opencv-python","A_Id":67569808,"CreationDate":"2021-03-17T12:39:00.000","Title":"ImportError: DLL load failed while importing cv2: The specified module could not be found in pycharm with python 3.9","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Any version of Python >= 3.4 installed from the official python.org includes pip.\nBut I couldn't find what is the default version of pip that \"comes\" with Python 3.9?
and is there a difference if I install it on a system with an older python version, and an existing pip?\nI couldn't find an answer on google, and when I installed py3.9 on my pc, the pip3.9 is pip 9.0.1, which is the same as before, so I suspect that if a pip3 version already exists, it does not update it.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":2253,"Q_Id":66674659,"Users Score":0,"Answer":"As Axe319 commented:\nAccording to docs.python.org\/3\/library\/ensurepip.html the latest stable version is always bundled (today it is 21.0.1).","Q_Score":1,"Tags":"python,pip,python-3.9","A_Id":66794663,"CreationDate":"2021-03-17T14:01:00.000","Title":"What is the version of pip installed with python 3.9?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Any version of Python >= 3.4 installed from the official python.org includes pip.\nBut I couldn't find what is the default version of pip that \"comes\" with Python 3.9?
and is there a difference if I install it on a system with an older python version, and an existing pip?\nI couldn't find an answer on google, and when I installed py3.9 on my pc, the pip3.9 is pip 9.0.1, which is the same as before, so I suspect that if a pip3 version already exists, it does not update it.","AnswerCount":2,"Available Count":2,"Score":-0.0996679946,"is_accepted":false,"ViewCount":2253,"Q_Id":66674659,"Users Score":-1,"Answer":"pip3 -V or pip3 --version will give you the answer","Q_Score":1,"Tags":"python,pip,python-3.9","A_Id":66674885,"CreationDate":"2021-03-17T14:01:00.000","Title":"What is the version of pip installed with python 3.9?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Use case: I want to apply different masking policies based on the datatype of columns, e.g. if a column has a datatype of name then I want to apply the mask_name policy, and if the datatype is string then mask_string. How do I do that?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":166,"Q_Id":66674968,"Users Score":0,"Answer":"Here are the steps:\nStep 1: Create multiple masking policies as per your business requirements. Even though it is a one-time requirement, you have to revisit it if the business rules change.\nStep 2: Create a python program. Assign variables for each policy. Run a loop for each data type, create an ALTER statement, and assign it to a variable.
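The loop in Step 2 might look like the following sketch; the policy map, column list, and table name are illustrative assumptions (the statement built here uses Snowflake's ALTER TABLE ... SET MASKING POLICY form):

```python
# Sketch of Step 2: build one ALTER statement per column, keyed on datatype.
# The policies dict, the columns list, and MYSCHEMA.MYTABLE are illustrative.
policies = {"VARCHAR": "mask_string", "NUMBER": "mask_number"}

# (column name, datatype) pairs as they might come from INFORMATION_SCHEMA
columns = [("EMAIL", "VARCHAR"), ("SALARY", "NUMBER"), ("HIRED", "DATE")]

statements = []
for name, dtype in columns:
    policy = policies.get(dtype)
    if policy is None:
        continue  # no masking policy defined for this datatype
    statements.append(
        f"ALTER TABLE MYSCHEMA.MYTABLE MODIFY COLUMN {name} "
        f"SET MASKING POLICY {policy};"
    )
```

The resulting statements can then be executed one by one through a connector, or written to a consolidated file for SnowSQL, as the answer describes.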
Execute it in the Program or create a consolidated file for all alter statements and execute the file using SnowSQL.","Q_Score":0,"Tags":"python-3.x,snowflake-cloud-data-platform","A_Id":66681631,"CreationDate":"2021-03-17T14:18:00.000","Title":"How to mask all PII columns of all tables in a specific schema based on datatype of columns in Snowflake using python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I installed WinPython64-3.9.2.0 on my Windows 10 laptop.\nI tried to make a GUI with Qt Designer, but when I click on Form and then on View Python Code... I get the following warning message:\n\nUnable to launch\nC:WPy64-3910\\python-3.9.1.amd64\\lib\\site-packages\\pyqt_tools\\Qt\\bin\\bin\\uic:\nProcess failed to start: The system cannot find the file specified.\n\nI click on OK and I don't get the Python code of the GUI.\nPlease, can you help me?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":465,"Q_Id":66676015,"Users Score":0,"Answer":"The solution is to open the Start menu and click Settings, type \"Edit environment variables for your account\" and click on it. You will get a new window called \"Environmental Variables\". 
Click \"New\" under \"User variables for username\" and you will get another window.\nNext to \"variable name:\" write \"Path\" and next to \"Variable value:\" write\n\nC:\\WPy64-3910\\python-3.9.1.amd64 ; C:\\WPy64-3910\\python-3.9.1.amd64\\Scripts\n\nNote that I used a semicolon to separate the two paths.\nIn case you use LaTeX editors like Texmaker, remember to also add this after a semicolon:\n\nC:\\Users\\ username \\AppData\\Local\\Programs\\MiKTeX\\miktex\\bin\\x64\\\n\nOtherwise Texmaker won't work.","Q_Score":0,"Tags":"python,python-3.x,qt,pyqt5","A_Id":66706758,"CreationDate":"2021-03-17T15:17:00.000","Title":"When I try to view the Python code from Qt Designer, in WinPython, I get an error","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to create a PDF document with ReportLab. One of the texts is long, and ReportLab writes it on a single line; it does not generate the break. I inserted the text in parts, but I need to justify it so that it looks good.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":89,"Q_Id":66676393,"Users Score":0,"Answer":"You can wrap your text in reportlab.platypus.Paragraph to get auto wrapping","Q_Score":2,"Tags":"python,python-3.x,pdf,reportlab","A_Id":66677075,"CreationDate":"2021-03-17T15:39:00.000","Title":"How to justify text in ReportLab","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I installed Django not knowing I was supposed to create a virtual environment before installing it (I don't know if I am making sense), but now I can't activate my virtual environment?
Is there something that I can do to rectify this?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":761,"Q_Id":66678476,"Users Score":0,"Answer":"If Django is installed in the local environment, you can uninstall it using the pip command: $ pip uninstall .\nIf you want to create a virtual environment and install Django in it, you can use pipenv, as pipenv install django== , but for that you need to install pipenv, which you can install using pip: pip install pipenv","Q_Score":0,"Tags":"python,django,frameworks,virtualenv","A_Id":66678857,"CreationDate":"2021-03-17T17:43:00.000","Title":"How to activate virtual environment","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am trying to convert a directory of RGB images to a numpy array, but the resulting array is way bigger than the sum of the sizes of all the images put together.
What is going on here?","AnswerCount":2,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":399,"Q_Id":66682344,"Users Score":2,"Answer":"That's because image files are usually compressed, which means that the stored data will be smaller than the original file containing all the pixel data. When you open an image using PIL, for example, you'll get access to all RGB values of all pixels, so there's more data because it's uncompressed.","Q_Score":1,"Tags":"python,python-3.x,numpy","A_Id":66682833,"CreationDate":"2021-03-17T22:47:00.000","Title":"Numpy array bigger than the total size of images it is made up of","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to check if .ipynb files can be executed through the cmd line; I have looked at runipy and papermill. I am more specifically looking for exit code 0, but none of the packages mentioned above check if the code fails anywhere. Papermill still returns exit code 0 even after a python exception.
Are there any other packages that do this?\nI am looking at something like\nsome_pkg execute-notebook my_notebook.ipynb\nwhich could give me an exit code based on whether the entire code runs successfully or not.","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":175,"Q_Id":66682852,"Users Score":1,"Answer":"You can use the following CLI instruction from the cmd line:\njupyter nbconvert --execute your_notebook.ipynb\nFor that, you need the nbconvert package, which is installed with Jupyter (pip install nbconvert if it is missing).\nThis will give you an error message, and a nonzero exit code, if execution fails.","Q_Score":1,"Tags":"python,jupyter-notebook,jupyter,jupyter-lab,jupyter-console","A_Id":66682986,"CreationDate":"2021-03-17T23:45:00.000","Title":"Check if jupyter notebooks run without errors from cli","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm new to python and am not really sure how importing libraries works, but I understand that you are supposed to use \"from PIL import Image\" in order to import the Image module (is it even called a module???) and then use Image.open(\"filename\") in order to open the image, but I got a \"No module named 'PIL'\" error when I tried to import it from PIL. I then just did \"import Image\", which didn't raise any error flags, but I still can't use Image.open without getting an \"Undefined variable from import\" error.\nAny help is appreciated!","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":110,"Q_Id":66684713,"Users Score":0,"Answer":"In Python, you may sometimes need to install a library before using it. To install a library, go to the command prompt and type this command:\npip3 install Pillow\nYou will see some things install, and if you get no error message, you are good to go.
Now rerun your program.","Q_Score":0,"Tags":"python,import","A_Id":66684795,"CreationDate":"2021-03-18T04:17:00.000","Title":"\"Undefined variable from import\" error when trying to use Image","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I use an M1 Mac, but cv2 doesn't work on the M1 Mac.\nI need to use the code below.\nimport cv2\nimgs_omni = np.array([cv2.resize(plt.imread(dirs), dsize=(58, 33)) for dirs in imgs_omni_dir])\nHow can I replace cv2.resize with another library or code?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":167,"Q_Id":66686059,"Users Score":0,"Answer":"Please elaborate on why exactly cv2 doesn't work on your M1 Mac. Did you install the OpenCV packages? What platform are you using to implement the code? And what have you done to check why what you have tried is not working?\nThere are some replacements similar to cv2, though they may or may not help due to the vagueness of your question:\n\nnumpy.resize(), which is available after importing numpy.\nAnother one is the resize() function from skimage.\n\nBoth of the mentioned packages have well-elaborated documentation; please go ahead and read it.","Q_Score":0,"Tags":"python,replace,cv2,opencv-python,apple-m1","A_Id":66686292,"CreationDate":"2021-03-18T06:48:00.000","Title":"How to replace cv2.resize() in python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Today I opened my VS Code editor and saw that the Code Runner icon had disappeared even though it was installed. I could only see the default run code option.
Please help me solve it. I also tried uninstalling and reinstalling the extension, and did the same with VS Code, but the problem still persists! [the run option is of default vscode, not coderunner][1]","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":2046,"Q_Id":66686647,"Users Score":0,"Answer":"Check the Python ... programming language version and select the correct one if you are using several different versions.","Q_Score":0,"Tags":"python,visual-studio-code,vscode-code-runner,coderunner","A_Id":69559695,"CreationDate":"2021-03-18T07:39:00.000","Title":"Code Runner Icon not showing in Visual studio code","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Today I opened my VS Code editor and saw that the Code Runner icon had disappeared even though it was installed. I could only see the default run code option.
Please help me solve it. I also tried uninstalling and reinstalling the extension, and did the same with VS Code, but the problem still persists! [the run option is of default vscode not coderunner][1]","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":2046,"Q_Id":66686647,"Users Score":0,"Answer":"Go to the Extensions view, search for Code Runner, install it, and then it will appear.","Q_Score":0,"Tags":"python,visual-studio-code,vscode-code-runner,coderunner","A_Id":66686723,"CreationDate":"2021-03-18T07:39:00.000","Title":"Code Runner Icon not showing in Visual studio code","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"My task is to use a given user-defined function to create a dataset for a specific region.\nI've inserted my specific data in the function,\nbut only got this error:\nKeyError: \"[' magType ' ' nst' ' gap' ' dmin ' ' rms' ' place ' ' type '\\n ' horizontalError ' ' depthError ' ' magError ' ' magNst ' ' status '\\n ' locationSource ' ' magSource ' ' net' 'id' ' updated '] not found in axis\"\nHow do I solve this problem? Because when I look at my data, I have all this information (magType, etc.)","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":40,"Q_Id":66688051,"Users Score":0,"Answer":"The problem is that the columns of the pd.DataFrame you fetch from the url are:\n['time', 'latitude', 'longitude', 'depth', 'mag', 'magType', 'nst', 'gap', 'dmin', 'rms', 'net', 'id', 'updated', 'place', 'type', 'horizontalError', 'depthError', 'magError', 'magNst', 'status', 'locationSource', 'magSource']\nwhich do not match the column names you passed to the drop function. 
Reformat the column names [\" magType \",\" nst\",\" gap\",\" dmin \",\" rms\",\" place \",\" type \" ,\" horizontalError \",\" depthError \",\" magError \",\" magNst \",\" status \",\" locationSource \",\" magSource \",\" net\",\"id\",\" updated \"] in the drop function so that they exactly match the columns of your pd.DataFrame.\nPS: You might want to look into f-strings or .format; that will make your code look a lot cleaner.\nSecond PS: You also might want to avoid repeatedly concatenating data. As the data gets bigger, the process becomes remarkably slower. The better way to do it is to create a list (e.g. dfList), append the dataframes you fetch from the urls to it, and then use pd.concat(dfList). Cheers!","Q_Score":0,"Tags":"python","A_Id":66688216,"CreationDate":"2021-03-18T09:17:00.000","Title":"User-defined function to create a dataset","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am developing a GUI app that will supposedly be used by multiple users. In my app, I use QAbstractTableModel to display an MS Access database (stored on a local server, accessed by several PCs) in a QTableView. I developed everything I needed for single-user interaction, but now I'm moving to the step where I need to think about multi-user interaction.\nFor example, if user A changes a specific line, the instance of the app on user B's PC needs to update the changed line. 
Another example: if user A is modifying a specific line and user B also wants to modify it, user B needs to be notified that the line is \"already being modified, please wait\", and once user A's modification is done, user B needs to see this modification before any further interaction.\nToday, because of the local nature of the MS Access database, I have to update the table view many times, based on user interaction, in order not to miss any database modification from other potential users. It is quite greedy in terms of performance and resources.\nI was thinking about using Django to make the different app instances communicate with each other, but maybe I'm overthinking it and there are other solutions.\nI don't know if that's clear; I'm available for more information!","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":193,"Q_Id":66688187,"Users Score":0,"Answer":"Perhaps you could simply store a \"lastUpdated\" timestamp on the row. With each update, you update that timestamp.\nNow, when you submit an update, you include that timestamp, and if the timestamps don't match, you let the user know and handle the conflict on the frontend (perhaps a simple \"overwrite local copy, or force update and overwrite server copy\" option).\nThat's a simple and robust solution, but if you don't want users wasting time writing updates for old rows, you could use WebSockets to communicate from a server to any clients with that row open for editing, and let them know that the row has been updated.\nIf you want to \"lock\" rows while a row is already being edited, you could simply store an \"inUse\" boolean and have users check the value before continuing.","Q_Score":0,"Tags":"python,database,ms-access,pyqt5,multi-user","A_Id":66688336,"CreationDate":"2021-03-18T09:26:00.000","Title":"How to handle multi-user database interaction with PyQt5","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop 
Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I've seen around the web that .exe files built from Python scripts often run into errors because of some \"missing hooks\", which happens because pyinstaller wasn't able to track some modules while creating the .exe file. I'm currently using Python IDLE 3.61 and the script works fine without any error. The .exe actually runs but, for instance, it simply crashes when I try to plot a table, giving the error:\n\nNoModuleFoundError: 'No module=plotly.validators.table found'.\n\nBuilding the .exe via cx_Freeze as well, I ran into the same sort of problem:\n\nModule plotly.validators.table has no Attribute CellsValidators\n\nwhich confirmed to me that the problem is caused by issues with plotly.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":117,"Q_Id":66688285,"Users Score":0,"Answer":"Uninstall the plotly module.\nInstall an older version of the plotly module.\nNow try building the .exe file.\n\n(If it's still not working, try even older versions of plotly.)\nI hope it works.","Q_Score":0,"Tags":"python-3.x,cmd,pyinstaller,exe","A_Id":66867549,"CreationDate":"2021-03-18T09:33:00.000","Title":"Recurrent error when using pyinstaller & cx_Freeze","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"How can I load a dataset for person re-identification? In my dataset there are two folders, train and test.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":36,"Q_Id":66688472,"Users Score":0,"Answer":"Check out the csv module (import csv) or load your dataset via open(filename, \"r\") or so. 
It might be easiest if you provide more context\/info.","Q_Score":0,"Tags":"python-3.x,tensorflow2.0,spyder","A_Id":66752753,"CreationDate":"2021-03-18T09:45:00.000","Title":"How can I load my own dataset for person?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"How can I load a dataset for person reidentification. In my dataset there are two folders train and test.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":36,"Q_Id":66688472,"Users Score":0,"Answer":"I wish I could provide comments, but I cannot yet. Therefore, I will \"answer\" your question to the best of my ability.\nFirst, you should provide a general format or example content of the dataset. This would help me provide a less nebulous answer.\nSecond, from the nature of your question I am assuming that you are fairly new to python in general. Forgive me if I'm wrong in my assumption. With that assumption, depending on what kind of data you are trying to load (i.e. text, numbers, or a mixture of text and numbers) there are various ways to load the data. Some of the methods are easier than others. If you are strictly loading numbers, I suggest using numpy.loadtxt(). If you are using text, you could use the Pandas package, or if it's in a CSV file you could use the built-in (into Python that is) CSV package. Alternatively, if it's in a format that Tensorflow can read, you could use the provided load data functions.\nOnce you have loaded your data you will need to separate the data into the input and output values. 
Considering that Tensorflow models accept either lists or numpy arrays, you should be able to use these in your training and testing steps.","Q_Score":0,"Tags":"python-3.x,tensorflow2.0,spyder","A_Id":66752595,"CreationDate":"2021-03-18T09:45:00.000","Title":"How can I load my own dataset for person?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am in the process of bringing an API up from 2.7 to 3.8. It is built to communicate with an agent over a TCP socket. I am having an issue with a checksum not being calculated right. I get a message back from my agent that the Checksum is bad.\nI know it is not my agent because the CRC check on header only packets are being accepted and the commands are executed correctly on the agent side.\nThe line of code I am having an issue with is:\nbody += format(struct.pack(\"=L\", binascii.crc32(format(body).encode()) & 0xFFFFFFFF))\nPreviously on 2.7 this line of code had no encoding\/formatting.\nDoes anyone know what I am doing wrong?\nI have a strong feeling it pertains to the encoding of 'body string'. After breaking the line of code down to its components I confirmed that the int output of binascii.crc32() is different between 3.8 and 2.7 and doing a bit of reading on the variety of byte\/char types there are I have become quite lost.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":197,"Q_Id":66689493,"Users Score":0,"Answer":"so the correct line of code is to append the checksum and body to the packet buffer simultaneously rather than add the checksum to the body and then add the body to the packet buffer. 
this avoids a decoding stage that was causing an issue.\nbuf = buf + format(body).encode() + struct.pack(\"=L\", binascii.crc32(format(body).encode()) & 0xFFFFFFFF)\nthe original would output:\n'{\"datasetName\": \"X_train\", \"fileName\": \"\/test\/MNIST\/X_train_uint8.h5\"}b\\'y\\\\xf8D\\\\xec\\''\nthe correct solution seen in this answer outputs:\n'{\"datasetName\": \"X_train\", \"fileName\": \"\/test\/MNIST\/X_train_uint8.h5\"}y\\xf8D\\xec'","Q_Score":1,"Tags":"python,crc32,binascii","A_Id":66698964,"CreationDate":"2021-03-18T10:45:00.000","Title":"Calculating CRC32 checksum on string","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"When I run \"WinPython Command Prompt.exe\", the working directory defaults to the scripts directory of the installation.\nCreating a shortcut to run the exe with a specific working directory does not seem to have an effect.\nIs it possible to have the directory after running \"WinPython Command Prompt.exe\" be something other than the scripts directory?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":358,"Q_Id":66693316,"Users Score":0,"Answer":"on latest version, drag&drop your directory over the icon, it should make your day.","Q_Score":0,"Tags":"python,windows","A_Id":66723382,"CreationDate":"2021-03-18T14:38:00.000","Title":"Is it possible to change default directory for WinPython Command Prompt.exe?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"Basically I have two models to run in sequence. 
However, the first is an object-based model trained in TF2, while the second was trained in TF1.x and saved as a name-based ckpt.\nThe fundamental conflict here is that in tf.compat.v1 mode I have to call disable_eager_execution to run the model, while the other model needs eager execution (otherwise it is ~2.5 times slower).\nI tried to find a way to convert the TF1 ckpt to an object-based TF2 model, but I don't think there is an easy way... maybe I have to rebuild the model and copy the weights variable by variable (nightmare).\nSo does anyone know if there's a way to just temporarily turn off eager execution? That would solve everything here... I would really appreciate it!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":410,"Q_Id":66695555,"Users Score":1,"Answer":"I regretfully have to inform you that, in my experience, this is not possible. I had the same issue. I believe the tensorflow documentation actually states that once it is turned off it stays off for the remainder of the session. You cannot turn it back on even if you try. This is a problem anytime you turn off eager execution, and the status will remain as long as the Tensorflow module is loaded in a particular python instance.\nMy suggestion for transferring the model weights and biases is to dump them to a pickle file as numpy arrays. I know it's possible because the Tensorflow 1.X model I was using did this in its code (I didn't write that model). I ended up loading that pickle file and reconstructing a new Tensorflow 2.X model via a for loop. This works well for sequential models. If any branching occurs, the looping method won't work super well, or it will be hard to implement successfully.\nAs a heads up, unless you want to train the model further, the best way to initialize those weights is to use tf.constant_initializer (or something along those lines). 
When I did convert the model to Tensorflow 2.X, I ended up creating a custom initializer, but apparently you can just use a regular initializer and then set weights and biases via model or layer attributes or functions.\nI ultimately had to convert the Tensorflow 1.x + compat code to Tensorflow 2.X, so I could train the models natively in Tensorflow 2.X.\nI wish I could offer better news and information, but this was my experience and solution to the same problem.","Q_Score":0,"Tags":"python,tensorflow,computer-vision,tensorflow2.0","A_Id":66750043,"CreationDate":"2021-03-18T16:53:00.000","Title":"How to temporarily turn off\/on eager_execution in TF2.x?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"How can I efficiently transfer newly arrived documents from Azure CosmosDb with MongoDb api to Postgres at regular intervals?\nI am thinking of using a python script to query MongoDB based on timedate, but I am open to other suggestions.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":41,"Q_Id":66695615,"Users Score":0,"Answer":"You can use Azure Data Factory to achieve this. Use Azure Cosmos DB MongoDB API as source and Postgres as sink. Then use watermark to record your last modify time. 
Finally, create a schedule trigger to execute it.","Q_Score":0,"Tags":"python,mongodb,postgresql,azure","A_Id":66965222,"CreationDate":"2021-03-18T16:57:00.000","Title":"Using python I need to transfer documents from Azure CosmosDB with MongoDB api to Postgres on a daily basis probably using Azure functions","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"As the title suggests, I want to shut down the PC without using modules such as os, popen, or subprocess to invoke system commands. I have searched a lot, but all the answers were using the os module to invoke system commands. I want a pure Python way of doing this, and also OS-independent. Any help would be greatly appreciated!","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":63,"Q_Id":66697404,"Users Score":1,"Answer":"This operation will always involve operating system calls, because you are asking for an operating system action. A pure Python module that did what you ask would in any case use the things you want to avoid. So yes, there is a way to do it with 'pure Python', but you need to write the code for your case, as I don't think any such library exists by now (due to the complexity of covering all cases for all actions).\nThe solution is pretty straightforward:\n\nDetermine which OS you are on with the platform module (platform.system(), platform.release(), platform.version()).\nWrite the OS system calls for each platform.","Q_Score":0,"Tags":"python,python-3.x","A_Id":66697612,"CreationDate":"2021-03-18T18:56:00.000","Title":"How to shutdown pc in python without using system commands","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"As the title 
suggests, I want to shut down the PC without using modules such as os, popen, or subprocess to invoke system commands. I have searched a lot, but all the answers were using the os module to invoke system commands. I want a pure Python way of doing this, and also OS-independent. Any help would be greatly appreciated!","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":63,"Q_Id":66697404,"Users Score":1,"Answer":"Much of the code that you write is actually being managed by the Operating System, and doesn't run independently from it. What you are trying to accomplish is a protected action, and needs to be invoked by the OS API.\nIf I understand correctly, programming languages like Python can't usually work directly with a computer's hardware, and even much lower-level programming languages like C require use of the Operating System's APIs to take such action.\nThat's why most of the solutions you've found depend on the os package: Python doesn't have the ability to do it natively; it needs to make use of the aforementioned OS API.\nThis is a feature, not a bug, and helps keep programs from gaining access or visibility into other processes and protected operations.","Q_Score":0,"Tags":"python,python-3.x","A_Id":66697581,"CreationDate":"2021-03-18T18:56:00.000","Title":"How to shutdown pc in python without using system commands","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am using a Bing Maps key in the geopy.geocoder.Bing package to get latitude and longitude of addresses in a dataset. It has been working just fine for the last 2 months, and then suddenly today I am getting this GeocoderAuthenticationFailure: Non-successful status code 401 error. I am unsure if it is a problem with the geopy function itself or if there is something broken with the API. 
I just looked at the key in my Bing Maps account and everything looks fine. I created a new key to see if that would work but it gave me the same error. Does anyone know if this is an error with the API or if there is something I can do to fix this?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":135,"Q_Id":66698879,"Users Score":0,"Answer":"The error code came from a typo when calling the apikey variable from my env file. After fixing the typo it ran just fine.","Q_Score":0,"Tags":"python,api,bing-maps,geopy,geocoder","A_Id":66699490,"CreationDate":"2021-03-18T20:46:00.000","Title":"GeocoderAuthenticationFailure: Non-successful status code 401","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm trying to get Google Cloud SDK working on my Windows 10 desktop, but when I use the SDK shell (which, as I understand it, is just command line but with the directory changed to where Cloud SDK is installed), running 'gcloud init' returns the following:\n'\"\"C:\\Program' is not recognized as an internal or external command,\noperable program or batch file.\nPython was not found; run without arguments to install from the Microsoft Store, or disable this shortcut from Settings > Manage App Execution Aliases.\n'\"\"C:\\Program' is not recognized as an internal or external command,\noperable program or batch file.\nPython was not found; run without arguments to install from the Microsoft Store, or disable this shortcut from Settings > Manage App Execution Aliases.\nPython was not found; run without arguments to install from the Microsoft Store, or disable this shortcut from Settings > Manage App Execution Aliases.\nIt then finishes the configuration and tells me 'Your Google Cloud SDK is configured and ready to use!' 
However, whenever I run any other commands, I get the same error popup again before it continues doing whatever the command does. I believe Python is installed correctly and added to Path, and when I call python from the same command line, same directory as my 'gcloud init' call, it functions as expected and opens a python console. Any ideas at what the problem might be? (or if it will even affect anything?)","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":304,"Q_Id":66700328,"Users Score":0,"Answer":"Go to -> \"start\" and type \"Manage App Execution Aliases\". Go to it and turn off \"Python\"","Q_Score":0,"Tags":"python,google-cloud-platform,gcloud","A_Id":71111826,"CreationDate":"2021-03-18T23:03:00.000","Title":"Windows 'gcloud init' returns C:\\Program is not recognized, Python was not found (but Python works on cmd)","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am trying to incorporate gravity in a simulation involving cohesive surfaces. The geometry changes when I turn gravity on and I'm curious if I can \"Tie\" the surfaces together and allow gravity to be applied to the model. Once everything settles (i.e. kinetic energy is at a minimum) I can suppress the tie interface and look at cohesive surfaces between the two bodies of interest?\nI'm wondering if I can have a Gravity step with a tied constraint followed by a step that suppresses that tie constraint.\nThanks!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":339,"Q_Id":66702156,"Users Score":1,"Answer":"No, it is not possible. However, you can change the contact definition during the simulation. 
So, for example, you can use hard contact with no separation at the beginning of the analysis and afterwards change it to the one you need.","Q_Score":1,"Tags":"python,abaqus","A_Id":66705870,"CreationDate":"2021-03-19T03:30:00.000","Title":"Is it possible to suppress an Abaqus \"Tie Constraint\" on a different step of a simulation?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm working on a school assignment that asks me to:\n1) Take my previously created dictionary and write it to a file as a string.\n2) Then import that dictionary into Python again and invert it.\n3) Write the inverted dictionary to a new file.\nI asked a question about this program previously and got a great answer, but I'm still having a rough time getting past an error. When I try running the program I get a TypeError: unhashable type: 'list'. I am assuming it has to do with the invert function.\nWhen I run that function in a separate script using the ChessPlayerProfile dictionary it seems to work fine. But when I try to use it in this script, through the dictionary returned from the Read_Invert_Write function's output, I get the error.\nAny ideas what I am doing wrong? Any help would be appreciated. 
Here is the code and output:\n\n\n import os\n import json\n \n ChessPlayerProfile = {\n \"Matt\": [(\"Rating: \", 2200), (\"FIDE ID: 0147632DF\"), (\"Member Status: \", True)],\n \"Will\": [(\"Rating: \", 2200), (\"FIDE ID: 3650298MK\"), (\"Member Status: \", False)],\n \"Jithu\": [(\"Rating: \", 1900), (\"FIDE ID: 5957200LH\"), (\"Member Status: \", True)],\n \"Lisa\": [(\"Rating: \", 2300), (\"FIDE ID: 7719328CX\"), (\"Member Status: \", False)],\n \"Nelson\": [(\"Rating: \", 2500), (\"FIDE ID: 6499012XX\"), (\"Member Status: \", True)],\n \"Miles\": [(\"Rating: \", 1600), (\"FIDE ID: 4392251TJ\"), (\"Member Status: \", True)],\n }\n \n \n def write2file():\n with open(\"chessdict.txt\", \"w+\") as f:\n json.dump(ChessPlayerProfile, f)\n \n \n \n def Read_Invert_Write():\n with open(\"chessdict.txt\", \"r\") as f:\n Content = (json.load(f))\n Content = dict(Content)\n invert(Content)\n with open(\"newoutput.txt\", \"w+\") as f:\n json.dump(Content, f)\n \n \n \n def invert(d):\n inverse = dict()\n for key in d:\n val = d[key]\n for item in val:\n if item not in inverse:\n inverse[item] = [key]\n else:\n inverse[item].append(key)\n print(inverse)\n return inverse\n \n \n def main():\n write2file()\n Read_Invert_Write()\n main()\n\n\nOutput:\n\n\n Traceback (most recent call last):\n File \"\/home\/vigz\/PycharmProjects\/pythonProject\/copytest2.py\", line 46, in \n main()\n File \"\/home\/vigz\/PycharmProjects\/pythonProject\/copytest2.py\", line 45, in main\n Read_Invert_Write()\n File \"\/home\/vigz\/PycharmProjects\/pythonProject\/copytest2.py\", line 24, in Read_Invert_Write\n invert(Content)\n File \"\/home\/vigz\/PycharmProjects\/pythonProject\/copytest2.py\", line 35, in invert\n if item not in inverse:\n TypeError: unhashable type: 'list'","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":216,"Q_Id":66703039,"Users Score":0,"Answer":"JSON doesn't have separate list or tuple types. It was based on Javascript, not Python. 
JSON has a single sequence type, JSON arrays, and the Python json module serializes both lists and tuples as JSON arrays.\nWhen you load the serialized JSON, JSON arrays get deserialized as Python lists. That means the values of your new dict are now lists of lists instead of lists of tuples. When you run your for item in val loop, item is a list instead of a tuple, and if item not in inverse is an attempt to perform a dict lookup with an unhashable key.\nUnrelated, if you meant for (\"FIDE ID: 0147632DF\") to be a tuple, you need a trailing comma: (\"FIDE ID: 0147632DF\",). Alternatively, it would probably make more sense to use a 2-element tuple, following the format of the other tuples: (\"FIDE ID: \", \"0147632DF\"). Or, instead of a list of 2-element tuples, a dict might be a more convenient data representation.","Q_Score":0,"Tags":"python,dictionary,hash","A_Id":66703093,"CreationDate":"2021-03-19T05:40:00.000","Title":"Unhashable type: List error in dictionary invert function","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have Python code to search in PDF documents. When I run the .py file, it executes fine.\nHowever, I have made an executable file with pyinstaller, and this exe file is taking too long to open - almost 10 minutes.\nWhat could be the reason?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":404,"Q_Id":66704253,"Users Score":0,"Answer":"10 minutes is very, very long. But if you build a single executable with pyinstaller, it will uncompress itself into a temp folder every time it starts. That folder will contain the required libraries, a modified version of the Python interpreter, and your own code in pre-compiled form. 
If you use a slow disk unit or have little space on it, it will indeed take time.\nA possible way to speed up the start time is to have pyinstaller build a folder. This will save the initial uncompressing time.","Q_Score":0,"Tags":"python,pyinstaller,executable","A_Id":66704352,"CreationDate":"2021-03-19T07:44:00.000","Title":"Python Executable taking too much time","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using ubuntu 20.04. I created a new environment.\nconda create -n tfgpu python=3.8\nconda activate tfgpu\npip install tensorflow-gpu==2.3\njupyter notebook\nThen I open a previously created .ipynb file and I try to import tensorflow.\nimport tensorflow as tf\ntf.version\nversion is coming out to be 2.4.1\nI had an installation of tensorflow 2.4.1 in my base environment. Even if it is unrelated, I uninstalled that. I checked for tf 2.4.1 in other environment too but could not find any. When I write !pip uninstall tensorflow in that notebook, it says Skipping as no tensorflow installed. When I write !pip uninstall tensorflow-gpu, it uninstalled tensorflow==2.3. And after that it was still able to import tensorflow which was 2.4.1 version. I do not understand what is happening. I can say that I once also installed tf-nightly but was in a different environment. I am thinking it is something related to path of installation or does it have anything to do with environment name. It is really annoying, any help is appreciated. Thank you.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":142,"Q_Id":66705579,"Users Score":0,"Answer":"I missed installing jupyter notebook. I do not yet understand what was happening, but my mistake was that. 
The problem is resolved.","Q_Score":0,"Tags":"python,tensorflow,jupyter-notebook,version","A_Id":66705902,"CreationDate":"2021-03-19T09:28:00.000","Title":"Jupyter environment error, loading different tensorflow version than installed","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"How do I modify a np array based on the current value and the index? When I just want to modify certain values, I use e.g. arr[arr>target_value]=0.5 but how do I only modify the values of arr > target_value where also the index is greater than a certain value?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":74,"Q_Id":66706687,"Users Score":0,"Answer":"For that very specific example you would just use indexing I believe\neg arr[100:][arr[100:] > target_value]=0.5\nin general it could be conceptually easier to do these two things separately. First figure out which indices you want, then check whether they satisfy whatever condition you want.","Q_Score":0,"Tags":"python,arrays,numpy","A_Id":66706708,"CreationDate":"2021-03-19T10:45:00.000","Title":"Modify values in numpy array based on index and value","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to do a connection between a computer simulating being a server and another computer being a user, both with Linux.\nIn the first computer I've created a directory called \"server\" and in that directory I've done the following command:\npython3 -m http.server 8080\nThen I can see that directory going to the localhost. 
But what I want is to see that localhost from the other computer, I tried with wget, and the gnome system of sharing files but none of them worked, and I'm not seeing any solution online.","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":968,"Q_Id":66708739,"Users Score":0,"Answer":"I'm not sure I fully understand your question but if you want to reach your folder from an other computer with your command python3, you can use the option -b followed by an IP address used on your linux.","Q_Score":0,"Tags":"python,html,linux,networking,server","A_Id":66708913,"CreationDate":"2021-03-19T13:03:00.000","Title":"How to connect to a Localhost in linux","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I want to do a connection between a computer simulating being a server and another computer being a user, both with Linux.\nIn the first computer I've created a directory called \"server\" and in that directory I've done the following command:\npython3 -m http.server 8080\nThen I can see that directory going to the localhost. 
But what I want is to see that localhost from the other computer, I tried with wget, and the gnome system of sharing files but none of them worked, and I'm not seeing any solution online.","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":968,"Q_Id":66708739,"Users Score":0,"Answer":"If I'm understanding your question correctly you want to connect to the server from another machine on the same network.\nIf so you can run hostname -I on the server to output to the local IP address of the server, you can then use this on the other machine to connect to it (provided they are on the same network)","Q_Score":0,"Tags":"python,html,linux,networking,server","A_Id":66709470,"CreationDate":"2021-03-19T13:03:00.000","Title":"How to connect to a Localhost in linux","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am trying to run a Pygame in windows 10 Powershell, I have tried using the python programName.py command but it just returns an error telling me that no such file or directory exists, specifically, it says C:\\Program Files (x86)\\Python36-32\\python.exe: can't open file 'spaceshooter.py': [Errno 2] No such file or directory . 
I don't have admin rights on this computer and I cannot use CMD either.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":55,"Q_Id":66712107,"Users Score":0,"Answer":"The problem is that your computer is not able to find the file.\nThe solution is to show the computer where it is located (the directory):\ngo to the folder where you have saved your program and copy the directory path,\nthen open PowerShell,\nthen type cd (paste the copied directory),\nthen type py programName.py or python programName.py.\nNow it should work \u270c","Q_Score":0,"Tags":"python,powershell,windows-10","A_Id":66712307,"CreationDate":"2021-03-19T16:34:00.000","Title":"Running python script in powershell","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to create a bot for a game, so I need to capture the screen; however, PyAutoGUI's screenshot function is too slow for a game. How can I capture the screen directly without using the module? I tried looking for an answer, but the only things I found were answers for Mac or Windows. I know there are many ways to capture an X window, but which of them are actually fast enough to get at least 30 FPS?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":119,"Q_Id":66713294,"Users Score":1,"Answer":"None of them. A full HD screen is about 8 MB. GPUs are optimized for getting data INTO memory, not getting data OUT of memory. The read path is always lower priority.
When you add the overhead of Python, you are never going to get 30 FPS.","Q_Score":0,"Tags":"python,linux,opencv,pyautogui,xorg","A_Id":66713604,"CreationDate":"2021-03-19T17:53:00.000","Title":"How can I capture the screen fast using Pyautogui in Linux","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"For AWS Lambda handlers, does it matter if the code is async vs sync (eg async def vs def), with respect to the concurrency performance of multiple calls to that Lambda function? That is, if I write a synchronous function, will that function be called \"asynchronously\" anyway due to the way AWS internally handles Lambda invocations from separate calls? Or will the function start processing UserA, and queue UserB's request until it has completed UserA?\nBackstory: I'm writing some Python + SQLAlchemy Lambda functions. SQLAlchemy 1.4 just released asyncio support, but there will be some re-write overhead on my part. I'm wondering if I can just write traditional synchronous Python code without concurrency consideration on Lambda (where concurrency is something I'd definitely need to consider for custom Flask \/ FastAPI code).","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":417,"Q_Id":66714861,"Users Score":1,"Answer":"If you have more than one request to invoke a lambda at the same time the service will attempt to run multiple instances of the lambda. You may run into throttling due to the concurrency limits on either the lambda itself, or the account. Lambda does not queue requests to the same lambda function. A function is never processing more than one request at a time. Even in cases where the system is providing a queuing mechanism to handle the requests, they are not a thread queue, like you might have with a server. 
They are really just internal SQS queues that are invoking the lambda with one request at a time.","Q_Score":0,"Tags":"aws-lambda,python-asyncio","A_Id":66715034,"CreationDate":"2021-03-19T19:57:00.000","Title":"AWS Lambda: does it matter if handlers are sync vs async, w.r.t. concurrent calls?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":1},{"Question":"I accidentally stopped the Anaconda uninstall midway. Now the prompt is gone and Uninstall.exe is also gone, and I can't seem to find a way to uninstall Anaconda. If someone knows how I can uninstall Anaconda so I can reinstall it cleanly, please help me.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":64,"Q_Id":66716516,"Users Score":0,"Answer":"I figured it out: I reinstalled Anaconda in a different folder and ran anaconda-clean from the conda terminal. Then I deleted the old folder and it was all good (so far at least).","Q_Score":0,"Tags":"python,anaconda,conda,uninstallation,anaconda3","A_Id":66727706,"CreationDate":"2021-03-19T22:49:00.000","Title":"Accidentally Anaconda uninstall cut short","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am trying to deploy\/publish an Azure Function through VS Code. The programming language used is Python 3.7.\nI am obviously not able to publish this because the resource group I am using allows Operating system: Linux.
It looks like VS Code tries publishing it for the Windows OS by default.\nHence, while publishing, I do not get an option to choose the OS I want to publish on.\nHowever, if I use Visual Studio, I have the option to choose the OS while publishing, but it does not support Python.\nWhat am I missing?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":152,"Q_Id":66717239,"Users Score":0,"Answer":"First, I think Python 3.7 is not supported on the Windows OS on Azure. When you try to deploy your Python 3.7 function app in VS Code, it will deploy it to a Linux function app.\nSecond, if your VS Code has some problem, you can first create a Linux function app for Python on Azure, then use the command below to deploy your function app:\nfunc azure functionapp publish ","Q_Score":0,"Tags":"visual-studio,visual-studio-code,operating-system,azure-functions,python-3.7","A_Id":66739670,"CreationDate":"2021-03-20T00:44:00.000","Title":"Azure Function with Visual Studio Code - No option to choose OS while publishing (Python 3.7)","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am trying to create a python 3 program with tkinter that will listen to my keyboard input even while the window is not \"focused,\" as in I would have a separate window open, say a browser and press the tab key, which would then put the window into focus. Is there any way that I could go about doing this?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":78,"Q_Id":66717592,"Users Score":1,"Answer":"Tkinter has no support for this.
Tkinter can only respond to events when it has focus.","Q_Score":0,"Tags":"python,python-3.x,tkinter","A_Id":66718224,"CreationDate":"2021-03-20T02:03:00.000","Title":"Python 3 Tkinter: How do you listen to user inputs when the window is not in focus?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Previous version of code wrote fine with Python 2.7 to AWS MySQL Version 8 with the following:\n\n\n\"\"\"INSERT INTO test_data(test_instance_testid,\nmeas_time,\ndata_type_name,\nvalue,\ncorner_case,\nxmit,\nstring_value)\nVALUES('15063', '2021-03-19 20:36:00', 'DL_chamber_temp', '23.4',\n'None', 'None', 'None')\"\"\"\n\nBut now, porting to Python 3.7 to the same server I get this:\n\npymysql.err.InternalError: (1366, \"Incorrect integer value: 'None' for column 'xmit' at row 1\")\n\nThis makes sense since it is a str value 'None' and not Python type None (although it used to work).\nIt is legal to fill these columns as NULL values--that is their default in the test_data table.\nIf I change the code and set the values to Python None, I get a different error which I don't understand at all:\n\n\"\"\"INSERT INTO test_data(test_instance_testid,\nmeas_time,\ndata_type_name,\nvalue,\ncorner_case,\nxmit,\nstring_value)\nVALUES('15063', '2021-03-19 20:36:00', 'DL_chamber_temp', '23.4',\nNone, None, None)\"\"\"\n\n\npymysql.err.InternalError: (1054, \"Unknown column 'None' in 'field list'\")\n\nI really appreciate any help or suggestions.\nThanks, Mike\nThanks for the help! Yes, NULL does work, but I'm stuck on how to handle value types on the fly within my method. Depending on the call I need to write a quoted value in one case and non-quoted NULL on others. For some reason my old code (illogically!) worked. 
I've tried everything I can think of without any luck.\nAny thoughts on how to do this?","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":788,"Q_Id":66718431,"Users Score":0,"Answer":"Set the column's default value to NULL in phpMyAdmin.","Q_Score":0,"Tags":"python,mysql","A_Id":66739265,"CreationDate":"2021-03-20T05:09:00.000","Title":"MySQL Version 8 Python 3.7 Cannot write None value as NULL","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"when I use browser.find_element_by_css_selector(\".LkLjZd ScJHi OzU4dc \").click(),\nMessage: no such element: Unable to locate element: {\"method\":\"css selector\",\"selector\":\".LkLjZd ScJHi OzU4dc \"}\nThis error occurs despite of element is that