Dataset columns (name: dtype, observed range or string length):
GUI and Desktop Applications: int64, 0 to 1
A_Id: int64, 5.3k to 72.5M
Networking and APIs: int64, 0 to 1
Python Basics and Environment: int64, 0 to 1
Other: int64, 0 to 1
Database and SQL: int64, 0 to 1
Available Count: int64, 1 to 13
is_accepted: bool, 2 classes
Q_Score: int64, 0 to 1.72k
CreationDate: string, length 23 to 23
Users Score: int64, -11 to 327
AnswerCount: int64, 1 to 31
System Administration and DevOps: int64, 0 to 1
Title: string, length 15 to 149
Q_Id: int64, 5.14k to 60M
Score: float64, -1 to 1.2
Tags: string, length 6 to 90
Answer: string, length 18 to 5.54k
Question: string, length 49 to 9.42k
Web Development: int64, 0 to 1
Data Science and Machine Learning: int64, 1 to 1
ViewCount: int64, 7 to 3.27M

A_Id: 61,500,169 | Q_Id: 59,908,131 | CreationDate: 2020-01-25T09:49:00.000 | is_accepted: false
Score: 0 | Q_Score: 0 | Users Score: 0 | AnswerCount: 4 | ViewCount: 1,448 | Available Count: 3
Title: Thonny: installing TensorFlow and importing it
Tags: python,tensorflow,machine-learning,data-science,thonny
Topics: Data Science and Machine Learning
Question: I am having trouble installing and importing tensorflow. I can't install it via Thonny's manage-packages option nor via the Windows command window. I get the same error both ways: "ERROR: Could not find a version that satisfies the requirement tensorflow (from versions: none). ERROR: No matching distribution found for tensorflow." I tried switching back to Python 3.6 but the issue still arises. This is frustrating because I cannot implement machine learning, which is something I am strongly passionate about. Any reasons or solutions would be appreciated.
Answer: I use Thonny, and the way to install it is Tools >> Open system shell, then type "pip3.6 install --upgrade TensorFlow".
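
The "No matching distribution found" error usually means pip is running under an interpreter for which no TensorFlow wheel exists (for example a 32-bit build or an unsupported Python version). A minimal diagnostic sketch, not part of the original answer, to run inside the interpreter Thonny is using:

```python
import struct
import sys

# TensorFlow ships wheels only for 64-bit CPython in a supported version range,
# so check which interpreter pip would actually install into.
print("Interpreter:", sys.executable)
print("Python version:", sys.version)
print("Bits:", struct.calcsize("P") * 8)  # 64 is required for TensorFlow wheels
```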

A_Id: 67,617,331 | Q_Id: 59,908,131 | CreationDate: 2020-01-25T09:49:00.000 | is_accepted: false
Score: 0 | Q_Score: 0 | Users Score: 0 | AnswerCount: 4 | ViewCount: 1,448 | Available Count: 3
Title: Thonny: installing TensorFlow and importing it
Tags: python,tensorflow,machine-learning,data-science,thonny
Topics: Data Science and Machine Learning
Question: I am having trouble installing and importing tensorflow. I can't install it via Thonny's manage-packages option nor via the Windows command window. I get the same error both ways: "ERROR: Could not find a version that satisfies the requirement tensorflow (from versions: none). ERROR: No matching distribution found for tensorflow." I tried switching back to Python 3.6 but the issue still arises. This is frustrating because I cannot implement machine learning, which is something I am strongly passionate about. Any reasons or solutions would be appreciated.
Answer: Here is how I got TensorFlow version 2.5.0 to install and import using Python version 3.6.8, with Thonny version 3.3.7, on a Windows 10 laptop. When I installed Thonny, the default Python interpreter was set to Python 3.7.9, and I needed to change that, as follows. In Thonny's menus, go to Tools / Options / Interpreter and use the pull-down selections to find the Python 3.6.8 that must have been previously installed on your machine; on my machine it is located at C:\Users\XXXX\AppData\Local\Programs\Python\Python36\python.exe. Hit OK and Thonny will show in the lower shell that it is now using Python 3.6.8. Then install TensorFlow using the Thonny menu Tools / Manage packages; I installed TensorFlow version 2.5.0 without any problems. Then, in the interactive shell, I tested it: Python 3.6.8 (C:\Users\con_o\AppData\Local\Programs\Python\Python36\python.exe) >>> import tensorflow 2021-05-20 09:06:01.231885: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'cudart64_110.dll'; dlerror: cudart64_110.dll not found 2021-05-20 09:06:01.258437: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine. >>> print(tensorflow.version) <module 'tensorflow._api.v2.version' from 'C:\Users\XXX\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\_api\v2\version\__init__.py'> >>> print(tensorflow.__version__) 2.5.0 Note: the last command, print(tensorflow.__version__), prints the version string, while print(tensorflow.version) only shows the version module. Note: I could only install version 2.5.0 of TensorFlow; Thonny could not install any earlier version. I hope this helps the many folk struggling with the very difficult TensorFlow installations.

A_Id: 59,921,768 | Q_Id: 59,914,993 | CreationDate: 2020-01-26T02:03:00.000 | is_accepted: false
Score: 0 | Q_Score: 0 | Users Score: 0 | AnswerCount: 2 | ViewCount: 384 | Available Count: 1
Title: Is there a way of setting a default precision that differs from double in Python?
Tags: python,scipy,precision
Topics: Python Basics and Environment; Data Science and Machine Learning
Question: I'm aware of Decimal; however, I am working with a lot of code written by someone else, and I don't want to go through a large amount of code to change every initialization of a floating-point number to Decimal. It would be more convenient if there were some kind of package where I could put SetPrecision(128) or such at the top of my scripts and be off to the races. I suspect no such thing exists, but I figured I would ask just in case I'm wrong. To head off XY Problem comments: I'm solving differential equations which are supposed to be positive invariant, and one quantity which has an equilibrium on the order of 1e-12 goes negative regardless of the error tolerance I specify (using scipy's interface to LSODA).
Answer: LSODA, as exposed through scipy.integrate, is double precision only. You might want to look into rescaling your variables, so that the quantity which is around 1e-12 becomes closer to unity. EDIT: In the comments you indicated, "As I've stated three times, I am open to rewriting to avoid LSODA." Then what you can try is to look over the code of solve_ivp, which is pure Python: feed it decimals or mpmath high-precision floats, observe where it fails, look for where it assumes double precision, rewrite to remove that assumption, rinse and repeat. Whether it'll work in the end, I don't know. Whether it's worth it, I suspect not, but YMMV.
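
To illustrate the rescaling idea from the answer, here is a minimal sketch on a hypothetical one-variable decay system (not the asker's model): the small state is solved in units of 1e-12 so the integrator works with numbers near unity.

```python
import numpy as np
from scipy.integrate import solve_ivp

SCALE = 1e-12  # work in units of 1e-12 so the state stays near unity

def rhs_scaled(t, z):
    # z = y / SCALE for a toy linear decay toward an equilibrium of 1e-12
    y = z * SCALE
    dydt = -(y - 1e-12)      # original dynamics, equilibrium at 1e-12
    return dydt / SCALE      # chain rule: dz/dt = dy/dt / SCALE

sol = solve_ivp(rhs_scaled, (0.0, 10.0), y0=[5.0], method="LSODA",
                rtol=1e-10, atol=1e-12)
print(sol.y[0, -1] * SCALE)  # convert back to original units (about 1e-12)
```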

A_Id: 60,014,615 | Q_Id: 59,915,542 | CreationDate: 2020-01-26T04:25:00.000 | is_accepted: false
Score: 0 | Q_Score: 0 | Users Score: 0 | AnswerCount: 1 | ViewCount: 17 | Available Count: 1
Title: What separator to use when creating a CSV of unknown contents
Tags: excel,python-3.x,csv,ocr
Topics: Python Basics and Environment; Data Science and Machine Learning
Question: I'm trying to make a function that takes a picture of a table and converts it into a CSV. The picture of the table could contain commas if it were, say, a bank statement, i.e. $3,000. So what separator should I use when creating an arbitrary CSV document, since the standard comma separator would be confused with the commas in the contents of the CSV when Excel tries to read it?
Answer: As Tim already stated, a correct CSV file should quote any field that contains characters used as field separators (mostly comma or semicolon) or line separators (mostly \n or \r\n) in the CSV file. If you don't want to check the contents, you can just quote all fields. Any good CSV import engine should be fine with that.
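
A minimal sketch of the quote-everything approach with Python's standard csv module (the field values are made up for illustration):

```python
import csv

rows = [["date", "description", "amount"],
        ["2020-01-26", "Transfer, savings", "$3,000"]]

with open("statement.csv", "w", newline="") as f:
    # QUOTE_ALL quotes every field, so embedded commas cannot be
    # mistaken for field separators by Excel or other readers.
    writer = csv.writer(f, quoting=csv.QUOTE_ALL)
    writer.writerows(rows)
```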

A_Id: 59,939,869 | Q_Id: 59,916,150 | CreationDate: 2020-01-26T06:43:00.000 | is_accepted: false
Score: 0.099668 | Q_Score: 1 | Users Score: 1 | AnswerCount: 2 | ViewCount: 3,020 | Available Count: 1
Title: Reconstructing a matrix from an SVD in Python 3
Tags: python-3.x,numpy,matrix,linear-algebra,svd
Topics: Data Science and Machine Learning
Question: I have a matrix which I've decomposed with SVD and have in the variables u, s, and v. I've made some alterations to the s matrix to make it diagonal, as well as altered some of the numbers. Now I'm trying to reconstruct a regular matrix from the three matrices back into the original matrix. Does anyone know of any functions that do this? I can't seem to find any examples of this within numpy.
Answer: I figured it out: using the np.matmul() function and multiplying the three matrices u, s, and v together was enough to get back the original matrix.
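
A short sketch of the reconstruction the answer describes, assuming u, s, vt came from numpy.linalg.svd (which returns the singular values s as a 1-D array and the right factor already transposed):

```python
import numpy as np

a = np.random.rand(4, 3)
u, s, vt = np.linalg.svd(a, full_matrices=False)

# Rebuild the matrix: U @ diag(s) @ Vt; np.matmul (or @) chains the products.
reconstructed = np.matmul(u, np.matmul(np.diag(s), vt))
print(np.allclose(a, reconstructed))  # True up to floating-point error
```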

A_Id: 59,926,374 | Q_Id: 59,916,393 | CreationDate: 2020-01-26T07:28:00.000 | is_accepted: false
Score: 0 | Q_Score: 1 | Users Score: 0 | AnswerCount: 2 | ViewCount: 80 | Available Count: 1
Title: Word2Vec with spaCy, words into the same category
Tags: python,spacy,word2vec
Topics: Data Science and Machine Learning
Question: I am trying to cluster words into the same category, and for that I wanted to use spaCy's Word2Vec vectors. It is already working with easy words like banana, apple, and car; it shows nearly the same words. But if the words get more specific, like car, battery, accumulator, accu and so on, i.e. more technical, spaCy returns zero vectors, so these words were not included in the library. Do you have some input for me? Furthermore, I have to do it in German. Thank you very much, Jokulema
Answer: If you need word vectors for words not in the model you're using, you'll have to either find and use a different model that contains those words, or train your own model from your own training data that contains many examples of those words' usages in context.
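
A small sketch for checking whether a token actually has a vector in the loaded pipeline (the German model name de_core_news_md is an assumption; any model that ships word vectors works the same way):

```python
import spacy

# Assumes a vectors-bearing model has been downloaded, e.g.:
#   python -m spacy download de_core_news_md
nlp = spacy.load("de_core_news_md")

for word in ["Banane", "Auto", "Akkumulator"]:
    token = nlp(word)[0]
    # has_vector / is_oov expose the zero-vector cases described in the question.
    print(word, token.has_vector, token.is_oov, token.vector_norm)
```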

A_Id: 59,921,966 | Q_Id: 59,921,723 | CreationDate: 2020-01-26T19:05:00.000 | is_accepted: true
Score: 1.2 | Q_Score: 0 | Users Score: 1 | AnswerCount: 1 | ViewCount: 201 | Available Count: 1
Title: How do you write German text into a CSV file?
Tags: python,selenium,file,web-scraping,utf
Topics: Web Development; Data Science and Machine Learning
Question: I'm trying to write text that was scraped from a German website into a CSV file. I tried using UTF-8 encoding like this: with open('/Users/filepath/result.csv', 'a', encoding='utf8') as f: f.write(text). But when I open the CSV file, the ü does not come through: instead of "Projekt für Alleinerziehende mit mehreren (behinderten) Kindern" the umlaut appears garbled.
Answer: Try using ISO-8859-15 encoding. Also make sure that when you open the file in an editor, its encoding is set to the same encoding.
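
For context, a small sketch of writing the same text with an explicit encoding; utf-8-sig (UTF-8 with a BOM) is a commonly used alternative to ISO-8859-15 when the file must open cleanly in Excel, which is an assumption about the asker's viewer rather than part of the original answer:

```python
text = "Projekt für Alleinerziehende mit mehreren (behinderten) Kindern\n"

# Option 1: the answer's suggestion.
with open("result_latin.csv", "a", encoding="iso-8859-15") as f:
    f.write(text)

# Option 2: UTF-8 with BOM, which Excel detects automatically.
with open("result_utf8.csv", "a", encoding="utf-8-sig") as f:
    f.write(text)
```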

A_Id: 60,563,844 | Q_Id: 59,922,367 | CreationDate: 2020-01-26T20:20:00.000 | is_accepted: false
Score: 0 | Q_Score: 1 | Users Score: 0 | AnswerCount: 1 | ViewCount: 106 | Available Count: 1
Title: Text classification into predefined categories
Tags: python,svm,text-classification
Topics: Data Science and Machine Learning
Question: I am trying to classify text data into a few categories, but in the data set there can be data that does not belong to any of the defined categories, and after deployment the final product should deal with text data that does not belong to the predefined categories. To implement that I am currently using an SVM text classifier, and I am planning to define another category, "non", to handle the data that does not belong to the predefined categories. Is this a correct approach?
Answer: Yes, that would work. It is essentially an additional class called "non": the classifier will learn to assign to it all the documents that are labeled with that class. So when you use your final product, it will try to classify new text data into all of the classes, including "non".
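
A minimal sklearn sketch of the approach, where the training texts and labels are made-up placeholders and "non" is simply one more class:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy training data: the "non" label acts like any other class.
texts = ["invoice for your order", "football match tonight",
         "your payment is due", "random unrelated chatter"]
labels = ["finance", "sports", "finance", "non"]

clf = make_pipeline(TfidfVectorizer(), LinearSVC())
clf.fit(texts, labels)

print(clf.predict(["the score of yesterday's game", "completely off-topic text"]))
```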

A_Id: 59,931,906 | Q_Id: 59,931,829 | CreationDate: 2020-01-27T13:09:00.000 | is_accepted: false
Score: 0 | Q_Score: 0 | Users Score: 0 | AnswerCount: 3 | ViewCount: 178 | Available Count: 1
Title: How to export custom tables from R/Python to Excel?
Tags: python,r,excel,python-3.x
Topics: Database and SQL; Data Science and Machine Learning
Question: I have to make two business-like reports about media. Therefore I have to analyze data and give colleagues an Excel file with multiple custom-formatted tables. Is there a way to make custom-formatted tables in R or Python and export them to Excel? This way I can automate formatting the 3000+ tables :) Thanks in advance!
Answer: Have you tried using pandas, in Python?
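
Expanding the one-line answer into a sketch: pandas can write a DataFrame to Excel and, through the xlsxwriter engine, apply custom formats. The sheet name, columns, and formats below are illustrative assumptions, not part of the original answer.

```python
import pandas as pd

df = pd.DataFrame({"channel": ["TV", "Radio", "Online"],
                   "reach": [120000, 45000, 310000]})

with pd.ExcelWriter("report.xlsx", engine="xlsxwriter") as writer:
    df.to_excel(writer, sheet_name="media", index=False)
    workbook = writer.book
    worksheet = writer.sheets["media"]
    # A custom number format for the 'reach' column and a wider column width.
    thousands = workbook.add_format({"num_format": "#,##0"})
    worksheet.set_column("B:B", 14, thousands)
```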

A_Id: 60,004,958 | Q_Id: 59,932,915 | CreationDate: 2020-01-27T14:17:00.000 | is_accepted: false
Score: 0 | Q_Score: 1 | Users Score: 0 | AnswerCount: 2 | ViewCount: 76 | Available Count: 1
Title: Using scipy.optimize.minimize with a quantized integer range
Tags: python,scipy-optimize-minimize
Topics: Data Science and Machine Learning
Question: Can I restrict the answer of scipy.optimize.minimize to, for example, only 2, 3, 4, 6, 8, or 10? Just those values and not float values?
Answer: If the question is whether you can use discrete/integer/categorical design variables in scipy.optimize.minimize, the answer is no. It is mainly focused on gradient-based optimization, and it assumes continuous design variables and a continuous objective function. You can implement some continuous approximation, or branch-and-bound or similar methods, to solve for discrete design variables.
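
When the feasible set is this small, exhaustive evaluation is a simple alternative to scipy.optimize.minimize; a sketch with a made-up objective function:

```python
# Hypothetical objective; replace with the real function to minimize.
def objective(x):
    return (x - 5.3) ** 2

candidates = [2, 3, 4, 6, 8, 10]
best = min(candidates, key=objective)   # evaluate every allowed value
print(best, objective(best))            # -> 6 0.49
```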

A_Id: 59,953,332 | Q_Id: 59,935,416 | CreationDate: 2020-01-27T16:43:00.000 | is_accepted: false
Score: 0 | Q_Score: 0 | Users Score: 0 | AnswerCount: 3 | ViewCount: 116 | Available Count: 1
Title: How to speed up this regular expression in the .replace function?
Tags: python,regex,replace
Topics: Python Basics and Environment; Data Science and Machine Learning
Question: I have 4.5 million rows to process, so I need to speed this up badly. I don't understand regex very well, so other answers are hard for me to follow. I have a column that contains IDs, e.g. ELI-123456789. The numeric part of this ID is contained in a string in a column bk_name, prefixed with "@": AAAAA@123456789_BBBBB;CCCCC; My goal is to change that string so the ID moves to the end, prefixed with "#", and save it in new_name: AAAAA_BBBBB;CCCCC;#123456789 Here's what I tried:
1. Take the ID and replace "ELI-" with "@", save as ID2: df["ID2"] = df["ID"].str.replace("ELI-", "@")
2. Find ID2 in the bk_name string and replace it with "": df["new_name"] = df["bk_name"].replace(regex = r'(?i)' + df["ID2"].astype(str), value = "")
3. Take the ID again, replace "ELI-" with "#", and append it to the end of new_name: df["new_name"] = df["new_name"] + df["ID"].str.replace("ELI-", "#")
The problem is that step 2 is the line that takes most of the time: it took 6.5 seconds to process 6,700 rows. But now I have 4.6 million rows to process; it's been 7 hours and it's still running, and I have no idea why. In my opinion regex slows down my code, but I have no deeper understanding. Thanks for your help in advance, any suggestions would be appreciated :)
Answer: I don't understand what caused the problem, but I solved it. The only thing I changed is step 2: I used an apply/lambda function and it suddenly works, and I have no idea why.
1. Take the ID and replace "ELI-" with "@", save as ID2: df["ID2"] = df["ID"].str.replace("ELI-", "@")
2. Find ID2 in the bk_name string and replace it with "": df["new_name"] = df[["bk_name", "ID2"]].astype(str).apply(lambda x: x["bk_name"].replace(x["ID2"], ""), axis=1)
3. Take the ID again, replace "ELI-" with "#", and append it to the end of new_name: df["new_name"] = df["new_name"] + df["ID"].str.replace("ELI-", "#")
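
A self-contained sketch of the accepted fix on a toy DataFrame (column values invented for illustration); the speedup comes from doing a plain, non-regex substring replace per row instead of building a separate regex for every row:

```python
import pandas as pd

df = pd.DataFrame({
    "ID": ["ELI-123456789", "ELI-987654321"],
    "bk_name": ["AAAAA@123456789_BBBBB;CCCCC;", "DDDDD@987654321_EEEEE;"],
})

df["ID2"] = df["ID"].str.replace("ELI-", "@", regex=False)
# Row-wise plain string replace (no per-row regex compilation).
df["new_name"] = df.apply(lambda r: r["bk_name"].replace(r["ID2"], ""), axis=1)
df["new_name"] = df["new_name"] + df["ID"].str.replace("ELI-", "#", regex=False)
print(df["new_name"].tolist())
# ['AAAAA_BBBBB;CCCCC;#123456789', 'DDDDD_EEEEE;#987654321']
```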

A_Id: 59,937,589 | Q_Id: 59,937,062 | CreationDate: 2020-01-27T18:44:00.000 | is_accepted: true
Score: 1.2 | Q_Score: 0 | Users Score: 0 | AnswerCount: 1 | ViewCount: 20 | Available Count: 1
Title: Specific conditions for string replacement?
Tags: python-3.x,pandas
Topics: Python Basics and Environment; Data Science and Machine Learning
Question: I was wondering if it is possible to iterate over a pandas column and replace strings if a particular condition is met. Essentially I have a dataframe column with hundreds of strings, all in the general format GCA_XXXXX.X_MMXXXX.X, although some are in the XXXX_MMXXXX.X format. I need to replace one of the underscores with '|', but only if it is followed by MM; if the underscore comes after GCA, then I need it replaced with ''. Is there any way in Python that I can set these conditions and apply a function over the column? Thanks!
Answer: Maybe try something like this, using the string accessor so the replacement applies to substrings: df['column'] = df['column'].astype(str).str.replace('_MM', '|MM', regex=False) and then df['column'] = df['column'].str.replace('GCA_', 'GCA', regex=False)
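
A runnable sketch of the two conditional replacements on invented sample values:

```python
import pandas as pd

df = pd.DataFrame({"column": ["GCA_12345.1_MM0001.1", "9876_MM0002.1"]})

# '_' followed by MM becomes '|'; '_' right after GCA is dropped.
df["column"] = df["column"].str.replace("_MM", "|MM", regex=False)
df["column"] = df["column"].str.replace("GCA_", "GCA", regex=False)
print(df["column"].tolist())
# ['GCA12345.1|MM0001.1', '9876|MM0002.1']
```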

A_Id: 62,744,734 | Q_Id: 59,937,320 | CreationDate: 2020-01-27T19:02:00.000 | is_accepted: false
Score: 0 | Q_Score: 1 | Users Score: 0 | AnswerCount: 1 | ViewCount: 258 | Available Count: 1
Title: How to erase the pandas_dedupe.dedupe_dataframe training set?
Tags: python,pandas,python-dedupe
Topics: Data Science and Machine Learning
Question: I am working with the Python pandas_dedupe package, specifically with pandas_dedupe.dedupe_dataframe. I have trained the dedupe_dataframe module via the interactive prompts, but now I need to retrain it. How can I erase the training set and start from scratch? I have tried deleting the dedupe_dataframe_learned_settings and dedupe_dataframe_training.json files, but then the Python script throws an error. I work with PyCharm as my IDE. Any hint would be much appreciated. Thanks!
Answer: With pandas-dedupe v1.3.1, you simply need to do the following: delete dedupe_dataframe_learned_settings and dedupe_dataframe_training.json, then run dedupe_dataframe with update_model=False (note: this is the default). This is the standard procedure. If it does not work, please provide more info about the error you get.

A_Id: 59,969,860 | Q_Id: 59,969,386 | CreationDate: 2020-01-29T14:33:00.000 | is_accepted: true
Score: 1.2 | Q_Score: 0 | Users Score: 0 | AnswerCount: 3 | ViewCount: 840 | Available Count: 1
Title: How to get unsampled data from the Google Analytics API for a specific day
Tags: python-3.x,google-analytics,google-analytics-api
Topics: Networking and APIs; Data Science and Machine Learning
Question: I am building a package that uses the Google Analytics API for Python, but in several cases, when I have multiple dimensions, the extraction by day is sampled. I know that if I use sampling_level = LARGE it will use a more accurate sample. But does somebody know whether there is a way to reduce a request so that you can extract one day without sampling? Grateful.
Answer: Setting sampling to LARGE is the only method we have to influence the amount of sampling, but as you already know this doesn't prevent it. The only way to reduce the chances of sampling is to request less data. A reduced number of dimensions and metrics, as well as a shorter date range, are the best ways to ensure that you don't get sampled data.

A_Id: 59,980,630 | Q_Id: 59,980,207 | CreationDate: 2020-01-30T06:26:00.000 | is_accepted: true
Score: 1.2 | Q_Score: 0 | Users Score: 1 | AnswerCount: 1 | ViewCount: 251 | Available Count: 1
Title: MLPClassifier predict and predict_proba seem inconsistent
Tags: python,scikit-learn,neural-network,classification
Topics: Data Science and Machine Learning
Question: I am using the MLPClassifier with the lbfgs solver. If I calculate the expected value using the predict_proba() method and the classes_ attribute, it does not match what the predict() method returns. Which one is more accurate? Does the value returned by predict() have to be one of the classes? Will it not interpolate between the classes? I have a continuously varying variable I want to predict.
Answer: Definition: predict classifies your input into a label; predict_proba returns the predicted probability for each class in your model. Example: if this were a binary problem with classes labelled 0 and 1, then for some input you are testing, predict will return a class, let's say 1. However, predict_proba is going to give you the predicted probability that this input maps to class 1, which could, in this example, be 0.8. This is why their values do not match. Which is more accurate? You can't really compare their accuracy. However, you can treat predict_proba as the confidence of your model for a particular class. For example, if you have three classes and you test one sample, you would receive an output of three real numbers: [0.11, 0.01, 0.88]. You could treat this as your model having high confidence that this input maps to the third class, as it has the highest probability of 0.88. In contrast, for some other input value, your model might produce [0.33, 0.32, 0.34]. Your model still predicts the third class, as this has the highest probability, but there is low confidence that the third class is the true class.
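
A short sketch showing the relationship between the two methods on toy data: predict() always returns one of classes_, namely the class whose predicted probability is highest, and never interpolates (for a continuously varying target a regressor such as MLPRegressor would be the tool).

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

X = np.array([[0.0], [0.1], [0.9], [1.0]])
y = np.array([0, 0, 1, 1])

clf = MLPClassifier(solver="lbfgs", random_state=0, max_iter=2000).fit(X, y)

proba = clf.predict_proba([[0.4]])
# predict() is simply the class with the highest predicted probability.
print(proba, clf.classes_[np.argmax(proba, axis=1)], clf.predict([[0.4]]))
```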

A_Id: 67,877,414 | Q_Id: 59,983,073 | CreationDate: 2020-01-30T09:45:00.000 | is_accepted: false
Score: 0 | Q_Score: 10 | Users Score: 0 | AnswerCount: 2 | ViewCount: 3,192 | Available Count: 1
Title: How to load a pickle file in chunks?
Tags: python,file,csv,pickle,chunks
Topics: Data Science and Machine Learning
Question: Is there any option to load a pickle file in chunks? I know we can save the data in CSV and load it in chunks. But other than CSV, is there any option to load a pickle file, or any Python-native file, in chunks?
Answer: I had a similar issue, where I wrote a barrel file-descriptor pool and noticed that my pickle files were getting corrupted when I closed a file descriptor. Although you may do multiple dump() operations to an open file descriptor, it's not possible to subsequently do an open('file', 'ab') to start saving a new set of objects. I got around this by doing a pickler.dump(None) as a session terminator right before I had to close the file descriptor, and upon re-opening I instantiated a new Pickler instance to resume writing to the file. When loading from this file, a None object signified an end-of-session, at which point I instantiated a new Unpickler instance with the file descriptor to continue reading the remainder of the multi-session pickle file. This only applies if for some reason you have to close the file descriptor, though. Otherwise, any number of dump() calls can be performed for load() later.
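
A minimal sketch of the pattern the answer alludes to: several dump() calls into one file, then reading the objects back one at a time until end-of-file (a generic recipe, not the answer's barrel-pool code):

```python
import pickle

# Write several objects ("chunks") into a single pickle file.
with open("chunks.pkl", "wb") as f:
    for chunk in ([1, 2, 3], {"a": 4}, "last chunk"):
        pickle.dump(chunk, f)

# Read them back one by one instead of loading everything at once.
with open("chunks.pkl", "rb") as f:
    while True:
        try:
            print(pickle.load(f))
        except EOFError:
            break
```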

A_Id: 59,986,740 | Q_Id: 59,986,353 | CreationDate: 2020-01-30T12:45:00.000 | is_accepted: true
Score: 1.2 | Q_Score: 2 | Users Score: 7 | AnswerCount: 2 | ViewCount: 1,654 | Available Count: 2
Title: Why do I have to convert "uint8" into "float32"?
Tags: python,keras,deep-learning,mnist
Topics: Data Science and Machine Learning
Question: I just started looking into deep learning and started to build a CNN with Keras. I've noticed that oftentimes when the MNIST dataset is used, the images get converted to the float32 datatype after importing. So my question is: why is that the case? It seems like it should work fine with uint8 data. What am I missing here? Why is float32 needed?
Answer: Well, the reason is simple: the math for neural networks is continuous, not discrete, and this is best approximated with floating-point numbers. The inputs, outputs, and weights of a neural network are continuous numbers. If you had integer inputs, they would still be converted to floating point at some point in the pipeline in order to have compatible types on which operations can be performed. This might happen explicitly or implicitly; it is better to be explicit about types. In some frameworks you might get errors if you do not cast the inputs to the expected types.
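
The usual preprocessing step being discussed, as a short sketch: casting to float32 and scaling pixel values to [0, 1] before feeding a Keras model.

```python
from tensorflow.keras.datasets import mnist

(x_train, y_train), (x_test, y_test) = mnist.load_data()
print(x_train.dtype)                 # uint8, values 0..255

# Cast to float32 and scale to [0, 1]; weights and activations are floats,
# so the inputs are made floating point explicitly rather than implicitly.
x_train = x_train.astype("float32") / 255.0
x_test = x_test.astype("float32") / 255.0
print(x_train.dtype, x_train.max())  # float32 1.0
```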

A_Id: 70,925,908 | Q_Id: 59,986,353 | CreationDate: 2020-01-30T12:45:00.000 | is_accepted: false
Score: 0 | Q_Score: 2 | Users Score: 0 | AnswerCount: 2 | ViewCount: 1,654 | Available Count: 2
Title: Why do I have to convert "uint8" into "float32"?
Tags: python,keras,deep-learning,mnist
Topics: Data Science and Machine Learning
Question: I just started looking into deep learning and started to build a CNN with Keras. I've noticed that oftentimes when the MNIST dataset is used, the images get converted to the float32 datatype after importing. So my question is: why is that the case? It seems like it should work fine with uint8 data. What am I missing here? Why is float32 needed?
Answer: The answer is: we should also perform a lot of data augmentation and training in a CNN. This will lead to a faster training experience!

A_Id: 60,012,708 | Q_Id: 59,990,330 | CreationDate: 2020-01-30T16:24:00.000 | is_accepted: false
Score: 0 | Q_Score: 1 | Users Score: 0 | AnswerCount: 1 | ViewCount: 67 | Available Count: 1
Title: Python getting/posting data in threads
Tags: python,multithreading
Topics: Python Basics and Environment; Data Science and Machine Learning
Question: I have implemented Python code that listens to my DB and continuously stores data in a pandas DataFrame (inserting new rows and updating old ones in an infinite loop). The data comes to Python as it appears in the DB (updates arrive on average every 1-5 seconds). The second part of my Python code should do some work with this DataFrame. So what I want to do is split my code into separate threads: the first thread will store data in the DataFrame, and the other thread (or more than one) will use the DataFrame to do some work and return results via plots, variables, and so on. I have done some research on threading, but I have not found a solution yet. Any help on this issue and/or example code is highly appreciated.
Answer: In the end I created a table in my DB that is continuously updated via trigger/function functionality. On the Python side I created threads that query this table asynchronously, get the data, and do their work. Perhaps this is not the ideal solution, but it works and covers my current needs.
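
A generic producer/consumer sketch of the threading split described in the question; the writer loop is a made-up stand-in for the database polling, and a Lock guards the shared DataFrame:

```python
import threading
import time

import pandas as pd

df = pd.DataFrame(columns=["value"])
lock = threading.Lock()

def writer():
    # Stand-in for the DB-polling loop: appends a row every second.
    for i in range(5):
        with lock:                  # protect the shared DataFrame
            df.loc[len(df)] = [i]
        time.sleep(1)

def reader():
    # Stand-in for the processing thread: works on a consistent snapshot.
    for _ in range(5):
        with lock:
            snapshot = df.copy()
        print(len(snapshot), "rows seen")
        time.sleep(1)

threads = [threading.Thread(target=writer), threading.Thread(target=reader)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```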

A_Id: 60,028,390 | Q_Id: 60,007,843 | CreationDate: 2020-01-31T16:34:00.000 | is_accepted: true
Score: 1.2 | Q_Score: 0 | Users Score: 1 | AnswerCount: 1 | ViewCount: 412 | Available Count: 1
Title: What's the fastest way to loop through a sorted Dask dataframe?
Tags: python-3.x,pandas,dask
Topics: Data Science and Machine Learning
Question: I'm new to pandas and Dask. Dask dataframes wrap pandas dataframes and share most of the same function calls. I am using Dask to sort (set_index) a largish CSV file of ~1,000,000 rows and ~100 columns. Once it's sorted, I use itertuples() to grab each dataframe row, to compare with a row from a database with ~1,000,000 rows and ~100 columns. But it's running slowly (around 8 hours); is there a faster way to do this? I used Dask because it can sort very large CSV files and has a flexible CSV parsing engine. It will also let me run more advanced operations on the dataset and parse more data formats in the future. I could presort the CSV, but I want to see if Dask can be fast enough for my use case; it would make things a lot more hands-off in the long run.
Answer: By using itertuples, you are bringing each row back to the client, one by one. Please read up on map_partitions or map to see how you can apply a function to rows or blocks of the dataframe without pulling data to the client. Note that each worker should write to a different file, since they operate in parallel.
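
A sketch of the map_partitions pattern the answer points to (the file names and the compare_with_db function are placeholders invented for illustration):

```python
import dask.dataframe as dd

def compare_with_db(partition):
    # Hypothetical per-partition work: runs on a whole pandas DataFrame
    # inside each worker instead of streaming rows back to the client.
    partition["matched"] = partition["key"].notna()  # placeholder logic
    return partition

ddf = dd.read_csv("big_file_*.csv")
ddf = ddf.set_index("key")                 # the expensive sort, done once
result = ddf.map_partitions(compare_with_db)
result.to_csv("output-*.csv")              # each partition writes its own file
```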

A_Id: 60,019,415 | Q_Id: 60,012,193 | CreationDate: 2020-01-31T22:37:00.000 | is_accepted: false
Score: 0.197375 | Q_Score: 1 | Users Score: 1 | AnswerCount: 1 | ViewCount: 1,294 | Available Count: 1
Title: ModuleNotFoundError: No module named 'numpy' after using 'pip install numpy'
Tags: python,numpy,pip
Topics: Python Basics and Environment; Data Science and Machine Learning
Question: I tried installing numpy using the command 'pip install numpy' and ran 'import numpy', but I received the error ModuleNotFoundError: No module named 'numpy' when I tried to re-run the code. After reading similar questions on SO, I ran pip3 install numpy, but since I had already installed it, I received the message Requirement already satisfied. NOTE: I have read other similar questions on SO, but none of them solve my problem. Possible solution for future readers: like @Sampath mentioned in his answer, I had two versions of Python installed. To solve the issue, I manually deleted all the installation files of the older version.
Answer: Welcome to SO! It is likely that two Python versions (Python 2.x and Python 3.x) are installed on your system. pip is likely pointing to your Python 2.x, so if you want to use libraries installed into that version, run your code with python. pip3 is pointing to your Python 3.x, so use python3 in your terminal to use those. Note: to see the libraries installed in your Python 2 use pip freeze, or pip3 freeze for Python 3. If you are getting an error that python3 is not found, this is likely to be a system-path issue. If you are still having trouble, you can learn more about Anaconda Python, which has a curated list of steps and guidelines that are easy for beginners too. Hope this helps!
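
A quick way to see which interpreter is actually running the code, offered as a diagnostic sketch rather than part of the original answer; running pip through the interpreter itself removes the pip/pip3 ambiguity:

```python
import sys

# Shows exactly which interpreter is running the script; compare this with
# the interpreter that pip installs into.
print(sys.executable)

# From a terminal, installing via the interpreter avoids the pip/pip3 mix-up:
#   python3 -m pip install numpy
```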

A_Id: 60,022,224 | Q_Id: 60,022,144 | CreationDate: 2020-02-01T23:55:00.000 | is_accepted: false
Score: 0 | Q_Score: 0 | Users Score: 0 | AnswerCount: 1 | ViewCount: 77 | Available Count: 1
Title: Neural network producing the same output regardless of input
Tags: python,machine-learning,neural-network
Topics: Data Science and Machine Learning
Question: I'm currently trying to implement a neural network in Python to play the game of Snake that is trained using a genetic algorithm (although that's a separate matter right now). Every network that plays the game does the same movement over and over (e.g. continues in a straight line, or keeps turning left). There are 5 inputs to the network: the distance to an object (food, a boundary, its own tail) in all four directions, and the angle between the food and the direction the snake is facing. The three outputs represent turning left, continuing straight, and turning right. I've never worked with anything like this before, so I have a fairly basic understanding at this point. The number of hidden layers and the number of nodes per layer is variable, and is something I have been altering a lot to test, but the snakes continue to each repeat the exact same motion. Any advice on why this is happening, and how to fix it, would be greatly appreciated. I can show my code if it's useful.
Answer: You may have weights initialized to zero; if they aren't trained properly and stay zero for some reason, the neural network will always produce the bias as its output.

A_Id: 60,024,009 | Q_Id: 60,023,966 | CreationDate: 2020-02-02T07:08:00.000 | is_accepted: false
Score: 0.099668 | Q_Score: 1 | Users Score: 1 | AnswerCount: 2 | ViewCount: 114 | Available Count: 1
Title: Where does pandas store the DataFrame while the program is running?
Tags: python,pandas
Topics: Python Basics and Environment; Data Science and Machine Learning
Question: Is it in memory? If so, then it doesn't matter whether I import chunk by chunk or not, because eventually, when I concatenate them, they'll all be stored in memory. Does that mean there is no way to use pandas for a large data set?
Answer: Yes, it is in memory, and yes, when the dataset gets too large you have to use other tools. Of course you can load data in chunks, process one chunk at a time, and write out the results (freeing memory for the next chunk). That works fine for some types of processing, like filtering and annotating, while if you need sorting or grouping you need to use some other tool; personally I like BigQuery from Google Cloud.
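
A sketch of the chunked pattern the answer describes: each chunk is filtered and written out immediately, so only one chunk is ever held in memory (the file names and the filter condition are illustrative).

```python
import pandas as pd

first = True
# Read 100,000 rows at a time instead of the whole file at once.
for chunk in pd.read_csv("large_input.csv", chunksize=100_000):
    filtered = chunk[chunk["value"] > 0]           # per-chunk processing
    filtered.to_csv("filtered_output.csv", mode="w" if first else "a",
                    header=first, index=False)      # append results, one header
    first = False
```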