Dataset schema (min/max are value ranges for numeric columns and length ranges for string columns):

Column                             | Dtype         | Min   | Max
-----------------------------------+---------------+-------+-------
GUI and Desktop Applications       | int64         | 0     | 1
A_Id                               | int64         | 5.3k  | 72.5M
Networking and APIs                | int64         | 0     | 1
Python Basics and Environment      | int64         | 0     | 1
Other                              | int64         | 0     | 1
Database and SQL                   | int64         | 0     | 1
Available Count                    | int64         | 1     | 13
is_accepted                        | bool          | 2 classes
Q_Score                            | int64         | 0     | 1.72k
CreationDate                       | stringlengths | 23    | 23
Users Score                        | int64         | -11   | 327
AnswerCount                        | int64         | 1     | 31
System Administration and DevOps   | int64         | 0     | 1
Title                              | stringlengths | 15    | 149
Q_Id                               | int64         | 5.14k | 60M
Score                              | float64       | -1    | 1.2
Tags                               | stringlengths | 6     | 90
Answer                             | stringlengths | 18    | 5.54k
Question                           | stringlengths | 49    | 9.42k
Web Development                    | int64         | 0     | 1
Data Science and Machine Learning  | int64         | 1     | 1
ViewCount                          | int64         | 7     | 3.27M
0
59,488,669
0
1
0
0
2
false
0
2019-12-26T12:25:00.000
1
2
0
How much training data (sentences) is required for custom NER using spaCy in Python? [Just a rough idea]
59,488,470
0.099668
python,machine-learning,spacy,named-entity-recognition
For developing a custom NER model, at least 50-100 occurrences of each entity will be required, along with their proper context. Otherwise, if you have less data, your custom model will overfit on it. So, depending on your data, you will require at least 200 to 300 sentences.
I want to know: let's say I have 10 custom entities to recognize, how many annotated training sentences should I provide (any rough idea)? Thank you in advance! :) I am new to this, please help.
0
1
1,690
0
59,490,154
0
1
0
0
2
false
0
2019-12-26T12:25:00.000
1
2
0
How much training data (sentences) is required for custom NER using spaCy in Python? [Just a rough idea]
59,488,470
0.099668
python,machine-learning,spacy,named-entity-recognition
For a custom NER model from spaCy, you will definitely require around 100 samples for each entity, and without any biases in your dataset. All this is as per my experience. Suggestion: you can explore a spaCy custom model, but for production or a serious project you can't depend on it alone; you will have to do some additional NLP work (relation extraction, etc.) along with it. Hope this helps.
I want to know: let's say I have 10 custom entities to recognize, how many annotated training sentences should I provide (any rough idea)? Thank you in advance! :) I am new to this, please help.
0
1
1,690
0
59,493,316
0
0
0
0
1
false
3
2019-12-26T20:22:00.000
2
2
0
Does tf.keras.losses.categorical_crossentropy return an array or a single value?
59,493,127
0.197375
python,tensorflow,keras,loss-function
Most of the usual losses return the original shape minus the last axis. So, if your original y_pred shape was (samples, ..., ..., classes), then your resulting shape will be (samples, ..., ...). This is probably because Keras may use this tensor in further calculations, for sample weights and maybe other things. In a custom loop, if these dimensions are useless, you can simply take a K.mean(loss_result) before calculating the gradients. (Where K is either keras.backend or tensorflow.keras.backend.)
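A minimal sketch of the reduction the answer describes, assuming a TF2-style custom loop (the toy tensors are illustrative):

```python
import tensorflow as tf

# y_true / y_pred have shape (batch, classes); the loss keeps the batch axis.
y_true = tf.constant([[0.0, 1.0], [1.0, 0.0]])
y_pred = tf.constant([[0.1, 0.9], [0.8, 0.2]])

per_sample = tf.keras.losses.categorical_crossentropy(y_true, y_pred)
print(per_sample.shape)  # (2,) -- one loss value per sample

# Reduce to a scalar before computing gradients in a custom loop.
loss = tf.reduce_mean(per_sample)
```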
I'm using a custom training loop. The loss that is returned by tf.keras.losses.categorical_crossentropy is an array of shape, I'm assuming, (1, batch_size). Is this what it is supposed to return, or a single value? In the latter case, any idea what I could be doing wrong?
0
1
747
0
63,206,321
0
0
0
0
1
false
2
2019-12-26T20:34:00.000
0
3
0
Access output of intermediate layers in TensorFlow 2.0 in eager mode
59,493,222
0
python,tensorflow,deep-learning,tensorflow2.0
The most straightforward solution would go like this: mid_layer = model.get_layer("layer_name"). You can now treat mid_layer as a callable and apply it directly to your data: mid_layer(X). Also, to get the name of a hidden layer, you can use model.summary(), which will give you some insight into the layer inputs/outputs as well.
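A common variant of this idea, for Functional/Sequential models, wraps the intermediate layer in a feature-extractor sub-model so the activation is computed in the context of the whole network; the toy model and layer name below are illustrative stand-ins:

```python
import tensorflow as tf

# Toy stand-in for the CNN in question.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(4, 3, name="conv", input_shape=(8, 8, 1)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(2),
])

# Sub-model from the original inputs to the intermediate layer's output.
feature_extractor = tf.keras.Model(inputs=model.inputs,
                                   outputs=model.get_layer("conv").output)

X = tf.random.normal((1, 8, 8, 1))
intermediate = feature_extractor(X)  # eager tensor of the conv activations
```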
I have a CNN that I have built using TensorFlow 2.0. I need to access the outputs of the intermediate layers. I was going over other stackoverflow questions that were similar, but all had solutions involving the Keras sequential model. I have tried using model.layers[index].output but I get Layer conv2d has no inbound nodes. I can post my code here (which is super long), but I am sure even without that someone can point me to how it can be done using just TensorFlow 2.0 in eager mode.
0
1
3,974
0
59,494,808
0
0
0
0
1
true
1
2019-12-27T00:23:00.000
3
1
0
Is correlation an important factor in unsupervised learning (clustering)?
59,494,747
1.2
python,machine-learning,correlation,unsupervised-learning,feature-engineering
My assumption here is that you're asking this question because in cases of linear modeling, highly collinear variables can cause issues. The short answer is no, you don't need to remove highly correlated variables from clustering for collinearity concerns. Clustering doesn't rely on linear assumptions, and so collinearity wouldn't cause issues. That doesn't mean that using a bunch of highly correlated variables is a good thing. Your features may be overly redundant and you may be using more data than you need to reach the same patterns. With your data size/feature set that's probably not an issue, but for large data you could leverage the correlated variables via PCA/dimensionality reduction to reduce your computation overhead.
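If redundancy does become a computational concern, here is a minimal sketch of the PCA-then-cluster idea mentioned above (scikit-learn; the component and cluster counts are illustrative choices, and the random data stands in for the (500, 33) dataset):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

X = np.random.rand(500, 33)  # stand-in for the real features

# Scale, then project onto a handful of uncorrelated components.
X_scaled = StandardScaler().fit_transform(X)
X_reduced = PCA(n_components=5).fit_transform(X_scaled)

labels = KMeans(n_clusters=3, random_state=0).fit_predict(X_reduced)
```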
I am working with a dataset of size (500, 33). In particular, the data set contains 9 features, say [X_High, X_medium, X_low, Y_High, Y_medium, Y_low, Z_High, Z_medium, Z_low]. Both visually and after correlation matrix calculation, I observed that [X_High, Y_High, Z_High], [X_medium, Y_medium, Z_medium] and [X_low, Y_low, Z_low] are highly correlated (above 85%). I would like to run a clustering algorithm (say K-means, GMM or DBSCAN). In that case, is it necessary to remove the correlated features for unsupervised learning? Does removing correlated features or modifying features create any impact?
0
1
1,456
0
59,678,549
0
0
1
0
1
false
0
2019-12-27T09:10:00.000
0
1
0
TextBlob Naive Bayes classifier for neutral tweets
59,498,416
0
python,nltk,sentiment-analysis,naivebayes,textblob
If you have only two classes, Positive and Negative, and you want to predict if a tweet is Neutral, you can do so by predicting class probabilities. For example, a tweet predicted as 80% Positive remains Positive. However, a tweet predicted as 50% Positive could be Neutral instead.
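A small sketch of this thresholding idea with TextBlob's NaiveBayesClassifier; the training pairs and the 0.4/0.6 cutoffs are illustrative assumptions:

```python
from textblob.classifiers import NaiveBayesClassifier

train = [("I love this phone", "pos"), ("This movie was awful", "neg")]  # toy data
clf = NaiveBayesClassifier(train)

prob_dist = clf.prob_classify("It arrived on time")
p_pos = prob_dist.prob("pos")

# Treat near-50% predictions as neutral; otherwise keep the most likely label.
label = "neutral" if 0.4 < p_pos < 0.6 else prob_dist.max()
print(label, p_pos)
```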
I am doing a small project on sentiment analysis using TextBlob. I understand there are 2 ways to check the sentiment of a tweet: Tweet polarity: using it I can tell whether the tweet is positive, negative or neutral. Training a classifier: I am using this method, where I am training a TextBlob Naive Bayes classifier on positive and negative tweets and using the classifier to classify a tweet as either 'positive' or 'negative'. My question is: using the Naive Bayes classifier, can I also classify a tweet as 'neutral'? In other words, can the 'sentiment polarity' defined in option 1 somehow be used in option 2?
0
1
304
0
59,501,847
0
0
0
0
1
false
0
2019-12-27T12:55:00.000
1
2
0
Sentiment Classification using Doc2Vec
59,501,121
0.099668
python,nlp,gensim,doc2vec
To get a vector for an unseen document, use vector = model.infer_vector(["new", "document"]). Then feed vector into your classifier: preds = clf.predict([vector]).
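Spelled out as a tiny end-to-end sketch (the two-document corpus and model sizes are toy values; in practice model and clf would be the trained Doc2Vec and LogisticRegression from the question):

```python
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
from gensim.utils import simple_preprocess
from sklearn.linear_model import LogisticRegression

corpus = [TaggedDocument(simple_preprocess(text), [i])
          for i, text in enumerate(["good movie", "terrible film"])]
model = Doc2Vec(corpus, vector_size=10, min_count=1, epochs=20)
clf = LogisticRegression().fit(
    [model.infer_vector(doc.words) for doc in corpus], [1, 0])

# The Doc2Vec equivalent of TF-IDF's transform: infer_vector on unseen text.
vector = model.infer_vector(simple_preprocess("one of the best films this year"))
print(clf.predict([vector]))
```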
I am confused as to how I can use Doc2Vec (using Gensim) for the IMDB sentiment classification dataset. I have got the Doc2Vec embeddings after training on my corpus and built my logistic regression model using them. How do I use the model to make predictions for new reviews? sklearn's TF-IDF has a transform method that can be used on test data after training on training data; what is its equivalent in Gensim Doc2Vec?
0
1
217
0
59,694,232
0
0
0
0
1
false
0
2019-12-27T16:40:00.000
1
1
0
Is there a way to find nearest neighbors with BallTree or KDTree using cosine similarity?
59,503,600
0.197375
python,scikit-learn,knn,nearest-neighbor,cosine-similarity
Try normalizing your matrix to unit-length rows and using the Euclidean metric: for unit vectors, squared Euclidean distance equals 2 * (1 - cosine similarity), so the Euclidean nearest neighbors are exactly the cosine nearest neighbors.
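A sketch of that trick with scikit-learn (toy dense data; the sizes and k are illustrative):

```python
import numpy as np
from sklearn.neighbors import BallTree
from sklearn.preprocessing import normalize

X = np.random.rand(1000, 50)      # stand-in for the rating matrix
X_unit = normalize(X, norm="l2")  # every row now has length 1

# For unit vectors, ||a - b||^2 = 2 * (1 - cos(a, b)),
# so Euclidean nearest neighbors == cosine nearest neighbors.
tree = BallTree(X_unit)           # default metric is Euclidean
dist, idx = tree.query(X_unit[:5], k=10)
```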
I have very sparse and huge rating data for which I need to find the top k neighbors for each session. I need to compare approximate and exact nearest neighbor algorithms, but since the data is very big and sparse, the exact method takes days to compute with brute force. I want to use KD-trees or ball trees, but they do not support cosine distance. Is there a way to convert other distance measures to cosine similarity mathematically, or is there any other way to compute the exact neighborhood?
0
1
302
0
59,507,255
0
1
0
0
1
true
1
2019-12-27T19:50:00.000
0
1
0
Remove synonyms of TFIDF results in python
59,505,444
1.2
python,nlp,tf-idf,cosine-similarity
First you'd want to make a choice between stems and lemmas (neither are roots, mind you); Google the difference for more on that. You mention antonyms, but most are determined by prefix (e.g. important vs (un)important), so the stemmer should leave most antonyms unchanged. As for synonyms, let's assume you're thinking only about words with the exact same stem, because if you want to relate synonyms with completely unrelated roots, you'd be thinking about semantics and something like WordNet, and that would likely complicate your problem beyond reason... From your question, you already have a stemmer working in Python. The simplest solution would be using two dictionaries: one dictionary mapping stems/lemmas to the set/list of inflected/derived complete words (and/or their frequency), and a second dictionary mapping those complete words to their various positions in the documents you are indexing. That way you can stem the user's input word, check for it in the top-k tf-idf/stem dictionary, and afterwards map the complete word with the second dictionary to its occurrences in the document set. (It's hard to elaborate further given your question.)
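A toy sketch of the two-dictionary idea, assuming NLTK's PorterStemmer (the two-document corpus is illustrative):

```python
from collections import defaultdict
from nltk.stem import PorterStemmer

stemmer = PorterStemmer()
docs = {0: "cats are running", 1: "a cat ran home"}  # toy corpus

stem_to_words = defaultdict(set)    # stem -> inflected/derived forms
word_positions = defaultdict(list)  # word -> (doc_id, token_index)

for doc_id, text in docs.items():
    for i, word in enumerate(text.split()):
        stem_to_words[stemmer.stem(word)].add(word)
        word_positions[word].append((doc_id, i))

# Map a user query back to concrete occurrences via its stem.
for word in stem_to_words[stemmer.stem("cats")]:
    print(word, word_positions[word])
```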
I am currently working on a project where I get the top 10 most relevant words of a set of documents using tf-idf in Python. However, there are results where I get the same word plus its plural or adverb form and so on. To get around this problem, I decided to use stemming, but this leads to a problem where words and their antonyms can have the same root, or where reducing a word to its root makes it impossible to go back and find that specific word in the document if a user searches for it. Is there an NLP approach that might be better in this context than stemming? Any hint or link will be useful. I am working on something that is very similar to YouTube.
0
1
281
0
59,506,950
0
0
0
0
1
false
0
2019-12-27T22:57:00.000
0
2
0
PCA analysis causes a memory allocation problem. How to solve this without reducing image resolution or number of images
59,506,774
0
python,image,pca,feature-extraction
As the problem is stated, you have to reconfigure your system to make more RAM available to the run-time system. However, I suspect that you can finesse the problem by iterating through the images, rather than loading all of them into one large NumPy array. Rather than using the monolithic processing tools, you'll need to write code to perform the same computations serially.
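One concrete way to do that serial computation is scikit-learn's IncrementalPCA, which fits on batches of images instead of one monolithic array; this is an assumption about the approach, not the answer's exact intent. Scaling is omitted for brevity, and file_list is the image list from the question's pseudocode:

```python
import numpy as np
from PIL import Image
from sklearn.decomposition import IncrementalPCA

BATCH = 50  # each batch must contain at least n_components images

def load_batch(paths):
    # Flatten each image into one row, as in the question's pseudocode.
    return np.array([np.asarray(Image.open(p)).flatten() for p in paths],
                    dtype=np.float32)

ipca = IncrementalPCA(n_components=30)

# Pass 1: fit on batches so the full matrix never lives in memory.
for i in range(0, len(file_list), BATCH):
    ipca.partial_fit(load_batch(file_list[i:i + BATCH]))

# Pass 2: transform batch by batch and stack the 30 features per image.
features = np.vstack([ipca.transform(load_batch(file_list[i:i + BATCH]))
                      for i in range(0, len(file_list), BATCH)])
```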
Using PCA (principal component analysis) to extract features from a set of 4K images gives me the memory error: File "/home/paul90/.local/lib/python3.6/site-packages/sklearn/decomposition/_pca.py", line 369, in fit_transform U, S, V = self._fit(X) MemoryError: Unable to allocate array with shape (23339520, 40) and data type float32 I am trying to extract 30 features (# of components) from the 4K images and getting this error. Pseudocode: immatrix = np.array([np.array(Image.open(im, 'r')).flatten() for im in file_list], 'f') x = StandardScaler().fit_transform(immatrix) pc_train = pca.fit_transform(x) file_list is the list of images (currently I have 600 images). I can't reduce the number of images in the list and can't reduce the initial 4K resolution. In this context, how can I solve this memory allocation issue? It would be a great help if anyone could tell me the steps to avoid the memory issues.
0
1
323
0
59,512,325
0
1
0
0
1
true
0
2019-12-28T14:59:00.000
0
1
0
Sklearn ImportError: DLL load failed: The specified module could not be found
59,512,003
1.2
python,scikit-learn,scipy
I am leaving this up to help others. I found the solution: I had to uninstall completely and reinstall; changing the registry value to allow long paths may also have played a role in the beginning.
I am attempting to verify the version of my sklearn to verify the proper installation. Earlier I was having an issue where I had to change my registry value of Computer\HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem@LongPathEnabled to 1. Then I was able to install the files; however, now when I attempt to check the version of my sklearn I get the following: Traceback (most recent call last): File "C:/Users/terry/pyversions.py", line 11, in <module> import sklearn File "C:\Users\terry\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.7_qbz5n2kfra8p0\LocalCache\local-packages\Python37\site-packages\sklearn\__init__.py", line 74, in <module> from .base import clone File "C:\Users\terry\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.7_qbz5n2kfra8p0\LocalCache\local-packages\Python37\site-packages\sklearn\base.py", line 20, in <module> from .utils import _IS_32BIT File "C:\Users\terry\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.7_qbz5n2kfra8p0\LocalCache\local-packages\Python37\site-packages\sklearn\utils\__init__.py", line 25, in <module> from .fixes import np_version File "C:\Users\terry\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.7_qbz5n2kfra8p0\LocalCache\local-packages\Python37\site-packages\sklearn\utils\fixes.py", line 18, in <module> import scipy.stats File "C:\Users\terry\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.7_qbz5n2kfra8p0\LocalCache\local-packages\Python37\site-packages\scipy\stats\__init__.py", line 384, in <module> from .stats import * File "C:\Users\terry\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.7_qbz5n2kfra8p0\LocalCache\local-packages\Python37\site-packages\scipy\stats\stats.py", line 179, in <module> from scipy.spatial.distance import cdist File "C:\Users\terry\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.7_qbz5n2kfra8p0\LocalCache\local-packages\Python37\site-packages\scipy\spatial\__init__.py", line 102, in <module> from ._procrustes import procrustes File "C:\Users\terry\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.7_qbz5n2kfra8p0\LocalCache\local-packages\Python37\site-packages\scipy\spatial\_procrustes.py", line 11, in <module> from scipy.linalg import orthogonal_procrustes File "C:\Users\terry\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.7_qbz5n2kfra8p0\LocalCache\local-packages\Python37\site-packages\scipy\linalg\__init__.py", line 195, in <module> from .misc import * File "C:\Users\terry\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.7_qbz5n2kfra8p0\LocalCache\local-packages\Python37\site-packages\scipy\linalg\misc.py", line 5, in <module> from .blas import get_blas_funcs File "C:\Users\terry\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.7_qbz5n2kfra8p0\LocalCache\local-packages\Python37\site-packages\scipy\linalg\blas.py", line 215, in <module> from scipy.linalg import _fblas ImportError: DLL load failed: The specified module could not be found. My versions are as follows: Python: 3.7.6 (tags/v3.7.6:43364a7ae0, Dec 19 2019, 01:54:44) [MSC v.1916 64 bit (AMD64)] scipy: 1.4.1 numpy: 1.18.0 matplotlib: 3.1.2 pandas: 0.25.3 Please, what do I need to do to fix my errors?
0
1
112
0
59,518,465
0
0
0
0
1
false
0
2019-12-28T20:33:00.000
0
2
0
How can I train an image classifier model with TensorFlow in Python and use the trained model in a Java application?
59,514,643
0
java,python,android,sockets,tensorflow
Serve the Python-trained model behind an HTTP API: the server accepts the image and returns the classification result, so the Java/Android client only needs to make an HTTP request.
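A minimal sketch of such a server, assuming Flask (any Python HTTP framework would do); classify is a hypothetical stand-in for running the trained TensorFlow model:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

def classify(image_bytes):
    return "cat"  # placeholder for the real model call

@app.route("/classify", methods=["POST"])
def classify_endpoint():
    # The Java/Android client POSTs the image as a multipart file field.
    image_bytes = request.files["image"].read()
    return jsonify({"label": classify(image_bytes)})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```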
I'm trying to build a client-server application where the client is an Android device and the server is a Windows PC. It would be easy to do socket programming in Java to build the complete application. At first, I tried to build the server side completely in Python because of the image classifier I wrote in Python, but it got me into problems when I started working on the socket. Now, I want to use the Python-trained model from Java. Please help me out.
1
1
60
0
59,529,321
0
0
0
0
1
false
0
2019-12-30T10:18:00.000
0
1
0
How can I see the possible values in a column
59,529,170
0
python,pandas,data-analysis
For numeric columns use describe(); for categorical columns use value_counts().
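For example, on a toy pandas column (unique() is a third option when you only want the distinct values themselves):

```python
import pandas as pd

df = pd.DataFrame({"col": ["a", "b", "a", "c", "a"]})  # toy data

print(df["col"].unique())        # the distinct values
print(df["col"].value_counts())  # distinct values with their frequencies
print(df["col"].describe())      # summary statistics (count, unique, top, ...)
```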
I am working with Colab, I have a very large column and I want to see the possible values that are in this column.
0
1
164
0
59,534,703
0
0
0
0
1
true
1
2019-12-30T17:22:00.000
2
1
0
Difference between Series & Data Frame
59,534,544
1.2
python,pandas,dataframe,series
You can think of a Series as a column in a DataFrame, while the DataFrame itself is the whole table, if you think of it in terms of SQL.
If we perform the value_counts function on a column of a DataFrame, it gives us a Series which contains the unique values' counts. The type operation gives pandas.core.series.Series as a result. My question is: what is the basic difference between a Series and a DataFrame?
0
1
819
0
59,546,917
0
1
0
0
1
true
0
2019-12-31T17:17:00.000
0
1
0
Python data correlation
59,546,892
1.2
python
There is no difference if method='pearson': corr() uses the Pearson method by default.
Can anyone tell me what the difference is between the Pearson correlation method and the default corr() method? I expect it to be the same output; is that right?
0
1
41
0
59,571,636
0
0
0
0
1
false
0
2020-01-01T18:18:00.000
1
1
0
Replace finite values with '1' in a 3-dimensional xarray
59,554,967
0.197375
gis,python-xarray
You can use the xr.where function: xr.where(data > 0, 1, data).
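A tiny runnable illustration (NaN compares False against 0, so it is left untouched):

```python
import numpy as np
import xarray as xr

# Toy stand-in for the (days, lat, lon) ice-thickness variable.
data = xr.DataArray([[0.0, 1.7], [np.nan, 2.3]])

result = xr.where(data > 0, 1, data)
# Positive thicknesses become 1; zeros stay 0; NaN stays NaN.
print(result)
```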
I am working on lake ice thickness in the northern hemisphere. My final data set is an xarray with dimensions [365, 360, 720] - (days, lat, lon) and a data variable 'icethickness'. This data variable has 3 kinds of values: a finite value for ice thickness, zero for water and 'nan' for oceans. I want to convert all the finite values of this xarray to 1 and keep the zeros and nan as they are.
0
1
73
0
59,555,564
0
1
0
0
1
false
0
2020-01-01T18:42:00.000
1
2
0
Randomly select one/more functions from collection and apply combinations
59,555,146
0.099668
python,function
How about this: functools.reduce(lambda acc, f: f(acc), random.sample(funs, n), image)?
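Expanded into a self-contained sketch, with illustrative augmentation functions standing in for f1()..fn():

```python
import functools
import random

import numpy as np

image = np.random.randint(0, 256, size=(8, 8), dtype=np.uint8)  # toy image

def flip(img):
    return img[:, ::-1]

def invert(img):
    return 255 - img

funcs = [flip, invert]
n = random.randint(1, len(funcs))

# Sample n distinct functions and apply them in sequence.
augmented = functools.reduce(lambda acc, f: f(acc), random.sample(funcs, n), image)
```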
I have a list of n functions: f1(), f2(), ..., fn(), and I need to select one or more at random and apply them in sequence to an object. What's the Pythonic way to do this? The use case is to generate augmentations for images (for training an ML model) from a set of images, applying one or more augmentation functions to an image.
0
1
45
0
59,556,331
0
0
0
0
1
true
0
2020-01-01T20:51:00.000
5
1
1
Invalid token when trying to login into Jupyter notebook with Docker jupyter/tensorflow-notebook
59,556,013
1.2
python,docker,jupyter-notebook
The answer turned out to be quite easy: there was another Jupyter notebook server running that I wasn't aware of. Closing it did the trick.
I pulled a (seemingly popular, supported by Jupyter) jupyter-tensorflow Docker image using docker pull jupyter/tensorflow-notebook and started it successfully with docker run -p 8888:8888 jupyter/tensorflow-notebook However, upon navigating to http://127.0.0.1:8888/?token=177a...., I am prompted for a password or token (despite the token already being present in the URL). I know of no password to use, and the token 177a... does not work. Any suggestions?
0
1
1,754
0
59,563,487
0
0
0
0
1
true
1
2020-01-02T12:21:00.000
3
1
0
Embed an existing ML model in Apache Flink
59,563,265
1.2
python,machine-learning,apache-flink,flinkml
There are two possible solutions, depending on the model you are using: Possibly the simplest idea is to create an external service that calls the model and returns the results, and then call that service from Flink with an AsyncFunction. Alternatively, use some library (again depending on your model) to load the pre-trained model inside a ProcessFunction's open method, and then simply call the model for each record that arrives. The second solution has two disadvantages: first, you need the Java version of the specific library to be available, and second, you need to somehow externalize the model's metadata if you want to be able to update it over time.
We are training machine learning models offline and persisting them in Python pickle files. We were wondering about the best way to embed those pickled models into a stream (e.g. sensorInputStream > PredictionJob > OutputStream). Apache Flink ML seems to be the right choice to train a model with stream data, but not to reference an existing model. Thanks for your response. Kind regards, Lomungo
1
1
825
0
59,569,070
0
0
0
0
1
false
0
2020-01-02T19:29:00.000
1
2
0
Finding the eigenvalues corresponding to a set of eigenvectors
59,568,960
0.099668
python,matrix,eigenvalue,eigenvector
Think about the definition of an eigenvector. An eigenvector v of a linear transformation represented by matrix A is a vector that only changes in magnitude, not direction, when that linear transformation is applied to it. The scalar change in magnitude of the eigenvector is its eigenvalue. You have a linear transformation, and you've been given its eigenvectors - to find the eigenvalues, all you need to do is apply the transformation to the eigenvectors and determine by what scalar each eigenvector got scaled.
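In NumPy that check is a few lines; A and the candidate eigenvectors are copied from the question:

```python
import numpy as np

A = np.array([[1, 2, 1],
              [2, 5, 0],
              [1, 0, 5]])

for v in (np.array([-5, 2, 1]), np.array([0, -1, 2]), np.array([1, 2, 1])):
    Av = A @ v
    nz = v != 0
    # A v = lambda v, so the eigenvalue is the constant ratio (A v) / v,
    # taken over the nonzero entries of v.
    print(v, "-> eigenvalue", (Av[nz] / v[nz]).mean())
```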
I solved most of this assignment using Python, but I am not sure how to input (or even understand) the remaining information. Aren't you supposed to find the eigenvector given some eigenvalue? So it appears reversed in the following: Let the symmetric 3 × 3 matrix A be A = ((1 2 1), (2 5 0), (1 0 5)). If A has the following three eigenvectors, then find the three corresponding eigenvalues: v1 = ((-5), (2), (1)); v2 = ((0), (-1), (2)); v3 = ((1), (2), (1))
0
1
196
0
59,652,333
0
0
0
0
1
true
0
2020-01-02T20:40:00.000
0
1
1
What's a compatible GLIBC version for tensorflow 2.0 for python3.6.8 on CentOS 6.9?
59,569,722
1.2
python,tensorflow,python-3.6,centos6,tensorflow2.0
Just tested and tensorflow 2.0 works fine with GLIBC 2.12. It was tensorflow 1.14 that's not compatible with GLIBC 2.12.
The server currently has CentOS 6.9 with GLIBC 2.12 and tensorflow 1.14. tensorflow is throwing an error saying that it needs GLIBC 2.15. Now I want to upgrade tensorflow to 2.0, but I want to know a working GLIBC version for tensorflow 2.0 on CentOS 6.9 before doing so. I couldn't find any source stating this compatibility requirement. Does anyone here know?
0
1
308
0
59,575,035
0
0
0
0
1
false
0
2020-01-02T22:49:00.000
0
2
0
Which clustering method is the standard way to go for text analytics?
59,571,031
0
python,cluster-analysis,text-mining
In my view, you can use LDA (latent Dirichlet allocation). It is more flexible in comparison to other clustering techniques because its alpha and beta priors can adjust the contribution of each topic in a document and of each word in a topic. It can help if the documents are not of similar length or quality.
Assume you have a lot of text sentences which may (or may not) be similar to one another. Now you want to cluster similar sentences to find the centroid of each cluster. Which method is the preferred way of doing this kind of clustering? K-means with TF-IDF sounds promising. Nevertheless, are there more sophisticated algorithms, or better ones? The data structure is tokenized and in a one-hot encoded format.
0
1
51
0
59,572,241
0
0
0
0
1
false
0
2020-01-03T01:31:00.000
0
1
0
Is it common practice to save and load NN models across multiple sessions?
59,572,172
0
python,tensorflow,oop,neural-network
Yes, of course this is common practice. If you're trying to work with a large net that takes hours to train, it would be totally impractical to do anything other than save weights for future use. In fact, if you're at all concerned about reproducibility of your results, you should save any noteworthy iterations of your net.
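A TF1-style sketch of that save/restore pattern (the toy variable and checkpoint path are illustrative):

```python
import tensorflow as tf  # TF1-style graph/session API

w = tf.Variable(tf.zeros([2, 2]), name="w")  # toy weight
saver = tf.train.Saver()

# Session 1: train, then persist the weights.
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # ... training steps would go here ...
    saver.save(sess, "./model.ckpt")

# Session 2: restore the weights instead of re-initializing them.
with tf.Session() as sess:
    saver.restore(sess, "./model.ckpt")
    print(sess.run(w))
```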
I just started learning TF and realized that if I create a model of a NN and then create a session to do something with the NN, such as getting an output value given an input value, then after that session closes, if I were to do something else with the NN and create a new session for it, the weights get re-initialized to random values, which makes it pointless. So is it common practice to, say, save the model in the first session, and then in the second session load the weights and do something else with the NN? I understand that TF is intended to be used with one session, but when dealing with something more complex than a "simple" supervised classification problem, such as reinforcement learning, I need to use the NN in various ways for various reasons. So coding all of this such that I can execute all the functionality within one session is very tiring and confusing at times, and I would rather create one session for each thing I do with the NN. But is it poor practice to do so?
0
1
126
0
59,953,057
0
0
0
0
1
false
1
2020-01-03T03:03:00.000
0
4
0
How can I run TensorFlow on Windows 10? I have a GeForce GTX 1650 GPU. Can I run TensorFlow on it? If yes, then how?
59,572,687
0
python,python-3.x,tensorflow
Here are the steps for installation of tensorflow: 1) download and install Visual Studio; 2) install CUDA 10.1; 3) add the lib, include and extras/lib64 directories to the PATH variable; 4) install cuDNN; 5) install tensorflow with pip install tensorflow.
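After installing, a quick way to confirm that TensorFlow (2.1+) actually sees the GPU:

```python
import tensorflow as tf

# An empty list here means TensorFlow is running CPU-only.
print(tf.config.list_physical_devices("GPU"))
```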
I want to do some ML on my computer with Python. I'm facing a problem with the installation of tensorflow, and I found that tensorflow can work with a GPU that is CUDA-enabled. I've got a GeForce GTX 1650 GPU; will tensorflow work on that? If yes, then how can I do so?
0
1
2,593
0
67,939,485
0
1
0
0
1
false
1
2020-01-03T08:08:00.000
3
5
0
ModuleNotFoundError: No module named 'numpy'
59,575,113
0.119427
python-3.x,numpy
If you are using PyCharm: open PyCharm, go to File -> Settings -> Project -> Python Interpreter, click on '+', search for and select the required package, and install it.
I tried importing numpy to carry out some array operations in Python: import numpy * But I got this error message: ModuleNotFoundError: No module named 'numpy' What do I do?
0
1
14,026
0
59,582,216
0
0
0
0
1
false
1
2020-01-03T12:28:00.000
1
1
0
What is the benefit of NLP sentence segmentation over a plain Python algorithm?
59,578,665
0.197375
python-3.x,nlp
The sentence segmentation routines from the NLP libraries like SpaCy, NLTK, etc. handle edge cases much better and are more robust to handling punctuation and context. For example, if you choose to split sentences by treating a '.' as a sentence boundary, how would you handle a sentence like - "There are 0.5 liters of water in this bottle."?
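A quick illustration of that edge case, assuming spaCy's small English model (en_core_web_sm) is installed:

```python
import spacy

nlp = spacy.load("en_core_web_sm")
text = "There are 0.5 liters of water in this bottle. Drink up."

print(text.split("."))                          # naive split breaks on "0.5"
print([sent.text for sent in nlp(text).sents])  # spaCy keeps the decimal intact
```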
I have a task in NLP to do sentence segmentation, but I wonder: what are the advantages of using built-in NLP sentence segmentation algorithms, such as spaCy, NLTK, BERT etc., over a Python '.' separator or a similar algorithm? Is it the speed? Or accuracy? Or fewer lines of code? How different or strong are these algorithms compared to the ones we can build ourselves in Python?
0
1
164
0
59,590,347
0
0
0
0
1
false
2
2020-01-04T06:18:00.000
0
1
0
Is there any way to run GPT2 without a GPU and TensorFlow
59,588,423
0
python,tensorflow,nlp,gpu
What I would do: run sess.run(tf.trainable_variables()) to get the trained parameters and save them as NumPy arrays, then rebuild a NumPy-only version of GPT2. But: this will take a substantial amount of work to rebuild the model, and you need to install TensorFlow at least once to get the trained variables. As for the GPU, you don't actually need it; the model will run with a CPU version of TensorFlow (GPUs are mainly needed to speed up training time).
GPT2 is an excellent OpenAI project for NLP. The developer requirements state that we need to use TensorFlow and a GPU. I only want to use (not train) the existing trained parameters. Is there any way to use GPT2 without the expensive GPU hardware and without the need to install TensorFlow?
0
1
1,003
0
59,593,303
0
0
0
0
1
true
0
2020-01-04T17:28:00.000
1
1
0
Most efficient way to mask an opencv bgr with a boolean array
59,593,132
1.2
python,numpy,opencv
The answer is img[~mask, :] = [0, 0, 0]. The trailing , : takes care of the channel dimension, so you don't get shape-mismatch issues.
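As a self-contained sketch with the shapes from the question (random data stands in for the loaded image and mask):

```python
import numpy as np

img = np.random.randint(0, 256, size=(208, 117, 3), dtype=np.uint8)
mask = np.random.rand(208, 117) > 0.5  # boolean mask, True = keep the pixel

img[~mask, :] = [0, 0, 0]  # zero out every pixel where the mask is False
```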
I have an image loaded with opencv with img.shape = (208, 117, 3). I also have a boolean numpy array with mask.shape = (208, 117). How do I make all pixels in img (0, 0, 0) wherever the mask has False, and otherwise leave the pixels as they are?
0
1
696
0
59,603,091
0
0
0
0
1
false
0
2020-01-05T18:01:00.000
0
1
0
exporting and indexing csv file with pandas
59,602,713
0
python,pandas,dataframe,indexing
While writing your files to CSV, you can set index=True in the to_csv method. This ensures that the index of your DataFrame is written explicitly to the CSV file.
I've created a CSV file with the column names and saved it using the pandas library. This file will be used to build a historic record where rows will be added one by one at different moments... What I'm doing to add rows to this previously created CSV is to transform each record into a DataFrame and then, using to_csv(), I choose mode='a' as a parameter in order to append the record to the existing file. The problem is that I would like to see an index automatically generated in the file every time I add a new row. I already know that when I import this file as a DataFrame an index is generated automatically, but that is only within the IDLE interface... when I open the CSV with Excel, for example, the file doesn't have an index.
0
1
81
0
59,613,792
0
0
0
0
1
true
1
2020-01-05T22:07:00.000
2
1
0
jedi support in spyder 4
59,604,624
1.2
python,spyder
(Spyder maintainer here) Spyder 4.0.0 and 4.0.1 only work with Jedi 0.14.1. Neither newer nor older versions are supported, so be sure to have that exact version installed. By the way, although not required, Kite completions are better for scientific packages than Jedi ones.
I updated to Spyder 4 (4.0.0?) to get the latest and greatest features. At this time I found out that Spyder now has the option of using Kite for autocompletion. After trying Kite for a bit I wanted to revert back to the previous autocompletion setup. I thought this was possible by unchecking to use Kite in the preferences, but then autocompletion seemed to be completely broken. In conda I noticed my current version of Jedi was out of date, and I upgraded to the newest version. In the process I noticed that conda said it needed to downgrade Spyder for the Jedi upgrade. This gives me the impression that Spyder 4 just doesn't support Jedi. Is that the case?
0
1
1,717
0
59,617,838
0
0
0
0
1
true
0
2020-01-06T13:48:00.000
1
1
0
Placing examples in a mini-batch on the rows or columns of a matrix?
59,613,281
1.2
python-3.x,tensorflow2.0
Quoting @ShanqingCai: putting examples in a batch along the rows of a matrix (i.e., the first axis of the input tensor) is the prevailing way of training deep learning models. Virtually any tensorflow2 / keras example you can find follows that pattern; putting them along any non-first axis is much rarer. As said, the row dimension is the preferred way of storing samples. I can think of two reasons why this is the case. First, TF does a lot of matrix multiplications involving batches of data; by keeping the batch dimension as the first dimension, you can continuously produce tensors using matrix multiplication which also have the batch dimension first (e.g. [batch size, 10] . [10, 2] produces [batch size, 2]). The other reason is that the row dimension is the slowest-changing dimension, so you can access an individual sample as a single contiguous chunk of memory, which is always preferred when it comes to disk or memory reads.
I am using tensorflow 2.0 to train a model. I am deciding whether I should put multiple examples in a batch along the rows or columns of a matrix. Obviously this will affect how I design the model as well. Is there any practical advice on which is better?
0
1
56
0
59,617,330
0
0
0
0
1
true
0
2020-01-06T14:54:00.000
2
1
0
RandomForestRegressor for classification problems
59,614,227
1.2
python,scikit-learn,data-science,random-forest
The Random Forest regressor does differ somewhat from the Random Forest classifier when it comes to ensembling the decision trees: The classifier uses the mode of the predicted classes of the decision trees The regressor uses the mean of the predicted values of the decision trees Due to this difference the models can have different results. And in some cases this might result in the regressor performing better than the classifier. In addition to that I would say that if you tune your hyperparameters correctly, the classifier should perform better on a classification problem than the regressor.
I've been doing the Applied Machine Learning in Python course on Coursera, and in the week 4 assignment I found something interesting. During my first attempt to complete the assignment I tried using RandomForestClassifier from sklearn to predict labels, but the model was overfitting and showing poor test accuracy results. As an experiment I switched to RandomForestRegressor and, guess what, not only did it not overfit, but test accuracy was also a lot higher. So, why does RandomForestRegressor perform a lot better on a binary classification problem?
0
1
78
0
59,625,798
0
0
0
0
1
false
1
2020-01-07T09:36:00.000
4
2
0
What is the replacement of Placeholder in Tensorflow 2.0
59,625,668
0.379949
python,tensorflow2.0
There is no replacement for placeholder in TF2, as its default mode is eager execution. If you want to use placeholders in TF2, use the tf.compat.v1 syntax and disable v2 behavior.
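A minimal sketch of that compat-mode pattern:

```python
import tensorflow.compat.v1 as tf

tf.disable_v2_behavior()  # back to graph mode, where placeholders exist

x = tf.placeholder(tf.float32, shape=(None, 3))
y = x * 2

with tf.Session() as sess:
    print(sess.run(y, feed_dict={x: [[1.0, 2.0, 3.0]]}))
```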
I am new to TensorFlow, and I have used tensorflow.placeholder() in TensorFlow 1.0. Is there any replacement for placeholder?
0
1
4,307
0
59,661,270
0
1
0
0
1
false
0
2020-01-07T15:42:00.000
0
1
0
Image transmission between processes too slow in python's multiprocessing
59,631,595
0
python-3.x,ipc,python-multiprocessing,opencv4
EDIT: I found a similar question which has been answered, and addressed my problem using the sharedmem module.
I'm trying to send images (4000, 3000, 3) between two processes. My first process acquires the image with a camera, attaches some metadata and another image to it, and then sends the whole thing to the second process, which processes it. I want a maximal delay of 0.2 seconds between the moment the image is acquired and the moment the processing is over. Let's assume the way I acquire and process the image is optimal. I tried 2 methods to send the image: with a queue (mp.Queue) and with a shared array (mp.Array('i', 4000*3000*3)). Both took too much time. The Queue.put() method takes about 0.5 seconds to send the package. Copying the image into a shared array like this: shared_array[:] = img.copy() takes about 2 seconds. So my question is: is anyone aware of a faster way to transfer two images between two processes? Thanks for your time!
0
1
282
0
65,552,040
0
0
0
0
1
false
1
2020-01-08T03:52:00.000
0
2
0
How to detect image brightness and sharpness in python?
59,639,187
0
python-3.x,image-processing,ocr,python-tesseract
Basic recommendation: using grayscale images works better for OCR. You can use the BRISQUE image quality assessment for scoring image quality; it's available as a library. Check it out; a smaller score means better quality.
I tried applying Tesseract OCR on an image, but before applying OCR I want to improve the quality of the image so that OCR accuracy increases. How do I detect image brightness and increase or decrease the brightness of the image as required? How do I detect image sharpness?
0
1
328
0
59,644,182
0
0
0
0
1
false
5
2020-01-08T10:17:00.000
1
2
0
The sklearn.* module is deprecated in version 0.22 and will be removed in version 0.24
59,643,694
0.099668
python,scikit-learn,deprecation-warning
It's just a warning, for now -- until you upgrade sklearn to version 0.24. Then your code will need to be modified before it works. It's giving you a heads-up about this, so you can fix your code ahead of time. The modifications described below should work with your current version; you don't need to wait to upgrade before changing your code (at least, that's how these deprecation warnings usually work). The corresponding classes / functions should instead be imported from sklearn.neighbors. If I read this message correctly, it's saying that if you're using a function like sklearn.neighbors.kde.some_function() in your code now, you need to change it to sklearn.neighbors.some_function(). Anything that cannot be imported from sklearn.neighbors is now part of the private API. This seems to be saying that there may be some functions that will no longer be available to you, even using the modification above.
I am migrating a piece of software from Python 2.7 to Python 3. One problem that arises is: The sklearn.neighbors.kde module is deprecated in version 0.22 and will be removed in version 0.24. The corresponding classes / functions should instead be imported from sklearn.neighbors. Anything that cannot be imported from sklearn.neighbors is now part of the private API. I am not sure which line causes this, not sure if it is an error or a warning, and not sure what the implications are. On Python 2.7 everything works fine. How do I get rid of this?
0
1
8,393
0
59,683,042
0
0
0
0
1
true
0
2020-01-08T20:05:00.000
1
1
0
Neural network filtering out everything other than cats/dogs
59,653,108
1.2
python-3.x,tensorflow,keras,conv-neural-network
Labelling the extra class as 'other' or something like that is the simplest way to do it. What I found is that, because of the many different types of images in the 'other' class, the training sample should be much bigger than when just distinguishing between cats and dogs.
The question is simple, however I cannot find a solution: how do I recognise cats and dogs and filter out everything else? In other words: I have a big database with images of cats, dogs and all other photos mixed together; is there a way to say in the output: cat - or - dog - or - something else? Or are CNNs not constructed to do such things efficiently? I'm using Python / Keras / Tensorflow; the solution for finding cats / dogs when I provide images of cats and dogs only works fine.
0
1
39
0
59,658,376
0
0
0
0
3
false
0
2020-01-09T06:04:00.000
0
3
0
Are there any rules of thumb for the relation between the number of iterations and the training size for lightgbm?
59,658,070
0
python,lightgbm
For a similar problem in deep learning with Keras: I do it by using early stopping and cross-validation with train and validation data, and let the model optimize itself using the validation data during training. After each training run, I test the model on the test data and examine the mean accuracies. In the meantime, after each training run I save the stopped_epoch from the EarlyStopping callback. If the CV scores are satisfactory, I take the mean of the stopped epochs and do a full training (including all the data I have) for that mean number of epochs, and save the model.
When I train a classification model using lightgbm, I usually use a validation set and early stopping to determine the number of iterations. Now I want to combine the training and validation sets to train a model (so I have more training examples) and use the model to predict the test data. Should I change the number of iterations derived from the validation process? Thanks!
0
1
764
0
59,661,123
0
0
0
0
3
false
0
2020-01-09T06:04:00.000
1
3
0
Are there any rules of thumb for the relation between the number of iterations and the training size for lightgbm?
59,658,070
0.066568
python,lightgbm
As you said in your comment, this is not comparable to the deep learning number of epochs, because deep learning is usually stochastic. With LGBM, all parameters and features being equal, by adding 10% to 15% more training points we can expect the trees to look alike: as you have more information, your split values will be better, but it is unlikely to drastically change your model (this is less true if you use parameters such as bagging_fraction or if the added points are from a different distribution). I have seen people multiply the number of iterations by 1.1 (can't find my sources, sorry). Intuitively this makes sense: you add some trees because you potentially add information. Experimentally this value worked well, but the optimal value will depend on your model and data.
When I train a classification model using lightgbm, I usually use a validation set and early stopping to determine the number of iterations. Now I want to combine the training and validation sets to train a model (so I have more training examples) and use the model to predict the test data. Should I change the number of iterations derived from the validation process? Thanks!
0
1
764
0
59,810,976
0
0
0
0
3
false
0
2020-01-09T06:04:00.000
0
3
0
Are there any rules of thumb for the relation between the number of iterations and the training size for lightgbm?
59,658,070
0
python,lightgbm
I'm not aware of a well-established rule of thumb for such an estimate. As Florian pointed out, sometimes people rescale the number of iterations obtained from early stopping by a factor. If I remember correctly, the factor typically assumes a linear dependence between the data size and the optimal number of trees; i.e., in 10-fold CV this would be a rescaling factor of 1.1. But there is no solid justification for this. As Florian also pointed out, the dependence around the optimum is typically reasonably flat, so a few trees more or less will not have a dramatic effect. Two suggestions: Do k-fold validation instead of a single train-validation split. This will allow you to evaluate how stable the estimate of the optimal number of trees is; if it fluctuates a lot between folds, do not rely on such an estimate :) Fix the size of the validation sample and re-train your model with early stopping using a gradually increasing training set. This will allow you to evaluate the dependence of the number of trees on the sample size and extrapolate it to the full sample size.
When I train a classification model using lightgbm, I usually use a validation set and early stopping to determine the number of iterations. Now I want to combine the training and validation sets to train a model (so I have more training examples) and use the model to predict the test data. Should I change the number of iterations derived from the validation process? Thanks!
0
1
764
0
59,674,053
0
0
0
0
1
false
8
2020-01-09T09:41:00.000
3
3
0
Holoviews charts sharing axis when combined and outputted
59,661,074
0.197375
python,holoviews,holoviz
Sander's response is correct and will solve your specific problem, but in this case it may not be addressing the root cause. HoloViews only links axes that are the same, and it sounds like you're plotting different quantities on the y axis in each plot. In that case, the real fix is to put in a real name for the y axis of each plot, something that distinguishes it from other things that you might want to plot on the y axis in some other plot you're showing. Then not only will HoloViews no longer link the axes inappropriately, the viewer of your plot will be able to tell that each plot is showing different things.
I'm using Holoviews to construct a dashboard of charts. Some of these charts have percentages on the y axis, whereas others have sums/counts etc. When I try to output all the charts I have created to an html file, all the charts change their y axis to match the axis of the first chart in my chart list. For example: Chart 1 is a sum, values go from 0 to 1000; Chart 2 is a %; Chart 3 is a %. When I combine these charts in Holoviews using Charts = Chart 1 + Chart 2 + Chart 3, the y axes of charts 2 and 3 become the same as chart 1's. Does anyone know why this is happening and how I can fix it so all the charts keep the individual axes pertinent to what they are trying to represent? Thank you!
1
1
1,953
0
59,682,913
0
0
0
0
1
false
0
2020-01-09T20:33:00.000
0
1
0
Passing a specific Xarray Data Variable into a New NumPy Array
59,671,677
0
python,numpy,python-xarray
Try ds.x.values, where ds is the name of your dataset. .x (which is equivalent to ds["x"]) gets the x dataarray from your dataset and .values returns the numpy array with its values.
I have a xarray dataset, with data variables of: Data variables: hid (particle) float32 ... d (particle) float32 ... x (particle) float32 ... y (particle) float32 ... z (particle) float32 ... image (hologram_number, xsize, ysize) uint8 ... I am wondering if there is a way to take all of the x values for my 10,000 data points, and pass them into a new one dimensional numpy array? Any sort of direction would be amazing. I have been reading the xarray.Dataset API and I'm not really getting anywhere.
0
1
46
0
59,743,781
0
0
0
0
1
true
3
2020-01-10T00:03:00.000
5
1
0
Should I use tf.add or + to add two tensors in Tensorflow?
59,673,839
1.2
python-3.x,tensorflow2.0
In my understanding they are equivalent; both just execute the __add__ magic function.
I am using Tensorflow 2.0 for Python 3. Suppose I have two tensor variables, x and y, and I want to compute their element-wise sum x + y. Should I just write x + y, or tf.add(x, y)? If they are not equivalent, when should I use one or the other?
0
1
1,972
0
59,691,150
0
0
0
0
1
false
1
2020-01-10T06:40:00.000
1
1
0
Get text size using opencv python
59,676,639
0.197375
python,opencv,image-processing
You can first use a text detector or a contour detector to get the height of one line of text. That can be used to get the size of text in pixels. But after that, you need some reference to convert it to font size (which is usually defined in points). All of this can be done in OpenCV. If you post an image with text, some of us will be able to provide more detailed answers.
I'm trying to get the font size of text in an image. Is there any library in Python for that, or can it be done using OpenCV? Thanks in advance
0
1
1,072
0
59,677,083
0
0
0
0
1
false
1
2020-01-10T07:19:00.000
0
3
0
convert float64 to int (excel to pandas)
59,677,058
0
python,pandas
int(yourVariable) will cast your float64 to an integer number. Is this what you are looking for?
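For a whole pandas column, astype is the usual tool; this is a hedged addition rather than part of the original answer, with the sample numbers taken from the question:

```python
import pandas as pd

df = pd.DataFrame({"customer": [7.500505e+09, 7.503004e+09]})

# Cast the whole column at once; int64 easily holds 10-digit customer numbers.
df["customer"] = df["customer"].astype("int64")
print(df)
```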
I have imported an Excel file into pandas, but when I display customer numbers I get them in float64 format, i.e. 7.500505e+09, 7.503004e+09. How do I convert the column containing these numbers?
0
1
497
0
59,678,540
0
0
0
0
1
false
0
2020-01-10T09:01:00.000
1
1
0
Camera Calibration basic doubts
59,678,398
0.197375
python,opencv,image-processing,camera,camera-calibration
Camera calibration is supposed to be done for the same camera. The purpose of calibrating a camera is to understand how much distortion the image has and to correct it before we use it to take actual pictures. Even if you do not have the original camera, having the checkerboard images taken from that camera is sufficient. Otherwise, look for a similar camera with features as similar as possible (focal length etc.) to take checkerboard images for calibration; this will somewhat serve your purpose.
I am starting out with computer vision and opencv. I would like to try camera calibration for the images that I have, to see how it works. I have a very basic doubt: should I use the same camera from which the distorted images were captured, or can I use any camera to perform my camera calibration?
0
1
67
0
59,684,408
0
0
0
0
1
true
0
2020-01-10T14:04:00.000
1
1
0
random binary matrix with restrictions
59,683,287
1.2
python,constraints,binary-matrix
I don't think you need any extra constraint to fulfill the last condition. The column sum target is 8, which is just half of 16. You can simply copy the first 8 rows to the last 8 rows and flip all the 0s and 1s; then each column sum will be 8, and the first three conditions are still met.
I want to create a binary 16*15 matrix with certain conditions. I use binary strings to make the matrix. I want my matrix to be as described: the first and last two elements of each row must alternate; the sum of each row must be 8 or 7; in each row, there should not be consecutive 1s or 0s (one couple (00 or 11) is allowed per row); and the sum of each column must be 8. There are 26 possible strings that can fulfill the first 3 conditions. How can I fulfill the last condition? I have code, but it is not working because it takes so much time that it is almost impossible. Is there any other way?
0
1
122
0
59,687,911
0
0
0
0
1
false
0
2020-01-10T19:21:00.000
0
2
0
How to predict a continuous variable without any output data? All I have is input data
59,687,810
0
python,machine-learning,predict,continuous
I may be misunderstanding but it appears that you don't have the necessary information to make the prediction. My understanding is that you have category information but no other associations. For some categories you might be able to hard code your prediction based on expert opinion. Predicting a ping sweep is basically benign, for example, just by knowing what it's called. For anything more dynamic you're going to need more information than you listed.
I am working on a cyber security project wherein we have to prioritize vulnerabilities based on the existing features which are mostly categorical variables (also including couple of ordinal variables). The objective here is to detect vulnerability that is most likely to be exploited, and thereby prioritizing it. Hence we have to predict a score of 0-10 . Whichever is the highest rating that we predict (in this case 10), will be the most critical vulnerability that needs immediate attention. All that we have are the categorical variables (as input features). Once again summarizing the problem here : Current Input features : All categorical variables (with couple of ordinal variables) Current Output feature : DOES NOT EXIST Expected Output : Predict a score in the range 0-10, with 10 being most critical vulnerability Never came across this kind of problem. It definitely looks like Regression is not the answer. Can you please share your thoughts on the same.
0
1
95
0
59,691,992
0
1
0
0
1
true
1
2020-01-11T05:01:00.000
0
1
0
not able to install scikit-learn on pycharm on windows
59,691,800
1.2
python,scikit-learn,pycharm
In PyCharm go to File -> Settings -> Project -> Project Interpreter. Click on the + sign on the right hand side of the window. Type the name of the library you want to install and PyCharm will add it for you. When PyCharm is unable to make that installation, it usually has to do with your virtual environment not being set up correctly.
I am trying to install scikit-learn in PyCharm, but it always shows an error: "numpy.distutils.system_info.NotFoundError: No lapack/blas resources found." Also, PyCharm is not able to recognize pre-installed libraries (basically, I installed some libraries, e.g. numpy and scikit-learn, using pip in cmd), and it is not able to install other libraries such as matplotlib. The only libraries it installed are numpy and pandas. The project interpreter is correct, but still I am getting an error while installing these libs.
0
1
1,024
0
59,693,366
0
0
0
0
1
true
2
2020-01-11T08:50:00.000
0
1
0
NLP AI logic - dialogue sequences with multiple parameters per sequence architecture
59,692,913
1.2
python,tensorflow,keras,deep-learning,artificial-intelligence
From the way I understand your question, you want to find emotions/actions based on a particular sentence. Sentence A has emotions as labels and sentence B has actions as labels. Each of the label groups has 4 different values, with a total of 8 values, and you are confused about how to implement the labels as input. Now, you can give all these labels their own separate classes: emotions will have labels (1.2.3.4) and actions will have labels (5.6.7.8). Then concat both datasets and run classification through an RNN. If you need to pass emotions/actions as input, then add them to the vectorized matrix. Suppose you have sentence A stating "Today's environment is very good" with a happy emotion. Its matrix row looks like this: Today | Environment | very | good | health 1 | 1 | 1 | 1 | 0 Now add the emotion such that: Today | Environment | very | good | health | emotion 1 | 1 | 1 | 1 | 0 | 2 (for happy) I hope this answers your question.
I have a dataset of dialogues with various parameters (like whether it is a question or an action, what emotion it conveys, etc.). I have 4 different pieces of information per sentence. Let's say A replies to B. A has an additional parameter in a different list for its possible emotions (1.0.0.0) (angry.happy.sad.bored) and another list for its possible actions (1.0.0.0) (question.answer.impulse.ending). I know how to build a regular RNN model (from the tutorials and papers I have seen here and there), but I can't seem to find a "parameters" architecture. Should I train multiple models (like sentence A --> emotions, then sentence B --> actions), then train the main RNN separately and predict the result through all the models? Or is there a way to build one single model with all the information provided right at the beginning? I apologize for my approximate English, which makes my search for answers even more difficult.
0
1
61
0
59,707,606
0
0
0
0
1
false
0
2020-01-11T12:41:00.000
-1
2
0
Fitting with functional parameter constraints in Python
59,694,481
-0.099668
python,curve-fitting,non-linear-regression
Like James Phillips, I was going to suggest SciPy's curve_fit. But the way that you have defined your function, one of the constraints is on the function itself, and SciPy's bounds are defined only in terms of input variables. What, exactly, are the forms of your functions? Can you transform them so that you can use a standard definition of bounds, and then reverse the transformation to give a function in the original form that you wanted? I have encountered a related problem when trying to fit exponential regressions using SciPy's curve_fit. The parameter search algorithms vary in a linear fashion, and it's really easy to fail to establish a gradient. If I write a function which fits the logarithm of the function I want, it's much easier to make curve_fit work. Then, for my final work, I take the exponent of my fitted function. This same strategy could work for you. Predict ln(y). The value of that function can be unbounded. Then for your final result, output exp(ln(y)) = y.
I have some data {x_i, y_i} and I want to fit a model function y = f(x, a, b, c) to find the best-fitting values of the parameters (a, b, c); however, the three of them are not totally independent but constrained by 1 < b, 0 <= c < 1 and g(a, b, c) > 0, where g is a "good" function. How could I implement this in Python, since with curve_fit one cannot put the parametric constraints in directly? I have been reading about lmfit, but I only see numerical constraints like 1 < b, 0 <= c < 1 and not the one with g(a, b, c) > 0, which is the most important.
0
1
672
0
59,699,915
0
0
0
0
1
false
0
2020-01-11T13:32:00.000
0
1
0
rtx 2070s failed to allocate gpu memory from device:CUDA_ERROR_OUT_OF_MEMORY: out of memory
59,694,874
0
python,tensorflow
Tensorflow memory management can be frustrating. Main takeaway: whenever you see OOM there is actually not enough memory, and you either have to reduce your model size or batch size. TF would throw OOM when it tries to allocate sufficient memory, regardless of how much memory has been allocated before. At the start, TF would try to allocate a reasonably large chunk of memory, equivalent to about 90-98% of the whole memory available - 5900MB in your case. Then, when actual data starts to take more than that, TF would additionally try to allocate a sufficient amount of memory or a bit more - 2.78G. And if that does not fit, it would throw OOM, as in your case. Your GPU could not fit 5.9+2.8Gb. The last chunk of 2.78G might actually be a little more than TF needs, but it would be used later anyway if you have multiple training steps, because the maximum required memory can fluctuate a bit between identical Session.run's.
tf 2.0.0-gpu, CUDA 10.0, RTX 2070 Super. Hi, I have a problem regarding GPU memory allocation. The initial allocation of memory is 7GB, like this: Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 6994 MB memory) 2020-01-11 22:19:22.983048: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7 2020-01-11 22:19:23.786225: I tensorflow/stream_executor/cuda/cuda_driver.cc:830] failed to allocate 2.78G (2989634304 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory 2020-01-11 22:19:24.159338: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10.0 Limit: 7333884724 InUse: 5888382720 MaxInUse: 6255411968 NumAllocs: 1264 MaxAllocSize: 2372141056 But I can only use 5900MB of memory, and the rest always fails to be allocated. I guessed that the whole GPU memory of the RTX 2070S gets used because I use 2 data types (float16, float32), so I enabled a mixed-precision policy with this code: opt = tf.keras.optimizers.Adam(1e-4) opt = tf.train.experimental.enable_mixed_precision_graph_rewrite(opt) Still, the allocation always fails.
0
1
1,026
0
59,703,393
0
1
0
0
1
true
0
2020-01-11T17:50:00.000
0
1
0
error message of No module named 'tensorflow' in GCP AI Platform Notebook
59,697,085
1.2
tensorflow,google-cloud-platform,installation,python-import,gcp-ai-platform-notebook
You have probably selected an instance type that doesn't have tensorflow pre-installed. After you install a Python dependency you will have to restart the Python Kernel for updates to take effect by clicking on Kernel->Restart Kernel....
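In notebook cells, the usual sequence looks roughly like this (version pinned as in the question; the restart still has to happen before the import works):

# Install into the user site-packages from inside a cell...
!pip install --user tensorflow==1.14.0

# ...then restart the kernel (Kernel -> Restart Kernel...) and verify:
import tensorflow as tf
print(tf.__version__)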
I launched a notebook with GCP AI Platform. Then, I tried to import tensorflow with: import tensorflow as tf There is an error message of No module named 'tensorflow'. I tried to install it with: !pip install -U --user tensorflow==1.14.0 But the same error message appeared. As it is a GCP platform, I wonder why I need to install tensorflow at all. During Coursera training, I could import tensorflow directly without installation. I wonder if I missed anything. Grateful if you can help. Thank you
0
1
358
0
59,744,476
0
0
0
0
1
false
0
2020-01-11T18:22:00.000
0
1
0
How to feed variable-length of speech feature to RNN(LSTM) for Speech Recognition?
59,697,342
0
python,speech-recognition,lstm,recurrent-neural-network,speech-to-text
It depends on how your system is built: is it trained end-to-end, or did you use hand-engineered features such as MFCCs? One more note: a main advantage of RNNs is precisely that they accept variable-length input.
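As a minimal sketch of what variable-length input looks like in practice (this assumes a Keras setup, which the question does not specify):

import tensorflow as tf

# Time_Step is None, so each batch may carry a different sequence length
# (Case 1); Masking lets zero-padded timesteps be ignored when sequences
# are padded to a common length within a batch (Case 2).
inputs = tf.keras.Input(shape=(None, 40))           # (Batch, Time_Step, 40)
x = tf.keras.layers.Masking(mask_value=0.0)(inputs)
x = tf.keras.layers.LSTM(128)(x)
model = tf.keras.Model(inputs, x)

Note that within a single batch the padded sequences do share one Time_Step, but different batches may use different lengths.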
I am trying to build a speech recognition system, which is a sequence-to-sequence model. But I got confused about how to feed the extracted features (fbank with a dimension of 40) to the LSTM. As far as I have found, there are different methods to feed the data as input into the LSTM. However, I am not sure I fully understand them. I would be thankful if someone could tell me whether or not I am correct in the following cases. Case 1: In the convenient format [Batch_Size, Time_Step, Feature_Dim], if I select [1, None, 40], the length of each sequence (utterance) can vary? If so, in this case I do not need to pad each sequence, am I right? Case 2: If all input sequences are padded to the same length, the Batch_Size can be any value like 64, 128, etc.? Finally, one more question: am I right that the Time_Step in each batch should be the same? I would be thankful if someone could help me get rid of my doubts or give me some suggestions.
0
1
327
0
59,707,182
0
0
0
0
1
true
0
2020-01-11T22:23:00.000
1
1
0
Average a dimension of a chunk (xarray)
59,699,168
1.2
python-xarray
X.groupby('time.year').mean(dim='time') should do the trick as long as time is a datetime64 object.
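Spelled out with the glob pattern and variable name from the question (a sketch; the computation stays lazy and dask-backed until you evaluate it):

import xarray as xr

# Open one file per year lazily, then average within each year.
X = xr.open_mfdataset('path' + 'year_*_X.nc')['ty_trans_submeso']
yearly_mean = X.groupby('time.year').mean(dim='time')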
Let X be a variable I have stored in multiple files. In each file, X has dimensions of (time, depth, latitude, longitude). I wish to do a yearly average, and each file stores a year of data. To be efficient, I open the data with X = xarray.open_mfdataset('path'+'year_*_X.nc'). I do not want to average the whole time axis, i.e. X.mean(dim='time'). I observe that X stores 'chunks', since it reports the 'chunksize', so maybe they can be used to average the time dimension of each chunk? In [1] : X Out[1] : <xarray.DataArray 'ty_trans_submeso' (time: 240, st_ocean: 50, yu_ocean: 200, xt_ocean: 360)> dask.array<shape=(240, 50, 200, 360), dtype=float32, chunksize=(12, 50, 200, 360)>
0
1
139
0
59,709,870
0
0
0
0
2
false
0
2020-01-12T15:18:00.000
0
2
0
Which data to plot to know what model suits best for the problem?
59,705,218
0
python,machine-learning,plot,prediction
If you are using off-the-shelf packages like sklearn, then many simple models like SVM, RF, etc. are just one-liners, so in practice we usually try several such models at the same time.
I'm sorry, I know that this is a very basic question, but since I'm still a beginner in machine learning, determining which model suits my problem best is still confusing to me. Lately I used a linear regression model (and the r2_score was very low), and a user mentioned I could pick a model according to the curve of the plot of my data; then I saw another coder use a random forest regressor (whose r2_score was 30% better than the linear regression model's), and I do not know how he/she knew it was a better model, since he/she didn't mention it. I mean, in most sites that I read, they shove the data into whichever models they think suit the problem best (for example, for a regression problem, linear regression or a random forest regressor), but some sites and some people say we first need to plot the data so we can predict which of the models suits best. I really don't know which part of the data I should plot. I thought using a seaborn pairplot would give me insight into the shape of the curve, but I doubt that is the right way. What should I actually plot: only the label itself, the features, or both? And how can I get insight into the curve to pick the best model from that?
0
1
56
0
59,706,610
0
0
0
0
2
false
0
2020-01-12T15:18:00.000
1
2
0
Which data to plot to know what model suits best for the problem?
59,705,218
0.099668
python,machine-learning,plot,prediction
This question is too general, but I will try to give an overview of how to choose the model. First of all, you should know that there is no general rule for choosing the family of models to use; it is more often chosen by experimenting with different models and seeing which one gives better results. You should also know that in general you have multi-dimensional features, so plotting the data will not give you full insight into the dependence of your features on the target. However, to check whether you want to fit a linear model or not, you can start by plotting the target vs. each dimension of the input, and look for some kind of linear relation. If you do fit a linear model, check that this is relevant from a statistical point of view (Student test, Smirnov test, check the residuals...). Note that in real-life applications it is not likely that linear regression will be the best model, unless you do a lot of feature engineering. So I would recommend you use more advanced methods (RandomForests, XGBoost...).
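A quick sketch of the "target vs. each input dimension" check (the DataFrame and its 'y' target column here are made-up placeholders):

import matplotlib.pyplot as plt
import pandas as pd

# Hypothetical frame: replace with your own data; 'y' is the target.
df = pd.DataFrame({'x1': [1, 2, 3, 4], 'x2': [2, 1, 4, 3], 'y': [2, 4, 6, 8]})

# One scatter plot per feature, to eyeball linear relationships.
for col in [c for c in df.columns if c != 'y']:
    plt.scatter(df[col], df['y'])
    plt.xlabel(col)
    plt.ylabel('y')
    plt.show()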
I'm sorry, I know that this is a very basic question, but since I'm still a beginner in machine learning, determining which model suits my problem best is still confusing to me. Lately I used a linear regression model (and the r2_score was very low), and a user mentioned I could pick a model according to the curve of the plot of my data; then I saw another coder use a random forest regressor (whose r2_score was 30% better than the linear regression model's), and I do not know how he/she knew it was a better model, since he/she didn't mention it. I mean, in most sites that I read, they shove the data into whichever models they think suit the problem best (for example, for a regression problem, linear regression or a random forest regressor), but some sites and some people say we first need to plot the data so we can predict which of the models suits best. I really don't know which part of the data I should plot. I thought using a seaborn pairplot would give me insight into the shape of the curve, but I doubt that is the right way. What should I actually plot: only the label itself, the features, or both? And how can I get insight into the curve to pick the best model from that?
0
1
56
0
59,742,768
0
1
0
0
1
true
0
2020-01-13T00:59:00.000
0
1
0
Importing the multiarray numpy extension module failed. Most likely you are trying to import a failed build of numpy
59,709,490
1.2
python,numpy,heroku
I solved it. My solution was: run locally 1- python -m pip install python-dev-tools, then 2- pip freeze > requirements.txt, 3- git add ., 4- git commit -m "v5", 5- heroku git:remote -a "your app name on heroku", 6- git push heroku master
My project runs locally in PyCharm without error, but when I deployed it on Heroku this error message appears: Importing the multiarray numpy extension module failed. I changed the numpy version more than once, but it did not work. Django Version: 2.2.5 Exception Type: ImportError Exception Value: Importing the multiarray numpy extension module failed. Most likely you are trying to import a failed build of numpy. If you're working with a numpy git repo, try git clean -xdf (removes all files not under version control). Otherwise reinstall numpy. Original error was: libpython3.6m.so.1.0: cannot open shared object file: No such file or directory Exception Location: /app/.heroku/python/lib/python3.6/site-packages/numpy/core/init.py in , line 26 Python Executable: /app/.heroku/python/bin/python Python Version: 3.6.10 numpy version: 1.18.1 Any solution?
1
1
322
0
59,722,344
0
0
0
0
1
true
0
2020-01-13T18:30:00.000
0
1
0
How to predict the player using random forest ML
59,722,197
1.2
python,machine-learning
To address such a problem, I would suggest creating a custom target variable. Firstly, the transformation of player names into dummy variables seems reasonable (just make sure a unique player is identified by the same first and last name combination, thereby avoiding duplications and thus having the correct dummy code for the player name). Now, to create the target variable "wins": use the two player names of the match, P1 and P2, as input features for your model. Define "wins" as 1 if P1 wins and 0 if P2 wins. Run your model with this setup. When you want to create a tournament and predict the winner, the inputs will be your 2 players and the other match features. If "wins" is close to 1, it means your P1 wins; output that player's name.
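A hedged sketch of building that target (it assumes a DataFrame with 'Winner' and 'Loser' columns as in the dataset described; the winner is randomly assigned to the P1 or P2 slot so "wins" isn't trivially all 1s):

import numpy as np
import pandas as pd

# Hypothetical slice of the dataset: one row per match.
df = pd.DataFrame({'Winner': ['Dimitrov G.', 'Simion G.'],
                   'Loser': ['Simion G.', 'Dimitrov G.']})

# Randomly place the winner in slot P1 or P2.
rng = np.random.default_rng(0)
swap = rng.random(len(df)) < 0.5
df['P1'] = np.where(swap, df['Loser'], df['Winner'])
df['P2'] = np.where(swap, df['Winner'], df['Loser'])
df['wins'] = (~swap).astype(int)  # 1 exactly when P1 is the winner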
I have to predict the winner of the Australian Open 2020. My dataset has these features: Location / Tournament / Date / Series / Court / Surface / Round / Winner / Loser etc. I trained my model using just these features: 'Victory','Series','Court','Surface','WinRank','LoseRank','WPts','LPts','Wsets','Lsets','Weather', and I have 0.93 accuracy, but now I have to predict the name of the winner and I don't have any idea how to do it based on the model that I trained. Example: if I have Dimitrov G. vs Simion G., using random forest the model has to give me one of them as the winner of the match. I transformed the names of the players into dummy variables, but after that I don't know what to do. Can anyone give me just an idea of how I could predict the winner, so I can create a tournament, please?
0
1
30
0
59,902,346
0
0
0
0
1
true
2
2020-01-14T07:18:00.000
0
3
0
Importing libraries in AWS Lambda function code from S3 bucket
59,729,056
1.2
python,amazon-web-services,amazon-s3,deep-learning,aws-lambda
Ultimately, I changed my approach from using Lambda to using EC2. I deployed the whole code with the libraries on an EC2 instance and then triggered it using Lambda. On EC2, it can also be deployed behind an Apache server to change the port mapping.
I have to deploy a Deep Learning model on AWS Lambda which does object detection. It is triggered on addition of image in the S3 bucket. The issue that I'm facing is that the Lambda function code uses a lot of libraries like Tensorflow, PIL, Numpy, Matplotlib, etc. and if I try adding all of them in the function code or as layers, it exceeds the 250 MB size limit. Is there any way I can deploy the libraries zip file on S3 bucket and use them from there in the function code (written in Python 3.6) instead of directly having them as a part of the code? I can also try some entirely different approach for this.
0
1
1,106
0
59,742,877
0
0
0
0
1
false
0
2020-01-14T21:39:00.000
0
1
0
How to calculate the standard deviation of noise for an image?
59,742,157
0
python,image,opencv,image-processing,noise
I would try to pursue this by taking the image before denoising and applying a bitwise_and to it with the denoised image. This would give you just the noise. Then you could take the standard deviation and compare between the denoising algorithms.
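As a rough sketch, a plain difference (rather than bitwise_and) is another common way to isolate the noise; the file name and the choice of denoiser here are placeholders:

import cv2
import numpy as np

noisy = cv2.imread('noisy.png', cv2.IMREAD_GRAYSCALE)  # placeholder path
denoised = cv2.fastNlMeansDenoising(noisy)

# Estimated noise = noisy - denoised; use float to keep negative values.
noise = noisy.astype(np.float64) - denoised.astype(np.float64)
print('noise std:', noise.std())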
I'm currently testing out various denoising algorithms on noisy images. Is there a way of measuring the standard deviation of the noise in the images that I'm denoising (before and after) with OpenCV(python) or any easier method?
0
1
565
0
59,773,204
0
0
0
0
1
true
1
2020-01-15T06:45:00.000
2
1
0
catboost classifier for class imbalance?
59,746,304
1.2
python,classification,catboost
For scale_pos_weight you would use negative class / positive class; in your case it would be 11 (I prefer to use whole numbers). For class_weights you would provide a tuple of the class imbalance; in your case it would be: class_weights = (1, 11). class_weights is more flexible, so you could define it for multi-class targets. For example, if you have 4 classes you can set it: class_weights = (0.5, 1, 5, 25). And you need to use only one of the parameters; for a binary classification problem I would stick with scale_pos_weight.
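Putting that together (a minimal sketch; X_train and y_train stand in for your own arrays, and you pick only one of the two parameters):

from catboost import CatBoostClassifier

# Option 1: single ratio for the binary case (negative/positive = 11).
clf = CatBoostClassifier(scale_pos_weight=11, verbose=False)

# Option 2 (more flexible, e.g. for multi-class) - one or the other:
# clf = CatBoostClassifier(class_weights=(1, 11), verbose=False)

clf.fit(X_train, y_train)  # X_train, y_train: your own data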
I am using the CatBoost classifier for my binary classification model, where I have a highly imbalanced dataset of 0 -> 115000 & 1 -> 10000. Can someone please guide me on how to use the following parameters in CatBoostClassifier: 1. class_weights 2. scale_pos_weight? From the documentation, I am under the impression that I can use the ratio of the sum of the negative class to the sum of the positive class, i.e. 115000/10000 = 11.5, as the input for scale_pos_weight, but I am not sure. Please let me know the exact values to use for these two parameters and the method to derive those values. Thanks
0
1
3,731
0
59,761,188
0
0
0
0
1
false
0
2020-01-15T22:40:00.000
0
1
0
keras model prediction is nan after saving and loading
59,760,569
0
python,tensorflow,keras,google-colaboratory
As far as I know keras has its own function to save the model such as model.save('file.h5'), and the joblib library is used to save sklearn models.
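A sketch of the Keras-native round trip (assuming `model` is the trained Keras model; the file name is arbitrary):

from tensorflow import keras

# Save architecture + weights + optimizer state in one file...
model.save('model.h5')

# ...and restore it elsewhere (e.g. on the PC instead of Colab).
model = keras.models.load_model('model.h5')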
I trained a neural network with google colab. I saved the neural network using joblib.dump() I then loaded the model on my PC using joblib.load() I made a prediction on the exact same sample, using the same model, on both colab and my PC. On colab, it has an output of [[0.51]]. On my pc, it has an output of [[nan]]. The model summary reports that the architecture of the model is the same. I checked the weights of the model I loaded on my PC, and the model on colab, and the weights are the exact same. Any ideas as to what I can do? Thank you. Quick update: even if I change all of my inputs to zero, the prediction is still nan.
0
1
358
0
61,222,708
0
0
0
0
1
true
2
2020-01-16T12:26:00.000
1
1
0
How to use "ignore" class with tensorflow object detection API?
59,769,698
1.2
python,tensorflow,object-detection,object-detection-api
Yes, you need to have another class for the objects you don't want to detect. If you don't have this Other class, which includes everything that is not to be detected, the model will compare a new card to the existing classes, which are almost identical to the cards of interest. Some of the factors are: similarity of shape, similarity of color, similarity of symbols. This is why, even though it is not a card of interest (Skip, Reverse, or Draw 4), it would somehow have high "belongingness" to these three classes. Having another class to dump all of these into can significantly lessen the "belongingness" to the three classes of interest, and as much as possible provide A LOT of data during training. If you don't want to have another class, you could overfit the Skip, Reverse, and Draw 4 cards (close to 100%), then increase your detection threshold value to 70-90%. Hope this helps you.
I have trained a TensorFlow object detection model (for num_steps: 50000) using SSD (MobileNet-v1) on a custom dataset. I got mAP@0.5 ~0.98 and loss ~1.17. The dataset consists of uno playing card images (Skip, Reverse, and Draw Four). On all these cards the model performs pretty well, as I have trained the model only on these 3 cards (around 278 images with 829 bounding boxes (25% of bounding boxes used for testing, i.e. validation), collected using a mobile phone). However, I haven't trained the model on any other cards, but it still detects other cards (inference using a webcam). How can I fix this? Should I also collect images of other classes (anything other than Skip, Reverse and Draw Four cards) and ignore this class in operation? That way the model sees this class, i.e. Label: Other, during training and doesn't put any label on it during inference. I am not sure how to inform the TensorFlow Object Detection API that it should ignore images from the Other class. Can anyone please provide a pointer? Please share your views!
0
1
687
0
59,771,196
0
0
0
0
2
false
0
2020-01-16T13:30:00.000
1
3
0
Neural Networks Weight matrices explained
59,770,747
0.066568
python,neural-network
By your description I suppose that you have two layers, where the first layer outputs a tensor of batch x 40 and the second layer a batch x 1 tensor, while the input is a tensor of batch x 80. Then the weight dimensions are: W1: 80x40 -> first layer; W2: 40x1 -> output layer.
Hi guys, I was wondering if anyone could help me with what I would actually answer for this question on an upcoming exam I have on AI in Python. The question confuses me, as I thought I would usually need more info to answer, but it is not provided. The question asked is: A Python class is used to represent a neural network and the feed-forward operation is called as indicated below: 'y_hat = NN.forward(X)' where y_hat is the output and X is the input matrix. The neural network has an input size of 80, one hidden layer of size 40 and an output layer of size 1. What size will the W1 and W2 matrices be? If anyone could help me with this, as my lecturer is not replying to the class's emails. Many thanks!
0
1
1,373
0
59,771,216
0
0
0
0
2
false
0
2020-01-16T13:30:00.000
2
3
0
Neural Networks Weight matrices explained
59,770,747
0.132549
python,neural-network
Let's say: X is the input vector (size 80*1), H is the hidden layer vector (size 40*1), Y is the output vector (size 1*1). You have: H = W1 * X and Y = W2 * H. So: W1 has size (40*80), W2 has size (1*40). Note: size (m*n) means m rows, n columns.
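A quick numerical check of those shapes (a throwaway sketch, not part of the exam question):

import numpy as np

X = np.random.rand(80, 1)
W1 = np.random.rand(40, 80)
W2 = np.random.rand(1, 40)

H = W1 @ X       # -> shape (40, 1)
y_hat = W2 @ H   # -> shape (1, 1)
print(H.shape, y_hat.shape)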
Hi guys, I was wondering if anyone could help me with what I would actually answer for this question on an upcoming exam I have on AI in Python. The question confuses me, as I thought I would usually need more info to answer, but it is not provided. The question asked is: A Python class is used to represent a neural network and the feed-forward operation is called as indicated below: 'y_hat = NN.forward(X)' where y_hat is the output and X is the input matrix. The neural network has an input size of 80, one hidden layer of size 40 and an output layer of size 1. What size will the W1 and W2 matrices be? If anyone could help me with this, as my lecturer is not replying to the class's emails. Many thanks!
0
1
1,373
0
59,771,829
0
1
0
0
1
true
0
2020-01-16T14:25:00.000
1
1
0
Anaconda Jupyter shows eroor when importing tensorflow
59,771,753
1.2
python,anaconda
try: $ conda install tensorflow==2.0.0
I am having the following error when importing TensorFlow: ERROR:root:Internal Python error in the inspect module. Below is the traceback from this internal error. I believe it is related to a .dll: ImportError: DLL load failed: The specified module could not be found. I am using Python version 3.7.4. Any advice? Thanks
0
1
146
0
59,773,876
0
0
0
0
1
false
4
2020-01-16T16:15:00.000
1
1
0
How to get descriptive statistics of all columns in python
59,773,763
0.197375
python-3.x,pandas,data-analysis
Probably some of your columns were of a type other than numerical. Try train.apply(pd.to_numeric), then train.describe().
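Spelled out (the toy frame stands in for `train`; errors='ignore' keeps genuinely non-numeric columns as they are instead of raising):

import pandas as pd

# Toy stand-in: one numeric-as-string column, one text column.
train = pd.DataFrame({'a': ['1', '2', '3'], 'b': ['x', 'y', 'z']})

train = train.apply(pd.to_numeric, errors='ignore')
print(train.describe())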
I have a dataset with 200000 rows and 201 columns. I want to have descriptive statistics of all the variables. I tried: '''train.describe()''' But this is only giving the output for the first and last 8 variables. Is there any method I can use to get the statistics for all of the columns?
0
1
678
0
59,810,718
0
0
0
0
1
true
0
2020-01-16T19:31:00.000
0
1
0
How to make feature vectors size equal for training neural networks?
59,776,583
1.2
python-3.x,wav,feature-extraction,mfcc
Adding zeros, or zero padding, is the most common method of making very short audio signals longer, and it can be used to match the lengths of audio data before feature extraction. In my understanding, this does not affect the outcome of the analysis, especially as you are using a neural network.
I am training a neural network, but the feature vectors do not have the same size. This problem could be fixed by adding some zeros or removing some values, but the greater problem would be data loss or generating meaningless data. So, is there any approach to make them equal in size without the mentioned weaknesses? Maybe a transformation to another dimension? I do not want to use random values or "NA".
0
1
160
0
59,803,137
0
0
0
0
1
false
0
2020-01-18T17:16:00.000
0
3
0
Set column value as the mean of a group in pandas
59,803,001
0
python,pandas
df['my_label_mean_temperature'] = df.groupby('label')['temperature'].transform('mean') (transform, unlike a plain .mean() after groupby, returns one value per original row, so the result aligns with the full data frame)
I have a data frame with columns X, Y, temperature, label, where label is an integer between 1 and 9. I want to add an additional column my_label_mean_temperature which will contain, for each row, the mean of the temperatures of the rows that have the same label. I'm pretty sure I need to start with my_df.groupby('label'), but I am not sure how to calculate the mean on temperature and propagate the values over all the rows of my original data frame.
0
1
38
0
59,807,451
0
0
0
0
1
false
1
2020-01-19T04:55:00.000
2
1
0
How can I use machine learning for time series problem
59,807,263
0.379949
python,machine-learning,time-series,prediction
You are looking for an ML model for time-series data. This is a huge field, but I'll try to write a few important notes: Try to generate a dataframe where each row is a timestamp and each column is a feature. Now you can generate rolling features - for example the rolling mean/std of your features, using a few different time windows. Split your data into train and test - this is a very tricky part with time-series data, so you should be very careful with it. You have to split the data by time (not randomly), in order to simulate the real world where you learn from the past and predict the future. You must verify that you don't have leakage - for example, if you use "rolling mean of the last week" as a feature, you must verify that you didn't calculate your signal for validation using data from the train set. Train a baseline model using classic ML methods - for example boosted trees, etc. In the next steps you can improve your baseline and then continue with more advanced models (LSTM etc.).
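A minimal sketch of the rolling-feature and time-based-split steps (the daily frame, its 'signal' column, and the window sizes are all hypothetical):

import numpy as np
import pandas as pd

# Hypothetical daily signal indexed by timestamp.
idx = pd.date_range('2019-01-01', periods=365, freq='D')
df = pd.DataFrame({'signal': np.random.rand(365)}, index=idx)

# Rolling features over a few windows, as suggested above.
for w in (7, 30, 90):
    df[f'mean_{w}'] = df['signal'].rolling(w).mean()
    df[f'std_{w}'] = df['signal'].rolling(w).std()

# Split by time, never randomly: the past trains, the future tests.
train, test = df.iloc[:-60], df.iloc[-60:]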
Hello, I have time-series data that basically behaves in a sawtooth manner. After each maintenance period the signal always goes up before going down, until a maintenance happens, which will cause the signal to increase again. I am trying to predict the signal and see what happens to it if I schedule future maintenance. I am new to time series and I am not sure which model I should use to predict the data. I have looked at cross-correlation, but it doesn't seem to take into account any events that will influence the signal, like in my problem. I just know what happens after each maintenance event, and the signal follows a similar trend every time after each maintenance period, where it goes up and down. Any suggestions?
0
1
58
0
60,538,738
0
1
0
0
1
true
0
2020-01-19T05:09:00.000
0
2
0
Find unique number of sequence and reads
59,807,333
1.2
python
As it is, the problem isn't defined well enough - it is underconstrained. (I'm going to be using case-sensitive sequences of [A-Za-z] for examples, since using unique characters makes the reasoning easier, but the same things apply to [1-4] and [ACGT] as well; for the same reason, I'm allowing only single-character differences in the examples. When I include a number in parentheses after a sequence, it denotes the read.) Just a few examples off the top of my head: For {ABCD, ABCE}, which one should be selected as the real sequence? At random? What about {ABCD, ABCE, ABCE}? Is random still okay? For {ABCD, ABCE, ABED}, should ABCD(3) be selected, since there's a single-letter difference between it and the other two, even though there's a two-letter difference between ABCE and ABED? For {ABCE, ABED}, should ABCD(2) be selected, since there's a single-letter difference between it and the other two, even though the sequence doesn't exist in the input itself? For {ABCD, ABCZ, ABYZ}, should ABCZ(3) be selected? Why not {ABCD(2), ABYZ(2)}? For {ABCD, ABCZ, ABYZ, AXYZ}, should {ABCD(2), AXYZ(2)} be selected? Why not {ABCZ(3), ABYZ(3)}? (Or maybe you want it to chain, so you'd get a read of 4, even though the maximum difference is already 3 letters?) In the comments, you said: I am just listing a very simple example, the real case is much longer. How long? What's the minimum length? (What's the maximum?) It's relevant information. And finally - before I get to the meat of the problem - what are you doing this for? If it's just for learning - as a personal exercise - that's fine. But if you're actually doing some real research: for all that is good and holy, please research existing tools/libraries for dealing with DNA sequences and/or enlist the help of someone who is familiar with those. I'm sure there are heaps of tools available that can do better and faster than what I'm about to present. That being said... Let's look at this logically. If you have a big collection of strings, and you want to quickly find if it contains a specific string, you'd use a set (or a dictionary, if there's associated data). The problem, of course, is that you don't want to find only exact matches. But since the number of allowed errors is constrained and extremely small, there are some easy workarounds. For one, you could just generate all the possible sequences with the allowable amount of error, and try to look up each of them - but that really only makes sense if the strings are short and there's only one allowable error, since the number of possible error combinations scales up really fast. If the strings are long enough, and aren't expected to generally share large chunks (unless they're within the allowable error, so the strings are considered the same), you can make the observation that if there's a maximum of two modifications, and you cut a string into 3 parts (it doesn't matter if there are leftovers), then one of the parts must match the corresponding part of the original string. This can be extended to insertions and deletions by generating the 3 parts for 3 different shifts of the string (and choosing/dealing with the part lengths suitably). So by generating 9 keys for each sequence, and using a dictionary, you can quickly find all sequences that are capable of matching the sequence with 2 errors.
(Of course, as I said at the start, this doesn't work if a large part of unrelated strings share big chunks: if all of your strings only have differences at the beginning, and have the same end, you'll just end up with all the strings grouped together, and no closer to solving the problem.) (Also: if the sequence you want to select doesn't necessarily exist in the input, as described in the 4th example, you need 5 parts with 5 shifts to guarantee a matching key, since the difference between the existing sequences can be up to 4.) An example: Original sequence: ABCDEFGHIJKLMNOP Generated parts: (divided into 3 parts (of size 4), with 3 different shifts) [ABCD][EFGH][IJKL]MNOP A[BCDE][FGHI][JKLM]NOP AB[CDEF][GHIJ][KLMN]OP If you now make any two modifications to the original sequence, and generate parts for it in the same manner, at least one of the parts will always match. If the sequences are all approximately the same size, the part size can just be statically set to a suitable value (there must be at least 2 characters left over after the shift, as shown here, so a string with two deletions can still generate the same keys). If not, e.g. powers of two can be used, taking care to generate keys for both sides when the string length is such that matching sequences could fall into a neighbouring size bucket. But those are in essence just examples of how you could approach coming up with solutions when presented with this kind of a problem; just random ad hoc methods. For a smarter, more general solution, you could look at e.g. generalized suffix trees - they should allow you to find matching sequences with mismatches allowed very fast, though I'm not sure if that includes insertions/deletions, or how easy that would be to do.
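A rough sketch of the part-and-shift key generation described above (the part size and counts are placeholders; real code would pick them from the sequence lengths):

from collections import defaultdict

def keys(seq, part=2, nparts=3, shifts=3):
    # (part_index, chunk) keys; with suitable sizes, two sequences
    # within two edits of each other share at least one key.
    out = set()
    for s in range(shifts):
        for i in range(nparts):
            chunk = seq[s + i * part : s + (i + 1) * part]
            if len(chunk) == part:
                out.add((i, chunk))
    return out

# Bucket sequences by key to find candidate matches quickly.
buckets = defaultdict(list)
for seq in ['ATGCTAGC', 'ATGCTAG', 'ATGCTAGG', 'ATGCTAGCG']:
    for k in keys(seq):
        buckets[k].append(seq)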
I have a data frame which contains multiple number sequences, i.e.: 1324123 1235324 12342212 4313423 221231; ... These numbers meet the following requirement: each digit is from 1-4. What I want to do is find all unique sequences and their reads. Regarding a unique sequence, two-digit differences are allowed. For example: 12344 12344 12334 1234 123444 are considered the same sequence; the original sequence is 1234 and the associated read is 5. I want to accomplish this in Python, and only basic Python packages are allowed: numpy, pandas, etc. EDIT: the real case is a DNA sequence. For a simple DNA sequence ATGCTAGC, due to reading errors, the output of this actual sequence might be: ATGCTAG (deletion), ATGCTAGG (alteration), ATGCTAGCG (insertion), ATGCTAGC (unchanged). These four sequences are considered the same sequence, and the read is the number of times it appears.
0
1
122
0
67,048,383
0
1
0
0
3
false
13
2020-01-19T11:16:00.000
0
6
0
How to install TensorFlow with Python 3.8
59,809,495
0
python,python-3.x,tensorflow,python-3.8
Currently it does support Python 3.8. All we need to do is create a new environment, select 'update index', select 'uninstalled', and one can find TensorFlow for installing.
Whenever I try to install TensorFlow with pip on Python 3.8, I get the error that TensorFlow is not found. I later realized that it is not supported by Python 3.8. How can I install TensorFlow on Python 3.8?
0
1
55,356
0
61,622,264
0
1
0
0
3
false
13
2020-01-19T11:16:00.000
3
6
0
How to install TensorFlow with Python 3.8
59,809,495
0.099668
python,python-3.x,tensorflow,python-3.8
Tensorflow does not support Python 3.8 at the moment. The latest supported Python version is 3.7. A solution is to install Python 3.7; this will not affect your code, since Python 3.7 and 3.8 are very similar. Right now Python 3.7 is supported by more frameworks, like TensorFlow. Soon Python 3.8 will have more supported frameworks, and that's when you can install TensorFlow for Python 3.8.
Whenever I try to install TensorFlow with pip on Python 3.8, I get the error that TensorFlow is not found. I later realized that it is not supported by Python 3.8. How can I install TensorFlow on Python 3.8?
0
1
55,356
0
69,379,579
0
1
0
0
3
false
13
2020-01-19T11:16:00.000
0
6
0
How to install TensorFlow with Python 3.8
59,809,495
0
python,python-3.x,tensorflow,python-3.8
Instead of the pip or conda command, I used the pip3 command and it worked.
Whenever I try to install TensorFlow with pip on Python 3.8, I get the error that TensorFlow is not found. I later realized that it is not supported by Python 3.8. How can I install TensorFlow on Python 3.8?
0
1
55,356
0
59,871,447
0
0
0
0
1
true
1
2020-01-20T02:58:00.000
0
1
0
Does it make sense to use sample_weights for balanced datasets?
59,816,484
1.2
python-3.x,scikit-learn,classification
ANSWER After a couple rounds of testing and more research, I've discovered that yes, it does make sense to add more weight to the 0's with a balanced binary classification dataset, if your goal is to decrease the chance of over-predicting the 1's. I ran two separate training sessions using a weight of 2 for 0's and 1 for the 1's, and then again vice versa, and found that my model predicted less 1's when the weight was applied to the 0's, which was my ultimate goal. In case that helps anyone. Also, I'm using SKLearn's Balanced Accuracy scoring function for those tests, which takes an average of each separate class's accuracy.
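A small sketch of how that weighting is passed in scikit-learn (the toy data and the choice of estimator are placeholders; most sklearn classifiers accept sample_weight in fit):

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import balanced_accuracy_score

# Toy balanced data; X, y stand in for your real features/labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = (X[:, 0] > 0).astype(int)

# Weight the 0s twice as heavily, per the experiment described above.
w = np.where(y == 0, 2.0, 1.0)
clf = LogisticRegression().fit(X, y, sample_weight=w)
print(balanced_accuracy_score(y, clf.predict(X)))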
I have limited knowledge about sample_weights in the sklearn library, but from what I gather, it's generally used to help balance imbalanced datasets during training. What I'm wondering is, if I already have a perfectly balanced binary classification dataset (i.e. equal amounts of 1's and 0's in the label/Y/class column), could one add a sample weight to the 0's in order to put more importance on predicting the 1's correctly? For example, let's say I really want my model to predict 1's well, and it's ok to predict 0's even though they turn out to be 1's. Would setting a sample_weight of 2 for 0's, and 1 for the 1's be the correct thing to do here in order to put more importance on correctly predicting the 1's? Or does that matter? And then I guess during training, is the f1 scoring function generally accepted as the best metric to use? Thanks for the input!
0
1
81
0
59,817,648
0
1
0
0
1
false
3
2020-01-20T05:37:00.000
0
2
0
My Dataset is showing a string when it should be a curly bracket set/dictionary
59,817,518
0
python,string,list,dictionary,type-conversion
What you have is a set of words. Curly brackets are for dictionary use, such as {'Alex': '19', 'Marry': '20'}, which links a key and a value; in my example, a name and an age. Rather than that, you can use the to_list command in Python; maybe it suits your needs.
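A hedged sketch of turning that string into a proper list (this assumes the value really is a string like the one shown, with quotes only around multi-word items):

import re

s = '{"Wireless Internet","Air conditioning",Kitchen}'

# Pull out quoted items, or bare runs without commas/braces.
items = [m.group(1) if m.group(1) is not None else m.group(2)
         for m in re.finditer(r'"([^"]*)"|([^,{}]+)', s)]
print(items)  # ['Wireless Internet', 'Air conditioning', 'Kitchen']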
My dataset has a column where, upon printing the dataframe, each entry in the column looks like this: {"Wireless Internet","Air conditioning",Kitchen} There are multiple things wrong with this that I would like to correct. Upon printing this in the console, Python prints this: '{"Wireless Internet","Air conditioning",Kitchen}' Notice the quotations around the curly brackets, since Python is printing a string. Ideally, I would like to find a way to convert this to a list like: ["Wireless Internet","Air conditioning","Kitchen"] but I do not know how. Further, notice how some words do not have quotations, such as Kitchen. I do not know how to go about correcting this. Thanks
0
1
182
0
68,297,432
0
1
0
0
5
false
143
2020-01-20T12:26:00.000
2
16
0
Could not load dynamic library 'cudart64_101.dll' on tensorflow CPU-only installation
59,823,283
0.024995
python,python-3.x,tensorflow,keras,tensorflow2.0
Download CUDA Toolkit 11.0 RC. To solve the issue, I just found cudart64_101.dll on my disk (C:\Program Files\NVIDIA Corporation\NvStreamSrv) and added it as an environment variable (that is, added the value C:\Program Files\NVIDIA Corporation\NvStreamSrv\cudart64_101.dll to the user's Path environment variable).
I just installed the latest version of Tensorflow via pip install tensorflow and whenever I run a program, I get the log message: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'cudart64_101.dll'; dlerror: cudart64_101.dll not found Is this bad? How do I fix the error?
0
1
357,438
0
72,103,706
0
1
0
0
5
false
143
2020-01-20T12:26:00.000
0
16
0
Could not load dynamic library 'cudart64_101.dll' on tensorflow CPU-only installation
59,823,283
0
python,python-3.x,tensorflow,keras,tensorflow2.0
This could be caused by the version of Python you are running as well. I was using the Python 3.7 from the Microsoft Store and I ran into this error; switching to Python 3.10 fixed it.
I just installed the latest version of Tensorflow via pip install tensorflow and whenever I run a program, I get the log message: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'cudart64_101.dll'; dlerror: cudart64_101.dll not found Is this bad? How do I fix the error?
0
1
357,438
0
62,364,379
0
1
0
0
5
false
143
2020-01-20T12:26:00.000
2
16
0
Could not load dynamic library 'cudart64_101.dll' on tensorflow CPU-only installation
59,823,283
0.024995
python,python-3.x,tensorflow,keras,tensorflow2.0
Tensorflow gpu 2.2 and 2.3 nightly (along with CUDA Toolkit 11.0 RC). To solve the same issue as OP, I just had to find cudart64_101.dll on my disk (in my case C:\Program Files\NVIDIA Corporation\NvStreamSrv) and add it as an environment variable (that is, add the value C:\Program Files\NVIDIA Corporation\NvStreamSrv\cudart64_101.dll to the user's Path environment variable).
I just installed the latest version of Tensorflow via pip install tensorflow and whenever I run a program, I get the log message: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'cudart64_101.dll'; dlerror: cudart64_101.dll not found Is this bad? How do I fix the error?
0
1
357,438
0
65,183,422
0
1
0
0
5
false
143
2020-01-20T12:26:00.000
3
16
0
Could not load dynamic library 'cudart64_101.dll' on tensorflow CPU-only installation
59,823,283
0.037482
python,python-3.x,tensorflow,keras,tensorflow2.0
I installed cudatoolkit 11 and copied the DLLs from C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.1\bin to C:\Windows\System32. It fixed it for PyCharm, but not for Anaconda Jupyter: [name: "/device:CPU:0" device_type: "CPU" memory_limit: 268435456 locality { } incarnation: 6812190123916921346 , name: "/device:GPU:0" device_type: "GPU" memory_limit: 13429637120 locality { bus_id: 1 links { } } incarnation: 18025633343883307728 physical_device_desc: "device: 0, name: Quadro P5000, pci bus id: 0000:02:00.0, compute capability: 6.1" ]
I just installed the latest version of Tensorflow via pip install tensorflow and whenever I run a program, I get the log message: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'cudart64_101.dll'; dlerror: cudart64_101.dll not found Is this bad? How do I fix the error?
0
1
357,438
0
60,424,108
0
1
0
0
5
false
143
2020-01-20T12:26:00.000
-4
16
0
Could not load dynamic library 'cudart64_101.dll' on tensorflow CPU-only installation
59,823,283
-1
python,python-3.x,tensorflow,keras,tensorflow2.0
A simpler way would be to create a link called cudart64_101.dll that points to cudart64_102.dll. This is not very orthodox, but since TensorFlow is looking for cudart64_101.dll's exported symbols, and the NVIDIA folks are not amateurs, they would most likely not remove symbols from 101 to 102. It works, based on this assumption (mileage may vary).
I just installed the latest version of Tensorflow via pip install tensorflow and whenever I run a program, I get the log message: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'cudart64_101.dll'; dlerror: cudart64_101.dll not found Is this bad? How do I fix the error?
0
1
357,438
0
59,824,522
0
0
0
0
1
false
0
2020-01-20T13:22:00.000
0
4
0
Uninstall tensorflow 2.1.0
59,824,224
0
python,tensorflow
This problem means that you use different pythons in your terminal and your script. You mentioned PyCharm. By default, it creates a virtual environment for a new project. It also can use your global python. In any case, open a terminal in PyCharm (it should be in the bottom of the window or View -> Tool Windows -> Terminal or Option + F12 on Mac, Alt + F12 on Windows). From there you should run pip uninstall tensorflow. PyCharm automatically activates the correct python environment in the terminal.
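A quick check you can run both in the script and in the terminal's Python to confirm they share one environment:

import sys
import tensorflow as tf

# If these paths differ between the script and the terminal, you are
# uninstalling from one environment and importing from another.
print(sys.executable)
print(tf.__version__, tf.__file__)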
I am having problems with uninstalling TensorFlow. I have a Python script that uses TensorFlow. I want it to use TensorFlow 1.15, but it is currently using TensorFlow 2.1.0. I deleted TensorFlow via my cmd: pip uninstall tensorflow and pip uninstall tensorflow-gpu. When I run these commands again, it says that TensorFlow is not installed. However, I can see that my script says it is using TensorFlow 2.1.0 (I added the line print(tf.__version__) to my script). Does anyone know where this TensorFlow 2.1.0 is installed and how I can delete it from my PC?
0
1
7,509
0
59,837,712
0
0
0
0
1
false
0
2020-01-21T08:57:00.000
0
1
0
Reducing tuple values with reduceByKey in Pyspark
59,837,355
0
python,pyspark,mapreduce
OK, I found the issue. When reducing, for example, (id, (x, x, 1)), (id, (y, y, 1)), (id, (z, z, 1)): after the first reduce you would obtain (id, 2) and (id, (z, z, 1)), so when trying to reduce again, the number 2 in the first element is not subscriptable. I must keep the data structure intact during the process.
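Two hedged sketches of that fix (assuming `data` is the RDD built in the question):

# Keep the tuple structure, so every reduce sees a subscriptable value:
counts = data.reduceByKey(lambda a, b: (a[0], a[1], a[2] + b[2]))

# Or drop down to plain counters first, then add ints:
counts = data.mapValues(lambda v: v[2]).reduceByKey(lambda a, b: a + b)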
I'm starting to work with the MapReduce paradigm with PySpark. I'm stuck on a problem, and I don't know if it is a programming error or if I shouldn't do it this way. I have data from which I am extracting the following information with map for each line: (id, (date, length, counter)). I did it this way to extract all the info I need from the raw data file while filtering out the noisy lines, so I don't have to use the raw data file again. Btw: counter is originally 1, and it is intended for addition in a future reduceByKey. Now the data looks like this: data = [('45', ('28/5/2010', 0.63, 1)), ('43', ('21/2/2012', 2.166, 1)), ('9', ('12/1/2009', 2.33, 1))] First, I am trying to count the number of key-value pairs, so it is a simple reduceByKey adding the counters. I tried to do it this way: data.reduceByKey(lambda a,b: a[2] + b[2]) which produces the following error: TypeError: 'int' object is not subscriptable. If a and b are supposed to get the value of the pair, element 2 is supposed to hold its counter; I cannot get my head around it. Is it better to map the raw data file several times, extracting a different needed value each time? Should I map this data variable, extracting (key, value) pairs each time with the value needed from the tuple? Is it just that I'm making a programming error? Any guidance is welcome, thanks!
0
1
158
0
59,838,258
0
0
0
0
1
false
0
2020-01-21T09:34:00.000
0
1
0
How to classify data into N classes + one "garbage" class (or leave some data out)?
59,838,018
0
python,machine-learning,scikit-learn,classification,noise
I would recommend to: (1) make a binary classifier that classifies into "garbage"/"non-garbage" classes; (2) make a regular N-class classifier that classifies the "non-garbage" inputs.
I am trying to classify images of cells into N classes ("cell 1", "cell 2" ...) but some of the images are just noise and I would like to either not classify them or put them into a "garbage" class. I have tried the latter, but it is not very successful and I suspect it is because my "garbage" class is very heterogenous. Any suggestions on how to allow excluding some data from classification, or classify it as noise? I am using python and sklearn, but I would welcome either specific python/sklearn tips or generic machine learning algorithms.
0
1
205
0
59,847,592
0
0
0
0
1
true
1
2020-01-21T16:17:00.000
0
1
0
API calls from NLTK, Gensim, Scikit Learn
59,845,191
1.2
python,api,nlp,nltk,gensim
Generally with NLTK, gensim, and scikit-learn, algorithms are implemented in their source code, and run locally on your data, without sending data elsewhere for processing. I've never noticed any documentation/functionality of these packages mentioning a reliance on a remote/cloud service, nor seen users discussing the same. However, they're each large libraries, with many functions I've never reviewed, and with many contributors adding new options. And I don't know if the project leads have stated an explicit commitment to never rely on external services. So a definitive, permanent answer may not be possible. To the extent such security is a concern for your project, you should carefully review the documentation, and even source code, for those functions/classes/methods you're using. (None of these projects would intentionally hide a reliance on outside services.) You could also develop, test, and deploy the code on systems whose ability to contact outside services is limited by firewalls - so that you could detect and block any undisclosed or inadvertent communication with outside machines. Note also that each of these libraries in turn relies on other public libraries. If your concern also extends to the potential for either careless or intentionally, maliciously-inserted methods of private data exfiltration, you would want to do a deeper analysis of these libraries & all other libraries they bring in. (Simply trusting the top-level documentation could be insufficient.) Also, each of these libraries has utility functions which, on explicit user demand, download example datasets or shared non-code resources (like lists of stopwords or lexicons). Using such functions doesn't upload any of your data elsewhere, but may leak the fact that you're using specific functionality. The firewall-based approach mentioned above could interfere with such download steps. Under a situation of maximum vigilance/paranoia, you might want to pay special attention to the use & behavior of such extra-download methods, to be sure they're not doing any more than they should to change the local environment or execute/replace other library code. Finally, by sticking to widely-used packages/functions, and somewhat older versions that have remained continuously available, you may benefit from a bit of "community assurance" that a package's behavior is well-understood, without surprising dependencies or vulnerabilities. That is, many other users will have already given those code-paths some attention, analysis, & real usage - so any problems may have already been discovered, disclosed, and fixed.
I plan to use NLTK, Gensim and Scikit Learn for some NLP/text mining, but I will be using these libraries to work with my org's data. The question is: while using these libraries, do they make API calls to process the data, or is the data taken out of the Python shell to be processed? It is a security question, so I was wondering if someone has any documentation for reference. I appreciate any help on this.
0
1
67
0
64,513,424
0
0
0
0
1
false
1
2020-01-22T10:21:00.000
1
2
0
Alternative way to run wav2letter Facebook AI Research Speech to Text model on Windows Machine
59,857,456
0.099668
python-3.x,deep-learning,speech-recognition,speech-to-text
Wav2Letter has different train- and inference-time dependencies. I am assuming you will be performing training on a CUDA backend. If so, you need ArrayFire and Flashlight. For inference, besides basic dependencies (such as cereal for serialization, etc.), you don't need either. The FAIR team has provided their own implementations of neural network layers (linear, conv1d, etc.) based on the FBGEMM (FB General Matrix Multiplication) backend. FBGEMM can be compiled for both CPU and CUDA backends: on Intel-based CPUs it can further be accelerated using Intel's optimized MKL math library, and on a CUDA backend using cuDNN. You are free to add your own implementation of a backend based on, say, LibTorch or C++ TensorFlow, and submit a PR.
I'm trying to implement speech-to-text using wav2letter. As far as I have researched, the model uses the ArrayFire tensor library with a dependency on the flashlight ML library. Now, the flashlight library is built for Linux-based systems. Is there any way to run this model on a Windows-based system?
0
1
626
0
59,859,213
0
0
0
0
1
false
0
2020-01-22T11:17:00.000
0
1
0
Is Tensorflows object detection API producing incorrect ymin and ymax bounding box coordinates?
59,858,497
0
python,tensorflow,object-detection,bounding-box
It turns out I'd overlooked the fact that Python's coordinate system has (0,0) in the top left, and the API accounts for this by making top = ymin and bottom = ymax within visualization_utils.py. So a ymax = 1073 would indeed be near the bottom of the frame.
I'm running the API over a series of video frames to track objects through a scene, and I'm extracting the bounding box coordinates for each object in order to calculate the centre of each bounding box. It would appear, however, that there is an offset in the ymin and ymax coordinates. The scene is of a person walking the field of view with the bottom of the frame matched up to the person's feet (which would imply a very small value for ymin and a ymax value that would not extend to the top of the frame). However, the API gives the following box coordinates in pixels: [452.26962089538574, 197.93473720550537, 1073.7505388259888, 639.3438720703125]. The normalised coordinates are [0.41876816749572754, 0.10309100896120071, 0.9942134618759155, 0.3329916000366211]. For reference, the video is 1920 x 1080. The same frame put into MATLAB's Video Labeler app (when translated into [ymin xmin ymax xmax]) returns [8.396575927734375, 57.50376892089844, 722.7988586425781, 431.51695251464844]. I'm aware they won't exactly match, because I've manually drawn in the box as ground truth (this is especially true for the x coordinates); however, the ymin and ymax should be pretty close, and those results seem much more realistic. Has anyone come across this before? The bounding boxes are drawn correctly onto the image when the API runs the inference, so I'm at a bit of a loss as to what's happening, as I'm taking the data directly from boxes = detection_graph.get_tensor_by_name('detection_boxes:0') and storing it on each iteration.
0
1
141
0
59,859,426
0
0
0
0
1
false
2
2020-01-22T11:33:00.000
4
1
0
How can I deploy my features in a Machine Learning algorithm?
59,858,819
0.664037
python,machine-learning,sentiment-analysis,feature-selection
Basically, to make things very "simple" and "shallow": all algorithms take some sort of numeric vector to represent the features. The real work is to find how to represent the features as vectors which yield the best result; this depends on the feature itself and on the algorithm being used. For example, to use SVM, which basically finds a separating plane, you need to project the features onto some vector set which yields a good enough separation. So, for instance, you can treat your features like this: Emotion icons - create a vector which represents all the icons present in that tweet; assign each icon an index from 1 to n, so a tweet represented by [0,0,0,2,1] means the 4th and 5th icons appear in its body 2 and 1 times respectively. Exclamation marks - you can simply count the number of occurrences (a better approach would be to capture some more information about them, like their place in a sentence and such...). Intensity words - you can use the same approach as with the emotion icons. Basically, each feature can be used alone in the SVM model to classify good and bad. You can merge all those features by using 3 SVM models and then classifying by majority vote or some other method, or you can create one long vector by concatenating all of the vectors and feed it to the SVM. This is just one approach; you might tweak it or use some other one that fits your data, model and goal better.
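A tiny sketch of the emoticon-count vector described above (the icon set and the tweets are made up for illustration):

ICONS = [':)', ':(', ':P']  # hypothetical icon inventory

def icon_vector(tweet):
    # One count per icon; 'great day :) :)' -> [2, 0, 0]
    return [tweet.count(icon) for icon in ICONS]

tweets = ['great day :) :)', 'so bad :(', 'meh']
X = [icon_vector(t) for t in tweets]  # n_tweets x 3 feature matrix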
I'm way new to ML, so I have a really rudimentary question. I would appreciate it if someone clarifies it for me. Suppose I have a set of tweets labeled as negative and positive. I want to perform some sentiment analysis. I extracted 3 basic features: Emotion icons, Exclamation marks, Intensity words (very, really etc.). How should I use these features with SVM or other ML algorithms? In other words, how should I deploy the extracted features in the SVM algorithm? I'm working with Python and already know how to run SVM or other algorithms, but I don't have any idea about the relation between the extracted features and their role in each algorithm! Based on the responses of some experts I update my question: At first, I want to thank you for your time and worthy explanations. I think my problem is getting solved... So, in line with what you said, each ML algorithm may need some vectorized features, and I should find a way to represent my features as vectors. I want to explain what I got from your explanation via a rudimentary example. Say I have emotion icons (for example 3 icons) as one feature: 1- Hence, I should represent this feature by a vector with 3 values. 2- The vectorized feature can be initialized in this way: [0,0,0] (each value represents an icon = :) and :( and :P). 3- Next I should go through each tweet and check whether the tweet has an icon or not. For example, [2,1,0] shows that the tweet has :) 2 times, :( 1 time, and :P no times. 4- After I check all the tweets, I will have a big vector with a size of n*3 (n is the total number of my tweets). 5- Stages 1-4 should be done for the other features. 6- Then I should merge all those features by using m models of SVM (m is the number of my features) and then classify by majority vote or some other method, or create a long vector by concatenating all of the vectors and feed it to the SVM. Could you please correct me if there is any misunderstanding? If it is not correct I will delete it; otherwise I should let it stay, because it can be practical for beginners such as me... Thanks a bunch...
0
1
83
0
59,859,181
0
0
0
0
2
true
0
2020-01-22T11:37:00.000
0
3
0
How to avoid pandas from adding indexes in dataframes when using CSV files
59,858,908
1.2
python,pandas,dataframe,indexing
Use this : df = pd.read_csv("nameOfFile.csv", index_col="nameOfColToUseAsIndex") and put the name of your first column in "nameOfColToUseAsIndex".
I'm using dataframes and CSV files to manipulate data. Most of the time, my dataframes, or the ones provided by the API I'm using, don't have indexes. If they have indexes, especially when writing and reading CSV files, I just remove them by using the name of the column, "unnamed:0". But this time, to_csv places indexes in my CSV file without naming the column. So I must use df.drop(df.columns[0], axis=1, inplace=True). But for pandas, the first column is the first named one, not the real first one. I already used index=False, but it just removed an important column instead of not adding indexes. How can I remove the first column, which isn't named and has no index name to find it by?
0
1
1,352
0
59,859,039
0
0
0
0
2
false
0
2020-01-22T11:37:00.000
2
3
0
How to avoid pandas from adding indexes in dataframes when using CSV files
59,858,908
0.132549
python,pandas,dataframe,indexing
Writing pandas dataframes into file via to_csv() method has optional parameter index, which you can set to false to prevent it from writing its own index: df.to_csv('filename.csv', index=False)
I'm using dataframes and CSV files to manipulate data. Most of the time, my dataframes, or the ones provided by the API I'm using, don't have indexes. If they have indexes, especially when writing and reading CSV files, I just remove them by using the name of the column, "unnamed:0". But this time, to_csv places indexes in my CSV file without naming the column. So I must use df.drop(df.columns[0], axis=1, inplace=True). But for pandas, the first column is the first named one, not the real first one. I already used index=False, but it just removed an important column instead of not adding indexes. How can I remove the first column, which isn't named and has no index name to find it by?
0
1
1,352
0
59,860,211
0
0
0
0
2
false
1
2020-01-22T12:43:00.000
0
2
0
Training model on custom data
59,860,042
0
python,tensorflow
It is unlikely that your model deployed to a camera has no accuracy just because you have only 32 images. Anyway, before deployment you had about 10% loss (which seems to be about 90% accuracy), so it should work; I think the problem is not the amount of images. After training your model you need to save the coefficients of the trained model. Make sure that you deployed the trained model, and that you are not using an untrained model from scratch.
I am trying out an object detection model on a custom data set. I want it to recognize a specific piece of metal from my garage. I took about 32 photos and labelled them. The training goes well, down to about 10% loss. After that it goes very slowly, so I need to stop it. After that, I implemented the model on a camera, but it has no accuracy. Could it be because of the fact that I have only 32 images of the object? I have tried YoloV2 and Faster RCNN.
0
1
42
0
59,860,321
0
0
0
0
2
false
1
2020-01-22T12:43:00.000
0
2
0
Training model on custom data
59,860,042
0
python,tensorflow
Just labeling will not help with object detection. What you are doing is image classification but expecting the results of object detection. Object detection requires bounding-box annotations and changes in the loss function which is fed to the model during each backpropagation step. You need some tools to do data annotation first, then adapt your YoloV2/Fast-RCNN code along with the loss function. Train it well and try using image augmentations to generate some more images, because 32 images are too few. In that case, you might end up in the pitfall of getting higher training accuracy but lower test accuracy. Training models on fewer images sometimes leads to unexpected overfitting. Only then should you try to implement it using the camera.
I am trying out an object detection model on a custom data set. I want it to recognize a specific piece of metal from my garage. I took about 32 photos and labelled them. The training goes well, down to about 10% loss. After that it goes very slowly, so I need to stop it. After that, I implemented the model on a camera, but it has no accuracy. Could it be because of the fact that I have only 32 images of the object? I have tried YoloV2 and Faster RCNN.
0
1
42
0
59,940,960
0
0
0
0
2
false
1
2020-01-22T19:16:00.000
0
2
0
Tensorflow Hanging on Google Compute Engine Using nohup
59,866,803
0
python,tensorflow,google-compute-engine
Try using absolute paths in your execution instead of relative paths.
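One general way to sidestep relative-path problems (my suggestion, not something the answer above spells out) is to resolve paths against the script's own location rather than the current working directory:

import os

# Directory containing this script, regardless of where it was launched from
SCRIPT_DIR = os.path.dirname(os.path.abspath(__file__))

# Hypothetical project layout; adjust names to your own
data_path = os.path.join(SCRIPT_DIR, "data", "train.tfrecord")
print(data_path)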
I am trying to run a TensorFlow model that I estimate will take roughly 11 hours. As such, I would like to use nohup so I can exit my terminal and keep the process running. I use the following command to do so: nohup python3 trainModel.py > log.txt & My model appears to be running as normal, but gets hung up, with the last message outputted being: 2020-01-22 19:06:24.669183: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1697] Adding visible gpu devices: 0 It is normal for my model to output this; however, when I am not using nohup, the rest of the code still executes. What do I need to do to have this command run as it does when I am not using nohup?
0
1
89
0
59,869,881
0
0
0
0
2
false
1
2020-01-22T19:16:00.000
0
2
0
Tensorflow Hanging on Google Compute Engine Using nohup
59,866,803
0
python,tensorflow,google-compute-engine
Could you send the exit status code of the execution? echo $? This will help to get an exact idea of the error. Furthermore, can you try redirecting the standard error stream to the log.txt file, for instance: nohup python3 trainModel.py 2> log.txt & Standard output will then be redirected to the nohup.out file, and log.txt will contain the standard error output. I hope this helps.
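A related cause worth ruling out (an assumption on my part, not something the question confirms): when stdout goes to a file, Python block-buffers it, so a long-running job can look hung even though it is progressing. Running with python3 -u, or flushing explicitly, makes the log update live:

# Shell side: nohup python3 -u trainModel.py > log.txt 2>&1 &

# Script side: flush each log line as it is printed
for step in range(3):  # stand-in for the training loop
    print(f"step {step} done", flush=True)  # pushes the line to log.txt immediately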
I am trying to run a TensorFlow model that I estimate will take roughly 11 hours. As such, I would like to use nohup so I can exit my terminal and keep the process running. I use the following command to do so: nohup python3 trainModel.py > log.txt & My model appears to be running as normal, but gets hung up, with the last message outputted being: 2020-01-22 19:06:24.669183: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1697] Adding visible gpu devices: 0 It is normal for my model to output this; however, when I am not using nohup, the rest of the code still executes. What do I need to do to have this command run as it does when I am not using nohup?
0
1
89
0
59,973,866
0
0
0
0
1
false
0
2020-01-23T01:16:00.000
0
1
0
Model Serving with sklearn and gRPC
59,870,409
0
python,scikit-learn,thread-safety,grpc,grpc-python
I don't know anything about sklearn specifically, but in general, gRPC is thread-safe, and it's fine to make many concurrent calls to the same method.
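A self-contained illustration of the shared-model pattern (plain threads instead of a full gRPC server, purely to show many concurrent predict calls against one fitted estimator; predict on a fitted forest only reads the trees, which is why sharing is generally considered safe, though that is my understanding rather than a documented sklearn guarantee):

from concurrent.futures import ThreadPoolExecutor
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=200, n_features=10, random_state=0)
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

def handle_request(batch):
    # Every worker thread calls predict on the same shared model instance
    return model.predict(batch)

with ThreadPoolExecutor(max_workers=8) as pool:
    batches = [X[i:i + 10] for i in range(0, 200, 10)]
    results = list(pool.map(handle_request, batches))
print(len(results), results[0][:3])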
Say I have a trained RandomForestClassifier model from sklearn. We're using gRPC to serve that model and provide predictions in real time in a high-traffic situation. From what I understand, gRPC can support multiple threads running concurrently. Can they all simultaneously make calls to RF's predict method without running into concurrency issues? Is sklearn's predict thread-safe? If this isn't the case, it sounds like you'd have to load individual copies of the model in each worker thread.
0
1
127
0
59,874,911
0
0
0
0
1
true
0
2020-01-23T08:55:00.000
1
1
0
Is there pre-trained doc2vec model in Python 3.7?
59,874,635
1.2
python,python-3.7
The answer is, I'm afraid, no, not at this time. Also, I think the Python 2 models that are available are word2vec models, not doc2vec.
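If nothing pre-trained fits, training your own model with gensim is straightforward; a minimal sketch on toy sentences (a real news-article corpus is assumed in practice):

from gensim.models.doc2vec import Doc2Vec, TaggedDocument

# Toy corpus; each document carries a unique tag
corpus = [
    TaggedDocument(words=["markets", "rallied", "today"], tags=[0]),
    TaggedDocument(words=["new", "policy", "announced"], tags=[1]),
]

model = Doc2Vec(vector_size=50, min_count=1, epochs=40)
model.build_vocab(corpus)
model.train(corpus, total_examples=model.corpus_count, epochs=model.epochs)

# Infer a vector for an unseen document
vec = model.infer_vector(["stocks", "fell", "sharply"])
print(vec.shape)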
Are there any pre-trained Doc2Vec models trained in Python 3 on news article data? Most of the models that I am finding are for older versions, and it's difficult to use them in Python 3.7.
0
1
54
0
59,884,342
0
0
0
0
1
false
0
2020-01-23T16:00:00.000
0
1
0
Constant accuracy over epochs
59,882,584
0
python,tensorflow,machine-learning,deep-learning,artificial-intelligence
In order to fully answer this question (specific to your case) we'd need to know what loss function you are using and how you measure accuracy. In general, this can certainly happen for a variety of reasons. The easiest reason to illustrate is with a simple classifier. Suppose you have a 2-class classification problem (for simplicity) and an input $x$ and label (1, 0), i.e. the label says it belongs to class 1 and not class 2. When you feed $x$ through your network you get an output: $y=(p_1, p_2)$. If $p_1 > p_2$ then the prediction is correct (i.e. you chose the right class). The loss function can continue to go down until $p_1=1$ and $p_2=0$ (the target). So, you can have lots of correct predictions (high accuracy) but still have room to improve the output to better match the labels (room for improved loss).
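To make the illustration concrete, a tiny numeric check: both predictions below pick the correct class (identical accuracy), yet their cross-entropy losses differ, so loss can keep falling while accuracy stays flat:

import numpy as np

def cross_entropy(y_true, y_pred):
    # Categorical cross-entropy for a single example
    return -np.sum(y_true * np.log(y_pred))

label = np.array([1.0, 0.0])
confident = np.array([0.9, 0.1])  # correct and confident
hesitant = np.array([0.6, 0.4])   # correct but less confident

print(cross_entropy(label, confident))  # ~0.105
print(cross_entropy(label, hesitant))   # ~0.511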
I am training a GAN and the accuracy doesn't change over epochs, while the loss is decreasing. Is there something wrong, or is it normal because it's a GAN? Thank you in advance.
0
1
88
0
59,888,228
0
0
0
0
1
false
0
2020-01-23T22:58:00.000
1
1
0
How should metrics be added to a multi-headed TensorFlow estimator?
59,888,227
0.197375
python,tensorflow,machine-learning,metrics
Build a class around the model creation that holds the model configuration, and use a member function for the metric_fn parameter. class ModelBuilder: # constructor storing configuration options in self def __init__(self, labels, other_config_args): self.labels = labels ... # Function for building the estimator with multiple heads (multi-objective) def build_estimator(self, func_args): heads = [] for label in self.labels: heads.append(tf.estimator.MultiClassHead(n_classes=self.nclasses[label], name=label)) head = tf.estimator.MultiHead(heads) estimator = tf.estimator.DNNEstimator(head=head,...) # or whatever type of estimator you want estimator = tf.estimator.add_metrics(estimator, self.model_metrics) return estimator # Member function that adds metrics to the estimator based on the model configuration params def model_metrics(self, labels, predictions, features): metrics = {} for label in self.labels: # generate a separately keyed metric for each head name metrics[label + '/metric_name'] = metric_func(features, labels, predictions[(label, 'logits')]) return metrics
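A hypothetical usage sketch (argument values are placeholders, not from the original answer); because model_metrics is a bound method, the label configuration travels through self, which is how the fixed metric_fn signature is worked around:

# Placeholders standing in for values read from a configuration file
builder = ModelBuilder(labels=["label1", "label2"], other_config_args=None)
estimator = builder.build_estimator(func_args=None)
# estimator.train(input_fn=train_input_fn)  # input_fn supplied by your data pipeline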
I previously created metrics for a TensorFlow Classifier referencing predictions['logits'] to calculate the metrics. I have changed the model from a Classifier to an Estimator in order to enable multi-objective learning (using MultiHead). However, this has caused Python to throw an error since now the elements of predictions are keyed by pairs of the head name and original key, e.g. ('label1','logits') for a head with name 'label1'. I'd like to allow for dynamic generation of metrics based on a configuration file in order to more easily train and test a variety of models with different label combinations. The problem now is that the metric_fn parameter to tf.estimator.add_metrics does not take any additional parameters to allow for dynamically determined or constructed metrics. How can I generate an estimator with multiple heads and custom metrics for each head?
0
1
373
0
61,153,120
0
0
0
0
3
false
0
2020-01-25T09:49:00.000
0
4
0
Thonny : installing tensorflow and importing it
59,908,131
0
python,tensorflow,machine-learning,data-science,thonny
TensorFlow can be installed in Thonny via Tools -> Open System Shell, then running: pip install --upgrade tensorflow
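Once the shell install finishes, a quick check that the package is visible to Thonny's interpreter (the printed version will vary):

import tensorflow as tf
print(tf.__version__)  # a version string here means the install succeeded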
I am having trouble importing and installing tensorflow. I can't install it via Thonny's manage-packages option nor via the Windows command prompt; I get the same error both ways: ERROR: Could not find a version that satisfies the requirement tensorflow (from versions: none) Error: No matching distribution found for tensorflow I tried switching back to Python 3.6 but the issue still arises. This is annoying me because I cannot implement machine learning, which is something I am strongly passionate about. Any reasons or solutions would be appreciated.
0
1
1,448