Dataset schema (column, dtype, observed minimum and maximum; for string columns the values are character lengths):

Column | Type | Min | Max
GUI and Desktop Applications | int64 | 0 | 1
A_Id | int64 | 5.3k | 72.5M
Networking and APIs | int64 | 0 | 1
Python Basics and Environment | int64 | 0 | 1
Other | int64 | 0 | 1
Database and SQL | int64 | 0 | 1
Available Count | int64 | 1 | 13
is_accepted | bool | 2 classes |
Q_Score | int64 | 0 | 1.72k
CreationDate | string (lengths) | 23 | 23
Users Score | int64 | -11 | 327
AnswerCount | int64 | 1 | 31
System Administration and DevOps | int64 | 0 | 1
Title | string (lengths) | 15 | 149
Q_Id | int64 | 5.14k | 60M
Score | float64 | -1 | 1.2
Tags | string (lengths) | 6 | 90
Answer | string (lengths) | 18 | 5.54k
Question | string (lengths) | 49 | 9.42k
Web Development | int64 | 0 | 1
Data Science and Machine Learning | int64 | 1 | 1
ViewCount | int64 | 7 | 3.27M
0
59,083,949
0
0
0
0
1
false
0
2019-11-28T07:38:00.000
0
1
0
How to find anomalies in wind-sensor TimeSeries data?
59,083,825
0
python,machine-learning,deep-learning,time-series,data-science-experience
Very broad question, so this will be a generic/broad answer: to define anomalies you'll need to think about and define what you consider normal. Usually we consider two things in (time series) data. Data availability: is the data there that you expect? This is usually monitored by looking at a row count over time (are you inserting more or less data than expected?); counting null values can also be used here, but that already leads into the second question. Data quality: are the values in ranges you expect? Are they in the type/format you expect, etc.? You can use standard deviations/variance/normal distribution to monitor this, or set hard limits and define which values you accept/expect (a min and max, for instance).
I have time series data set which contain TimeStamp[hour base] and wind sensor value. I need to find anomalies from this data set. What are the techniques to find out anomalies ? How to find anomalies with only these two features ( TimeStamp, sensor-value ) ?
0
1
50
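To make the availability and quality checks from the answer above concrete, here is a minimal sketch using pandas. The file name, the column names (timestamp, value) and the 3-sigma threshold are assumptions for illustration, not part of the original post.

```python
import pandas as pd

# Hypothetical hourly wind-sensor data with columns "timestamp" and "value".
df = pd.read_csv("wind.csv", parse_dates=["timestamp"]).set_index("timestamp")

# Data availability: count readings per day and flag days with fewer than expected.
daily_counts = df["value"].resample("D").count()
missing_days = daily_counts[daily_counts < 24]

# Data quality: flag points more than 3 standard deviations from a rolling mean.
rolling_mean = df["value"].rolling("7D").mean()
rolling_std = df["value"].rolling("7D").std()
anomalies = df[(df["value"] - rolling_mean).abs() > 3 * rolling_std]

print(missing_days.head())
print(anomalies.head())
```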
0
59,084,811
0
0
0
0
1
false
0
2019-11-28T07:51:00.000
0
1
0
How to Execute R Machine Learning Model using Python REST API?
59,084,012
0
python,r,rest,api,rpy2
You will need a Python based REST framework like Flask(Django or Pyramid will also do) to do this. You need to understand how to write a REST API by going through their respective docs. You can basically hide the R model behind a REST Resource. This resource will be responsible in receiving the inputs for the model. Using the RPY library you can invoke the model by passing required parameters and return a response to the API client.
I need some help in R Machine Learning model execution by a Python REST API. I have 2 executable ML model, developed in R Language by my colleague. Now I need to deploy those models as a REST API, as we need to pass some input parameter and take output from those Models as a return statement. I found, we can do this with Python, using RPY2 or Plumber Libraries. So I tried to implement Flask REST API for R model deployment, but not getting the exact reference from google for my challenges and for learning. I am new in R-Language, just in last 2 weeks, I have explored some basis of R. Can someone please share me some reference to my query or any other approach or code reference to implement PYTHON REST API to execute R model by passing some input field. Thanks in advance
0
1
331
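A minimal sketch of the approach described in the answer above: a Flask endpoint that loads an R model through rpy2 and exposes it over REST. The R script name ("model.R"), the R function name ("predict_score") and the JSON field ("features") are placeholders you would replace with your colleague's actual model.

```python
from flask import Flask, request, jsonify
import rpy2.robjects as robjects

app = Flask(__name__)

# Load the R script once at startup and grab the prediction function it defines.
robjects.r["source"]("model.R")
predict_score = robjects.globalenv["predict_score"]

@app.route("/predict", methods=["POST"])
def predict():
    payload = request.get_json()
    # Pass the input parameters to the R function and return its result as JSON.
    result = predict_score(robjects.FloatVector(payload["features"]))
    return jsonify({"prediction": list(result)})

if __name__ == "__main__":
    app.run()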
0
59,116,976
0
0
0
0
2
false
0
2019-11-28T12:32:00.000
1
2
0
NN multidim regression with matrix as output
59,089,002
0.099668
python,tensorflow,keras,neural-network,regression
This is the answer to a question regarding unknown labels. You have to know labels before using any supervised algorithm. Otherwise, there is no way you can train a model. You need to think of solving this problem by employing one of the unsupervised techniques, such as k-means algorithm, Gaussian Mixture Models, or Classification And Regression Trees, etc. For example, one of the suggestions I would give is to try k-means with a fixed number of k, which is 1000 in your case, run the algorithm a couple of times and see if centroids are any close to elements in your output. Then, you could classify your output based on how close they are to one of the nearest centroids. Then all elements of input belonging to individual centroids would be classified as such. EDIT. After reconsidering your example, I think that perhaps k-NN would be much more helpful to your problem. In k-NN, outputs are considered as neighbours. Each point from the input is assigned to the closest neighbour. In the end, you already have the output but you don't know how to map all elements in both the input and the output. I have just realised your problem is "the mapping". It is a good chance k-NN would solve that meaning it would create labels for all elements in the input that correspond to elements in the output. Once it is done, the Neural Network model can be trained.
I want to build a simple NN for regression purposes; the dimension of my input data reads (100000,3): meaning I have 1mio particles and their corresponding x,y,z coordinates. Out of these particles I want to predict centers which the particles correspond to where the data of the centers read (1000,3). my question is: since the input array should have the same number of samples as target arrays how can I solve this problem? actually my mapping is from (100000,3) -> (1000,3) because on average about 100 particles belong to one center. to train the model i will use many of those datasets with the right centers as output; after that i want to predict out of one new set of particle coordinates the corresponding centers.
0
1
68
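A rough sketch of the k-means suggestion from the answer above, assuming the particles sit in a (100000, 3) NumPy array; the fixed k of 1000 mirrors the number of centers mentioned in the question.

```python
import numpy as np
from sklearn.cluster import KMeans

particles = np.random.rand(100000, 3)  # placeholder for the real particle coordinates

# Cluster the particles into 1000 groups; the centroids approximate the centers,
# and labels_ assigns each particle the index of the centroid it belongs to.
kmeans = KMeans(n_clusters=1000, n_init=3, random_state=0).fit(particles)
centers = kmeans.cluster_centers_        # shape (1000, 3)
particle_to_center = kmeans.labels_      # shape (100000,), usable as per-particle labels
```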
0
59,106,465
0
0
0
0
2
false
0
2019-11-28T12:32:00.000
1
2
0
NN multidim regression with matrix as output
59,089,002
0.099668
python,tensorflow,keras,neural-network,regression
All you have to do is to match the sizes. Assuming you know what particle belongs to what center that shouldn't be too hard. So in your case you should have a matrix of (1000000,3)(atoms) and a vector of (1000000,)(centers) as their labels. This means that each entry in the vector corresponds to one row in the atom matrix.
I want to build a simple NN for regression purposes; the dimension of my input data reads (100000,3): meaning I have 1mio particles and their corresponding x,y,z coordinates. Out of these particles I want to predict centers which the particles correspond to where the data of the centers read (1000,3). my question is: since the input array should have the same number of samples as target arrays how can I solve this problem? actually my mapping is from (100000,3) -> (1000,3) because on average about 100 particles belong to one center. to train the model i will use many of those datasets with the right centers as output; after that i want to predict out of one new set of particle coordinates the corresponding centers.
0
1
68
0
59,095,070
0
0
0
0
1
false
2
2019-11-28T16:34:00.000
0
2
0
Is there any supervised clustering algorithm or a way to apply prior knowledge to your clustering?
59,093,163
0
python,machine-learning,cluster-analysis,unsupervised-learning,supervised-learning
A standard approach would be to use the dendrogram. Then merge branches only if they agree with your positive examples and don't violate any of your negative examples.
In my case I have a dataset of letters and symbols, detected in an image. The detected items are represented by their coordinates, type (letter, number etc), value, orientation and not the actual bounding box of the image. My goal is, using this dataset, to group them into different "words" or contextual groups in general. So far I achieved ok-ish results by applying classic unsupervised clustering, using DBSCAN algorithm, but still this is way tοo limited on the geometric distance of the samples and so the resulting groups cannot resemble the "words" I am aiming for. So I am searching for a way to influence the results of the clustering algorithm by using the knowledge I have about the "word-like" nature of the clusters needed. My possible approach that I thought was to create a dataset of true and false clusters and train an SVM model (or any classifier) to detect whether a proposed cluster is correct or not. But still for this, I have no solid proof that I can train a model well enough to discriminate between good and bad clusters, plus I find it difficult to efficiently and consistently represent the clusters, based on the features of their members. Moreover, since my "testing data" will be a big amount of all possible combinations of the letters and symbols I have, the whole approach seems a bit too complicated to attempt implementing it without any proof or indications that it's going to work in the end. To conclude, my question is, if someone has any prior experience with that kind of task (in my mind sounds rather simple task, but apparently it is not). Do you know of any supervised clustering algorithm and if so, which is the proper way to represent clusters of data so that you can efficiently train a model with them? Any idea/suggestion or even hint towards where I can research about it will be much appreciated.
0
1
1,973
0
59,107,037
0
0
0
0
1
false
0
2019-11-29T08:58:00.000
0
1
0
Is there a way to use pre-trained R ML model in python web app?
59,101,644
0
python,r,machine-learning,web-applications
Not sure what calling R code from Python has to do with ML models. If you have a trained model, you can try converting it into ONNX format (emerging standard), and try using the result from Python.
More of a theoretical question: Use case: Create an API that takes json input, triggers ML algorithm inside of it and returns result to the user. I know that in case of python ML model, I could just pack whole thing into pickle and use it easily inside of my web app. The problem is that all our algorithms are currently written in R and I would rather avoid re-writing them to python. I have checked a few libraries that allow to run R code within python but I cannot find a way to pack it "in a pickle way" and then just utilize. It may be stupid but I have not had much to do with R so far. Thank you in advance for any suggestions!
0
1
60
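If the R model can indeed be exported to ONNX as the answer suggests, scoring it from Python might look like the sketch below; the file name and the input tensor name are assumptions that depend on how the model was exported.

```python
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("model.onnx")   # placeholder file name
input_name = session.get_inputs()[0].name      # actual name is set at export time

features = np.array([[1.0, 2.0, 3.0]], dtype=np.float32)
prediction = session.run(None, {input_name: features})
print(prediction)
```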
0
59,104,620
0
0
0
0
1
false
3
2019-11-29T12:10:00.000
-1
3
0
Effective way to map 15k cities in Python
59,104,543
-0.066568
python,algorithm,sorting,dataframe
Try mapping by the first letters of each city; that will reduce your workload.
I have a data set of around 15k observations. This observations are city names from all over the world. This Data set has been populated by people from many different countries which means that i have several duplicates of the same city in different languages. see below DF extract: city_name bruselas brussel brussels brussels brussels auderghem bruxelles bruxelles belgium munchen munchenstein munchwilen munderkingen mundolsheim mungia munguia munich munich munich munich germany munich munchen munich rupert mayer strasse The task is to map all cities in the DF to its english name but, becaue the cities are in different format and in different languages i am finding it very difficult to come up with a solution other than perform this task manually which is not productive as we have 15,000+ observations to go through. The final data set should look something like this(using a few of the observations above only): city_name mapped_city brussels auderghem Brusels bruxelles Brusels bruxelles belgium Brusels munchen Munich munich germany Munich Any help would be greatly appreciated
0
1
333
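Building on the suggestion above to narrow the search by first letters, here is a sketch that buckets names by prefix and then uses difflib from the standard library to match within each bucket. The reference list of canonical English names is a placeholder you would have to supply, and the similarity cutoff is arbitrary.

```python
import difflib
from collections import defaultdict

cities = ["bruselas", "brussel", "brussels", "bruxelles belgium",
          "munchen", "munich germany"]
reference = ["brussels", "munich"]  # placeholder list of canonical English names

# Bucket the reference names by their first two letters to cut down comparisons.
buckets = defaultdict(list)
for name in reference:
    buckets[name[:2]].append(name)

mapped = {}
for city in cities:
    candidates = buckets.get(city[:2].lower(), reference)
    match = difflib.get_close_matches(city.lower(), candidates, n=1, cutoff=0.4)
    mapped[city] = match[0] if match else None

print(mapped)
```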
0
59,178,620
0
0
0
0
1
true
4
2019-11-30T03:20:00.000
1
1
1
Encountering "Could not create cudnn handle: CUDNN_STATUS_NOT_INITIALIZED" on a previously working system
59,112,898
1.2
python,tensorflow
There was indeed a system-wide upgrade. Updating CUDA to 10.2, the NVIDIA driver to 440, and libcudnn7 to 7.6.5 fixed the problem.
Everything was ok around a week ago. Even though I am running on a server, I really don't think much has changed. Wonder what could have caused it. Tensorflow has version 2.1.0-dev20191015 Anyway, here is the GPU status: NVIDIA-SMI 430.50 Driver Version: 430.50 CUDA Version: 10.1 Epoch 1/5 2019-11-29 22:08:00.334979: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10.0 2019-11-29 22:08:00.644569: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7 2019-11-29 22:08:00.647191: E tensorflow/stream_executor/cuda/cuda_dnn.cc:329] Could not create cudnn handle: CUDNN_STATUS_NOT_INITIALIZED 2019-11-29 22:08:00.647309: E tensorflow/stream_executor/cuda/cuda_dnn.cc:337] Possibly insufficient driver version: 430.50.0 2019-11-29 22:08:00.647347: W tensorflow/core/framework/op_kernel.cc:1655] OP_REQUIRES failed at cudnn_rnn_ops.cc:1510 : Unknown: Fail to find the dnn implementation. 2019-11-29 22:08:00.647393: W tensorflow/core/common_runtime/base_collective_executor.cc:217] BaseCollectiveExecutor::StartAbort Unknown: Fail to find the dnn implementation. At the end, I get: UnknownError: [_Derived_] Fail to find the dnn implementation. [[{{node CudnnRNN}}]] [[sequential/bidirectional/forward_lstm/StatefulPartitionedCall]] [Op:__inference_distributed_function_18158] Function call stack: distributed_function -> distributed_function -> distributed_function The code gets traced back to here: 174 history = model.fit(training_input, training_output, epochs=EPOCHES, 175 batch_size=BATCH_SIZE, --> 176 validation_split=0.1) Thank you.
0
1
389
0
59,117,449
0
1
0
0
1
true
0
2019-11-30T10:43:00.000
1
3
0
ModuleNotFoundError: No module named 'tensorflow.python' Anaconda
59,115,365
1.2
python,tensorflow,anaconda
Apparently the reason was the Python version (which is strange as according to documentation Tensorflow supports Python 3.7). I downgraded to 3.6 and I am able to import Tensorflow again
I have been using Tensorflow on Anaconda for a while now, but recently I have been getting the mentioned error when trying to import Tensorflow. This has been asked here multiple times, so I tried suggested solutions but nothing worked so far (reinstalling tensorflow (both normal and gpu versions), reinstalling Anaconda). When running help('modules') tensorflow appears in module list. But even after I run pip uninstall tensorflow and pip uninstall tensorflow-gpu tensorflow still remains in module list when running help('modules'). What can I do to fix this?
0
1
1,414
0
59,123,027
0
1
0
0
3
false
2
2019-11-30T13:06:00.000
0
4
0
No module named 'tensorflow.python.tools'; 'tensorflow.python' is not a package
59,116,456
0
python,tensorflow
Found a rookie mistake: I had named my file csv.py, which shadows a module that already exists in the Python standard library and, I think, was messing up the import paths. I don't know exactly how yet.
Everything was working smoothly until I started getting the following error: Traceback (most recent call last): File "", line 1, in File "/home/user/Workspace/Practices/Tensorflow/tensorflow2/venv/lib/python3.7/site-packages/tensorflow/init.py", line 98, in from tensorflow_core import * File "/home/user/Workspace/Practices/Tensorflow/tensorflow2/venv/lib/python3.7/site-packages/tensorflow_core/init.py", line 40, in from tensorflow.python.tools import module_util as _modle_util ModuleNotFoundError: No module named 'tensorflow.python.tools'; 'tensorflow.python' is not a package My environment setup: python-3.7 Using venv module to create virtual environment tensorflow 2.0.0 pip 19.0.3 Manjaro Linux Now, I even can't import tensorflow module as well. It gives same above error. Tried reinstalling with cache and without cache as well, but no luck. Recreated virtual environment as well, no luck. This is really weird and have no clue where to start troubleshooting as well. Looking at virtual environment site packages, everything seems fine.
0
1
12,652
0
68,268,774
0
1
0
0
3
false
2
2019-11-30T13:06:00.000
0
4
0
No module named 'tensorflow.python.tools'; 'tensorflow.python' is not a package
59,116,456
0
python,tensorflow
You don't need to uninstall whichever TensorFlow version you have, because it will take time to reinstall. You can fix this issue just by installing tensorflow==2.0: pip install tensorflow==2.0
Everything was working smoothly until I started getting the following error: Traceback (most recent call last): File "", line 1, in File "/home/user/Workspace/Practices/Tensorflow/tensorflow2/venv/lib/python3.7/site-packages/tensorflow/init.py", line 98, in from tensorflow_core import * File "/home/user/Workspace/Practices/Tensorflow/tensorflow2/venv/lib/python3.7/site-packages/tensorflow_core/init.py", line 40, in from tensorflow.python.tools import module_util as _modle_util ModuleNotFoundError: No module named 'tensorflow.python.tools'; 'tensorflow.python' is not a package My environment setup: python-3.7 Using venv module to create virtual environment tensorflow 2.0.0 pip 19.0.3 Manjaro Linux Now, I even can't import tensorflow module as well. It gives same above error. Tried reinstalling with cache and without cache as well, but no luck. Recreated virtual environment as well, no luck. This is really weird and have no clue where to start troubleshooting as well. Looking at virtual environment site packages, everything seems fine.
0
1
12,652
0
60,205,686
0
1
0
0
3
false
2
2019-11-30T13:06:00.000
1
4
0
No module named 'tensorflow.python.tools'; 'tensorflow.python' is not a package
59,116,456
0.049958
python,tensorflow
I just faced this problem right now. I ran the source code on another computer and it showed the same error. I went ahead and compared the version of TensorFlow and turns out that the other computer was running tensorflow==2.1.0 and mine was running tensorflow==1.14.0. In short, downgrade your tensorflow installation (pip install tensorflow==1.14.0)
Everything was working smoothly until I started getting the following error: Traceback (most recent call last): File "", line 1, in File "/home/user/Workspace/Practices/Tensorflow/tensorflow2/venv/lib/python3.7/site-packages/tensorflow/init.py", line 98, in from tensorflow_core import * File "/home/user/Workspace/Practices/Tensorflow/tensorflow2/venv/lib/python3.7/site-packages/tensorflow_core/init.py", line 40, in from tensorflow.python.tools import module_util as _modle_util ModuleNotFoundError: No module named 'tensorflow.python.tools'; 'tensorflow.python' is not a package My environment setup: python-3.7 Using venv module to create virtual environment tensorflow 2.0.0 pip 19.0.3 Manjaro Linux Now, I even can't import tensorflow module as well. It gives same above error. Tried reinstalling with cache and without cache as well, but no luck. Recreated virtual environment as well, no luck. This is really weird and have no clue where to start troubleshooting as well. Looking at virtual environment site packages, everything seems fine.
0
1
12,652
0
59,144,477
0
0
0
1
1
true
0
2019-12-02T16:47:00.000
3
1
0
Performance difference in json data into BigQuery loading methods
59,143,310
1.2
python,google-bigquery
They serve two different purposes and have their own limits. Loading from a file is great if you can place your data in files. A file can be up to 5 TB in size, the load is free, and you can query the data immediately after the job completes. The streaming insert is great if you have your data in the form of events that you can stream to BigQuery. While a single streaming request is limited to 10 MB, it can be heavily parallelized, up to 1 million rows per second, which is a big scale. Streaming rows to BigQuery has its own cost. You can query data immediately after streaming, but for some copy and export jobs the data can become available only later, up to 90 minutes.
What is the performance difference between two JSON loading methods into BigQuery: load_table_from_file(io.StringIO(json_data) vs create_rows_json The first one loads the file as a whole and the second one streams the data. Does it mean that the first method will be faster to complete, but binary, and the second one slower, but discretionary? Any other concerns? Thanks!
0
1
88
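For reference, the two paths compared in the answer above look roughly like this with the google-cloud-bigquery client. The table ID is a placeholder, the StringIO usage mirrors the question, and insert_rows_json is the newer name of the create_rows_json call mentioned there.

```python
import io
from google.cloud import bigquery

client = bigquery.Client()
table_id = "my-project.my_dataset.my_table"  # placeholder

# Batch load: free, handles very large files, queryable once the job completes.
json_data = '{"name": "a", "value": 1}\n{"name": "b", "value": 2}\n'
job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.NEWLINE_DELIMITED_JSON,
    autodetect=True,
)
load_job = client.load_table_from_file(io.StringIO(json_data), table_id,
                                        job_config=job_config)
load_job.result()  # wait for completion

# Streaming insert: billed per volume, rows become queryable almost immediately.
errors = client.insert_rows_json(table_id, [{"name": "c", "value": 3}])
print(errors)
```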
0
59,149,250
0
0
0
0
1
false
0
2019-12-03T01:23:00.000
0
1
0
Multiple Labels as Training data for ML
59,148,940
0
python,machine-learning,keras
You can use the apply_transform method of ImageDataGenerator, which lets you specify the parameters of the transformation you want while saving those parameters in a list or another structure, so you can use them later as features.
Using Keras in Python to create a CNN that pumps out the angle of rotation and zoom of an image. I am working on create the training data. I have a few questions though. I plan on using Keras Preprocessing as the tool to manipulate before I train, but is there a way to save what angle and zoom is used so that I can use those as the trainable parameters? If not, is there something easier to use?
0
1
44
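A small sketch of the apply_transform idea from the answer above: draw a random rotation and zoom, apply them, and keep the parameters so they can double as regression targets. The parameter ranges and the placeholder image are arbitrary.

```python
import numpy as np
from keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator()
image = np.random.rand(64, 64, 3)  # placeholder for a real training image

transformed, params = [], []
for _ in range(10):
    # Draw the transformation parameters ourselves so we can store them as labels.
    p = {"theta": np.random.uniform(-30, 30),   # rotation angle in degrees
         "zx": np.random.uniform(0.8, 1.2),     # zoom along x
         "zy": np.random.uniform(0.8, 1.2)}     # zoom along y
    transformed.append(datagen.apply_transform(image, p))
    params.append([p["theta"], p["zx"], p["zy"]])

X = np.stack(transformed)   # images for training
y = np.array(params)        # angle and zoom, usable as training targets
```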
0
59,160,973
0
0
0
0
1
false
0
2019-12-03T14:42:00.000
0
1
0
How to deal with different categories in pytorch train, test, and holdout set
59,159,578
0
python,pytorch
You can try to use one-hot encoding instead. (PS: this is a suggestion, not an answer.)
I have a tabular pytorch model that takes in cities and zipcodes as a categorical embeddings. However, I can't stratify effectively based on those columns. How can I get pytorch to run if it's missing a categorical value in the test set that's not in the train set, or has a categorical value in the holdout set that was not in the train/test set?
0
1
39
0
59,160,453
0
0
0
0
1
true
0
2019-12-03T15:24:00.000
2
2
0
Is there a function to convert a string into a number and back for machine learning
59,160,301
1.2
python,pandas,keras
You should consider a one-hot encoding, which can be done easily with pandas via the get_dummies function. This will create binary columns for each "category" (i.e. unique string).
I have a lot of strings in a pandas dataframe, I want to assign every string a number for keras. the string represent a location: CwmyNiVcURtyAf+o/6wbAg== I want to turn it into a number and back again. I'm using keras, tensorflow and pandas. Does one of the modules contain a function which does that? Or do I have to write a hashtable? Like this: CwmyNiVcURtyAf+o/6wbAg== => 1 CwmyUSVcbBtiBQEkAN4bVbA= => 2 CwmypSVdCRNYBv4MAFUTSRY= => 3 CwnBoiVCjRNPBAAJ/ysTHw== => 4 CwnBoiVCjRNfBv5QAEITCA== => 5 CwmyUSVcbBtiBQEkAN4bVbA= => 2 I have ~8000 locations and each location is 15 times in the Dataframe
0
1
373
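Beyond get_dummies as suggested in the answer above, if a plain reversible string-to-integer mapping is all that is needed, pandas can also provide that directly; here is a minimal sketch with made-up location strings.

```python
import pandas as pd

df = pd.DataFrame({"location": ["CwmyNiVcURtyAf+o/6wbAg==",
                                "CwmyUSVcbBtiBQEkAN4bVbA=",
                                "CwmyNiVcURtyAf+o/6wbAg=="]})

# Option 1: one-hot encoding, one binary column per unique string.
one_hot = pd.get_dummies(df["location"])

# Option 2: a reversible integer code per unique string.
codes, uniques = pd.factorize(df["location"])
df["location_id"] = codes            # 0, 1, 0
back_to_strings = uniques[codes]     # maps the integers back to the original strings
```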
0
59,172,700
0
0
0
0
1
false
0
2019-12-04T09:22:00.000
0
2
0
Emphasis on a feature while training a vanilla nn
59,172,607
0
python-3.x,machine-learning,scikit-learn,neural-network,hyperparameters
First of all, I would make sure that this feature alone has a decent prediction probability, but I am assuming that you already made sure of it. Then, one approach that you could take, is to "embed" your 359 other features in a first layer, and only feed in your special feature once you have compressed the remaining information. Contrary to what most tutorials make you believe, you do not have to add in all features already in the first layer, but can technically insert them at any point in time (or even multiple times). The first layer that captures your other inputs is then some form of "PCA approximator", where you are embedding a high-dimensional feature space (359 dimensions) into something that is less dominant over your other feature (maybe 20-50 dimensions as a starting point?) Of course there is no guarantee that this will work, but you might have a much better chance of getting attention on your special feature, although I am fairly certain that in general you should still see an increase in performance if the single feature is strongly enough correlated with your output. The other question that is still open is the kind of task you are training for, i.e., whether you are doing some form of classification (if so, how many classes?), or regression. This might also influence architectural choices, and the amount of focus you can/should put on a single feature.
I have some 360 odd features on which I am training my neural network model. The accuracy I am getting is abysmally bad. There is one feature amongst the 360 that is more important than the others. Right now, it does not enjoy any special status amongst the other features. Is there a way to lay emphasis on one of the features while training the model? I believe this could improve my model's accuracy. I am using Python 3.5 with Keras and Scikit-learn. EDIT: I am attempting a regression problem Any help would be appreciated
0
1
28
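A sketch of the two-branch idea described in the answer above, using the Keras functional API: the 359 ordinary features are compressed first and the important feature is concatenated afterwards. The layer sizes are arbitrary, and the task is assumed to be regression, as stated in the question's edit.

```python
from keras.layers import Input, Dense, Concatenate
from keras.models import Model

other_features = Input(shape=(359,), name="other_features")
special_feature = Input(shape=(1,), name="special_feature")

# Compress the 359 features so they don't dominate the single important one.
compressed = Dense(32, activation="relu")(other_features)
merged = Concatenate()([compressed, special_feature])

hidden = Dense(16, activation="relu")(merged)
output = Dense(1, activation="linear")(hidden)

model = Model(inputs=[other_features, special_feature], outputs=output)
model.compile(optimizer="adam", loss="mse")
# model.fit([X_other, X_special], y, ...) with arrays of shape (n, 359) and (n, 1)
```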
0
59,182,082
0
0
0
0
1
false
1
2019-12-04T16:14:00.000
1
1
0
How to represent the trend (upward/downward/no change) of data?
59,180,323
0.197375
python,pandas,math,regression,data-science
If you expect the trend to be linear, you could fit a linear regression to each row separately, using time to predict number of occurences of a behavior. Then store the slopes. This slope represents the effect of increasing time by 1 episode on the behavior. It also naturally accounts for the difference in length of the time series.
I have a dataset where each row represents the number of occurrences of certain behaviour. The columns represent a window of a set amount of time. It looks like this: +----------+----------+----------+----------+-----------+------+ | Episode1 | Episode2 | Episode3 | Episode4 | Episode5 | ... | +----------+----------+----------+----------+-----------+------+ | 2 | 0 | 1 | 3 | | | | 1 | 2 | 4 | 2 | 3 | | | 0 | | | | | | +----------+----------+----------+----------+-----------+------+ There are over 150 episodes. I want to find a way to represent each row as a trend, whether the occurrences are exhibiting more/less. I have tried to first calculate the average/median/sum of every 3/5/10 cells of each row (because each row has different length and many 0 values), and use these to correlate with a horizontal line (which represent the time), the coefficients of these correlations should tell the trend (<0 means downward, >0 upward). The trends will be used in further analysis. I'm wondering if there is a better way to do this. Thanks.
0
1
192
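A sketch of the per-row linear trend described in the answer above, using np.polyfit and ignoring the missing episodes at the end of each row; the column names follow the example table in the question.

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"Episode1": [2, 1, 0], "Episode2": [0, 2, np.nan],
                   "Episode3": [1, 4, np.nan], "Episode4": [3, 2, np.nan],
                   "Episode5": [np.nan, 3, np.nan]})

def row_slope(row):
    y = row.dropna().values
    if len(y) < 2:
        return np.nan              # not enough points to fit a trend
    x = np.arange(len(y))          # episode index acts as the time variable
    return np.polyfit(x, y, 1)[0]  # slope of the fitted line

df["trend"] = df.apply(row_slope, axis=1)
print(df["trend"])  # > 0 means upward, < 0 means downward
```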
0
59,188,878
0
0
0
0
1
false
0
2019-12-05T04:34:00.000
0
1
0
Categorical variables with only two values
59,188,318
0
python-3.x,encoding,categorical-data,one-hot-encoding,labeling
There are only a few cases where LabelEncoder is useful, because of the ordinality issue. If your categorical features are ordinal, use LabelEncoder; otherwise use one-hot encoding. One-hot encoding does increase the dimensionality, and in that case I typically follow it with PCA for dimensionality reduction.
I am dealing with different datasets that have only Categorical variables/features with only two values such as (temperature = 'low' and 'high') or (light = 'on' and 'off' or '0' and '1'). I am not really sure whether to use "one-hot encoding" or "Label Encoding" method to train my models. I am working on a classification problem and using some supervised machine learning algorithms. I used "Label Encoding" and I got a pretty good result. I feel there could be something that I did was wrong. I am not sure if I should use "one-hot encoding" or not. In case of Categorical variables with only two values which method should I use to convert the variables?
0
1
239
0
59,196,692
0
0
0
0
1
false
3
2019-12-05T06:08:00.000
0
3
0
Find the maximum result after collapsing an array with subtractions
59,189,207
0
python,arrays,algorithm
The other answers are fine, but here's another way to think about it: if you expand the result into individual terms, you want all the positive numbers to end up as additive terms and all the negative numbers to end up as subtractive terms. If you have both signs available, this is easy: subtract all but one of the positive numbers from a negative number, then subtract all of the negative numbers from the remaining positive number. If all your numbers have the same sign, pick the one with the smallest absolute value and treat it as having the opposite sign in the above procedure. That works out to: if you have only negative numbers, subtract them all from the least negative one; if you have only positive numbers, subtract all but one from the smallest, then subtract the result from the remaining one.
Given an array of integers, I need to reduce it to a single number by repeatedly replacing any two numbers with their difference, to produce the maximum possible result. Example1 - If I have array of [0,-1,-1,-1] then performing (0-(-1)) then (1-(-1)) and then (2-(-1)) will give 3 as maximum possible output Example2- [3,2,1,1] we can get maximum output as 5 { first (1-1) then (0-2) then (3-(-2)} Can someone tell me how to solve this question?
0
1
113
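The procedure in the answer above can be written out directly; a short sketch with the question's two examples as checks.

```python
def max_after_collapsing(nums):
    if len(nums) == 1:
        return nums[0]
    positives = [n for n in nums if n >= 0]
    negatives = [n for n in nums if n < 0]
    if positives and negatives:
        # Every positive can add and every negative subtract: sum of absolute values.
        return sum(abs(n) for n in nums)
    if positives:
        # All non-negative: the smallest value has to absorb the minus signs.
        return sum(positives) - 2 * min(positives)
    # All negative: subtract everything from the least negative number.
    return sum(-n for n in negatives) + 2 * max(negatives)

print(max_after_collapsing([0, -1, -1, -1]))  # 3
print(max_after_collapsing([3, 2, 1, 1]))     # 5
```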
0
59,281,571
0
0
0
0
1
false
2
2019-12-05T10:37:00.000
4
1
0
Is it Valid to Aggregate SHAP values to Sets of of Features?
59,193,277
0.664037
python,shap
From Lundberg, package author: "The short answer is yes, you can add up SHAP values across the columns to get the importance of a whole group of features (just make sure you don't take the absolute value like we do when going across rows for global feature importance). The long answer is that when Shapley values "fairly" allocate credit for interaction effects between features, they assume each feature in an interaction effect should get equal credit for the interaction. This means that for high order interaction terms you might get slightly different results when running Shapley values before (and summing) vs. after grouping features (since the new group just gets one chunk of the interaction pie so to speak, as opposed to multiple chunks when it was several features). These differences are typically small though so I wouldn't sweat it much since both ways are reasonable."
SHAP values seem to be additive and e.g. the overall feature importance plot simply adds the absolute SHAP values per feature and compares them. This allows us to use SHAP for global importance aswell as local importance. We could also get feature importance for a particular subset of data records the same way. By the same token, is it valid to get aggregate SHAP values for sets of variables? e.g. "Height", "Weight" and "Eye Colour" into "HumanDescription" or "Temperature", "Humidity" and "Air-Pressure" into "Weather"and rank them accordingly. Theoretically, I can't see why not but would appreciate feedback on this in case of any gotchas.
0
1
1,514
0
59,200,836
0
0
0
0
2
false
2
2019-12-05T17:41:00.000
0
3
0
Combining logistic and continuous regression with scikit-learn
59,200,594
0
python,machine-learning,scikit-learn,regression
If your target data Y has multiple columns you need to use multi-task learning approach. Scikit-learn contains some multi-task learning algorithms for regression like multi-task elastic-net but you cannot combine logistic regression with linear regression because these algorithms use different loss functions to optimize. Also, you may try neural networks for your problem.
In my dataset X I have two continuous variables a, b and two boolean variables c, d, making a total of 4 columns. I have a multidimensional target y consisting of two continuous variables A, B and one boolean variable C. I would like to train a model on the columns of X to predict the columns of y. However, having tried LinearRegression on X it didn't perform so well (my variables vary several orders of magnitude and I have to apply suitable transforms to get the logarithms, I won't go into too much detail here). I think I need to use LogisticRegression on the boolean columns. What I'd really like to do is combine both LinearRegression on the continuous variables and LogisticRegression on the boolean variables into a single pipeline. Note that all the columns of y depend on all the columns of X, so I can't simply train the continuous and boolean variables independently. Is this even possible, and if so how do I do it?
0
1
458
0
59,203,141
0
0
0
0
2
false
2
2019-12-05T17:41:00.000
0
3
0
Combining logistic and continuous regression with scikit-learn
59,200,594
0
python,machine-learning,scikit-learn,regression
What I understand you want to do is train a single model that predicts both a continuous variable and a class. You would need to combine both losses into one single loss to do that, which I don't think is possible in scikit-learn. I suggest you use a deep learning framework (TensorFlow, PyTorch, etc.) to implement your own model with the properties you need, which would be more flexible. Solving the problem with a neural network may also improve your results.
In my dataset X I have two continuous variables a, b and two boolean variables c, d, making a total of 4 columns. I have a multidimensional target y consisting of two continuous variables A, B and one boolean variable C. I would like to train a model on the columns of X to predict the columns of y. However, having tried LinearRegression on X it didn't perform so well (my variables vary several orders of magnitude and I have to apply suitable transforms to get the logarithms, I won't go into too much detail here). I think I need to use LogisticRegression on the boolean columns. What I'd really like to do is combine both LinearRegression on the continuous variables and LogisticRegression on the boolean variables into a single pipeline. Note that all the columns of y depend on all the columns of X, so I can't simply train the continuous and boolean variables independently. Is this even possible, and if so how do I do it?
0
1
458
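A sketch of the multi-output network suggested in the answer above, built in Keras rather than scikit-learn: a linear head for the continuous targets (A, B) and a sigmoid head for the boolean target (C), each with its own loss. All layer sizes are placeholders.

```python
from keras.layers import Input, Dense
from keras.models import Model

inputs = Input(shape=(4,))                 # a, b, c, d from the question
shared = Dense(16, activation="relu")(inputs)

continuous_out = Dense(2, name="continuous")(shared)                  # predicts A, B
boolean_out = Dense(1, activation="sigmoid", name="boolean")(shared)  # predicts C

model = Model(inputs=inputs, outputs=[continuous_out, boolean_out])
# Each head gets its own loss; the total loss is their (optionally weighted) sum.
model.compile(optimizer="adam",
              loss={"continuous": "mse", "boolean": "binary_crossentropy"},
              loss_weights={"continuous": 1.0, "boolean": 1.0})
# model.fit(X, {"continuous": y_AB, "boolean": y_C}, ...)
```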
0
59,209,501
0
0
0
0
1
true
0
2019-12-06T08:07:00.000
0
1
0
The naming and the sorting of the trained RF model's features in Python
59,209,196
1.2
python,python-3.x,machine-learning,data-science,random-forest
The algorithm works independently of your column names. You can name your columns whatever you want in most algorithms (except fbprophet etc.). But there is one important point: when you want to predict on a dataset, you need to supply its columns in the same order as the training columns. In your case you can rename your columns f1, f2, f3... to abc1, abc2, def3... but you cannot shuffle their order.
So I have trained a RandomForest model on a fairly simple customer data. The prediction is either 1 or 0 telling if a customer will churn or not. Let's say I have 10 features called 'f1', 'f2', 'f3' and so on... As the model has already been trained I took another period of the similar data to see how the model performs. But in this data the features could be shuffled in a different way. (for example 'f3', 'f10', 'f1', ...). Will the model look at the name of the features or it won't matter for it and it will think that 'f1' is 'f3'? Let's say the type of the data is the same in each column. The reason I am asking this is because to check this theory I renamed 'f3' column name to 'a' and to my astonishment the model worked anyways. What are your thoughts?
0
1
110
0
59,209,729
0
0
0
0
1
false
0
2019-12-06T08:38:00.000
0
1
0
How to create blender with opencv-python
59,209,586
0
python,python-3.x,opencv
Patented stuff is usually just moved to the contrib repository, so you have to clone the original OpenCV repo, then add the contrib over and maybe modify a few compile options to get all your required things back and running.
I'm using opencv3.4 to stitch images with a lot of customization, but cv.detail_MultiBandBlender seems only avaiable in opencv 4.x but with 4.x surf is "patented and is excluded". Is there any hack so that I can use blender with opencv-python3.4?
0
1
427
0
59,212,992
0
0
0
0
2
false
1
2019-12-06T08:56:00.000
1
2
0
Will removing a column having same values for all observations affect my model?
59,209,830
0.099668
python,r,pandas,machine-learning,data-science
A machine learning model is essentially a mathematical equation y = f(x), where y is the target/dependent variable and x the independent variables (in our case a DataFrame containing the train/test data). Technically, an ML model estimates, for a given value of x, the probable output y. Now assume an entire column is constant. A relationship between y and a constant x is meaningless, because that x stays the same for every value of y; no mathematical relationship is possible unless y is also constant, which we can safely assume is not the case (otherwise why build a model to predict a constant?). Hence we can safely drop any constant column, since it adds no variation to the data and won't affect y in any sense, and dropping it saves computational time.
One of the columns in my dataset has the same value for all observations/rows. Should I remove that column while building a machine learning model? Will removing this column affect my model/performance metric? If I replace all the values with a different constant value, will it change the model/performance metric?
0
1
1,263
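As a practical complement to the answer above, dropping constant columns in pandas is a one-liner; the example data are made up.

```python
import pandas as pd

df = pd.DataFrame({"color": ["green", "green", "green"],
                   "weight": [150, 90, 120],
                   "label": [0, 1, 0]})

# Keep only the columns that take more than one distinct value.
df = df.loc[:, df.nunique(dropna=False) > 1]
print(df.columns.tolist())  # ['weight', 'label']
```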
0
59,210,087
0
0
0
0
2
false
1
2019-12-06T08:56:00.000
2
2
0
Will removing a column having same values for all observations affect my model?
59,209,830
0.197375
python,r,pandas,machine-learning,data-science
If one of your column in the dataset is having the same values, you can drop this column as it will not do any help to your model to differentiate between two different labels while on the other hand, it can even negatively affect your model by creating a bias in the data. For Example: Consider you have two different fruits, like one is Green Apple and one is Guava. Then, both of these fruits will have the same color i.e. "Green", so that basically means that you just can not differentiate both these fruits on the basis of their color, but if they have been two different colored fruits, you could have used this feature to differentiate between them. Hope it helps clarifying what you should do with such a column with same set of observations. Thanks.
One of the columns in my dataset has the same value for all observations/rows. Should I remove that column while building a machine learning model? Will removing this column affect my model/performance metric? If I replace all the values with a different constant value, will it change the model/performance metric?
0
1
1,263
0
64,245,990
0
0
0
0
1
false
12
2019-12-06T13:36:00.000
5
3
0
python tsne.transform does not exist?
59,214,232
0.321513
python,machine-learning
As the accepted answer says, there is no separate transform method and it probably wouldn't work in a train/test setting. However, you can still use TSNE without information leakage. Training time: calculate the TSNE per record on the training set and use it as a feature in the classification algorithm. Testing time: append your training and testing data and fit_transform the TSNE. Now continue processing your test set, using the TSNE as a feature on those records. Does this cause information leakage? No. Inference time: new records arrive, e.g. as images or table rows. Add the new row(s) to the training table, calculate TSNE (i.e. where the new sample sits in the space relative to your trained samples), perform any other processing and run your prediction against the row. It works fine. Sometimes we worry too much about the train/test split because of Kaggle etc., but the main thing is whether your method can be replicated at inference time, with the same expected accuracy, for live use. In this case, yes it can! The only drawback is that you need your training database available at inference time, and depending on its size the preprocessing might be costly.
I am trying to transform two datasets: x_train and x_test using tsne. I assume the way to do this is to fit tsne to x_train, and then transform x_test and x_train. But, I am not able to transform any of the datasets. tsne = TSNE(random_state = 420, n_components=2, verbose=1, perplexity=5, n_iter=350).fit(x_train) I assume that tsne has been fitted to x_train. But, when I do this: x_train_tse = tsne.transform(x_subset) I get: AttributeError: 'TSNE' object has no attribute 'transform' Any help will be appreciated. (I know I could do fit_transform, but wouldn't I get the same error on x_test?)
0
1
6,221
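A minimal sketch of the append-and-fit_transform workflow described in the answer above; the shapes and TSNE settings are arbitrary.

```python
import numpy as np
from sklearn.manifold import TSNE

x_train = np.random.rand(100, 10)
x_test = np.random.rand(20, 10)

# TSNE has no separate transform(), so embed train and test rows together.
combined = np.vstack([x_train, x_test])
embedding = TSNE(n_components=2, random_state=420, perplexity=5).fit_transform(combined)

x_train_tsne = embedding[:len(x_train)]   # extra features for the training rows
x_test_tsne = embedding[len(x_train):]    # corresponding features for the test rows
```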
0
59,246,265
0
1
0
0
1
true
0
2019-12-06T14:08:00.000
0
1
0
How to build whl package for pandas?
59,214,739
1.2
python,pandas,python-wheel
You cannot pack back an installed wheel. Either you download a ready-made wheel with pip download or build from sources: python setup.py bdist_wheel (need to download the sources first).
Hi I have a built up Python 2.7 environment with Ubuntu 19.10. I would like to build a whl package for pandas. I pip installed the pandas but do not know how to pack it into whl package. May I ask what I should do to pack it. Thanks
0
1
139
0
59,226,070
0
0
0
0
1
true
3
2019-12-07T08:56:00.000
6
1
0
ROS: Is ZeroMQ better for large data streams, e.g. raw images, than native image topic?
59,224,453
1.2
python,zeromq,ros
10E6 [B] over a private, 100% free 100E6 [b/s] channel takes no less than ~0.8 [s]; 5E6 [B] over the same channel takes no less than ~0.4 [s]. Q: What are the limitations in <something> on large data streams? Here we always fight a three-fold Devil mix of: Power (for data processing, a 10 [MB] -> 5 [MB] compression + RAM-I/O not being excluded) + Time (latency + jitter of the E2E data transport across the series of transport channels) + Errors (uncertainties of content delivery, completeness and authenticity over the E2E data-transport path). In the ROS domain, being a system for coordinated control-loops' constrained sub-systems, there is one more problem: not meeting the "in-time-ness" causes the control to fail into principally unstable territory. There are countless examples of what happens when this border has been crossed, from a production line falling into panic and an immediate emergency total-stop state, to damaged tools, products and equipment that keep colliding and crashing during still-continued operations (when collision detection and emergency total-stops were not implemented safely). Q: Would a separate protocol such as ZeroMQ be better for this task? ZeroMQ has excellent performance (it does not add much on the Time leg of the Devil mix, yet it always depends on having (in-)sufficient resources, the Power to process smoothly). ZeroMQ has excellent performance scalability, sure, if the Power leg of the Devil mix permits. ZeroMQ has excellent properties for the Errors leg of the Devil mix: we get a warranty of zero errors; it either delivers the message (the payload) as a bit-to-bit identical copy of the original content, or nothing at all. This warranty may look strange, a sure overkill for blurred or noisy images, yet it is a fair strategy for avoiding additional Power and Time-uncertainty issues due to error detection and limited recovery. It also leaves us free to choose how to handle (if needed), within given, constrained Time and Power domains, the main duty, the ROS control loops' stability, with missing or re-transmit-requested payloads, given Errors were indirectly detected from time-stamping or monotonic-ordinal indexing et al. ROS Topics, on the contrary, are limited to a single PUB/SUB formal communication-pattern archetype only and are fixed to use either a TCPROS transport class (ZeroMQ may use a faster, L3+-protocol-stack-less { inproc:// | ipc:// } or, if needed, even extend to mil-std guaranteed-delivery or distributed grid-computing tipc:// or hypervisor-orchestrated vmci:// transports) or UDPROS, which is currently available in roscpp only and lossy, but has lower latency compared to TCPROS.
Fairly new to ROS, but haven't been able to find this information searching around. We're building an instrument where we need to transfer large data streams over the network on a 100Mbit limited cable. Preferably we need to transfer RAW images (~10MB a piece) or we can do some lossless compression resulting in about 5MB a piece. Is this perfectly fine for ROS with native image topics, or would a separate protocol such as ZeroMQ be better for this task? What are the limitations in ROS on large data streams? Hope that someone with the knowledge could take a second to share some experience. Thanks!
0
1
1,161
0
60,907,594
0
0
0
0
1
false
0
2019-12-07T21:25:00.000
0
1
0
Can't read .avi files using Python OpenCV 4.1.2-dev
59,230,366
0
python-3.x,opencv,artificial-intelligence,opencv3.1,opencv4
Originally I used cv2.VideoWriter_fourcc(*'XVID') and got the same error; switching (*'XVID') to (*'MJPG') fixed it. I am using a Raspberry Pi Gen. 4 (4 GB) with the Raspbian Buster Lite image.
I wanna run my opencv3.1 programs, but when i try to read a file using cv2.VideoCapture shows me the error: error: (-5:Bad argument) CAP_IMAGES: can't find starting number (in the name of file): ./../images/walking.avi in function 'icvExtractPattern' But, when i using the camera with cv2.VideoCapture(0) it works perfectly. I verify the file path and using the relative and the absolute path, but still not working. I gonna wait for your answers. Thanks a lot
0
1
1,168
0
59,231,944
0
1
0
0
1
true
1
2019-12-08T02:25:00.000
5
3
0
Rounding large exponential numbers e.g. (6.624147...e+25 to 6.62e+25)
59,231,931
1.2
python,python-3.x,rounding
I think, if I understand your problem correctly, you could use float("%.2e" % x) This just converts the value to text, in exponential format, with two fractional places (so 'pi' would become "3.14e+00"), and then converts that back to float. It will work with your example, and with small numbers like 5.42242344e-30 For python 3.6+, it's better to use float(f"{x:.2e}") - thanks @gabriel-jablonski
I have a list of very large numbers I need to round. For example: 6.624147027484989e+25 I need to round to 6.62e25. However, np.around, math.ceiling, round(), etc... are not working. I'm thinking because instead of round 6.624147027484989e+25 to 6.62e25, it's just making it an integer while I actually need to make the entire number much smaller... if that makes sense.
0
1
179
0
59,236,950
0
0
0
0
2
true
6
2019-12-08T14:39:00.000
3
3
0
How to add a new class to an existing classifier in deep learning?
59,236,502
1.2
python,keras,deep-learning,multiclass-classification,online-machine-learning
You have probably used a softmax over a 3-neuron dense layer at the end of the architecture to classify into 3 classes. Adding a class means doing a softmax over a 4-neuron dense layer, so there is no way to accommodate that extra neuron in your current graph with frozen weights; you are modifying the graph, and hence you would have to train the whole model from scratch. Alternatively, you can load the model, remove the last layer, change it to 4 neurons and train the network again. This trains the weights of the last layer from scratch. I don't think there is any way to keep these last-layer weights intact while adding a new class.
I trained a deep learning model to classify the given images into three classes. Now I want to add one more class to my model. I tried to check out "Online learning", but it seems to train on new data for existing classes. Do I need to train my whole model again on all four classes or is there any way I can just train my model on new class?
0
1
5,204
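A sketch of the "replace the last layer" option from the accepted answer above, using the Keras functional API. It assumes the original model was saved to disk with a single input and a 3-way softmax as its final layer; the file name is a placeholder.

```python
from keras.models import Model, load_model
from keras.layers import Dense

old_model = load_model("three_class_model.h5")   # placeholder path

# Freeze the feature-extraction layers so only the new head is trained.
for layer in old_model.layers[:-1]:
    layer.trainable = False

# Attach a new 4-way softmax in place of the old 3-way output.
features = old_model.layers[-2].output
new_output = Dense(4, activation="softmax", name="new_head")(features)
new_model = Model(inputs=old_model.input, outputs=new_output)

new_model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])
# new_model.fit(X_all_four_classes, y_one_hot_4, ...)
```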
0
60,471,776
0
0
0
0
2
false
6
2019-12-08T14:39:00.000
3
3
0
How to add a new class to an existing classifier in deep learning?
59,236,502
0.197375
python,keras,deep-learning,multiclass-classification,online-machine-learning
You have to remove the final fully-connected layer, freeze the weights in the feature extraction layers, add a new fully-connected layer with four outputs and retrain the model with images of the original three classes and the new fourth class.
I trained a deep learning model to classify the given images into three classes. Now I want to add one more class to my model. I tried to check out "Online learning", but it seems to train on new data for existing classes. Do I need to train my whole model again on all four classes or is there any way I can just train my model on new class?
0
1
5,204
0
59,395,618
0
0
1
0
1
false
1
2019-12-08T17:44:00.000
1
2
0
Find how similar a text is - One Class Classifier (NLP)
59,238,140
0.099668
python,twitter,nlp,classification,text-classification
Sam H has a great answer about using your dataset as-is, but I would strongly recommend annotating data so you have a few hundred negative examples, which should take less than an hour. Depending on how broad your definition of "activism" is that should be plenty to make a good classifier using standard methods.
I have a big dataset containing almost 0.5 billions of tweets. I'm doing some research about how firms are engaged in activism and so far, I have labelled tweets which can be clustered in an activism category according to the presence of certain hashtags within the tweets. Now, let's suppose firms are tweeting about an activism topic without inserting any hashtag in the tweet. My code won't categorized it and my idea was to run a SVM classifier with only one class. This lead to the following question: Is this solution data-scientifically feasible? Does exists any other one-class classifier? (Most important of all) Are there any other ways to find if a tweet is similar to the ensable of tweets containing activism hashtags? Thanks in advance for your help!
0
1
540
0
59,262,244
0
0
0
0
1
false
0
2019-12-09T10:34:00.000
0
1
0
Existing Tensorflow model to use GPU
59,246,985
0
python,tensorflow
There is not enough information to give an exact answer. Have you installed tensorflow-gpu separately? Check using pip list. Initially you were using tensorflow, which defaults to the CPU. Once you want to use the Nvidia GPU, make sure to install tensorflow-gpu. I have sometimes had problems with both installed at the same time; it would always go for the CPU. But once I removed the CPU build using "pip uninstall tensorflow" and kept only the GPU version, it worked for me.
I made a TensorFlow model without using CUDA, but it is very slow. Fortunately, I gained access to a Linux server (Ubuntu 18.04.3 LTS), which has a Geforce 1060, also the necessary components are installed - I could test it, the CUDA acceleration is working. The tensorflow-gpu package is installed (only 1.14.0 is working due to my code) in my virtual environment. My code does not contain any CUDA-related snippets. I was assuming that if I run it in a pc with CUDA-enabled environment, it will automatically use it. I tried the with tf.device('/GPU:0'): then reorganizing my code below it, didn't work. I got a strange error, which said only XLA_CPU, CPU and XLA_GPU is there. I tried it with XLA_GPU but didn't work. Is there any guide about how to change existing code to take advantage of CUDA?
0
1
38
0
59,253,034
0
0
0
0
1
true
0
2019-12-09T12:23:00.000
1
1
0
how to select the metric to optimize in sklearn's fit function?
59,248,882
1.2
python,machine-learning,optimization,scikit-learn
This is not possible with Support Vector Machines, as far as I know. With other models you might either change the loss that is optimized, or change the classification threshold on the predicted probability. SVMs however minimize the hinge loss, and they do not model the probability of classes but rather their separating hyperplane, so there is not much room for manual adjustements. If you need to focus on Sensitivity or Specificity, use a different model that allows maximizing that function directly, or that allows predicting the class probabilities (thinking Logistic Regressions, Tree based methods, for example)
When using tensorflow to train a neural network I can set the loss function arbitrarily. Is there a way to do the same in sklearn when training a SVM? Let's say I want my classifier to only optimize sensitivity (regardless of the sense of it), how would I do that?
0
1
396
0
59,252,280
0
0
0
0
1
false
0
2019-12-09T13:11:00.000
0
1
0
Applying identical Canny to two different images
59,249,664
0
python,opencv,canny-operator
To achieve comparable results you should resize the bigger image to the size of the smaller one. Image upscaling "creates" information that isn't contained in your image, which is why you see the blur. Using interpolation=cv2.INTER_AREA should deliver good results if you used a camera for image acquisition.
I have two images - the images are identical but of different sizes. Currently I complete a Canny analysis of the smaller image using track bars in an interactive environment. I want to have this output created on the second (larger) image - when I apply the same parameters the output is different I've tried to use cv.resize however the output is blurred significantly Any help is appreciated Thanks in advance
0
1
26
0
59,268,457
0
0
0
0
1
false
1
2019-12-09T13:36:00.000
2
2
0
How to add new Category in the CNN for Attendance by AI
59,250,070
0.197375
python,keras,neural-network,artificial-intelligence,conv-neural-network
You don't need classification; classification is not the solution for every problem. You should look into cosine similarity and Siamese networks. You can use existing models from FaceNet or OpenCV: since they are already trained on a huge dataset of faces, you can extract feature vectors easily. Store the feature vector for every new student, then compute the similarity (existing image vs. current image) based on a distance or similarity score and mark attendance accordingly. This is a scalable and much faster approach, with no training or retraining.
I am working on the project with the group and we have decided to make the project on the ' Automatic Attendance System by AI ' I have learned the CNNs to categorize the objects i.e dogs and cats. With that knowledge, we have decided to make the attendance system based on CNN. ( Please tell me if we shouldn't take this path or the technology if you find something bad here... ) But continuing it with CNN, let's say we have trained the model with 2 students, and on the last layer we put the two neurons as they are just two, right...? Now the third comes, now to train his face to the NN, I have to change the model's structure and retrain every faces again... If we apply the project to the big institute, where hundreds of students are there and if we want to train the model for each individual student, the nthis is not the feasible solution to recreate the model.. So we thought, we will fix the model's output layer size to let's say 50. So only 50 faces can be trained per model. But it is not always possible that there will always be 50. They may 40 or if one gets in with ne admission, then 41. So how to re-train the network with existing weights ? ( The same question is asked somewhere I know, but please direct me with my situation ) Or is there any other technology to use... ? Please Direct me...
0
1
99
0
68,878,419
0
1
0
0
1
false
3
2019-12-09T16:29:00.000
1
2
0
conda environment: does each new conda environment needs a new kernel to work? How can I have specific libraries for all my environments?
59,252,973
0.099668
python,anaconda,conda,windows-subsystem-for-linux,jupyter-lab
To my best understanding: You need ipykernel in each of the environments so that jupyter can import the other library. In my case, I have a new environment called TensorFlow, then I activate it and install the ipykernel, and then add it to jupyter kernelspec. Finally I can access it in jupyter no matter the environment is activated or not.
I use ubuntu (through Windows Subsystem For Linux) and I created a new conda environment, I activated it and I installed a library in it (opencv). However, I couldn't import opencv in Jupyter lab till I created a new kernel that it uses the path of my new conda environment. So, my questions are: Do I need to create a new kernel every time I create a new conda environment in order for it to work? I read that in general we should use kernels for using different versions of python, but if this is the case, then how can I use a specific conda environment in jupyter lab? Note that browsing from Jupyter lab to my new env folder or using os.chdir to set up the directory didn't work. Using the new kernel that it's connected to the path of my new environment, I couldn't import matplotlib and I had to activate the new env and install there again the matplotlib. However, matplotlib could be imported when I was using the default kernel Python3. Is it possible to have some standard libraries to use them with all my conda environments (i.e. install some libraries out of my conda environments, like matplotlib and use them in all my enviroments) and then have specific libraries in each of my environments? I have installed some libraries through the base environment in ubuntu but I can't import these in my new conda environment. Thanks in advance!
0
1
1,076
0
59,259,166
0
0
0
0
1
false
0
2019-12-09T22:58:00.000
0
2
0
Outlier detection DBSCAN
59,257,864
0
python,machine-learning,dataset,outliers,dbscan
You are describing a classification problem, not a clustering problem. Also, that data does not have a bottom of density, does it? Last but not least, (A) click fraud is heavily clustered, not outliers, (B) noise (low density) is not the same as an outlier (rare), and (C) first get the data, then speculate about possible algorithms, because what if you can't get the data?
I am working on a school project about outlier detection. I think I will create my own small dataset and use DBSCAN on it. The dataset will be about whether a click on an ad on a website is fraudulent or not. Below are the details of the dataset I am going to create. Dataset name: Cheat Ads Click detection. Columns: source (categorical) url: 0, redirect: 1, search: 2; visited_before (categorical) no: 0, few_times: 1, fan: 2; time_on_site (numerical) seconds the user spends on the site before leaving; active_type (categorical) fake_active: 0 (e.g. they just open the website, do nothing, and click the ad), normal_active: 1, real_active: 2 (maybe I will turn this into an activity score, a float from 0 to 10); Cheat (label, categorical) no: 0, yes: 1. Maybe I will add some more columns, like the number of times the user clicks on ads, ... My question is: do you think DBSCAN can work well on this dataset? If yes, can you please give me some tips to make a good dataset, or to create the dataset faster? And if not, please suggest some other datasets that DBSCAN works well on. Thank you so much.
0
1
1,315
0
59,265,331
0
1
0
0
1
false
2
2019-12-10T10:30:00.000
0
2
0
Numpy array comprehension
59,265,151
0
python,numpy,list-comprehension
A fundamental problem here is that numpy arrays are of static size whereas python lists are dynamic. Since the list comprehension doesn't know a priori how long the returned list is going to be, one necessarily needs to maintain a dynamic list throughout the generation process.
Is there a way to do a numpy array comprehension in Python? The only way I have seen it done is by using a list comprehension and then casting the result to a numpy array, e.g. np.array(list comprehension). I would have expected there to be a way to do it directly with numpy arrays, without using lists as an intermediate step. Also, is it possible to overload the list operators, i.e. [ and ], so that the result is a numpy array, not a list?
0
1
3,420
0
59,268,849
0
0
0
0
1
false
0
2019-12-10T11:39:00.000
0
1
0
How can i get all the prediction probability value?
59,266,375
0
python,keras
What I've done is include a randomized feature. This way the network won't be purely deterministic.
I'm doing stock prediction using Keras. During prediction I get only one possible result, but I need to view all the probability values. For example, with input 100 120 100 120 and target 100 during training, if I give the same input at prediction time it returns 120 as the output. So, is there any possibility of viewing the prediction probability values?
0
1
36
0
59,276,925
0
0
0
0
1
false
0
2019-12-10T23:36:00.000
-1
2
0
Pandas set index or reindex without changing the order of the data frame
59,276,899
-0.099668
python,pandas
df = df.reset_index(drop=True) should do it: it renumbers the index sequentially (0, 1, 2, ...) in the current row order without adding the old index back as a column.
Hello, I have a dataframe that I sorted, so the index is no longer in order. I want to renumber the index so that the sorted rows have a sequential index. I have not been able to figure this out: should I remove the index, or is there a way to set it? When I reindex, it sorts by the index again, which un-sorts my data.
0
1
762
0
59,279,941
0
1
0
0
2
false
2
2019-12-11T06:13:00.000
8
4
0
How to check if an object is an np.array()?
59,279,803
1
python,arrays
isinstance(obj, numpy.ndarray) may work
I'm trying to build a code that checks whether a given object is an np.array() in python. if isinstance(obj,np.array()) doesn't seem to work. I would truly appreciate any help.
0
1
3,934
0
59,279,975
0
1
0
0
2
false
2
2019-12-11T06:13:00.000
0
4
0
How to check if an object is an np.array()?
59,279,803
0
python,arrays
The type of what numpy.array returns is numpy.ndarray. You can determine that in the repl by calling type(numpy.array([])). Note that this trick works even for things where the raw class is not publicly accessible. It's generally better to use the direct reference, but storing the return from type(someobj) for later comparison does have its place.
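As a minimal illustration of the check described above (the sample values are made up):

```python
import numpy as np

def is_ndarray(obj):
    """Return True if obj is a NumPy array (checks the class, not a call to np.array())."""
    return isinstance(obj, np.ndarray)

print(is_ndarray(np.array([1, 2, 3])))  # True
print(is_ndarray([1, 2, 3]))            # False
```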
I'm trying to build a code that checks whether a given object is an np.array() in python. if isinstance(obj,np.array()) doesn't seem to work. I would truly appreciate any help.
0
1
3,934
0
59,893,011
0
1
0
0
1
true
0
2019-12-11T09:06:00.000
3
2
0
How do we approximately calculate how much memory is required to run a program?
59,282,135
1.2
python,tensorflow,memory,memory-management
In object detection, most of the layers used will be CNNs, and the calculation of memory consumption for a CNN is explained below. You can follow the same approach for the other layers of the model. For example, consider a convolutional layer with 5 × 5 filters, outputting 200 feature maps of size 150 × 100, with stride 1 and SAME padding. If the input is a 150 × 100 RGB image (three channels), then the number of parameters is (5 × 5 × 3 + 1) × 200 = 15,200 (the +1 corresponds to the bias terms), which is fairly small compared to a fully connected layer. However, each of the 200 feature maps contains 150 × 100 neurons, and each of these neurons needs to compute a weighted sum of its 5 × 5 × 3 = 75 inputs: that's a total of 225 million float multiplications. Not as bad as a fully connected layer, but still quite computationally intensive. Moreover, if the feature maps are represented using 32-bit floats, then the convolutional layer's output will occupy 200 × 150 × 100 × 32 = 96 million bits (about 11.4 MB) of RAM. And that's just for one instance! If a training batch contains 100 instances, then this layer will use up over 1 GB of RAM! More understanding of memory consumption can be gained from the question and answer below. Question: Consider a CNN composed of three convolutional layers, each with 3 × 3 kernels, a stride of 2, and SAME padding. The lowest layer outputs 100 feature maps, the middle one outputs 200, and the top one outputs 400. The input images are RGB images of 200 × 300 pixels. What is the total number of parameters in the CNN? If we are using 32-bit floats, at least how much RAM will this network require when making a prediction for a single instance? What about when training on a mini-batch of 50 images? Answer: Let's compute how many parameters the CNN has. Since its first convolutional layer has 3 × 3 kernels, and the input has three channels (red, green, and blue), each feature map has 3 × 3 × 3 weights, plus a bias term. That's 28 parameters per feature map. Since this first convolutional layer has 100 feature maps, it has a total of 2,800 parameters. The second convolutional layer has 3 × 3 kernels, and its input is the set of 100 feature maps of the previous layer, so each feature map has 3 × 3 × 100 = 900 weights, plus a bias term. Since it has 200 feature maps, this layer has 901 × 200 = 180,200 parameters. Finally, the third and last convolutional layer also has 3 × 3 kernels, and its input is the set of 200 feature maps of the previous layer, so each feature map has 3 × 3 × 200 = 1,800 weights, plus a bias term. Since it has 400 feature maps, this layer has a total of 1,801 × 400 = 720,400 parameters. All in all, the CNN has 2,800 + 180,200 + 720,400 = 903,400 parameters. Now let's compute how much RAM this neural network will require (at least) when making a prediction for a single instance. First let's compute the feature map size for each layer. Since we are using a stride of 2 and SAME padding, the horizontal and vertical sizes of the feature maps are divided by 2 at each layer (rounding up if necessary), so as the input images are 200 × 300 pixels, the first layer's feature maps are 100 × 150, the second layer's feature maps are 50 × 75, and the third layer's feature maps are 25 × 38. Since 32 bits is 4 bytes and the first convolutional layer has 100 feature maps, this first layer takes up 4 × 100 × 150 × 100 = 6 million bytes (about 5.7 MB, considering that 1 MB = 1,024 KB and 1 KB = 1,024 bytes).
The second layer takes up 4 × 50 × 75 × 200 = 3 million bytes (about 2.9 MB). Finally, the third layer takes up 4 × 25 × 38 × 400 = 1,520,000 bytes (about 1.4 MB). However, once a layer has been computed, the memory occupied by the previous layer can be released, so if everything is well optimized, only 6 + 3 = 9 million bytes (about 8.6 MB) of RAM will be required (when the second layer has just been computed, but the memory occupied by the first layer has not been released yet). But wait, you also need to add the memory occupied by the CNN's parameters. We computed earlier that it has 903,400 parameters, each using up 4 bytes, so this adds 3,613,600 bytes (about 3.4 MB). The total RAM required is (at least) 12,613,600 bytes (about 12.0 MB). For more information, refer to the "Memory Requirements" section of Chapter 13, "Convolutional Neural Networks", of the book "Hands-On Machine Learning with Scikit-Learn and TensorFlow" (PDFs are available online).
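As a sanity check on the arithmetic above, here is a small sketch, assuming 32-bit floats (4 bytes per value); the figures match the worked answer:

```python
# Rough sketch of the RAM arithmetic above (32-bit floats = 4 bytes per value).
bytes_per_float = 4

# Feature-map sizes for a 200x300 input, stride 2, SAME padding.
layer1 = bytes_per_float * 100 * 150 * 100   # 6,000,000 bytes (~5.7 MB)
layer2 = bytes_per_float * 50 * 75 * 200     # 3,000,000 bytes (~2.9 MB)
layer3 = bytes_per_float * 25 * 38 * 400     # 1,520,000 bytes (~1.4 MB)

params = 903_400 * bytes_per_float           # ~3.4 MB of weights
peak_activations = layer1 + layer2           # two consecutive layers kept in RAM at once

print(peak_activations + params)             # 12,613,600 bytes (~12 MB) for one instance
```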
Today I was trying to implement an object detection API in Tensorflow. After carrying out the training process, I was trying to run the program to detect objects in webcam. As I was running it, the following message was printed in the terminal: Allocator (GPU_0_bfc) ran out of memory trying to allocate 2.05GiB with freed_by_count=0. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available Due to the performace issue it seems I am getting a lot of false positives. How can we calculate beforehand the memory required to run this program, or any program? I am not asking how much memory it is using, which we can find out. I am using Python.
0
1
2,441
0
59,301,763
0
0
0
0
1
true
0
2019-12-11T13:58:00.000
0
1
1
Azure Installing Pandas Module
59,287,420
1.2
python,azure,azure-web-app-service
I solved this by using the SSH terminal instead of the Kudu terminal. I find no reason why it was not working in the Kudu Remote Execution terminal, but using "pip install pandas" in Azure's SSH terminal solved it.
I have been trying to install Pandas on my Azure App Service (running Flask) for a long time now but nothing seems to work. I tried to use wheel, created a wheelhouse directory manually and tried to install the relevant Pandas .whl file (along with its dependent packages) but it still doesn't work. This approach gives me the following error - "Could not find a version that satisfies the requirement" "No matching distribution found for ..." A simple "pip install pandas" also doesn't work - when I do this, in my Kudu Bash client the command gets stuck on "Cleaning up.." and nothing gets installed.
1
1
537
0
59,291,330
0
0
0
0
1
true
1
2019-12-11T16:34:00.000
3
1
0
Is it normal to get different graphs for same data after umap
59,290,251
1.2
python,r,ggplot2,scikit-learn
Yes, it is. Dimensionality reduction algorithms like t-SNE and UMAP are stochastic, so every time you run them the values will be different. If you want to keep the same graph you need to set a common seed. You can achieve that in R by setting the seed (e.g. set.seed(123)) before calling UMAP (or by setting a seed flag if the function allows that). np.random.seed(123) should work in Python scikit.
I am not sure how can I describe all the steps that I am taking but basically my question is simple: I use same code, same data from text file, gather some statistics about that data and then use umap for 2D reduction. Is it normal to have different graphs when I plot the result? I use scikit-learn, umap-learn, ggplot2. The continuation of the problem is when I use hdbscan. Because every time I run the code, the plot is different, then cluster size and clusters become different and so on. I am wondering if this is something expected or not, basically.
0
1
1,264
0
59,290,915
0
0
0
0
1
false
1
2019-12-11T17:07:00.000
0
2
0
Sum of neighbors in tensorflow
59,290,785
0
python,tensorflow
A new convolutional layer with a filter size of 3x3 and filters initialized to 1 will do the job. Just be careful to declare this special filter as a non-trainable variable, otherwise your optimizer would change its contents. Additionally, set padding to "SAME" to get an output of the same size from that convolutional layer. The missing neighbours of pixels at the edges will be treated as zeros in that case.
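A minimal sketch of this idea; note it uses tf.nn.depthwise_conv2d rather than a plain conv2d so that each channel's neighbourhood sum stays independent of the other channels (the tensor values are random placeholders):

```python
import tensorflow as tf

x = tf.random.normal([8, 32, 32, 5])           # (N, 32, 32, 5) tensor

# 3x3 all-ones kernel applied to each channel independently (depthwise),
# so channels are not mixed. Zero the centre value if the pixel itself
# should be excluded from its neighbourhood sum.
kernel = tf.ones([3, 3, 5, 1])

neighbour_sums = tf.nn.depthwise_conv2d(
    x, kernel, strides=[1, 1, 1, 1], padding="SAME")

print(neighbour_sums.shape)                    # (8, 32, 32, 5)
```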
I have a tensorflow model with my truth data in the shape (N, 32, 32, 5) ie. 32x32 images with 5 channels. Inside the loss function I would like to calculate, for each pixel, the sum of the values of the neighboring pixels for each channel, generating a new (N, 32, 32, 5) tensor. The tf.nn.pool function does something similar but not exactly what I need. I was trying to see if tf.nn.conv2d could get me there but I'm not sure what I'd need to use as the filter parameter in this case. Is there a specific function for this? Or can I use conv2d somehow?
0
1
188
0
59,318,510
0
0
0
0
1
false
0
2019-12-13T01:06:00.000
1
1
0
Concatenating 'N' 2D arrays in NumPy with varying dimensions into one 3D array
59,314,807
0.197375
python,numpy,keras,numpy-ndarray
Keras does allow for variable length input to an LSTM but within a single batch all inputs must have the same length. A way to reduce the padding needed would be to batch your input sequences together based on their length and only pad up to the maximum length within each batch. For example you could have one batch with sequence length 100 and another with sequence length 150. But I'm afraid there is no way to completely avoid padding. During inference you can use any sequence length.
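A minimal sketch of the bucketing idea described above, assuming a list of (timesteps_i, 20) arrays; grouping by length means each batch can be stacked into a clean 3D array without global padding:

```python
import numpy as np
from collections import defaultdict

# samples: list of 2D arrays shaped (timesteps_i, 20); random data for illustration.
samples = [np.random.rand(100, 20), np.random.rand(150, 20), np.random.rand(90, 20)]

# Group samples by sequence length so each batch is a (batch, timesteps, 20) array.
buckets = defaultdict(list)
for s in samples:
    buckets[s.shape[0]].append(s)

batches = [np.stack(group) for group in buckets.values()]
for b in batches:
    print(b.shape)   # e.g. (1, 100, 20), (1, 150, 20), (1, 90, 20)
```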
I have N samples of 2D features with variable dimensions along one axis. For example: Sample 1 : (100,20) Sample 2 : (150,20) Sample 3 : (90,20) Is there a way to combine all N samples into a 3D array so that the first dimension (N,?,?) denotes the sample number? PS: I wish to avoid padding and reshaping, and want to find a way to input the features with their dimensions intact into an LSTM network in Keras. Any other suggestions to achieve the same are welcome.
0
1
124
0
59,319,017
0
0
0
0
3
false
1
2019-12-13T08:39:00.000
0
3
0
Changes to model performance by changing random_state of XGBClassifier
59,318,853
0
python,xgboost,feature-selection,xgbclassifier
The random_state parameter just helps in replicating results every time you run your model. Since you are using cross-validation, assuming it is k-fold, all your data will go into train and test anyway, and the CV score will be the average over the number of folds you decide. I believe you can fix any random_state and quote the results from CV.
I trained a XGBClassifier for my classification problem and did Hyper-parameter tuning over huge grid(probably tuned every possible parameter) using optuna. While testing, change of random_state changes model performance metrics (roc_auc/recall/precision), feature_importance and even model predictions (predict_prob). What does this tell me about my data? Since I have to take this model in production, how should I tackle this for model to be more robust? Stay with one random_state (say default 0) which we use during cross_validation and use it on out-of-sample as well. During cross_validation, on top of each param_combination, run few random_state(say 10) and take avg model performance.
0
1
729
0
59,320,839
0
0
0
0
3
false
1
2019-12-13T08:39:00.000
1
3
0
Changes to model performance by changing random_state of XGBClassifier
59,318,853
0.066568
python,xgboost,feature-selection,xgbclassifier
These are my two cents, so take the answer with a grain of salt. The XGB classifier is a boosting algorithm, which naturally depends on randomness (so does a Random Forest, for example). Hence, changing the seed will inherently change the training of the model and its output. Different seeds will also change the CV splits and alter the results further. Furthermore, boosting aims to reduce variance as it uses multiple models (bagging), and at the same time it reduces bias as it trains each subsequent model based on the previous models' errors (the boosting part). However, boosting models can, in principle, overfit. In fact, if your base learner is not weak it will easily overfit the data and there won't be any residuals or errors for the subsequent models to build upon. Now, for your problem, you should first verify that you are not overfitting your model to the data. Then you might want to fix a certain number of seeds (you still want to be able to reproduce the results, so it's important to fix them) and average the results obtained across the seeds.
I trained a XGBClassifier for my classification problem and did Hyper-parameter tuning over huge grid(probably tuned every possible parameter) using optuna. While testing, change of random_state changes model performance metrics (roc_auc/recall/precision), feature_importance and even model predictions (predict_prob). What does this tell me about my data? Since I have to take this model in production, how should I tackle this for model to be more robust? Stay with one random_state (say default 0) which we use during cross_validation and use it on out-of-sample as well. During cross_validation, on top of each param_combination, run few random_state(say 10) and take avg model performance.
0
1
729
0
59,321,064
0
0
0
0
3
false
1
2019-12-13T08:39:00.000
1
3
0
Changes to model performance by changing random_state of XGBClassifier
59,318,853
0.066568
python,xgboost,feature-selection,xgbclassifier
I tend to think that if the model is sensitive to the random seed, it isn't a very good model. With XGB you can try adding more estimators; that can help make it more stable. For any model with a random seed, for each candidate set of parameter options (usually already filtered down to a shortlist of candidates), I tend to run a bunch of repeats on the same data with different random seeds and measure the difference in the output. I expect the evaluation metric's standard deviation to be small (relative to the mean), and the overlap of the predictions in each class to be very high. If either of these is not the case, I don't accept the model. If it is the case, I simply pick one of the candidate models at random; it should not matter what the random seed is! I still record the random seed used, as this is still needed to recreate results!
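A possible sketch of the seed-repeat check described above; X and y are placeholders for your own feature matrix and labels, and the number of seeds, estimators and scoring metric are illustrative choices to tune:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from xgboost import XGBClassifier

# X, y: your feature matrix and labels (not defined here).
scores_per_seed = []
for seed in range(10):
    model = XGBClassifier(random_state=seed, n_estimators=300)
    scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    scores_per_seed.append(scores.mean())

# A small standard deviation relative to the mean suggests a seed-stable model.
print(np.mean(scores_per_seed), np.std(scores_per_seed))
```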
I trained a XGBClassifier for my classification problem and did Hyper-parameter tuning over huge grid(probably tuned every possible parameter) using optuna. While testing, change of random_state changes model performance metrics (roc_auc/recall/precision), feature_importance and even model predictions (predict_prob). What does this tell me about my data? Since I have to take this model in production, how should I tackle this for model to be more robust? Stay with one random_state (say default 0) which we use during cross_validation and use it on out-of-sample as well. During cross_validation, on top of each param_combination, run few random_state(say 10) and take avg model performance.
0
1
729
0
59,324,955
0
0
0
0
1
false
0
2019-12-13T14:48:00.000
0
1
0
Implementing trained-model on camera
59,324,845
0
python,tensorflow
Congratulations :) First of all, a minor detail: you use the model to recognize the objects; the model learned from the data. It really depends on what you are aiming for, and as the comment suggests, you should probably provide a bit more information. The simplest setup would probably be to take an image with your webcam, read the file, pass it to the model and get the predictions. If you want to do it live, you grab the stream from the webcam and then pass the frames to the model.
I just trained my model successfully and I have some checkpoints from the training process. Can you explain to me how to use this data to recognize the objects live with the help of a webcam?
0
1
37
0
59,326,907
0
0
0
0
1
false
0
2019-12-13T16:59:00.000
2
1
0
Pandas agg how to count rows where a condition is true
59,326,882
0.379949
python,pandas
(x == 0).sum() counts the number of rows where the condition x == 0 is true. x.sum() just computes the "sum" of x (the actual result depends on the type).
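A tiny illustration of the two lambdas in a groupby aggregation (the dataframe values are made up):

```python
import pandas as pd

df = pd.DataFrame({"g": ["a", "a", "b", "b"], "x": [0, 3, 0, 0]})

grouped = df.groupby("g")["x"]
zeros = grouped.agg(lambda x: (x == 0).sum())   # per group: number of rows where x == 0
totals = grouped.agg(lambda x: x.sum())         # per group: plain sum of x

print(zeros)    # a -> 1, b -> 2
print(totals)   # a -> 3, b -> 0
```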
I am using lambda function and agg() in python to perform some function on each element of the dataframe. I have following cases lambda x: (x==0).sum() - Question: Does this logically compute (x==0) as 1, if true, and 0, if false and then adds all ones and zeros? or is it doing something else? lambda x: x.sum() - Question: This is apparent, but still I'll ask. This adds all the elements or x passed to it. Is this correct?
0
1
592
0
59,327,644
0
0
0
0
1
false
3
2019-12-13T17:48:00.000
0
3
0
pandas read_csv. How to ignore delimiter before line break
59,327,525
0
python,pandas,file
Specifying which columns to read using usecols is the cleaner approach, or you can drop the extra column once you have read the data, but that comes with the overhead of reading data you don't need. A fully generic approach would require you to create a regex-based parser, which is more time-consuming and messier.
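One more general option, sketched below, is to treat any run of whitespace as the separator so the trailing space no longer creates a phantom column; the filename comes from the question, and the second line shows the drop-afterwards alternative:

```python
import pandas as pd

# Treat any run of whitespace as one delimiter, so the trailing space on each
# line no longer produces an extra all-NaN column.
data = pd.read_csv("data.dat", sep=r"\s+", header=None)

# Alternative: keep sep=' ' and drop the empty last column afterwards.
# data = pd.read_csv("data.dat", sep=" ", header=None).dropna(axis=1, how="all")
```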
I'm reading a file with numerical values. data = pd.read_csv('data.dat', sep=' ', header=None) In the text file, each row ends with a space, so pandas waits for a value that is not there and adds a "nan" at the end of each row. For example: 2.343 4.234 is read as: [2.343, 4.234, nan] I can avoid it using usecols = [0, 1] but I would prefer a more general solution.
0
1
3,252
0
59,327,905
0
0
0
0
1
true
0
2019-12-13T18:10:00.000
1
1
0
Conv2D to Conv3D
59,327,819
1.2
python,conv-neural-network,dicom,medical
Yes, you can, but there are a few things to change. Your kernel will now need to be in 3D, so the argument kernel_size must be a 3 integer tuple. Same thing for strides. Note that the CNN you will modify will probably be in 3D already (e.g., 60, 60, 3) if it's designed to train on colored images. The only difference is that you want the neural net to not only detect features in 3 separate 60x60 windows, but through the three windows. In other words, not 3 times 2D, but 3D. tl;dr yes you can, just change kernel_size and strides. The default values of the keras.layers.Conv3D are adjusted accordingly anyway.
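A minimal sketch of a 3D convolutional block in Keras; the 64x64x64 single-channel input shape, filter counts and output size are illustrative placeholders, not values from the question:

```python
from tensorflow.keras import layers, models

# Minimal 3D conv block, e.g. for 64x64x64 volumes with 1 channel.
model = models.Sequential([
    layers.Conv3D(32, kernel_size=(3, 3, 3), strides=(1, 1, 1),
                  activation="relu", input_shape=(64, 64, 64, 1)),
    layers.MaxPooling3D(pool_size=(2, 2, 2)),
    layers.Flatten(),
    layers.Dense(2, activation="softmax"),
])
model.summary()
```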
I have 3D medical images and wanted to know if I have a CNN that uses Conv2D can I just change the Conv2D to a Conv3D? If not what would I need to change?
0
1
721
0
60,968,553
0
0
0
0
1
false
0
2019-12-13T18:48:00.000
-1
1
0
Best Python GloVe word embedding package
59,328,248
-0.197375
python-3.x,word-embedding,glove
If you are using python3, gensim would be the best choice. for example: from gensim.scripts.glove2word2vec import glove2word2vec will fetch the gloVe module. Saul
What is the best Python GloVe word embedding package that I can use? I want a package that can help modify the co-occurrence matrix weights. If someone can provide an example, I would really appreciate that. Thanks, Mohammed
0
1
213
0
59,332,455
0
0
0
0
1
true
0
2019-12-14T04:54:00.000
3
1
0
How to make predictions with a decision tree on a dataset without a target value?
59,332,410
1.2
python
A decision tree is a supervised algorithm. That means you must use some target value (or label) to build the tree (node splits are chosen based on an information-gain rule).
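A tiny sketch of the usual workflow: fit on historical rows where the target is known, then call predict on new rows that arrive without a target (the data here is made up):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Historical data where the target is known.
X_train = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y_train = np.array([0, 0, 1, 1])

clf = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

# New rows arrive without a target value; the fitted tree predicts it.
X_new = np.array([[1, 0], [0, 1]])
print(clf.predict(X_new))   # e.g. [1 0]
```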
Every tutorial I have found about machine learning includes testing an algorithm on a dataset that has target values, and then it finds how accurate the algorithm is by testing its predictions on the test set. What if you then receive all of the data except for the target value, and you want to make target-value predictions to see if they come true in the future? Every tutorial I have seen has been with data where the future target values are already known.
0
1
27
0
59,333,699
0
1
0
0
1
false
0
2019-12-14T07:57:00.000
1
1
0
OpenCV from Python shows different results for JPG and PNG images?
59,333,332
0.197375
python-3.x,image,opencv,image-processing,computer-vision
The problem is inherent to the image format you are using. There are majorly two type of compression techniques(all image formats: jpeg, png, webp are compression techniques): Lossless Compression Lossy compression As the name suggests, Lossless compression technique do not change the underlying matrix data while compression, for example: PNG. And the Lossy compression technique may substitute the underlying data with some nearest round off values to save some space, for example: JPEG. In your case while you are using JPEG format to save the image, it may distort some RGB pixel information, so if your threshold range are really tight then you will have different results as compared to PNG.
I have been working to create an OMR bubble sheet scanner using OpenCV from Python. I have created the bubble-sheets (using OpenCV) in PNG format. Then I am using OpenCV to read those files. OpenCV does a perfect job on PNG images, it works perfectly.... but when I use this on JPG files, it simply doesn't! Lists running out of indexes because it cannot detect all the proper contours. I know this doesn't make sense, but I tried saving the same PNG image directly into JPG (using Paint), and the same results. It works on PNG version, but doesn't work on JPG version. It also shows a different number of contours in the images even the images are exact same. Any help regarding this would be greatly appreciated, thanks!
0
1
1,643
0
59,433,454
0
0
0
0
2
false
17
2019-12-14T16:15:00.000
24
5
0
Which loss function and metrics to use for multi-label classification with very high ratio of negatives to positives?
59,336,899
1
python,machine-learning,keras,multilabel-classification,vgg-net
What hassan has suggested is not correct - Categorical Cross-Entropy loss or Softmax Loss is a Softmax activation plus a Cross-Entropy loss. If we use this loss, we will train a CNN to output a probability over the C classes for each image. It is used for multi-class classification. What you want is multi-label classification, so you will use Binary Cross-Entropy Loss or Sigmoid Cross-Entropy loss. It is a Sigmoid activation plus a Cross-Entropy loss. Unlike Softmax loss it is independent for each vector component (class), meaning that the loss computed for every CNN output vector component is not affected by other component values. That’s why it is used for multi-label classification, where the insight of an element belonging to a certain class should not influence the decision for another class. Now for handling class imbalance, you can use weighted Sigmoid Cross-Entropy loss. So you will penalize for wrong prediction based on the number/ratio of positive examples.
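A possible sketch of a weighted sigmoid cross-entropy loss for this kind of multi-label setup; the POS_WEIGHT value is an assumption to be tuned (roughly the negatives-to-positives ratio per attribute), and the output layer comment is only a suggestion:

```python
import tensorflow as tf

POS_WEIGHT = 20.0   # illustrative; tune, e.g. roughly (#negatives / #positives) per attribute

def weighted_bce(y_true, logits):
    """Sigmoid cross-entropy, independent per attribute, up-weighting the rare positives."""
    y_true = tf.cast(y_true, tf.float32)
    loss = tf.nn.weighted_cross_entropy_with_logits(
        labels=y_true, logits=logits, pos_weight=POS_WEIGHT)
    return tf.reduce_mean(loss)

# Use a Dense(1000) output layer *without* softmax, compile with this loss,
# and apply tf.sigmoid to the logits at prediction time.
```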
I am training a multi-label classification model for detecting attributes of clothes. I am using transfer learning in Keras, retraining the last few layers of the vgg-19 model. The total number of attributes is 1000 and about 99% of them are 0s. Metrics like accuracy, precision, recall, etc., all fail, as the model can predict all zeroes and still achieve a very high score. Binary cross-entropy, hamming loss, etc., haven't worked in the case of loss functions. I am using the deep fashion dataset. So, which metrics and loss functions can I use to measure my model correctly?
0
1
23,020
0
63,974,451
0
0
0
0
2
false
17
2019-12-14T16:15:00.000
7
5
0
Which loss function and metrics to use for multi-label classification with very high ratio of negatives to positives?
59,336,899
1
python,machine-learning,keras,multilabel-classification,vgg-net
Actually you should use tf.nn.weighted_cross_entropy_with_logits. It not only works for multi-label classification, it also has a pos_weight argument so you can pay extra attention to the positive classes, as you would expect.
I am training a multi-label classification model for detecting attributes of clothes. I am using transfer learning in Keras, retraining the last few layers of the vgg-19 model. The total number of attributes is 1000 and about 99% of them are 0s. Metrics like accuracy, precision, recall, etc., all fail, as the model can predict all zeroes and still achieve a very high score. Binary cross-entropy, hamming loss, etc., haven't worked in the case of loss functions. I am using the deep fashion dataset. So, which metrics and loss functions can I use to measure my model correctly?
0
1
23,020
0
59,349,161
0
1
0
0
1
true
1
2019-12-14T22:53:00.000
2
1
0
Compute Engine n1-standard only use 50% of CPU
59,339,838
1.2
python,multithreading
In this case, the task utilizes only one of the two processors that you have available so that's why you see only 50% of the CPU getting used. If you allow pytorch to use all the CPUs of your VM by setting the number of threads, then it will see that the usage goes to 100%
I'm running an heavy pytorch task on this VM(n1-standard, 2vCpu, 7.5 GB) and the statistics show that the cpu % is at 50%. On my PC(i7-8700) the cpu utilization is about 90/100% when I run this script (deep learning model). I don't understand if there is some limit for the n1-standard machine(I have read in the documentation that only the f1 obtain 20% of cpu usage and the g1 the 50%). Maybe if I increase the max cpu usage, my script runs faster. Is there any setting I should change?
0
1
75
0
59,342,082
0
0
1
0
1
true
0
2019-12-15T02:32:00.000
0
1
0
8Puzzle game with A* : What structure for the open set?
59,340,795
1.2
python,algorithm,complexity-theory,a-star,sliding-tile-puzzle
The open set should be a priority queue. Typically these are implemented using a binary heap, though other implementations exist. Neither an array-list nor a dictionary would be efficient. The closed set should be an efficient set, so usually a hash table or binary search tree, depending on what your language's standard library defaults to. A dictionary (aka "map") would technically work, but it's conceptually the wrong data-structure because you're not mapping to anything. An array-list would not be efficient.
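A minimal sketch of this layout in Python using heapq for the open set and a set for the closed set; the example board and f value are placeholders, and neighbour expansion is left as a comment:

```python
import heapq

start_state = (1, 2, 3, 4, 5, 6, 7, 8, 0)     # example board, flattened (illustrative)
f_start = 0

open_heap = []                                # priority queue keyed on f(n)
heapq.heappush(open_heap, (f_start, start_state))

closed = set()                                # O(1) average membership tests

while open_heap:
    f, state = heapq.heappop(open_heap)       # lowest f(n) in O(log n), not O(n)
    if state in closed:
        continue
    closed.add(state)
    # expand neighbours here and heappush each (g + h, neighbour) tuple
```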
I'm developing a 8 Puzzle game solver in python lately and I need a bit of help So far I finished coding the A* algorithm using Manhattan distance as a heuristic function. The solver runs and find ~60% of the solutions in less than 2 seconds However, for the other ~40%, my solver can take up to 20-30 minutes, like it was running without heuristic. I started troubleshooting, and it seems that the openset I use is causing some problems : My open set is an array Each iteration, I loop through the openset to find the lowest f(n) (complexity : O(n) ) I have the feeling that O(n) is way too much to run a decent A* algorithm with such memory used so I wanted to know how should I manage to make the openset less "time eater" Thank you for your help ! Have a good day EDIT: FIXED I solved my problem which was in fact a double problem. I tried to use a dictionary instead of an array, in which I stored the nodes by their f(n) value and that allowed me to run the solver and the ~181000 possibilities of the game in a few seconds The second problem (I didn't know about it because of the first), is that I didn't know about the solvability of a puzzle game and as I randomised the initial node, 50% of the puzzles couldn't be solved. That's why it took so long with the openset as the array.
0
1
164
0
59,346,258
0
1
0
0
1
false
0
2019-12-15T15:05:00.000
0
1
0
Matplotlib throw errors and doesn't work when I try to import it
59,345,149
0
python,matplotlib
It seems like a package uninstall did not finish properly and something of that package has been left behind. You need to either move some of your files to the correct destination or uninstall Anaconda and reinstall it.
I'm a newbie in programming and python (only 30 days). I've installed Anaconda and working in the Spyder IDE. Everything was going fine and have been adding packages as necessary while I was learning different things until now. Now, I'm getting an error when I'm trying to import Matplotlib. Can anyone advise me what to do in simple terms, please? Error processing line 1 of C:\Anaconda\lib\site-packages\matplotlib-3.1.2-py3.7-nspkg.pth: Traceback (most recent call last): File "C:\Anaconda\lib\site.py", line 168, in addpackage exec(line) File "", line 1, in File "", line 580, in module_from_spec AttributeError: 'NoneType' object has no attribute 'loader' Remainder of file ignored
0
1
270
0
59,348,501
0
0
0
0
1
false
0
2019-12-15T19:29:00.000
0
2
0
Add new data to model sklearn: SGD
59,347,375
0
python,scikit-learn
Do you think there is another way to do an initial round of learning and then add new data that is more important to the model? Keras? Thanks guys
I made models with sklearn, something like this: clf = SGDClassifier(loss="log") clf.fit(X, Y) And then now I would like to add data to learn for this model, but with more important weight. I tried to use partial_fit with sample_weight bigger but not working. Maybe I don't use fit and partial_fit as good, sorry I'm beginner... If someone know how to add new data I could be happy to know it :) Thanks for help.
0
1
141
0
59,377,401
0
0
0
0
1
true
0
2019-12-16T09:06:00.000
0
1
0
cope with high variance or keep training
59,353,398
1.2
python,tensorflow,neural-network,statistics,precision-recall
I put more training data. Now I use 70000 records instead of 45000. My results: precision: 0.81765974, recall: 0.65085715 on test-data precision: 0.83833283, recall: 0.708 on training-data I am pretty confident that this result is as good as possible. Thanks for reading
I built a neural network of the dimensions Layers = [203,100,100,100,2]. So I have 203 features and get two classes as a Result. I think, in my case, it would not be necessary to have two classes. My result is the prediction of a customer quitting his contract. So I guess one class would be sufficient (And 1 being quit, 0 being stay). I built the network with two classes to keep it flexible if I want to add more output-classes in the future. I put dropout,batch_normalization, and weight-decay. I am training with an Adam-optimizer. At the end of the day, I come up with precision: 0.7826087, recall: 0.6624 on test-data. precision: 0.8418698, recall: 0.72445 on training-data This means if I predict a customer to quit, I can be 78% confident that he really quits. On the opposite, if he quits his contract, I predicted with 66% that he will do so. So my classifier doesn´t work too bad. One thing keeps nagging at me: How do I know if there is any chance to do better still? In other words: Is there a possibility to calculate the Bayes-error my setup determines? Or to say it clearer: If the difference of my training-error and test-error is high like this, can I conclude for sure, that I am having a variance problem? Or is it possible that I must cope with the fact the test-accuracy cannot be improved? What else can I try to train better?
0
1
45
0
60,356,275
0
0
0
0
2
true
1
2019-12-16T16:28:00.000
-1
2
0
RPA : How to do back-end automation using RPA tools?
59,360,594
1.2
python-3.x,rpa,automationanywhere
There are several ways to do it. It is especially useful when your back-ends are third-party applications where you do not have a lot of control. Many RPA products like Softomotive WinAutomation, Automation Anywhere, UiPath etc. provide file utilities, Excel utilities, DB utilities, the ability to call APIs, OCR capabilities etc., which you can use for back-end automation.
I would like to know how back-end automation is possible through RPA. I'd be interested in solving this scenario relative to an Incident Management Application, in which authentication is required. The app provide: An option useful to download/export the report to a csv file Sort the csv as per the requirement Send an email with the updated csv to the team Please let me know how this possible through RPA and what are those tools available in RPA to automate this kind of scenario?
0
1
442
0
59,442,056
0
0
0
0
2
false
1
2019-12-16T16:28:00.000
2
2
0
RPA : How to do back-end automation using RPA tools?
59,360,594
0.197375
python-3.x,rpa,automationanywhere
RPA tools are designed to automate mainly front-end activities by mimicking human actions. Your scenario can be done easily using any RPA tool. However, if you are interested in back-end automation, the first question would be whether the specific application offers a way to interact through the back-end/API in the way you want. If yes, in theory you could develop an RPA robot to run a pre-developed back-end script. However, if all you need is to run this script, creating a robot for this case may be redundant.
I would like to know how back-end automation is possible through RPA. I'd be interested in solving this scenario relative to an Incident Management Application, in which authentication is required. The app provide: An option useful to download/export the report to a csv file Sort the csv as per the requirement Send an email with the updated csv to the team Please let me know how this possible through RPA and what are those tools available in RPA to automate this kind of scenario?
0
1
442
0
59,362,487
0
0
0
0
1
false
0
2019-12-16T18:26:00.000
0
2
0
How can I transform a string variable to a categorical variable in two different datasets, keeping the same conversion?
59,362,334
0
python,pandas,scikit-learn
In general, it is recommended to use the OrdinalEncoder when you are sure or know that there exists an 'ordered' relationship between the categories. For example, the grades F, B-, B, A- and A : for each of these it makes sense to have the encoding as 1,2,3,4,5 where higher the grade, higher is the weight ( in the form of the encoded category). In your current case, it would be better to use a OneHot encoder for the Country column before splitting into train/test datasets.
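A small sketch of the fit-on-train, transform-on-test pattern with a OneHotEncoder; the column name and country values are illustrative:

```python
import pandas as pd
from sklearn.preprocessing import OneHotEncoder

train_df = pd.DataFrame({"Country": ["USA", "Brazil", "USA"]})
test_df = pd.DataFrame({"Country": ["Brazil", "USA", "France"]})

# Fit on training data only, then reuse the same fitted encoder on the test
# data so "USA" maps to the same column in both frames; unseen countries
# (e.g. "France") become all-zero rows instead of raising an error.
enc = OneHotEncoder(handle_unknown="ignore")
train_codes = enc.fit_transform(train_df[["Country"]])
test_codes = enc.transform(test_df[["Country"]])
print(enc.categories_)
```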
I'm building a model and I have two dataframes in Pandas. One is the training data and the other the testing data. One of the variables is the country. I was thinking about using OrdinalEncoder() to convert the country column to a categorical column. E.g.: "USA" will be 1 in the new column, "Brazil" will be 2 and so on. However, I want the same conversion for the two dataframes. If "USA" in the training data becomes 1 as a categorical column, I want that "USA" in the testing data also becomes 1. Is that possible? How so? Thanks in advance
0
1
38
0
59,363,215
0
0
0
0
1
true
1
2019-12-16T19:16:00.000
2
2
0
Difference between "counts" and "number of observations" in matplotlib histogram
59,362,968
1.2
python,matplotlib,histogram
I think the wording in the documentation is a bit confusing. The count is the number of entries in a given bin (height of the bin) and the number of observation is the total number of events that go into the histogram. The documentation makes the distinction about how they normalized because there are generally two ways to do the normalization: count / number of observations - in this case if you add up all the entries of the output array you would get 1. count / (number of observations * bin width) - in this case the integral of the output array is 1 so it is a true probability density. This is what matplotlib does, and they just want to be clear in this distinction.
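A quick numerical check of the second normalization (count / (number of observations * bin width)), which is the density option's behaviour; the sample data is random:

```python
import numpy as np

data = np.random.normal(size=10_000)
counts, edges = np.histogram(data, bins=30, density=True)
bin_width = edges[1] - edges[0]

# counts / n           -> sums to 1 (a probability per bin)
# counts / (n * width) -> integrates to 1 (a true probability density),
#                         which is what the density option returns
print((counts * bin_width).sum())   # ~1.0
```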
The matplotlib.pyplot.hist() documentation describes the parameter "density" (its deprecated name was "normed") as: density : bool, optional If True, the first element of the return tuple will be the counts normalized to form a probability density, i.e., the area (or integral) under the histogram will sum to 1. This is achieved by dividing the count by the number of observations times the bin width and not dividing by the total number of observations. With the first element of the tuple it refers to the y-axis values. It says that it manages to get the area under the histogram to be 1 by: dividing the count by the number of observations times the bin width. What is the difference between count and number of observations? In my head they are the same thing: the number of instances (or number of counts or number of observations) the variable value falls into a certain bin. However, this would mean that the transformed number of counts for each bin is just one over the bin width (since # / #*bin_width = 1/bin_width) which does not make any sense. Could someone clarify this for me? Thank you for your help and sorry for the probably stupid question.
0
1
1,174
0
59,369,038
0
1
0
0
1
false
0
2019-12-17T00:12:00.000
0
1
0
Accessing SAS(9.04) from Anaconda
59,365,941
0
python,sas,anaconda,saspy
SAS datasets are ODBC compliant. SasPy is for running SAS code. If the goal is to read SAS datasets, only, use ODBC or OleDb. I do not have Python code but SAS has a lot of documentation on doing this using C#. Install the free SAS ODBC drivers and read the sas7bdat. The drivers are on the SAS website. Writing it is different but reading should be fine. You will lose some aspects of the dataset but data will come through.
We are doing a POC to see how to access SAS data sets from Anaconda All documentation i find says only SASpy works with SAS 9.4 or higher Our SAS version is 9.04.01M3P062415 Can this be done? If yes any documentation in this regard will be highly appreciated Many thanks in Advance!
0
1
181
0
59,368,454
0
0
0
0
1
true
0
2019-12-17T05:48:00.000
0
1
0
Gensim Word2Vec or FastText build vocab from frequency
59,368,232
1.2
python,gensim,word2vec,fasttext
It "builds a vocabulary from a dictionary of word frequencies". You need a vocabulary for your gensim models. Usually you build it from your corpus. This is basically an alternative option to build your vocabulary from a word frequencies dictionary. Word frequencies for example are usually used to filter low or high frequent words which are meaningless for your model.
I wonder what does .build_vocab_from_freq() function from gensim actually do? What is the difference when I'm not using it? Thank you!
0
1
397
0
59,381,955
0
0
1
0
1
true
1
2019-12-17T10:48:00.000
0
1
0
Can you change the precision globally of a piece of code in Python, as a way of debugging it?
59,372,579
1.2
python,numpy,scipy
You can try using mpmath, but YMMV. generally scipy uses double precision. For a vast majority of cases, analyzing the sources of numerical errors is more productive than just trying to reimplement everything with higher widths floats.
I am solving a system of non-linear equations using the Newton Raphson Method in Python. This involves using the solve(Ax,b) function (spsolve in my case, which is for sparse matrices) iteratively until the error or update reduces below a certain threshold. My specific problem involves calculating functions such as x/(e^x - 1) , which are badly calculated for small x by Python, even using np.expm1(). Despite these difficulties, it seems like my solution converges, because the error becomes of the order of 10^-16. However, the dependent quantities, do not behave physically, and I suspect this is due to the precision of these calculations. For example, I am trying to calculate the current due to a small potential difference. When this potential difference becomes really small, this current begins to oscillate, which is wrong, because currents must be conserved. I would like to globally increase the precision of my code, but I'm not sure if that's a useful thing to do since I am not sure whether this increased precision would be reflected in functions such as spsolve. I feel the same about using the Decimal library, which would also be quite cumbersome. Can someone give me some general advice on how to go about this or point me towards a relevant post? Thank you!
0
1
43
0
59,410,610
0
0
0
0
1
true
1
2019-12-17T15:38:00.000
0
2
0
ImportError: dll load failed while importing _openmp_helpers: The specified module could not be found while importing sklearn package
59,377,573
1.2
python-3.x,scikit-learn,openmp,dllimport,sklearn-pandas
I tried hard to solve it in IDLE but it didn't get rectified. I finally overcame it by installing the Anaconda IDE and using Jupyter notebook.
import sklearn version--3.8.0 64-bit Traceback (most recent call last): File "", line 1, in import sklearn File "C:\Users\SAI-PC\AppData\Local\Programs\Python\Python38\lib\site-packages\sklearn__init__.py", line 75, in from .utils._show_versions import show_versions File "C:\Users\SAI-PC\AppData\Local\Programs\Python\Python38\lib\site-packages\sklearn\utils_show_versions.py", line 12, in from ._openmp_helpers import _openmp_parallelism_enabled ImportError: DLL load failed while importing _openmp_helpers: The specified module could not be found.
0
1
1,319
0
59,381,734
0
0
0
0
1
true
0
2019-12-17T20:11:00.000
0
1
0
sklearn.metrics Prevent unlabeled predictions from being classified as false positives
59,381,435
1.2
python,machine-learning,scikit-learn
I believe I figured it out. The labels parameter of precision_recall_fscore_support allows you to specify which labels you desire to use. Therefore, by using labels=list(set(y_true).union(set(y_pred)).difference(set(["-1"]))) I am able to obtain the desired behavior.
I have a multiclass, single label classifier which predicts some samples as "-1", which means that it is not confident enough to assign the sample a label. I would like to use sklearn.metrics.precision_recall_fscore_support to calculate the metrics for the model, however I am unable to prevent the "-1" classifications from being considered as false positives. The only thing I can think of is to do this on a "per class" basis for the metrics then do a weighted-average over the metrics excluding the "-1" class (i.e. the micro option in precision_recall_fscore_support while excluding "-1" false positives). Is there any standardized way to do this in sklearn without having to compute the averages myself?
0
1
20
0
59,387,758
0
0
0
0
1
false
1
2019-12-18T03:24:00.000
0
2
0
Best way to classify a series of data in Python
59,385,064
0
python,numpy,opencv,statistics,regression
If my assumptions are true, I don't see a reason for any complex classifier. I'd simply check whether the angle always gets larger or always gets smaller. Every time this rule is followed, you add 1 to a quality counter. If the rule is broken, you reduce the quality counter by 1. In the end you divide the quality counter by the total number of measured angles, and then you decide a threshold for a good quality ratio. Sorry if I don't understand the issue any better; an actual image could help a lot.
I have been working on image processing problem and I have preprocessed a bunch of images to find the most prominent horizontal lines in those images. Based on this data, I want to classify if the image has a good perspective angle or a bad angle. The data points are angles of lines I was able to detect in a sequence of images. Based on the perspective of the image, I know this data sometimes represents a "good-angle" image, and in some other cases, it represents a "bad-angle" image. I tried np.polyfit, finding slopes of lines, finding derivatives of slopes, and several other methods but unable to find a simple metric that is so obvious by just looking at this data. These are examples of "Good angles". You can notice they start from positive ones, and later ones are are negative. Good angle data [7.97, 7.99, 9.01, 5.07, 5.01, 14.81, 8.86, -2.11, -0.86, 1.06, 0.86, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.97, 0.92, -0.95, -2.05, -2.2, -2.78, -2.93, -2.8, -2.99, -2.88, -2.94, -2.81, -3.04, -3.07, -3.0] [3.96, 4.12, 6.04, 6.03, 6.08, 5.99, 6.99, 6.81, 6.81, 6.1, 6.1, 4.06, 3.98, 4.03, 3.92, 3.95, 3.84, 3.94, 4.07, 3.95, 3.87, 2.65, 1.88, 0.0, 0.0, -0.94, -1.06, -1.81, -1.81, -3.95, -4.09, -4.0, -3.93] [8.75, 10.06, 9.02, 9.96, 9.89, 10.08, 9.99, 10.0, 10.02, 9.95, 4.04, 4.03, 3.93, -1.18, -0.95, -1.12, -1.02, -1.76, -1.92, -2.06, -5.99, -5.83, -6.01, -4.96, -7.84, -7.67] These are examples of "Bad Angle" images. You can notice they start from negative numbers, and later ones are positive. You can also notice that these are significantly larger numbers than 0. Bad Angle Data [-13.92, -14.93, -4.11, -4.04, -2.18, 17.12, 18.01, 16.91, 15.95, 16.75, 14.16, 14.04] [-14.93, -14.93, -7.92, -4.04, -5.91, -4.98, 16.08, 16.26, 16.24] [11.81, -9.77, -10.2, -9.96, -10.09, -6.81, 2.13, 3.02, 2.77, 3.01, 2.78, 5.92, 5.96, 5.93, 2.96, 3.06, 1.03, 2.94, 6.2, 5.81, 5.04, 7.13, 5.89, 5.09, 4.89, 3.91, 4.15, 17.99, 6.04, 5.67, 7.24, 16.34, 17.02, 16.92, 15.99, 16.93, 15.76] As this is based off of data captured from real images, we do have some irregularities in the dataset. I would like to avoid any glitches and use a metric that can classify my arrays as Good angle or bad angles.
0
1
64
0
59,394,538
0
0
0
0
1
true
0
2019-12-18T09:34:00.000
2
1
0
Dividing MST for kruskal clustering
59,389,090
1.2
python,algorithm,cluster-analysis
In Kruskal's algorithm, MST edges are added in order of increasing weight. If you're starting with an MST and you want to get the same effect as stopping Kruskal's algorithm when there are N connected components, then just delete the N-1 highest-weight edges in the MST.
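A small sketch of that cut, assuming a complete MST over 5 points (a spanning tree has n-1 = 4 edges); the edge weights below are made up for illustration:

```python
# Illustrative MST over 5 points as (u, v, weight) tuples.
mst_edges = [(0, 3, 57), (1, 2, 99), (1, 4, 102), (3, 1, 120)]
n_clusters = 2

# Removing the (n_clusters - 1) heaviest MST edges leaves exactly n_clusters
# components, the same result as stopping Kruskal early at n_clusters components.
kept = sorted(mst_edges, key=lambda e: e[2])[:len(mst_edges) - (n_clusters - 1)]
print(kept)   # [(0, 3, 57), (1, 2, 99), (1, 4, 102)] -> clusters {0, 3} and {1, 2, 4}
```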
I made a C# application that draws random points on the panel. I need to cluster these points according to Euclidean distance. I have already implemented Kruskal's algorithm. Normally, there should end up being as many minimum spanning trees as the requested number of clusters. For instance, when the user wants to cluster the drawn points into 3 clusters, at the end of Kruskal's algorithm there should be 3 large MSTs. But I did it in a different way: I built one big MST, and now I have to divide this MST into the requested number of clusters. For example, with point number = 5 and cluster number 2, my Kruskal output is 0-3:57 1-2:99 1-4:102 (from-to:euclidean distance). The problem is I don't know where I should cut this MST to create the clusters.
0
1
233
0
59,394,913
0
1
0
0
1
false
1
2019-12-18T13:50:00.000
3
2
0
Cannot import from sklearn import c
59,393,468
0.291313
python,scikit-learn
I have never seen KNearestNeighbor in sklearn. There are two things you can use instead of KNearestNeighbor: from sklearn.neighbors import KNeighborsClassifier or from sklearn.neighbors import NearestNeighbors. I think the first option is the one you want now.
I am working on jupyter notebook on a python assignment and I am trying to import KNearestNeighbor from sklearn but I am getting the error: ImportError: cannot import name 'KNearestNeighbor' from 'sklearn' (C:\Users\michaelconway\Anaconda3\lib\site-packages\sklearn__init__.py) I have checked and I do have sklearn installed: version 0.22 Any ideas please?
0
1
1,512
0
59,445,732
0
0
0
0
1
false
1
2019-12-18T17:43:00.000
0
2
0
I must compress many similar files, can I exploit the fact they are similar?
59,397,512
0
python,zip,compression
A 'zip basis' is interesting but problematic. You could preprocess the files instead: take one file as a template and calculate the diff of each file compared to the template, then compress the diffs.
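A rough sketch of that preprocessing idea with numpy; the sample arrays are random placeholders, and how much you actually gain depends entirely on how similar your real samples are to the template:

```python
import numpy as np

# samples: many arrays with similar content; the first one serves as the template.
samples = [np.random.rand(1000).astype(np.float32) for _ in range(5)]
template = samples[0]

# Store the template once plus each file's difference from it; if the samples
# really are similar, the diffs are mostly near-zero and compress much better.
diffs = {f"diff_{i}": s - template for i, s in enumerate(samples)}
np.savez_compressed("dataset.npz", template=template, **diffs)

# To reconstruct sample i later:
loaded = np.load("dataset.npz")
sample_3 = loaded["template"] + loaded["diff_3"]
```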
I have a dataset with many different samples (numpy arrays). It is rather impractical to store everything in only one file, so I store many different 'npz' files (numpy arrays compressed in zip). Now I feel that if I could somehow exploit the fact that all the files are similar to one another I could achieve a much higher compression factor, meaning a much smaller footprint on my disk. Is it possible to store separately a 'zip basis'? I mean something which is calculated for all the files together and embodies their statistical features and is needed for decompression, but is shared between all the files. I would have said 'zip basis' file and a separate list of compressed files, which would be much smaller in size than each file zipped alone, and to decompress I would use the share 'zip basis' every time for each file. Is it technically possible? Is there something that works like this?
0
1
71
0
59,398,539
0
0
0
0
1
false
0
2019-12-18T18:35:00.000
0
1
0
Line Graph in django template
59,398,219
0
python,django
as pointed out by @roganjosh, you'll render the graph using a js library. So in the views.py you'll have to add your data to the context, and then render it in the template using a js library. I personally like plotly.js, they have a neat and easy to use interface. D3.js is also a very popular data visualisation library.
To be very honest, I've played with matplotlib a little, but I am new to Django. I have searched all over Google for how to plot a line graph in Django using a CSV. Unfortunately I couldn't find anything apart from 'bokeh' and 'chartit', and they weren't very useful to help me make a start. My goal: I need to plot a line graph where the x-axis has dates and the y-axis has some numbers. Now, what should my views.py look like? What things should I include in the template? Anyone, please help me out, or point me to some video tutorial to start with.
1
1
139
0
59,403,299
0
0
0
0
1
false
3
2019-12-19T04:20:00.000
3
2
0
Pandas not working: DataFrameGroupBy ; PanelGroupBy
59,403,256
0.291313
python-3.x,pandas
I guess you are using an older version of tqdm. Try using a version above tqdm>=4.23.4. The command using pip would be, pip install tqdm --upgrade
I have just upgraded python and I cannot get pandas to run properly, please see below. Nothing appears to work. Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/tqdm/_tqdm.py", line 613, in pandas from pandas.core.groupby.groupby import DataFrameGroupBy, \ ImportError: cannot import name 'DataFrameGroupBy' from 'pandas.core.groupby.groupby' (/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/pandas/core/groupby/groupby.py) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "code/analysis/get_cost_matrix.py", line 23, in tqdm.pandas() # Gives us nice progress bars File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/tqdm/_tqdm.py", line 616, in pandas from pandas.core.groupby import DataFrameGroupBy, \ ImportError: cannot import name 'PanelGroupBy' from 'pandas.core.groupby' (/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/pandas/core/groupby/init.py)
0
1
2,138
0
63,578,193
0
0
0
0
1
true
0
2019-12-19T11:42:00.000
1
1
0
ImageAI Object detection with prediction training
59,409,085
1.2
python,tensorflow,keras,imageai
No, location of objects is only possible with detection because it works on coordinates (bounding box), label error means you have to annotate your dataset.
I have successfully trained a predictor model - so with no labels using ModelTraining class. Currently, I can use CustomImagePrediction.predictImage() to return a value of what it thinks is in the picture. I want to be able to detect the location of the object in the image, not just what it thinks it is. This functionality is in CustomObjectDetection but this is obviously a different class (gives a no label error as it requires the other training method, with the labels). Is it possible to achieve this with a predictor model?
0
1
135
0
59,428,792
0
0
0
0
1
true
1
2019-12-20T14:07:00.000
0
1
0
Examining explained and unexplained variance of the DV
59,426,564
1.2
python,machine-learning,statistics,regression
I would begin analysis by finding the R-squared (R2) value of a model with all predictor variables, and then determine the change in R-squared when iteratively leaving out each predictor variable one at a time. Such an analysis should weed out the predictors with minimal impact on the regression, and give you a good idea of the impact for the remaining predictor variables. I would choose the R-squared fit statistic for this analysis as it is generally held to explain the amount of dependent data variance explained by the model, and I calculate R-squared by using numpy as "R2 = 1.0 - (numpy.var(regression_error) / numpy.var(dependent_data))"
What statistical techniques should I adopt when trying to determine how much my independent variables explain the variance of my dependent variable? For further context - I have been asked to develop a model in Python with the aim of examining the extent to which the predictor variables impact upon the response variable. Having usually focused on developing models for predictive purposes, I am unsure of where to start here.
0
1
105
0
59,429,840
0
0
0
0
1
false
2
2019-12-20T18:14:00.000
1
2
0
Is there a way to select only column labels using Pyhon Pandas library without any rows?
59,429,705
0.099668
python,pandas
Just do dataframe.columns to get all column names.
This might be a silly question to ask, however, it is for a specific task in a multi-step process to clean up some data. Basically, each column label is a location represented as a series of long numbers. Each column contains measurement values in each subsequent row for those locations. I do not need the measurements, only the locations (hence why I just need the column labels only). The reason I need this is because I need to replace some mixed up column labels in one CSV file with the correct column labels from another CSV file. I cannot do this in Excel since there are too many columns to read in (over 300,000 columns). I am essentially looking for a way to do a coded "Copy" and "Paste" from one file to another using Pandas if it can be done. I had a considered just dropping the columns I do not need, however, because the columns are labelled as numbers, I'd be filtering based on a multiple set of a conditions. I thought this method would be easier. Thank you for your help.
0
1
702
0
59,431,088
0
0
0
0
1
true
1
2019-12-20T19:18:00.000
0
1
0
replacement has length zero in R from a python code
59,430,331
1.2
python,r,numerical-methods
Keep in mind that R indexes are 1-based, while in Python they are 0-based. In your code, the first time through the for loop, u[i, j - 1] evaluates to u[2, 0], or numeric(0). This is what produces the error.
I was doing a numerical method in R and Python. I have applied the leapfrog method in Python and it worked perfectly, but I want to do a similar thing in R. Here you can see my code. I have tried doing u[2,2]=beta*(u[1,1]-2*u[2,1]+u[3,1]) and this works, so I can see that the error is due to the bold statement, i.e. because u[2,0] does not exist. But the same code worked in Python. Please help to resolve the error: while executing the loop, u[i, j - 1] evaluates to u[2, 0], or numeric(0), and this is what produces the error. Is there any solution?
0
1
81
0
59,441,386
0
0
0
0
1
true
1
2019-12-21T11:10:00.000
2
1
0
Calculate mean across one specific dimension of a 4D tensor in Pytorch
59,435,653
1.2
python,numpy,computer-vision,pytorch,tensor
The first part of the question has been answered in the comments section. So we can use tensor.permute(3, 0, 1, 2) (PyTorch's transpose only swaps two dimensions; permute takes the full ordering) to convert the tensor to the shape [1024,66,7,7]. Now the mean over the temporal dimension can be taken by torch.mean(my_tensor, dim=1). This will give a 3D tensor of shape [1024,7,7]. To obtain a tensor of shape [1024,1,7,7], I had to unsqueeze in dimension=1: tensor = tensor.unsqueeze(1)
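A small illustrative PyTorch sketch of the whole sequence, using a random tensor in place of the real video features; keepdim=True is an alternative to the separate unsqueeze step:

    import torch

    features = torch.randn(66, 7, 7, 1024)        # stand-in for the video feature tensor

    x = features.permute(3, 0, 1, 2)              # -> [1024, 66, 7, 7]
    x = torch.mean(x, dim=1, keepdim=True)        # mean over the 66 frames -> [1024, 1, 7, 7]
    print(x.shape)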
I have a PyTorch video feature tensor of shape [66,7,7,1024] and I need to convert it to [1024,66,7,7]. How to rearrange a tensor shape? Also, how to perform mean across dimension=1? i.e., after performing mean of the dimension with size 66, I need the tensor to be [1024,1,7,7]. I have tried to calculate the mean of dimension=1 but I failed to replace it with the mean value. And I could not imagine a 4D tensor in which one dimension is replaced by its mean. Edit: I tried torch.mean(my_tensor, dim=1). But this returns me a tensor of shape [1024,7,7]. The 4D tensor is being converted to 3D. But I want it to remain 4D with shape [1024,1,7,7]. Thank you very much.
0
1
4,489
0
59,439,160
0
0
0
0
1
false
0
2019-12-21T17:16:00.000
0
2
0
False Positive Rate in Confusion Matrix
59,438,262
0
python,pandas
It is possible to have FPR = 1 with TPR = 1 if your prediction is always positive no matter what your inputs are. TPR = 1 means we correctly predict all the positives. FPR = 1 is equivalent to always predicting positive when the condition is negative. As a reminder: FPR = 1 - TNR = [False Positives] / [Negatives], and TPR = 1 - FNR = [True Positives] / [Positives].
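A hedged sketch of computing TPR and FPR from a confusion matrix while guarding against the division-by-zero case mentioned below; the label arrays are made up for illustration and assume scikit-learn's confusion_matrix:

    import numpy as np
    from sklearn.metrics import confusion_matrix

    y_true = np.array([0, 0, 1, 1, 1, 0])   # placeholder labels
    y_pred = np.array([0, 1, 1, 1, 0, 0])   # placeholder predictions

    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()

    # Guard against division by zero when a class is absent
    tpr = tp / (tp + fn) if (tp + fn) > 0 else float("nan")
    fpr = fp / (fp + tn) if (fp + tn) > 0 else float("nan")
    print("TPR:", tpr, "FPR:", fpr)

Note that FPR = 1 - TPR does not hold in general; the two rates have different denominators.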
I was trying to manually calculate TPR and FPR for the given data. But unfortunately I don't have any false positive cases in my dataset, and not even any true positive cases. So I am getting a division-by-zero error in pandas. I have an intuition that fpr = 1 - tpr. Please let me know if my intuition is correct; if not, let me know how to fix this issue. Thank you
0
1
650
0
70,914,645
0
1
0
0
2
false
0
2019-12-21T22:42:00.000
0
2
0
Import yfinance as yf
59,440,380
0
python,conda,yahoo,yfinance
The best way to install a python library is to: Open a terminal or cmd or powershell on your system. If using a virtual environment, activate the environment in your terminal or cmd or powershell. For example, a virtual environment named virtual_environment created using virtualenv in the C://Users/Admin/VirtualEnvironments/ directory can be activated by opening a terminal or cmd or powershell in that directory and running virtual_environment\Scripts\activate (on Windows) or source virtual_environment/bin/activate (on Ubuntu). To install a library named python-library, run pip install python-library. To install a specific version of that library, like a library named python-library with version 2.0.1, run the command pip install python-library==2.0.1. Similarly, you simply have to run pip install yfinance. If you want to install a specific version, like 0.1.70, then run the command pip install yfinance==0.1.70. If you have a .whl file of that library like python-library.whl, and want to install that wheel, then run the command pip install python-library.whl.
Import yfinance as yf Should run normally on conda but get this message ModuleNotFoundError Traceback (most recent call last) in 1 import pandas as pd ----> 2 import yfinance as yf 3 import matplotlib.pyplot as plt ModuleNotFoundError: No module named 'yfinance' Strange? As should be simple to install?
0
1
6,906
0
68,372,729
0
1
0
0
2
false
0
2019-12-21T22:42:00.000
0
2
0
Import yfinance as yf
59,440,380
0
python,conda,yahoo,yfinance
If you are using Anaconda, try downloading yfinance using the PowerShell Prompt.
Import yfinance as yf Should run normally on conda but get this message ModuleNotFoundError Traceback (most recent call last) in 1 import pandas as pd ----> 2 import yfinance as yf 3 import matplotlib.pyplot as plt ModuleNotFoundError: No module named 'yfinance' Strange? As should be simple to install?
0
1
6,906
0
59,991,016
0
1
0
0
1
false
1
2019-12-22T02:02:00.000
0
2
0
python: DataConversionWarning: Data with input dtype uint8, int64 were all converted to float64 by StandardScaler
59,441,173
0
python-3.x
It means that you have columns in your dataset of type integer. It's a warning, so you are good to go if you need to scale your features for a regression or a neural network.
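If the warning is bothersome, one common way to make it go away is to cast the features to float yourself before scaling; a minimal sketch with made-up data:

    import numpy as np
    from sklearn.preprocessing import StandardScaler

    X = np.array([[1, 200], [2, 180], [3, 220]], dtype=np.int64)  # integer features trigger the warning

    X_float = X.astype(np.float64)                 # explicit cast, so no conversion warning
    X_scaled = StandardScaler().fit_transform(X_float)
    print(X_scaled)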
I don't understand this message /opt/conda/envs/Python36/lib/python3.6/site-packages/sklearn/preprocessing/data.py:645: DataConversionWarning: Data with input dtype uint8, int64 were all converted to float64 by StandardScaler. return self.partial_fit(X, y) /opt/conda/envs/Python36/lib/python3.6/site-packages/ipykernel/__main__.py:2: DataConversionWarning: Data with input dtype uint8, int64 were all converted to float64 by StandardScaler. from ipykernel import kernelapp as app My code is : X = Feature_test testX= preprocessing.StandardScaler().fit(X).transform(X) What does it mean?? And how can I fix it?
0
1
1,496
0
59,447,620
0
0
0
0
1
false
0
2019-12-22T19:49:00.000
0
1
0
Re train a saved model with same Dataset
59,447,567
0
python,tensorflow,machine-learning,keras,deep-learning
If you are using Keras, calling fit will basically start from the pre-existing weights, and therefore the first epoch will effectively be the 5th epoch. Therefore, the error will already be lower. However, beware that the processing time for each epoch will be equivalent. You will simply start from a model which does not have random weights.
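A rough Keras sketch of continuing training from a saved model; the file name my_model.h5 and the variables train_ds and val_ds are placeholders for your own saved model and image datasets, not anything from the original post:

    import tensorflow as tf

    # Load the model saved after the first 4 epochs (path is hypothetical)
    model = tf.keras.models.load_model("my_model.h5")

    # Calling fit() again starts from the saved weights, not from random ones,
    # so the loss at epoch 1 here roughly matches where training left off.
    # train_ds / val_ds stand in for your training and validation image datasets.
    model.fit(train_ds, validation_data=val_ds, epochs=10)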
I want to know: if I retrain my saved model, which I ran with 4 epochs, will it be faster with the same image set at 10 epochs? My data set consists of 2 folders of training and validation images with 5 classes, and 3000 training and 1000 validation images.
0
1
150
0
61,602,101
0
0
0
0
1
false
8
2019-12-23T09:26:00.000
0
3
0
Tensorboard: AttributeError: 'Model' object has no attribute '_get_distribution_strategy'
59,452,858
0
python-3.x,tensorflow,deep-learning,tensorboard,tensorflow2.0
This error mostly happens because of mixed imports from keras and tf.keras. Make sure that exact referencing of libraries is maintained throughout the code. For example, instead of model.add(Conv2D()) try model.add(tf.keras.layers.Conv2D()); applying this to all layers solved the problem for me.
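A small illustrative model built only from tf.keras (no plain keras imports mixed in), with the TensorBoard callback attached; the layer sizes and the random stand-in data are arbitrary:

    import numpy as np
    import tensorflow as tf

    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(8, 3, activation="relu", input_shape=(28, 28, 1)),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

    x = np.random.rand(32, 28, 28, 1).astype("float32")
    y = np.random.randint(0, 10, size=(32,))

    tb = tf.keras.callbacks.TensorBoard(log_dir="logs")
    model.fit(x, y, epochs=1, callbacks=[tb])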
I'm getting this error when I use the TensorBoard callback while training. I tried looking for answers in posts related to TensorBoard errors, but this exact error was not found in any Stack Overflow posts or GitHub issues. Please let me know. The following versions are installed on my PC: TensorFlow and TensorFlow GPU: 2.0.0, TensorBoard: 2.0.0
0
1
5,342
0
59,467,879
0
0
0
0
1
false
1
2019-12-24T08:58:00.000
0
2
0
numpy array to just the number in the array
59,466,258
0
python,numpy
A numpy array of rank 0 is a scalar (it's got shape ()) and will behave like a scalar everywhere. You can treat it like that. You're perhaps mixing it up with an array of rank 1, e.g., np.array([99.79928571]). You can also wrap your list into np.array to get an array of float64. Perhaps that looks nicer to your eye.
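For instance, a tiny sketch using the first two values from the question:

    import numpy as np

    values = [np.array(99.75142857), np.array(99.79928571)]   # list of rank-0 arrays

    as_floats = [float(v) for v in values]       # plain Python floats
    as_array = np.array(values, dtype=float)     # a single rank-1 float64 array
    print(as_floats, as_array)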
I got an array list that looks like [array(99.75142857), array(99.79928571), array(99.82238095), array(99.83857143), array(99.85), array(99.85738095), array(99.86285714), array(99.86767857)]. I'm not sure what this array is, but I just want to get the numbers [99.75142857, 99.79928571, ...]. Here array() means a numpy array.
0
1
226
0
59,480,515
0
0
0
0
1
true
0
2019-12-24T22:06:00.000
0
1
0
Numerically stable way to compute conditional covariance matrix using linalg.solve
59,473,735
1.2
python,linear-algebra,matrix-inverse
Please don't use inv; it's not as bad as most people think, but there are easier ways: you mentioned how np.linalg.solve(A, b) equals A^{-1} . b, but there's no requirement on what b is. You can use solve to solve your question, A - np.dot(B, np.linalg.solve(D, C)). (Note, if you're doing blockwise matrix inversion, C is likely B.transpose(), right?)
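A minimal numpy sketch of that expression, with random matrices standing in for the real blocks:

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.normal(size=(4, 4))
    B = rng.normal(size=(4, 3))
    C = rng.normal(size=(3, 4))
    D = rng.normal(size=(3, 3)) + 3.0 * np.eye(3)   # keep D well conditioned

    # A - B D^{-1} C without forming D^{-1}: solve D X = C, then subtract B X
    schur = A - B @ np.linalg.solve(D, C)
    print(schur)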
I know that the recommendation is not to use linalg.inv and to use linalg.solve when inverting matrices. This makes sense when I have a situation like Ax = b and I want to get x, but is there a way to compute something like: A - B * D^{-1} * C without using linalg.inv? Or what is the most numerically stable way to deal with the inverse in the expression? Thanks!
0
1
416
0
59,474,328
0
1
0
0
1
true
0
2019-12-24T22:42:00.000
1
1
0
Processing a Corpus For a word2vec Implementation
59,473,926
1.2
python,machine-learning,nlp,word2vec
Hashtable lookups can be very fast, and repeated lookups may not contribute much to the overall runtime. But the only way to really know the potential speedup of your proposed optimization is to implement it, and profile it in comparison to the prior behavior. Also, as you note, to be able to re-use a single-pass token-lookup, you'd need to store those results somewhere. Google's word2vec.c code, like many other implementations, seeks to work well with input corpuses that are far larger than addressable memory. Writing the interim tokenization to disk would require extra code complication, and extra working space on disk, compared to the baseline of repeated lookups. So: even if it did speed things a little, implementors might consider the extra complexity undesirable.
As part of a class project, I'm trying to write a word2vec implementation in Python and train it on a corpus of ~6GB. I'm trying to code a reasonably optimized solution so I don't have to let my PC sit for days. Going through the C word2vec source code, I notice that there, each thread reads words from a file, and takes the time to look up the index of every word. At the end, it stores a "sentence" of word indexes. Wouldn't it be logical to translate the whole corpus into one containing integer indexes of the appropriate words? That way, time isn't lost during training on hash-table lookups, while the translation process is a one-time expense. I understand that for extremely large corpuses, you are effectively doubling the amount it takes on disk, which you might want to avoid. However, if you do have the memory, wouldn't this offer a noticeable increase in efficiency? Or am I just overestimating the impact of a table lookup?
0
1
47
0
71,549,947
0
0
0
0
1
false
1
2019-12-25T02:06:00.000
0
1
0
tensorflow gather then reduce_sum
59,474,657
0
python,tensorflow,neural-network
I think you can just multiply by a sparse matrix -- I was searching to find out whether the two are internally equivalent when I landed on your post.
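One way that idea could look in TensorFlow: build a sparse [4, 10] indicator matrix from ix so the row sums come out of a single sparse-dense matmul and the 4x3x5 intermediate never materialises. This is a sketch with made-up index values, not a claim about what TensorFlow does internally:

    import tensorflow as tf

    M = tf.random.normal([10, 5])
    ix = tf.constant([[0, 1, 2], [3, 4, 5], [6, 7, 8], [1, 3, 9]])   # example indices, shape [4, 3]

    rows = tf.repeat(tf.range(tf.shape(ix)[0]), tf.shape(ix)[1])     # row id for each gathered index
    cols = tf.reshape(ix, [-1])
    indices = tf.cast(tf.stack([rows, cols], axis=1), tf.int64)
    vals = tf.ones([tf.size(ix)], dtype=M.dtype)

    S = tf.sparse.SparseTensor(indices, vals, dense_shape=[4, 10])   # one 1 per gathered row of M
    result = tf.sparse.sparse_dense_matmul(tf.sparse.reorder(S), M)  # shape [4, 5]
    print(result.shape)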
Let's say I have a matrix M of size 10x5, and a set of indices ix of size 4x3. I want to do tf.reduce_sum(tf.gather(M,ix),axis=1) which would give me a result of size 4x5. However, to do this, it creates an intermediate gather matrix of size 4x3x5. While at these small sizes this isn't a problem, if these sizes grow large enough, I get an OOM error. However, since I'm simply doing a sum over the 1st dimension, I never need to calculate the full matrix. So my question is, is there a way to calculate the end 4x5 matrix without going through the intermediate 4x3x5 matrix?
0
1
145
0
60,563,891
0
0
0
0
1
false
1
2019-12-25T06:36:00.000
0
1
0
How to use SVM to classify when the shape of the features for each sample is a matrix? Is it simply a matter of reshaping the matrix into a long vector?
59,475,835
0
python,svm
Yes, that would be the approach I would recommend. It is essentially the same procedure that is used when utilizing images in image classification tasks, since each image can be seen as a matrix. So what people do is to write the matrix as a long vector, consisting of every column concatenated to one another. So you can do the same here.
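As an illustration with random stand-in data shaped like the question (120 samples of 15x17), assuming scikit-learn's SVC:

    import numpy as np
    from sklearn.svm import SVC

    X = np.random.rand(120, 15, 17)          # stand-in feature matrices
    y = np.random.randint(0, 2, size=120)    # stand-in labels

    X_flat = X.reshape(len(X), -1)           # each 15x17 matrix becomes a 255-long vector
    clf = SVC(kernel="rbf").fit(X_flat, y)
    print(clf.score(X_flat, y))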
I have 120 samples, and the shape of the features for each sample is a 15*17 matrix. How do I use SVM to classify? Is it simply a matter of reshaping the matrix into a long vector?
0
1
28
0
59,479,125
0
0
0
0
1
false
10
2019-12-25T13:43:00.000
0
2
0
How to convert a grayscale image to heatmap image with Python OpenCV
59,478,962
0
python,image,opencv,image-processing,computer-vision
You need to convert the image to a proper grayscale representation. This can be done in a few ways, particularly with imread(filename, cv2.IMREAD_GRAYSCALE). This reduces the shape of the image to (540, 960) (hint: no third dimension).
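A hedged sketch of one common approach: read the image as single-channel grayscale and map it to a false-colour heatmap with OpenCV's applyColorMap (low values come out "cold", high values "hot"); the file names are placeholders:

    import cv2

    gray = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)   # shape (540, 960), values 0..255

    # applyColorMap maps low intensities to blue and high intensities to red with COLORMAP_JET
    heatmap = cv2.applyColorMap(gray, cv2.COLORMAP_JET)    # BGR uint8 array, shape (540, 960, 3)

    cv2.imwrite("heatmap.png", heatmap)                    # or keep the arrays and merge them into a video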
I have a (540, 960, 1) shaped image with values ranging from [0..255] which is black and white. I need to convert it to a "heatmap" representation. As an example, pixels with 255 should be of most heat and pixels with 0 should be with least heat. Others in-between. I also need to return the heat maps as Numpy arrays so I can later merge them to a video. Is there a way to achieve this?
0
1
20,273