[{"Question":"I would like to reuse the same Spark application across multiple runs of a python script, that uses it's Spark session object. That is, I would like to have a Spark application running in the background, and access it's Spark session object from within my script. Does anybody know how to do that?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":60,"Q_Id":69312330,"Users Score":0,"Answer":"To the best of my knowledge, it is not possible. It is the security model of Spark to isolate each session to a distinct app.\nWhat I have done in the past:\n\nbuild a small REST server on top of Spark that listens to specific command. At boot time, the server creates the session and load the data, so that forthcoming transformations are fast.\n\ncache data in Delta lake, you still have the boot time and data ingestion, but it\u2019s much faster that accessing data from several sources and preparing the data.\n\n\nIf you describe a bit more your use-case, I may be able to help a little more.","Q_Score":2,"Tags":"python,apache-spark,pyspark","A_Id":69313698,"CreationDate":"2021-09-24T09:03:00.000","Title":"How to request the spark session object from a background-running Spark application from within a python script?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using pandas.read_csv() to read a csv file, but characters like the copyright symbol are getting converted to \ufffd\nFor example, in excel I will see this value - \n\/ORGANIZATION\/AFFLUENT-ATTACH\u00c3\u00a9-CLUB-2\nIn jupyter notebook in turns to this - \n\/ORGANIZATION\/AFFLUENT-ATTACH\ufffd-CLUB-2 in one dataframe \n\/ORGANIZATION\/AFFLUENT-ATTACH\u00c9-CLUB-2 in the other\nI need to do an inner join of 2 dataframes, both of which have a column with these unique IDs, but values like these are getting left out.\nI thought it might be something to do with the enconding, so I found that the encoding type is cp1252 for both csv files. I do not know if this information is useful.\nPlease help.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":42,"Q_Id":69315906,"Users Score":0,"Answer":"Try to change the encoding of the file to UTF-8 or UTF-16 while reading.","Q_Score":0,"Tags":"python,pandas,dataframe,csv,data-science","A_Id":69315970,"CreationDate":"2021-09-24T13:32:00.000","Title":"How to deal with special characters like \ufffd in python?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using pandas to read in csv data to my python script.\nBoth csv files have the same encoding (Windows-1252).\nHowever with one of the files I get an error when reading the csv file with pandas, unless I specify the encoding parameters in pd.read_csv().\nDoes anyone know why I need to specify the encoding in one csv and not the other? Both csv's contain similar data (strings and numbers).\nThank you","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":33,"Q_Id":69322728,"Users Score":0,"Answer":"That just means that one of the files has a character outside the range 0x00 to 0x7F. It's only the highest 128 values where the encoding makes a difference. 
All it takes is one n-with-tilde or one smart quote mark.","Q_Score":0,"Tags":"python,pandas,csv","A_Id":69322833,"CreationDate":"2021-09-25T02:38:00.000","Title":"Encoding csv error with Pandas - have to encode one csv file but not the other -both have same encoding","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Recently, I've been interested in Data analysis.\nSo I researched about how to do machine-learning project and do it by myself.\nI learned that scaling is important in handling features.\nSo I scaled every features while using Tree model like Decision Tree or LightGBM.\nThen, the result when I scaled had worse result.\nI searched on the Internet, but all I earned is that Tree and Ensemble algorithm are not sensitive to variance of the data. \nI also bought a book \"Hands-on Machine-learning\" by O'Relly But I couldn't get enough explanation.\nCan I get more detailed explanation for this?","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":139,"Q_Id":69323275,"Users Score":0,"Answer":"Though I don't know the exact notations and equations, the answer has to do with the Big O Notation for the algorithms.\nBig O notation is a way of expressing the theoretical worse time for an algorithm to complete over extremely large data sets. For example, a simple loop that goes over every item in a one dimensional array of size n has a O(n) run time - which is to say that it will always run at the proportional time per size of the array no matter what.\nSay you have a 2 dimensional array of X,Y coords and you are going to loop across every potential combination of x\/y locations, where x is size n and y is size m, your Big O would be O(mn)\nand so on. Big O is used to compare the relative speed of different algorithms in abstraction, so that you can try to determine which one is better to use.\nIf you grab O(n) over the different potential sizes of n, you end up with a straight 45 degree line on your graph.\nAs you get into more complex algorithms you can end up with O(n^2) or O(log n) or even more complex. -- generally though most algorithms fall into either O(n), O(n^(some exponent)), O(log n) or O(sqrt(n)) - there are obviously others but generally most fall into this with some form of co-efficient in front or after that modifies where they are on the graph. If you graph each one of those curves you'll see which ones are better for extremely large data sets very quickly\nIt would entirely depend on how well your algorithm is coded, but it might look something like this: (don't trust me on this math, i tried to start doing it and then just googled it.)\nFitting a decision tree of depth \u2018m\u2019:\n\nNa\u00efve analysis: 2m-1 trees -> O(2m-1 n d log(n)).\neach object appearing only once at a given depth: O(m n d log n)\n\nand a Log n graph ... 
well pretty much doesn't change at all even with sufficiently large numbers of n, does it?\nso it doesn't matter how big your data set is, these algorithms are very efficient in what they do, but also do not scale because of the nature of a log curve on a graph (the worst increase in performance for +1 n is at the very beginning, then it levels off with only extremely minor increases to time with more and more n)","Q_Score":0,"Tags":"python,data-analysis,decision-tree,ensemble-learning,feature-scaling","A_Id":69323379,"CreationDate":"2021-09-25T05:00:00.000","Title":"Why Does Tree and Ensemble based Algorithm don't need feature scaling?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Recently, I've been interested in Data analysis.\nSo I researched about how to do machine-learning project and do it by myself.\nI learned that scaling is important in handling features.\nSo I scaled every features while using Tree model like Decision Tree or LightGBM.\nThen, the result when I scaled had worse result.\nI searched on the Internet, but all I earned is that Tree and Ensemble algorithm are not sensitive to variance of the data. \nI also bought a book \"Hands-on Machine-learning\" by O'Relly But I couldn't get enough explanation.\nCan I get more detailed explanation for this?","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":139,"Q_Id":69323275,"Users Score":0,"Answer":"Do not confuse trees and ensembles (which may be consist from models, that need to be scaled).\nTrees do not need to scale features, because at each node, the entire set of observations is divided by the value of one of the features: relatively speaking, to the left everything is less than a certain value, and to the right - more. What difference then, what scale is chosen?","Q_Score":0,"Tags":"python,data-analysis,decision-tree,ensemble-learning,feature-scaling","A_Id":69358985,"CreationDate":"2021-09-25T05:00:00.000","Title":"Why Does Tree and Ensemble based Algorithm don't need feature scaling?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am at a total loss as to why this is impossible to find but I really just want to be able to groupby and then export to excel. Don't need counts, or sums, or anything else and can only find examples including these functions. Tried removing those functions and the whole code just breaks.\nAnyways:\nHave a set of monthly metrics - metric name, volumes, date, productivity, and fte need. Simple calcs got the data looking nice, good to go. Currently it is grouped in 1 month sections so all metrics from Jan are one after the other etc. 
Just want to change the grouping so first section is individual metrics from Jan to Dec and so on for each one.\nInitial data I want to export to excel (returns not a dataframe error)\ndfcon = pd.concat([PmDf,ReDf])\ndfcon['Need'] = dfcon['Volumes'] \/ (dfcon['Productivity']*21*8*.80)\ndfcon[['Date','Current Team','Metric','Productivity','Volumes','Need']]\ndfg = dfcon.groupby(['Metric','Date'])\ndfg.to_excel(r'S:\\FilePATH\\GroupBy.xlsx', sheet_name='pandas_group', index = 0)\nThe error I get here is: 'DataFrameGroupBy' object has no attribute 'to_excel' (I have tried a variety of conversions to dataframes and closest I can get is a correct grouping displaying counts only for each one, which I do not need in the slightest)\nI have also tried:\ndfcon.sort('Metric').to_excel(r'S:\\FILEPATH\\Grouped_Output.xlsx', sheet_name='FTE Need', index = 0)\nthis returns the error: AttributeError: 'DataFrame' object has no attribute 'sort'\nAny help you can give to get this to be able to be exported grouped in excel would be great. I am at my wits end here after over an hour of googling. I am also self taught so feel like I may be missing something very, very basic\/simple so here I am!\nThank you for any help you can provide!\nPs: I know I can just sort after in excel but would rather learn how to make this work in python!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":85,"Q_Id":69327492,"Users Score":2,"Answer":"I am pretty sure sort() doesnt work anymore, try sort_values()","Q_Score":0,"Tags":"python,pandas,dataframe,sorting,pandas-groupby","A_Id":69327634,"CreationDate":"2021-09-25T15:20:00.000","Title":"python groupby to dataframe (just groupby to data no additional functions) to export to excel","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a numpy array with size (1000,6) and I fill part of it each time during my program. I need to find the first location of zero in this array. for this, I used np.where( array==0). but the output is a tuple of size 2 and each item of it is a numpy array and I do not how can I find the first index of occurring zero in this array. what should I do about this?","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":80,"Q_Id":69329249,"Users Score":1,"Answer":"The first element of the tuple that you got should be the index you are looking.","Q_Score":1,"Tags":"python,python-3.x","A_Id":69329279,"CreationDate":"2021-09-25T19:10:00.000","Title":"how do I find the index of an specific value in numpy array?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"When i am using \"optimizer = keras.optimizers.Adam(learning_rate)\" i am getting this error\n\"AttributeError: module 'keras.optimizers' has no attribute 'Adam\". I am using python3.8 keras 2.6 and backend tensorflow 1.13.2 for running the program. 
Please help to resolve !","AnswerCount":5,"Available Count":3,"Score":1.0,"is_accepted":false,"ViewCount":36123,"Q_Id":69334001,"Users Score":22,"Answer":"Use tf.keras.optimizers.Adam(learning_rate) instead of keras.optimizers.Adam(learning_rate)","Q_Score":13,"Tags":"python-3.8","A_Id":69420237,"CreationDate":"2021-09-26T10:29:00.000","Title":"AttributeError: module 'keras.optimizers' has no attribute 'Adam'","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"When i am using \"optimizer = keras.optimizers.Adam(learning_rate)\" i am getting this error\n\"AttributeError: module 'keras.optimizers' has no attribute 'Adam\". I am using python3.8 keras 2.6 and backend tensorflow 1.13.2 for running the program. Please help to resolve !","AnswerCount":5,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":36123,"Q_Id":69334001,"Users Score":0,"Answer":"I think you are using Keras directly. Instead of giving as from keras.distribute import \u2014> give as from tensorflow.keras.distribute import \nHope this would help you.. It is working for me.","Q_Score":13,"Tags":"python-3.8","A_Id":69362432,"CreationDate":"2021-09-26T10:29:00.000","Title":"AttributeError: module 'keras.optimizers' has no attribute 'Adam'","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"When i am using \"optimizer = keras.optimizers.Adam(learning_rate)\" i am getting this error\n\"AttributeError: module 'keras.optimizers' has no attribute 'Adam\". I am using python3.8 keras 2.6 and backend tensorflow 1.13.2 for running the program. Please help to resolve !","AnswerCount":5,"Available Count":3,"Score":0.0399786803,"is_accepted":false,"ViewCount":36123,"Q_Id":69334001,"Users Score":1,"Answer":"There are ways to solve your problem as you are using keras 2.6 and tensorflow too:\n\nuse (from keras.optimizer_v2.adam import Adam as Adam) but go through the function documentation once to specify your learning rate and beta values\nyou can also use (Adam = keras.optimizers.Adam).\n(import tensorflow as tf) then (Adam = tf.keras.optimizers.Adam)\n\nUse the form that is useful for the environment you set","Q_Score":13,"Tags":"python-3.8","A_Id":70180330,"CreationDate":"2021-09-26T10:29:00.000","Title":"AttributeError: module 'keras.optimizers' has no attribute 'Adam'","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is there a way to make partial dependence plots for the random forest with multiple classification in Python (using scikit-learn)?\nI'm raising a separate question about this because I'm not sure if such a function exists in scikit-learn. I've seen a few examples in R already. If the function doesn't exist, I will make the request in scikit-learn github, but just want to double-check with the community before making the request.\nIf you know of any other Python package other than scikit learn that could conduct the plot, please let me know. 
Thanks.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":198,"Q_Id":69341312,"Users Score":1,"Answer":"You have to specify the class for which you want to plot the partial dependencies. This is done by the parameter \"target\" in the plot_partial_dependence function\nFor example, if you have three target classes \"low\", \"medium\", \"high\", you would say plot_partial_dependence(estimator, ..., target='high').\nHowever, I'm still trying to find some answers regarding the interpretations of partial dependency plots for multi-class-classifiers. If you have some information, let me know.","Q_Score":0,"Tags":"python,plot,scikit-learn,random-forest,multiclass-classification","A_Id":69557193,"CreationDate":"2021-09-27T04:42:00.000","Title":"Is there a way to make partial dependence plots for random forest with multiple classification in Python (using scikit-learn)?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"how to get back 10 from df.iloc[10] where df = pd.DataFrame({'a':np.arange(1,12)})?\nI tried df.index but it returns a weird np.array which doesn't contain anything close to 10.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":79,"Q_Id":69342451,"Users Score":0,"Answer":"The most simple solution if the index matches the row numbers is df.iloc[10].name which returns 10","Q_Score":0,"Tags":"python,pandas,dataframe","A_Id":69342824,"CreationDate":"2021-09-27T07:03:00.000","Title":"Get index of DataFrame row","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I occasionally get the above error when making requests with the python requests library to Qualtrics APIs.\nIn a nutshell, I have a Google Cloud Function on Google Cloud that will trigger when a csv file is placed on a specific Cloud Storage Bucket. The function will create a Qualtrics distribution list on Qualtrics, upload the contacts and then download the distribution links.\nEvery day, three files are uploaded on Cloud Storage, each for a different survey, and so three Google Cloud instances will be started.\nMy gripes with the issues are:\n\nit doesn't happen regularly, in fact the workflow correctly worked for the past year\nit doesn't seem to be tied to the processed files: when the function crashes and I manually restart it by reuploading the same csv into the bucket, it will work smoothly\n\nThe problem started around when we added the third daily csv to the process, and tend to happen when two files are being processed at the same time. For these reasons my suspects are:\n\nI'm getting rate limited by Qualtrics (but I would expect a more clear message from Qualtrics)\nThe requests get in some way \"crossed up\" when two files are processed. I'm not sure if requests.request implicitly opens a session with the APIs. 
In that case the problem could be generated by multiple sessions being open at the same time from two csv being processed at the same time\n\nAs I said, the error seem to happen without a pattern, and it has happened on any part of the code where I'm doing a request to the APIs, so I'm not sure if sharing extensive code is helpful, but in general the requests are performed in a pretty standard way:\nrequests.request(\"POST\", requestUrl, data=requestPayload, headers=headers)\nrequests.request(\"GET\", requestUrl, headers=headers)\netc\ni.e.: I'm not using any particular custom option","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":318,"Q_Id":69343342,"Users Score":1,"Answer":"In the end I kind of resolved the issue with a workaround:\n\nI separated the processing of the three csv so that there is no overlap in processing time between two files\nimplemented a retry policy in the POST request\n\nSince then, separating processing time for the files reduced substantially the number of errors (from one or more each day to around 1 error a week), and even when they happen the retry policy circumvents the error at the first retry.\nI realize this may not be the ideal solution, so I'm open to alternatives if someone comes up with something better (or even more insights on the root problem).","Q_Score":1,"Tags":"python,python-requests,google-cloud-functions,qualtrics","A_Id":69553893,"CreationDate":"2021-09-27T08:21:00.000","Title":"Qualtrics API, getting \"[SSL: DECRYPTION_FAILED_OR_BAD_RECORD_MAC] decryption failed or bad record mac (_ssl.c:2570)\"","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I just have a question on CNN which is should the model take all inputs used in training to predict new sample? what if i want to build a system for a hospital that predicts the disease from image and some features such as age and height but the user doesn\u2019t need to enter the features in case they are not available, so he can input the image only. Is that possible to do that in CNN? because as I know all input used for training should be entered for testing and predicting new data","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":37,"Q_Id":69345545,"Users Score":0,"Answer":"As I understood as per your description you want to predict the age and height from the image of patience. In that case for training, you need proper data and model. In model training, you have to specify X_train, Y_train at least. From there, the model will learn.\nX_train - provided image of a person\nY_train(label) - the characteristic you want to provide (height & age)\nFor predicting purposes you have to modify the input image the same as you did before for X_train. then if you feed it into a trained model it will give you the prediction of height & age.","Q_Score":0,"Tags":"python,tensorflow,conv-neural-network","A_Id":69345717,"CreationDate":"2021-09-27T11:06:00.000","Title":"Should the CNN take all the inputs used for training to predict new samples?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've got 17,000 CSV files, each ordered by timestamp (some with missing data). 
The total CSV files are around 85GB, which is much larger than my 32GB RAM.\nI'm trying to figure out the best way to get these into a time-aligned, out-of-memory data structure, such that I can compute things like PCA.\nWhat's the right approach?\n(I've tried to set up an xarray.DataSet, with dim=(filename, time), and then I'm trying to xr.merge() on each CSV file into the DataSet, but it gets slower with every insert, and I expect it will crash when RAM runs out.)","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":84,"Q_Id":69346655,"Users Score":0,"Answer":"Have you tried dd.read_csv(...).\nDask reads CSVs in a lazily and can perform certain operations in a streaming manner, so you can run an analysis on a larger than memory dataset.\nMake sure that Dask is able to properly set divisions when you read in your data. Once the data is read, check dd.divisions and make sure they're values.\nYou can also use a Dask cluster to access more memory of course.\nThose files are really small and Dask typically works best with partitions that are around 100MB. You might want to compact your data a bit.","Q_Score":1,"Tags":"dask,python-xarray","A_Id":69418573,"CreationDate":"2021-09-27T12:30:00.000","Title":"What's the best way to handle large timeseries in dask \/ xarray?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Normally when you construct a cubic spline with SciPy you get a spline with C2 continuity, which means the spline's derivative is continuous, and the derivative's derivative is continuous as well.\nI would like to create a cubic spline without those guarantees -- in other words, the spline would be a C0 function.\nThe motivation is to efficiently represent a continuous function on an embedded device. A smooth derivative is not needed, and in fact just causes the spline to have more error (as compared to the original function) than it would otherwise have.\nI know I could write the code to choose the cubic polynomial coefficients on my own, but wondering if there's a simple way to do it with existing code.\nBetween knots I'd be minimising mean squared error between the function and the fitted cubic.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":55,"Q_Id":69353693,"Users Score":0,"Answer":"The more complicated your make your polynomial (e.g. 3rd order), the more constraints you need on your boundary conditions (e.g. C2). If you try to fit data to a cubic spline with only C0 conditions, then the problem is under-determined. You might as well fit with a line in that case. 
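If that trade-off is acceptable, a plain piecewise linear table is easy to build and evaluate with numpy; a rough sketch, where x and y stand in for samples of your original function and knots are the breakpoints you plan to store on the device:
import numpy as np

x = np.linspace(0.0, 1.0, 200)
y = np.sin(2.0 * np.pi * x)           # stand-in for the original function
knots = np.linspace(0.0, 1.0, 11)     # breakpoints kept on the embedded device
values = np.interp(knots, x, y)       # table of function values at the knots

# evaluation on the device is then just another linear interpolation
y_approx = np.interp(x, knots, values)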
Use piecewise linear fit.","Q_Score":0,"Tags":"python,scipy,spline","A_Id":69353820,"CreationDate":"2021-09-27T22:02:00.000","Title":"How to construct a cubic spline with C0 continuity","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Normally when you construct a cubic spline with SciPy you get a spline with C2 continuity, which means the spline's derivative is continuous, and the derivative's derivative is continuous as well.\nI would like to create a cubic spline without those guarantees -- in other words, the spline would be a C0 function.\nThe motivation is to efficiently represent a continuous function on an embedded device. A smooth derivative is not needed, and in fact just causes the spline to have more error (as compared to the original function) than it would otherwise have.\nI know I could write the code to choose the cubic polynomial coefficients on my own, but wondering if there's a simple way to do it with existing code.\nBetween knots I'd be minimising mean squared error between the function and the fitted cubic.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":55,"Q_Id":69353693,"Users Score":0,"Answer":"Not out of the box, no.\nWith fixed breakpoints, it's just a linear least squares problem (with a continuity constraint), which you'll need to solve yourself.","Q_Score":0,"Tags":"python,scipy,spline","A_Id":69363573,"CreationDate":"2021-09-27T22:02:00.000","Title":"How to construct a cubic spline with C0 continuity","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have 2 variables in a pandas dataframe which are being used in a calculation (Var1 \/ Var2) and the values consist of both floating point values and missing values (which I chose to coerce to 0). In my end calculation I am receiving 'inf' values and NA values. The NA values are expected but how do I derive a useable number instead of the 'inf' values?\nsome 'inf' values are appearing when VAR1 = float and Var2 = 0, others appear when both VAR1 and VAR2 are floats.\nMy initial approach was to round the floats to 2 significant figures before the calculation but I still received the inf values.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":42,"Q_Id":69354329,"Users Score":1,"Answer":"You may be getting inf because you are dividing by zero. For example, if var1 = 5 and var2 = 0, then you are computing 5 \/ 0.\nIn pure Python this returns a ZeroDivisionError, but in lots of data libraries they avoid throwing this error because it would crash your code. Instead, they output inf, or \"infinity\".\nWhen var1 and var2 are both floats, it may be that var2 is extremely small. This would result in var1 \/ var2 being extremely large. At a certain point, Python doesn't let numbers get any larger and simply represents them as inf.\nRounding wouldn't help, because if var2 = 0, then it would round to 0, and if var2 is very small, it would also round to 0. 
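If you need usable numbers afterwards, a common clean-up (a sketch, assuming df is the DataFrame holding Var1 and Var2) is to convert the inf values to NaN and then decide how to fill them:
import numpy as np

df['ratio'] = df['Var1'] / df['Var2']
df['ratio'] = df['ratio'].replace([np.inf, -np.inf], np.nan)
df['ratio'] = df['ratio'].fillna(0)   # or keep NaN / drop, depending on the analysis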
As discussed earlier, dividing by zero causes the inf.","Q_Score":0,"Tags":"python,pandas,dataframe,data-cleaning,inf","A_Id":69354468,"CreationDate":"2021-09-27T23:52:00.000","Title":"Unwanted 'Inf' values in calculated measures","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"i'm looking for a framework that is able to solve the following Data Science issue:\nI have several Teachers that can work for X amount of hours a week and have several subject that they can teach.\n\nTeacher 1: Math\nTeacher 2: Math + English\nTeacher 3: Sports + Art + English\nTeacher 4: Math + Art\nTeacher 5: Sports + Math + English\n\nIn a school, every subject needs a specific amount of hours per week. Some more than others\n\nMath: 12 hours\nEnglish: 8 hours\nArt: 4 hours\nSport: 2 hours\n\nLets say one teacher can do 2-3 hours just so you get my point^^\nThe Solution im looking for is a Framework or Algoritm that is filled (trained) with the data and then is able to distribute the teachers so all the subjects are capped or at least as close as possible. That means maybe Teacher 2 needs to teach only Math and Teacher 5 needs to teach 50% Sport and 50% English or 30% Math \/ 40% Sport \/ 30% English.\nSomeone mentioned Prolog but im not sure if it can handle this kind of problem? Maybe im wrong?\nIs there something that is fitting for my problem or am i destined to code that algorithm from scratch on my own?\nThanks in advance.","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":112,"Q_Id":69357856,"Users Score":1,"Answer":"The first step seems to be to translate a research problem (or a series of problem statements) into a precise form. Problem Characterization\/Problem Conceptualization seems to be a technique for resolving that issue. Once the approach has been conceptualised, a technique must be identified for each of the sub-models and submodules.\nBreaking down the high problem statement into smaller problems is called problem conceptualizing. For every subproblem, a technique must be identified, and the methodology must be determined by the assumptions that have been stated previously.\nRealization of a Solution: Determines whether the assumptions are reasonable or whether the solutions meet his needs.\nThis can be compared to a flowchart that he has been creating with these subproblems, and also in general, it is attempting to reach a granularity level where he would determine the issue class. As a result, these issues can be classified as being either function optimization or categorization issues.","Q_Score":0,"Tags":"javascript,python,prolog,data-science,data-science-experience","A_Id":69446214,"CreationDate":"2021-09-28T07:57:00.000","Title":"LF Framework to solve Data Science Issue","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm structuring a monitoring system for a photovoltaic plant with pvlib. As the modules are bifacial and are mounted on a solar tracker (2p), I am using pvfactors. 
I believe I have already resolved the dependencies: pvfactors 1.5.1, pvlib 0.7.0, and shapely reinstalled via conda.\nAs the modules do not have parameters for the Sandia model, I intend to use the de Soto model.\nI plan to run the code automatically once a day with the weather data collected during the period.\nI would like to know if anyone has any code developed with pvfactors and single diode models for the modules.\nSure of your attention, thank you in advance!\nBen Possatto","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":92,"Q_Id":69394067,"Users Score":0,"Answer":"You can model a single-axis tracked bifacial system using pvlib.tracking.SingleAxisTracker (inherits from a PVSystem instance) to calculate surface_tilt and surface_azimuth, then pass those results to pvfactors_timeseries to get the front and rear irradiance.","Q_Score":0,"Tags":"python,pvlib,pv","A_Id":69503094,"CreationDate":"2021-09-30T14:32:00.000","Title":"How model tracked bifacial PV modules with python?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"When we use a pretrained model, e.g. vgg16, as a backbone of the whole model which plays as a feature extractor, the model's data flow can be depicted as below:\nData --> vgg16 --> another network --> output\nAs for now, I've set False require_grads flags for all parameters in vgg16, and exclude those parameters from my optimizer's param list, so the vgg16 will not be modified during the training period.\nBut when I step further in my study, I'm now wondering which mode should vgg16 be used in? Should we call vgg16.eval() before running training epochs?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":167,"Q_Id":69401248,"Users Score":0,"Answer":"However, in the general case, if you are freezing the model (with requires_grad = False) then you are not updating the running statistics anymore and should therefore use the running statistics, i.e. put the model in eval mode.\nVGG's backbone does not have any normalization layers nor dropouts. 
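For reference, a typical freeze-and-extract pattern looks like the sketch below (using torchvision's vgg16 as a stand-in for your backbone):
import torch
import torchvision

vgg = torchvision.models.vgg16(pretrained=True)
for p in vgg.parameters():
    p.requires_grad = False     # and exclude these parameters from the optimizer, as you did
vgg.eval()                      # harmless here: vgg16's feature extractor has no BatchNorm or Dropout

with torch.no_grad():
    features = vgg.features(torch.randn(1, 3, 224, 224))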
So in the end it does not matter whether you put the backbone into eval or training mode.","Q_Score":1,"Tags":"python,pytorch,vgg-net,pre-trained-model","A_Id":69402332,"CreationDate":"2021-10-01T06:10:00.000","Title":"When using pretrained model(vgg, resnet like) as backbone, should we use it in `eval mode` or in `train mode`?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a dataset where, after exploring data, I detect some patron:\n\nThe entire dataset have, imagine, 9 numerical variables, 1 dichotomous variable (take 'A' or 'B' value) and 1 numerical output\nThe output is a cost (in \u20ac)\nI find a sklearn regression model that, when 'A', using 4 of 9 variables I can predict output with good performance.\nI find another sklearn regression model that, when 'B', using the last 5 variables, I can predict output with good performance.\nIf I try to find a model which predict output with all the variables as input, encoding the dichotomous one with One-Hot-Encoder, the model has a bad performance.\n\nMy goal is to implement a unique model in Azure Machine Learning, using a .joblib\/.pkl, but with this approach, I have two separated models with the same output (a cost) but different inputs, depending of dichotomous variable.\nIs there any way to merge the two models into a single one? So that with the 10 inputs, estimate a single output (internally discriminate options 'A' and 'B' to select the correct model and its inputs).\nNotice that using something like Voting Ensemble it's not valid because there are different inputs on each category (or I think it so)\nI accept another approach as a solution. Thank you.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":78,"Q_Id":69402391,"Users Score":0,"Answer":"As you want to predict a value (regression), you can just train the two models separately (with the columns of your choice), you predict the output for each one and the prediction of the ensemble model is the mean of the two outputs.","Q_Score":0,"Tags":"python,machine-learning,scikit-learn,categorical-data,azure-machine-learning-service","A_Id":69414832,"CreationDate":"2021-10-01T07:55:00.000","Title":"Merge distinct sklearn models into a single one","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to plot graphs that share variables and datasets from other CoLab files, I would like to know how I could access those variables.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":103,"Q_Id":69424034,"Users Score":1,"Answer":"You could create a new folder 'VARIABLES' where the variables are saved, read, and re-written (i.e. updated) as txt or csv files. 
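A minimal sketch of that idea, assuming both notebooks mount the same Google Drive and a VARIABLES folder already exists there (paths and names are only examples, and df stands for whatever DataFrame you want to share):
import pandas as pd
from google.colab import drive

drive.mount('/content/drive')

# notebook A: write the shared dataset
df.to_csv('/content/drive/MyDrive/VARIABLES/shared_data.csv', index=False)

# notebook B: read it back
df = pd.read_csv('/content/drive/MyDrive/VARIABLES/shared_data.csv')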
Otherwise, a variable defined in one Colab Notebook will only be accessible within that Colab Notebook and not from other Colab Notebooks.","Q_Score":0,"Tags":"python,variables,jupyter-notebook,dataset,google-colaboratory","A_Id":69453199,"CreationDate":"2021-10-03T10:38:00.000","Title":"How to access\/share datasets from different Colab notebooks","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to use optuna to tune hyperparameters of xgboost, but because of memory restriction, I can't set the attribute n_trials too high otherwise it would report MemoryError, so I'm wondering: if I set n_trials=5 and run the program 4 times, would the result be similar to setting n_trials=20 and running the program once?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":177,"Q_Id":69446166,"Users Score":0,"Answer":"Yes, if you use the same database to store the study among different runs.","Q_Score":1,"Tags":"python,xgboost,optuna","A_Id":69446334,"CreationDate":"2021-10-05T07:05:00.000","Title":"A question about the \"n_trials\" in optuna","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"For tf.keras.losses.SparseCategoricalCrossentropy(), the documentation of TensorFlow says\n\"Use this crossentropy loss function when there are two or more label classes.\"\nSince it covers two or more labels, including binary classification, then does it mean I can use this loss function for any classification problem? When do I have to use those binary loss such as tf.keras.losses.BinaryCrossentropy and similar ones?\nI am using TensorFlow 2.3.1","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":125,"Q_Id":69452672,"Users Score":1,"Answer":"BinaryCrossentropy is like a special case of CategoricalCrossentropy with 2 classes, but BinaryCrossentropy is computationally more efficient than CategoricalCrossentropy.\nWith CategoricalCrossentropy loss the model needs 2 output units (one per class), while with BinaryCrossentropy a single output unit is enough. 
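For a binary problem the two setups would look roughly like this (a sketch; the hidden layer size is made up):
import tensorflow as tf

# option 1: two output units + SparseCategoricalCrossentropy
model_a = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(2, activation='softmax'),
])
model_a.compile(optimizer='adam', loss=tf.keras.losses.SparseCategoricalCrossentropy())

# option 2: a single output unit + BinaryCrossentropy
model_b = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(1, activation='sigmoid'),
])
model_b.compile(optimizer='adam', loss=tf.keras.losses.BinaryCrossentropy())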
It means you can reduce the weights by a half at the last layer with BinaryCrossentropy loss.","Q_Score":1,"Tags":"python,tensorflow,keras","A_Id":69452997,"CreationDate":"2021-10-05T14:50:00.000","Title":"Does \"tf.keras.losses.SparseCategoricalCrossentropy()\" work for all classification problems?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I got this err:\nimport pandas as pd\nModuleNotFoundError: No module named 'pandas'\nMy versions installed are:\nPython ver 3.9.7\npandas 1.3.3\npip 21.2.4\nPyCharm 11.0.12\nI can see pandas installed in pycharm, but when I am importing it I got that err.\nAny clue?\nThank you","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":731,"Q_Id":69469229,"Users Score":0,"Answer":"Try to reinstall pandas package.\ntype = pip install pandas\nwait for some time and then your panda package will get installed","Q_Score":0,"Tags":"python,pandas,pip,pycharm","A_Id":69469306,"CreationDate":"2021-10-06T16:17:00.000","Title":"Python : ModuleNotFoundError: No module named 'pandas'","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to find a library (any language, but preferably C# or Python) which will let me open an XLSX file, iterate through the chart objects, and find data about the chart - ideally including the data backing the chart.\nThe Pandas Python package, or ExcelDataReader NuGet package have useful functionality for opening the file and reading a grid of numbers, as well as ways to add charts, but I don't find any way to read the charts.\nCurious to hear from anyone who has ideas\/solutions.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":49,"Q_Id":69490094,"Users Score":1,"Answer":"Hey I have a good solution for C#. In C# you can use OLEDB, this allows you to connect a C# code to a excel or access database (so long the database is in the C# code files). You don't need to get any addons for this is you have C# on Visual Studio.","Q_Score":0,"Tags":"python,c#,excel,pandas,charts","A_Id":69490121,"CreationDate":"2021-10-08T03:19:00.000","Title":"Reading chart data from an Excel file","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"For some classification needs. I have multivariate time series data composed from 4 stelite images in form of (145521 pixels, 4 dates, 2 bands)\nI made a classification with tempCNN to classify the data into 5 classes. 
However there is a big gap between the class 1,2 with 500 samples and 4,5 with 1452485 samples.\nI' am wondering if there is a method that help me oversamling the two first classes to make my dataset more adequate for classification.","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":36,"Q_Id":69500822,"Users Score":2,"Answer":"actually there is a lib in python for that \"imbalanced-learn\" (although u can do it manually) .\nyou can check the docs it's very easy to use","Q_Score":1,"Tags":"python,classification,oversampling","A_Id":69500888,"CreationDate":"2021-10-08T19:20:00.000","Title":"oversampling some classes from time series data","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"How do you convert a column of dates of the form \"2020-06-30 15:20:13.078196+00:00\" to datetime in pandas?\nThis is what I have done:\n\npd.concat([df, df.date_string.apply(lambda s: pd.Series({'date':datetime.strptime(s, '%Y-%m-%dT%H:%M:%S.%f%z')}))], axis=1)\npd.concat([df, df.file_created.apply(lambda s: pd.Series({'date':datetime.strptime(s, '%Y-%m-%dT%H:%M:%S.%f.%z')}))], axis=1)\npd.concat([df, df.file_created.apply(lambda s: pd.Series({'date':datetime.strptime(s, '%Y-%m-%dT%H:%M:%S.%f:%z')}))], axis=1)\n\nI get the error - time data '2020-06-30 15:20:13.078196+00:00' does not match format in all cases.\nAny help is appreciated.","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":523,"Q_Id":69506719,"Users Score":1,"Answer":"None of the formats mentioned by you above matches your sample.\nTry this\n\n\"%Y-%m-%d %H:%M:%S.%f%z\" (Notice the space before %H).","Q_Score":0,"Tags":"python,pandas,string,dataframe,datetime","A_Id":69508559,"CreationDate":"2021-10-09T12:25:00.000","Title":"Dealing with \"+00:00\" in datetime format","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is there an efficient way to calculate the optimal swaps required to sort an array? The element of the array can be duplicated, and there is a given upper limit=3. (the elements can be in {1,2,3})\nFor example:\n1311212323 -> 1111222333 (#swaps: 2)\nAlready found similar questions on Stackoverflow, however, we have new information about the upper limit, that can be useful in the algorithm.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":169,"Q_Id":69525475,"Users Score":1,"Answer":"Yes, the upper limit of 3 makes a big difference.\nLet w(i, j) be the number of positions that contain i that should contain j. To find the optimal number of swaps, let w'(i, j) = w(i, j) - min(w(i, j), w(j, i)). The answer is (sum over i 0 and w(j, i) > 0, then we can swap an appropriate i and j, costing us one swap but also lowering the bound by one. Otherwise, swap any two out of place elements. The first term of the answer goes up by one, and the second goes down by two. (I am implicitly invoking induction here.)\nThat this answer is a lower bound follows from the fact that no swap can decrease it by more than one. This follows from more tedious case analysis.\nThe reason that this answer doesn't generalize past (much past?) 3 is that the cycle structure gets more complicated. 
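For the three-symbol case in the question, that counting argument can be turned into code directly; the sketch below is my illustration, not part of the original reasoning:
from collections import Counter

def min_swaps_three_symbols(arr):
    target = sorted(arr)
    # w[(i, j)] = number of positions holding i that should hold j
    w = Counter((a, b) for a, b in zip(arr, target) if a != b)
    symbols = sorted(set(arr))
    swaps = 0
    # resolve 2-cycles first: one swap fixes two misplaced elements
    for i in symbols:
        for j in symbols:
            if i < j:
                pairs = min(w[(i, j)], w[(j, i)])
                swaps += pairs
                w[(i, j)] -= pairs
                w[(j, i)] -= pairs
    # with only three symbols, whatever remains forms 3-cycles, each needing two swaps
    swaps += 2 * sum(w.values()) // 3
    return swaps

print(min_swaps_three_symbols([1, 3, 1, 1, 2, 1, 2, 3, 2, 3]))   # prints 2, matching the example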
Still, for array entries bounded by k, there should be an algorithm whose exponential dependence is limited to k, with a polynomial dependence on n, the length of the arrays.","Q_Score":2,"Tags":"python,arrays,algorithm,sorting,swap","A_Id":69527446,"CreationDate":"2021-10-11T11:28:00.000","Title":"Algorithm to calculate the minimum swaps required to sort an array with duplicated elements in a given range?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm absolutely new in python, so there is a question.\nI've splitted my original df to X_train, y_train, X_test, y_test.\nNow i want to drop from y_train (pd.series) outliers therefore i need to remove object with same index from X_train(pd.df).\nWhat is the easiest and cleanest way to do it?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":139,"Q_Id":69529353,"Users Score":0,"Answer":"try using y_train = y_train[X_train_new.index] where X_train_new is your new X_train after dropping some columns\/row\/outliers.","Q_Score":0,"Tags":"python,pandas,scikit-learn","A_Id":70894633,"CreationDate":"2021-10-11T16:11:00.000","Title":"Remove rows from X_train and y_train at once","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have 2 indexes, one named NIM, and one named Total Score. Both have 100X1 block matrices.\nWhen I run the code below the index gets removed.\n Final_Score = np.hstack((NIM, np.atleast_2d(total_score).T))\nIs there a way to combine several matrices into one and keep their indexes?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":40,"Q_Id":69540167,"Users Score":0,"Answer":"In the end I used the same code but added:\npd.DataFrame(Final_score,columns=['NIM','Final Score'])\nNow, I can change np array into pd.","Q_Score":0,"Tags":"python,pandas,numpy,indexing","A_Id":69540529,"CreationDate":"2021-10-12T12:00:00.000","Title":"How to keep indexes when combining matrices?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am working with a medium-size dataset that consists of around 150 HDF files, 0.5GB each. There is a scheduled process that updates those files using store.append from pd.HDFStore.\nI am trying to achieve the following scenario:\nFor HDF file:\n\nKeep the process that updates the store running\nOpen a store in a read-only mode\nRun a while loop that will be continuously selecting the latest available row from the store.\nClose the store on script exit\n\nNow, this works fine, because we can have as many readers as we want, as long as all of them are in read-only mode. However, in step 3, because HDFStore caches the file, it is not returning the rows that were appended after the connection was open. Is there a way to select the newly added rows without re-opening the store?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":60,"Q_Id":69542159,"Users Score":0,"Answer":"After doing more research, I concluded that this is not possible with HDF files. 
The only reliable way of achieving the functionality above is to use a database (SQLite is closest - the read\/write speed is lower than HDF but still faster than a fully-fledged database like Postgres or MySQL).","Q_Score":0,"Tags":"python,pandas,pytables,hdf","A_Id":69837444,"CreationDate":"2021-10-12T14:15:00.000","Title":"Pandas HDFStore caching","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"What columns do I have to consider while implementing K Means? I have 91 columns after pre processing. And also to how many columns do I have to apply K Means clustering ? Is it all of them or only a few to be considered ?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":29,"Q_Id":69551843,"Users Score":0,"Answer":"It's actually about trial and error. There is no straight way to say which columns are going to help you the most until you try and figure it by yourself.\nbut you can use dimensionality reduction algorithms like PCA to project data to a lower dimension without much data loss. It's a common approach and also helps with the speed of your clustering algorithm.","Q_Score":0,"Tags":"python,pandas,dataframe","A_Id":69552586,"CreationDate":"2021-10-13T08:06:00.000","Title":"K means algorithm implementation","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"here I used panda for export my data which is located in numpy array. but there is a problem that I cant export my data and also there is a erroe that you can see below.\nvalueError: Must pass 2-d input\nthis is my main variable AccZONE=c.T and The type of that is Array Of float64, and the size Of That is (710,1,1)","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":39,"Q_Id":69566433,"Users Score":0,"Answer":"From the error it looks like the array is 3 dimensions, you need to change it to 2 dimensions, it would be nice if you could provide some code.\nYou can try np.reshape(arr,(-1,1)) or np.ravel(arr).","Q_Score":1,"Tags":"python,numpy","A_Id":69566506,"CreationDate":"2021-10-14T07:10:00.000","Title":"Export final data from numpy to excel","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm running python script present in .py file using ExecuteStreamCommand processor in Nifi. For reading a csv file pandas modules is required. I'm calling pandas in the program but I'm getting error mentioned as \"No modules Pandas found\"\nI have Python installed in my local and added to path to Command path.\nHow to install Pandas library?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":113,"Q_Id":69573638,"Users Score":0,"Answer":"I\u2019ve had a similar issue with other modules. What you need to do is install the Python modules on the NiFi server that your script calls. 
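A quick way to find out which interpreter ExecuteStreamCommand actually runs (my suggestion, not part of the original answer) is to point it at a two-line script first:
import sys
print(sys.executable)   # install pandas for exactly this interpreter,
print(sys.path)         # e.g. that_interpreter -m pip install pandas, run on the NiFi host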
What the error message is telling you is that it\u2019s trying to find the module called pandas but it isn\u2019t installed on the host.","Q_Score":0,"Tags":"python-3.x,pandas,etl,apache-nifi,minify","A_Id":69617182,"CreationDate":"2021-10-14T15:44:00.000","Title":"Python Modules in Apache Nifi","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've read many similar posts that say Excel's T.INV.2T(probability, degs_Freedom) can be reproduced in python using scipy.stats.t.ppf().\nIf I use the example of T.INV.2T(0.05, 58) excels yields 2.002.\nBased on other answers posted I should get the same answer using scipy.stats.t.ppf(0.05, 58), but I do not. I get -1.672.\nAny ideas on what is wrong with my syntax?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":176,"Q_Id":69576929,"Users Score":0,"Answer":"In Excel, you have two functions for returning the inverse of the Student's t-distribution: T.INV and T.INV.2T.\nThe first returns a left-tailed inverse of the Student's t-distribution and the second returns a two-tailed one.\nscipy.stats.t.ppf also returns a left-tailed inverse of t-distribution. So, if you want to compare scipy.stats.t.ppf with Excel you need to use the T.INV formula and not T.INV.2T \u2013 or you should divide the probability by two and then use it with scipy.","Q_Score":0,"Tags":"python,excel,scipy","A_Id":69577704,"CreationDate":"2021-10-14T20:22:00.000","Title":"How to reproduce Excel's T.INV.2T in python?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"reading an excel file in python jupyter notebook i'm trying to change a column datatype of a pandas dataframe from object to float and every try I get the message of ValueError: could not convert string to float: 'Variable by base style'. What does 'Variable by base style' mean?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":60,"Q_Id":69577673,"Users Score":0,"Answer":"the data you're trying to convert includes an item : \"Variable by base style\" which obviously cannot be changed to a float.","Q_Score":0,"Tags":"python,excel,dataframe,types,jupyter-notebook","A_Id":69578053,"CreationDate":"2021-10-14T21:43:00.000","Title":"Can't convert object data type to float in pandas data frame","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm writing a numerical solution to a partial integro-differential equation, and I need it to run quickly so I found that scipy.integrate.simps is best, but it's not always 100% accurate and produces the spikes in [1]. My solution was to remove them with scipy.signal.medfilt and then interpolate over the gaps with an interpolator (I've tried CubicSpline, PChipInterpolator, scipy.interp1d, akima,...) but all of them produce little \"hiccups\" in the solution that can be seen at y=0.1, (produced with 3rd order butterworth filter) and these errors grow as the solution is evolved. How do I remove the spikes and get a simple, smooth interpolation over the gaps? 
Thanks!","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":245,"Q_Id":69578472,"Users Score":1,"Answer":"I don't work with scipy, but from what I've gathered, some things stood out to me, and could possibly be what's causing problems.\n\nYour call to plt.show() which displays the data happens before you filter out the outliers with medfilt(), so the corrected data might not appear in your plot\nThe median filter from what I gather doesn't remove outliers from your data, instead it resets each data point with the median value amongst its k-neighbors.\nWith this in mind, I have two suggestions, (1) your median filter window might be too small, and that is causing the outliers to not be removed. Try setting it yourself using medfilt(self.n_, k_size=5), it defaults to 3 so try odd numbers larger than 3. (2) Given that you're not losing data points from using the medfilter, you might not need the lines that follow it which try to interpolate data that was presumably removed.","Q_Score":0,"Tags":"python,scipy,signal-processing,interpolation,numerical-computing","A_Id":69626960,"CreationDate":"2021-10-14T23:40:00.000","Title":"How to remove spikes in solution and produce smooth interpolation with scipy?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a dataset with ages (int) and wages, I would like to have the average wage next to each row of a person respective of that age.\nI created a smaller dataset using\nmean = df.groupby('age', as_index=False)['lnWage'].mean()\nwhat is the best way to append (for 2000 rows)?\n\n\n\n\nAge\nWage\n\n\n\n\n30\n10\n\n\n30\n20\n\n\n\n\nthen\n\n\n\n\nAge\nWage\naveragewage\n\n\n\n\n30\n10\n15\n\n\n30\n20\n15\n\n\n\n\nthanks!","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":45,"Q_Id":69578616,"Users Score":0,"Answer":"The comments above are helpful, I have found this to be the easiest method, where average is the df with average wages. (ffr)\ndf_main['avgWage'] = df['age'].map(average_df['averageWage'])","Q_Score":0,"Tags":"python,pandas,dataframe","A_Id":69578875,"CreationDate":"2021-10-15T00:05:00.000","Title":"add average value to every row containing value in different column","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"when I do this\n\n\n my_list = df.loc[df['ID'] == \"9\", ['ID1','ID2','ID3','ID4']].values.flatten().tolist()\n\n\nI get the result\n\n\n my_list = ['-1','32','63','-1']\n\n\nAnd then when I do my_list .remove('-1') I see\n\n\n my_list = ['32','63']\n\n\nwhich is what I want to see .However when I try to do .remove in single step like\n\n\n my_list = df.loc[df['ID'] == \"9\",['ID1','ID2','ID3','ID4']].values.flatten().tolist().remove('-1')\n\n\nthen my_list is empty.\nWhy is this happening?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":25,"Q_Id":69580061,"Users Score":1,"Answer":"Because remove does the operation in place, modifying the list itself. 
It doesn't return anything.","Q_Score":0,"Tags":"python,pandas","A_Id":69580073,"CreationDate":"2021-10-15T04:39:00.000","Title":"using .remove seperately vs using it in tolist()","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is there any magic way to create an huge absence presence matrix in python? pd.crosstab and pd.pivot_table fail because of the memory requirement.\nI have an extremely large dataset like this one:\n\n\n\n\nPerson\nThing\n\n\n\n\nPeter\nbike\n\n\nPeter\ncar\n\n\nJenny\nbike\n\n\nJenny\nplane\n\n\n\n\nand need this:\n\n\n\n\n\nBike\ncar\nplane\n\n\n\n\nPeter\n1\n1\n0\n\n\nJenny\n1\n0\n1\n\n\n\n\nNote, the matrix is rather sparse. It contains a lot of zeros.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":51,"Q_Id":69580955,"Users Score":0,"Answer":"Computers used in data science sometimes have absurdly high amount of RAM (I think I've seen one with 1tb before...)\nIf you don't have that much RAM, then I think the only way to resolve this is to utilize the hard drive...\nI would say, process the data, write it as a structured data on to a hard drive, and loop through while reading, say 50mb at a time to check if the name has already been added to the file, and modify it.","Q_Score":0,"Tags":"python","A_Id":69581324,"CreationDate":"2021-10-15T06:57:00.000","Title":"Create a very large absence\/presence SPARSEE matrix in Python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to run my code on anaconda prompt and it gives me this error, any suggestions?\nAttributeError: module 'nearest_neighbors' has no attribute 'knn_batch'","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":93,"Q_Id":69584008,"Users Score":0,"Answer":"Thats not an anaconda error, but an error with the Python code. You'll have to debug the code itself to see, where the error lies. Basically you are trying to access a function that doesn't exist.","Q_Score":0,"Tags":"python,anaconda,nearest-neighbor","A_Id":69585154,"CreationDate":"2021-10-15T11:26:00.000","Title":"AttributeError: module 'nearest_neighbors' has no attribute 'knn_batch'","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm currently trying to start working with tensorflow.\nI work with anaconda and I tried to install the tensorflow packages in the root environment but it always displays the message: \"Several errors encountered\".\nWhen I looked it up it says the solution is to create another environment exclusively for tensorflow, I did and it worked. But I'd still like to know what the reason for this is.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":122,"Q_Id":69586601,"Users Score":2,"Answer":"I have had the same question when I started out. It seemed like it is the \"correct\" thing to do, so I just did it, but never understood why. After working with TensorFlow for 2 years now, and on multiple machines, I realised just how specific the set of its requirements is. 
Only a few versions of python are compatible with it, the same thing with numpy, and if you want to use NVIDIA GPUs, good luck figuring out the specific versions of cuda and cudnn.\nYou don't want to have to tailor most of the python-related software on your machine to running tensorflow. In order to avoid breaking it whenever you install something that requires a higher version of numpy, for example, it is best to keep it in a separate environment. This way you have an isolated \"container\" that keeps everything just the way TensorFlow wants it, while still being able to use other software if needed.\nNot to mention that there are several versions of TensorFlow and they all have different requirements.","Q_Score":0,"Tags":"python,tensorflow,anaconda,conda","A_Id":69586890,"CreationDate":"2021-10-15T14:48:00.000","Title":"Why do I need another conda environment from tensorflow?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am building a Streamlit dashboard that will have to read data from a DataFrame. The problem is that I have a local csv and a local Excel file form which I am reading data using pd.read_csv().\nHowever, when I share my dashboard url with others, the data will fail to load because they won't have the file locally.\nHow can I read the contents of a csv and Excel file and turn them into a \"hardcoded\" pandas DataFrame?\nI guess my question is: how should I store and read my data without having local csv and Excel files?\nEdit: sorry for no code or MRE, but I literallyu have no idea how to do this. If I had a piece of code, it would simply be a pandas dataframe with sample data in it.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":99,"Q_Id":69604934,"Users Score":1,"Answer":"In R I would use dput() function to show me the code necessary to create a data frame.\nFor Python I know that print(df.to_dict()) would do something similar to be a \"hardcoded\" Pandas DF.\nSo I would do the following:\n1: print your df. df.to_dict()\n2: copy and paste the necessary code to create the data frame inside your streamlit app. Something similar to this: {'a': {0: 1, 1: 2}, 'b': {0: 3, 1: 3}}\n3: \"load\" the data frames by creating them everytime the application is run. df = pd.DataFrame.from_dict({'a': {0: 1, 1: 2}, 'b': {0: 3, 1: 3}})\nPS: note that this solution is not scalable neither would work if your data keeps changing from time to time. If that's the case, you would need to keep printing and pasting your new df to your code every time.","Q_Score":1,"Tags":"python,pandas,dataframe,csv,streamlit","A_Id":69605030,"CreationDate":"2021-10-17T13:48:00.000","Title":"How to save contents of local csv file into a \"hardcoded\" Pandas DataFrame?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a FastText trained model \"*.ftz\". 
My program runs in multithread mode.\nIs there any way to load a model once and use it without loading it for each thread?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":98,"Q_Id":69615104,"Users Score":0,"Answer":"After some effort to find a good solution I used FastAPI and implemented the model as a service.","Q_Score":0,"Tags":"python,machine-learning,nlp,data-science,fasttext","A_Id":69722958,"CreationDate":"2021-10-18T11:17:00.000","Title":"Load trained model only once","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am working on a regression problem and my dataset is very imbalanced. My features are age, sex, weight, medication dose, and some lab results, and I am trying to predict one column of continuous values.\nIn my dataset some individuals are represented by more samples than others. For example 30 lines of data from one individual, 10 from a second individual and 1 from a third and so on. I do not know how to select the training set so that the model is not biased towards specific subjects.\nI divided the training and testing set in a way that there is no data from the same individuals in both sets, but still, training a model with a training set that is not balanced regarding the amount of data from each individual would bias the model.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":87,"Q_Id":69616840,"Users Score":2,"Answer":"I would suggest duplicating samples so that, for example, every individual will have 30 rows of data.\nAs an alternative, you can also adjust the weights. So that an individual with 30 samples will have weight 1, an individual with 10 samples will have weight 3, and an individual with 1 sample will have weight 30 [it's equivalent to duplicating, but doesn't increase the training set]","Q_Score":0,"Tags":"python,machine-learning","A_Id":69617120,"CreationDate":"2021-10-18T13:18:00.000","Title":"How to create a training set for regression in Python if in a dataset some individuals are represented by more samples than others?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a Python program that is controlling some machines and stores some data. The data is produced at a rate of about 20 rows per second (and about 10 columns or so). The whole run of this program can be as long as one week, as a result there is a large dataframe.\nWhat are safe and correct ways to store this data? With safe I mean that if something fails on day 6, I will still have all the data from days 1\u21926. With correct I mean not re-writing the whole dataframe to a file in each loop.\nMy current solution is a CSV file, I just print each row manually. This solution is both safe and correct, but the problem is that CSV does not preserve data types and also occupies more memory. So I would like to know if there is a binary solution. I like the feather format as it is really fast, but it does not allow appending rows.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":47,"Q_Id":69626326,"Users Score":1,"Answer":"I can think of two easy options:\n\nstore chunks of data (e.g. 
every 30 seconds or whatever suits your use case) into separate files; you can then postprocess them back into a single dataframe.\nstore each row into an SQL database as it comes in. Sqlite will likely be a good start, but I'd maybe really go for PostgreSQL. That's what databases are meant for, after all.","Q_Score":0,"Tags":"python,pandas,file","A_Id":69626394,"CreationDate":"2021-10-19T07:00:00.000","Title":"How to lively save pandas dataframe to file?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Below is the dataframe:\n\n\n\n\n\ndate\nopen\n\n\n\n\n26\n15-09-21\n406.5\n\n\n\n\nNow I need the value of open so I tried:- print(df.open)\nIt gave error:\n\nAttributeError: 'DataFrame' object has no attribute 'open'\n\ncolumn types are as follow: print(df.dtypes)\n\ndate ----> object\nopen ----> float64","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":48,"Q_Id":69626443,"Users Score":0,"Answer":"If you have a white space in your \" open\" column, just rename it by df = df.rename(columns={' open':'open'})\nI sometimes prefer to call a specific pandas column using this df[\"open\"] and press \"Tab\" for the auto-complete (in Jupyter Notebook or in Vscode). That way I am aware of any concealed typos such as that whitespace which you have","Q_Score":1,"Tags":"python,pandas","A_Id":69626712,"CreationDate":"2021-10-19T07:11:00.000","Title":"Getting the value of particular column when its only one row in pandas dataframe","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using databricks-connect and VS Code to develop some python code for Databricks.\nI would like to code and run\/test everything directly from VS Code using databricks-connect to avoid dealing with Databricks web IDE. For basic notebooks, it works just fine but I would like to do the same with multiple notebooks and use imports (e.g. use import config-notebook in another notebook).\nHowever, in VS Code import another-notebook works fine but it does not work in Databricks.\nFrom what I could find, the alternative in Databricks is %run \"another-notebook\" but it does not work if I want to run that from VS Code (databricks-connect does not include notebook workflow).\nIs there any way to make notebook imports that works both in Databricks and is supported by databricks-connect ?\nThanks a lot for your answers !","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1411,"Q_Id":69633404,"Users Score":0,"Answer":"Well, you can create packages .whl(wheel) install in the cluster and call via import in any notebook is a breeze","Q_Score":2,"Tags":"python,python-import,databricks,databricks-connect","A_Id":71722481,"CreationDate":"2021-10-19T15:10:00.000","Title":"Import notebooks in Databricks","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am working on a Dial a Ride Problem (DARP). I have a lage amount of nodes and edges (338 nodes and 826 edges). 
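(As an aside on the second storage option above, a minimal sketch of writing each row to SQLite as it arrives; the file name, table layout and column names are made up for illustration.)

```python
import sqlite3

con = sqlite3.connect("run_data.db")   # hypothetical database file
con.execute("CREATE TABLE IF NOT EXISTS readings (ts REAL, sensor TEXT, value REAL)")

def store_row(ts, sensor, value):
    # one small transaction per row: if the program dies on day 6,
    # everything committed on days 1-5 is already safe on disk
    with con:
        con.execute("INSERT INTO readings VALUES (?, ?, ?)", (ts, sensor, value))

store_row(0.05, "temp", 21.7)
```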
I've imported the node\/edge data from OSMnx and am trying to solve the model with Gurobi Optimizer in Python.\nTo be able to use the OSMnx data with Gurobi, I created a matrix = len(nodes) x len(nodes) matrix and therein printed the length of the edge if two nodes were connected, and a large number otherwise. In the optimization, a x[i,j] = len(nodes) x len(nodes) binary decision variable is used to decide if an edge is traversed or not.\nThe problem I am encountering is a large computing time for just one request (+1 hour). I think this is because the model also has to consider all the other indices from this large matrix, even though they can be ignored completely since they represent that two nodes are unconnected.\nMy question therefore is if someone can help me find some preprocessing techniques or something else that might reduce my computational time. For example, tell the model that it can ignore indices from this matrix if the value is too high or maybe a more efficient node\/edge storage file that Gurobi can use more efficiently.\nThanks in advance.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":60,"Q_Id":69633659,"Users Score":0,"Answer":"If your graph is sparse, the optimization model should be sparse, too. Specifically, you should only create a variable x[i,j] if the edge (i,j) exists in the graph. For an example of how to do this, see the netflow.py sample in the examples\/python subdirectory of Gurobi.","Q_Score":0,"Tags":"python,performance,gurobi,data-preprocessing","A_Id":69635199,"CreationDate":"2021-10-19T15:26:00.000","Title":"Preprocess node\/edge data or reformat so Gurobi can optimize more efficiently","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm making a FPS game in Ursina, and I'd like to be able to aim. I will do this, I think, by changing the camera's FOV : it is perfect !\nThe problem is that I'd like to be able to animate the transition of aiming. I cannot use a for loop, as the FOV only updates once it is finished, and I cannot use the animate method... I tried :\ncamera.animate(\"fov\", -30, duration = 2, delay=0, auto_destroy = True)\nWith the syntax :\nanimate(name, value, duration=.1, delay=0, curve=curve.in_expo, loop=False, resolution=None, interrupt='kill', time_step=None, auto_destroy=True)\nHere, my value (I'd like to decrease my FOV, so to zoom, by 30) doesn't mean anything : I can put whatever I want, and it will not stop until the fov is equal to 0.\nIs there a way to fix that ? Either by finding a method to update the camera in the for loop, or either by finding any way to animate the FOV transition","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":74,"Q_Id":69636493,"Users Score":0,"Answer":"Found the answer : the value parameter is actually not the value you want to increase or decrease your FOV (or anything) of, but it's actually the value it will go to ! 
So, if I put 1, my FOV will go to 1, that's why.\nTo animate -30 for my FOV, the correct syntax is :\ncamera.animate(\"fov\", camera.fov-30, duration = 2, delay=0, auto_destroy = True)","Q_Score":0,"Tags":"python,python-3.x,ursina","A_Id":69658479,"CreationDate":"2021-10-19T19:13:00.000","Title":"Python ursina : aim by changing FOV's value (issue with animation)","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is there a way to retrieve from the fitted xgboost object the hyper-parameters used to train the model. More specifically, I would like to know the number of estimators (i.e. trees) used in the model. Since I am using early stopping, the n_estimator parameter would not give me the resulting number of estimators in the model.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":95,"Q_Id":69639901,"Users Score":0,"Answer":"If you are trying to get the parameters of your model:\nprint(model.get_xgb_params())","Q_Score":0,"Tags":"python,xgboost","A_Id":69650137,"CreationDate":"2021-10-20T03:13:00.000","Title":"Retrieve hyperparameters from a fitted xgboost model object","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is there a way to retrieve from the fitted xgboost object the hyper-parameters used to train the model. More specifically, I would like to know the number of estimators (i.e. trees) used in the model. Since I am using early stopping, the n_estimator parameter would not give me the resulting number of estimators in the model.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":95,"Q_Id":69639901,"Users Score":0,"Answer":"model.get_params(deep=True) should show n_estimators\nThen use model.get_xgb_params() for xgboost specific parameters.","Q_Score":0,"Tags":"python,xgboost","A_Id":72436865,"CreationDate":"2021-10-20T03:13:00.000","Title":"Retrieve hyperparameters from a fitted xgboost model object","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am getting below warning for python in console.I did not found any solution for these.We dont want to suppress warnings .\nAlso we have a big code base setup.how to know which code block is cause of this error as warning dont give code line number.\nI am using below version of python and numpy.Is it due to old verison's of python and numpy.\nPython version- 3.6.8\nNumpy Version- 1.19.5\nmatplotlib version is 3.3.4\npandas version is 1.1.5\nWarning:\n\/python3.6\/site-packages\/matplotlib\/cbook\/init.py:1402: FutureWarning: Support for multi-dimensional indexing (e.g. obj[:, None]) is deprecated and will be removed in a future version. 
Convert to a numpy array before indexing instead.\npython3.6\/site-packages\/pandas\/core\/indexing.py:1743: SettingWithCopyWarning:\nA value is trying to be set on a copy of a slice from a DataFrame.\nTry using .loc[row_indexer,col_indexer] = value instead","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":3134,"Q_Id":69641287,"Users Score":0,"Answer":"It's the way you're accessing the array, using slicing. Matplotlib is going to remove that from how they handle arrays, but they haven't yet. It's just a recommendation to convert to a different type of array access, like Numpy, before that happens. Based of what you're showing, i'd guess it's as simple as 1. Create Numpy Array 2. Use identical slicing except using Numpy syntax. Should be good to go after that I'd imagine.","Q_Score":0,"Tags":"python,python-3.x,numpy,matplotlib","A_Id":69654561,"CreationDate":"2021-10-20T06:31:00.000","Title":"Python warning :FutureWarning: Support for multi-dimensional indexing","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've been building an application using Apache Cordova - it's actually based on machine learning, but all my machine learning prototyping has been done in Python.\nIs there a way I could incorporate my Python libraries (like scikit-learn) into my Apache Cordova app, or is there something else I should include?\nThank you, any help would be appreciated!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":110,"Q_Id":69647582,"Users Score":1,"Answer":"No, you can't embed a programming language as a plugin for Cordova. You can however do a remote call to a server running python.","Q_Score":0,"Tags":"python,cordova","A_Id":69649632,"CreationDate":"2021-10-20T14:09:00.000","Title":"Can we use Python modules with Apache Cordova?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have two date columns having corresponding Dollars associated in two other column. I want to plot it in single chart, but for that data preparation in required in python.\nActual table\n\n\n\n\nStartDate\nstart$\nEndDate\nEnd$\n\n\n\n\n5 June\n500\n7 June\n300\n\n\n7 June\n600\n10 June\n550\n\n\n8 june\n900\n10 June\n600\n\n\n\n\nExpected Table\n\n\n\n\nPythonDate\nstart$\nEnd$\n\n\n\n\n5 June\n500\n0\n\n\n6 june\n0\n0\n\n\n7 June\n600\n300\n\n\n8 June\n900\n0\n\n\n9 June\n0\n0\n\n\n10June\n0\n1150\n\n\n\n\nAny solution in Python?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":35,"Q_Id":69652113,"Users Score":1,"Answer":"I can suggest you a basic logic, you figure out how to do it. It's not difficult to do it and it'll be a good learning too:\n\nYou can read only the subset of columns you need from the input table\nas a single dataframe. 
Make such two dataframes with value as 0 for\nthe column that you be missing and then append them together.","Q_Score":0,"Tags":"python,tableau-desktop","A_Id":69652329,"CreationDate":"2021-10-20T19:39:00.000","Title":"Data preparation to convert two date field in one","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am looking for a sentiment analysis code with atleast 80%+ accuracy. I tried Vader and it I found it easy and usable, however it was giving accuracy of 64% only.\nNow, I was looking at some BERT models and I noticed it needs to be re-trained? Is that correct? Isn't it pre-trained? is re-training necessary?","AnswerCount":2,"Available Count":1,"Score":-0.1973753202,"is_accepted":false,"ViewCount":251,"Q_Id":69655995,"Users Score":-2,"Answer":"you can use pickle.\nPickle lets you.. well pickle your model for later use and in fact, you can use a loop to keep training the model until it reaches a certain accuracy and then exit the loop and pickle the model for later use.\nYou can find many tutorials on youtube on how to pickel a model.","Q_Score":2,"Tags":"python,tensorflow,sentiment-analysis,bert-language-model,roberta-language-model","A_Id":69656036,"CreationDate":"2021-10-21T04:44:00.000","Title":"Is it necessary to re-train BERT models, specifically RoBERTa model?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"For example, I've a table:\n\n\n\n\n\nNumber\nB\nAction\nDateTime\n\n\n\n\n1\n161\nFalse\nsend:\n2021-10-20 13:10:18\n\n\n2\n202\nFalse\nget\n2021-10-20 13:10:20\n\n\n3\n202\nFalse\ntake\n2021-10-20 13:10:21\n\n\n4\n161\nFalse\nreply\n2021-10-20 13:12:25\n\n\n5\n202\nTrue\nsend\n2021-10-20 13:15:18\n\n\n6\n161\nFalse\nget\n2021-10-20 13:15:20\n\n\n7\n161\nFalse\ntake\n2021-10-20 13:15:21\n\n\n8\n202\nFalse\nreply\n2021-10-20 13:15:25\n\n\n\n\nHere, True\/False is based on whether column 'Action' has 'send' without colon or not. If 'send' then it's True, otherwise False.\nI want to delete rows based on condition of a row which is True. So, delete rows if:\ni) a column 'Number' has a same a number which corresponds to value True in column 'B', In this case: delete if Number== 202\nii)and if a column 'Datetime' is in range of 2 minutes of a column which corresponds to value True in column 'B'. 
Datetime corresponding to True value is '2021-10-20 13:15:18' and it's range of 2 is: [2021-10-20 13:13:18 ; 2021-10-20 13:17:18].\nOverall, deleted rows should have a number=202 and which are in range [2021-10-20 13:13:18;2021-10-20 13:17:18]\nNew table should look like this:\n\n\n\n\n\nNumber\nB\nAction\nDateTime\n\n\n\n\n1\n161\nFalse\nsend:\n2021-10-20 13:10:18\n\n\n2\n202\nFalse\nget\n2021-10-20 13:10:20\n\n\n3\n202\nFalse\ntake\n2021-10-20 13:10:21\n\n\n4\n161\nFalse\nreply\n2021-10-20 13:12:25\n\n\n6\n161\nFalse\nget\n2021-10-20 13:15:20\n\n\n7\n161\nFalse\ntake\n2021-10-20 13:15:21\n\n\n\n\nSorry, if a question and task is not fully clear.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":55,"Q_Id":69661046,"Users Score":0,"Answer":"Try this:\ndf = df.loc[df[\"Number\"]!=202 & df[\"B\"]!= 'True']\nIf type in column B is boolean, then change the string \"true\" to a boolean True.","Q_Score":0,"Tags":"python,pandas","A_Id":69661123,"CreationDate":"2021-10-21T11:19:00.000","Title":"Pandas: How to delete rows based on some conditions?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a 45x45 matrix which Stack overflow isn't letting me include as it is too long. But if I throw this matrix into numpy.linalg.eig, it gives me an eigenvector of all zeros in the last column. What does that even mean?","AnswerCount":1,"Available Count":1,"Score":-0.1973753202,"is_accepted":false,"ViewCount":140,"Q_Id":69666960,"Users Score":-1,"Answer":"So it looks like the matrix is actually degenerate, which I suppose makes sense, actually.","Q_Score":0,"Tags":"python,numpy,linear-algebra","A_Id":69667485,"CreationDate":"2021-10-21T18:21:00.000","Title":"Numpy.linalg.eig function returning zero eigenvector?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to write a function equilateral(x, y): that takes two np.ndarrays of shape (N,) , where x and y are natural numbers and returns a point z an np.ndarray of shape (N,) such that (x, y, z) are are the vertices of an equilateral triangle.\nAny one please suggest.","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":209,"Q_Id":69671976,"Users Score":1,"Answer":"In order to get the third vertex, you could just rotate the point (x2, y2,...) by 60 degrees around point (x1, y1,...). The other admissible solution would be obtained with a rotation by -60 degrees, i.e., in the opposite direction.\nSo just rotate y around x by 60\/-60 degrees and you have your 3rd co-ordinate.","Q_Score":2,"Tags":"python,math,geometry","A_Id":69672007,"CreationDate":"2021-10-22T05:49:00.000","Title":"Python function to find a point of an equilateral triangle","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a model defined as\ngmodel=Model(a)+Model(b)\nresult=gmodel.fit(data,...)\nI use this model to fit the data, which gives me the parameters and their error estimates. 
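For context, a minimal sketch of a composite lmfit fit of this shape; the component functions, starting values and data are hypothetical:

```python
import numpy as np
from lmfit import Model

def a(x, amp, cen, wid):                 # hypothetical peak component
    return amp * np.exp(-(x - cen) ** 2 / (2 * wid ** 2))

def b(x, slope, intercept):              # hypothetical background component
    return slope * x + intercept

gmodel = Model(a) + Model(b)             # composite model, as in the question
params = gmodel.make_params(amp=5, cen=0.5, wid=0.2, slope=1, intercept=0)

x = np.linspace(0, 1, 200)
data = a(x, 4.8, 0.5, 0.25) + b(x, 0.9, 0.1) + 0.05 * np.random.randn(x.size)

result = gmodel.fit(data, params, x=x)
comps = result.eval_components(x=x)      # per-component curves
dely = result.eval_uncertainty(x=x)      # 1-sigma band for the total model only
```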
Using the result.eval_components(), I could access the component a and component b of the model function. Also, using result.eval_uncertainty(), I could access the 1-sigma uncertainties in the model functions, which would be two lines. Now I want to know each component a and b in that 1-sigma uncertainty lines.Is there a easy way of doing this.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":123,"Q_Id":69683599,"Users Score":0,"Answer":"There is not currently \"an easy way of doing this\" - the eval_uncertainty method belongs to the lmfit.ModelResult, not the lmfit.Model itself, at least partly because it needs the resulting covariance matrix.\nBut: I think eval_uncertainty method could probably calculate the uncertainty in any component models too. I would suggest raising an Issue and\/or using the lmfit mailing list to discuss making this change.","Q_Score":0,"Tags":"python,model,components,composite,lmfit","A_Id":69716796,"CreationDate":"2021-10-22T22:17:00.000","Title":"Calculate the uncertainty in components of composite model in Lmfit","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Im trying to convert a pil image to a numpy array but i want the image to keep it's transperancy, how can i do that?\nIv'e tried using numpy.array()\nBut it doesnt keep the transperancy","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":87,"Q_Id":69687221,"Users Score":0,"Answer":"I'm not quite sure what you mean by it doesn't keep its transparency. If you convert a PIL image with transparency using numpy.array() it should return a numpy array with shape width, height, 4 where the 4th channel will represent the alpha channel values. And after whatever processing you need to do, if you convert it back to a PIL image using Image.fromarray() and perhaps saving with Image.save() you should get back an image with the same transparency. Can't help you much more without seeing an actual snippet of the code and possibly the image.","Q_Score":0,"Tags":"python-3.x,image-processing","A_Id":69687931,"CreationDate":"2021-10-23T10:26:00.000","Title":"How can i convert a PIL image to cv2 numpy array","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need a function to turn bytes back to a numpy array. Is there a simple way to do it?\nPickle doesn't work because my data is too long and everything else I tried fails as well... I'm trying to send a frame over a socket from my client to my server.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":96,"Q_Id":69690796,"Users Score":0,"Answer":"Try this: x = np.frombuffer(n, dtype=i.dtype)","Q_Score":0,"Tags":"python,numpy,byte","A_Id":69691435,"CreationDate":"2021-10-23T18:06:00.000","Title":"What is the opposite function of tobytes()?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to use the python weka wrapper. I am using the Cross validation method. It then prints the classification results. 
Then i use build_Classifier and test on training data using test_model. It provides different no.of classification instances compared to the cross validation model.\nFrom what i understood, in the cross validation model, 10 different models are built, and then the accuracy is averaged while the models are discarded. Then it fits the entire data again and produces the classification results.\nthen when the data is the same, shouldnt i get the same results with the build_classifier model as well?\nor is it because i put randomstate in crossvalidation but did not randomize the data in build_model?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":36,"Q_Id":69695926,"Users Score":0,"Answer":"When performing cross-validation, the entire dataset is essentially being used as test set. The predictions from the k folds get collected and used for calculating the statistics that get output. No averaging of statistics or models occurs.\nTraining and evaluating on the full dataset will yield different results, but you should see the same number of instances being used. It is possible that there is a bug in your code. But you need to post your code to determine the cause of that.","Q_Score":0,"Tags":"python,weka","A_Id":69760372,"CreationDate":"2021-10-24T10:38:00.000","Title":"WEKA training and cross validation","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have an array of n positive integers. I want to calculate a list of all contiguous subarray products of size k modulo p. For instance for the following array:\na = [3, 12, 5, 2, 3, 7, 4, 3]\nwith k = 3 and p = 12, the ordered list of all k-sized contiguous subarray products will be:\nk_products = [180, 120, 30, 42, 84, 84]\nand modulo p we have:\nk_products_p = [0, 0, 6, 6, 0, 0]\nwe can easily compute k_products using a sliding window. All we have to do is to compute the product for the first k-sized subarray and then compute the next elements of k_product using the following formula:\nk_product[i] = k_product[i - 1] * a[i + k] \/ a[i - 1]\nand after forming the whole list, we can compute k_product[i] % p for each i to get k_product_p. That's it. O(n) complexity is pretty good.\nBut if the elements of a[i] are big, the elements of k_product may overflow, and thus we cannot compute k_product_p. Plus, we cannot, for example do the following:\nk_product[i] = ((k_product[i - 1] % p) * (a[i + k] % p) \/ (a[i - 1] % p)) % p \/\/ incorrect\nSo is there a fast algorithm to do this? Note that p is not necessarily prime and it is also not necessarily coprime to the elements of a.\nEdit: As mentioned in the comments, there will be no overflow in python, but working with very big numbers will be time-consuming.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":89,"Q_Id":69705829,"Users Score":2,"Answer":"This is not a sliding window algorithm, but it is a simple and effective way to solve this problem in O(n) time without any division:\nLet A be your original array. We will imagine that there is a \"mark\" on every kth element of A -- elements A[0], A[k], A[2k], etc. 
This ensures that every k-length window in A will contain exactly one mark.\nNow, make two new arrays B and C, such that:\n\nIn array B, each element B[i] will contain the product (mod p) of A[i] and all following elements up to but not including the next mark. If A[i] is marked, then B[i] = 1. You can calculate this in a single pass backward from i=n-1 to i=0.\n\nIn array C, each element C[i] will contain the product (mod p) of A[i] and all preceding elements down to and including the previous mark. If A[i] is marked, then C[i] = A[i]. You can calculate this in a single pass forward from i=0 to i=n-1.\n\n\nNow, you can easily calculate the complete product of any k-length window in constant time, because the product of any window from A[i]...A[i+k-1] is just B[i] * C[i+k-1]. Remember that there is exactly one mark inside the window. B[i] is the product of the elements before the mark, and C[i+k-1] is the product of the marked element and the elements after it.","Q_Score":1,"Tags":"python,arrays,algorithm,sliding-window,modular-arithmetic","A_Id":69715231,"CreationDate":"2021-10-25T09:42:00.000","Title":"Sliding window algorithm to calculate the list of all k-element contiguous subarray products of an array modulo p","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a pandas dataframe containing large volumes of text in each row and it takes up 1.6GB of space when converted to .pkl. Now I want to make a list of words from this dataframe, and I thought that something as simple as [word for text in df.text for word in i.split()] should suffice, however, this expression eats up all 16GB of ram in 10 seconds and that's it. It is really interesting to me how that works, why is it not just above 1.6GB? I know that lists allocate a little more memory to be able to expand, so I have tried tuples - the same result. I even tried writing everything into a file as tuples ('one', 'two', 'three') and then opening the file and doing eval - still the same result. Why does that happen? Does pandas compress data or is python that inefficient? What is a better way to do it?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":54,"Q_Id":69706033,"Users Score":1,"Answer":"You can use a generator. For example map(func, iterable)","Q_Score":0,"Tags":"python,python-3.x,pandas,list,memory","A_Id":69706279,"CreationDate":"2021-10-25T09:58:00.000","Title":"Why do python lists take up so much memory?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a model which requires solving a system of ODEs with tfp.math.ode.BDF, and I would like to find the individual least-squares fits of this model to n > 1000 datasets. That is to say, if my model has m parameters then at the end of the optimization process I will have an n by m tensor of best-fit parameter values.\nWhat would be the best way to perform this optimization in parallel? 
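(Returning to the windowed-product answer above, a compact sketch of that marked prefix/suffix idea, using the list a, window size k and modulus p from that question; no division is needed.)

```python
def window_products_mod(a, k, p):
    n = len(a)
    B = [1] * (n + 1)                    # B[n] is a sentinel; B[i] = 1 on marked indices
    for i in range(n - 1, -1, -1):
        B[i] = 1 if i % k == 0 else (a[i] * B[i + 1]) % p
    C = [0] * n                          # C[i]: product back to (and including) the previous mark
    for i in range(n):
        C[i] = a[i] % p if i % k == 0 else (a[i] * C[i - 1]) % p
    return [(B[i] * C[i + k - 1]) % p for i in range(n - k + 1)]

print(window_products_mod([3, 12, 5, 2, 3, 7, 4, 3], 3, 12))   # [0, 0, 6, 6, 0, 0]
```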
At this point I\u2019m planning to define an objective function that adds up the n individual sums of square residuals, and then uses tfp.optimizer.lbfgs_minimize to find the best-fit values of the combined n\u00d7m parameters.","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":232,"Q_Id":69713266,"Users Score":1,"Answer":"I believe the BDF optimizer and LBFGS both support batches (of problems), so you could have an outer \"problem\" axis to your data and leastsq return value. But since BDF is for stiff problems, it's likely to have much longer runtimes for some problems than for others, and you might be best served treating each problem separately (tf.map_fn) as opposed to running them all in parallel -- in a batch, you can't run ahead onto the next LBFGS iteration for problem X until you compute the BDF integration for problem Y. Or just use a python for loop over your problems, each time calling a @tf.function def lbfgs_over_bdf(data): ....","Q_Score":1,"Tags":"python,tensorflow,tensorflow-probability","A_Id":69855861,"CreationDate":"2021-10-25T18:48:00.000","Title":"performing many gradient-based optimizations in parallel with TensorFlow","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I were using environments for months and they were working perfectly.. suddenly i can not execute any code in pycharm under any virtual environment and i get this error massage:\nfrom tensorflow.python.profiler import trace\nImportError: cannot import name 'trace' from 'tensorflow.python.profiler' (C:\\Users\\Nuha\\anaconda3\\envs\\tf_1.15\\lib\\site-packages\\tensorflow_core\\python\\profiler_init_.py)\nAny help please!!\nIt seams that it happens because i install more packages and maybe conflict occurs","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1391,"Q_Id":69720241,"Users Score":0,"Answer":"it was because environment conflict so i rebuild new environment and it works perfectly","Q_Score":0,"Tags":"python,tensorflow,pycharm,environment,anaconda3","A_Id":69988718,"CreationDate":"2021-10-26T09:00:00.000","Title":"ImportError: the 'trace' from 'tensorflow.python.profiler'","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is it possible to find the max and min of both the horizontal and vertical convolution axis without going through and performing the actual convolution?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":66,"Q_Id":69757786,"Users Score":0,"Answer":"You simply cannot skip doing the convolution altogether. There's no way to just bypass it. This scenario would be similar to trying to find the height of the Eiffel Tower without out already knowing it, looking it up, or measuring it somehow. 
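In concrete terms, that just means running the convolution and reading the extremes off the result; a tiny sketch with made-up inputs:

```python
import numpy as np
from scipy.signal import convolve2d

image = np.random.rand(256, 256)          # hypothetical input
kernel = np.ones((5, 5)) / 25.0           # hypothetical averaging kernel

out = convolve2d(image, kernel, mode='same')
print(out.min(), out.max())               # the extremes only exist once the convolution is done
```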
Although convolutions can be slow on many machines, you will unfortunately need to perform the operation to get the minimum and maximum values.","Q_Score":0,"Tags":"python,convolution","A_Id":69758287,"CreationDate":"2021-10-28T16:28:00.000","Title":"Find max and min of convolution without doing convolution","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a column with the following format:\nOriginal format:\n\n\n\n\nmm\/dd\/YYYY\n\n\n\n\n10\/28\/2021\n\n\n10\/28\/2021\n\n\n\n\nthe output after:\nprint(df['mm\/dd\/YYYY'])\n0 2021-10-28 00:00:00\n1 2021-10-28 00:00:00\nHowever when I am trying to convert to datetime I get the following error:\npd.to_datetime(df['mm\/dd\/YYYY'], format='%Y-%m-%d %H:%M:%S')\n\ntime data mm\/dd\/YYYY doesn't match format specified","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":178,"Q_Id":69759211,"Users Score":1,"Answer":"You are passing the wrong format. Try\npd.to_datetime(df['mm\/dd\/YYYY'], format='%m\/%d\/%Y')","Q_Score":0,"Tags":"python,pandas,datetime","A_Id":69763407,"CreationDate":"2021-10-28T18:29:00.000","Title":"time data mm\/dd\/YYYY doesn't match format specified","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"df1 =\n\n\n\n\n\nname\nage\nbranch\nsubject\ndate of joining\n\n\n\n\n1\nSteve\n27\nMechanical\nAutocad\n01-08-2021\n\n\n2\nAdam\n32\nElectrical\ncontrol sys\n14-08-2021\n\n\n3\nRaj\n24\nElectrical\ncircuit\n20-08-2021\n\n\n4\nTim\n25\nComputers\nclouding\n21-08-2021\n\n\n\n\ndf2= [['name','branch']]\nprint(df2)\n\n\n\n\n\nname\nbranch\n\n\n\n\n1\nSteve\nMechanical\n\n\n2\nAdam\nElectrical\n\n\n3\nRaj\nElectrical\n\n\n4\nTim\nComputers\n\n\n\n\nNow I have two data frames,\nI need only name and branch columns and remove the remaining columns, all these operations should apply to the original df1. I don't want separately df2","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":41,"Q_Id":69771915,"Users Score":1,"Answer":"Simply, Overwrite the df1 only\ndf1= df1[['name','branch']]\nor\ndf2= df1[['name','branch']]\ndel df1\nTo delete df1 or df2.\ndel df1\nor\ndel df2\nBased on requirement","Q_Score":1,"Tags":"python,python-3.x,pandas,dataframe,overwrite","A_Id":69772340,"CreationDate":"2021-10-29T16:08:00.000","Title":"how can I get only one data frame or how to overwrite the data frame?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need to compute a high dimension dataset, with clustering on Orange3 app. So, there's too many time spent to calculate the Distance Matrix between the objects. If I could use a graphic card for this tasks it will take much less time to complete the task. Anyone know, let's say, a workaround to do this?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":97,"Q_Id":69776738,"Users Score":1,"Answer":"No. Orange uses numpy arrays and computes distances on CPU. 
Short of reimplementing the routine for calculation of distances (which in itself is rather short and simple), there's nothing you can do about it.\nOrange will start using Dask in some not too distant future, but until then try reducing your data set. You may not need all dimensions and\/or objects for your clustering.","Q_Score":0,"Tags":"python-3.x,orange","A_Id":69778229,"CreationDate":"2021-10-30T04:13:00.000","Title":"Is there a simple way to use Oange3 with an Nvidia GPU?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I just wondered why Pandas DataFrame class functions do not change their instance.\nFor example, if I use pd.DataFrame.rename(), dropn(), I need to update the instance by redefining it. However, if its class is list, you can delete an element by a pop() method without redefining it. The function changes its intrinsic instance.\nIs there a reason why pandas or numpy use this kind of style?\nCan you explain why this kind of styling is better or its advantages?","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":61,"Q_Id":69797150,"Users Score":0,"Answer":"The reason is to allow the option to overwrite the dataframe object you are working on, or to leave it unchanged by creating a copy and assigning it to a different variable. The option is valuable as depending on the circumstances you may want to directly modify the original data or not.\nThe inplace parameter is one way in which you have the power to choose between the two options.","Q_Score":0,"Tags":"python,pandas,numpy,styles","A_Id":69798243,"CreationDate":"2021-11-01T12:48:00.000","Title":"Why do we need to redefine pandas DataFrame after changing columns?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I just wondered why Pandas DataFrame class functions do not change their instance.\nFor example, if I use pd.DataFrame.rename(), dropn(), I need to update the instance by redefining it. However, if its class is list, you can delete an element by a pop() method without redefining it. The function changes its intrinsic instance.\nIs there a reason why pandas or numpy use this kind of style?\nCan you explain why this kind of styling is better or its advantages?","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":61,"Q_Id":69797150,"Users Score":0,"Answer":"Each class defines what changes can be done in-place and which can't, creating instead a new object. The reasons are varied and can't be reduced to a few simple rules.\nThe underlying data structure of a list is designed for growth and shrinkage. Even so some changes are cheaper than others. append and pop at the end requires fewer changes of the data than addition or removal of items at the beginning or middle. Even so, actions like blist = alist[1:] produce a new list.\ntuple is a variation on list that is immutable, and is widely used in the base Python for function arguments and packing\/unpacking results.\nA numpy array has a fixed size. Like lists, individual values can be changed in-place, but growth requires making a new array (except for a limited use of resize). 
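A small illustration of that difference between a list and an array:

```python
import numpy as np

lst = [1, 2, 3]
lst.append(4)                 # grows the same list object in place
lst[0] = 99                   # element assignment, also in place

arr = np.array([1, 2, 3])
arr[0] = 99                   # element assignment works in place
bigger = np.append(arr, 4)    # growth allocates and returns a brand-new array
print(arr)                    # [99  2  3]  -- original length unchanged
print(bigger)                 # [99  2  3  4]
```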
numpy also has a view mechanism that makes a new array, but which shares underlying data. This can be efficient, but has pitfalls for the unwary.\npandas is built on numpy, with indices and values stored in arrays. As other answers show it often has a in-place option, but I suspect that doesn't actually reduce the work or run time. We'd have to know a lot more about the change(s) and dataframe structure.\nUltimately we, SO posters, can't answer \"why\" questions authoritatively. We can only give opinions based on knowledge and experience. Most of us are not developers, and certainly not original developers.","Q_Score":0,"Tags":"python,pandas,numpy,styles","A_Id":69799457,"CreationDate":"2021-11-01T12:48:00.000","Title":"Why do we need to redefine pandas DataFrame after changing columns?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Tensorflow is very heavy library , is there any way to save and load keras modeles(.h5) without using Tensorflow lib?","AnswerCount":1,"Available Count":1,"Score":-0.1973753202,"is_accepted":false,"ViewCount":86,"Q_Id":69816835,"Users Score":-1,"Answer":"keras framework is built upon tensorflow. If you want to use keras you will have to install tensorflow library.","Q_Score":0,"Tags":"python,tensorflow,keras","A_Id":69817349,"CreationDate":"2021-11-02T20:56:00.000","Title":"How to save keras models without tensorflow","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"i am currently using a config file to train a model in tensorflow. Inside the config file, i need to specify a python path. Since im on windows, my paths obviosly looks like these r\"C:\\path\\path\\path. But when tensorflow is using the configfile, i get this error:\nfine_tune_checkpoint: r'C:\\path\\path\\path\\ckpt-0': Expected string but found: 'r' \nAnyone has encountered a similar problem?","AnswerCount":1,"Available Count":1,"Score":-0.3799489623,"is_accepted":false,"ViewCount":68,"Q_Id":69831071,"Users Score":-2,"Answer":"Looks like it tripped because there was an r written outside of your quotation marks. I'd try to delete that and see if it works, or if the r is in your path, add it within the quotation marks.","Q_Score":0,"Tags":"python,tensorflow,path","A_Id":69831140,"CreationDate":"2021-11-03T20:01:00.000","Title":"Python path problem: Expected string but found: 'r'","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a CSV file containing list of postcodes of different persons which involves travelling from one postcode to another for different jobs, a person could travel to 5 postcoodes a day. using numpy array, I got list of list of postcodes. 
I then concatenate the list of postcode to get one big list of postcode using a = np.concatenate(b), after which I want to sort it in an alphabetical order, I used : print(np.sort(a)) is gave me error error AxisError: axis -1 is out of bounds for array of dimension 0\nI also tried using a.sort() but it is giving me TypeError: '<' not supported between instances of 'float' and 'str'\nPlease, can someone help","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":37,"Q_Id":69835722,"Users Score":0,"Answer":"Looks like you're passing in both floats and strings into your list.\nTry converting the values in b into a float before you concatenate them.","Q_Score":0,"Tags":"python,sorting","A_Id":69835789,"CreationDate":"2021-11-04T07:38:00.000","Title":"Concatenating and sorting a list of list array","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am currently on Linux x86-64 machine. I am trying to install opencv in my virtual environment using pip however the error I keep receiving is\nERROR: Could not find a version that satisfies the requirement numpy==1.19.3 (from versions: 1.19.2+computecanada, 1.21.0+computecanada, 1.21.2+computecanada)\nERROR: No matching distribution found for numpy==1.19.3\nI am running python 3.9.6 (64bit) and my current numpy version is 1.21.3. the command I've been using is pip install opencv-python. i've also tried uninstalling other instances of openCV and have tried the other options all with the same error. Does openCV not support numpy 1.21.3? Would anyone be able to point me in the right direction?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":206,"Q_Id":69847380,"Users Score":0,"Answer":"Actually, this error happens if numpy version does not match OpenCV required version.\nfor my case:\nI used python 3.6. so I solved this error by following:\n\npip install numpy==1.19.0\npip install opencv-python==3.4.11.45\n\nafter installing numpy I search which OpenCV version support this numpy version, I found 3.4.11.45 so I install it by 2 number command and it is working.","Q_Score":0,"Tags":"python,numpy,opencv,pip","A_Id":71853174,"CreationDate":"2021-11-05T00:31:00.000","Title":"Installation issues using pip for OpenCv","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I built a box-embedding model on the latest wikipedia articles dump and i need to compare it with the word2vec model in gensim. I saw that if i generate the corpus data as a txt file using get_texts() method in class WikiCorpus there are a lot of stop words, so this make me think that WikiCorpus doesn't delete stop words isn't it?. Now once trained my box model on the wiki corpus txt i notice that calling the \"most similar\" function that i create appositely for box embedding prints very often stop words, instead the same word passed to the most similar function of word2vec model trained on the same corpus txt produce best results. 
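(One hedged alternative for the postcode-sorting error above: since postcodes are strings and the stray floats are usually NaN read from empty CSV cells, forcing a single string dtype also makes the comparison well defined; the sample data here is invented.)

```python
import numpy as np

# hypothetical stand-in for the per-person postcode arrays described in that question
b = [np.array(['SW1A 1AA', 'M1 1AE', np.nan], dtype=object),
     np.array(['B33 8TH', 'CR2 6XH'], dtype=object)]

a = np.concatenate(b).astype(str)   # one common dtype, so '<' comparisons work
a = a[a != 'nan']                   # assumption: the floats were NaN placeholders for empty cells
print(np.sort(a))
```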
Can someone suggest me why Word2vec model fit so well despite the corpus txt have a lot of stop words instead my box model on the same corpus not?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":46,"Q_Id":69847406,"Users Score":0,"Answer":"How did you train a box-embedding, and why did you think it would offer good most_similar() results?\nFrom a (very) quick glance at the 'BoxE' paper by Abboud et al (2020), it appears to require training based on a knowledge base representation \u2013 not the raw text that'd come from WikiCorpus. (So I'd not necessarily expect a properly-trained BoxE embedding would have 'stop words' in it at all.)\nAnd, BoxE appears to be optimized for evaluating potential facts \u2013 not more general most_similar rankings. So I'd not expect a simple most_similar listing from it to necessarily be expressive.\nIn usual word2vec, removing stop-words isn't very important and plenty of published work doesn't bother doing so. The downsampling of highly-frequent words already tends to ignore many stop-word occurrences \u2013 and their highly diverse usage contexts mean they are likely to get weak word-vectors not especially close to other more-narrow-meaning word-vectors.\nSo in usual word2vec, stop-words aren't very likely to be in the top-neighbors, by cosine-similarity, of other more-specific entity words.","Q_Score":0,"Tags":"python,nlp,gensim,word2vec","A_Id":69848836,"CreationDate":"2021-11-05T00:36:00.000","Title":"does WikiCorpus remove stop_words in gensim?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a dataframe where the indexes are not numbers but strings (specifically, name of countries) and they are all unique. Given the name of a country, how do I find its row number (the 'number' value of the index)?\nI tried df[df.index == 'country_name'].index but this doesn't work.","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":489,"Q_Id":69848807,"Users Score":0,"Answer":"Why you don make the index to be created with numbers instead of text? 
Because your df can be sorted in many ways beyond the alphabetical, and you can lose the rows count.\nWith numbered index this wouldn't be a problem.","Q_Score":8,"Tags":"python,pandas,indexing","A_Id":69848896,"CreationDate":"2021-11-05T04:52:00.000","Title":"How do I find the row # of a string index?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a pandas Timestamp column that looks like this:\n2021.11.04_23.03.33\nHow do I convert this in a single liner to be able to look like this:\n2021-11-04 23:03:33","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":49,"Q_Id":69856278,"Users Score":0,"Answer":"use a regular expression by looking for the hour and minute second pattern (\\d{4})-\\d{2}-(\\d{2})\\s+(\\d{2})_(\\d{2}).(\\d{2}).(\\d{2}) and use re.findall then get each group part then reassemble the datetime stringthen convert to a datetime","Q_Score":0,"Tags":"python,pandas,datetime,timestamp","A_Id":69856542,"CreationDate":"2021-11-05T16:21:00.000","Title":"Pandas Timestamp reformatting","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm doing PID gain tuning for a DC motor\nI gathered real data from the motor which involve the position according to time.\nAnd i want to calculate the rise time, overshoot, and settling time from the data.\nIs there any function in matlab or python which can do this?\nThank you!!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":111,"Q_Id":69871085,"Users Score":0,"Answer":"In the cases that you use the step command to extract the step-response characteristics of the system, the stepinfo command calculates the rise time, overshoot, and settling time, and so on. I don't know whether it is applicable in the data case or not but you can test it?","Q_Score":0,"Tags":"python,matlab,controls","A_Id":69880340,"CreationDate":"2021-11-07T09:11:00.000","Title":"Is there any function that calculates the rise time, overshoot, and settling time?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"On google colab I installed conda and then cudf through conda. However now i need to reinstall all packages like sklearn etc which I am using in my code. Is there some way to install cudf without conda ? pip no more works with cudf. 
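For the question above about finding the row number of a string index, pandas exposes this directly through Index.get_loc; a minimal sketch with a hypothetical country DataFrame:

    import pandas as pd

    df = pd.DataFrame({"population": [67, 31, 126]},
                      index=["France", "Ghana", "Japan"])
    row_number = df.index.get_loc("Ghana")   # positional index of that label -> 1
    print(row_number)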
Also if there is some other similar gpu dataframe which can be installed using pip, it will be of great help.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":142,"Q_Id":69900508,"Users Score":1,"Answer":"No, cudf is not available as a pip package.\nYou don't say why you need cudf, but I would try pandas, dask or vaex, surely one of those will do what you need.","Q_Score":1,"Tags":"python,pip,gpu,conda,cudf","A_Id":69900809,"CreationDate":"2021-11-09T15:07:00.000","Title":"Install cudf without conda","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"While writing in CSV file, automatically folder is created and then csv file with cryptic name is created, how to create this CSV with any specific name but without creating folder in pyspark not in pandas.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":409,"Q_Id":69905103,"Users Score":0,"Answer":"That's just the way Spark works with the parallelizing mechanism. Spark application meant to have one or more workers to read your data and to write into a location. When you write a CSV file, having a directory with multiple files is the way multiple workers can write at the same time.\nIf you're using HDFS, you can consider writing another bash script to move or reorganize files the way you want\nIf you're using Databricks, you can use dbutils.ls to interact with DBFS files in the same way.","Q_Score":0,"Tags":"python,apache-spark,pyspark,apache-spark-sql","A_Id":69905210,"CreationDate":"2021-11-09T21:07:00.000","Title":"How to write in CSV file without creating folder in pyspark?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"For an assignment I have to write some data to a .csv file. I have an implementation that works using Python's csv module, but apparently I am not supposed to use any imported libraries...\nSo, my question is how I could go about doing so? I am no expert when it comes to these things, so I am finding it difficult to find a solution online; everywhere I look import csv is being used.","AnswerCount":3,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":1097,"Q_Id":69911985,"Users Score":1,"Answer":"I guess that the point of your assignment is not to have some else to do it for you online. So a few hints:\n\norganise your data per row.\niterates through the rows\nlook at concatenating strings\ndo all above while iterating to a text file per line","Q_Score":0,"Tags":"python,csv,file-writing","A_Id":69912092,"CreationDate":"2021-11-10T10:52:00.000","Title":"How to write to a .csv file without \"import csv\"","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a time series in which i am trying to detect anomalies. The thing is that with those anomalies i want to have a range for which the data points should lie to avoid being the anomaly point. 
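Following the hints in the write-a-CSV-without-import-csv answer above, a minimal sketch with hypothetical data. Note it does not handle quoting of commas inside values, which the csv module would do for you.

    rows = [["name", "score"], ["alice", 10], ["bob", 7]]   # hypothetical data

    with open("out.csv", "w") as f:
        for row in rows:
            f.write(",".join(str(value) for value in row) + "\n")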
I am using the ML .Net algorithm to detect anomalies and I have done that part but how to get range?\nIf by some way I can get the range for the points in time series I can plot them and show that the points outside this range are anomalies.\nI have tried to calculate the range using prediction interval calculation but that doesn't work for all the data points in the time series.\nLike, assume I have 100 points, I take 100\/4, i.e 25 as the sliding window to calculate the prediction interval for the next point, i.e 26th point but the problem then arises is that how to calculate the prediction interval for the first 25 points?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":42,"Q_Id":69926526,"Users Score":0,"Answer":"A method operating on a fixed-length sliding window generally needs that entire window to be filled, in order to make an output. In that case you must pad the input sequence in the beginning if you want to get predictions (and thus anomaly scores) for the first datapoints. It can be hard to make that padded data realistic, however, which can lead to poor predictions.\nA nifty technique is to compute anomaly scores with two different models, one going in the forward direction, the other in the reverse direction, to get scores everywhere. However now you must decide how to handle the ares where you have two sets of predictions - to use min\/max\/average anomaly score.\nThere are some models that can operate well on variable-length inputs, like sequence to sequence models made with Recurrent Neural Networks.","Q_Score":0,"Tags":"python,c#,anomaly-detection,lower-bound,upperbound","A_Id":69940939,"CreationDate":"2021-11-11T10:13:00.000","Title":"Interval Prediction for a Time Series | Anomaly in Time Series","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a list of different expressions. It looks like this:\nmy_list = [[1 ,2 ,'M' ,2], [1 ,2 ,'A' , 1], [1 ,2 ,'g' ,3], [1 ,2 ,'o' ,4]]\nI want to sort the list. The key should always be the first entry in the list, in this case the book positions A, M, g, o. However, upper and lower case should be ignored.\nIn python I used:\nmy_list.sort(key = itemgetter (3))\nOutput is:\n[[1, 2, 'A', 1], [1, 2, 'M', 2], [1, 2, 'g', 3], [1, 2, 'o', 4]]\nThe problem is that in my result the uppercase letters are sorted first and then the lowercase letters. How can I make lower and upper case letters sort together? The result should look like this:\n[[1 ,2 ,'A' ,1], [1 ,2 ,'g' ,3], [1 ,2 ,'M' ,2], [1 ,2 ,'o' ,4]]","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":64,"Q_Id":69942469,"Users Score":1,"Answer":"Use key=lambda lst: lst[2].lower().","Q_Score":0,"Tags":"python","A_Id":69942503,"CreationDate":"2021-11-12T11:48:00.000","Title":"python sort multi-dimensional lists CASE INSENSITIVE","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm building a model to identify a subset of features to classify an object belong which group. 
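A runnable version of the case-insensitive sort answer above, using the question's own list:

    my_list = [[1, 2, 'M', 2], [1, 2, 'A', 1], [1, 2, 'g', 3], [1, 2, 'o', 4]]
    my_list.sort(key=lambda lst: lst[2].lower())
    print(my_list)   # [[1, 2, 'A', 1], [1, 2, 'g', 3], [1, 2, 'M', 2], [1, 2, 'o', 4]]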
In detail, I have a dataset of 11 objects in which 5 belong to group A and 6 belong to group B, each object has been characterized with a mutation status of 19,000 genes and the values are binary, mutation or no-mutation. My aim is to identify a group of genes among those 19,000 genes so I can predict the object belongs to group A or B. For example, if the object has gene A, B, C mutation and D, E gene with no mutation, it belongs to group A, if not it belongs to group B.\nSince I have a large number of features (19,000), I will need to perform feature selection. I'm thinking maybe I can remove features with low variance first as a primary step and then apply the recursive feature elimination with cross-validation to select optimal features. And also don't know yet which model I should use to do the classification, SVM or random forest.\nCan you give me some advice? Thank you so much.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":33,"Q_Id":69971969,"Users Score":0,"Answer":"Obviously in a first step you can delete all features with zero variance. Also, with 11 observations against the remaining features you will not be able to \"find the truth\" but maybe \"find some good candidates\". Whether you'll want to set a lower limit of the variance above zero depends on whether you have additional information or theory. If not, why not leave feature selection in the hands of the algorithm?","Q_Score":0,"Tags":"python,r,binary,classification","A_Id":69973829,"CreationDate":"2021-11-15T09:19:00.000","Title":"Is it reasonable to use 2 feature selection steps?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I deployed Apache Spark 3.2.0 using this script run from a distribution folder for Python:\n.\/bin\/docker-image-tool.sh -r -t my-tag -p .\/kubernetes\/dockerfiles\/spark\/bindings\/python\/Dockerfile build\nI can create a container under K8s using Spark-Submit just fine. My goal is to run spark-submit configured for client mode vs. local mode and expect additional containers will be created for the executors.\nDoes the image I created allow for this, or do I need to create a second image (without the -p option) using the docker-image tool and configure within a different container ?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":57,"Q_Id":69980622,"Users Score":0,"Answer":"It turns out that only one image is needed if you're running PySpark. Using Client-mode, the code spawns the executors and workers for you and they run once you create a spark-submit command. Big improvement from Spark version 2.4!","Q_Score":1,"Tags":"python,docker,apache-spark,kubernetes","A_Id":70038985,"CreationDate":"2021-11-15T20:41:00.000","Title":"Two separate images to run spark in client-mode using Kubernetes, Python with Apache-Spark 3.2.0?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have an array\narr = np.array([[1,1,2], [1,2,3]]).\nI want to get amount of unique element for each row and count mean\nI can do this np.array([len(np.unique(row)) for row in arr]).mean().\nBut seems, that it's a slow way. 
Is there another faster approach?","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":45,"Q_Id":69989090,"Users Score":0,"Answer":"set(arr.flatten()) will create your desired result. Not sure about how fast it is though.\nOutput:\n{1, 2, 3}\nEdit:\nYou wanted the number of unique elements, so you wrap the whole thing in len()","Q_Score":1,"Tags":"python,numpy,unique","A_Id":69989156,"CreationDate":"2021-11-16T12:21:00.000","Title":"Get amount of unique elements in numpy","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"For example, how many Mb will be required for EfficientNetB3? On drive model weights require 187 Mb of memory, does it mean that when the model will be loaded on GPU, it will use 187 Mb of GPU memory?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":39,"Q_Id":69989216,"Users Score":0,"Answer":"It's difficult to calculate total, but you can estimate a minimum to just load a model, which would be roughly the model size. Tensorflow, for example, defaults to reserving 100% of the GPU memory. You can set limits, but the amount of memory to be used is based on many things, such as number of layers, input image size, batch size, etc.","Q_Score":0,"Tags":"python,computer-vision,gpu","A_Id":70011280,"CreationDate":"2021-11-16T12:30:00.000","Title":"How to evaluate the required GPU memory for running neural network models?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"How to get pandas dataframe when select only one column? In R there is drop = False for that.\nWe can use pd.DataFrame(df['breakfast']) or df[['breakfast']], but do we have smth like drop = False as it in R?\nPS: press F for breakfast)","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":34,"Q_Id":69990711,"Users Score":0,"Answer":"I think you are looking for something like index=df.index.\nThe question is a bit broad","Q_Score":0,"Tags":"python,pandas,dataframe","A_Id":69990837,"CreationDate":"2021-11-16T14:13:00.000","Title":"How to get pandas dataframe when select only one column? 
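For the per-row unique-count question above, a vectorized sketch as an alternative to both the Python loop and the set(arr.flatten()) suggestion (which counts unique values over the whole array rather than per row): sort each row and count the positions where the value changes.

    import numpy as np

    arr = np.array([[1, 1, 2], [1, 2, 3]])
    sorted_rows = np.sort(arr, axis=1)
    # a row's unique count = 1 + number of value changes along the sorted row
    counts = (np.diff(sorted_rows, axis=1) != 0).sum(axis=1) + 1
    print(counts.mean())   # 2.5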
In R there is drop = False for that","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm new to OpenCV and trying to use SIFT and SURF for a project.\nOn my laptop I have OpenCV version= 4.5.1.48 and also added OpenCV-contrib-python of version 4.5.1.48\ncurrently the problem I'm facing is the error I'm getting after following the documentation SIFT works perfectly after following documentation but SURF isn't working and giving me error for following codes\ncode 1\nsurf = cv.xfeatures2d.SURF_create()\nAttributeError: module 'cv2.cv2' has no attribute 'xfeatures2d'\ncode 2\nsurf = cv2.SURF_create()\nAttributeError: module 'cv2.cv2' has no attribute 'SURF_create'\nAfter reading many answers on Stack overflow I changed version of OpenCV and did many things but nothing is working for me\nI'm new to this please someone guide me through this\nI read about the patent expiring too but nothing is working in my case pls tell me if im wrong somewhere\nThanks in advance","AnswerCount":2,"Available Count":1,"Score":-0.0996679946,"is_accepted":false,"ViewCount":2344,"Q_Id":69993071,"Users Score":-1,"Answer":"For patent reasons, opencv 4.5.1.48 does not include the whole algorithm\nYou can use Python3.6 (or Python3.7 maybe OK) and install opencv-pyhton==3.4.2.16 and opencv-contrib-python==3.4.2.16, then you can use the function that:\nsurf = cv2.xfeatures2d.SURF_create()\nor\nsift = cv2.xfeatures2d.SIFT_create()","Q_Score":0,"Tags":"python,opencv,cv2,sift,surf","A_Id":71440200,"CreationDate":"2021-11-16T16:49:00.000","Title":"AttributeError: module 'cv2.cv2' has no attribute 'SURF_create' , 2. module 'cv2.cv2' has no attribute 'xfeatures2d'","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"w2v = gensim.models.Word2Vec.load(\"w2v.pkl\")\nI am using this method to load pickle file through gensim but encountering an error.\nAttributeError: 'dict' object has no attribute '_load_specials'","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":102,"Q_Id":69999353,"Users Score":1,"Answer":"If you saved the model using pickle, you should be using some form of unpickle to re-load it. (Gensim has a utility method for unpickling a file containing a single pickled object at [gensim.utils.unpickle][1].)\nGensim's per-instance .save() and per-class .load() methods are a custom save\/load protocol that internally makes use of pickle but does other things (& perhaps spreads the model over multiple files) as well. You should only Word2Vec.load(\u2026) a filename that was previously created by code like w2v_model.save(\u2026).","Q_Score":0,"Tags":"python-3.x,pickle,gensim,word2vec","A_Id":70002229,"CreationDate":"2021-11-17T05:30:00.000","Title":"Load pickle file in gensim","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am working in Google Cloud Platform and I am trying to use Pyspark to convert a csv file into an avro file. I have seen a lot of websites but I haven't been able to implment the solution. Thank you in advance. 
:)","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":80,"Q_Id":70006506,"Users Score":0,"Answer":"You can read the csv file into a dataset\/dataframe using spark and use databricks library to write it as avro. Something like:\ndataset.write.format(\"com.databricks.spark.avro\").save(\"your output path\")","Q_Score":0,"Tags":"python,apache-spark,pyspark","A_Id":70006601,"CreationDate":"2021-11-17T14:46:00.000","Title":"How to convert a csv file to an avro file using PySpark?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to extract project relevant information via web scraping using Python+ Spacy and then building a table of projects with few attributes , example phrases that are of interest for me are:\n\nThe last is the 300-MW Hardin Solar III Energy Center in Roundhead, Marion, and McDonald townships in Hardin County.\nIn July, OPSB approved the 577-MW Fox Squirrel Solar Farm in Madison County.\nSan Diego agency seeking developers for pumped storage energy project.\nThe $52.5m royalty revenue-based royalty investment includes the 151MW Old Settler wind farm\n\nHere I have highlighted different types of information that I'm interested in , I need to end up with a table with following columns :\n{project name} , {Location} ,{company}, {Capacity} , {start date} , {end Date} , {$investment} , {fuelType}\nI'm using Spacy , but looking at the dependency tree I couldn't find any common rule , so if I use matchers I will end up with 10's of them , and they will not capture every possible information in text, is there a systematic approach that can help me achieve even a part of this task (EX: Extract capacity and assign it to the proper project name)","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":195,"Q_Id":70010363,"Users Score":0,"Answer":"You should be able to handle this with spaCy. You'll want a different strategy depending on what label you're using.\n\nLocation, dates, dollars: You should be able to use the default NER pipeline to get these.\nCapacity, fuel type: You can write a simple Matcher (not DependencyMatcher) for these.\nCompany: You can use the default NER or train a custom one for this.\nProject Name: I don't understand this from your examples. \"pumped storage energy project\" could be found using a Matcher or DependencyMatcher, I guess, but is hard. What are other project name examples?\n\nA bigger problem you have is that it sounds like you want a nice neat table, but there's no guarantee your information is structured like that. What if an article mentions that a company is building two plants in the same sentence? How do you deal with multiple values? 
That's not a problem a library can solve for you - you have to look at your data and decide whether that doesn't happen, so you can ignore it, or what you'll do when it does happen.","Q_Score":1,"Tags":"python,nlp,spacy,information-retrieval,information-extraction","A_Id":70014635,"CreationDate":"2021-11-17T19:22:00.000","Title":"Information extraction with Spacy with context awareness","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Suppose you have a pandas.DataFrame like so:\n\n\n\n\nInstitution\nFeat1\nFeat2\nFeat3\n...\n\n\n\n\nID1\n14.5\n0\n0.32\n...\n\n\nID2\n322.12\n1\n0.94\n...\n\n\nID3\n27.08\n0\n1.47\n...\n\n\n\n\nMy question is simple: how would one select rows from this dataframe based on the maximum combined values from two or more columns. For example:\n\nI want to select rows where the columns Feat1and Feat3 have their maximum value together, returning:\n\n\n\n\n\nInstitution\nFeat1\nFeat2\nFeat3\n...\n\n\n\n\nID2\n322.12\n1\n0.94\n...\n\n\n\n\nI am certain a good old for loop can take care of the problem given a little time, but I believe there must be a Pandas function for that, hope someone point me in the right direction.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":54,"Q_Id":70027445,"Users Score":0,"Answer":"You can play arround with:\ndf.sum(axis=1)\ndf['row_sum'] = df.sum(axis=1)\nor\ndf['sum'] = df['col1' ] + df['col3']\nAnd then:\ndf.sort(['sum' ],ascending=[False or True])\ndf.sort_index()","Q_Score":1,"Tags":"python,pandas","A_Id":70027532,"CreationDate":"2021-11-18T22:16:00.000","Title":"Select Pandas dataframe row where two or more columns have their maximum value together","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Given a spatial related search phrase, such as \"Find cafes near the train station\", what would be the approach to handling this with NLP \/ semantic searching?\nIn this case, would all the 'cafes' need to have a qualitative token with regard to their distance to the train station (e.g. near \/ far)? Curious to know what the thought process would be for handling these kind of tasks.\nThanks!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":41,"Q_Id":70048440,"Users Score":0,"Answer":"The way I would approach this is to look at the prepositions in the first place, in this case near means close by. You then identify the reference point (train station). 
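For the question above about selecting the row where two columns are jointly maximal, a more direct sketch than summing and sorting, using the question's own column names:

    import pandas as pd

    df = pd.DataFrame(
        {"Feat1": [14.5, 322.12, 27.08], "Feat2": [0, 1, 0], "Feat3": [0.32, 0.94, 1.47]},
        index=["ID1", "ID2", "ID3"],
    )
    best = df.loc[[(df["Feat1"] + df["Feat3"]).idxmax()]]   # double brackets keep a DataFrame
    print(best)   # the ID2 row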
Now you find cafes which are close to that, ie you should have a list of cafes with their coordinates, and you compare those against the coordinates of the train station, returning the ones that are closest.\nOther prepositions (opposite) or other descriptions (in the same street as) would need corresponding other metrics to evaluate whether they fit.\nThis is not a semantic search problem, as there is nothing inherent in language that describes whether something is close or far from another thing -- you need to map this onto the 'world', and make a decision from non-linguistic data.","Q_Score":0,"Tags":"python,search,text,nlp,semantics","A_Id":70049687,"CreationDate":"2021-11-20T17:49:00.000","Title":"How would NLP handle semantic searches for spatial information?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I know when you import everything you can do thinks like nltk.bigrams(nltk.corpus.brown.words() for bigrams and nltk.trigrams(nltk.corpus.brown.words() for triagrams, but how do you do four grams?\nI've seen other ways to do it, but they all do it with a \"string\" or a text they make up. How do you do it with the nltk.corpus.brown? Do you have to covert it to a string and if so, how?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":51,"Q_Id":70049103,"Users Score":0,"Answer":"To get n number of items you can use nltk.ngrams() with the number to get as the second argument.\nIn your example, to get four-grams, you can use nltk.ngrams(nltk.corpus.brown.words(), 4)","Q_Score":0,"Tags":"python,nltk","A_Id":70049256,"CreationDate":"2021-11-20T19:15:00.000","Title":"Finding Four Grams in an NLTK Corpus","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to understand a few things about partioning a parquet on Dask.\nWhen I do it in a .csv file, the chunksize works as intended, doing 30 partitions based on 50 mb chunks.\nWhen I try to do it the same logic through the read_parquet, none partition is created, and when I force this with repartition(partition_size='50mb'), it create 109 partitions.\nCan someone explain to me why parquet doesn't seems to work at the same way like .csv when doing chunksizes?","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":183,"Q_Id":70053555,"Users Score":2,"Answer":"In CSV, the fundamental, non-splittable chunk of data is one row, usually the bytes between one \\n character and the subsequent one. This bytes chunk size is typically small. When you load data with dask, it reads from a given offset to the next \\n to be able to read an exact number of rows. You would find, if you made the chunk size too small, that some partitions would contain no data.\nParquet is not structured like this. Its fundamental non-splittable chunk is the \"row-group\", and there is often just one row group per data file. This is done for efficiency: encoding and compressing a whole row group's worth of data in one block will give maximum read throughput. 
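A short usage sketch for the four-grams answer above, assuming the Brown corpus has already been downloaded via nltk.download('brown'):

    import nltk
    from nltk.corpus import brown

    fourgrams = nltk.ngrams(brown.words(), 4)
    print(next(fourgrams))   # first 4-gram of the Brown corpus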
Furthermore, because of the encoding and compression, it's much harder for dask to guess how big a piece of a dataset will be as an in-memory pandas dataframe, but it can be many times bigger.\nA row group could easily be >>100MB in size. In fact, this is generally recommended, as smaller pieces will have a higher fraction of their processing time in overhead and latency.\nTo summarize\n\ndask will not split a parquet dataset beyond the partitioning within the data files\nthat partition size might be many times larger in memory than on disk, so repartitioning after load may result in many partitions\nthese are tradeoffs required to make parquet as fast and space-efficient as it is","Q_Score":0,"Tags":"python,dask,parquet","A_Id":70069529,"CreationDate":"2021-11-21T10:11:00.000","Title":"Repartioning parquet file dask","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Given an integer n <= 10^18 which is the product of Fibonacci numbers, I need to factor it into said Fibonacci numbers.\nEach factorization has a score, which is one less than the count of factors plus the sum of the indices of the factors in the Fibonacci sequence that begins with f(1) = 1, f(2) = 2.\nIf multiple such factorizations are possible, I need the factorization that minimizes the score.\nExample:\n104 = 13 * 8 or 104 = 13 * 2 * 2 * 2\nf(6) = 13, f(5) = 8, f(2) = 2\nFor 104 = 13*8 = f(6)*f(5), we have a count of 2, indices of 6 & 5, giving us 2 + 6 + 5 - 1 = 12.\nFor 104 = 13 * 2 * 2 * 2 = f(6) * f(2) * f(2) * f(2), we have a count of 4 and indices of 6, 2, 2, 2, giving us 4 + 6 + 2 + 2 + 2 - 1 = 15.\nWe should pick 13 * 8 since it has the lower score.\nThe biggest problem I've come across is when we have a number like 1008, which is divisible by 144 and 21, but needs to be divided by 21 because 1008 % 7 == 0. Because my program is first dividing by the biggest numbers, number 144 is 'stealing' 3 from number 21 so my program doesn't find a solution.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":93,"Q_Id":70055095,"Users Score":0,"Answer":"Carmichael's theorem proves that each Fibonacci number after 144 has at least one prime divisor that doesn't divide any earlier Fibonacci number.\nThere aren't many Fibonacci numbers under 10^18; fewer than 90.\nMake an array of all the Fibonacci numbers <= 10^18.\nGiven an input n which is the product of Fibonacci numbers, its factorization into Fibonacci numbers must include every Fibonacci number above 144 that divides it, repeated as many times as it divides it.\nGo through your Fibonacci numbers in descending order and keep dividing n by any such number that divides it, until you get to 144.\nNow we need to be careful because two Fibonacci numbers don't have any prime factors not seen in previous Fibonacci numbers. These are 8 and 144. Since 8 is 2^3 and 2 is a Fibonacci number, you can't render your number unfactorable into Fibonacci numbers by taking the 8. Under your optimization, you will always choose the 8.\nThen 144 is the only factor that you might need to reject for a smaller factor. 
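Tying the parquet answer above back to the question's code, a sketch of the two knobs involved, assuming dask's blocksize and partition_size keywords and hypothetical paths:

    import dask.dataframe as dd

    # CSV: chunking by bytes of text works as the question expects
    ddf_csv = dd.read_csv("data/*.csv", blocksize="50MB")

    # Parquet: one partition per row group by default; repartition afterwards if needed,
    # remembering that in-memory size can be much larger than on-disk size
    ddf_pq = dd.read_parquet("data/")
    ddf_pq = ddf_pq.repartition(partition_size="50MB")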
This can only happen if 34 or 21 are factors, and the 144 eliminates a needed 2 or 3.\n34 = 2 * 17, 21 = 3 * 7\nThat was long-winded, but it gets us to a simple approach.\nGo through the Fibonacci numbers <= n in descending order until you get to 144, then skip to 34, then 21, then back to 144 and descending down to 2.\nThis will give you the optimal factorization under your weird scoring scheme.\n----- this order -----\n[679891637638612258, 420196140727489673, 259695496911122585, 160500643816367088, 99194853094755497, 61305790721611591, 37889062373143906, 23416728348467685, 14472334024676221, 8944394323791464, 5527939700884757, 3416454622906707, 2111485077978050, 1304969544928657, 806515533049393, 498454011879264, 308061521170129, 190392490709135, 117669030460994, 72723460248141, 44945570212853, 27777890035288, 17167680177565, 10610209857723, 6557470319842, 4052739537881, 2504730781961, 1548008755920, 956722026041, 591286729879, 365435296162, 225851433717, 139583862445, 86267571272, 53316291173, 32951280099, 20365011074, 12586269025, 7778742049, 4807526976, 2971215073, 1836311903, 1134903170, 701408733, 433494437, 267914296, 165580141, 102334155, 63245986, 39088169, 24157817, 14930352, 9227465, 5702887, 3524578, 2178309, 1346269, 832040, 514229, 317811, 196418, 121393, 75025, 46368, 28657, 17711, 10946, 6765, 4181, 2584, 1597, 987, 610, 377, 233, 34, 21, 144, 89, 55, 13, 8, 5, 3, 2]","Q_Score":1,"Tags":"python,algorithm,fibonacci,division","A_Id":70058785,"CreationDate":"2021-11-21T13:44:00.000","Title":"The smallest sum of divisors","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"The dataset is large with over 15000 rows.\nOne row of x,y,z plots a point on a 3D plot.\nI need to scale the data and so far I'm using RobustScaler(), but I want to make sure that the dataset is either normally distributed or it isn't.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":31,"Q_Id":70060312,"Users Score":0,"Answer":"Matplotlib histogram [plt.hist()] can be used for checking data distribution. If the highest peak middle of the graph, then datasets are normally distributed.","Q_Score":0,"Tags":"python,multidimensional-array","A_Id":70061141,"CreationDate":"2021-11-22T02:34:00.000","Title":"I have a 3D dataset of coordinates x,y,z. How do I check if the dataset is normally distributed?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have used the .map_partitions with delayed functions and the result I got is a dataframe with delayed results in each row.\nIs there any way to unpack those delayed objects?\nAm I doing something wrong?\nThank you.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":42,"Q_Id":70078414,"Users Score":5,"Answer":"A very short answer: you should use map_partitions with a normal function, not a delayed one. 
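A sketch of the ordering trick described in the Fibonacci-factorization answer above: greedy division by Fibonacci numbers in descending order, but trying 34 and 21 before 144 so a needed 2 or 3 is not stolen. The function names are mine.

    def fibs_descending(limit=10**18):
        fibs = [2, 3]                        # f(1)=1 is skipped: it never changes a product
        while fibs[-1] + fibs[-2] <= limit:
            fibs.append(fibs[-1] + fibs[-2])
        return fibs[::-1]

    def fibonacci_factorization(n):
        order = [f for f in fibs_descending() if f > 144]    # descending, down to 233
        order += [34, 21, 144]                                # the special-case ordering
        order += [f for f in fibs_descending() if f < 144 and f not in (34, 21)]
        factors = []
        for f in order:
            while n % f == 0:
                factors.append(f)
                n //= f
        return factors if n == 1 else None   # None: not a product of Fibonacci numbers

    print(fibonacci_factorization(104))    # [13, 8]
    print(fibonacci_factorization(1008))   # [21, 8, 3, 2]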
The dataframe interface already provides laziness and parallelism, so you don't need to add another nested level.","Q_Score":1,"Tags":"python,pandas,dask","A_Id":70082881,"CreationDate":"2021-11-23T09:34:00.000","Title":"How to unpack a dataframe of delayed dask objects?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"So, I want to implement a class that holds nested data. I would like to implement __getitem__ in a way so that obj[x][y] can be abbreviated by obj[x, y].\nHowever, I noticed a problem: The signature of __getitem__ is that it expects a single positional argument instead of *args. If multiple arguments are given, they are automatically put into a tuple.\nI.e. obj[a, b] and obj[(a, b)] both appear to be equivalent to obj.__getitem__((a,b))\nBut then how can I distinguish the two cases\n\nThe outer layer is indexed by tuples and obj[(a, b)] should return the value at that index\nThe outer layer is not indexed by tuples and obj[a, b] should return obj[a][b]\n\nThe only possible solutions I am aware of currently are\n\nAbandon the idea of coercing obj[x, y] into obj[x][y]\nIf we only want obj[x] always write obj[x,] instead.\n\nBoth are not really satisfactory.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":34,"Q_Id":70081385,"Users Score":2,"Answer":"Short of trying to inspect the calling source code (which is extremely fragile, with all sorts of failure cases, nowhere near worth the instability and debugging headache), this is not possible.\nobj[a, b] and obj[(a, b)] mean exactly the same thing in Python. There is no semantic difference, no difference in how they are executed, and nothing to hook into to distinguish them. It'd be like trying to distinguish the whitespace in obj[a,b] and obj[a, b].","Q_Score":1,"Tags":"python,python-3.x","A_Id":70081459,"CreationDate":"2021-11-23T13:00:00.000","Title":"Any way to distinguish `obj[x, y]` from `obj[(x, y)]`?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to remove the empty cells from my column A of my data which had text data in it.\nMy csv which I imported into data frame has 50k rows containing search data in column A.\nI tried the below options.\ndf= df.replace(r'^s*$', float('NaN'), regex = True)\ndf.replace(\"\", np.nan, inplace=True)\ndf.dropna(subset=['A'], inplace=True)\nStill there are empty cells","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":86,"Q_Id":70084610,"Users Score":0,"Answer":"Are you sure they are empty? 
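A minimal sketch of the map_partitions answer above, with a hypothetical dataframe and a plain (non-delayed) function:

    import pandas as pd
    import dask.dataframe as dd

    def add_total(pdf):
        # operates on an ordinary pandas partition
        pdf = pdf.copy()
        pdf["total"] = pdf["a"] + pdf["b"]
        return pdf

    ddf = dd.from_pandas(pd.DataFrame({"a": range(6), "b": range(6)}), npartitions=2)
    result = ddf.map_partitions(add_total).compute()   # a regular pandas DataFrame
    print(result)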
Did you check to see they're not just empty strings (\"\")?\ndropna is the proper method, unless you want to also drop cells with empty strings.\nPlease elaborate, thank you","Q_Score":0,"Tags":"python,pandas,dataframe,text","A_Id":70084750,"CreationDate":"2021-11-23T16:43:00.000","Title":"Trying to remove empty cells in a column in csv from my data using pandas dataframe","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying \"from sklearn.linear_model import SGDOneClassSVM\"\nbut it doesn't work and raises an import error \"ImportError: cannot import name 'SGDOneClassSVM' from 'sklearn.linear_model\"","AnswerCount":1,"Available Count":1,"Score":-0.1973753202,"is_accepted":false,"ViewCount":204,"Q_Id":70087711,"Users Score":-1,"Answer":"Upgrade sklearn package using the command:\npip install --upgrade scikit-learn","Q_Score":0,"Tags":"python","A_Id":71475443,"CreationDate":"2021-11-23T20:57:00.000","Title":"problem with importing SGDOneClassSVM from sklearn.linear_model","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a greyscale images dataset that I want to feed into a neural network.\nx_train_grey.shape is (32, 32, 73257) so I understand it is (dimension_x, dimension_y, batch_size). Because the images are greyscale, there is only one \"depth\" dimension.\nHowever to feed this data to the neural network it needs to have this shape:(batch_size, dimension_x, dimension_y). With batch_szie at the beginning.\nHow do I reshape it to this format, so that batch_szie comes before the x, y images dimensions?\nOnce this is done, I expect to be able to pass this into a neural network (the first layer being Flatten()), like so:\nFlatten(input_shape=(32, 32, 1)),.\nCheers!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":26,"Q_Id":70090376,"Users Score":0,"Answer":"Solved! By passing the correct shape into np.reshape().\nI really should get to know numpy better, before getting into deep learning.","Q_Score":0,"Tags":"python,tensorflow,neural-network,shapes,flatten","A_Id":70090445,"CreationDate":"2021-11-24T03:25:00.000","Title":"How to change the order of dimensions of images data's shape for a neural network?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a python dataset that I have managed to take a sample from and put in a second dataset.\nAfter that I will need to produce another sample from the original dataset but I do not want any of the first sample to come up again.\nIdeally this would need any flag would only be there for a year so it can then be sampled again after that time has elapsed.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":29,"Q_Id":70094650,"Users Score":0,"Answer":"Denote your original dataset with A. You generate a subset of A, denote it with B1. You can then create B2 from A_leftover = A \\ B1, where \\ denotes the set difference. You can then generate B3, B4, ... 
B12 from A_leftover, where Bi is generated from A_leftover = B(i-1).\nIf you want to put back B1 in the next year, A_leftover = A_leftover \\ B12 U B1, and from this, you can generate the subset for B13 (or you can denote it with B1 as 13%12 = 1). So after 12, you can say you can generate Bi from A_leftover = A_leftover \\ B(i-1) U B(i-11). Or you can use this formula from the very beginning, defining B(-i) = empty set for every i in [0,1,2,...,10].","Q_Score":0,"Tags":"python,random,dataset","A_Id":70130438,"CreationDate":"2021-11-24T10:44:00.000","Title":"How I do I get a second sample from a dataset in Python without getting duplication from a first sample?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"So, here I have a numpy array, array([[-1.228, 0.709, 0. ], [ 0. , 2.836, 0. ], [ 1.228, 0.709, 0. ]]). What my plan is to perform addition to all the rows of this array with a vector (say [1,2,3]), and then append the result onto the end of it i.e the addition of another three rows? I want to perform the same process, like 5 times, so that the vector is added only to the last three rows, which were the result of the previous calculation(addition). Any suggestions?","AnswerCount":3,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":48,"Q_Id":70098299,"Users Score":1,"Answer":"For the addition part, just write something like a[0]+[1,2,3] (where a is your array), numpy will perform addition element-wise as expected.\nFor appending a=np.append(a, [line], axis=1) is what you're looking for, where line is the new line you want to add, for example the result of the previous sum.\nThe iteration can be easily repeated selecting the last three rows thanks to negative indexing: if you use a[-1], a[-2] and a[-3] you'll be sure to pick the last three lines","Q_Score":0,"Tags":"python,numpy","A_Id":70098568,"CreationDate":"2021-11-24T14:56:00.000","Title":"Iterate over rows, and perform addition","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need to choose the best model to predict the traffic in a determinated hour.\nI think cluster is not for this problme, but i still don't know what would be the best option. If it's vector machine, decision tree, linear regression or Artificial Neural Networks.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":14,"Q_Id":70109670,"Users Score":0,"Answer":"I think this depends mostly on your data. How much data do you have? If you only have few examples, I would go with VSM (Assuming you mean Support Vector Machines?). 
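A sketch of the iterate-add-append answer above using the question's array; note that appending new rows needs axis=0 (axis=1 would try to add columns):

    import numpy as np

    a = np.array([[-1.228, 0.709, 0.0],
                  [ 0.0,   2.836, 0.0],
                  [ 1.228, 0.709, 0.0]])
    v = np.array([1, 2, 3])

    for _ in range(5):
        new_rows = a[-3:] + v               # vector added element-wise to the last three rows
        a = np.append(a, new_rows, axis=0)  # append them as three new rows
    print(a.shape)   # (18, 3)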
If you have a lot of examples I personally would go with a neural network.\nI guess you could even get a nice representation of the prolem if you use a recurrent network.","Q_Score":0,"Tags":"python-3.x","A_Id":70115394,"CreationDate":"2021-11-25T10:41:00.000","Title":"Predict future with models \/ VSM \/decision tree\/ linear regression\/ Artificial Neural Networks","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am newbie to data science and I am bit confused about working of map and applymap in pandas. As when I executed code df.applymap(lambda f: f*2) and df.apply(lambda f: f*2) provided same result. But when I change code for both which were df.applymap(lambda f: f*2 if f < 7 else f) and df.apply(lambda f: f*2 if f < 7 else f) then apply method caused an error. Upon my understanding, I came to conclusion that applymap works for each scalar value where apply does not work for each scalar value but instead it executes operation for whole column or series.\nKindly veterans help me out here if I am correct or not. Thanks in advance\nNote: df in code refers to whole DataFrame not series.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":23,"Q_Id":70111031,"Users Score":0,"Answer":"Yes, apply works on a row or a column basis of a DataFrame, applymap works element-wise on a DataFrame.","Q_Score":1,"Tags":"python,pandas,dataframe","A_Id":70111268,"CreationDate":"2021-11-25T12:21:00.000","Title":"Working of map vs applymap in pandas, python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am given an array of elements and the sum K, and I am supposed to find a subarray (doesn\u2019t have to be contiguous) whose sum is equal to K.\nFor example:\nInput: [1, 9, 3, 2, 21], 30\nOutput: [9, 21]\nDo I need to use backtracking or is there another algorithm using dynamic programming for example?","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":84,"Q_Id":70111874,"Users Score":1,"Answer":"If it's not a big array you could use brute force: 2^n solutions!","Q_Score":0,"Tags":"python,algorithm,subset-sum","A_Id":70111955,"CreationDate":"2021-11-25T13:21:00.000","Title":"How do you find a subarray with the given sum?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using stableBaselines3 based on Open AI gym. The agent, in a toy problem version, tries to learn a given (fixed) target point (x and y coordinates within [0,31] and [0,25] respectively) on a screen.\nMy action space would thus be a box (Version A): self.action_space = ((gym.spaces.Box(np.array([0,0]),np.array([31,25])))). The reward obtained by the agent is minus the manhattan distance between the chosen point and target (the simulation terminates straight away). But when running the PPO algorithm, the agent seems to try only coordinates that are within the Box [0,0], [2,2] (ie coordinates are never bigger than 2). Nothing outside this box seems ever to be explored. 
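A brute-force sketch for the subset-sum question above, along the lines of the 2^n suggestion in the answer:

    from itertools import combinations

    def subset_with_sum(nums, target):
        for r in range(1, len(nums) + 1):
            for combo in combinations(nums, r):
                if sum(combo) == target:
                    return list(combo)
        return None

    print(subset_with_sum([1, 9, 3, 2, 21], 30))   # [9, 21]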
The chosen policy is not even the best point within that box (typically (2,2)) but a random point within it.\nWhen I normalize to [0,1] both axes, with (Version B) self.action_space = ((gym.spaces.Box(np.array([0,0]),np.array([1,1])))), and the actual coordinates are rescaled (the x-action is multiplied by 31, the y- by 25) the agent does now explore the whole box (I tried PPO and A2C). However, the optimal policy produced corresponds often to a corner (the corner closest to the target), in spite of better rewards having been obtained during training at some point. Only occasionally one of the coordinates is not a boundary, never both together.\nIf I try to discretize my problem: self.action_space = gym.spaces.MultiDiscrete([2,32,26]), the agent correctly learns the best possible (x,y) action (nothing in the code from Version A changes except the action space). Obviously I'd like to not discretize.\nWhat are possible reasons for that whole behavior (not exploring, considering only\/mostly corners, moving away from better rewards)? The rest of the code is too unwieldy to paste here, but does not change between these scenarios except for the action space, so the fact that the discretized versions works does not fit with a bug with rewards calculations.\nFinally, my action space would need to have one discrete component (whether the agent has found the target or will continue looking) on top of the two continuous components (x and y). The reward of a non-decisive fixation would be a small penalty, the reward of the final decision as above (the better the closer to the actual target). self.action_space = gym.spaces.Tuple((gym.spaces.Discrete(2),gym.spaces.Box(np.array([0,0]),np.array([31,25]),dtype=np.float32))) should be what I'm looking for, but Tuple is not supported. Is there any workaround? What do people do when they need both continuous and discrete components? I thought of making the binary component into a float, and transforming it to 0\/1 below\/above a certain cutoff, but that can't lend itself too well to learning.","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":70,"Q_Id":70115351,"Users Score":2,"Answer":"For posterity, stable_baselines seems to be sampling actions in mysterious ways. If the action space is defined as [0,1] or [-1,-1], stable_baselines will indeed sample that space. 
But if the action space is, in my case, [0,31], then the actions sampled are roughly within [0,3] or [0,4], with most values being within [0,1].\nSo the workaround seems to be to use Boxes using [0,1] or [-1,-1] for the action_space, and rescale the action returned by whatever SB3 algorithm you're using.","Q_Score":1,"Tags":"python,reinforcement-learning,openai-gym,stable-baselines","A_Id":70451460,"CreationDate":"2021-11-25T17:39:00.000","Title":"stablebaselines algorithms exploring badly two-dimension box in easy RL problem","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to import KNeihgborsClassifier from 'sklearn.neighbors' but I have this error ImportError: cannot import name 'KNeihgborsClassifier' from 'sklearn.neighbors' (C:\\Users\\lenovo\\anaconda3\\lib\\site-packages\\sklearn\\neighbors_init_.py)","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":164,"Q_Id":70135105,"Users Score":0,"Answer":"You are importing KNeihgborsClassifier which is wrong, change it to:\nfrom sklearn.neighbors import KNeighborsClassifier","Q_Score":0,"Tags":"python,web-crawler","A_Id":70135169,"CreationDate":"2021-11-27T13:14:00.000","Title":"I'm trying to import KNeihgborsClassifier from 'sklearn.neighbors'","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a task and the output should be a \"1-D np.array of dimension m\" and I don't understand how a 1-D array can have m Dimension, it has 1 per definition ?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":38,"Q_Id":70140599,"Users Score":1,"Answer":"The word dimension can mean multiple things, in this case it means the size\/length of the singular dimension, i.e. you can say an array has dimensions 2x2.\nTherefore, a 1D array of dimension m is equivalent to a list of length m.","Q_Score":0,"Tags":"python,numpy","A_Id":70140614,"CreationDate":"2021-11-28T04:13:00.000","Title":"What is a 1-D np.array of dimension m?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have speed data of many particles to which I want to fit the Maxwellian curve. I am trying to use the fit method from scipy.stats.maxwell to fit to my data and extract the temperature of the system from that.\nFrom the documentation, I am unable to put my finger on what the parameters that we are trying to fit exactly are and hence how they relate to temperature.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":64,"Q_Id":70143257,"Users Score":1,"Answer":"It's related to scale.\nYou also likely want to set floc=0 in maxwell.fit(...) 
\nCf the argument of the exponential: with non-zero scale it's -x**2 \/ scale**2 \/ 2, which you compare to the expected mv**2 \/ 2kT.","Q_Score":4,"Tags":"python,scipy,curve-fitting,model-fitting,scipy.stats","A_Id":70150380,"CreationDate":"2021-11-28T12:09:00.000","Title":"How to use scipy.stats.maxwell to find temperature?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"i have LogisticRegressionCv model it's .pkl file and import data as images but i don't know how to get it on flutter please help me If you know how or if I must to convert my model to other file formats.\nplease help me.\nThank you for your help.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":49,"Q_Id":70144848,"Users Score":0,"Answer":"as you've trained your model in python and stored it in pkl file. One method is in your flutter background, call python3 predict_yourmodel.py your_model_params and after the run, it will give your the model result.\nAnother way is implement a logisticRegressionCv in Flutter as it is a simple model, and easily be implemented. you can store all your params and l1 or l2 etc super-params in a txt instead of pkl file for readility.","Q_Score":0,"Tags":"python,flutter,classification","A_Id":70144960,"CreationDate":"2021-11-28T15:28:00.000","Title":"How to implement LogisticRegressionCv on flutter","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Since I don't have pure knowledge of the pandas library, I just want to explore the range of functions that pandas library offers to users.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":39,"Q_Id":70149159,"Users Score":0,"Answer":"use dir(pandas)\nBut you'd better go to the official documentation.","Q_Score":0,"Tags":"python,pandas,jupyter-notebook","A_Id":70149172,"CreationDate":"2021-11-29T02:25:00.000","Title":"Is there any command which can show what kinds of function are in-built in pandas or matplotlib?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Getting this error in python:\nValueError: cannot reshape array of size 14333830 into shape (14130,1,1286),\nHow do I solve this?\nThis is the code generating the error:\ndata_train1=data_train.reshape(14130,1,1286)","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":49,"Q_Id":70149594,"Users Score":0,"Answer":"For doing reshaping, your new shape should match the previous shape. If you multiply 14130 * 1286, you get 18171180 which is obviously not the same as 14333830. 
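Putting the scale-to-temperature relation from the maxwell answer above into code; the data here are synthetic and the particle mass is an assumption for the sketch:

    import numpy as np
    from scipy import stats

    speeds = stats.maxwell.rvs(scale=300.0, size=10000)   # synthetic speed data, m/s
    loc, scale = stats.maxwell.fit(speeds, floc=0)

    # scale**2 = k*T/m  (compare exp(-x**2 / (2*scale**2)) with exp(-m*v**2 / (2*k*T)))
    m = 6.63e-26        # kg, e.g. an argon atom (assumption for the sketch)
    k = 1.380649e-23    # Boltzmann constant, J/K
    T = m * scale**2 / k
    print(T)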
So you must write something correct.","Q_Score":0,"Tags":"python-3.x","A_Id":70208352,"CreationDate":"2021-11-29T03:57:00.000","Title":"ValueError: cannot reshape array of size 14333830 into shape (14130,1,1286), how do I solve this?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a simple table which the datetime is formatted correctly on.\n\n\n\n\nDatetime\nDiff\n\n\n\n\n2021-01-01 12:00:00\n0\n\n\n2021-01-01 12:02:00\n2\n\n\n2021-01-01 12:04:00\n2\n\n\n2021-01-01 12:010:00\n6\n\n\n2021-01-01 12:020:00\n10\n\n\n2021-01-01 12:022:00\n2\n\n\n\n\nI would like to add a label\/batch name which increases when a specific threshold\/cutoff time is the difference. The output (with a threshold of diff > 7) I am hoping to achieve is:\n\n\n\n\nDatetime\nDiff\nBatch\n\n\n\n\n2021-01-01 12:00:00\n0\nA\n\n\n2021-01-01 12:02:00\n2\nA\n\n\n2021-01-01 12:04:00\n2\nA\n\n\n2021-01-01 12:010:00\n6\nA\n\n\n2021-01-01 12:020:00\n10\nB\n\n\n2021-01-01 12:022:00\n2\nB\n\n\n\n\nBatch doesn't need to be 'A','B','C' - probably easier to increase numerically.\nI cannot find a solution online but I'm assuming there is a method to split the table on all values below the threshold, apply the batch label and concatenate again. However I cannot seem to get it working.\nAny insight appreciated :)","AnswerCount":3,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":49,"Q_Id":70166709,"Users Score":3,"Answer":"Since True and False values represent 1 and 0 when summed, you can use this to create a cumulative sum on a boolean column made by df.Diff > 7:\ndf['Batch'] = (df.Diff > 7).cumsum()","Q_Score":1,"Tags":"python,pandas,dataframe","A_Id":70166784,"CreationDate":"2021-11-30T09:01:00.000","Title":"Pandas create a column iteratively - increasing after specific threshold","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Consider a vector [0 1 2] and a matrix of size 3 x n. How can I multiply each element of the vector with the corresoponding row of the matrix. Each element of row 0 should be multiplied with 0, each element of row 1 should be multiplied with 1 and so on?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":49,"Q_Id":70176264,"Users Score":1,"Answer":"I assume you're using numpy. You can use matrix *= vector.reshape(-1, 1). This will convert the vector to a column, then multiply the rows.","Q_Score":0,"Tags":"python,numpy","A_Id":70176337,"CreationDate":"2021-11-30T21:25:00.000","Title":"element wise multiplication vector with rows of matrix","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a python script where I'm using pandas for transformations\/manipulation of my data. I know I have some \"inefficient\" blocks of code. My question is, if pyspark is supposed to be much faster, can I just replace these blocks using pyspark instead of pandas or do I need everything to be in pyspark? 
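A quick check of the element-wise row multiplication answer above, with a hypothetical 3 x 4 matrix:

    import numpy as np

    vector = np.array([0, 1, 2])
    matrix = np.arange(12).reshape(3, 4)

    matrix = matrix * vector.reshape(-1, 1)   # row i is multiplied by vector[i]
    print(matrix)
    # [[ 0  0  0  0]
    #  [ 4  5  6  7]
    #  [16 18 20 22]]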
If I'm in Databricks, how much does this really matter since it's already on a spark cluster?","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":633,"Q_Id":70177467,"Users Score":2,"Answer":"If the data is small enough that you can use pandas to process it, then you likely don't need pyspark. Spark is useful when you have such large data sizes that it doesn't fit into memory in one machine since it can perform distributed computation. That being said, if the computation is complex enough that it could benefit from a lot of parallelization, then you could see an efficiency boost using pyspark. I'm more comfortable with pyspark's APIs than pandas, so I might end up using pyspark anyways, but whether you'll see an efficiency boost depends a lot on the problem.","Q_Score":0,"Tags":"python,apache-spark,pyspark,databricks","A_Id":70178242,"CreationDate":"2021-11-30T23:41:00.000","Title":"Databricks - Pyspark vs Pandas","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a python script where I'm using pandas for transformations\/manipulation of my data. I know I have some \"inefficient\" blocks of code. My question is, if pyspark is supposed to be much faster, can I just replace these blocks using pyspark instead of pandas or do I need everything to be in pyspark? If I'm in Databricks, how much does this really matter since it's already on a spark cluster?","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":633,"Q_Id":70177467,"Users Score":0,"Answer":"Pandas run operations on a single machine whereas PySpark runs on multiple machines. If you are working on a Machine Learning application where you are dealing with larger datasets, PySpark is the best fit which could process operations many times(100x) faster than Pandas.\nPySpark is very efficient for processing large datasets. But you can convert spark dataframe to Pandas dataframe after preprocessing and data exploration to train machine learning models using sklearn.","Q_Score":0,"Tags":"python,apache-spark,pyspark,databricks","A_Id":70252024,"CreationDate":"2021-11-30T23:41:00.000","Title":"Databricks - Pyspark vs Pandas","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a scenario in which I have a peptide frame having 9 AA. I want to generate all possible peptides by replacing a maximum of 3 AA on this frame ie by replacing only 1 or 2 or 3 AA.\nThe frame is CKASGFTFS and I want to see all the mutants by replacing a maximum of 3 AA from the pool of 20 AA.\nwe have a pool of 20 different AA (A,R,N,D,E,G,C,Q,H,I,L,K,M,F,P,S,T,W,Y,V).\nI am new to coding so Can someone help me out with how to code for this in Python or Biopython.\noutput is supposed to be a list of unique sequences like below:\nCKASGFTFT, CTTSGFTFS, CTASGKTFS, CTASAFTWS, CTRSGFTFS, CKASEFTFS ....so on so forth getting 1, 2, or 3 substitutions from the pool of AA without changing the existing frame.","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":122,"Q_Id":70178355,"Users Score":1,"Answer":"Let's compute the total number of mutations that you are looking for.\nSay you want to replace a single AA. 
Firstly, there are 9 AAs in your frame, each of which can be changed into one of 19 other AA. That's 9 * 19 = 171\nIf you want to change two AA, there are 9c2 = 36 combinations of AA in your frame, and 19^2 permutations of two of the pool. That gives us 36 * 19^2 = 12996\nFinally, if you want to change three, there are 9c3 = 84 combinations and 19^3 permutations of three of the pool. That gives us 84 * 19^3 = 576156\nPut it all together and you get 171 + 12996 + 576156 = 589323 possible mutations. Hopefully, this helps illustrate the scale of the task you are trying to accomplish!","Q_Score":3,"Tags":"python,data-science,computer-science,bioinformatics,biopython","A_Id":70178766,"CreationDate":"2021-12-01T02:11:00.000","Title":"Generate the all possible unique peptides (permutants) in Python\/Biopython","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"When I try to install sklearn-crfsuite, I get the following error:\n\nfatal error C1083: Cannot open include file: 'basetsd.h': No such file\nor directory\n\ntrying this command pip install sklearn-crfsuite, also installed Microsoft visual C++ 2019 and the required libraries.\nPlease let me know if there is any solution to this, do I need to set any variable in the system path?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":139,"Q_Id":70178642,"Users Score":0,"Answer":"If I understand your problem correctly, add the path to your header file in your project using Property->C\/C++->General->Additional Include Directories.\nIf you want this to apply to all your projects use the Property manager to add this path.\nOf course make sure the header exists.","Q_Score":0,"Tags":"python-3.x,visual-c++","A_Id":70232059,"CreationDate":"2021-12-01T03:03:00.000","Title":"fatal error C1083: Cannot open include file: 'basetsd.h': No such file or directory","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"ctx=con.cursor()\nctx.execute(select col1 from table1)\nresult=ctx.fetchall()\ndata=pd.DataFrame(result)\ndata.columns['field']\nfor index,row in data:\nupdate table2 set col2='some value' where col1=str(row['field'])","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":97,"Q_Id":70190380,"Users Score":0,"Answer":"Solution to this is:\nInsert the data into some transient table and then then use that table for update.\nFor insert :\ndata = panda.DataFrame(result)\njust use data.to_csv('file complete path',index=False,Header=True)\nusing put command place the file in internal stage and from there use Copy command to copy data into transient table.\nlater on you can use this table to update your target table.","Q_Score":1,"Tags":"python,dataframe,snowflake-schema","A_Id":70471719,"CreationDate":"2021-12-01T19:46:00.000","Title":"Updating snowflake table row by row using panda dataframe (iterrows()) taking lot of time .Can some one give better approach to speed up updates?","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"after model training, spacy has generated 
model\\model-best and model\\model-last folders. What's the difference between the two models and which one should be used for predictions?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":220,"Q_Id":70195392,"Users Score":1,"Answer":"model-best is the model that got the highest score on the dev set. It is usually the model you would want to use.\nmodel-last is the model trained in the last iteration. You might want to use it if you resume training.","Q_Score":1,"Tags":"python,machine-learning,nlp,spacy,spacy-3","A_Id":70209537,"CreationDate":"2021-12-02T07:13:00.000","Title":"difference between model-best and model-last in spacy","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am using compute_face_descriptor function in dlib which is the function of dlib.face_recognition_model_v1('dlib_face_recognition_resnet_model_v1.dat').\nThere is an option to set \"num_jitters\". I set \"num_jitters\"=10, but the output embedding I am getting different on subsequent runs. I have tried setting seed using np.random.seed(43), but still, the output changes on subsequent runs\nIs there a way to set seed in this function using \"num_jitters\"=10 so that the output embedding doesn't change on subsequent runs?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":58,"Q_Id":70196893,"Users Score":0,"Answer":"\"num_jitters\". means how many times dlib will re-sample your face image each time randomly moving it a little bit. That's why you are getting different embeddings.","Q_Score":1,"Tags":"python-3.x,artificial-intelligence,face-recognition,dlib","A_Id":70202072,"CreationDate":"2021-12-02T09:23:00.000","Title":"How to set seed to compute_face_descriptor function in dlib?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"during i trained my own model, i have a simple question.\norigianl input image shape is (height : 434, width : 636), and i used resized image(416 x 416) for my train model(Unet++).\nI wonder if it is right to resize the test image when inference step, How can I resize the model output to the original image size when comparing test output with original test image.\n---------process\noriginal input size : (434, 636)\ntrain input size: (416, 416)\n\ninference\ntest img -> resize (416, 416) -> test model -> test output(416,416) -> comparing test output with test img","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":39,"Q_Id":70198259,"Users Score":0,"Answer":"It's absolutely right to resize the input image to the model input size else, it will generate an error if you feed an image of different size to the model. Coming towards your question, you can solve this either by rescaling the model output to the original size of your input images. A simple technique can be resizing the masks but there can be better ways. OR\nYou can resize your input images and their Ground Truths (masks) to the model size, and so you won't need to rescale the model's output. 
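A short sketch of the "resize the prediction back" route described just above, assuming OpenCV is available; the array here is a random placeholder for a real model output.

import cv2
import numpy as np

orig_h, orig_w = 434, 636                                  # original image size from the question
pred_mask = np.random.rand(416, 416).astype(np.float32)    # placeholder model output at training size

# cv2.resize expects (width, height); nearest-neighbour keeps mask labels crisp,
# while bilinear would be a reasonable choice for soft probability maps.
restored = cv2.resize(pred_mask, (orig_w, orig_h), interpolation=cv2.INTER_NEAREST)
print(restored.shape)                                      # (434, 636)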
I hope that answers the question !!!","Q_Score":0,"Tags":"python,computer-vision,image-segmentation","A_Id":70200445,"CreationDate":"2021-12-02T11:04:00.000","Title":"Question about Inference for Image segmentation","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to get the molecules from the SMILES using rdkit in python. The SMILES I used was downloaded from the drugbank.\nHowever, when I using the function Chem.MolFromSmiles, some SMILES would report but some wouldn't:\nExplicit valence for atom # 0 N, 4, is greater than permitted.\nI found some explanation about this problem: it is because the SMILES generated a invalid molecule that doesn't exist in real world. But I am not a chemistry student.... So anyone know how to fix this?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":480,"Q_Id":70198529,"Users Score":2,"Answer":"Your SMILES string would appear to have a neutral 4-co-ordinate nitrogen atom in it, which doesn't exist in real molecules. 4-co-ordinate nitrogen atoms have a positive charge, eg [N+] in a SMILES string.","Q_Score":0,"Tags":"python,chemistry,rdkit","A_Id":70233778,"CreationDate":"2021-12-02T11:23:00.000","Title":"Problems encountered when using RDKIT to convert SMILES to mol","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to write several Panda Dataframes into a SQL database. The dataframes are generated in different processes using the multiprocessing library.\nEach dataframe should get its own trial number when it is written into the database. Can I solve this using SQL autoincrement or do I have to create a counter variable in the Python code.\nIf I use the function pandas.DataFrame.to_sql and set an index as autoincrement, I get a consecutive index for each row.\nHere is an example how it should look like\n\n\n\n\ntrial number\ntimestamp\nvalue\n\n\n\n\n1\ntime1\nvalue1\n\n\n1\ntime2\nvalue2\n\n\n1\ntime_n\nvalue_n\n\n\n2\ntime1\nvalue1\n\n\n2\ntime2\nvalue2\n\n\n2\ntime3\nvalue3\n\n\n2\ntime_n\nvalue_n\n\n\n\n\nI use Python 3.9 and MariaDb as Database. I hope for help. Thanks","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":34,"Q_Id":70199234,"Users Score":0,"Answer":"You should have a separate trials table in your database where you cspture the details of each trial. The trials table will have an auto incremented id field.\nBefore writing your dataframes to your values table, each process inserts a record into the trials table and get the generated auto increment value.\nThen use this value to set the trial number column when you dump the frame to your table.","Q_Score":0,"Tags":"python,mysql,pandas,dataframe","A_Id":70199692,"CreationDate":"2021-12-02T12:11:00.000","Title":"Write Panda Dataframes to SQL. 
Each data frame must be identifiable by a trial number","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"df['Current Ver'] = df['Current Ver'].astype(str).apply(lambda x : x.replace('.', ',',1).replace('.', '').replace(',', '.',1)).astype(float)\nSlowly learning lambda command, my understanding of this line of code is:\n\nChange dataframe type to str\nApply lambda with one perimeter x\nReplace all the string format . to , , (I don't understand what does 1 stands for, have done research prior asking, didn't find clue)\nReplace all the string format . to null value\nReplace all the string format , to . , (again still have no clue what does 1 stands for in this case)\nChange dataframe type to float\n\nPlease help me better understanding this line of code, thank you","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":36,"Q_Id":70207568,"Users Score":2,"Answer":"This replaces the first . in the string with a ,, removes the remaining periods, changes the first , back to a ., then converts the result to a float for the 'Current Ver' column in the dataframe.","Q_Score":0,"Tags":"python,dataframe,lambda,replace,feature-engineering","A_Id":70207602,"CreationDate":"2021-12-02T23:07:00.000","Title":"How to understand this lambda with 3 .replace() line of code","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Initially, my dataframe had a Month column containing numbers representing the months.\n\n\n\n\nMonth\n\n\n\n\n1\n\n\n2\n\n\n3\n\n\n4\n\n\n\n\nI typed df[\"Month\"] = pd.to_datetime(df[\"Month\"]) and I get this...\n\n\n\n\nMonth\n\n\n\n\n970-01-01 00:00:00.0000000001\n\n\n1970-01-01 00:00:00.000000002\n\n\n1970-01-01 00:00:00.000000003\n\n\n1970-01-01 00:00:00.000000004\n\n\n\n\nI would like to just retain just the dates and not the time. Any solutions?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":46,"Q_Id":70212361,"Users Score":0,"Answer":"get the date from the column using df['Month'].dt.date","Q_Score":0,"Tags":"python,pandas,dataframe,datetime","A_Id":70212458,"CreationDate":"2021-12-03T09:54:00.000","Title":"Date and Time Format Conversion in Pandas, Python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've installed the native version of python3 through miniforge and the intel version of Spyder through homebrew. Everything is working fine with one exception, my plots seem to work with the \"graphics backend\" \"inline\" option. When I try to select the \"automatic\" option, or any other option rather than inline, the IPython doesn't initiate. 
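A step-by-step trace of the replace chain explained in the 'Current Ver' answer above, on a made-up version string; the third argument of str.replace is the maximum number of replacements, which is what the 1 stands for.

s = "4.1.2"                          # hypothetical 'Current Ver' value

step1 = s.replace(".", ",", 1)       # '4,1.2' - protect the first dot by turning it into a comma
step2 = step1.replace(".", "")       # '4,12'  - strip every remaining dot
step3 = step2.replace(",", ".", 1)   # '4.12'  - turn the protected comma back into a dot
value = float(step3)                 # 4.12

print(step1, step2, step3, value)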
Has anyone had the same problem?\nKind regards,","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":145,"Q_Id":70214081,"Users Score":0,"Answer":"(Spyder maintainer here) This problem is fixed in our 5.2.0 version, released in November 2021.","Q_Score":0,"Tags":"python,python-3.x,macos,spyder,apple-m1","A_Id":70327476,"CreationDate":"2021-12-03T12:08:00.000","Title":"Spyder \"Graphics backend\" \"automatic\" option not working on M1 macbook","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have df = pd.concat(dict_of_df, axis=0) and sometimes [rarely] it might be the case that all of the df in the dictionary are empty in which case I would like Pandas to cheerfully return an empty dataframe. But instead I get a ValueError.\nI can write a loop to check for the length of each df before calling concat, but would prefer to not always do that, so at the moment I just embed the concat into a try\/except... which doesnt make be really happy either because if there was a \"true\" ValueError I would like to have know it. So then I could do a try\/except loop and if exception is thrown then do a count of all the dicts and ... ugh. This is getting crazy.\nIs there something more clean? Thanks.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":67,"Q_Id":70214336,"Users Score":0,"Answer":"Sorry, I am going to withdraw the question. I now realize that\npd.concat([None,None]) produces the ValueError, whereas as noted above pd.concat(pd.DataFrame(),pd.DataFrame()) does exactly what you would hope. Also pd.concat([None,pd.DataFrame()]) is fine too. So it's not really fair of me to complain about concat. I need to stop feeding my routine non-existent datasets !\nThanks for feedback","Q_Score":1,"Tags":"python,pandas,concatenation","A_Id":70215836,"CreationDate":"2021-12-03T12:28:00.000","Title":"Can I avoid a ValueError concatenating empty dataframes?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"in pandas the inplace parameter make modification on the reference but I know in python data are sent by value not by reference i want to know how this is implemented or how this work","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":36,"Q_Id":70214493,"Users Score":1,"Answer":"Python\u2019s argument passing model is neither \u201cPass by Value\u201d nor \u201cPass by Reference\u201d but it is \u201cPass by Object Reference\u201d\n\nWhen you pass a dictionary to a function and modify that dictionary inside the function, the changes will reflect on the dictionary everywhere.\nHowever, here we are dealing with something even less ambiguous. When passing inplace=True to a method call on a pandas object (be it a Series or a DataFrame), we are simply saying: change the current object instead of getting me a new one. Method calls can modify variables of the instances on which they were called - this is independent of whether a language is \"call by value\" or \"call by reference\". The only case in which this would get tricky is if a language only had constants (think val) and no variables (think var) - think purely functional languages. 
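A tiny sketch of the situation from the concat question above: concatenating genuinely empty DataFrames is fine, and filtering out None entries avoids the ValueError the asker eventually traced the problem to.

import pandas as pd

dict_of_df = {"a": pd.DataFrame(), "b": None, "c": pd.DataFrame()}   # hypothetical worst case

frames = [df for df in dict_of_df.values() if df is not None]
# pd.concat raises if given an empty list, so guard that case explicitly.
result = pd.concat(frames, axis=0) if frames else pd.DataFrame()
print(result.empty)    # True - an empty DataFrame, no exception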
Then, it's true - you can only return new objects and can't modify any old ones. In practice, though, even in purest of languages you can find ways to update records in-place.","Q_Score":0,"Tags":"python,python-3.x,pandas,mutability,call-by-value","A_Id":70214879,"CreationDate":"2021-12-03T12:42:00.000","Title":"the inplace parameter in pandas how it works?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have 2 columns of data with the first column being product codes (all filled out) and the second column with product description.\nThe first column has all the product codes filled out but there are some rows where the product description (second column) is missing.\nFor example row 200 has a product code of 145 but the description on that row is empty (NaN). However, there are other rows with product code 145 where the description exists, which is \"laptop\". I would like to have the description of row 200 to be filled with \"laptop\" because that's the description for that product code.\nI want to find a solution where I can fill out all NaN values in the second column (product description) based on the first column (product code).","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":55,"Q_Id":70222684,"Users Score":0,"Answer":"First, decide on a function that takes descriptions and picks out one of them. You could use min, max, mode, define you own get_desc, etc. Then you can separate the dataframe by product code with groupby and apply whatever function you decided on: df.groupby('product code').apply(get_desc) or df.groupby('product code')['product description'].apply(get_desc) depending on whether get_desc takes a dataframe or column as input. Then you can merge the resulting dataframe with your original dataframe. You can either replace the entire original product description column with the product description column of the groupby output, or have merge create a new column, then fillna the old product description with the new product description.","Q_Score":0,"Tags":"python,pandas","A_Id":70222777,"CreationDate":"2021-12-04T02:51:00.000","Title":"Fill NaN in column 2 with median string based on value in column 1 in Python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to horizontally cut the spectrogram of a wav file into 24 pieces,and after measuring the power of each piece, and finally rank the pieces by power orders what should I do please","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":36,"Q_Id":70223996,"Users Score":0,"Answer":"Could you show some code that you have written to try out the same? It would be easier to help if we have something to build upon and rectify issues, if any.\nAdditionally please try basic image manipulation to do the same. 
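One possible concrete version of the groupby-then-fill recipe sketched in the product-description answer above, with a toy frame and a get_desc that simply takes the first non-missing description per product code; the column names are taken from the question.

import pandas as pd

df = pd.DataFrame({
    "product code": [145, 145, 145, 200, 200],
    "product description": ["laptop", None, None, "mouse", None],
})

def get_desc(descriptions):
    # First non-missing description within the group, if any.
    non_missing = descriptions.dropna()
    return non_missing.iloc[0] if not non_missing.empty else None

lookup = df.groupby("product code")["product description"].apply(get_desc)
df["product description"] = df["product description"].fillna(
    df["product code"].map(lookup)
)
print(df)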
Instead of cutting you could divide the image into N (here 24) regions and analyze them in parallel using multiprocessing.","Q_Score":0,"Tags":"python-3.x,image,wav,spectrogram","A_Id":70224122,"CreationDate":"2021-12-04T08:08:00.000","Title":"cut the spectrogram of a wav file","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to understand why one or two parameters in my Pytorch neural network occasionally become nan after calling optimizer.step().\nI have already checked the gradients after calling .backward() and just before calling the optimizer, and they neither contain nans nor are very large. I am doing gradient clipping, but I don't think that this can be responsible since the gradients still look fine after clipping. I am using single-precision floats everywhere.\nThis behavior happens randomly every hundred thousand epochs or so, and is proving very difficult to debug. Unfortunately the code is too long to reproduce here and I haven't been able to replicate the problem in a smaller example.\nIf anyone can suggest possible issues I haven't mentioned above, that would be super helpful.\nThanks!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":78,"Q_Id":70226432,"Users Score":0,"Answer":"This ended up being ignorance on my part: there were Infs in the gradients that were evading my diagnostic code, as I didn't realize Pytorch's .isnan() method doesn't detect them.","Q_Score":0,"Tags":"python,pytorch","A_Id":70231086,"CreationDate":"2021-12-04T14:19:00.000","Title":"What are the main reasons why some network parameters might become nan after calling optimizer.step in Pytorch?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Can someone help me with transforming the following table using a PYTHON function?\nI need 2 new columns: A \"follower Type\" which will have entries as organic or paid and a \"Follower count\" which has the values corresponding to the type of follower.\nCurrent Table -\n\n\n\n\norg\norganic follower\npaid follower\nstart date\nstop date\n\n\n\n\nOne\n2\n0\n1634169600000\n1634256000000\n\n\nOne\n-1\n0\n1634256000000\n1634342400000\n\n\n\n\nDesired Table -\n\n\n\n\norg\nstart date\nstop date\nFollower Type\nFollower Count\n\n\n\n\nOne\n1634169600000\n1634256000000\nOrganic\n2\n\n\nOne\n1634169600000\n1634256000000\nPaid\n0\n\n\nOne\n1634256000000\n1634342400000\nOrganic\n-1\n\n\nOne\n1634256000000\n1634342400000\nPaid\n0\n\n\n\n\nIf anybody knows how to do this, please do let me know.\nThanks and Cheers!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":26,"Q_Id":70233350,"Users Score":0,"Answer":"Use reindex to change column order\n'''\ncolumn_names = [\"C\", \"A\", \"B\"]\ndf = df.reindex(columns=column_names)\n'''\nLike below you can add columns to existing dataframe\ndf[newcolumn]=formula","Q_Score":0,"Tags":"python,pandas,dataframe,etl","A_Id":70233378,"CreationDate":"2021-12-05T10:23:00.000","Title":"How can I do the following dataframe transformation in Python?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System 
Administration and DevOps":0,"Web Development":0},{"Question":"I have a dataset with more than 50 columns and I'm trying to find a way in Python to make a simple linear regression between each combination of variables. The goal here is to find a starting point in furthering my analysis (i.e, I will dwelve deeper into those pairs that have a somewhat significant R Square).\nI've put all my columns in a list of numpy arrays. How could I go about making a simple linear regression between each combination, and for that combination, print the R square? Is there a possibility to try also a multiple linear regression, with up to 5-6 variables, again with each combination?\nEach array has ~200 rows, so code efficiency in terms of speed would not be a big issue for this personal project.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":81,"Q_Id":70239375,"Users Score":0,"Answer":"This is more of an EDA problem than a python problem. Look into some regression resources, specifically a correlation matrix. However, one possible solution could use itertools.combinations with a group size of 6. This will give you 15,890,700 different options for running a regression so unless you want to run greater than 15 million regressions you should do some EDA to find important features in your dataset.","Q_Score":0,"Tags":"python,pandas,scikit-learn,linear-regression","A_Id":70239812,"CreationDate":"2021-12-05T23:12:00.000","Title":"Automatic Linear\/Multiple Regression in Python with 50+ columns","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I use tf.math.round() but the output still has decimal points (310.0, 210.0 etc)\nIf I use tf.cast(tf.math.round(), dtype=\"int32\"), then I see the error mentioned in the title when calling finish on tornado handler\nHow can I cast to int using tensorflow operations and still be json serielizable","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":37,"Q_Id":70247396,"Users Score":0,"Answer":"You should use tf.int32 instead.","Q_Score":0,"Tags":"python,tensorflow","A_Id":70252694,"CreationDate":"2021-12-06T14:51:00.000","Title":"tf cast leads to TypeError: Object of type 'int32' is not JSON serializable","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I trained my named entity recognizer with spacy. I would like to evaluate it. So I looked at the spacy documentation and came across the scorer function. However, it doesn't seem to work with the IOB format. Do you think there will be a way to use spacy to evaluate my IOB data or am I doomed to transform my data into the format spacy wants?\nThank u very much :)","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":50,"Q_Id":70248250,"Users Score":0,"Answer":"You can't evaluate IOB data directly. You should be able to just use spacy convert to convert it to .spacy data in one step and then use spacy evaluate with that file though. 
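A small sketch of the brute-force pairwise idea discussed in the regression question above (restricted to pairs rather than 6-variable subsets), on synthetic data standing in for the real columns.

import itertools

import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
df = pd.DataFrame(rng.normal(size=(200, 6)), columns=[f"x{i}" for i in range(6)])

results = []
for xcol, ycol in itertools.combinations(df.columns, 2):
    model = LinearRegression().fit(df[[xcol]], df[ycol])
    results.append((xcol, ycol, model.score(df[[xcol]], df[ycol])))   # R^2

# Highest R^2 pairs first - a cheap way to pick starting points for deeper analysis.
for xcol, ycol, r2 in sorted(results, key=lambda t: t[2], reverse=True)[:5]:
    print(f"{ycol} ~ {xcol}: R^2 = {r2:.3f}")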
(And if you trained your model in spaCy then presumably you already did the same conversion with your training and dev data?)","Q_Score":1,"Tags":"python,spacy,named-entity-recognition,evaluation","A_Id":70255261,"CreationDate":"2021-12-06T15:56:00.000","Title":"Python - Is there a way to evaluate a named entity recognizer trained on IOB data using spacy?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to save \/ load data and objects in Python. I usually use pickle to save pandas data frame and custom objects. Recently I had to change python version (from 3.6 to 3.8) and pandas \/ pickle version accordingly. I now have trouble to read previous pickled version. I have found some ways to deal with that (ranging from using some pickle options to reloading \/ rewriting the data).\nHowever I would be interested in a more generic way to save data \/ objects that would be python \/ packages independant. Does such a thing exists (without adding to much weird dependencies) ?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":44,"Q_Id":70258644,"Users Score":0,"Answer":"If you save your data as a .CSV file (depending on what your data looks like) you should'nt get dependencies problem\nIf your data cannot be saved as a csv I think using JSON could also be a solution","Q_Score":0,"Tags":"python,pandas,database,pickle","A_Id":70259168,"CreationDate":"2021-12-07T10:38:00.000","Title":"Save data \/ objects without python \/ pandas \/ pickle dependencies?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"ERROR: Failed building wheel for scikit-image\nFailed to build scikit-image\nERROR: Could not build wheels for scikit-image, which is required to install pyproject.toml-based projects","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":273,"Q_Id":70267226,"Users Score":0,"Answer":"Did you try to upgrade pip ( python -m pip install --upgrade pip )?\nTry installing numpy first too.","Q_Score":0,"Tags":"python","A_Id":70267378,"CreationDate":"2021-12-07T21:35:00.000","Title":"Failed building wheel for scikit-image","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"in google colaboratory using python, I am trying to load model to classify\nI am trying to load keras model using python to classify image, I am getting above error","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":560,"Q_Id":70286727,"Users Score":0,"Answer":"predict_classes is only available for the Sequential class.\nWith the Model class, you can use the predict method which will give you a vector of probabilities and then get the argmax of this vector (with np.argmax(y_pred1,axis=1)).","Q_Score":0,"Tags":"python,tensorflow,keras,google-colaboratory","A_Id":70299600,"CreationDate":"2021-12-09T08:29:00.000","Title":"AttributeError: 'Functional' object has no attribute 'predict_classes'","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and 
APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"In practice, using both [..., :2] and [:2] on np.array([1,2,3]) results in np.array([1,2]). Are there also cases where the result differs when you use an ellipsis like this on an array?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":41,"Q_Id":70291089,"Users Score":1,"Answer":"np.arrays are designed to handle n-dimensional arrays, specified as [rows, columns]In the case of np.array([1, 2, 3]), [:2] and [:, :2] will yield the same result because our array input is 1-dimensional of shape [1, 3], e.g. with 1 row and 3 columns.\nIf we instead input np.array([[1,2,3], [4,5,6]]), e.g. a 2-dimensional array of shape [2, 3], this will change. On this array, if we, e.g., do [:1, :2] we will get array([[1, 2]]) because we are asking for everything up to the first (i.e. the 2nd since we count from zero) row and everything up to the second (i.e. the 3rd) column.\nHope this makes sense.","Q_Score":0,"Tags":"python,numpy","A_Id":70291290,"CreationDate":"2021-12-09T13:51:00.000","Title":"What is the difference between using [..., :] and [:] on a numpy array?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to use an LSTM model to predict the future sales.\nThe data is like the table below.\n\n\n\n\ndate \u00a0 \u00a0 \u00a0\nstore\nfamily \u00a0 \u00a0\nsales\n\n\n\n\n01\/01\/2013\n1 \u00a0 \u00a0\nAUTOMOTIVE\n0 \u00a0 \u00a0\n\n\n01\/01\/2013\n1 \u00a0 \u00a0\nBABY CARE \u00a0\n0 \u00a0 \u00a0\n\n\n01\/01\/2013\n1 \u00a0 \u00a0\nBEAUTY \u00a0 \u00a0\n1 \u00a0 \u00a0\n\n\n.. \u00a0 \u00a0 \u00a0 \u00a0\n. \u00a0 \u00a0\n.. \u00a0 \u00a0 \u00a0 \u00a0\n. \u00a0 \u00a0\n\n\n01\/01\/2013\n2 \u00a0 \u00a0\nAUTOMOTIVE\n0 \u00a0 \u00a0\n\n\n01\/01\/2013\n2 \u00a0 \u00a0\nBABY CARE \u00a0\n0 \u00a0 \u00a0\n\n\n.. \u00a0 \u00a0 \u00a0 \u00a0\n. \u00a0 \u00a0\n.. \u00a0 \u00a0 \u00a0 \u00a0\n. \u00a0 \u00a0\n\n\n01\/01\/2013\n50 \u00a0 \u00a0\nAUTOMOTIVE\n0 \u00a0 \u00a0\n\n\n.. \u00a0 \u00a0 \u00a0 \u00a0\n. \u00a0 \u00a0\n.. \u00a0 \u00a0 \u00a0 \u00a0\n. \u00a0 \u00a0\n\n\n01\/02\/2013\n1 \u00a0 \u00a0\nAUTOMOTIVE\n0 \u00a0 \u00a0\n\n\n01\/02\/2013\n1 \u00a0 \u00a0\nBABY CARE \u00a0\n50 \u00a0 \u00a0\n\n\n.. \u00a0 \u00a0 \u00a0 \u00a0\n. \u00a0 \u00a0\n.. \u00a0 \u00a0 \u00a0 \u00a0\n. \u00a0 \u00a0\n\n\n01\/02\/2013\n2 \u00a0 \u00a0\nAUTOMOTIVE\n500 \u00a0\n\n\n01\/02\/2013\n2 \u00a0 \u00a0\nBABY CARE \u00a0\n0 \u00a0 \u00a0\n\n\n.. \u00a0 \u00a0 \u00a0 \u00a0\n. \u00a0 \u00a0\n.. \u00a0 \u00a0 \u00a0 \u00a0\n. \u00a0 \u00a0\n\n\n01\/02\/2013\n50 \u00a0 \u00a0\nAUTOMOTIVE\n0 \u00a0 \u00a0\n\n\n.. \u00a0 \u00a0 \u00a0 \u00a0\n. \u00a0 \u00a0\n.. \u00a0 \u00a0 \u00a0 \u00a0\n. \u00a0 \u00a0\n\n\n.. \u00a0 \u00a0 \u00a0 \u00a0\n. \u00a0 \u00a0\n.. \u00a0 \u00a0 \u00a0 \u00a0\n. \u00a0 \u00a0\n\n\n12\/31\/2015\n1 \u00a0 \u00a0\nAUTOMOTIVE\n0 \u00a0 \u00a0\n\n\n12\/31\/2015\n1 \u00a0 \u00a0\nBABY CARE \u00a0\n50 \u00a0 \u00a0\n\n\n.. \u00a0 \u00a0 \u00a0 \u00a0\n. \u00a0 \u00a0\n.. \u00a0 \u00a0 \u00a0 \u00a0\n. \u00a0 \u00a0\n\n\n12\/31\/2015\n2 \u00a0 \u00a0\nAUTOMOTIVE\n500 \u00a0\n\n\n12\/31\/2015\n2 \u00a0 \u00a0\nBABY CARE \u00a0\n0 \u00a0 \u00a0\n\n\n.. \u00a0 \u00a0 \u00a0 \u00a0\n. \u00a0 \u00a0\n.. \u00a0 \u00a0 \u00a0 \u00a0\n. 
\u00a0 \u00a0\n\n\n12\/31\/2015\n50 \u00a0 \u00a0\nAUTOMOTIVE\n0 \u00a0 \u00a0\n\n\n.. \u00a0 \u00a0 \u00a0 \u00a0\n. \u00a0 \u00a0\n.. \u00a0 \u00a0 \u00a0 \u00a0\n. \u00a0 \u00a0\n\n\n\n\n\nFor each day, it has 50 stores.\nFor each store, it has different type of family (product). (They are all in perfect order, thank God).\nLast, for each type of family, it has its sales.\n\nHere is the problem.\nThe dimension of input of LSTM model is (Batch_Size, Sequence_Length, Input_Dimension). It is a 3D tensor.\nHowever, in my case, my Input_Dimension is 2D, which is (rows x columns)\nrows: number of rows in one day, which is 1782\ncolumns: number of features, which is 2 (store and family)\nIs there a good way to make my data into a shape which can be fed into a LSTM model?\nThanks a lot!","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":211,"Q_Id":70299742,"Users Score":0,"Answer":"The solution I came up with is to make the whole data in each day to be a long long long sequence.\nSo the dimension will be 1D, and can be fed into the LSTM model.\nBut I don't think this is the optimal solution.\nDoes anyone come up with better answer?\nAppreciate.","Q_Score":1,"Tags":"python,pytorch,lstm,recurrent-neural-network","A_Id":70334001,"CreationDate":"2021-12-10T04:25:00.000","Title":"How to feed a 4D tensor into LSTM model?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a csv file with comments marked by '#'. I want to select only the table part from this and get it into a pandas dataframe. I can just check the '#' marks and the table header and delete them but it will not be dynamic enough. If the csv file is slightly changed it won't work.\nPlease help me figure out a way to extract only the table part from this csv file.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":84,"Q_Id":70302840,"Users Score":0,"Answer":".csv file can't have comment. Then you must delete comment-line manualy. Try start checking from end file, and stop if # in LINE and ';' not in LINE","Q_Score":0,"Tags":"python,pandas,oracle,dataframe,csv","A_Id":70303795,"CreationDate":"2021-12-10T10:11:00.000","Title":"How to extract a table from a csv file generated by Database","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a recurring issue when running even the simplest code using deepface.\nFor information I run it on python 3.9, with a M1 macbook air on OS Monterey 12.0.1\nI can't seem to find any information on how to resolve it, hope anyone can help !\nThank you very much in advance,\nPaul\nfrom deepface import DeepFace\nresult = DeepFace.verify(img1_path = \"photo1.jpg\", img2_path = \"photo2.jpg\")\nobj = DeepFace.analyze(img_path = \"photo1.jpg\", actions = ['age', 'gender', 'race', 'emotion'])","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":326,"Q_Id":70324807,"Users Score":0,"Answer":"I finally found a solution : underlying was an issue with tensor flow. I changed the version I had and replaced it with an M1-compatible version. 
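For the sales-per-store-and-family LSTM question a little further up, here is one possible shaping approach (not the flatten-everything route the self-answer settles on): pivot each (store, family) series into its own column, then window the day axis. The toy frame below is an assumption standing in for the real data.

import numpy as np
import pandas as pd

df = pd.DataFrame({
    "date": pd.to_datetime(["2013-01-01"] * 4 + ["2013-01-02"] * 4 + ["2013-01-03"] * 4),
    "store": [1, 1, 2, 2] * 3,
    "family": ["AUTOMOTIVE", "BABY CARE"] * 6,
    "sales": np.arange(12, dtype=float),
})

# One row per day, one column per (store, family) series.
wide = df.pivot_table(index="date", columns=["store", "family"], values="sales")
values = wide.to_numpy()                      # shape (n_days, n_series) = (3, 4)

# Slice the day axis into overlapping windows: (batch, seq_len, input_dim).
seq_len = 2
windows = np.stack([values[i:i + seq_len] for i in range(len(values) - seq_len + 1)])
print(windows.shape)                          # (2, 2, 4)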
It worked as intented","Q_Score":0,"Tags":"tensorflow,apple-m1,python-3.9,illegal-instruction,deepface","A_Id":70440792,"CreationDate":"2021-12-12T15:10:00.000","Title":"Illegal Instruction : 4 when running deepface","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"if i wanted to train an lstm to predict the next date in a sequence of dates, how would i do that since lstm require a scaled value?\nexample of data:\n\n\n\n\ndate\nnext date\n\n\n\n\n2012-05-12\n2012-05-13\n\n\n2012-05-13\n2012-05-19\n\n\n2012-05-19\n2012-05-20\n\n\n2012-05-20\n2012-05-22\n\n\n2012-05-22\n2012-05-26\n\n\n2012-05-26\n2012-05-27\n\n\n2012-05-27\n2012-05-30\n\n\n2012-05-30\n2012-06-12\n\n\n2012-06-12\n2012-05-19\n\n\n2012-06-19\n2012-06-25","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":145,"Q_Id":70333854,"Users Score":2,"Answer":"You could hand over the date split into three inputs: One would then be the year, the other the month, and the last the day. While normalizing your inputs definitely makes sense, however I would not entirely agree with your \"LSTM requires\".\nDay and month are already limited to a range of values which can be scaled\n\nday (1 - 31)\nmonth (1 - 12)\n\nFor year you need to make an educated assumption based on your application. So that year can then also be transferred to a scaled value. Judging from your data, it might be that year is constant at 2012 and it is not needed to begin with.\n\nyear (2012 - 2013(?))\n\nNote: Ask yourself whether you give the neural network enough system information to be able to predict the next date - meaning, is there already enough of a pattern in your data? Otherwise you might end up training a random predictor.","Q_Score":2,"Tags":"python,datetime,machine-learning,time-series,lstm","A_Id":70333995,"CreationDate":"2021-12-13T11:24:00.000","Title":"How to train a LSTM on a sequence of dates?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"We can basically use databricks as intermediate but I'm stuck on the python script to replicate data from blob storage to azure my sql every 30 second we are using CSV file here.The script needs to store the csv's in current timestamps.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":39,"Q_Id":70345519,"Users Score":1,"Answer":"There is no ready stream option for mysql in spark\/databricks as it is not stream source\/sink technology.\nYou can use in databricks writeStream .forEach(df) or .forEachBatch(df) option. This way it create temporary dataframe which you can save in place of your choice (so write to mysql).\nPersonally I would go for simple solution. 
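A minimal illustration of the split-and-scale idea from the date-sequence answer above; the scaling constants (day/31, month/12, year offset) are simple assumptions rather than anything prescribed there.

import numpy as np
import pandas as pd

dates = pd.to_datetime(pd.Series(["2012-05-12", "2012-05-13", "2012-05-19", "2012-05-20"]))

features = np.column_stack([
    dates.dt.day.to_numpy() / 31.0,            # day of month scaled to (0, 1]
    dates.dt.month.to_numpy() / 12.0,          # month scaled to (0, 1]
    (dates.dt.year.to_numpy() - 2012) / 1.0,   # year offset; constant 0 for this sample
])
print(features.shape)   # (4, 3) - four timesteps, three scaled inputs
print(features)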
In Azure Data Factory is enough to create two datasets (can be even without it) - one mysql, one blob and use pipeline with Copy activity to transfer data.","Q_Score":1,"Tags":"python,azure,apache-spark,google-cloud-platform,databricks","A_Id":70347715,"CreationDate":"2021-12-14T07:59:00.000","Title":"Is there any way to replicate realtime streaming from azure blob storage to to azure my sql","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have 2 environments:\nEnvironment #1:\n\nPython 3.7.5\nPandas 0.23.4\n\nEnvironment #2:\n\nPython 3.8.10\nPandas 1.3.4\n\nI have the same code in both versions, no modifications were made to it. However, I have this specific line of code which seems to be causing an issue\/produces a different output:\ndf_result = pd.merge(df_l, df_r, left_on=left_on, right_on=right_on, how='inner', suffixes=suffixes)\ndf_l and df_r are just read Excel files. I checked them in debugger in both versions and they are completely the same, so that should be fine.\nAlso, the left_on, right_on and suffixes variables have exactly the same value in both environments (checked via debugger, as well).\nHowever, when the df_result gets generated by the merge function, in environment #1 (old Python, old Pandas) it produces a DataFrame with 16170 rows. In environment #2 (new Python, new Pandas) it produces a DataFrame with only 8249 rows.\nThe number of columns are the same, difference is only in number of rows.\nWhat is causing this behavior?\nHow do I make sure that the environment #2 (new Python, new Pandas) produces exactly the same output with 16170 rows as produced by environment #1 (old Python, old Pandas)?\nThank you.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":42,"Q_Id":70346888,"Users Score":1,"Answer":"At the end the issue lied within new Pandas' approach to handle NaN values.\nWhile in the old Pandas the code changed the NaN values with (as string), in the new Pandas it just left it as nan (pd.nan type).\nI made sure to do df.fillna('', inplace=True) and it worked fine. The resulted DataFrame now has the same number of rows as produced by the old Pandas.","Q_Score":0,"Tags":"python,pandas","A_Id":70349491,"CreationDate":"2021-12-14T09:53:00.000","Title":"Python - Old pandas merge results in more rows than new pandas","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I updated my Python3 to Python 3.10. It still is showing Python 3.8 as my current version. but that's not the issue. My issue is that when I went to install the matplotlib package using pip install matplotlib, I got some errors. I also tried running pip3 install matplotlib. 
I got the following errors:\n\nWARNING: Retrying (Retry(total=4, connect=None, read=None,\nredirect=None, status=None)) after connection broken by\n'NewConnectionError(': Failed to establish a new connection: [Errno\n8] nodename nor servname provided, or not known')':\n\/simple\/matplotlib\/\nERROR: Could not find a version that satisfies the requirement\nmatplotlib (from versions: none) ERROR: No matching distribution found\nfor matplotlib\n\nThe I tried running \/Applications\/Xcode.app\/Contents\/Developer\/usr\/bin\/python3 -m pip install --upgrade pip and got the following error:\n\nDefaulting to user installation because normal site-packages is not\nwriteable.\nRequirement already up-to-date: pip in\n\/Applications\/Xcode.app\/Contents\/Developer\/Library\/Frameworks\/Python3.framework\/Versions\/3.8\/lib\/python3.8\/site-packages\n(20.2.3)\n\nI don't get it. It wanted me to upgrade pip and then says it's already up to date?\nI just need the matplotlib module installed for my Python scripts.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":168,"Q_Id":70350672,"Users Score":0,"Answer":"If you are trying to install matplotlib in your organisation laptop then your organsiation could be blocking the network to connect and download the package. This is one reason its showing retrying error message. You can try disconnecting vpn if you are connecting with any and retry installing it. This error is due to network issue only.","Q_Score":0,"Tags":"python-3.x,matplotlib,pip","A_Id":70359746,"CreationDate":"2021-12-14T14:32:00.000","Title":"Errors while installing matplotlib using pip install","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Recently i was struggling trying to take the pixel values of a 3D volume (np array) using specific space coordinate of a STL object.\nThe STL object is spatially overlapped with the 3D volume but the latter has no coordinate and so i don't know how to pick pixel values corresponding to the STL coordinates.\nAny idea?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":48,"Q_Id":70351665,"Users Score":0,"Answer":"If the STL object is truly in the 3d volume's coordinate space, then you can simply STL's coordinate as an index to lookup the value from the 3d array. This lookup does nearest neighbor interpolation of the 3d image. For better looking results you'd want to do linear (or even cubic) interpolation of the nearby pixels.\nIn most 3d imaging tasks, those coordinate spaces do not align. So there is a transform to go from world space to 3d volume space. But if all you have is a 3d numpy array, then there is no transformation information.\nUpdate:\nTo index into the 3d volume take the X, Y, Z coordinates of your point from the STL object and convert them into integer value I, J, K. Then lookup in the numpy array using I,J,K as indices: np_array[K][J][I]. I think you have to reverse the order of the indices because of the array ordering numpy uses.\nWhen you way 3d array and the STL align in python, how are you showing that? 
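A small sketch of the nearest-neighbour lookup described in the update above, assuming the STL coordinates are already expressed in voxel units of a (Z, Y, X)-ordered array; with a real DICOM/NIfTI transform you would map world coordinates to indices first.

import numpy as np

volume = np.random.rand(50, 60, 70)     # hypothetical 3D volume indexed as [z, y, x]

def sample_nearest(vol, x, y, z):
    # Round to the nearest voxel index and clip so out-of-bounds points
    # fall back to the edge of the volume.
    i = int(np.clip(round(x), 0, vol.shape[2] - 1))
    j = int(np.clip(round(y), 0, vol.shape[1] - 1))
    k = int(np.clip(round(z), 0, vol.shape[0] - 1))
    return vol[k, j, i]

print(sample_nearest(volume, 12.3, 45.7, 8.9))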
The original DICOM or Nifti certainly have world coordinate transformations in the metadata.","Q_Score":0,"Tags":"python,stl,interpolation,medical-imaging,nifti","A_Id":70356311,"CreationDate":"2021-12-14T15:46:00.000","Title":"Mapping values from NP ARRAY to STL","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a certain number of datasets and I've given numbers to each of them as the names let's consider 20 datasets, so the names are 1.csv, 2.csv and so on.\nI'm trying to give an input, here the number(name of the dataset) so that my code reads and works on that dataset. How do I make that possible?\nI've done something like giving input and changing it into a string and using pandas read_csv(string+\".csv\") but the code's not working\nCan anyone help out?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":31,"Q_Id":70361023,"Users Score":0,"Answer":"pandas read_csv(string+\".csv\")\nI have done this and it works, I had to change the integer to string first.","Q_Score":0,"Tags":"python,pandas,tkinter,dataset","A_Id":70396788,"CreationDate":"2021-12-15T09:13:00.000","Title":"Giving input such that it reads the exact dataset among the others tkinter","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I see\n\ndf[\"col2\"] = df[\"col1\"].apply(len)\nlen(df[\"col1\"])\n\nMy question is,\n\nWhy use \"len\" function without parenthesis in 1, but use it with parenthesis in 2?\n\nWhat is the difference between the two?\n\n\nI see this kind of occasion a lot, where using a function with and without parenthesis.\nCan someone explain to me what exactly is going on?\nThanks.","AnswerCount":5,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":81,"Q_Id":70363430,"Users Score":0,"Answer":"len(s) will return the lenght of the s variable\nlen will return the function itslelf. So if I do a=len, then I can do a(s). Of course, it is not recommended to do such thing as a=len.","Q_Score":2,"Tags":"python,pandas","A_Id":70363533,"CreationDate":"2021-12-15T12:06:00.000","Title":"difference between \"function()\" and \"function\"","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I see\n\ndf[\"col2\"] = df[\"col1\"].apply(len)\nlen(df[\"col1\"])\n\nMy question is,\n\nWhy use \"len\" function without parenthesis in 1, but use it with parenthesis in 2?\n\nWhat is the difference between the two?\n\n\nI see this kind of occasion a lot, where using a function with and without parenthesis.\nCan someone explain to me what exactly is going on?\nThanks.","AnswerCount":5,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":81,"Q_Id":70363430,"Users Score":0,"Answer":"In the second case you are directly calling the len method and will get the result, i.e. 
how many rows are in col1 in the df.\nIn the first you are giving the reference to the len function to the apply function.\nThis is a shortcut for df[\"col2\"] = df[\"col1\"].apply(lambda x: len(x))\nThis version you use if you want to make the behavior of a method flexible by letting the user of the method hand in the function to influence some part of an algorithm. Like here in the case with the apply method. Depending of the conents in the column you want to fill the new column with something, and here it was decided to fill this with the lengths of the content of other column.","Q_Score":2,"Tags":"python,pandas","A_Id":70363532,"CreationDate":"2021-12-15T12:06:00.000","Title":"difference between \"function()\" and \"function\"","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I see\n\ndf[\"col2\"] = df[\"col1\"].apply(len)\nlen(df[\"col1\"])\n\nMy question is,\n\nWhy use \"len\" function without parenthesis in 1, but use it with parenthesis in 2?\n\nWhat is the difference between the two?\n\n\nI see this kind of occasion a lot, where using a function with and without parenthesis.\nCan someone explain to me what exactly is going on?\nThanks.","AnswerCount":5,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":81,"Q_Id":70363430,"Users Score":0,"Answer":"In 1, the function len is being passed to a method called apply. That method presumably will apply the function len along the first axis (probably returning something like a list of lengths). In 2, the function len is being called directly, with an argument df[\"col2\"], presumably to get the length of the data frame.\nThe use in 1 is sometimes called a \"higher order function\", but in principle it's just passing a function to another function for it to use.","Q_Score":2,"Tags":"python,pandas","A_Id":70363505,"CreationDate":"2021-12-15T12:06:00.000","Title":"difference between \"function()\" and \"function\"","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"When i have python 3.5 and python 3.6 on ubuntu .I entered some alternate commands to use python 3.5 only (when I type python -V and python3 -V the same output is 3.5.2)\nAnd then i install virtualenv and virtualenvwrapper \u2014 these packages allow me to create and manage Python virtual environments:\n$ sudo pip install virtualenv virtualenvwrapper\n$ sudo rm -rf ~\/get-pip.py ~\/.cache\/pip\nTo finish the install of these tools, I updated our ~\/.bashrc file.I added the following lines to your ~\/.bashrc :\nexport WORKON_HOME=$HOME\/.virtualenvs\nexport VIRTUALENVWRAPPER_PYTHON=\/usr\/bin\/python3\nsource \/usr\/local\/bin\/virtualenvwrapper.sh\nNext, source the ~\/.bashrc file:\n$ source ~\/.bashrc\nAnd final I created your OpenCV 4 + Python 3 virtual environment:\n$ mkvirtualenv cv -p python3\ni have created the virtual environment but had some problems in the back end and i guess it was due to the presence of python3.6. 
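A compact illustration of the distinction drawn in the answers above: passing the len object itself to apply versus calling len directly on the Series.

import pandas as pd

df = pd.DataFrame({"col1": ["a", "bb", "ccc"]})

# Passing the function object: apply calls len once per element.
df["col2"] = df["col1"].apply(len)    # -> 1, 2, 3
# equivalent to: df["col1"].apply(lambda x: len(x))

# Calling the function: a single call returning the number of rows.
n_rows = len(df["col1"])              # -> 3

print(df)
print(n_rows)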
In the end i decided to uninstall python 3.6 and rerun the steps above from scratch and had a problem at the last step that I mentioned above.When i enter command \"mkvirtualenv cv -p python3\" i get an ERROR:\nFileExistsError: [Errno 17] File exists: '\/usr\/bin\/python' -> '\/home\/had2000\/.virtualenvs\/cv\/bin\/python'\nAt the same time when i enter the command \"update-alternatives --config python\" python3.6 is no longer there,but i get a warning:\nupdate-alternatives: warning: alternative \/usr\/bin\/python3.6 (part of link group python) doesn't exist; removing from list of alternatives\nThere is 1 choice for the alternative python (providing \/usr\/bin\/python).\nLooking forward to your help, thank you","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":418,"Q_Id":70365689,"Users Score":0,"Answer":"From the commands you've shared, the error arises from the mkvirtualenv cv being run twice - i.e. the environment already exists. To remove the environment you created, do: rmvirtualenv env-name-here which in this case will become rmvirtualenv cv. This shouldn't be done with that environment active, BTW. An alternate route is to delete $WORKON_HOME\/env-name-here. By default, $WORKON_HOME is usually .virtualenvs.","Q_Score":1,"Tags":"python,opencv,virtualenv,virtualenvwrapper","A_Id":70370334,"CreationDate":"2021-12-15T14:47:00.000","Title":"FileExistsError: [Errno 17] File exists: '\/usr\/bin\/python' -> '\/home\/had2000\/.virtualenvs\/cv\/bin\/python'","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"I encountered a problem while doing my ML project. Hope to get some advice from you!\nI fit logistic LASSO on a dataset with only 15 features trying to predict a binary outcome. I know that LASSO is supposed to do feature selection and eliminate the unimportant ones (coefficient = 0), but in my analysis, it has selected all the features and did not eliminate any one of them. My questions are:\n\nIs this because I have too few features, or that the features are not correlated with each other(low co-linearity?)\nIs this a bad thing or a good thing for a classification model?\nsome coefficients of the features LASSO selected are less than 0.1, can I interpret them as non-important or not that important to the model?\n\np.s. I run the model using the sklearn package in python.\nThank you!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":77,"Q_Id":70368565,"Users Score":0,"Answer":"Lasso did not fail to perform feature selection. It just determined that none of the 15 features were unimportant. For the one's where you get coefficients = 0.1 this just means that they are less important when compared to other more important features. So I would not be concerned!\nAlso 15 features is not a large amount of features for Lasso to determine the important one's. I mean it depends on the data so for some datasets, it can eliminate some features from a dataset of 10 features and sometimes it won't eliminate any from a dataset of 20. 
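To make the "nothing was deemed unimportant" point above concrete, here is a hedged sketch with scikit-learn on synthetic data showing how the strength of the L1 penalty (C in LogisticRegression) controls how many coefficients are driven to exactly zero; the counts printed depend entirely on the made-up data.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for a 15-feature binary classification problem.
X, y = make_classification(n_samples=500, n_features=15, n_informative=10, random_state=0)
X = StandardScaler().fit_transform(X)

for C in (1.0, 0.1, 0.01):   # smaller C = stronger L1 regularisation
    clf = LogisticRegression(penalty="l1", solver="liblinear", C=C).fit(X, y)
    n_zero = int(np.sum(clf.coef_ == 0))
    print(f"C={C}: {n_zero} of 15 coefficients are exactly zero")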
It just depends on the data!\nCheers!","Q_Score":0,"Tags":"python,machine-learning,feature-selection,lasso-regression","A_Id":70409173,"CreationDate":"2021-12-15T18:22:00.000","Title":"Whay did LASSO fail to perform feature selection\uff1f","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm struggling to install Spyder (5.1.5) after installing Keras and Tensorflow.\nHere are the steps I have taken so far:\n\nInstall Anaconda\nWithin Anaconda Navigator create a new environment named 'tensorflow'\nInstall tensorflow and keras within Anaconda Navigator in the 'tensorflow' environment.\nattempt to install Spyder from Anaconda Navigator in the 'tensorflow' environment. I get the following error message when I do this:\n\n'spyder cannot be installed on this environment. Do you want to install the package in an existing environment or create a new environment?'\nThe other thing I've tried, from the Anaconda prompt:\n\nconda activate tensorflow (activate tensorflow environment)\nconda install spyder\n\nI get the following error:\nSolving environment: failed with initial frozen solve. Retrying with flexible solve.\nSolving environment: failed with repodata from current_repodata.json, will retry with next repodata source.\nCollecting package metadata (repodata.json): done\nSolving environment: failed with initial frozen solve. Retrying with flexible solve.\nSolving environment: -\nFound conflicts! Looking for incompatible packages.\nThis can take several minutes. Press CTRL-C to abort.\nThanks for any help!","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1380,"Q_Id":70369851,"Users Score":0,"Answer":"My solution was to install Spyder first in a clean environment and then use pip.\npip install tensorflow\nwhich installed tensorflow.keras as well.","Q_Score":3,"Tags":"python,tensorflow,keras,anaconda,spyder","A_Id":72335556,"CreationDate":"2021-12-15T20:21:00.000","Title":"Can't install Spyder after installing Tensorflow and Keras","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Encountering an error when running this cell, does any one know how to fix it? Thank you.\ncfos and autofluo images have been resampled align with the template\/reference file in atlas. Is it necessary to debug this file?\nFile \"\/homeanaconda3\/envs\/ClearMapStable\/lib\/python3.6\/site-packages\/tifffile\/tifffile.py\", line 4696, in open\nself._fh = open(self._file, self._mode)\nFileNotFoundError: [Errno 2] No such file or directory: '\/home\/ClearMap2\/Documentation\/Example\/Haloperidol\/haloperidol\/1268\/debug_resampled.tif'","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":18,"Q_Id":70370370,"Users Score":0,"Answer":"Thanks for all responding.\nI've solved this problem. It simply need to turn on the debug mode at the beginning of the script. 
In my case, I turned it on by changing the code to 'ws.debug = True' in the Initialize workspace cell.","Q_Score":0,"Tags":"python","A_Id":70654656,"CreationDate":"2021-12-15T21:13:00.000","Title":"ClearMap2 - Cell alignment section - file not found error","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"On the column that I\u2019d like to filter, the column contains data from two different sources. I\u2019d like to normalize this data. We collected some data a certain way and the other rows of data contain data that was collected another way. There are rows that contain 1.2 2.3 3.4 and nothing over 5. I would like to multiply these numbers by 1,000 to match up with the others and remove the comma from the numbers above 1,000.\n\ncol1 col2\n1 1.641\n2 1.548\n3 1,807.000\n4 1,759.000","AnswerCount":4,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":79,"Q_Id":70386662,"Users Score":0,"Answer":"I have only tried one solution of those given.\nFloat64. What you talked about is accurate @fmarz10. I wanted to filter the rows and apply a transformation, then remove something. This first row of code works perfectly, it just worked.\ndf.loc[df[\u2018col2\u2019]<=5,\u2019col2\u2019] = df[\u2018col2\u2019]*1000\nI did, however, refrain from using the second suggestion as some numbers are not just whole numbers and contain values at least two places into the decimal. To complete it, I did something like this and it looks good from just scanning the first few rows.\ndf[\u2018col\u20192\u2019] = df[\u2018col2\u2019].replace(\u2018,\u2019,\u2019\u2019)\nvs the original suggestion:\ndf[\u2018col\u20192\u2019] = df[\u2018col2\u2019].str.replace(\u2018,\u2019,\u2019\u2019)\nNOTE: This works, but this is weekly data, and each row is a week\u2019s worth of data and there are about 15,000 rows, so I need to just graze a few before making an assessment, but the first few look good.","Q_Score":0,"Tags":"python,pandas,dataframe,filter","A_Id":70398127,"CreationDate":"2021-12-16T23:11:00.000","Title":"I have a data frame that I\u2019d like to filter","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"On the column that I\u2019d like to filter, the column contains data from two different sources. I\u2019d like to normalize this data. We collected some data a certain way and the other rows of data contain data that was collected another way. There are rows that contain 1.2 2.3 3.4 and nothing over 5. I would like to multiply these numbers by 1,000 to match up with the others and remove the comma from the numbers above 1,000.\n\ncol1 col2\n1 1.641\n2 1.548\n3 1,807.000\n4 1,759.000","AnswerCount":4,"Available Count":3,"Score":0.049958375,"is_accepted":false,"ViewCount":79,"Q_Id":70386662,"Users Score":1,"Answer":"One other thing I thought about was the type mixing in Python. Based on what you have above, my guess is either you have col2 as string or float. If string, then go through the replace method to get rid of the comma. 
If float, then you shouldn't need to replace the comma (that may be how Python shows thousands and millions but I can't remember specifics).\nRun print(df.dtypes) to check.","Q_Score":0,"Tags":"python,pandas,dataframe,filter","A_Id":70394474,"CreationDate":"2021-12-16T23:11:00.000","Title":"I have a data frame that I\u2019d like to filter","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"On the column that I\u2019d like to filter, the column contains data from two different sources. I\u2019d like to normalize this data. We collected some data a certain way and the other rows of data contain data that was collected another way. There are rows that contain 1.2 2.3 3.4 and nothing over 5. I would like to multiply these numbers by 1,000 to match up with the others and remove the comma from the numbers above 1,000.\n\n\n\n\ncol1\ncol2\n\n\n\n\n1\n1.641\n\n\n2\n1.548\n\n\n3\n1,807.000\n\n\n4\n1,759.000","AnswerCount":4,"Available Count":3,"Score":1.2,"is_accepted":true,"ViewCount":79,"Q_Id":70386662,"Users Score":1,"Answer":"It sounds like you want to filter some rows (col2 < 5), apply a transformation (col2 * 1000) then remove something (,).\ndf.loc[df['col2']<=5,'col2'] = df['col2']*1000\nNext would be to remove the comma but if you know all the values in col2 are whole numbers (no decimals) then I think you can just\ndf['col2'] = int(df['col2'])\nBut its safer to apply a replace but only if the values are string (if not, df['col2'] = str(df['col2']))\nThen you can apply the following:\ndf['col'2'] = df['col2'].str.replace(',','')","Q_Score":0,"Tags":"python,pandas,dataframe,filter","A_Id":70386915,"CreationDate":"2021-12-16T23:11:00.000","Title":"I have a data frame that I\u2019d like to filter","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have some indices that I have to apply retention policies to.\nIndice-a-date_of_creation 30 days\nIndice-b-date_of_creation 180 days\nIs there a way to set retention policies to those Indices on Kibana?\nIf not, how can I set them on elasticsearch?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":205,"Q_Id":70393923,"Users Score":2,"Answer":"Since ELK 6.6 (XPack) there is index lifecycle management.\nIn the ELK 7.16 you can use Index Lifecycle Policies in kibana\n\nStack Management > Index Lifecycle Policies. Click Create policy.\n\n\nIn older versions as your indexes contain timestamp you can write script to generate list of indexes to delete and then run loop over such list and call\ncurl -XDELETE","Q_Score":0,"Tags":"python,elasticsearch,kibana,retention,elasticsearch-indices","A_Id":70394894,"CreationDate":"2021-12-17T13:27:00.000","Title":"Set retention days to a Elasticsearch indices, using kibana or Elasticsearch itself","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"i'm doing k-mean clustering on an image (fruit tree image) with k=4 clusters. 
when I display the 4 clusters separately, fruits go to cluster1, stems go to cluster2, leaves go to cluster3 and background goes to cluster4. I'm further interested in the fruit cluster only. The problem is that when I change the image to another fruit tree image, the fruit cluster goes to cluster2 or sometimes to cluster3 or 4. My wish is to not change the cluster for fruit, meaning if fruit is in cluster1 it should be in cluster1 in all images of fruit trees. How can I do that? Secondly, if that is not possible, I want to automatically select the cluster which contains fruit. How can I do that? Thanks in advance.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":105,"Q_Id":70445348,"Users Score":1,"Answer":"K-means clustering is unsupervised, meaning the algorithm does not know any labels. That is why the clusters are assigned at random to the targets. You can use a heuristic evaluation of the fruit cluster to determine which one it is. For example, based on data about the pixels (color, location, etc), and then assign it a label by hand. In any case, this step will require human intervention of some sort.","Q_Score":0,"Tags":"python,opencv,cluster-analysis,k-means,image-segmentation","A_Id":70445416,"CreationDate":"2021-12-22T07:05:00.000","Title":"How to choose required cluster after k-means clustering in python opencv?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm running a python script in VSCode on a remote server and I want to save a dataframe that is generated in that script locally. Is this somehow possible? Thanks!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":50,"Q_Id":70447849,"Users Score":1,"Answer":"You can save the dataframe to a directory (maybe in .csv) on the remote server and download it from the explorer in VSCode by right-clicking on that file.","Q_Score":0,"Tags":"python,visual-studio-code,pytorch,remote-server","A_Id":70467001,"CreationDate":"2021-12-22T10:49:00.000","Title":"Locally save a dataframe from a remote server in VSCode","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Until now, I have always used SAS to work with sensitive data. It would be nice to change to Python instead. However, I realize I do not understand how the data is handled during processing in pandas.\nWhile running SAS, one knows exactly where all the temporary files are stored (hence it is easy to keep these in an encrypted container). But what happens when I use pandas data frames? I think I would not even notice if the data left my computer during processing.\nThe size of the mere flat files, of which I typically have dozens to merge, is a couple of Gb. Hence I cannot simply rely on the hope that everything will be kept in RAM during processing - or can I? I am currently using a desktop with 64 Gb RAM.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":27,"Q_Id":70447960,"Users Score":0,"Answer":"If it's a matter of life and death, I would write the data merging function in C. This is the only way to be 100% sure of what happens with the data. 
The general philosophy of Python is to hide whatever happens \"under the hood\", this does not seem to fit your particular use case.","Q_Score":0,"Tags":"python,pandas,temporary-files","A_Id":70448388,"CreationDate":"2021-12-22T10:58:00.000","Title":"Are temporary files generated while working with pandas data frames","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a retail dataset that consists of uncleaned mobile phone numbers. I have data like this\n\n\n\n\nPhone Number\n\n\n\n\n03451000000\n\n\n03451000001\n\n\n03451010101\n\n\n03451111111\n\n\n03459999999\n\n\n03459090909\n\n\n\n\nNow there is a very high probability that the above phone numbers are fakely entered by cashier. The genuine number looks like this for example 03453485413.\nThere are two important things:\n\nThe length of the string is always fixed 11 characters\nThe phone number always starts with 03*********\n\nNow how do I eliminate phone numbers based on the rule that, for example, character repetition of more than 5 times eliminated?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":49,"Q_Id":70448636,"Users Score":0,"Answer":"You should use regex to find such patters.\nFor example:\n(\\d)\\1{4,}\nThis will match a digit and check if it repeats itself 4 more times. This is the case in examples 1, 2, 4 & 5\nAnother example is: (\\d\\d)\\1{2,}\nThis will match 2 digits and checks if it repeats itself 2 more times. This is the case in examples 1, 3, 4, 5 & 6","Q_Score":0,"Tags":"python,pandas,data-manipulation","A_Id":70449885,"CreationDate":"2021-12-22T11:52:00.000","Title":"Python Pandas phone numbers cleaning by eliminating consecutive repeated characters","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"W tensorflow\/stream_executor\/platform\/default\/dso_loader.cc:64] Could not load dynamic library 'cudart64_110.dll'; dlerror: cudart64_110.dll not found\nI tensorflow\/stream_executor\/cuda\/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.\nAfter this, there comes a traceback error which says on the last line: \"from tensorflow.summary import FileWriter\nImportError: cannot import name 'FileWriter' from 'tensorflow.summary' (C:\\Users\\HP\\tetris-ai\\venv\\lib\\site-packages\\tensorboard\\summary_tf\\summary_init_.py)\nAfter installing tensoflow gpu again, I got this error\nERROR: pip's dependency resolver does not currently take into account all the packages that are installed. 
This behaviour is the source of the following dependency conflicts.\ntensorflow 2.6.2 requires keras<2.7,>=2.6.0, but you have keras 2.7.0 which is incompatible.\ntensorflow 2.6.2 requires tensorflow-estimator<2.7,>=2.6.0, but you have tensorflow-estimator 2.7.0 which is incompatible.\nSuccessfully installed keras-2.7.0 tensorflow-estimator-2.7.0 tensorflow-gpu-2.7.0\nBut my issue with the dll and traceback error continued.In Vscode and in pycharm.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":108,"Q_Id":70459068,"Users Score":0,"Answer":"It could be that you need a Nvidia GPU, CUDA is the language NVIDIA uses.\nYou can check if you have one following these steps: Windows -> Task Manager.","Q_Score":0,"Tags":"python,tensorflow,keras","A_Id":70468868,"CreationDate":"2021-12-23T07:42:00.000","Title":"Could not load library cudart64_110.dll with tensor flow gpu installation","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am currently working on a deep neural network, but i am confused about how we can compute the training time of a deep neural network. How i will know that my neural network takes less time compared to other deep neural networks.\nI am looking forward to your help and any article recommendation.","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":540,"Q_Id":70462682,"Users Score":-1,"Answer":"If you are using a jupyter notebook or any notebook using a .ipynb file then you can use the: %%time, to calculate the time to run the cell.\nIf you are planning to use a .py code and just want to calculate the time the code runs you can utilise the time library before and after the training portion of the code, you could use the following method\n\nfrom time import time\nstart = time()\n\"Your code\"\nprint(time()-start)","Q_Score":0,"Tags":"python,tensorflow,neural-network","A_Id":70463250,"CreationDate":"2021-12-23T13:19:00.000","Title":"how we can compute the training time of deep neural networks?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am scraping reviews off Amazon with the intent to perform sentiment analysis to classify them into positive, negative and neutral. Now the data I would get would be text and unlabeled.\nMy approach to this problem would be as following:-\n1.) Label the data using clustering algorithms like DBScan, HDBScan or KMeans. The number of clusters would obviously be 3.\n2.) Train a Classification algorithm on the labelled data.\nNow I have never performed clustering on text data but I am familiar with the basics of clustering. So my question is:\n\nIs my approach correct?\n\nAny articles\/blogs\/tutorials I can follow for text based clustering since I am kinda new to this?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":83,"Q_Id":70479512,"Users Score":1,"Answer":"I have never done such an experiment but as far as I know, the most challenging part of this work is transforming the sentences or documents into fixed-length vectors (mapping into semantic space). 
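To make that step concrete, here is a minimal sketch of my own (not part of the original advice, assuming a Python list of strings called reviews): a TF-IDF vectorizer plus KMeans already takes you from raw text to fixed-length vectors to cluster labels:\nfrom sklearn.feature_extraction.text import TfidfVectorizer\nfrom sklearn.cluster import KMeans\n# every review becomes a fixed-length sparse vector of term weights\nX = TfidfVectorizer(max_features=5000, stop_words='english').fit_transform(reviews)\nlabels = KMeans(n_clusters=3, random_state=0).fit_predict(X)\nThe choice of embedding is what really matters, which is exactly what the suggestions below are about. 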
I highly suggest using a sentiment analysis pipeline from huggingface library for embedding the sentences (in this way you might exploit some supervision). There are other options as well:\n\nUsing sentence-transformers library. (straightforward and still good)\nUsing BoW. (simplest way but hard to get what you want)\nUsing TF-IDF (still simple but may simply do the work)\n\nAfter you reach this point (every review ==> fixed-length vector) you can exploit whatever you want to cluster them and look after the results.","Q_Score":1,"Tags":"python,nlp,sentiment-analysis,multiclass-classification,unsupervised-learning","A_Id":70481004,"CreationDate":"2021-12-25T10:52:00.000","Title":"Clustering text data based on sentiment?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need to download NSE Futures data since 2012 for my strategy backtesting. I tried NSEpy and jugaad-data libraries but they are giving one day's data at a time.\nI tried Getbhavcopy as well but the data is not accurate there.\nIs there any other free source to download the same.\nThanks,\nMohit","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":44,"Q_Id":70486337,"Users Score":0,"Answer":"I've used NSEpy, this is basically scraping from the NSE website, better to use some API which actually has the right to provide the data. e.g: Samco, angel one APIs.\nthey are free as well.","Q_Score":0,"Tags":"python,stockquotes","A_Id":70607962,"CreationDate":"2021-12-26T12:11:00.000","Title":"Download Historic NSE Futures Data","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is there a possible approach for extracting sentences from paragraphs \/ sentence tokenization for paragraphs that doesn't have any punctuations and\/or all lowercased? We have a specific need for being able to split paragraphs into sentences while expecting the worst case that paragraph inputted are improper.\nExample:\nthis is a sentence this is a sentence this is a sentence this is a sentence this is a sentence\ninto\n[\"this is a sentence\", \"this is a sentence\", \"this is a sentence\", \"this is a sentence\", \"this is a sentence\"]\nThe sentence tokenizer that we have tried so far seems to rely on punctuations and true casing:\nUsing nltk.sent_tokenize\n\"This is a sentence. This is a sentence. This is a sentence\"\ninto\n['This is a sentence.', 'This is a sentence.', 'This is a sentence']","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":52,"Q_Id":70486787,"Users Score":1,"Answer":"This is a hard problem, and you are likely better off trying to figure out how to deal with imperfect sentence segmentation. That said there are some ways you can deal with this.\nYou can try to train a sentence segmenter from scratch using a sequence labeller. The sentencizer in spaCy is one such model. This should be pretty easy to configure, but without punctuation or case I'm not sure how well it'd work.\nThe other thing you can do is use a parser that segments text into sentences. The spaCy parser does this, but its training data is properly cased and punctuated, so you'd need to train your own model to do this. 
You could use the output of the parser on normal sentences, with everything lower cased and punctuation removed, as training data. Normally this kind of training data is inferior to the original, but given your specific needs it should be easy to get at least.\nOther possibilities involve using models to add punctuation and casing back, but in that case you run into issues that errors in the models will compound, so it's probably harder than predicting sentence boundaries directly.","Q_Score":1,"Tags":"python,nlp,nltk,spacy,linguistics","A_Id":70491992,"CreationDate":"2021-12-26T13:23:00.000","Title":"Sentence tokenization w\/o relying on punctuations and capitalizations","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is there a possible approach for extracting sentences from paragraphs \/ sentence tokenization for paragraphs that doesn't have any punctuations and\/or all lowercased? We have a specific need for being able to split paragraphs into sentences while expecting the worst case that paragraph inputted are improper.\nExample:\nthis is a sentence this is a sentence this is a sentence this is a sentence this is a sentence\ninto\n[\"this is a sentence\", \"this is a sentence\", \"this is a sentence\", \"this is a sentence\", \"this is a sentence\"]\nThe sentence tokenizer that we have tried so far seems to rely on punctuations and true casing:\nUsing nltk.sent_tokenize\n\"This is a sentence. This is a sentence. This is a sentence\"\ninto\n['This is a sentence.', 'This is a sentence.', 'This is a sentence']","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":52,"Q_Id":70486787,"Users Score":0,"Answer":"The only thing I can think of is to use a statistical classifier based on words that typically start or end sentences. This will not necessarily work in your example (I think only a full grammatical analysis would be able to identify sentence boundaries in that case), but you might get some way towards your goal.\nSimply build a list of words that typically come at the beginning of a sentence. Words like the or this will probably be quite high on that list; count how many times the word occurs in your training text, and how many of these times it is at the beginning of a sentence. Then do the same for the end -- here you should never get the, as it cannot end a sentence in any but the most contrived examples.\nWith these two lists, go through your text and work out if you have a word that is likely to end a sentence followed by one that is likely to start one; if yes, you have a candidate for a potential sentence boundary. In your example, this would be likely to start a sentence, and sentence would be likely to be the sentence-final word. Obviously it depends on your data whether it works or not. If you're feeling adventurous, use parts-of-speech tags instead of the actual words; then your lists will be much shorter, and it should probably still work just as well.\nHowever, you might find that you also get phrase boundaries (as each sentence will start with a phrase, and the end of the last phrase of a sentence will also coincide with the end of the sentence). 
It is hard to predict whether it will work without actually trying it out, but it should be quick and easy to implement and is better than nothing.","Q_Score":1,"Tags":"python,nlp,nltk,spacy,linguistics","A_Id":70489170,"CreationDate":"2021-12-26T13:23:00.000","Title":"Sentence tokenization w\/o relying on punctuations and capitalizations","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Can someone explain me, how I can recover the previous state of my table in jupyter notebook? For example, I have a table with a column \"prices\" and I accidentally had set all this numbers to 0. How I can make a stepback to recover previous values of numbers in \"prices\". Thank you in advance","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":58,"Q_Id":70490737,"Users Score":0,"Answer":"I'm not entirely sure if it will work but you can try navigating to your ipython directory and checking the history.sqlite file there might be a previous state of the table stored there somewhere!\nIf you're on windows just enter ipython locate, navigate to that directory and you will find it inside of profile_default.","Q_Score":1,"Tags":"python,python-3.x,pandas,numpy,jupyter-notebook","A_Id":70682294,"CreationDate":"2021-12-27T00:22:00.000","Title":"How I can recover the previous state of my table in jupyter notebook?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I was trying to train a model using tensorboard.\nWhile executing, I got this error:\n$ python train.py Traceback (most recent call last): File \"train.py\", line 6, in from torch.utils.tensorboard import SummaryWriter File \"C:\\Users\\91960\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\torch\\utils\\tensorboard\\__init__.py\", line 4, in LooseVersion = distutils.version.LooseVersion \nAttributeError: module 'setuptools._distutils' has no attribute 'version'.\nI'm using python 3.8.9 64-bit & tensorflow with distutils is already installed which is required by tensorboard.\nWhy is this happening ? Please help !","AnswerCount":2,"Available Count":1,"Score":1.0,"is_accepted":false,"ViewCount":18845,"Q_Id":70520120,"Users Score":22,"Answer":"This command did the trick for me:\npython3 -m pip install setuptools==59.5.0\npip successfully installed this version:\nSuccessfully installed setuptools-60.1.0 instead of setuptools-60.2.0","Q_Score":25,"Tags":"python,tensorflow,tensorboard,distutils","A_Id":70563364,"CreationDate":"2021-12-29T13:30:00.000","Title":"AttributeError: module 'setuptools._distutils' has no attribute 'version'","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have found the optimal results for 7 hyperparameter namely:\n\nNumber of layers,\nNode size,\nActivation functions,\nlearning rate,\nmomentum,\nbatch size,\noptimizer\n\nUsing Optuna multiobjective optimization. I minimized the training and validation loss as my objectives. Since the number of tuning parameters are more I reduced the number of epoch per trail as 50. 
Then I got the best parameters, post Optuna optimization. I increased the epoch size and build the same model with torch.manual_seed. But the results obtained after the same 50th epoch is different from what I got in the Optuna results.\nWhat is the reason am I missing anything? I want to reproduce the same results for the same condition!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":89,"Q_Id":70526767,"Users Score":0,"Answer":"Finally, I was able to find the reason for improper reproducibility. In my code I used two different objective functions; def train(trial) and def layer(trial). I pivoted the second objective function into the train(trial). Also, specifying a manual seed is also important. Anyway there will be slight deviations of 0.0001%.","Q_Score":0,"Tags":"python,deep-learning,pytorch,optuna","A_Id":70561263,"CreationDate":"2021-12-30T02:01:00.000","Title":"Results reproducibility using Pytorch and Optuna for DNN","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a list of tuples with the start and end node ID of a segment. I want to rearrange the tuples (segments) in a way that constructs a continuous path. There can be a loop in the path i.e., one node can be traversed multiple times. I do know the origin and destination of the entire path. There may be some segments that cannot be included in the path as their start or end node cannot be connected to the previous or next node to form that continuous path. These segments should be removed from the rearranged list of tuples.\nExample:\nThe list of tuples may look like the following. Also, I know that the path should start at 101 and end at 205.\n[(101, 203), (104, 202), (203, 104), (104, 208), (185, 205), (202, 185)]\nExpected Results:\n[(101, 203), (203, 104), (104, 202), (202, 185), (185, 205)]\nI would like to solve this in Python. My understanding is that this can be solved using a loop that looks at the end node ID of each segment, finds a segment with a similar start node ID, and adds that append that segment to the list of rearranged tuples. I am not quite sure how it can be solved efficiently. So, any hints or example codes would really help.","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":105,"Q_Id":70528041,"Users Score":0,"Answer":"The main challenge in this question is probably just choosing between tuples with the same starting\/ending node, e.g. choosing between whether (104, 202) or (104, 208) should come after (203, 104). You didn't really make it clear of what conditions the resultant path must satisfy so I assume any continuous path is fine.\nIn such a case, this question can simply be framed as a shortest path question, so you just need to find the shortest path between node 101 and 205, and the weight of each vertice is just 1 if the node tuple exist in the set, else infinity. So for example. the vertice between 203 and 104 is 1 since (203, 104) is in the original list, on the other hand the weight between 203 and 208 is infinity, since the vertice doesn't exist. You can then apply the standard Dijkstra shortest path algorithm to obtain the nodes traversed and therefore the tuples.\nEdit\n\nI think I misunderstood what you are trying to do, so I think you are actually trying to include as many nodes as possible for the path created. 
In such a case, perhaps something like breadth first search would be possible, where you keep track of all possible paths and keep deleting those that have no further possible connections.","Q_Score":2,"Tags":"python,graph,networkx,osmnx","A_Id":70528268,"CreationDate":"2021-12-30T06:11:00.000","Title":"How to rearrange a list of tuples (start node ID, end node ID) to create continuous path in Python?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm training a neural network on stimuli which are being developed to mimic a sensory neuroscience task to compare performance to human results.\nThe task is based on spatial localization of audio. I need to generate white noise audio in python to present to the neural network, but also need to alter the audio as if it were presented at different locations. I understand how I'd generate the audio, but I'm not sure on how to generate the white noise from different theoretical locations.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":31,"Q_Id":70532557,"Users Score":0,"Answer":"You can add a delay to the right or left track, to account for the arrival time at the two ears. If I recall correctly, it amounts to up to about 25 or 30 milliseconds, depending on the angle. The travel distance disparity from source to the two ears can be calculated with basic trigonometry, and then multiplied by speed of sound in air to get the delay length. (IDK what python has for controlling delays or to what granularity delay lengths can be specified.)\nMost of the other cues we have for spacial location are a lot harder to quantify. Most commonly we use volume, of course. Especially for higher-pitched content (wavelengths smaller than the width of the head) the head itself can block and cause some volume differences, based on the angle.\nBut a lot comes from reverberation for environmental cues, from timbrel roll-off as a function of distance (a quiet sound with lots of highs in the mix can really sound like they are right next to your ear), from moving the head to capture the sound from different angles, and from the filtering effects of the pinna of the ear. Because everyone's ear shape is different, I don't know that there is a universal thumbnail algorithm for what causes a sound to be sensed as originating from a particular altitude for a given angle. I think to some extent we just all learn by experiencing the sounds with our own particular ears while observing the sound source visually.","Q_Score":0,"Tags":"python,audio","A_Id":70538556,"CreationDate":"2021-12-30T13:50:00.000","Title":"Generating Spatial White Noise audio in Python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"So I am doing a face recognition project in a Raspberry Pi. It is a django project and when I run the server I get this error: AttributeError: module 'cv2' has no attribute 'face'\nI searched for the error and I came up with that I needed to install opencv-contrib-python\nThe problem is that when I try to install it the download gets to 99% and I get this: 'pip3 install opencv-contrib-pyt\u2026' terminated by signal SIGKILL (Forced quit).\nDoes anyone know why this happens? 
how can I fix this? help is much appreciated","AnswerCount":1,"Available Count":1,"Score":-0.1973753202,"is_accepted":false,"ViewCount":476,"Q_Id":70548669,"Users Score":-1,"Answer":"I got the same error and fixed it like this.\nTry:\n\nsudo apt install python-opencv libopencv-dev\n\ny\/n --> y\n\nsudo apt install python python-pip\n\ny\/n --> y\n\npip install opencv-python","Q_Score":0,"Tags":"python,opencv,pip,raspberry-pi","A_Id":70548954,"CreationDate":"2022-01-01T11:48:00.000","Title":"Can't install opencv package in a Raspberry pi","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a dataset of a disease with about 37 features, all of them are categorical variables except for two which are \"Age\" and \"Age_onset\". The number of entries is 219.\nI am working on developing a binary classification model that predicts whether patients have this disease or not.\nThe current problem I am facing is deciding what suitable model to select giving the categorical nature and volume of my data.\nNow the categorical variables are not high-cardinality, even after applying one-hot encoding the number of variables increases from 37 to 81 therefore it is still considered low in dimensionality. Thus the feature selection methods are not needed.\nMoreover, the data is not large in terms of the number of entries (219) and dimensionality (81), therefore there is no need to go for complex models such as neural network or ensemble methods.\nThis rules out a large number of models and by far I think the best candidate is the Logistic regression classification model.\nMy question: is this line of reasoning valid? or should I attempt to use complex models and through trial and error I can arrive at the best model in terms of performance and results?\nI have gone through many articles and papers with regard to handling categorical data in classification problems, however, my data contains no continuous variables (except for two) and it is not high in cardinality meaning all of the categorical variables have two or three possible answers (highlighted by the number of features after applying one-hot encoding which is 81). So I am not sure that the solutions discussed in those articles applies to my problem.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":105,"Q_Id":70554166,"Users Score":0,"Answer":"Is this line of reasoning valid?\n\nYou do not have a low number of variables. This number of variables might be considered high, especially considering your low number of entries. As you likely know, scikit-learn is built to handle huge numbers of entries, large datasets. I theorize (not assume) that you have a low number of entries because you cleaned the data too much. 
You might try another cleaning method that interpolates missing values.\nPurely from an academic perspective with statistics and later studies in data science, I suggest that even better than interpolation with the cleaning, you could gather more data.\nOn another note, no, the reasoning is not valid (see above).\n\nShould I attempt to use complex models and through trial and error I can arrive at the best model in terms of performance and results?\n\nI would, in your position, try all the models you can, with as many different parameters as possible.","Q_Score":0,"Tags":"python,scikit-learn,classification,prediction,categorical-data","A_Id":70554245,"CreationDate":"2022-01-02T07:02:00.000","Title":"Handling small dataset of categorical data in Binary classification problem","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"A question regarding filtering using a list of values. I want to do the following:\n\nfilter a dataframe based on certain criteria\ncreate a list of (one column, containing ID's) of this dataframe\nnext i want to exclude this list from another dataframe.\n\nall individual steps are working using the following code:\ndf3 = df2.loc[df2['value'] < parameter] (1)\nmy_list = df3['ID'].tolist() (2)\nfinal_df = df[~df['column'].isin(my_list)] (3)\nyet somehow filtering the frame using the first step results in the final step NOT working (so not filtering anything). When i remove the first step it again works like a charm. Does anybody know what i am doing wrong?\nkind regards,\nAlex","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":33,"Q_Id":70567790,"Users Score":0,"Answer":"Changed datatype to 'integer' between step 1 and 2.","Q_Score":0,"Tags":"python,pandas,dataframe,filtering","A_Id":70591936,"CreationDate":"2022-01-03T15:13:00.000","Title":"How do i filter using a list created from filtered dataframe?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a dataset with dates encoded as strings formatted as %B %d, %Y, eg September 10, 2021.\nUsing:df['sale_date'] = pd.to_datetime(df.sale_date, format = '%B %d, %Y')\nproduces this error ValueError: time data 'September 10, 2021' does not match format '%B %d, %Y' (match)\nManually checking with strptimedatetime.strptime('September 10, 2021', '%B %d, %Y') produces the correct datetime object.\nIs there something I missed in the pd.to_datetime?\nThanks.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":201,"Q_Id":70575105,"Users Score":0,"Answer":"Upon further investigation, I found out that the error only happens on the first element of the series. It seems that the string has '\\ufeff' added to it. So I just did a series.str.replace() and now it is working. Sorry for the bother. 
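For reference, a minimal sketch of that cleanup (my own wording of the fix, with df and the sale_date column taken from the question):\ndf['sale_date'] = df['sale_date'].str.replace('\ufeff', '', regex=False)\ndf['sale_date'] = pd.to_datetime(df['sale_date'], format='%B %d, %Y')\nReading the file with encoding='utf-8-sig' in pd.read_csv should also strip that BOM at load time. 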
Question is how did that BOM end up there?","Q_Score":0,"Tags":"python,pandas,datetime","A_Id":70575337,"CreationDate":"2022-01-04T07:05:00.000","Title":"String to date in pandas","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am building ARIMA\/Sarima Model in percentages but getting following error\n1-\nmodel = SARIMAX(np.asarray(train), order = (0, 1, 1), seasonal_order =(1, 1, 1, 12))\nTypeError: ufunc 'isnan' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule ''safe''\n2- If i don't covert pandas data frame to numpy array i get following error\nmodel = SARIMAX(train, order = (0, 1, 1), seasonal_order =(1, 1, 1, 12))\nValueError: Pandas data cast to numpy dtype of object. Check input data with np.asarray(data).\nthough few days back same code was working which I am using in step 2","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":74,"Q_Id":70584589,"Users Score":0,"Answer":"issue was with the input data format, few rows had commas so it was reading it as string. Once i removed it is working fine.","Q_Score":0,"Tags":"python,arima,sarimax","A_Id":70604210,"CreationDate":"2022-01-04T20:18:00.000","Title":"when I run Arima\/Sarima Model getting errors - ufunc 'isnan' not supported for the input types or Panda on numpy cast","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"In the sklearn documentation for sklearn.cross_validation.ShuffleSplit, it states:\nNote: contrary to other cross-validation strategies, random splits do not guarantee that all folds will be different, although this is still very likely for sizeable datasets.\nIs this an issue? If so, why?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":162,"Q_Id":70596943,"Users Score":2,"Answer":"Contrary to the most often used KFold cross validation strategy, the Shuffle Split uses random samples of elements in each iteration. For a working example, let's consider a simple training dataset with 10 observations;\nTraining data = [1,2,3,4,5,6,7,8,9,10]\n\nKFold (k=5)\n\n\nShuffle the data, imagine it is now [6,9,1,4,10,5,7,2,3,8]\nCreate folds; Fold 1 = [6,9], Fold 2 = [1,4], Fold 3 = [10,5], Fold 4 =\n[7,2] and Fold 5 = [3,8]\nTrain keeping one fold aside each iteration for evaluation and using all others\n\n\nShuffle split (n_iter=3, test_size=0.2)\n\nIt works iterative manner where you specify number of iterations (default n_iter=10 in sklearn)\n\nEach iteration shuffle the data; [6,9,1,4,10,3,8,2,5,7], [6,2,1,4,10,7,5,9,3,8] and [2,6,1,4,10,5,7,9,3,8]\nSplit into specified train and evaluation dataset as chosen with the hyper-parameter (test_size); Training data are [6,9,1,4,10,3,8,2], [6,2,1,4,10,7,5,9] and [2,6,1,4,10,5,7,9] respectively. Test data are [5,7], [3,8] and [3,8] respectively.\n\nAs you can notice, although the shuffle is different (technically it can be same), the training and testing data for the last two iteration are exactly same. 
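You can reproduce this behaviour with a short sketch (mine, written against the current sklearn.model_selection module rather than the old sklearn.cross_validation one):\nimport numpy as np\nfrom sklearn.model_selection import ShuffleSplit\nX = np.arange(1, 11).reshape(-1, 1)  # the toy training data [1..10]\nss = ShuffleSplit(n_splits=3, test_size=0.2, random_state=0)\nfor train_idx, test_idx in ss.split(X):\n    print(sorted(X[train_idx].ravel()), sorted(X[test_idx].ravel()))\nRun it with more splits (or different seeds) and some train\/test partitions will eventually coincide, which is the behaviour described here. 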
As the number of iterations increase, your chance of fitting the same dataset increases which is counter-intuitive to the cross-validation idea where we would like get an estimate of generalizability of our model with limited amount of data. On the other hand, the datasets usually contains numerous observations so that having the same (or very similar) training and test datasets is not an issue. Keeping number of iterations high enough improves the generalizability of your results.","Q_Score":2,"Tags":"python,scikit-learn,cross-validation,shuffle","A_Id":70599003,"CreationDate":"2022-01-05T17:16:00.000","Title":"Shuffle split cross validation, what are the limitations?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"hi am working on a project which is detecting heart failure and now I want to use the k_means algorithm for clustering and SVM for classification.\nI need to know if I can split the dataset into training and testing? since am using k_means is it ok??\nplease help...thanks","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":38,"Q_Id":70617725,"Users Score":1,"Answer":"Yes, you can cut randomly in two sets. You can cut in sequential sets. You can cut in large temporally-adjacent tests. That is what the ANOVA tests are all about.","Q_Score":0,"Tags":"python,numpy","A_Id":70617776,"CreationDate":"2022-01-07T07:12:00.000","Title":"can we split the dataset in k means into testing and training?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"In fact, I use a neural network consisting of four layers of input and two hidden one for exit and I had 17 features to enter in order to classify or predict something, but the range of weights in the network should be between 1 and -1 and I used the pygad library but when I print the solutions it gives me the range Between 9 and -9, I used the activation function ReLu for the two hidden layers and sigmoid strong text for the exit layer. Please help","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":27,"Q_Id":70627951,"Users Score":0,"Answer":"The range of the weights exceeds the initial range (-1 to 1) because of the mutation. You can control the mutation in PyGAD using these 2 simple ways:\n\nSet the mutation_by_replacement parameter to True. In this case, no gene will exceed the -1 to 1 range.\nSet init_range_low=-0.5 and init_range_high=0.5 but also set the 2 parameters random_mutation_min_val and random_mutation_max_val to small values. For example, random_mutation_min_val=-0.2 and random_mutation_max_val=0.2. This option just tries to lower down the values created out of mutation. 
But there is possibility that the values get outside the -1 to 1 range.","Q_Score":1,"Tags":"python","A_Id":70887044,"CreationDate":"2022-01-07T22:40:00.000","Title":"Is the range of weights in the optimized neural network in the genetic algorithm between 9 and -9 correct?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"In a part of my research work, I am in need to split a deep Learning model(CNN) into two in order to run the model in two different devices in a distributed way. Here, I split a CNN model into two, A and B where the output of A will be the input for B. If anyone has an idea, please.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":45,"Q_Id":70633768,"Users Score":0,"Answer":"What model are you splitting?\nIn PyTorch it is possible to do so. Check the children function of a model in pytorch and reconstruct the layers for two sub models.","Q_Score":0,"Tags":"python,deep-learning,conv-neural-network","A_Id":70755335,"CreationDate":"2022-01-08T15:24:00.000","Title":"Splitting a CNN model to run in two devices","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm using cartopy to draw global histograms with matplotlib.\nThere is a part in the map which I don't want to show, but it is in the longitude and the latitude that I need (so setting an extent can't help here).\nI know there is a way to create a mask for an area, but I'm having a little trouble with it.\nThe area that I want to crop isn't a country or anything specific, just a coastline...\nCan I create a mask using the Long and the Lat values only?\nThe goal is to show only the Mediterranean sea, and not anything else, so I don't want to see the northern coasts of France and Spain that aren't a part of the Mediterranean coastline.\nThank you guys,\nKarin.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":59,"Q_Id":70635366,"Users Score":1,"Answer":"I used ax.add_patch(rectangle) to cover that part in the plot.","Q_Score":0,"Tags":"python,matplotlib,cartopy","A_Id":70636425,"CreationDate":"2022-01-08T18:46:00.000","Title":"cropping unwanted map part in cartopy","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have student data which contains column 10_grade which consists both percentage and cgpa values mix.. I need to convert 10_grade column into percentage. A python code will be helpful","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":28,"Q_Id":70641556,"Users Score":0,"Answer":"Its been sorted.. 
now i want to plot these 3 categorical variables together 1) Graduation_Location (Mumbai region,south region,north region,others)\n2) Course_Country (us and canada, asian countries, European countries, others)\n3) status (hold reject , others)\nI was able to plot two wiith the help of pd.crosstab\npd.crosstab(df.Grad_Location,df.Course_Country)","Q_Score":0,"Tags":"python,pandas,jupyter-notebook,exploratory-data-analysis","A_Id":70813823,"CreationDate":"2022-01-09T13:05:00.000","Title":"how to convert a column in percentage which contains both cgpa and percentage values","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have recently got started with spark. I am using Python as the language for the spark application. What happens if we execute pure python code as a spark application (Spark). Does it will be executed on an executor or the driver itself.\nSince the main function runs on spark, I think it should be on driver, but I have also read that spark does not do any computation, so will it run on executor then ? I may be missing something here. Would appreciate if anyone could explain this.","AnswerCount":1,"Available Count":1,"Score":0.537049567,"is_accepted":false,"ViewCount":58,"Q_Id":70669894,"Users Score":3,"Answer":"The code you write (your \"main\" function) will be executed on the driver, and when you will operate on distributed data (e.g. RDDs and others), that executor will coordinate with the workers and handle that operation on them (in a sense, those operations will be executed on the workers).","Q_Score":0,"Tags":"python,apache-spark,pyspark","A_Id":70670326,"CreationDate":"2022-01-11T16:09:00.000","Title":"Pure Python \/ Python specific code in Pyspark application","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Today when I launched my Spyder the IPython console immediately showed me a note:\n\nraise ValueError(f\"Key {key}: {ve}\") from None\nValueError: Key backend: 'qt4agg' is not a valid value for backend; supported values are ['GTK3Agg', 'GTK3Cairo', 'GTK4Agg', 'GTK4Cairo', 'MacOSX', 'nbAgg', 'QtAgg', 'QtCairo', 'Qt5Agg', 'Qt5Cairo', 'TkAgg', 'TkCairo', 'WebAgg', 'WX', 'WXAgg', 'WXCairo', 'agg', 'cairo', 'pdf', 'pgf', 'ps', 'svg', 'template']\n\nI tried to update matplotlib in anaconda command line, but after the updating it still appeared. How can I cope with this problem?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":217,"Q_Id":70670899,"Users Score":0,"Answer":"(Spyder maintainer here) This is a bug in Spyder because we still offer the possibility to set the Qt4 backend, but Matplotlib removed it some versions ago. 
So please change your backend to other value.\nWe'll fix this problem in our 5.3.0 version, to be released in March 2022, by removing that option.","Q_Score":0,"Tags":"python,matplotlib,anaconda,spyder","A_Id":70674132,"CreationDate":"2022-01-11T17:21:00.000","Title":"I met an matplotlib backend error just after Spyder launched","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is there a method in graph-tool through which checking whether two nodes are connected (as first neighbours) or not without having to iterate?\nFor example, something like graph_tool.is_connected(v,u) which returns a boolean depending on whether v and u are or not connected vertices. Something like a function to check just whether a certain edge exists.\nThanks","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":121,"Q_Id":70679146,"Users Score":1,"Answer":"It is solved by checking the result of the function g.edge(v,u). If add_missing=False it just returns None whenever the edge does not exist. Thanks to @NerdOnTour for the comment","Q_Score":0,"Tags":"python,graph-tool,complex-networks","A_Id":70679684,"CreationDate":"2022-01-12T09:29:00.000","Title":"Check whether two vertices are connected using graph-tool on Python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I trained and tested a model in python with Keras and save it in SavedModel format.\nThen I imported it with the C-API and TF_LoadSessionFromSavedModel().\nI predict the same set of data in python and C but get different results.\nThe predictions in python are ok, the predictions in C are not ok, but also not fully nonsense.\nThings I have checked:\n\nTensorflow Version 2.5 in Python and C\nuse of the same model\nuse of the same data with same format\nload the SavedModel in Python again\ntry different arcitectures\ntrain without keras in low-level tensorflow\n\nEvery time python results are good, C results are different and worse.\nIs there something wrong with the SavedModel Format regarding Python and C?\nOre any other tipps to solve this problem?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":70,"Q_Id":70682972,"Users Score":0,"Answer":"The Problem was, that the dataset was normalized in python as type float64\nand in C as type float32.\nSame type with same normalization gives the same result.\nThanks @GPhilo for your comment. It was the right direction!","Q_Score":0,"Tags":"python,c++,c,tensorflow,tensorflow2.0","A_Id":70684777,"CreationDate":"2022-01-12T14:05:00.000","Title":"Different prediction of Tensorflow in Python and C-API with SavedModel","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a SQL Server v2017 at work. When they installed machine learning it installed Python 3.5 with Pandas 0.19. I am trying to use read_excel on a file on a network drive. I can run the script on my local machine, but I have Python 3.9 and Pandas 1.35. 
The Script works fine locally but not when executed through the Server using EXECUTE sp_execute_external_script. I realize there could be a huge number of things that coul dbe causeing problems, but I need to rule out Pandas version first. The server is locke own adn it takes a lot of red tape to change something.\nCan Pandas 0.19 read_excel access excel files on a UNC address. I know the newer version can, but this would help me rule out the Pandas library as a source for the issue.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":32,"Q_Id":70687091,"Users Score":0,"Answer":"(I work for MS and I support SQL ML Services)\nThe short answer to your question is -\nYou will have a hard time accessing a UNC path in ML Services. It is technically possible, but the complications make it a no-go for many. You didn't show your code or errors, but I can assure you that your problem isn't with pandas, and perhaps you got an error about not being able to 'connect' because we disable outbound network traffic from ML services by default... but if you got past that, then you probably got an authentication error.\nThe long answer to your question is -\nSQL 2016 and 2017 - We use local 'worker' accounts. The default names (they are based on your instance name) are MSSQLSERVER01,02,03... 20. (There are 20 by default... there is also a MSSQLSERVER00, but we'll ignore that one).\nThe Launchpad service is ran by it's service account (default: NT Service\\MSSQLLaunchpad), and it can be ran as a domain account. But, it is not launchpad that is actually executing your R\/Python code. Launchpad kicks off the R process, and it does this under the MSSQLSERVERXX users. It is THAT user that is technically running your code, and therefore, it is that user that is trying to connect to your UNC path and not YOUR user that you are logged into SQL as. This user is a local user - which cannot authenticate across a UNC share. This issue comes down to a design limitation.\nIn Windows, there is no way to provide a username\/password in your UNC path (whereas, in Linux, you can). Using a mapped drive will not work because those are local-to-your-user-and-login-session. Therefore, a mapped drive of one logged in user will not be accessible to other users (and therefore the MSSQLSERVERXX users).\nIn short, if you absolutely wanted to make it work, you would have to disable authentication entirely on your network share. In Windows, this is more than just adding \"EVERYONE\" permissions to the file. You would also have to allow GUEST (or in the *nix world, ANONYMOUS) access to file shares. This is disabled by default in all recent Windows versions and you would have to modify various gpos\/registry settings\/etc to even allow that. It would not be my recommendation.\nIf this were in an AD environment, you could also theoretically allow the COMPUTER account of your SQL host so that ALL connections from THAT \"COMPUTER\" would be allowed. Again, less than ideal.\nIn SQL 2019 - we got rid of the local user accounts, and use appcontainers instead. This removes the need for local user accounts (many customers in large organizations have restrictions on local user accounts), and offers additional security, but as always, with more security comes more complexity. In this situation, if you were to run the launchpad service as a domain user, your R\/Python processes ARE executed as the LAUNCHPAD account (but in a very locked down appcontainer context). 
Theoretically, you could then grant THAT service account in AD access to your remote UNC share... BUT, appcontainers provide a far more granular control of specific 'permissions' (not file level permissions). For example, at least conceptually, when you are using an app on your phone, or perhaps a Windows store UWP app, and it asks 'do you want to allow this to access your camera?\" - those layer of permissions are something that appcontainers can provide. We have to explicitly declare individual 'capabilities', and we do not currently declare the ability to access UNC shares due to several other security implications that we must first consider and address. This too is a design limitation currently.\nThe above possibilities for SQL 2016\/2017 do not apply, and will not work, for SQL 2019.\nHowever, for all of them, while it may not be ideal, my suggestion and your best option is:\n\nReconsider which direction you are doing this. Instead of using your SPEES (sp_execute_external_scripts) code to access a network share, consider sharing out a directory from the SQL host itself... this way, you at least don't have to allow GUEST access, and can retain some level of permissions. Then you can drop whatever files you need into the share, but then access it via the local-to-that-host path (ex: C:\\SQL_ML_SHARE\\file.xel) in your SPEES code.","Q_Score":0,"Tags":"python,excel,pandas","A_Id":71343151,"CreationDate":"2022-01-12T19:07:00.000","Title":"Pandas 0.19 Read_Excel and UNC Addresses","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I encountered an issue that when I use Series.str.len() in pandas query method, and actually all the functions for Series.str is not supported in some of my envs, but work in other envs, and all these envs have almost same version of pandas and numpy. 
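To illustrate the workaround suggested at the end of the answer above, the Python half of the sp_execute_external_script call would read from a path that is local to the SQL host instead of a UNC share; the directory and file name below are hypothetical:

import pandas as pd

# Hypothetical folder shared out from the SQL host itself and read via its local path,
# so the ML Services worker accounts never have to authenticate across the network.
df = pd.read_excel(r"C:\SQL_ML_SHARE\input.xlsx")
print(df.head())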
(I'm sure Series.str.xxxxx could work in all my envs before)\nEnv1\nPython 3.9.7\nnumpy==1.21.4\npandas==1.3.4\nWhen I ran pd.DataFrame(columns=['core_text']).query(\"core_text.str.len()>1\"), it raised\n\nTraceback (most recent call last):\nFile \"\", line 1, in \nFile \"\/Users\/huhon\/miniconda3\/envs\/venv_dev\/lib\/python3.9\/site-packages\/pandas\/core\/frame.py\", line 4060, in query\nres = self.eval(expr, **kwargs)\nFile \"\/Users\/huhon\/miniconda3\/envs\/venv_dev\/lib\/python3.9\/site-packages\/pandas\/core\/frame.py\", line 4191, in eval\nreturn _eval(expr, inplace=inplace, **kwargs)\nFile \"\/Users\/huhon\/miniconda3\/envs\/venv_dev\/lib\/python3.9\/site-packages\/pandas\/core\/computation\/eval.py\", line 353, in eval\nret = eng_inst.evaluate()\nFile \"\/Users\/huhon\/miniconda3\/envs\/venv_dev\/lib\/python3.9\/site-packages\/pandas\/core\/computation\/engines.py\", line 80, in evaluate\nres = self._evaluate()\nFile \"\/Users\/huhon\/miniconda3\/envs\/venv_dev\/lib\/python3.9\/site-packages\/pandas\/core\/computation\/engines.py\", line 120, in _evaluate\n_check_ne_builtin_clash(self.expr)\nFile \"\/Users\/huhon\/miniconda3\/envs\/venv_dev\/lib\/python3.9\/site-packages\/pandas\/core\/computation\/engines.py\", line 36, in _check_ne_builtin_clash\nnames = expr.names\nFile \"\/Users\/huhon\/miniconda3\/envs\/venv_dev\/lib\/python3.9\/site-packages\/pandas\/core\/computation\/expr.py\", line 834, in names\nreturn frozenset(term.name for term in com.flatten(self.terms))\nTypeError: unhashable type: 'numpy.ndarray'\n\nEnv2\nPython 3.9.9\nnumpy==1.21.4\npandas==1.3.4\nIt works perfected.\nAnyone can help? Thanks in advance!\nHong","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":144,"Q_Id":70692645,"Users Score":0,"Answer":"The problem is probably with the versions of numexpr. This module evaluates all string expressions for Pandas like query or pd.eval.\nThe solution is to upgrade your version of numexpr (or remove and reinstall it).","Q_Score":1,"Tags":"python,pandas","A_Id":70694070,"CreationDate":"2022-01-13T07:08:00.000","Title":"TypeError: unhashable type: 'numpy.ndarray' when I use Series.str.len() in pandas query?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Firstly I guess this question might be duplicated, but I couldn't find it.\nQuestion. It's actually in the title.\n\nCan I get Essential Matrix from Fundamental matrix with few matching keypoints, but without intrinsic parameters?\n\nSituation. I am trying to find Essential matrix. I use my phone camera to take photos, and then extract keypoints using SIFT(or ORB). I have 2 images of an object and it's matching points. I could get F, Fundamental Matrix but I have no idea how to get Essential Matrix from this.\nI don't have camera intrinsic parameters, such as Fx, Fy, Cx, Cy.\nI am stuck to this situation. 
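As a quick way to confirm the diagnosis in the answer above (numexpr mishandling the .str expression), the same query can be forced through the pure-Python engine, which bypasses numexpr; this is only a workaround sketch, not the answer's recommended fix of upgrading numexpr:

import pandas as pd

df = pd.DataFrame({"core_text": ["a", "abc", ""]})

# engine="python" skips numexpr entirely, so .str accessors evaluate normally
print(df.query("core_text.str.len() > 1", engine="python"))

# equivalent boolean-mask form that avoids query() altogether
print(df[df["core_text"].str.len() > 1])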
I googled but couldn't get answers.\nLet me know if it's duplicated then I'd delete this.\nPLUS: I also don't have camera coordinates or world coordinates.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":135,"Q_Id":70693520,"Users Score":0,"Answer":"No you can't, you need to know the metric size of the pixel and the focal because essential matrix is in the real world.","Q_Score":0,"Tags":"python,opencv,matrix,camera-calibration","A_Id":70694262,"CreationDate":"2022-01-13T08:33:00.000","Title":"Can I get Essential Matrix from Fundamental matrix with few matching keypoints, but without intrinsic parameters?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I think this is a very basic question, my apologies as I am very new to pytorch. I am trying to find if an image is manipulated or not using MantraNet. After running 2-3 inferences I get the CUDA out of memory, then after restarting the kernel also I keep getting the same error: The error is given below:\nRuntimeError: CUDA out of memory. Tried to allocate 616.00 MiB (GPU 0; 4.00 GiB total capacity; 1.91 GiB already allocated; 503.14 MiB free; 1.93 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF\nThe 'tried to allocate memory(here 616.00 MiB) keeps changing. I checked the GPU statistics and it shoots up while I try to do the inferencing. In tensorflow I know we can control the memory usage by defining an upper limit, is there anything similar in pytorch that one can try?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":857,"Q_Id":70697046,"Users Score":1,"Answer":"So during inferencing for a single image I realized there was no way I was able to fit my image in the gpu which I was using, so after resizing the image to a particular value I was able to do the inferencing without facing any memory issue.","Q_Score":0,"Tags":"python,pytorch,gpu,pytorch-lightning","A_Id":70797731,"CreationDate":"2022-01-13T13:00:00.000","Title":"pytorch cuda out of memory while inferencing","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm working on merging two datasets on python; however, I'm running into a sorting issue while preparing the excel files for processing.\nExcel 1 sorts A-Z of project ID's as:12.a2.b3\nHowever, excel 2 sorts A-Z as:132.a2.b\nHow do I make sure they both sort as excel 1?\nI've changed format of columns from General to number for both and it's still similar outcome.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":31,"Q_Id":70700093,"Users Score":0,"Answer":"IMHO, sorting is unnecessary. u want:\n\nmerging two datasets on python\n\nThus, just import\/merge both data 1st.. then sort in python.. just from looking in the output file you can see if some of the row label IS actually different. 
Eg : \"2.a\" vs \"2.a \"","Q_Score":0,"Tags":"python,excel,sorting","A_Id":70737928,"CreationDate":"2022-01-13T16:42:00.000","Title":"A-Z sorting is different between two excel files","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a numpy array of shape (10, 10, 1024, 1024, 3). This represents an 10x10 grid of images, each of shape (1024, 1024, 3) (1024x1024 color images). I want to reshape this into one array of shape (10*1024, 10*1024, 3), where the 1024x1024 patch of pixels in the upper left of the new image corresponds to the [0, 0] index of my original array. What's the best way to do this using numpy?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":45,"Q_Id":70702010,"Users Score":0,"Answer":"This should do the job: np.swapaxes(arr,1,2).reshape(10*1024, 10*1024, 3). Note that swapaxis generates an array of shape (10, 1024, 10, 1024, 3).","Q_Score":0,"Tags":"python,arrays,numpy","A_Id":70702323,"CreationDate":"2022-01-13T19:24:00.000","Title":"Reshaping a 2D array of images using numpy","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to calculate maximum diameter of a 3D binary mask of a nodule (irregular shape).\nI have implemented a function that calculates the distance of all boundary points between each other. This method is very computational expensive while dealing with tumors or larger volume.\nSo my Question is what can be the possible methods to calculate maximum diameter of a 3d binary mask which is less computationally expensive.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":55,"Q_Id":70709648,"Users Score":1,"Answer":"Something similar to a Gradient Descent could be implemented.\n\nStart with 2 points (A and B), located randomly around the 3D mask\nFor point A, calculate the direction to travel on the 3D mask that will most increase the distance between him and point B.\nMake point A take a small step in that direction.\nFor point B, calculate the direction to travel on the 3D mask that will most increase the distance between him and point A.\nMake point B take a small step in that direction.\nRepeat until it converges.\n\nThis will very likely find local maxima, so you would probably have to repeat the experiment several times to find the real global maxima.","Q_Score":0,"Tags":"python","A_Id":70709921,"CreationDate":"2022-01-14T11:07:00.000","Title":"Calculate maximum diameter in 3D binary mask","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"in a linear regression model let's say we have 3 independent variable (age, height, gender) and one dependent variable (diabetic) and then we split the model as, X train- i.e. (say 70%) data of independent variables for training, X test-> i.e. 30% data of independent variables for testing\ny train-> i.e. (say 70%) data of dependent variable for training, y test-> i.e. 
30% data of dependent variables for testing\nso when we do predict X-test, or predict X-test, are we predicting values of independent variables or are we predicting the dependent variable (diabetic?)","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":126,"Q_Id":70721510,"Users Score":0,"Answer":"So our goal in training is given some features or dependant variables or X ind a model that predicts the independent variable or y from X. We mostly do that by minimizing some loss function f(M(X), y). When it comes to testing, we want to see how good our model is when applied to examples that were not part of the training set. So we take our trained model M, feed it with our features X_test and check how good it predicts y_test.","Q_Score":0,"Tags":"python,regression,prediction","A_Id":70721596,"CreationDate":"2022-01-15T12:37:00.000","Title":"i need clarity with prediction of X test, X train, y test, y train","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"in a linear regression model let's say we have 3 independent variable (age, height, gender) and one dependent variable (diabetic) and then we split the model as, X train- i.e. (say 70%) data of independent variables for training, X test-> i.e. 30% data of independent variables for testing\ny train-> i.e. (say 70%) data of dependent variable for training, y test-> i.e. 30% data of dependent variables for testing\nso when we do predict X-test, or predict X-test, are we predicting values of independent variables or are we predicting the dependent variable (diabetic?)","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":126,"Q_Id":70721510,"Users Score":0,"Answer":"We are predicting the dependent variable i.e diabetic.\nYou can compare your results with Y test to get accuracy of your model.","Q_Score":0,"Tags":"python,regression,prediction","A_Id":70721585,"CreationDate":"2022-01-15T12:37:00.000","Title":"i need clarity with prediction of X test, X train, y test, y train","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have 2 lists\n\n[A,B,C,D,E,F] - first list\n[X,X,X,X,X,X] - second list\n\nI would like to take last element of the first list and if there are any elements in the second list move them to the left and add the element as last.\n\n[A,B,C,D,E,F]\n[X,X,X,X,X,X]\n[A,B,C,D,E]\n[X,X,X,X,X,F]\n[A,B,C,D]\n[X,X,X,X,F,E]\n\nTill there is only first element in the first array, so it would stop at:\n\n[A]\n[X,F,E,D,C,B]\n\nI'm quite new to Python, I would really appreciate some help","AnswerCount":7,"Available Count":1,"Score":0.0285636566,"is_accepted":false,"ViewCount":74,"Q_Id":70731104,"Users Score":1,"Answer":"You can use for loop for this.\nAnd you can access the last elements by using -1 as index values.","Q_Score":1,"Tags":"python,list","A_Id":70731190,"CreationDate":"2022-01-16T14:41:00.000","Title":"Python changing last elements of list","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Let's say I have a table that has 5 rows and 10 
columns:\n\nRow 1 has 3 missing values\nRow 2 has 2 missing values\nRow 3 has 8 missing values\nRow 4 has 5 missing values\nRow 5 has 2 missing values\n\nI would like the function to return me row 2 & 5","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":29,"Q_Id":70748203,"Users Score":0,"Answer":"df.isnull().sum(axis=1) will return the number of missing values per rows.\nmin(df.isnull().sum(axis=1)) will return the minimum missing values for a row\ndf[df.isnull().sum(axis=1) == min(df.isnull().sum(axis=1))] will return the rows that have the minimum amount of missing values","Q_Score":0,"Tags":"python,pandas","A_Id":70748228,"CreationDate":"2022-01-17T22:23:00.000","Title":"Return rows that have the minimum amount of missing values pandas","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Would there be \u201cdilution\u201d of accuracy if I train the same text classification model with multiple training datasets? For example, my end users would be providing (uploading) their own tagged CSVs to train the model and use the trained model in the future. The contexts of datasets would be different - L&D, Technology, Customer Support, etc.\nIf yes, how do I have a \u201cseparate instance or model\u201d for each user?\nI am using Python and would possibly use Gradio or Streamlit as the UI. Open to advice.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":42,"Q_Id":70750444,"Users Score":1,"Answer":"I ended up using huggingface's zero-shot classification.","Q_Score":1,"Tags":"python,text-classification","A_Id":72430464,"CreationDate":"2022-01-18T04:54:00.000","Title":"Text Classification - Multiple Training Datasets","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a task about clusterization in python. When I did this clusterization I need to check the result with business logic.\nI dont see the pattern in solved clusters. Next, I decided to do post analysis with correlation. I take one cluster and calculate a correlation pairwise. In calculation I used whole feature unlike a clusterization when I used only 3.\nI got a high level of correlation from 0.99 to 1 in whole cluster. For me it means that algorithm watched the logic in cluster.\nBut, i did this clusterization to solved a problem with banks data (i wont to see the client's pattern like (issued amount > 50.000,age < 22, salary < 80.000 - this client, for instance bad)). And I cant see the business logic, for me it's random data.\nWith this description I have a question. How can i check the logic in the clusters except a simple self-checking ?\nI think there are 2 reasons. First, my clusterization is bad and I need to write a new one. Second, the data is bad and I need to check data and do a post analysis\nI did a BIRCH cluster with StandardScaler.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":29,"Q_Id":70751515,"Users Score":0,"Answer":"All of verification methods are 'empirical'.\n\nYou can compare the different methods of clusterization and choose the best one.\nThe correlation comparison methods:\na) If correlation approximately 1. 
You need to calculate each row's average and median. Next, compare these two values and drop the bad rows.\nb) If the correlations differ across the matrix: calculate the averages for all rows and compare each value with the mean of those averages; choose the good ones like this: 'value > mean(avg)'","Q_Score":0,"Tags":"python,cluster-analysis,cluster-computing,hierarchical-clustering","A_Id":70783707,"CreationDate":"2022-01-18T07:09:00.000","Title":"Are there any verification methods how good cluster I got?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am new to TensorFlow and I have a question: can I give my conv2d an input whose shape is not fixed?\ninputs.shape =(?, 1568)\nWhen I train my neural network I get this message:\nraise ValueError(f'Input {input_index} of layer \"{layer_name}\" ' ValueError: Input 0 of layer \"conv2d\" is incompatible with the layer: expected min_ndim=4, found ndim=2. Full shape received: (Dimension(None), Dimension(1568))\nMy conv2d layer is like this:\n x = Conv2D(32, (3,3), padding=\"same\",input_shape=input_shape[1:])(inputs)","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":75,"Q_Id":70769357,"Users Score":0,"Answer":"I resolved that by using tf.expand_dims","Q_Score":2,"Tags":"python,tensorflow,keras,conv-neural-network","A_Id":70778470,"CreationDate":"2022-01-19T10:56:00.000","Title":"can I use conv2d with input dimension not fixed?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am quite certain I installed everything correctly, and the required file is clearly in my path. I am not sure what I can do at this point. 
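The one-line accepted answer above ("I resolved that by using tf.expand_dims") can be sketched roughly like this, assuming a recent TF 2.x where TensorFlow ops can be applied to Keras tensors; how the asker actually arranged the 1568 features is not stated, so the shapes here are illustrative:

import tensorflow as tf
from tensorflow.keras.layers import Input, Conv2D

inputs = Input(shape=(1568,))            # (batch, 1568) -> ndim=2, which Conv2D rejects
x = tf.expand_dims(inputs, axis=-1)      # (batch, 1568, 1)
x = tf.expand_dims(x, axis=-1)           # (batch, 1568, 1, 1) -> ndim=4, as Conv2D expects
x = Conv2D(32, (3, 3), padding="same")(x)
model = tf.keras.Model(inputs, x)
model.summary()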
Please Help.\necho %path%\nC:\\Users\\idvin\\anaconda3\\envs\\3.7;C:\\Users\\idvin\\anaconda3\\envs\\3.7\\Library\\mingw-w64\\bin;C:\\Users\\idvin\\anaconda3\\envs\\3.7\\Library\\usr\\bin;C:\\Users\\idvin\\anaconda3\\envs\\3.7\\Library\\bin;C:\\Users\\idvin\\anaconda3\\envs\\3.7\\Scripts;C:\\Users\\idvin\\anaconda3\\envs\\3.7\\bin;C:\\Users\\idvin\\anaconda3\\condabin;C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v11.6\\bin;C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v11.6\\libnvvp;C:\\Program Files (x86)\\Common Files\\Oracle\\Java\\javapath;C:\\Program Files (x86)\\Intel\\Intel(R) Management Engine Components\\iCLS;C:\\Program Files\\Intel\\Intel(R) Management Engine Components\\iCLS;C:\\Windows\\system32;C:\\Windows;C:\\Windows\\System32\\Wbem;C:\\Windows\\System32\\WindowsPowerShell\\v1.0;C:\\Windows\\System32\\OpenSSH;C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common;C:\\Program Files (x86)\\Intel\\Intel(R) Management Engine Components\\DAL;C:\\Program Files\\Intel\\Intel(R) Management Engine Components\\DAL;C:\\Program Files (x86)\\Intel\\Intel(R) Management Engine Components\\IPT;C:\\Program Files\\Intel\\Intel(R) Management Engine Components\\IPT;C:\\Program Files\\Intel\\WiFi\\bin;C:\\Program Files\\Common Files\\Intel\\WirelessCommon;C:\\WINDOWS\\system32;C:\\WINDOWS;C:\\WINDOWS\\System32\\Wbem;C:\\WINDOWS\\System32\\WindowsPowerShell\\v1.0;C:\\WINDOWS\\System32\\OpenSSH;C:\\Program Files\\MATLAB\\R2020b\\bin;C:\\Users\\idvin\\Downloads\\elastix-4.9.0-win64;C:\\Program Files\\NVIDIA Corporation\\Nsight Compute 2022.1.0;C:\\Program Files\\NVIDIA Corporation\\NVIDIA NvDLISR;C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v11.6\\bin\\cudnn_cnn_infer64_8.dll;D:\\dll_x64\\dll_x64\\zlibwapi.dll;C:\\Program Files\\MySQL\\MySQL Shell 8.0\\bin;C:\\Users\\idvin\\AppData\\Local\\Microsoft\\WindowsApps;C:\\Users\\idvin\\AppData\\Local\\Programs\\Julia-1.6.4\\bin;D:\\dll_x64\\dll_x64\\zlibwapi.dll;C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v11.6\\bin\\cudnn_cnn_train64_8.dll;.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":143,"Q_Id":70769740,"Users Score":1,"Answer":"solved it in my path i needed to add the directory of zlibwapi.dll not zlibwapi.dll itself","Q_Score":1,"Tags":"python,tensorflow,cudnn","A_Id":70770098,"CreationDate":"2022-01-19T11:25:00.000","Title":"Could not load library cudnn_cnn_infer64_8.dll. Error code 126 Please make sure cudnn_cnn_infer64_8.dll is in your library path","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"[Redacted]\nIn this example, in my final cell of code, I try to call my model. This is following the tutorial on a Youtube video.\nIn this step, the video is able to perform the lines\nmodel = UCC_Classifier(config)\nthen in the next cell\nloss, output = model(input_ids.unsqueeze(dim=0), am.unsqueeze(dim=0), labels.unsqueeze(dim=0))\nTo successfully get a result. However when I try and do the same thing, I get told my class is not callable. 
I cannot see any difference and am unsure why this might not be callable.\nThanks","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":212,"Q_Id":70773312,"Users Score":1,"Answer":"Your UCC_Classifier model should be a pl.LightningModule, not a pl.LightningDataModule.","Q_Score":0,"Tags":"python,pytorch,typeerror,pytorch-lightning","A_Id":70773358,"CreationDate":"2022-01-19T15:30:00.000","Title":"Python TypeError - 'Class' object is not callable (Google Collab Example Inside)","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"So whenever i am trying to read from a source with stream i get this error \"A file referenced in the transaction log cannot be found\" and it points to a file that does not exist.\nI have tried:\n\nChanging the checkpoint location\nChanging the start location\nRunning \"spark._jvm.com.databricks.sql.transaction.tahoe.DeltaLog.clearCache()\"\n\nIs there anything else i could do?\nThanks in advance guys n girls!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":166,"Q_Id":70773415,"Users Score":0,"Answer":"So! I had another stream that was running and it had the same parent directory as this stream.. this seems to have been a issue.\nFirst stream was looking in: .start(\"\/mnt\/dev_stream\/first_stream\")\nSecond stream was looking in: .start(\"\/mnt\/dev_stream\/second_stream\")\nEditing the second stream to look in .start(\"\/mnt\/new_dev_stream\/new_second_stream\") fixed this issue!","Q_Score":0,"Tags":"python,databricks,azure-databricks,databricks-connect","A_Id":70773744,"CreationDate":"2022-01-19T15:37:00.000","Title":"Databricks streaming \"A file referenced in the transaction log cannot be found\"","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"tfds.load(name=\"imdb_reviews\", data_dir=direc, split=\"train\", with_info=True, as_supervised=True)\ni have download the dataset , it has downloads and imdb_reviews directories, in the imdb_reviews directory, it has plain_text directory and inside it, exists a directory named 1.0.0 and there are some files inside that. let me say the path to train is: '\/content\/drive\/MyDrive\/datasets\/packt\/imdb\/imdb_reviews\/plain_text\/1.0.0\/imdb_reviews-train.tfrecord-00000-of-00001' and the path to test is '\/content\/drive\/MyDrive\/datasets\/packt\/imdb\/imdb_reviews\/plain_text\/1.0.0\/imdb_reviews-test.tfrecord-00000-of-00001' , there are also dataset_info.json and features.json and labels.labels.txt and an unsupervised file, how can I replace the command so that it does not cause other problems. I want to tokenize and encode it with a function\nbert_train= [bert_encoder(r) for r,l in imdb_train]\nand there is\nencoded= tokenizer.encode_plus(text, add_special_tokens=True, max_length=150, pad_to_max_length=True,truncation=True,return_attention_mask=True, return_token_type_ids=True )\ninside that encoding function.\nthank you","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":79,"Q_Id":70776829,"Users Score":0,"Answer":"I found the answer. 
if you give the directory to the command tfds.load() then the next time it does not download the dataset because it finds out there exits the data in your drive. so there is actually no need to replace the command with other things.","Q_Score":0,"Tags":"python,nlp,tokenize,tensorflow-datasets","A_Id":70919258,"CreationDate":"2022-01-19T19:44:00.000","Title":"how to replace the command tfds.load for imdb reviews with download dataset file?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am writing unit tests for 2 data frames to test for equality by converting them to dictionaries and using unittest's assertDictEqual(). The context is that I'm converting Excel functions to Python but due to their different rounding system, some values are off by merely +\/- 1\nI've attempted to use the DF.round(-1) to round to the nearest 10th but due to the +\/- 1, some numbers may round the opposite way so for example 15 would round up but 14 would round down and the test would fail. All values in the 12x20 data frame are integers\nWhat I'm looking for (feel free to suggest any alternate solution):\n\nA CLEAN way to test for approximate equality of data frames or nested dictionaries\nor a way to make the ones-digit of each element '0' to avoid the rounding issue\n\nThank you, and please let me know if any additional context is required. Due to confidentiality issues and my NDA (non-disclosure agreement), I cannot share the code but I can formulate an example if necessary","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":36,"Q_Id":70778057,"Users Score":1,"Answer":"I'm not 100 pourcent sure I got what you are trying to do but why not just divide by 10 to lose the last digit that is bothering you?\ndivision with \"\/\/\" will keep only the significant numbers. You can then multiply by ten if you want to keep the overall number size.","Q_Score":1,"Tags":"python,pandas","A_Id":70778217,"CreationDate":"2022-01-19T21:37:00.000","Title":"Pandas Dataframe: Change each value's ones-digit","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have one large dataframe that currently has both \"?\", \"??\", and NaN values that I'm trying to remove. I want to redefine the columns to be booleans to see whether they contained \"?\", \"??\" or NaN.\nMy current approach involves cloning different columns of the dataframe based on whether they contain just \"?\", just \"??\" or just NaN values and separately iterating through the columns, col, to change the values (ex: df[col] = df[col].isnull()) and finally merging them together again.\nIs there an easier way to do this?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":56,"Q_Id":70779165,"Users Score":0,"Answer":"how about using quick-sort algorithm. And, I am not sure I got what your data looks like and what results you want to obtain. 
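A small sketch of the floor-division idea from the answer above, applied to a DataFrame of integers (the values are made up):

import pandas as pd

df = pd.DataFrame({"a": [14, 15, 26], "b": [99, 101, 250]})

# Integer-divide by 10 to drop the ones digit, so a +/-1 wobble from Excel-style
# rounding can no longer flip a comparison; multiply back by 10 to keep the magnitude.
df_stable = (df // 10) * 10
print(df_stable)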
Maybe you could show us parts of your data frame.","Q_Score":0,"Tags":"python,pandas,dataframe","A_Id":70779231,"CreationDate":"2022-01-19T23:43:00.000","Title":"How to replace values in Dataframe based on multiple conditions?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've seen a lot of tutorials in Youtube about template matching in opencv-python, one thing they have in common is that they always uses the source image when matching the template. My question is does template matching works if the template is not from the original image? Can I use it as a simple method for object detection? And how accurate it will be? thanks","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":171,"Q_Id":70783341,"Users Score":0,"Answer":"Template matching is a technique in digital image processing for finding small parts of an image which match a template image.\nThis technique only works in the images that are almost the same. Small changes on the desired objects included in new frames can create difficulties to make a good match.\nTo detect an object(template) in an image you could use local feature descriptors and match the descriptors of every keypoint to detect if a zone has a high number of matches to assign them to the object.\nHope it works.","Q_Score":1,"Tags":"python,opencv,object-detection","A_Id":70783742,"CreationDate":"2022-01-20T09:01:00.000","Title":"Does template matching works if the template is not from the original image?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I can use print(70*\"_\") in python to output dashed line as a separator between results.\nNow how can I output the same results in R.","AnswerCount":2,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":58,"Q_Id":70786415,"Users Score":2,"Answer":"strrep(\"_\", 70) this is just a base R function\n[1] \"______________________________________________________________________\"","Q_Score":0,"Tags":"python,r,printing","A_Id":70786751,"CreationDate":"2022-01-20T12:39:00.000","Title":"Using print() function in R to print dashed lines (not in graphs)","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am working in an NLP task for a classification problem. My dataset is imbalanced and some authors have 1 only text, thus I want to have this text only in the training test. As for the other authors I have to have a spliting of 70%, 15% and 15% respectivelly.\nI tried to use train_test_split function from sklearn, but the results aren't good.\nMy dataset is a dataframe and it looks like this\nTitle Preprocessed_Text Label\n\nPlease let me know.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":63,"Q_Id":70849127,"Users Score":0,"Answer":"Whit only One sample of a particular class it seems impossible to measure the classification performance on this class. So I recommend using one or more oversampling approaches to overcome the imbalance problem ([a hands-on article on it][1]). 
As a matter of fact, you must pay more attention to splitting the data in such a way that preserves the prior probability of each class (for example by setting the stratify argument in train_test_split). In addition, there are some considerations about the scoring method you must take into account (for example accuracy is not the best fit for scoring).","Q_Score":1,"Tags":"python,nlp","A_Id":70850322,"CreationDate":"2022-01-25T13:04:00.000","Title":"Train\/Validation\/Testing sets for imbalanced dataset","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to calculate standard deviation of \"series\" for each row but the problem is every row in my column has a nested list.\nMy data frame is like this:\n\n\n\n\nnumber\nseries\n\n\n\n\n1\n69,1,33,1,51,13,88,75,632\n\n\n2\n9,1,400,1,51,13,27,5,132\n\n\n3\n9,1,3,1,5,13,21,5,3\n\n\n4\n1,1,343,1,51,13,74,27,3\n\n\n5\n9,1,73,1,51,13,94,75,2","AnswerCount":1,"Available Count":1,"Score":0.6640367703,"is_accepted":false,"ViewCount":84,"Q_Id":70851124,"Users Score":4,"Answer":"If the series is a list\ndf[\"std\"] = df[\"series\"].apply(np.std)\nIf the series is a string\ndf[\"std\"] = df[\"series\"].apply(lambda x: [int(i) for i in x.split(\",\")]).apply(np.std)","Q_Score":0,"Tags":"python,pandas,dataframe","A_Id":70851192,"CreationDate":"2022-01-25T15:14:00.000","Title":"Calculate standard deviation for each row(rows contains list)","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a network diagram that is sketched in Visio. I would like to use it as an input for the networkx graph from node2vec python package. The documentation says that there is a function called to_networkx_graph() that takes, as its input, the following types of data:\n\"any NetworkX graph dict-of-dicts dict-of-lists container (e.g. set, list, tuple) of edges iterator (e.g. itertools.chain) that produces edges generator of edges Pandas DataFrame (row per edge) numpy matrix numpy ndarray scipy sparse matrix pygraphviz agraph\"\nBut, still, not mentioning other formats like Visio, pdf, odg, PowerPoint, etc.\nSo, how to proceed?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":68,"Q_Id":70900560,"Users Score":2,"Answer":"I think you need to create some data in the format referred to in the documentation, not just a network diagram. A Visio diagram will not do the job and I know of no way to do a conversion.","Q_Score":0,"Tags":"python-3.x,networkx,visio","A_Id":70907712,"CreationDate":"2022-01-28T21:37:00.000","Title":"How to convert network diagram in Visio format to a a networkx graph?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a very basic question.\nThe input is api feed from source, that has created date as a column. 
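The stratify suggestion in the answer above can be sketched as a two-step 70/15/15 split; the X and y below are placeholders, and authors with a single text would have to be set aside for the training set before splitting:

import numpy as np
from sklearn.model_selection import train_test_split

X = np.arange(200).reshape(100, 2)      # placeholder features
y = np.array([0, 1, 2, 3] * 25)         # placeholder labels, four classes

# 70% train, then split the remaining 30% in half -> 15% validation, 15% test,
# preserving class proportions at each step via stratify.
X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, test_size=0.30, stratify=y, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_tmp, y_tmp, test_size=0.50, stratify=y_tmp, random_state=0)
print(len(X_train), len(X_val), len(X_test))   # 70 15 15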
What I am looking to accomplish is to store this file(by splitting it up) into the following format:\nlanding\/year=2020\/month=01\/date=01 and so on...\nThe year, month, date values are the dates from Created_at column.\nTHe file will be stored as transaction_id.parquet (transaction_id is also another column in the feed).\nWhat is the suggested option to get to this structure? Is it prefix for each file by splitting created_date into year, month, date?\nLooking for you response.\nThanks","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":175,"Q_Id":70900619,"Users Score":0,"Answer":"Your design should be something like below\n\nCreate a file in YYYYMMDD format\nlet's assume that you are receiving a file named 20220129file_name.txt\nSplit it by \"_\" to get the DATE portion\nSplit other parts such as year\/month and day\nCreate another function to validate if a particular year\/month\/day S3 folder exists? if yes then put the file in that folder or else create the folder set and put the file.\nThere is no ready-made code for the same but you can create it. It's pretty simple.","Q_Score":0,"Tags":"python,amazon-web-services,amazon-s3","A_Id":70921284,"CreationDate":"2022-01-28T21:43:00.000","Title":"Store file in S3 according to Year, month,date","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have the following dataframe:\n\n\n\n\nName\n1 -30\nLimit\n\n\n\n\nA\n100\n1000\n\n\nB\n200\n1000\n\n\n\n\nI am trying to create a subset of this dataframe, for only the first two columns, with the following code:\nSub_DF = DF[[\"Name\",\"1-30\"]]\nThis unfortunately leads to the following error: KeyError: \"['1-30'] not in index\"\nSo my expected output should look like this:\n\n\n\n\nName\n1 -30\n\n\n\n\nA\n100\n\n\nB\n200\n\n\n\n\nI have tried using the iloc function but that did not help. I also tried to enter 1-30 without quotation marks.\nPlease find below the info about the column names:\n\nName 243 non-null object\n1 - 30 245 non-null float64\nCred.limit 213 non-null float64\n\nAny tips? Please note that I am new to programming :)","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":51,"Q_Id":70916075,"Users Score":0,"Answer":"Sub_DF = DF[[\"Name\",\"1 -30\"]]","Q_Score":0,"Tags":"python,pandas,dataframe,subset","A_Id":70916209,"CreationDate":"2022-01-30T15:26:00.000","Title":"Subset of dataframe, \"not in index \" error","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Can anyone please help me?\nI am trying to run a .py script for which I need an older pytorch version, because a function I am using is deprecated in later torch versions. But I seem not to be able to install it correctly.\nI installed torch into my virtual environment using\nconda create -n my_env python=3.6.2\nsource activate my_env\nconda install pytorch==1.7.0 torchvision==0.8.0 torchaudio==0.7.0 cudatoolkit=10.2 -c pytorch\nThen I have a python file (myfile.py) that I start using a shell file (start.sh). 
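A rough sketch of the landing/year=/month=/date= layout described in the question and answer above; the timestamp, transaction id, and the idea of handing the key to boto3 are illustrative assumptions:

from datetime import datetime

created_at = "2020-01-01T17:40:14"      # hypothetical Created_at value from the feed
transaction_id = "txn-0001"             # hypothetical transaction_id value

dt = datetime.fromisoformat(created_at)
key = f"landing/year={dt:%Y}/month={dt:%m}/date={dt:%d}/{transaction_id}.parquet"
print(key)   # landing/year=2020/month=01/date=01/txn-0001.parquet

# The key would then be used with a client such as boto3, e.g.
# s3_client.upload_file(local_path, "my-bucket", key)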
The files are on a SLURM-cluster, so I start start.sh with sbatch start.sh.\nstart.sh\nsource activate my_env\nsrun --unbuffered python myfile.py\nmyfile.py\nimport torch\nprint(torch.__version__)\nThe print command from myfile.py still returns \"1.8.0\", but using conda list in my environment shows \"1.7.0\" for pytorch version.\nEven when I type python -c \"import torch; print(torch.__version__)\" directly into terminal, it will return \"1.8.0\" (rather than \"1.7.0\" from conda list)\nAm I doing something very obvious wrong possibly? Did I install in a wrong way, or is somehow my environment not properly loaded in the python file?\nBest regards and thanks a lot in advance","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":96,"Q_Id":70918758,"Users Score":1,"Answer":"It turned out that installing the environment as described added a link to another python installation to my PYTHONPATH (a link to \/.local\/python) and that directory was added to PYTHONPATH in a higher order than the python used in my environment (\/anaconda\/env\/my_env\/python\/...) .\nTherefore, the local version of python was used instead.\nI could not delete it from PYTHONPATH either, but changing the directory name to \/.local\/_python did the trick.\nIt's not pretty, but it works.\nThanks everyone for the contributions!","Q_Score":0,"Tags":"python,pytorch,conda,slurm,virtual-environment","A_Id":71037063,"CreationDate":"2022-01-30T20:44:00.000","Title":"Using older torch version in conda environment not working","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm just trying to get my head around np.empty(), I understand that it creates an uninitialized array but I'm failing to understand what that means and where the values come from. Any help would be welcomed, thanks in advance.","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":81,"Q_Id":70927114,"Users Score":1,"Answer":"numpy is not Python but a wrapper around C code. numpy.empty returns a (wrapper around) an uninitialized C array. You should never try to read a value that you have not previously written because it can be anything including a trap value on systems that have it. It is know as Undefined Behaviour (a close parent to Hell) by C programmers...","Q_Score":2,"Tags":"python,numpy,numpy-ndarray","A_Id":70927352,"CreationDate":"2022-01-31T13:51:00.000","Title":"What is an uninitialized array and what are the values returned by numpy.empty?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Suppose I have four matrices, a, b, c, and d.\nIn Python (with numpy), I need to do result = np.matmul(np.matmul(np.matmul(a,b),c),d) to multiply them.\nIn MATLAB\/GNU Octave, I can multiply them in a much simpler manner, result = a*b*c*d.\nIs there any way to multiply matrices in Python, so that I would not have to repeatedly write np.matmul avoid nested brackets?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":111,"Q_Id":70955671,"Users Score":3,"Answer":"Use the @ operator. 
result = a@b@c@d.","Q_Score":1,"Tags":"python,numpy,matrix-multiplication","A_Id":70955707,"CreationDate":"2022-02-02T12:30:00.000","Title":"How to multiply many matrices in one go in python?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to compare two 13-D vectors using the cosine similarity but want all of the column entries\/features to have equal weighting. Right now, I have 3 features with much larger values that appear to be too heavily-weighted in my comparison results. Is there any easy way to normalize the different features so that they are on a similar scale? I am doing this in python.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":135,"Q_Id":70963950,"Users Score":0,"Answer":"The usual approach is, for each feature x, to recalculate it as x = x - np.mean(x); this will place your frame of reference at the center of the cluster (\"look at the points from closer\").\nThen, for each feature, x = x \/ sqrt(mean(x**2)); this will normalize the features and make the points more evenly distributed over all possible directions in the feature space.","Q_Score":0,"Tags":"python,scipy,cosine-similarity","A_Id":70986221,"CreationDate":"2022-02-02T22:56:00.000","Title":"Cosine Similarity normalization","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"[[0.12673968 0.15562803 0.03175346 0.6858788 ]]\nThis is how my predict function gives its output; I want to fetch the index of the highest value.\nTried this:\npred= pred.tolist() print(max(pred)) index_l=pred.index(max(pred)) print(index_l)\nBut it seems to output only 0.\nPrinting max(pred) gives the output:\n[0.12673968076705933, 0.1556280255317688, 0.031753458082675934, 0.6858788132667542]\nThe network uses sequential with hidden layers (embedding, BiLSTM, BiLSTM, Dense, Dense)","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":12,"Q_Id":70986710,"Users Score":0,"Answer":"You just need to use np.argmax(pred[0]). Since your pred has the shape [[]] rather than [], its only element is the inner list itself. So in order to get the index of the max you need to use np.argmax(pred[0]).","Q_Score":0,"Tags":"python-3.x,data-science,recurrent-neural-network","A_Id":70997494,"CreationDate":"2022-02-04T12:53:00.000","Title":"How to capture index of highest valued array from predict function?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a df with 3 columns, City, State, and MSA. Some of the MSA values are NaN. I would like to fill the MSA NaN values with a concatenation of City and State. 
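A minimal sketch of the per-feature normalization described in the cosine-similarity answer above, using random placeholder data:

import numpy as np

rng = np.random.default_rng(0)
X = rng.random((50, 13))                 # 50 samples of 13-D feature vectors (placeholder data)

# Center each feature, then scale it to unit RMS, so no single large-valued
# feature dominates the cosine comparison.
Xc = X - X.mean(axis=0)
Xn = Xc / np.sqrt((Xc ** 2).mean(axis=0))

def cosine(u, v):
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

print(cosine(Xn[0], Xn[1]))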
I can fill MSA with City using df.MSA_CBSA.fillna(df.City, inplace=True), but some cities in different states have the same name.\n\n\n\n\nCity\nState\nMSA\n\n\n\n\nChicago\nIL\nChicago MSA\n\n\nBelleville\nIL\nNan\n\n\nBelleville\nKS\nNan\n\n\n\n\n\n\n\nCity\nState\nMSA\n\n\n\n\nChicago\nIL\nChicago MSA\n\n\nBelleville\nIL\nBelleville IL\n\n\nBelleville\nKS\nBelleville KS","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":140,"Q_Id":70997079,"Users Score":0,"Answer":"Keep using the vectorized operation that you suggested. Notice that the argument can receive a combination from the other instances:\ndf.MSA.fillna(df.City + \",\" + df.State, inplace=True)","Q_Score":0,"Tags":"python,pandas,dataframe,fillna","A_Id":70997171,"CreationDate":"2022-02-05T10:23:00.000","Title":"Pandas fillna with string values from 2 other columns","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"While I was constructing a simple Sequential ANN and selecting parameters for the model.compile method, I observed that Keras.metrics and Keras.losses contain capitalized as well as lowercase versions, for example tf.keras.metrics.SparseCategoricalAccuracy versus tf.keras.metrics.sparse_categorical_accuracy. I was wondering what the difference is between those versions and which one is more suitable to be used in model.compile ?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":49,"Q_Id":71005046,"Users Score":0,"Answer":"tf.keras.metrics.SparseCategoricalAccuracy is a Class so you get an object and you can pass it over to model.compile. Since it is an object it can have state between the calls.\nHowever tf.keras.metrics.sparse_categorical_accuracy is a function and it is stateless. Both perform the same operation but their usage is different.","Q_Score":0,"Tags":"python,tensorflow,keras,metrics","A_Id":71005447,"CreationDate":"2022-02-06T07:28:00.000","Title":"What's the difference between capitalized and lowercase versions of Keras.losses and Keras.metrics?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"As it is said in the headline, is there that kind of naming convention, especially in python?\nFor example there are functions in python sorted (function that returns changed object, but doesn't change object) and list or numpy.ndarray method sort (returns nothing, but changes the object).\nBut for reversed and list.reverse functions it's not quite the case, reversed returns iterator.\nIn my case I have Permutation class and I want to add inverse functions for these two cases. Should I name them inverted and inverse (last one will be out of class) or like get_inv and set_inv respectively, because these methods are just like getters and setters (which is also quite true for sorted and sort)?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":111,"Q_Id":71012696,"Users Score":1,"Answer":"Here is my convention which has served me well in all programming languages I have used:\n\nA function with no side-effects which returns a truth value should be named after the property it queries and use an adjective as the last word, e.g. 
sorted or items_sorted.\nA function with no side-effects which returns a non-truth value should be named after the result it returns and use a noun as the last word, e.g. sorted_items.\nA function with side-effects should be named after what it does and use a verb as the first word, e.g. sort or sort_items.\n\nRule 1 and 2 is also applicable to variable names.\nIn your concrete example i would use the identifier invert for the method which inverts a permutation, and inverse for a method which returns the inverse of a permutation.","Q_Score":0,"Tags":"python,methods","A_Id":71039928,"CreationDate":"2022-02-07T01:47:00.000","Title":"Naming convention for methods whether they change an object or return changed object","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am new to Dask. I was using it with an xarray dataset. I persisted the dataset in memory and the jupyter cell showed that it was ready (no more asterisk). But the dask dashboard was busy executing the task. I didn't understand. When this happens, should I wait till dask dashboard has stabilized or am I free to run the next cell?","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":82,"Q_Id":71020332,"Users Score":0,"Answer":"Persist submits the task graph to the scheduler and returns future objects pointing to your results. So the calculation will be running in the background while you continue your work. You don't need to wait for the dashboard to finish.","Q_Score":0,"Tags":"python,parallel-processing,dask,python-xarray,dask-distributed","A_Id":71023010,"CreationDate":"2022-02-07T14:37:00.000","Title":"If jupyter notebook is ready but Dask dashboard is still showing that its running some task, can I execute my next cell?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"To load a large dataset into Polars efficiently one can use the lazy API and the scan_* functions. This works well when we are performing an aggregation (so we have a big input dataset but a small result). However, if I want to process a big dataset in it's entirety (for example, change a value in each row of a column), it seems that there is no way around using collect and loading the whole (result) dataset into memory.\nIs it instead possible to write a LazyFrame to disk directly, and have the processing operate on chunks of the dataset sequentially, in order to limit memory usage?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":97,"Q_Id":71021201,"Users Score":1,"Answer":"Polars' algorithms are not streaming, so they need all data in memory for the operations like join, groupby, aggregations etc. So writing to disk directly would still have those intermediate DataFrames in memory.\nThere are of course things you can do. Depending on the type of query you do, it may lend itself to embarrassingly parallellizaton. A sum could for instance easily be computed in chunks.\nYou could also process columns in smaller chunks. 
This allows you to still compute harder aggregations\/ computations.\nUse lazy\nIf you have many filters in your query and polars is able to do them at the scan, your memory pressure is reduced to the selectivity ratio.","Q_Score":0,"Tags":"python-polars","A_Id":71022950,"CreationDate":"2022-02-07T15:35:00.000","Title":"Can I process a DataFrame using Polars without constructing the entire output in memory?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"it shows Can't get attribute 'DocvecsArray' on in anaconda prompt while compiling my code.What should i do to solve this?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":21,"Q_Id":71026209,"Users Score":0,"Answer":"DocvecsArray is a long-obsolete class name from older versions of Gensim.\nAre you trying to load an old model into a Python environment with a current Gensim? If so, do you know from which version of Gensim the model was saved (even approximately, or by date)?\nIt may be possible to bring an old model forward, but it may require one or more interim steps, where the model is loaded into an older version of Gensim, that can still read, convert, & then re-save (in a newer format) the old model.","Q_Score":0,"Tags":"python,nlp,gensim,word2vec,doc2vec","A_Id":71027022,"CreationDate":"2022-02-07T22:25:00.000","Title":"My doc2vec library cannot load DocvecsArray.is there a solution.python code","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Good afternoon,\nfirst of all thanks to all who take the time to read this.\nMy problem is this, I would like to have a word2vec output the most common words.\nI do this with the following command:\n#how many words to print out ( sort at frequency)\nx = list(model.wv.index_to_key[:2500])\nBasically it works, but sometimes I get only 1948 or 2290 words printed out. I can't find any connection with the size of the original corpus (tokens, lines etc.) or deviation from the target value (if I increase the output value to e.g. 3500 it outputs 3207 words).\nI would like to understand why this is the case, unfortunately I can't find anything on Google and therefore I don't know how to solve the problem. maybe by increasing the value and later deleting all rows after 2501 by using pandas","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":19,"Q_Id":71033089,"Users Score":0,"Answer":"If any Python list ranged-access, like my_list[:n], returns less than n items, then the original list my_list had less than n items in it.\nSo, if model.wv.index_to_key[:2500] is only returning a list of length 1948, then I'm pretty sure if you check len(model.wv.index_to_key), you'll see the source list is only 1948 items long. And of course, you can't take the 1st 2500 items from a list that's only 1948 items long!\nWhy might your model have fewer unique words than you expect, or even that you counted via other methods?\nSomething might be amiss in your preprocessing\/tokenization, but most likely is that you're not considering the effect of the default min_count=5 parameter. 
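To see the effect of that default, a small sketch (assuming sentences is your tokenized corpus, i.e. a list of token lists):

    from gensim.models import Word2Vec

    # min_count=5 is the default: words with fewer than 5 occurrences are dropped.
    model = Word2Vec(sentences, vector_size=100, min_count=5, workers=4)
    print(len(model.wv.index_to_key))        # vocabulary size after pruning

    # Compare with the raw number of distinct tokens in the corpus.
    raw_vocab = {tok for sent in sentences for tok in sent}
    print(len(raw_vocab))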
That default causes all words that appear fewer than 5 times to be ignored during training, as if they weren't even in the source texts.\nYou may be tempted to use min_count=1, to keep all words, but that's almost always a bad idea in word2vec training. Word2vec needs many subtly-contrasting alternate uses of a word to train a good word-vector.\nKeeping words which only have one, or a few, usage examples winds up not failing to get good generalizable vectors for those rare words, but also interferes with the full learning of vectors for other nearby more-frequent words \u2013 now that their training has to fight the noise & extra model-cycles from the insufficiently-represented rare words.\nInstead of lowering min_count, it's better to get more data, or live with a smaller final vocabulary.","Q_Score":0,"Tags":"python,nlp,gensim,word2vec","A_Id":71039427,"CreationDate":"2022-02-08T11:25:00.000","Title":"Inconsistent result output with gensim index_to_key","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I just setup ubuntu deep learning instance on AWS and would like to run my existing jupyter notebook there. Im working on creating CNN model on new images dataset.\nIm stuck at reading my huge image files on my local drive from this remote server.\nHow can i read the files\/folders on my local drive via this jupyter notebook on the instance?\nIs there other solution than uploading the dataset?","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":96,"Q_Id":71041103,"Users Score":2,"Answer":"Im not familiar yet with awscli, instead i transfer my dataset to the instance using winSCP. So far, it worked well. But i do appreciate for any advice, suggestion for any other methods that can be used besides winscp.","Q_Score":1,"Tags":"python,amazon-ec2,jupyter-notebook,remote-server","A_Id":71063874,"CreationDate":"2022-02-08T21:26:00.000","Title":"Accessing local files via jupyter notebook on remote AWS server","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"My input data has a high resolution of datetime with seconds in fraction. For example, it should be\n1900-01-01 17:40:14.410000 instead of 1900-01-01 17:40:14.\nApparently this format has not been recognized by the pandas or python. 
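In fact, the example value quoted above does parse directly; a quick check:

    import pandas as pd

    ts = pd.Timestamp("1900-01-01 17:40:14.410000")
    print(ts.microsecond)        # 410000, so the fractional seconds are kept

    # For a whole column, an explicit format string also works:
    s = pd.Series(["1900-01-01 17:40:14.410000"])
    parsed = pd.to_datetime(s, format="%Y-%m-%d %H:%M:%S.%f")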
How should I successfully convert this to pandas recognized time stamp style.","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":450,"Q_Id":71046151,"Users Score":0,"Answer":"I'm not sure if I understand it correctly but pandas do have the style in the Timestamp class,\nmyTime = pandas.Timestamp(\"1900-01-01 17:40:14.410000\") which you can access the attributes and methods of.\nmyTime.year should output >>> 1900\nmyTime.time() should output >>> 17:40:14.410000 so on and so forth.","Q_Score":0,"Tags":"python,pandas,timestamp","A_Id":71046330,"CreationDate":"2022-02-09T08:25:00.000","Title":"Cannot convert input [00:00:00.020000] of type to Timestamp","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Currently I have prepared a Custom NER model where I can extract 15 entities. I have trained my model on single annotated file using config.cfg file by spacy3. Now I want to train my model again on 100 annotated files. How can I pass these annotated files?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":96,"Q_Id":71049375,"Users Score":0,"Answer":"If you put .spacy files in a directory and specify the directory as your train\/dev corpus spaCy will automatically load all the files.","Q_Score":1,"Tags":"python,nlp,named-entity-recognition,entities,spacy-3","A_Id":71060105,"CreationDate":"2022-02-09T12:11:00.000","Title":"How can I feed multiple of 100 annotated files in training Custom NER model using spacy3","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have created a Tabular Dataset using Azure ML python API. Data under question is a bunch of parquet files (~10K parquet files each of size of 330 KB) residing in Azure Data Lake Gen 2 spread across multiple partitions. When I try to load the dataset using the API TabularDataset.to_pandas_dataframe(), it continues forever (hangs), if there are empty parquet files included in the Dataset. If the tabular dataset doesn't include those empty parquet files, TabularDataset.to_pandas_dataframe() completes within few minutes.\nBy empty parquet file, I mean that the if I read the individual parquet file using pandas (pd.read_parquet()), it results in an empty DF (df.empty == True).\nI discovered the root cause while working on another issue mentioned [here][1].\nMy question is how can make TabularDataset.to_pandas_dataframe() work even when there are empty parquet files?\nUpdate\nThe issue has been fixed in the following version:\n\nazureml-dataprep : 3.0.1\nazureml-core : 1.40.0","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":171,"Q_Id":71075255,"Users Score":1,"Answer":"Thanks for reporting it.\nThis is a bug in handling of the parquet files with columns but empty row set. 
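Until the fixed azureml-dataprep release noted above is available, one possible workaround (a sketch, assuming the parquet files are reachable locally under data/) is to screen out the empty files before building the Tabular Dataset:

    import glob
    import pandas as pd

    paths = glob.glob("data/**/*.parquet", recursive=True)
    # df.empty is True for the problematic column-only files, as noted in the question.
    non_empty = [p for p in paths if not pd.read_parquet(p).empty]
    print(f"kept {len(non_empty)} of {len(paths)} files")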
This has been fixed already and will be included in next release.\nI could not repro the hang on multiple files, though, so if you could provide more info on that would be nice.","Q_Score":0,"Tags":"azure,azure-machine-learning-service,azureml,azureml-python-sdk","A_Id":71357828,"CreationDate":"2022-02-11T04:14:00.000","Title":"AzureML: TabularDataset.to_pandas_dataframe() hangs when parquet file is empty","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have created a Tabular Dataset using Azure ML python API. Data under question is a bunch of parquet files (~10K parquet files each of size of 330 KB) residing in Azure Data Lake Gen 2 spread across multiple partitions. When I try to load the dataset using the API TabularDataset.to_pandas_dataframe(), it continues forever (hangs), if there are empty parquet files included in the Dataset. If the tabular dataset doesn't include those empty parquet files, TabularDataset.to_pandas_dataframe() completes within few minutes.\nBy empty parquet file, I mean that the if I read the individual parquet file using pandas (pd.read_parquet()), it results in an empty DF (df.empty == True).\nI discovered the root cause while working on another issue mentioned [here][1].\nMy question is how can make TabularDataset.to_pandas_dataframe() work even when there are empty parquet files?\nUpdate\nThe issue has been fixed in the following version:\n\nazureml-dataprep : 3.0.1\nazureml-core : 1.40.0","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":171,"Q_Id":71075255,"Users Score":0,"Answer":"You can use the on_error='null' parameter to handle the null values.\nYour statement will look like this:\nTabularDataset.to_pandas_dataframe(on_error='null', out_of_range_datetime='null')\nAlternatively, you can check the size of the file before passing it to to_pandas_dataframe method. If the filesize is 0, either write some sample data into it using python open keyword or ignore the file, based on your requirement.","Q_Score":0,"Tags":"azure,azure-machine-learning-service,azureml,azureml-python-sdk","A_Id":71108141,"CreationDate":"2022-02-11T04:14:00.000","Title":"AzureML: TabularDataset.to_pandas_dataframe() hangs when parquet file is empty","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am very new to python and I'd like to ask for an advice on how to, where to start, what to learn.\nI've got this fantasy name generator (joining randomly picked letters), which every now and then creates a name which is acceptable, what I'd like to do though is to train an AI to generate names which aren't lets say just consonants, ultimately being able to generate human, elvish, dwarfish etc names.\nI'd appreciate any advice in this matter.\nEdit:\nMy idea is: I get a string of letters, if they resemble a name, I approve it, if not - reject. 
It creates a dataset of True\/False values, which can be used in machine learning, at least that's what I am hoping for, as I said, I am new to programming.\nAgain, I don't mind learning, but where do I begin?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":70,"Q_Id":71081896,"Users Score":0,"Answer":"Single characters are not really a good fit for this, as there are combinatorial restrictions as to which letters can be combined to larger sequences. It is much easier to not have single letters, but instead move on to bi-grams, tri-grams, or syllables. It doesn't really matter what you choose, as long as they can combine freely.\nYou need to come up with an inventory of elements which comply with the rules of your language; you can collect those from text samples in the language you are aiming for.\nIn the simplest case, get a list of names like the ones you want to generate, and collect three-letter sequences from that, preferably with their frequency count. Or simply make some up:\nFor example, if you have a language with a syllablic structure where you always have a consonant followed by a vowel, then by combining elements which are a consonant followed by a vowel you will always end up with valid names.\nThen pick 2 to 5 (or however long you want your names to be) elements randomly from that inventory, perhaps guided by their frequency.\nYou could also add in a filter to remove those with unsuitable letter combinations (at the element boundaries) afterwards. Or go through the element list and remove invalid ones (eg any ending in 'q' -- either drop them, or add a 'u' to them).\nDepending on what inventory you're using, you can simulate different languages\/cultures for your names, as languages differ in their phonological structures.","Q_Score":1,"Tags":"python,random,dataset,artificial-intelligence","A_Id":71082895,"CreationDate":"2022-02-11T14:40:00.000","Title":"How to train AI to create familiar sounding, randomly generated names?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I was recently asked to create an in-place sorting algorithm in an interview. The follow up was to code it up to make it faster than O(n logn), discuss the time complexity of every loop.\nI understand that insertion sort, bubble sort, heap sort, quicksort, and shell sort are all in-place, however which of these can be modified to have better time complexity?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":58,"Q_Id":71088879,"Users Score":0,"Answer":"A comparison based sort cannot be faster than O(nlogn). 
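A toy sketch of the syllable-inventory approach described in the name-generator answer above (the letter inventory and length bounds here are invented):

    import random

    consonants = "bdfglmnrstv"
    vowels = "aeiou"
    # Build an inventory of consonant+vowel syllables, as suggested above.
    syllables = [c + v for c in consonants for v in vowels]

    def make_name(min_syll=2, max_syll=4):
        n = random.randint(min_syll, max_syll)
        return "".join(random.choice(syllables) for _ in range(n)).capitalize()

    print([make_name() for _ in range(5)])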
Since all the algorithms you mentioned are comparison based, none of them can have better time complexity.\nThere are algorithms like bucket sort and radix sort that can achieve O(n) in some cases, namely if the input is uniformly distributed.","Q_Score":0,"Tags":"python,python-3.x,sorting,data-structures","A_Id":71088923,"CreationDate":"2022-02-12T03:55:00.000","Title":"How to create an in-place sorting algorithm with faster than O (nlogn) time complexity?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need to run a model but it needs older version of gensim with DocvecsArray attribute.How can i run it?\nAttributeError: Can't get attribute 'DocvecsArray' on [2, 3, z**2] (where the last element is the constant term in the expression).\nlinear_eq_to_matrix and solve_linear are also possibilities.","Q_Score":0,"Tags":"python,sympy","A_Id":71128055,"CreationDate":"2022-02-15T00:52:00.000","Title":"SymPy module, identify equation as linear or non-linear?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Hi I am using LightAutoML on supervised data can someone help me with how to do preprocessing in this framework I am using it for the first time\nI have tried to use train split but it says that input contains null values","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":29,"Q_Id":71120633,"Users Score":1,"Answer":"I found out in the documentation of lightautoml that it takes care of data preprocessing and feature engineering too","Q_Score":1,"Tags":"python","A_Id":71553044,"CreationDate":"2022-02-15T03:00:00.000","Title":"how to do LightAutoML data preprocessing especially NaN Values","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to run code that is supposed to identify different types of image categories.\nThe code is of VGG16 and I have the following error.\nOf course I tried to install and import directories (keras etc...)and the error still exists.\nWould appreciate help.\nThanks.\nThis is the line of code that is marked\nvgg16 = applications.VGG16(include_top=False, weights='data3\/')\nAnd that's the error\nAttributeError: module 'keras.applications' has no attribute 'VGG16'","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":2025,"Q_Id":71125285,"Users Score":1,"Answer":"It should be applications.vgg16.VGG16(...).","Q_Score":0,"Tags":"python,tensorflow,keras,vgg-net","A_Id":71125490,"CreationDate":"2022-02-15T11:00:00.000","Title":"AttributeError: module 'keras.applications' has no attribute 'VGG16'","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to run code that is supposed to identify different types of image categories.\nThe code is of VGG16 and I have the following error.\nOf course I tried to install and import directories (keras etc...)and the error still exists.\nWould 
appreciate help.\nThanks.\nThis is the line of code that is marked\nvgg16 = applications.VGG16(include_top=False, weights='data3\/')\nAnd that's the error\nAttributeError: module 'keras.applications' has no attribute 'VGG16'","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":2025,"Q_Id":71125285,"Users Score":0,"Answer":"I solved same issue with from tensorflow.keras import applications instead of from keras import applications","Q_Score":0,"Tags":"python,tensorflow,keras,vgg-net","A_Id":71185264,"CreationDate":"2022-02-15T11:00:00.000","Title":"AttributeError: module 'keras.applications' has no attribute 'VGG16'","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"ERROR: Could not find a version that satisfies the requirement tensorflow-addons (from versions: none)\nERROR: No matching distribution found for tensorflow-addons","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":255,"Q_Id":71126107,"Users Score":1,"Answer":"The reason why you see that error message is because tensorflow-addons is in beta development and built only up to python 3.9\nPlease downgrade your python version to 3.9, that should do the trick (for any operating system).\nAfter that, please run:\npip install tensorflow-addons==0.15.0\nYou should not see any uncompatibilities or error messages.","Q_Score":0,"Tags":"python,tensorflow,deep-learning,tensorflow-addons","A_Id":71126338,"CreationDate":"2022-02-15T12:01:00.000","Title":"cant install tensorflow-addons i have tensorflow version 2.8 and python 3.10","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"i have a dataset of images and built a strong image recognition model. now i want to add another label to my model.\ni am asking myself, if i have to label every single image in my dataset, which has the requested attribute:\nsimple example:\nlets say i have 500k images in total and i want to label all images which have a palm on it.\nlets imagine that around 100k images have a palm on it.\nwould my model be able to recognise the label palm 80%, 90% or better, if i only label around 20, 30 or 50k images with a palm on it? or do i have to label all 100k images with a palm to get acceptable performance?\nfrom my point of view this could be interpretated in two directions:\n\nmultilabel image classification model ignores all 0 labeled attributes and these wont affect model accuracy -> 20k labeled palm images would be good enough for strong performance, because the model is only interested in the attributes labeled as 1. (even if 100k labeled images would result in better performance)\nmultilabel image classification model will get affected by 0 labeled attributes as well. if only 20k out of 100k palm images will be labeled, the model gets confused, because 80k images have a palm on it, but arent labeled as palm. result would be weak performance regarding this label. if thats the case, all 100k images have to be labeled for strong performance.\n\nAm I right with one of the two suggestions or does multilabel image classification work different?\nI have a very big dataset and I have to label all my images by hand, which takes a lot of time. 
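Combining the two VGG16 answers above into one snippet (weights="imagenet" stands in for the questioner's local weights path):

    from tensorflow.keras import applications

    # Importing applications from tensorflow.keras (rather than plain keras)
    # exposes VGG16 as an attribute, avoiding the AttributeError.
    vgg16 = applications.VGG16(include_top=False, weights="imagenet")
    vgg16.summary()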
If my first suggestion works, I could save myself weeks of work.\nI would appreciate a lot, if you share your expertise, experiences and whys!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":73,"Q_Id":71133526,"Users Score":0,"Answer":"The training process uses the negative cases just as much as the positive cases to learn what a palm is. So if some of the supplied negative cases actually contain a palm tree, your model will have a much harder time learning. You could try only labeling the 20k images to start to see if the result is good enough, but for the best result you should label all 100k.","Q_Score":0,"Tags":"python,tensorflow,keras,image-classification","A_Id":72058808,"CreationDate":"2022-02-15T21:19:00.000","Title":"Keras, TF: Do I have to label all images when adding an attribute to a mutilabel image classification model?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using h2o autoML on python.\nI used the autoML part to find the best model possible: it is a StackedEnsemble.\nNow I would like to take the model and retrain it on a bigger dataset (which was not possible before because I would explode the google colab free RAM capacity).\nBut AutoML does some preprocessing to my data and I don't know which one.\nHow can I get the preprocessing steps to re-apply it to my bigger data before feeding it to the model ?\nThanks in advance,\nGab","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":53,"Q_Id":71138526,"Users Score":0,"Answer":"Stacked Ensemble is a model that is based on outputs of other models. To re-train the SE model you will need to re-train the individual models.\nApart from that AutoML will not pre-process the data. It delegates the pre-processing to downstream models. There is one exception - target encoding.\nDid you enable TE in AutoML?","Q_Score":0,"Tags":"python,google-colaboratory,h2o,automl,data-preprocessing","A_Id":71150782,"CreationDate":"2022-02-16T08:38:00.000","Title":"h2o AutoML - retrain stacked ensemble from autoML - preprocessing the data","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I was working on wine data on kaggle. 
Where there was a column named price has values like $32, $17, $15.99, Nan\nwine_data.isnull().sum()--After applying this code, there were a lot of missing values so I wrote another code i.e.\nwine_data['designation'].fillna(wine_data['designation'].mode()[0], inplace = True)\nwine_data['varietal'].fillna(wine_data['varietal'].mode()[0], inplace = True)\nwine_data['appellation'].fillna(wine_data['appellation'].mode()[0], inplace = True)\nwine_data['alcohol'].fillna(wine_data['alcohol'].mode()[0], inplace = True)\nwine_data['price'].fillna(wine_data['price'].mode()[0], inplace = True)\nwine_data['reviewer'].fillna(wine_data['reviewer'].mode()[0], inplace = True)\nwine_data['review'].fillna(wine_data['review'].mode()[0], inplace = True)\nThen I wanted to do a correlation of alcohol with rating and price with rating but both alcohol and price column has '%' and '$' these characters.So, I applied this code.\nwine_data = wine_data.assign(alcohol_num = lambda row: row[\"alcohol\"].replace(\"%\", \"\", regex=True).astype('float'))\nwine_data = wine_data.assign(price_numbers= wine_data['price'].str.replace('$','',regex = True)).astype('float')\nIt's throwing me an error like--\ncould not convert string to float: 'J. Lohr 2000 Hilltop Vineyard Cabernet Sauvignon (Paso Robles)'\nThen I tried this code:\nwine_data = wine_data.assign(price_numbers= wine_data['price'].str.replace('$','',regex = True)).astype('int')\nIt's throwing me an error like--\ninvalid literal for int() with base 10: 'J. Lohr 2000 Hilltop Vineyard Cabernet Sauvignon (Paso Robles)'","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":27,"Q_Id":71139549,"Users Score":0,"Answer":"Your data is not clean. One of the elements in your price column keeps containing the string 'J. Lohr 2000 Hilltop Vineyard Cabernet Sauvignon (Paso Robles)', which is why the column cannot be converted to float, even though you did some other cleansing steps.\nYou want be a bit more structured in your data cleansing: Do one step after the other, take a look at the intermediate df, and do not try to do many cleansing steps at once with an apply() function. If you have a messy dataset, maybe 10 steps are required, no way you can do all of that with a single apply() call.","Q_Score":1,"Tags":"python,python-3.x,pandas,types","A_Id":71140179,"CreationDate":"2022-02-16T09:51:00.000","Title":"How to convert a datatype of a column with both integer and decimal numbers in Python?","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Lets say we have a numpy array A with shape (m_1,m_2,...,m_n) where n can be variable. Given a list of n integers [i_1,...,i_n] I want to slice A as follows: A[i_1][i_2]...[i_n]\nWhat is the easiest way to do this?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":19,"Q_Id":71149829,"Users Score":1,"Answer":"see comment of hpaulj: Use a tuple, not a list","Q_Score":0,"Tags":"python,numpy,numpy-slicing","A_Id":71149942,"CreationDate":"2022-02-16T22:00:00.000","Title":"How to slice numpy array across multiple dimensions by passing a list?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"This is a typical python coding challenge. 
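For the wine-price cleaning above, converting in small, checkable steps and letting pandas flag the rows that are not really prices tends to be safer (a sketch, with wine_data as loaded in the question):

    import pandas as pd

    # Strip the symbols first, then convert; errors="coerce" turns leftover junk
    # (e.g. a stray wine title sitting in the price column) into NaN instead of raising.
    wine_data["price_num"] = pd.to_numeric(
        wine_data["price"].astype(str).str.replace("$", "", regex=False),
        errors="coerce",
    )
    wine_data["alcohol_num"] = pd.to_numeric(
        wine_data["alcohol"].astype(str).str.replace("%", "", regex=False),
        errors="coerce",
    )
    # Inspect what failed to convert before deciding how to fix or drop it.
    print(wine_data.loc[wine_data["price_num"].isna(), "price"].head())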
Many beginners have a hard time handling it.\nFor example, we have a test array as:\ntest = [[1,2],[1,3],[2,4],[2,1],[2],[5,1],[3,4]]\nQ1: count the number of pairs in the list.\nQ2: count the number of pairs for 1.\nI know I can use the least\/greatest function in SQL to do the job, but I don't know how to do it in python, especially in 2 dimension arrays.\nExpected result for Q1 is 5 ([1,2],[1,3],[2,4],[5,1],[3,4]\uff09\nExpected result for Q2 is 3 (2,3,5)","AnswerCount":2,"Available Count":1,"Score":-0.0996679946,"is_accepted":false,"ViewCount":42,"Q_Id":71152445,"Users Score":-1,"Answer":"len([k in test if len(k)==2]) and len([k for k in test if 1 in k]) should get you there.","Q_Score":0,"Tags":"python,arrays,pandas,match","A_Id":71152463,"CreationDate":"2022-02-17T04:14:00.000","Title":"count paired numbers in 2-dimension python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have some questions about multiple regression models in python:\n\nWhy is it necessary to apply a \u201cdummy intercept\u201d vector of ones to start for the Least Square method (OLS)? (I am refering to the use of X = sm.add_constant(X). I know that the Least square method is a system of derivatives equal to zero. Is it computed with some iterative method that make a \u201cdummy intercept\u201d necessary? Where can I find some informative material about the detail of the algorithm est = sm.OLS(y, X).fit()?\n\nAs far as I understood, scale.fit_transform produce a normalization of the data. Usually a normalization do not produce value higher than 1. Why, once scaled I see value that exceed 1?\n\nWhere is it possible to find a official documentation about python functions?\n\n\nThanks in advance","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":165,"Q_Id":71154581,"Users Score":1,"Answer":"In the OLS the function you are trying to fit is :\ny=ax1+ax2+ax3+c. if you don't use c term, your line will always pass through the origin. Hence to give more degrees of freedom to your line which can be offset by c from your origin you need c .\n\nYou can fit a line without constant term and you will get set of coefficients (dummy intercept is not necessary for iterative computation), but that might not be the best possible straight line which minimises the least square.","Q_Score":1,"Tags":"python,regression,least-squares","A_Id":71154648,"CreationDate":"2022-02-17T08:22:00.000","Title":"regression OLS in python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need to interpolate a 4D data set (four angles, one output value). Angles are cyclic but scipy.interpolate.LinearNDInterpolator can't seem to take this into account. Is there a tool that can do this?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":36,"Q_Id":71177893,"Users Score":0,"Answer":"You could duplicate all of your data at \u00b12\u03c0 on each dimension (for 4 dimensions, the easy way of doing this would create 81 copies of each point; the slightly harder way would create 16 copies, by adding 2\u03c0 to the angles between 0 and \u03c0, and subtracting 2\u03c0 from the angles between \u03c0 and 2\u03c0). 
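Note that the first comprehension in the pair-counting answer above is missing a for (a syntax error as written), and neither expression de-duplicates, so taken literally they give 6 and 4 rather than the expected 5 and 3; a corrected sketch:

    test = [[1, 2], [1, 3], [2, 4], [2, 1], [2], [5, 1], [3, 4]]

    # Q1: count distinct unordered pairs -> 5 ([2,1] is the same pair as [1,2])
    pairs = {frozenset(k) for k in test if len(k) == 2}
    print(len(pairs))

    # Q2: count distinct values paired with 1 -> 3 ({2, 3, 5})
    partners = {x for k in test if len(k) == 2 and 1 in k for x in k if x != 1}
    print(len(partners))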
That should ensure that every point you query (with all of the angles between 0 and 2\u03c0) will have neighbors \"on both sides\" in each dimension, for the linear interpolation to work with.","Q_Score":0,"Tags":"python,scipy,interpolation,linear-interpolation","A_Id":71178254,"CreationDate":"2022-02-18T17:54:00.000","Title":"is there a way to do 4D (linear) interpolation of data with cyclic (angle) coordinates in python?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I entered following command in my jupyter notebook: !pip install -U ibm-watson-machine-learning and with I can see the package install with !pip list.\nBut when I try to import like so: import ibm_watson_machine_learning, I get following error: ModuleNotFoundError: No module named 'ibm_watson_machine_learning'.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":34,"Q_Id":71187941,"Users Score":0,"Answer":"SOLVED: For me, I simply needed to update all my packages in conda with conda upgrade --all.","Q_Score":0,"Tags":"python,pip,ibm-cloud,modulenotfounderror","A_Id":71252390,"CreationDate":"2022-02-19T18:31:00.000","Title":"ModuleNotFoundError for ibm-watson-machine-learning package","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a dataframe where one of the columns contains a list of values:\nexample:\ntype(df['col_list'].values[0]) = list\nI saved this dataframe as csv file (df.to_csv('my_file.csv'))\nWhen I load the dataframe (df = pd.read_csv('my_file.csv'))\nthe column which contains list of values change to string type:\ntype(df['col_list'].values[0]) = str\nWhen converting to list (list(df['col_list'].values[0]) I'm getting list of characters instead of list of values.\nHow can I save\/load dataframe which one of it's columns contains list of values ?","AnswerCount":3,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":79,"Q_Id":71191595,"Users Score":1,"Answer":"Use JSON or HDF file format instead of CSV. CSV file format is really inconvenient for storing a list or a collection of objects.","Q_Score":1,"Tags":"python,pandas,dataframe","A_Id":71191607,"CreationDate":"2022-02-20T05:28:00.000","Title":"saving and loading list values?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"so i am trying to use the Logistic Regression classifier and apply it on the UniGram bag-of-words feature set\nmy code:\nclf = sklearn.linear_model.LogisticRegression()\nclf.fit(tf_features_train, train_labels)\nprint (clf)\nerror message: This solver needs samples of at least 2 classes in the data, but the data contains only one class: 1\ncan someone please help me","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":31,"Q_Id":71199863,"Users Score":0,"Answer":"The message is explicit: check the content of train_labels, it contains only class 1. 
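If CSV cannot be avoided for the list-valued column discussed above, one common workaround (an alternative to switching formats, not part of the original answer) is to parse the stringified lists back after loading:

    import ast
    import pandas as pd

    df = pd.DataFrame({"col_list": [[1, 2, 3], [4, 5]]})
    df.to_csv("my_file.csv", index=False)

    df2 = pd.read_csv("my_file.csv")
    # The column comes back as strings like "[1, 2, 3]"; literal_eval restores the lists.
    df2["col_list"] = df2["col_list"].apply(ast.literal_eval)
    print(type(df2["col_list"].values[0]))   # <class 'list'>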
Normally it should contain at least two different classes, otherwise there's nothing to classify.","Q_Score":0,"Tags":"python-3.x,machine-learning,text-classification","A_Id":71229761,"CreationDate":"2022-02-21T00:01:00.000","Title":"I'm trying to use the Logistic Regression classifier and apply it on the UniGram bag-of-words feature set","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Currently trying to write code to check for data quality of a 7 gb data file. I tried googling exactly but to no avail. Initially, the purpose of the code is to check how many are nulls\/NaNs and later on to join it with another datafile and compare the quality between each. We are expecting the second is the more reliable but I would like to later on automate the whole process. I was wondering if there is someone here willing to share their data quality python code using Dask. Thank you","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":44,"Q_Id":71203005,"Users Score":2,"Answer":"I would suggest the following approach:\n\ntry to define how you would check quality on small dataset and implement it in Pandas\ntry to generalize the process in a way that if each \"part of file\" or partition is of good quality, than whole dataset can be considered of good quality.\nuse Dask's map_partitions to parralelize this processing over your dataset's partition.","Q_Score":0,"Tags":"python,dask,data-quality","A_Id":71207327,"CreationDate":"2022-02-21T08:08:00.000","Title":"Data Quality check with Python Dask","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a very big data frame with the orders for some products with a reference. This reference has periodical updates, so for the same product there are a lot of rows in the dataframe. I want to choose the last update for each reference, but i dont know why.\nFor a reference, for example there are 10 updates, for another, 34, so there is not a patron...\nAny ideas?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":22,"Q_Id":71203191,"Users Score":0,"Answer":"you can use func iget like this :\ndf['column'].iget(-1);\nor\ndf.iloc[-1:]","Q_Score":0,"Tags":"python,analysis","A_Id":71203744,"CreationDate":"2022-02-21T08:25:00.000","Title":"how to select the last value in a irregular data frame","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a df like this:\n\n\n\n\nmonth\noutcome\nmom.ret\n\n\n\n\n10\/20\nwinner\n0.2\n\n\n10\/20\nwinner\n0.9\n\n\n11\/20\nwinner\n0.6\n\n\n11\/20\nwinner\n0.2\n\n\n11\/20\nwinner\n0.9\n\n\n10\/20\nloser\n0.6\n\n\n10\/20\nloser\n0.2\n\n\n10\/20\nloser\n0.9\n\n\n11\/20\nloser\n0.6\n\n\n\n\nI would like to add another column, which has 1 \/ by the counts of times the value \"winner\" or \"loser\" appears per each month on the column outcome. 
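A starting point for the Dask null-count check described above (the file name and block size are placeholders):

    import dask.dataframe as dd

    ddf = dd.read_csv("big_file.csv", blocksize="64MB")   # partitions the 7 GB file
    # Null/NaN counts per column; nothing is computed until .compute() is called.
    null_counts = ddf.isnull().sum().compute()
    row_count = len(ddf)
    print(null_counts / row_count)   # share of missing values per column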
The expected output for the example df is:\n\n\n\n\nmonth\noutcome\nmom.ret\nq\n\n\n\n\n10\/20\nwinner\n0.2\n1\/2\n\n\n10\/20\nwinner\n0.9\n1\/2\n\n\n11\/20\nwinner\n0.6\n1\/3\n\n\n11\/20\nwinner\n0.2\n1\/3\n\n\n11\/20\nwinner\n0.9\n1\/3\n\n\n10\/20\nloser\n0.6\n1\/3\n\n\n10\/20\nloser\n0.2\n1\/3\n\n\n10\/20\nloser\n0.9\n1\/3\n\n\n11\/20\nloser\n0.6\n1\/1\n\n\n\n\nI thought of using the function count to count how many times the values are repeated, but then I need to specify that the \"count\" should be done per each date. Any ideas?","AnswerCount":3,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":38,"Q_Id":71208468,"Users Score":1,"Answer":"Use df['q'] = 1\/df.groupby(['month', 'outcome']).transform('count').","Q_Score":2,"Tags":"python,pandas,counting,np","A_Id":71209197,"CreationDate":"2022-02-21T15:00:00.000","Title":"Filling a column with the amount of duplicated values in another column","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"When I run the below code:\nfrom statsmodels.regression import rolling\nI get this error message:\n\nAttributeError Traceback (most recent call last)\n\/var\/folders\/q9\/_s10_9yx6k7gxt3w4t7j0hgw0000gn\/T\/ipykernel_56663\/1581398632.py in \n----> 1 from statsmodels.regression import rolling\n~\/opt\/anaconda3\/lib\/python3.8\/site-packages\/statsmodels\/regression\/rolling.py in \n456\n457\n--> 458 class RollingRegressionResults(object):\n459 \"\"\"\n460 Results from rolling regressions\n~\/opt\/anaconda3\/lib\/python3.8\/site-packages\/statsmodels\/regression\/rolling.py in RollingRegressionResults()\n514\n515 @cache_readonly\n--> 516 @Appender(RegressionResults.aic.func.doc)\n517 def aic(self):\n518 return self._wrap(RegressionResults.aic.func(self))\nAttributeError: 'pandas._libs.properties.CachedProperty' object has no attribute 'func'\n\nI've never had this problem before and I'm unsure what has gone wrong. I'm running statsmodels version 0.12.2 and Python 3.8.12 on MacOS 11.4. I'm trying to use RollingOLS.\nThanks for your help.\nEDIT:\nOut of curiosity I just replaced all '.func' with '' in this file and this issue no longer exists and the results seem to be accurate. 
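Spelling out the transform('count') suggestion above on the example frame (column names as in the question):

    import pandas as pd

    df = pd.DataFrame({
        "month":   ["10/20"] * 2 + ["11/20"] * 3 + ["10/20"] * 3 + ["11/20"],
        "outcome": ["winner"] * 5 + ["loser"] * 4,
        "mom.ret": [0.2, 0.9, 0.6, 0.2, 0.9, 0.6, 0.2, 0.9, 0.6],
    })
    # Selecting one column keeps the result a Series, so the division lines up row by row.
    df["q"] = 1 / df.groupby(["month", "outcome"])["mom.ret"].transform("count")
    print(df)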
I don't really understand what this did however and since I'm using this in a professional capacity I need to be sure this is correct.","AnswerCount":3,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":730,"Q_Id":71209601,"Users Score":0,"Answer":"I can't reproduce this issue with macOS 12.1 - it's likely a problem with your code \/ system setup.\nHowever, 0.13.2 seems to work.","Q_Score":0,"Tags":"python,python-3.x,pandas,statsmodels","A_Id":71210378,"CreationDate":"2022-02-21T16:15:00.000","Title":"Problem when importing statsmodels.regression.rolling (AttributeError: 'pandas._libs.properties.CachedProperty' object has no attribute 'func')","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"When I run the below code:\nfrom statsmodels.regression import rolling\nI get this error message:\n\nAttributeError Traceback (most recent call last)\n\/var\/folders\/q9\/_s10_9yx6k7gxt3w4t7j0hgw0000gn\/T\/ipykernel_56663\/1581398632.py in \n----> 1 from statsmodels.regression import rolling\n~\/opt\/anaconda3\/lib\/python3.8\/site-packages\/statsmodels\/regression\/rolling.py in \n456\n457\n--> 458 class RollingRegressionResults(object):\n459 \"\"\"\n460 Results from rolling regressions\n~\/opt\/anaconda3\/lib\/python3.8\/site-packages\/statsmodels\/regression\/rolling.py in RollingRegressionResults()\n514\n515 @cache_readonly\n--> 516 @Appender(RegressionResults.aic.func.doc)\n517 def aic(self):\n518 return self._wrap(RegressionResults.aic.func(self))\nAttributeError: 'pandas._libs.properties.CachedProperty' object has no attribute 'func'\n\nI've never had this problem before and I'm unsure what has gone wrong. I'm running statsmodels version 0.12.2 and Python 3.8.12 on MacOS 11.4. I'm trying to use RollingOLS.\nThanks for your help.\nEDIT:\nOut of curiosity I just replaced all '.func' with '' in this file and this issue no longer exists and the results seem to be accurate. I don't really understand what this did however and since I'm using this in a professional capacity I need to be sure this is correct.","AnswerCount":3,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":730,"Q_Id":71209601,"Users Score":0,"Answer":"I was getting the same error when I was trying to import statsmodels.tsa.arima_model that has been removed & replaced by statsmodels.tsa.arima.model.\nThe steps I followed to troubleshoot the error are:\n\nUpdating pandas using this command: pip install --upgrade pandas --user\n\nUpdating statsmodels using this command: pip install --upgrade statsmodels --user\n\nAfter that I got the below error:\nNotImplementedError:\nstatsmodels.tsa.arima_model.ARMA and statsmodels.tsa.arima_model.ARIMA have\nbeen removed in favor of statsmodels.tsa.arima.model.ARIMA (note the .\nbetween arima and model) and statsmodels.tsa.SARIMAX.\nstatsmodels.tsa.arima.model.ARIMA makes use of the statespace framework and\nis both well tested and maintained. 
It also offers alternative specialized\nparameter estimators.\n\n\nThen I resolved the error by replacing statsmodels.tsa.arima_model with statsmodels.tsa.arima.model .","Q_Score":0,"Tags":"python,python-3.x,pandas,statsmodels","A_Id":71891984,"CreationDate":"2022-02-21T16:15:00.000","Title":"Problem when importing statsmodels.regression.rolling (AttributeError: 'pandas._libs.properties.CachedProperty' object has no attribute 'func')","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"When I run the below code:\nfrom statsmodels.regression import rolling\nI get this error message:\n\nAttributeError Traceback (most recent call last)\n\/var\/folders\/q9\/_s10_9yx6k7gxt3w4t7j0hgw0000gn\/T\/ipykernel_56663\/1581398632.py in \n----> 1 from statsmodels.regression import rolling\n~\/opt\/anaconda3\/lib\/python3.8\/site-packages\/statsmodels\/regression\/rolling.py in \n456\n457\n--> 458 class RollingRegressionResults(object):\n459 \"\"\"\n460 Results from rolling regressions\n~\/opt\/anaconda3\/lib\/python3.8\/site-packages\/statsmodels\/regression\/rolling.py in RollingRegressionResults()\n514\n515 @cache_readonly\n--> 516 @Appender(RegressionResults.aic.func.doc)\n517 def aic(self):\n518 return self._wrap(RegressionResults.aic.func(self))\nAttributeError: 'pandas._libs.properties.CachedProperty' object has no attribute 'func'\n\nI've never had this problem before and I'm unsure what has gone wrong. I'm running statsmodels version 0.12.2 and Python 3.8.12 on MacOS 11.4. I'm trying to use RollingOLS.\nThanks for your help.\nEDIT:\nOut of curiosity I just replaced all '.func' with '' in this file and this issue no longer exists and the results seem to be accurate. I don't really understand what this did however and since I'm using this in a professional capacity I need to be sure this is correct.","AnswerCount":3,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":730,"Q_Id":71209601,"Users Score":0,"Answer":"statsmodels.tsa.arima_model.ARMA and statsmodels.tsa.arima_model.ARIMA\nhave been removed in favor of statsmodels.tsa.arima.model.ARIMA (note the . between arima and model) and statsmodels.tsa.SARIMAX.\nstatsmodels.tsa.arima.model.ARIMA makes use of the statespace framework and is both well tested and maintained. It also offers alternative specialized parameter estimators.","Q_Score":0,"Tags":"python,python-3.x,pandas,statsmodels","A_Id":72198642,"CreationDate":"2022-02-21T16:15:00.000","Title":"Problem when importing statsmodels.regression.rolling (AttributeError: 'pandas._libs.properties.CachedProperty' object has no attribute 'func')","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need a TF-IDF value for a word that is found in number of documents and not only a single document or a specific document.\nFor example, Consider this corpus\ncorpus = [\n'This is the first document.',\n'This document is the second document.',\n'And this is the third one.',\n'Is this the first document?',\n'Is this the second cow?, why is it blue?',\n]\nI want to get TD-IDF value for word 'FIRST' which is in document 1 and 4. 
TF-IDF value is calculated on basis of that specific document, in this case I will get 2 score for both indiviual document. However, I need a single score for word 'FIRST' considering all documents at same time.\nIs there any way I can get score TF-IDF score of a word from all set of documents?\nIs there any other method or technique which can help me solve the problem?","AnswerCount":3,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":207,"Q_Id":71213250,"Users Score":1,"Answer":"tl;dr\nTf-Idf is not made to weight words. You cannot compute the Tf-Idf of a word. You can compute the frequency of a word in a corpus.\nWhat is TfIdf\nThe Tf-Idf computes the score for a word according to a document ! It gives high scores to words that are frequent (TF) and particular (IDF) to a document. TF-IDF's goal is to compute similarity between documents, not\nweighting words.\nThe solution given by maaniB is essentially just the normalized frequency of words. Depending on what you need to accomplish you should find an other metric to weigh words (the frequency is generally a great start).\nWe can see that the Tf-Idf gives a better score to 'cow' in doc 5 because 'cow' is particular to this document but this is lost in maaniB's solution.\nExample\nFor example we will compare the Tf-Idf of 'cow' and 'is'.\nTF-IDF formula is (without logs): Tf * N \/ Df. N is the number of documents, Tf the frequency of word in document and Df the number of document in which word appear.\n'is' appears in every document so it's Df will be 5. It appears once in documents 1, 2, 3 and 4 so the Tf will be 1 and twice in doc 5.\nSo the TF-IDF of 'is' in doc 1,2,3,4 will be 1 * 5 \/ 5 = 1; and in doc 5 it will be 2 * 5 \/ 5 = 2.\n'cow' appears only in the 5th document so it's Df is 1. It appears once in document 5 so it's Tf is 1.\nSo the TF-IDF of 'cow' in doc 5 will be 1 * 5 \/ 1 = 5; and in every other doc : 0 * 5 \/ 1 = 0.\nIn conclusion 'is' is very frequent in doc 5 (appears twice) but not particular to doc 5 (appears in every document) so it's Tf-Idf is lower than the one of 'cow' which appear only once but in only one document !","Q_Score":1,"Tags":"python,scikit-learn,nlp,tf-idf,tfidfvectorizer","A_Id":71219517,"CreationDate":"2022-02-21T21:30:00.000","Title":"How to get TF-IDF value of a word from all set of documents?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"For example I have multiple lines log file\nI have mapper.py. this script do parse file.\nIn this case I want to do my mapper it independently","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":30,"Q_Id":71217146,"Users Score":0,"Answer":"Hadoop Streaming is already \"distributed\", but is isolated to one input and output stream. 
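One way to turn the per-document scores discussed above into a single corpus-wide number is to aggregate a term's column yourself, e.g. by averaging; the aggregation is a modelling choice, not part of TF-IDF:

    from sklearn.feature_extraction.text import TfidfVectorizer

    corpus = [
        "This is the first document.",
        "This document is the second document.",
        "And this is the third one.",
        "Is this the first document?",
        "Is this the second cow?, why is it blue?",
    ]
    vec = TfidfVectorizer()
    X = vec.fit_transform(corpus)                        # documents x terms
    col = list(vec.get_feature_names_out()).index("first")   # scikit-learn >= 1.0
    print(X[:, col].toarray().ravel())                   # per-document scores for "first"
    print(X[:, col].mean())                              # one aggregated score (mean over documents)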
You would need to write a script to loop over the files and run individual streaming jobs per-file.\nIf you want to batch process many files, then you should upload all files to a single HDFS folder, and then you can use mrjob (assuming you actually want MapReduce), or you could switch to pyspark to process them all in parallel, since I see no need to do that sequentially.","Q_Score":0,"Tags":"python,hadoop,mapreduce,hadoop-streaming","A_Id":71224112,"CreationDate":"2022-02-22T07:05:00.000","Title":"How to distribute Mapreduce task in hadoop streaming","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have the following intent classification data (4 columns, 65 rows):\nColumns: \u00a0 intent-A \u00a0 intent-B \u00a0 intent-C \u00a0 intent-D\nrecords: \u00a0\u00a0\u00a0 \u00a0 \u00a0 d1a \u00a0\u00a0\u00a0\u00a0\u00a0 \u00a0 \u00a0 d1b \u00a0\u00a0\u00a0\u00a0\u00a0 \u00a0 \u00a0 d1c \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \u00a0 d1d\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \u00a0 \u00a0 d2a \u00a0\u00a0\u00a0\u00a0\u00a0 \u00a0 \u00a0 d2b \u00a0\u00a0\u00a0\u00a0\u00a0 \u00a0 \u00a0 d2c \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \u00a0 d2d\nI am attempting to combine the columns into two columns to look like this (2 columns, 260 rows):\ndata \u00a0 intent\nd1a \u00a0 intent-A\nd1b \u00a0 intent-B\nd1c \u00a0 intent-C\nd1d \u00a0 intent-D\n\nd2a \u00a0 intent-A\n\nd2b \u00a0 intent-B\n\nd2c \u00a0 intent-C\n\nd2d \u00a0 intent-D\nI am using pandas DataFrame and have tried using different functions with no success (append, concat, etc.). Any help would be most appreciated!","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":31,"Q_Id":71231590,"Users Score":1,"Answer":"You can use the following code, (here df is your data frame)-\npd.DataFrame({\"Date\":df.values.flatten(), \"intent\":df.columns.tolist()*65})","Q_Score":0,"Tags":"python,pandas,dataframe","A_Id":71231641,"CreationDate":"2022-02-23T04:59:00.000","Title":"Extracting selected columns to new DataFrame as a copy in python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Let's suppose that I have n points, and a square numpy matrix where i,j'th entry is filled with the distance between the point i and point j. How can I derive an adjacency matrix in a way that, there is an \"edge\" between point a and point b if for every other point c, max(dist(c,a), dist(c,b)) > dist(a,b), in other words there is not any other point c such as c is closer to a and b than, a and b are to each other. I could write this in numpy with some for loops, but I wonder if there is any easier\/faster way to do so.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":29,"Q_Id":71232076,"Users Score":1,"Answer":"This is really hard to do without a concrete example, so I may be getting this slightly wrong.\nSo you have an nxn matrix (presumably symmetric with a diagonal of 0) representing the distances. Let's call this matrix A.\nThen A[:,None,:] is an nx1xn matrix such that if you broadcast it to nxnxn, then A[i, j, k] is the distance from the i'th point to the k'th point. 
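For the intent-columns reshape above, pandas also has a purpose-built method; a sketch assuming df holds only the four intent-* columns:

    import pandas as pd

    # melt stacks the columns into (variable, value) pairs: one row per original cell,
    # so 4 columns x 65 rows become 260 rows.
    long_df = df.melt(var_name="intent", value_name="data")[["data", "intent"]]
    print(long_df.head())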
Likewise, A[None, :, :], when broadcast to nxnxn, gives a matrix such that A[i, j, k] gives the distance from the j'th point to the kth point.\nSo B = np.maximum(A[:,None,:],A[None,:,:]) is an array such at b[i, j, k] is the maximum of the distance from i to k or from j to k. Take the minimum of B along the last dimension, and you've got the value for the best possible k. Edges are those values for which np.min(B, axis=2) <= A.\nAgain, try this out on your own computer. I may have slight details wrong.","Q_Score":1,"Tags":"python,numpy","A_Id":71232393,"CreationDate":"2022-02-23T06:09:00.000","Title":"Deriving an adjacency matrix wrt distance","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need to write a complicated function that will evaluate a new column for a DataFrame in pandas.\nThis function will have to use data from multiple (more than 10) columns of this DataFrame.\nIt won't fit into a lambda, to plug it in easily to the apply() function.\nI don't want to write a function that takes more than 10 arguments and plug it into apply(), because it would hurt readability of my code.\nI would rather not use for loop to iterate over rows, as it has poor performance.\nIs there a clever solution to this problem?","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":19,"Q_Id":71237049,"Users Score":1,"Answer":"If all column values are on the same row you can use apply(func, axis=1) to pass a row from your df as argument to function func. Then in func, you can extract all values from your row.","Q_Score":1,"Tags":"python,pandas","A_Id":71237272,"CreationDate":"2022-02-23T12:36:00.000","Title":"Writing a complicated function that will by applied to a DataFrame","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I devlopped a flask app in which I use pandas.\nWhen I start the python environment using\nSource myenv\/bin\/activate\nAnd run.py\n=> everything is ok and the app run normally\nBut when I try to deploy the app using mod_wsgi it crushes with this importing pandas error\nPandas\/init.py line 13\nMissing_dependencies.append(f\"{dependency}:{e}\")\nAm I missing something ?\nI use the standard mod_wsgi config that is working for with another app that doesn't use pandas\nThanks","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":8,"Q_Id":71237451,"Users Score":0,"Answer":"I found a solution for my problem:\nIt was due to python 2 used to compile mod_wsgi\nI changed that to python 3\nNow everything is working fine","Q_Score":0,"Tags":"python-3.x,pandas,flask,mod-wsgi","A_Id":71264954,"CreationDate":"2022-02-23T13:02:00.000","Title":"Issue with import pandas error using mod_wsgi","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am trying to add the elements that are in a row vector to a 1X1 vector in python,\nProblem:\n[10 100]\nSolution:\n[110]\nIs there any way to achieve this?","AnswerCount":2,"Available 
Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":25,"Q_Id":71238300,"Users Score":1,"Answer":"Example provided in question is more of a list.\nTo sum up all the elements in a list, the sum() function can be used.\ne.g.:\nsum([10, 100])\n# output: 110","Q_Score":0,"Tags":"python,list,vector","A_Id":71238413,"CreationDate":"2022-02-23T14:01:00.000","Title":"Adding elements in a 1 D vector","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Why is df.head() (without print) enough to print rows from the dataframe in Google Colab but in PyCharm, I have to actually use print(df.head()) as df.head() alone will not print any rows?\nI am wondering if this could be due to different versions of Python between Google Colab and what I have in PyCharm on my local machine or if maybe it's due to a setting in PyCharm?\nNot a big deal to have to use print in PyCharm but just asking since I am just learning Python and I was stuck for a while trying to figure out why df.head() wasn't working in PyCharm like it did in Google Colab until I figured I had to use print.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":57,"Q_Id":71238390,"Users Score":1,"Answer":"Google Colab uses Jupyter notebook. If you are using a Jupyter notebook in PyCharm, it should work similarly to Google Colab. If you are using a normal Python file with a .py extension, you have to use the print statement.\np.s. I use VS Code since it supports Jupyter notebooks directly in the editor and works similar to Google Colab.","Q_Score":0,"Tags":"python,pandas,pycharm","A_Id":71238551,"CreationDate":"2022-02-23T14:06:00.000","Title":"Colab vs PyCharm - df.head() vs print(df.head())","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am receiving the below error in an Azure Synapse PySpark notebook\nTypeError: AutoMLConfig() takes no arguments\nwhile running the below code:\nautoml_settings = {\n\"primary_metric\": 'r2_score',\n\"enable_early_stopping\": True,\n\"experiment_timeout_hours\": 0.5,\n\"max_cores_per_iteration\": 1,\n\"max_concurrent_iterations\": 2,\n\"enforce_time_on_windows\": True,\n\"exclude_nan_labels\": True,\n\"enable_tf\": False,\n\"verbosity\": 20\n}\nautoml_config = AutoMLConfig(\"task\": 'regression',\n\"label_column_name\": label,\n\"compute_target\": compute_target,\n\"featurization\": 'auto',\n\"training_data\": train_data\n**automl_settings)","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":92,"Q_Id":71239957,"Users Score":0,"Answer":"You're using the wrong version of Apache Spark pool. 
If you change to a pool with version 2.4, then it should fix your problem.","Q_Score":0,"Tags":"python,azure,azure-synapse,automl","A_Id":72192750,"CreationDate":"2022-02-23T15:41:00.000","Title":"TypeError: AutoMLConfig() takes no arguments in Azure Synapse","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a 0.4 KV electrical network and I need to use particle swarm optimization algorithm on it to find the optimal place and size for DGs but I'm new to optimization subject I tried a lot but I couldn't know how to do it could anyone help me with it please","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":17,"Q_Id":71258086,"Users Score":0,"Answer":"From the paper \"Prakash, D. B., and C. Lakshminarayana. \"Multiple DG placements in distribution system for power loss reduction using PSO algorithm.\" Procedia technology 25 (2016): 785-792\", PSO algorithm is given below\nStep 1: Input data such as line impedance, line power.\nStep 2: Calculate voltages at each node and total power loss in the distribution network using forward backward sweep method.\nStep 3: Initialize population size.\nStep 4: Initialize number of particles to be optimized.\nStep 5: Set bus count x=2.\nStep 6: Set generation count y=0.\nStep 7: Generate random position and velocity for each particle.\nStep 8: Calculate power loss for each particle using Active power loss minimization.\nStep 9: Initialize current position of each particle as \u2018Pbest\u2019.\nStep 10: Assign \u2018Gbest\u2019 as best amont \u2018Pbest\u2019.\nStep 11: Update velocity and position of each particle using velocity and position update equations respectively.\nStep 12: If generation count reaches maximum limit, go to Step 13 or else increase the counter by one and go to Step 7.\nStep 13: If bus count reaches maximum limit, go to Step 14 or else increase the counter by one and go to Step 6.\nStep 14: Display the results.","Q_Score":0,"Tags":"python,matlab,particle-swarm","A_Id":71704199,"CreationDate":"2022-02-24T20:49:00.000","Title":"optimal location of DGs","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to run inference for a Tensorflow model on GPU, but it is using CPU. I can confirm it is using CPU as the inference time is very large and nvidia-smi shows no python process.\nTo debug this, I listed the physical and logical devices in Tensorflow. I can see that physical devices list contains GPU, but the logical devices list doesn't contain GPU. What can I do to fix this and run my model inference on GPU?\nI am using Tensorflow 2.4.4.\n\ntf.config.list_physical_devices()\n[PhysicalDevice(name='\/physical_device:CPU:0', device_type='CPU'),\nPhysicalDevice(name='\/physical_device:GPU:0', device_type='GPU')]\ntf.config.list_logical_devices()\n[LogicalDevice(name='\/device:CPU:0', device_type='CPU')]","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":79,"Q_Id":71259178,"Users Score":0,"Answer":"The reason why GPU was listed in physical devices but not in logical devices was because I had this line in my script. 
This line made my GPU not visible to the runtime.\n\ntf.config.set_visible_devices([], \"GPU\")","Q_Score":0,"Tags":"python,tensorflow","A_Id":71259258,"CreationDate":"2022-02-24T22:53:00.000","Title":"GPU listed in physical devices in Tensorflow, but not in logical devices","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"is it possible to create my own object detection script with YOLO or create a Neuron Network to implement it in the NAO robot( iknow that there is a box of detection in choregraph but isn't very useful that's why i want to build an other one from scratch ) .. if there are any resources or something else that help me not hesitate to put them and thank you","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":21,"Q_Id":71261879,"Users Score":0,"Answer":"It is possible, but not easy.\nYou could use TensorFlow or PyTorch and run YOLO on a PC connected to NAO using PyNAOqi, to get images from the camera.\nThese Python packages are not available on NAO, because it is lacking pip and compilers.\nMost advanced developers should be able to compile neural networks into binaries that can run on the robot using the NAOqi C++ SDK, but honestly they must be rare.","Q_Score":1,"Tags":"python,neural-network,yolo,nao-robot,naos-project","A_Id":72156004,"CreationDate":"2022-02-25T06:25:00.000","Title":"robot NAO object detection from scratsh","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to use sklearn Linear Regression, however whenever I run my code it comes up with an error: Expected 2D array, got 1D array instead:\narray=[1.16 2.51 1.15 1.52 1.11 1.84 1.07 3. 2. 1.71 0.48 1.85 1.32 1.17\n1.48 2.59].\nAnyone know How I can fix this?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":29,"Q_Id":71294987,"Users Score":0,"Answer":"You have only one element in the array because you didn't but \",\" between your numbers. Try it like this: array=[1.16, 2.51, 1.15, 1.52, 1.11, 1.84, 1.07, 3, 2, 1.71, 0.48, 1.85, 1.32, 1.17, 1.48, 2.59]. If this isn't what you wanted, describe your problem a bit more.","Q_Score":0,"Tags":"python,arrays,multidimensional-array,linear-regression","A_Id":71295230,"CreationDate":"2022-02-28T12:22:00.000","Title":"How do I change the dimension of an array","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I tried scipy.interpolate.RegularGridInterpolator but MATLAB and python give me results with tiny different (For example: python: -151736.1266937256 MATLAB: -151736.1266989708). And I do care about those different decimals.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":135,"Q_Id":71303408,"Users Score":0,"Answer":"Those two functions are equivalent. However, MATLAB's griddedInterpolant has multiple interpolation methods, whilst RegularGridInterpolator only seems to support linear and nearest. 
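For reference, a minimal SciPy sketch of that call (the grid vectors x, y, the value array V and the query point are placeholders here, not names from your code):\nfrom scipy.interpolate import RegularGridInterpolator\nf = RegularGridInterpolator((x, y), V, method='linear')\nval = f([[xi, yi]])\n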
With MATLAB, this gives you more possibilities to choose a proper method given your data.\nYour two results seem to be accurate to the 12th digit, which in most cases is a good accuracy. The difference between the results is probably due to different implementations of the interpolation method.\nIf you want accuracy beyond the 12th digit you should rescale your problem so that you only consider the decimals.","Q_Score":0,"Tags":"python,numpy,matlab,scipy","A_Id":71306383,"CreationDate":"2022-03-01T03:27:00.000","Title":"What is equivalent to MATLAB griddedInterpolant function in python?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"While running the code az.plot_trace(result_final);\nfacing the below error\nTypeError: expected dtype object, got 'numpy.dtype[float64]'\nThe above exception was the direct cause of the following exception:\nSystemError: CPUDispatcher() returned a result with an error set\nCan you please let me know how to solve this","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":35,"Q_Id":71305033,"Users Score":0,"Answer":"I suggest upgrading numba to the latest version, e.g. python3 -m pip install numba==[latest version]. This might require a manual update to llmvite as well by doing, python3 -m pip install --ignore-installed llvmlite. Hope that helps!","Q_Score":0,"Tags":"python,data-science,arviz,bambi","A_Id":72236985,"CreationDate":"2022-03-01T07:27:00.000","Title":"In Arviz while ploting the plot_trace or plot_posterior getting the type error","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"If I have a dataframe with a string column and I want to do some filtering, what's the difference between\ndf[\"string_column\"].str.startswith(...)\nand\ndf[\"string_column\"].startswith(...)\nBoth work fine for me. I'm just curious of why we use .str","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":24,"Q_Id":71315674,"Users Score":0,"Answer":"some methods are specific to string types only. such as contains(), lower(), replace()..","Q_Score":0,"Tags":"python,pandas","A_Id":71315764,"CreationDate":"2022-03-01T23:13:00.000","Title":"Why use df[\"column\":].str as opposed to not","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am training a model with different outputs in PyTorch, and I have four different losses for positions (in meter), rotations (in degree), and velocity, and a boolean value of 0 or 1 that the model has to predict.\nAFAIK, there are two ways to define a final loss function here:\none - the naive weighted sum of the losses\ntwo - the defining coefficient for each loss to optimize the final loss.\nSo, My question is how is better to weigh these losses to obtain the final loss, correctly?","AnswerCount":2,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":171,"Q_Id":71317141,"Users Score":2,"Answer":"This is not a question about programming but instead about optimization in a multi-objective setup. 
The two options you've described come down to the same approach which is a linear combination of the loss term. However, keep in mind there are many other approaches out there with dynamic loss weighting, uncertainty weighting, etc... In practice, the most often used approach is the linear combination where each objective gets a weight that is determined via grid-search or random-search.\nYou can look up this survey on multi-task learning which showcases some approaches: Multi-Task Learning for Dense Prediction Tasks: A Survey, Vandenhende et al., T-PAMI'20.\nThis is an active line of research, as such, there is no definite answer to your question.","Q_Score":2,"Tags":"python,optimization,pytorch,loss-function,loss","A_Id":71320260,"CreationDate":"2022-03-02T03:32:00.000","Title":"optimizing multiple loss functions in pytorch","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have imported data from excel to python and now want to draw multiple plots on a single figure but for that I will need separate variables like 'x' & 'y' etc because we know that plt.plot(x,y), basically I have two datasets in which I am doing Time series analysis. In first data set I have Monthly+Yearly data in which I combined both columns and formed one column having name Year-Month, In second dataset I have Daily+Yearly data in which I formed one column by merging both and named it as Year-Daily. Now the dependent variable in both datasets is the number of sunspots.\n\nNow I want to Plot Daily and Monthly sunspot numbers on a single Graph in Python, so how will I do that?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":34,"Q_Id":71327624,"Users Score":0,"Answer":"What is the library that are you using to import the data?","Q_Score":0,"Tags":"python,jupyter-notebook,time-series,jupyter","A_Id":71327904,"CreationDate":"2022-03-02T18:37:00.000","Title":"Imported data from excel and assigning variables in python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to calculate the confidence of a random forest regression.\nI couldn't find a way to do so in sklearn library in python, thus I am trying to calculate it using the variance between the predictions of each tree, but I still couldn't find anything.\nHave you faced this problem and have a solution that helps me calculate the confidence?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":45,"Q_Id":71332829,"Users Score":0,"Answer":"The answer was pretty simple, we have the ability to get each estimator in the forest (we can simply go through the forest in a loop) and this solves the problem.","Q_Score":0,"Tags":"python,scikit-learn,regression,random-forest","A_Id":71333821,"CreationDate":"2022-03-03T06:16:00.000","Title":"Trees' predictions in random forest regression","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I recently updated matplotlib and now I am consistently getting an error when I write from matplotlib import pyplot as plt.\nImportError: 
cannot import name 'artist' from 'matplotlib' (C:\\Users\\nenze\\AppData\\Roaming\\Python\\Python39\\site-packages\\matplotlib\\__init__.py)\nI've tried uninstalling and reinstalling matplotlib which didn't solve anything. I even tried to downgrade to an older version but I am still getting the same error.\nThis is with matplotlib version 3.5.1. This is with Python version 3.9.7. This is through Jupyter Notebooks.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":733,"Q_Id":71342729,"Users Score":0,"Answer":"Ended up deleting Python\\Python39\\site-packages\\matplotlib_init_.py and it worked itself out.\nI also deleted files that started with ~ (ex: ~matplotlib).","Q_Score":0,"Tags":"python,matplotlib,jupyter-notebook","A_Id":71344004,"CreationDate":"2022-03-03T19:28:00.000","Title":"Cannot import name 'artist' from 'matplotlib' (Python)","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have an example of a dataset (below). Column ['ID'] has values refering to customer codes A, B, C. Each customer-code has been to different locations (referred to as ['LON'] and ['LAT'].\nI am trying to group each ID and calculate the mean value of the corresponding LON and LAT values. After the calculation, I try to append the mean value in the same column or a new column but it doesn't seem to work (runs into an error that the column isn't defined).\nCould you please shed some light?\nThanks so much!\n\n\n\n\nID\nLON\nLAT\n\n\n\n\nA\n62.03755\n16.34481\n\n\nB\n-50.37181\n54.94410\n\n\nC\n16.95291\n50.35189\n\n\nB\n59.95044\n173.64574\n\n\nA\n31.31972\n-128.33218\n\n\nB\n-50.37181\n54.94410\n\n\nA\n23.11042\n157.43303\n\n\nB\n2.15615\n97.10632\n\n\n\n\nI tried this:\ndf.groupby('ID')['LON'].mean().append\nand\ndf['MEANLON'] = df.groupby('ID', as_index=False)['LON'].mean()","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":48,"Q_Id":71343848,"Users Score":0,"Answer":"Thank you for your help Ivan!\nTo answer your Qs.\n\nYes, I'm trying to calculate the mean values of the latitude & the longitude.\n\nI'll try to explained it better. 
The shape of the original df I deal with is (15, 30000).\nIt contians electric vehicles charging records in a City.\n\n\n\nEach ID refer to to a contract ID \/ User.\nsome users charge regularly, mostly at the same charging station or other close-by stations.\nI'm trying to Filter the charge records by Each ID ans calculate the mean lon & lat at which charge events happened.\nThis mean values of geocoordinates indicates where the User lives (roughly assumptions).\n\n\n\n\n\nID\nLON\nLAT\nStation\nSTARTTIME\nENDTIME\n\n\n\n\nA\n62.03755\n16.34481\nstationname\ntimestamp\ntimestamp\n\n\nB\n-50.37181\n54.94410\nstationname\ntimestamp\ntimestamp\n\n\nC\n16.95291\n50.35189\nstationname\ntimestamp\ntimestamp\n\n\nB\n59.95044\n173.64574\nstationname\ntimestamp\ntimestamp\n\n\nA\n31.31972\n-128.33218\nstationname\ntimestamp\ntimestamp\n\n\nB\n-50.37181\n54.94410\nstationname\ntimestamp\ntimestamp\n\n\nA\n23.11042\n157.43303\nstationname\ntimestamp\ntimestamp\n\n\n\n\nWhat I'm trying to get is like:\n\n\n\n\nID\nLON\nLAT\nStation\nSTARTTIME\nENDTIME\n\n\n\n\nA\nMean of LON values for A\nMean of LAT values for A\nstationname\ntimestamp\ntimestamp\n\n\nB\nMean of LON values for B\nMean of LAT values for B\nstationname\ntimestamp\ntimestamp\n\n\nC\nMean of LON vakues for C\nMean of LAT values for C\nstationname\ntimestamp\ntimestamp\n\n\n\n\nOr\n\n\n\n\nID\nLON\nLAT\nStation\nSTARTTIME\nENDTIME\nLONMEAN\nLATMEAN\n\n\n\n\nA\n62.03755\n16.34481\nstationname\ntimestamp\ntimestamp\nMean of LON values for A\nMean of LAT values for A\n\n\nB\n-50.37181\n54.94410\nstationname\ntimestamp\ntimestamp\nMean of LON values for B\nMean of LAT values for B\n\n\nC\n16.95291\n50.35189\nstationname\ntimestamp\ntimestamp\nMean of LON vakues for C\nMean of LAT values for C\n\n\nB\n59.95044\n173.64574\nstationname\ntimestamp\ntimestamp\nMean of LON values for B\nMean of LAT values for B\n\n\nA\n31.31972\n-128.33218\nstationname\ntimestamp\ntimestamp\nMean of LON values for A\nMean of LAT values for A\n\n\nB\n-50.37181\n54.94410\nstationname\ntimestamp\ntimestamp\nMean of LON values for B\nMean of LAT values for B\n\n\nA\n23.11042\n157.43303\nstationname\ntimestamp\ntimestamp\nMean of LON values for A\nMean of LAT values for A","Q_Score":0,"Tags":"python,append,pandas-groupby,jupyter,mean","A_Id":71346297,"CreationDate":"2022-03-03T21:11:00.000","Title":"How to calculate mean value and appending it to a column? #python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to make a calculator for my data.\nBasically I have multiple measurements in different .csv files that are named as their physical representation (temperature_1, current_1, voltage_1 ecc.) and I am trying to make a calculator in python that given a certain expression [e.g. (current_1 * voltage_1) + (current_2 * voltage_2)] is able to load the data from each file and evaluates the result of the expression on the dataframes.\nI already made simple functions in order to sum, subtract, multiply and divide dataframes but I am stuck on how to handle complex expressions like the sum of many multiplications [e.g. 
(current_1 * voltage_1) + (current_2 * voltage_2) + (current_3 * voltage_3) ecc.].\nI tried to use a parser but still got no result.\nSomebody has any idea on how to handle this?\nNote: all the .csv have 2 columns, time and measurement, the number of rows are the same and the acquisition time is at the same timestamp.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":18,"Q_Id":71350849,"Users Score":0,"Answer":"I have solved the issue. For anybody who will need similar functions i report here the solution in steps\n\nWrite your equation with the name of your files. E.g. current_1 * voltage_1 (you need file current_1.csv, voltage_1.csv)\n\nparse your equation with any parser. I used py_expression_eval.\n\nExtract the variables from the equation (variables = parser.parse(equation).variables())\n\niterate over the variables and at each step:\n\nload the data in a dataframe\ninsert the column of the measurement in general dataframe\nchange the name of that column to the name of your file (e.g. current_1)\n\nby doing this you will obtain a dataframe with columns: time, measurement_1, measurement_2 ecc.\n\nUse df.eval('result= ' + expression, inplace=True) to evaluate your initial expression using the columns you have added to the general dataframe\n\n\nHope this helps somebody","Q_Score":0,"Tags":"python,dataframe,calculator","A_Id":71351112,"CreationDate":"2022-03-04T11:35:00.000","Title":"Calculator for Dataframes","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I wondered if there is any way to reproducibly draw random numbers when using parallel==True with jitted functions in numba. I know that for singlethreaded code, you can set the random seed for numpy or the standard random module within a jitted function, but that does not seem to work for multithreaded code. Maybe there is some sort of workaround one could use?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":52,"Q_Id":71351836,"Users Score":0,"Answer":"In parallel, each worker need to have its own seed as a random number generator cannot be both efficient and and thread safe at the same time. If you want the number of threads not to have an impact on the result, then you need to split the computation in chunks and set a seed for each chunk (computed by one thread). The seed chosen for a given chunk can be for example the chunk ID.","Q_Score":1,"Tags":"python,multithreading,numba,random-seed","A_Id":71352869,"CreationDate":"2022-03-04T13:01:00.000","Title":"Random seeds and multithreading in numba","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am stuck on a very simple problem but more I try to solve it the harder it becomes. Or maybe there is no better solution than O(N^2). The problem is simple. I have N data points in first set and M data points in second. I have a NxM similarity matrix A such that A[i, j] (range is between 0 and 1) gives score about similarity of Ni and Mj data-points.\nI want to find out which the points from first set match the best in the second. i.e the output is list with N elements each one corresponding to unique indices of M set which they match the most.\nI am using numpy. 
I sort matrix on second axis but the issue is argsort will not give me unqiue indices. And with indices logic it becomes really confusing.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":50,"Q_Id":71376238,"Users Score":1,"Answer":"np.argmax(A, axis=1) does exactly what you describe (assuming that 1 means most similar).","Q_Score":0,"Tags":"python,numpy,graph,theory,similarity","A_Id":71403361,"CreationDate":"2022-03-07T03:54:00.000","Title":"How to find most optimal match between data points given the similarity matrix?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have the following data frame structure:\n\n\n\n\nid_trip\ndtm_start_trip\ndtm_end_trip\nstart_station\nend_station\n\n\n\n\n1\n2018-10-01 10:15:00\n2018-10-01 10:17:00\n100\n200\n\n\n2\n2018-10-01 10:17:00\n2018-10-01 10:18:00\n200\n100\n\n\n3\n2018-10-01 10:19:00\n2018-10-01 10:34:00\n100\n300\n\n\n4\n2018-10-01 10:20:00\n2018-10-01 10:22:00\n300\n100\n\n\n5\n2018-10-01 10:20:00\n2018-10-01 10:29:00\n400\n400\n\n\n\n\nAnd I would like to check, using python, how often a trip starts and ends in a given season. The idea was to do these average intervals per day, per hour and then in intervals of a few minutes.\nWhat would be the best approach to doing this?\nMy desired output would be something to inform eg: for station 100 on 2018-10-01, a travel starts, on average, every 4 minutes","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":50,"Q_Id":71380793,"Users Score":0,"Answer":"In order to do that you could group your DataFrame by different travels. Firstly, I would make a new column with a travel id, so travels starting and ending in the same stations can be grouped.\nThen you can easily group those rows by travel id and get all the information you need.\nPlease note that your data sample does not include any \"same travel\". Also, consider providing a code sample for your data, it would be easier for us to work with and run tests.","Q_Score":0,"Tags":"python,pandas,dataframe","A_Id":71380893,"CreationDate":"2022-03-07T12:03:00.000","Title":"Check average travel intervals for each station - pyhton","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Haven't been able to find this answer online, so I'm asking the stackoverflow community...\nI'm wondering if DataSpell can connect to a SageMaker instance and use the EC2 instance hardware (i.e. virtual CPUs, GPUs, RAM, etc.) to run data transformations and machine learning model training on python and Jupyter notebook files?\nI.e. I want all the advantages of DataSpell on my local computer (git, debugging, auto-complete, refactoring, etc.), while having all the advantages of a SageMaker instance on AWS (scalable compute hardware, fast training, etc.) to run python and Jupyter notebook files.\nThank you.","AnswerCount":1,"Available Count":1,"Score":-0.1973753202,"is_accepted":false,"ViewCount":151,"Q_Id":71382998,"Users Score":-1,"Answer":"This can not be done. You can't bring your own IDE to SageMaker. 
You can use SageMaker's native IDE - SageMaker Studio which will give you an integrated experience with all of SageMaker's capabilities.\nI work at AWS and my opinions are my own.","Q_Score":1,"Tags":"python,jupyter-notebook,pycharm,amazon-sagemaker,dataspell","A_Id":71386785,"CreationDate":"2022-03-07T14:59:00.000","Title":"DataSpell & AWS Sagemaker Connection","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a list of texts. I turn each text into a token list. For example if one of the texts is 'I am studying word2vec' the respective token list will be (assuming I consider n-grams with n = 1, 2, 3) ['I', 'am', 'studying ', 'word2vec, 'I am', 'am studying', 'studying word2vec', 'I am studying', 'am studying word2vec'].\n\nIs this the right way to transform any text in order to apply most_similar()?\n\n(I could also delete n-grams that contain at least one stopword, but that's not the point of my question.)\nI call this list of lists of tokens texts. Now I build the model:\nmodel = Word2Vec(texts)\nthen, if I use\nwords = model.most_similar('term', topn=5)\n\nIs there a way to determine what kind of results i will get? For example, if term is a 1-gram then will I get a list of five 1-gram? If term is a 2-gram then will I get a list of five 2-gram?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":42,"Q_Id":71384680,"Users Score":1,"Answer":"Generally, the very best way to determine \"what kinds of results\" you will get if you were to try certain things is to try those things, and observe the results you actually get.\nIn preparing text for word2vec training, it is not typical to convert an input text to the form you've shown, with a bunch of space-delimited word n-grams added. Rather, the string 'I am studying word2vec' would typically just be preprocessed\/tokenized to a list of (unigram) tokens like ['I', 'am', 'studying', 'word2vec'].\nThe model will then learn one vector per single word \u2013 with no vectors for multigrams. And since it only knows such 1-word vectors, all the results its reports from .most_similar() will also be single words.\nYou can preprocess your text to combine some words into multiword entities, based on some sort of statistical or semantic understanding of the text. Very often, this process converts the runs-of-related-words to underscore-connected single tokens. For example, 'I visited New York City' might become ['I', 'visited', 'New_York_City'].\nBut any such preprocessing decisions are separate from the word2vec algorithm itself, which just considers whatever 'words' you feed it as 1:1 keys for looking-up vectors-in-training. It only knows tokens, not n-grams.","Q_Score":0,"Tags":"python,gensim,word2vec","A_Id":71388346,"CreationDate":"2022-03-07T17:02:00.000","Title":"Retrieve n-grams with word2vec","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I tried updating conda, and I got this message:\nERROR conda.core.link:_execute_actions(337): An error occurred while uninstalling package 'defaults::requests-2.14.2-py36_0'. 
PermissionError(13, 'Permission denied').\nAnd if I try updating just matplotlib on conda, I get: ERROR conda.core.link:_execute_actions: An error occurred while installing package","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":25,"Q_Id":71390705,"Users Score":0,"Answer":"The first question when you update conda is that PermissionError, maybe you just do not have enough system permission. The second question, you did not give enough error tips.","Q_Score":0,"Tags":"python,matplotlib,conda","A_Id":71390771,"CreationDate":"2022-03-08T06:09:00.000","Title":"Having trouble installing matplotlib in python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am collecting time series data, which can be separated into \"tasks\" based on a particular target value. These tasks can be numbered based on the associated target. However, the lengths of data associated with each task will differ because it may take less time or more time for a \"task\" to be completed. Right now in MATLAB, this data is separated by the target number into a MATLAB cell, which is extremely convenient as the analysis on this time-series data will be the same for each set of data associated with each target, and thus I can complete data analysis simply by using a for loop to go through each cell in the cell array. My knowledge on the closest equivalent of this in Python would be to generate a ragged array. However, through my research on answering this question, I have found that automatic setting of a ragged array has been deprecated, and that if you want to generate a ragged array you must set dtype = object. I have a few questions surrounding this scenario:\n\nDoes setting dtype=object for the ragged array come with any inherent limitations on how one will access the data within the array?\n\nIs there a more convenient way of saving these ragged arrays as numpy files besides reducing dimensionality from 3D to 2D and also saving a file of the associated index? This would be fairly inconvenient I think as I have thousands of files for which it would be convenient to save as a ragged array.\n\nRelated to 2, is saving the data as a .npz file any different in practice in terms of saving an associated index? More specifically, would I be able to unpack the ragged arrays automatically based on a technically separate .npy file for each one and being able to assume that each set of data associated with each target is stored in the same way for every file?\n\nMost importantly, is using ragged arrays really the best equivalent set-up for my task, or do I get the deprecation warning about setting dtype=object because manipulating data in this way has become redundant and Python3 has a better method for dealing with stacked arrays of varying size?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":25,"Q_Id":71397763,"Users Score":1,"Answer":"I have decided to move forward with a known solution to my problem, and it seems to be adapting well.\nI organize each set of separate data into it's own array, and then store them in a sequence in a list as I would with cells in MATLAB.\nTo save this information, when I separated out the data I stored the subsequent index value in a list. 
By this I mean that:\n\nI identify the location of the next separate set of data.\nI copy the data up until that index value into an array that is appended to a list.\nI store the index value that was the start of the next separate set of data.\nI delete that information from a copy of my original array.\nI repeat steps 1-4 until there is only one uniquely labelled sequence of data left. I append this set of data. There is no other index to record. Therefore the list of indices is equal to the length of the list of arrays -1.\nWhen saving data, I take the original array and save it in a .npz file with the unpacking indices.\nWhen I want to use and reload the data into it's separate arrays for analysis, I can 'pack' and 'unpack' the array into it's two different forms, from single numpy array to list of numpy arrays.\n\nThis solution is working quite well. I hope this helps someone in the future.","Q_Score":0,"Tags":"python-3.x,deprecated,data-management,ragged","A_Id":71588373,"CreationDate":"2022-03-08T15:46:00.000","Title":"Python: Is there a better way to work with ragged arrays than a list of arrays with dtype = object?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to make a binary classifier that classifies the following:\nClass 1. Some images that I already have.\nClass 2. Some images that I create from a function, using the images of class 1.\nThe problem is that instead of pre-creating the two classes, and then loading them, to speed up the process I would like the class 2 images to be created for each batch.\nAny ideas on how I can tackle the problem? If I use the DataLoader as usual, I have to enter the images of both classes directly, but if I still don't have the images of the second class I don't know how to do it.\nThanks.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":29,"Q_Id":71399540,"Users Score":1,"Answer":"You can tackle the problem in at least two ways.\n\n(Preferred) You create a custom Dataset class, AugDset, such that AugDset.__len__() returns 2 * len(real_dset), and when idx > len(imgset), AugDset.__getitem__(idx) generates the synthetic image from real_dset(idx).\nYou create your custom collate_fn function, to be passed to DataLoader that, given a batch, it augments it with your synthetic generated images.","Q_Score":0,"Tags":"python,pytorch","A_Id":71400516,"CreationDate":"2022-03-08T17:57:00.000","Title":"How can I create images for each batch using Pytorch?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have trained a Scikit Learn model in Python environment which i need to use it for inference in GoLang. Could you please help me how can i export\/save my model in python and then use it back in GoLang.\nI found a solution for Neural Network model where i can save Tensorflow model in ONNX format and load it using Onnx-go in GoLang. But this is specific for Neural Network models. 
But I am unable to figure it out for scikit-learn models.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":241,"Q_Id":71403705,"Users Score":0,"Answer":"You can develop an REST json API service to expose your scikit-learn model and communicate with go client.","Q_Score":0,"Tags":"python,go,scikit-learn","A_Id":71403744,"CreationDate":"2022-03-09T02:38:00.000","Title":"How to use trained Scikit Learn Python model in GoLang?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"i'm totally new in NLP and Bert Model.\nWhat im trying to do right now is Sentiment Analysis on Twitter Trending Hashtag (\"neg\", \"neu\", \"pos\") by using DistilBert Model, but the accurazcy was about 50% ( I tried w Label data taken from Kaggle).\nSo here is my idea:\n(1) First, I will Fine-tunning Distilbertmodel (Model 1) with IMDB dataset,\n(2) After that since i've got some data took from Twitter post, i will sentiment analysis them my Model 1 and get Result 2.\n(3) Then I will refine-tunning Model 1 with the Result 2 and expecting to have Model (3).\nIm not really sure this process has any meaning to make the model more accuracy or not.\nThanks for reading my post.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":26,"Q_Id":71404582,"Users Score":1,"Answer":"I'm a little skeptical about your first step. Since the IMDB database is different from your target database, I do not think it will positively affect the outcome of your work. Thus, I would suggest fine-tuning it on a dataset like a tweeter or other social media hashtags; however, if you are only focusing on hashtags and do not care about the text, that might work! My little experience with fine-tuning transformers like BART and BERT shows that the dataset that you are working on should be very similar to your actual data. But in general, you can fine-tune a model with different datasets, and if the datasets are structured for one goal, it can improve the model's accuracy.","Q_Score":0,"Tags":"python,nlp,pytorch,sentiment-analysis,bert-language-model","A_Id":72243747,"CreationDate":"2022-03-09T05:09:00.000","Title":"Does Fine-tunning Bert Model in multiple times with different dataset make it more accuracy?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I want to train my YOLOv4 detector on 5 classes [Person,Car,Motorcycle,Bus,Truck]. I used around 2000 images for training and 500 for validation.\nThe dataset I used is from OID or from COCO.\nThe main problem is that, when the training is over, the detector finds only one class in the image every time. 
For example, if it's a human in a car, it returns only the Car or the Person bounding box detection.\nI saw that the .txt annotation on every image is only for one class.\nIt's difficult to annotate by myself 10.000 images.\nAll the tutorials usually detect only one class in the image.\nAny ideas on how to train my model on all 5 classes?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":97,"Q_Id":71406898,"Users Score":0,"Answer":"i finally found the solution.\nThe problem was that OID dataset downloads images with one specific class, like person, car etc.\nAS Louis Lac mentioned i must train my model on dataset with all relevant classes","Q_Score":0,"Tags":"python,object-detection,training-data,yolo","A_Id":71496500,"CreationDate":"2022-03-09T09:17:00.000","Title":"Train multi classes object detector (YOLOv4)","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"My yolov5 model was trained on 416 * 416 images. I need to detect objects on my input image of size 4008 * 2672. I split the image into tiles of size 416 * 416 and fed to the model and it can able to detect objects but at the time of stitching the predicted image tiles to reconstruct original image, I could see some objects at the edge of tiles become split and detecting half in one tile and another half in another tile, can someone tell me how to made that half detections into a single detection in the reconstruction.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":128,"Q_Id":71407415,"Users Score":0,"Answer":"Running a second detection after offseting the tiles split would ensure that all previously cut objects would be in a single tile (assuming they are smaller than a tile). Maybe you could then combine the two results to get only the full objects","Q_Score":0,"Tags":"python,opencv,computer-vision,object-detection,yolov5","A_Id":71407894,"CreationDate":"2022-03-09T09:55:00.000","Title":"Detect Objects on high resolution image by splitting the image into tiles and reconstruct the tiles into single image","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"So I have a file with 6 different people, each person having 2 photos (in different angles) thus there are 6 * 2 = 12 images (in B&W).\nEach image is 140 (ht) x 120 (width)\nWhen I read the file, all the info is read so I have 12 columns (corresponding to each image) and 16,800 rows (corresponding to the image size).\nHow do I plot the image on matplotlib ?\nI tried extracting each column e.g. df.loc[:,0] and then reshaping it to (140,120). But plotting it gives some abstract art looking output instead of the face. am I doing something wrong?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":34,"Q_Id":71410399,"Users Score":0,"Answer":"I've solved it. 
I think it was a bug\/glitch.\nSo I tried with plt.imshow() and plt.show() but it did not work.\nI then tried various methods plt.plot() <- which was giving me the weird colour output.\nEventually, going back to plt.imshow() somehow worked","Q_Score":0,"Tags":"python,image,numpy","A_Id":71411789,"CreationDate":"2022-03-09T13:50:00.000","Title":"How to show image that's been given as 1-D array","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Having trouble getting Pandas data reader to retrieve price quotes from Yahoo\u2019s API. The most up to date answer seems to be:\n\n\"pip install --upgrade pandas pip install --upgrade pandas-datareader\n\nHowever, for the time being I will be using Google Collab and its Python platform, does anyone know how to update the pandas here? Or has the API truly just been discontinued?","AnswerCount":3,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":184,"Q_Id":71414673,"Users Score":1,"Answer":"In Colab you need to put a ! before pip","Q_Score":0,"Tags":"python,pandas,finance,yahoo-api","A_Id":71414712,"CreationDate":"2022-03-09T18:57:00.000","Title":"Python, Pandas, Yahoo Finance API","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"We're developing custom runtime for databricks cluster. We need to version and archive our clusters for client. We made it run successfully in our own environment but we're not able to make it work in client's environment. It's large corporation with many restrictions.\nWe\u2019re able to start EC2 instance and pull image, but there must be some other blocker. I think ec2 instance is succefully running, but I have error in databricks\n\nCluster terminated.Reason:Container launch failure\nAn unexpected error was encountered while launching containers on\nworker instances for the cluster. Please retry and contact Databricks\nif the problem persists.\nInstance ID: i-0fb50653895453fdf\nInternal error message: Failed to launch spark container on instance\ni-0fb50653895453fdf. Exception: Container setup has timed out\n\nIt should be in some settings\/permissions inside client's environment.\nHere is end of ec2 log\n\n-----END SSH HOST KEY KEYS----- [ 59.876874] cloud-init[1705]: Cloud-init v. 21.4-0ubuntu1~18.04.1 running 'modules:final' at Wed, 09\nMar 2022 15:05:30 +0000. Up 17.38 seconds. [ 59.877016]\ncloud-init[1705]: Cloud-init v. 21.4-0ubuntu1~18.04.1 finished at Wed,\n09 Mar 2022 15:06:13 +0000. Datasource DataSourceEc2Local. 
Up 59.86\nseconds [ 59.819059] audit: kauditd hold queue overflow [\n66.068641] audit: kauditd hold queue overflow [ 66.070755] audit: kauditd hold queue overflow [ 66.072833] audit: kauditd hold queue\noverflow [ 74.733249] audit: kauditd hold queue overflow [\n74.735227] audit: kauditd hold queue overflow [ 74.737109] audit: kauditd hold queue overflow [ 79.899966] audit: kauditd hold queue\noverflow [ 79.903557] audit: kauditd hold queue overflow [\n79.907108] audit: kauditd hold queue overflow [ 89.324990] audit: kauditd hold queue overflow [ 89.329193] audit: kauditd hold queue\noverflow [ 89.333125] audit: kauditd hold queue overflow [\n106.617320] audit: kauditd hold queue overflow [ 106.620980] audit: kauditd hold queue overflow [ 107.464865] audit: kauditd hold queue\noverflow [ 127.175767] audit: kauditd hold queue overflow [\n127.179897] audit: kauditd hold queue overflow [ 127.215281] audit: kauditd hold queue overflow [ 132.190357] audit: kauditd hold queue\noverflow [ 132.193968] audit: kauditd hold queue overflow [\n132.197546] audit: kauditd hold queue overflow [ 156.211713] audit: kauditd hold queue overflow [ 156.215388] audit: kauditd hold queue\noverflow [ 228.558571] audit: kauditd hold queue overflow [\n228.562120] audit: kauditd hold queue overflow [ 228.565629] audit: kauditd hold queue overflow [ 316.405562] audit: kauditd hold queue\noverflow [ 316.409136] audit: kauditd hold queue overflow","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":107,"Q_Id":71419465,"Users Score":0,"Answer":"This is usually caused by slowness in downloading the custom docker image, please check if you can download from the docker repository properly from the network where your VMs are launched.","Q_Score":0,"Tags":"python,linux,amazon-web-services,docker,databricks","A_Id":72107176,"CreationDate":"2022-03-10T06:00:00.000","Title":"AWS Databricks Cluster terminated.Reason:Container launch failure","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a dataframe and am trying to set the index to the column 'JnlNo'. Currently the index is just a row number. The JnlNo are integers. But it keeps returning this error:\n\nKeyError Traceback (most recent call last)\n~\\AppData\\Local\\Temp\/ipykernel_52996\/1782638178.py in \n----> 1 journals=journals.set_index('JnlNo')\n~\\Anaconda3\\lib\\site-packages\\pandas\\util_decorators.py in wrapper(*args, **kwargs)\n309 stacklevel=stacklevel,\n310 )\n--> 311 return func(*args, **kwargs)\n312\n313 return wrapper\n~\\Anaconda3\\lib\\site-packages\\pandas\\core\\frame.py in set_index(self, keys, drop, append, inplace, verify_integrity)\n5449\n5450 if missing:\n-> 5451 raise KeyError(f\"None of {missing} are in the columns\")\n5452\n5453 if inplace:\nKeyError: \"None of ['JnlNo'] are in the columns\"\nI have initially ran these codes\nimport pandas as pd\njournals = pd.read_csv('Journals.csv')\njournals.head()\nbut when I then went ahead to set_index\njournals=journals.set_index('JnlNo'), it returned the error.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":30,"Q_Id":71423964,"Users Score":0,"Answer":"Look at journals.columns. You can only use columns that are in the DataFrame. JnlNo is not in the dataframe as the error message tells you. 
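A quick check is to run print(journals.columns.tolist()) right after read_csv; if the real header only differs by stray whitespace (just a guess, since the CSV isn't shown here), then journals.columns = journals.columns.str.strip() before calling set_index would be enough. 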
Maybe you're confusing an uppercase i with a lowercase L or something like that.","Q_Score":0,"Tags":"python,pandas,csv,indexing","A_Id":71424322,"CreationDate":"2022-03-10T12:11:00.000","Title":"Dataframe set_index function returning error","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm using gensim to create a Word2Vec model. I'm wondering if there is a way to feed the gensim class Word2Vec with my examples [(target, context1), (target, context2), ...] instead of feeding it with sentences.\nThanks in advance","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":16,"Q_Id":71427111,"Users Score":1,"Answer":"The Gensim Word2Vec class expects a re-iterable sequence where each item is a list of string word tokens. It then does the construction of the inner 'micro-examples' (context-word -> target-word in skip-gram, or context-window -> target-window in CBOW) itself.\nThere's not an alternate interface, or easy extension-hook, for changing the micro-examples. (Though, as the source code is available, it's possible even when not easy to change it arbitrarily.)\nIf you only ever need single-word contexts to single-word targets, and are OK with (as in standard word2vec) every A B pair to imply both an A -> B prediction and a B -> A prediction, you may be able to approximate your desired effect by the proper preprocessing of your corpus, completely outside Word2Vec code.\nSpecifically, only ever provide 2-word texts, of exactly the word pairs you want trained, as if they were full texts.","Q_Score":0,"Tags":"python,gensim","A_Id":71427321,"CreationDate":"2022-03-10T15:57:00.000","Title":"How to inject training examples in gensim Word2Vec?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a df like that:\n\n\n\n\nmonth\nstock\nMV\n\n\n\n\n1994-07\nA\n50\n\n\n1994-07\nB\n60\n\n\n1994-07\nC\n70\n\n\n1994-07\nD\n80\n\n\n1994-08\nA\n90\n\n\n1994-08\nB\n60\n\n\n1994-08\nC\n70\n\n\n1994-08\nD\n95\n\n\n1994-08\nE\n100\n\n\n1994-08\nF\n110\n\n\n\n\nI would like to subset my df in a way that I only have in it the 50% of the highest MV per month. For July\/1994 I only have 4 stock, so 50% will be the 2 highest MV. For the month after, I have 6 stocks, which gives me 3 highest values:\n\n\n\n\nmonth\nstock\nMV\n\n\n\n\n1994-07\nC\n70\n\n\n1994-07\nD\n80\n\n\n1994-08\nD\n95\n\n\n1994-08\nE\n100\n\n\n1994-08\nF\n110\n\n\n\n\nI have tried:\ndf = df.groupby(pd.Grouper(freq=\"M\")).nlargest(2, \"MV\")\nBut I got the error: AttributeError: 'DataFrameGroupBy' object has no attribute 'nlargest'\nIn addition, the value of n will need to be a different value for every month. 
I am not sure how to handle that as well.","AnswerCount":5,"Available Count":1,"Score":0.0798297691,"is_accepted":false,"ViewCount":56,"Q_Id":71468905,"Users Score":2,"Answer":"df.groupby('month').apply(lambda monthly_data: monthly_data[monthly_data['MV'] >= monthly_data['MV'].median()])","Q_Score":2,"Tags":"python,pandas,pandas-groupby,grouping,subset","A_Id":71468944,"CreationDate":"2022-03-14T13:55:00.000","Title":"Subset dataframe based on large values of a column per month","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"For example, if I want to see only one sentence in the dataframe in row 21, how can I type in the head function?\ndf.head(20)? df.head(19:20)","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":29,"Q_Id":71501339,"Users Score":0,"Answer":"You can access an element by its integer position with df.iat[row, col],\nor an integer row with df.iloc[row].","Q_Score":0,"Tags":"python","A_Id":71501403,"CreationDate":"2022-03-16T17:02:00.000","Title":"How can I type to see only one row in df.head()","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"How can I mock existing Azure Databricks PySpark code of a project (written by others) and run it locally on a Windows machine\/Anaconda to test and practice?\nIs it possible to mock the code or do I need to create a new cluster on Databricks for my own testing purposes?\nHow can I connect to a storage account, use the Databricks Utilities, etc.? I only have experience with Python & GCP and just joined a Databricks project and need to run the cells one by one to see the result and modify if required.\nThanks","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":73,"Q_Id":71516084,"Users Score":0,"Answer":"You can test\/run PySpark code from your IDE by installing PySpark on your local computer.\nNow, to use Databricks Utilities, you would in fact need a Databricks instance, and that's not available locally. You can try Databricks Community Edition for free, but with some limitations.\nTo access a cloud storage account, it can be done locally from your computer or from your own Databricks instance. In both cases you will have to set up the endpoint of this storage account using its secrets.","Q_Score":1,"Tags":"python,azure,pyspark,databricks","A_Id":71524640,"CreationDate":"2022-03-17T16:45:00.000","Title":"How to mock and test Databricks Pyspark notebooks Locally","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm looking at python code working with numba, and have some questions. There are few tutorials on numba, so I have come here to ask.\nIn numba, the data type is pre-declared to help processing. I'm not clear on the rules for declaring data types. One example is numba.float64[:,::1]. I think it's declaring a 2D array of float type. However, I'm not sure what ::1 means here. Another example is nb.types.NPDatetime('M')[::1]. Is it slicing the 1D array?\nI still have questions on ListType(), which is imported from numba.types. Only one element is allowed here? 
In my code, one class type is saved and passed to ListType() as single argument. What if I need to explicitly define this class type, and pass it here? Thanks.\nI feel there is few tutorial or documents on numba module. If ok, please share some resources on numba. That's very appreciated.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":34,"Q_Id":71518774,"Users Score":1,"Answer":"One example is numba.float64[:,::1]. I feel it's declaring a 2D array in float type. However, I'm not sure what ::1 means here\n\nExactly. ::1 means that the array axis is contiguous. This enable further optimizations like the use of SIMD instructions.\n\nAnother example is nb.types.NPDatetime('M')[::1]. Is it slicing the 1D array?\n\nnb.types.NPDatetime('M') is a Numpy datetime type (where 'M' is meant to specify the granularity of the datetime. eg. months) here and [::1] means that this is a 1D contiguous array (containing datetime objects).\nOne should not be confused between object instances and object types. In Python, this is quite frequent to mix both but this is due to the dynamic nature of the language. Statically-typed compiled languages like C or C++ clearly separate the two concepts and types cannot be manipulated at runtime.\n\nOnly one element is allowed here?\n\nListType is a class representing the type of a typed list. Its unique parameter defines the type of the item in the resulting type of list. For example nb.types.ListType(nb.types.int32) returns an object representing the type of a typed list containing 32-bit integers. Note that it is not a list instance. It is meant to be provided to Numba signature or other types.","Q_Score":0,"Tags":"python,numba","A_Id":71519774,"CreationDate":"2022-03-17T20:39:00.000","Title":"How to understand [] in data type definition in numba","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am analyzing a consumer survey and there are both Dutch (NL) and French (FR) respondents. Depending on the answer they gave when we asked about their mother language they got the same questionnaire but translated in Dutch or French. The problem is that the output of Qualtrics (the survey software) gave us the following output:\n\n\n\n\nUser_Language\nQ1_NL\nQ2_NL\n...\nQ1_FR\nQ_FR\n...\n\n\n\n\nNL\n1\n3\n...\n\n\n...\n\n\nNL\n4\n4\n..\n\n\n...\n\n\nNL\n1\n3\n...\n\n\n...\n\n\nNL\n2\n5\n...\n\n\n...\n\n\n...\n...\n...\n...\n...\n...\n...\n\n\nFR\n\n\n...\n3\n2\n...\n\n\nFR\n\n\n..\n4\n3\n...\n\n\nFR\n\n\n...\n2\n5\n...\n\n\nFR\n\n\n...\n1\n2\n...\n\n\n\n\nAs you can see the answers for the Dutch-speaking participants were recorded in the first n columns, while the French answers were recorded in the following n columns.\nHow can I cut the French answers from the last n columns and append them on the bottom of the DataFrame as those are answers to the exact same questions?\nThanks!\nEDIT: Solutions that make use of grouping by the strings \"Q1\" or \"Q2\" are unfortunately not viable as the column names are actually the questions, this was just an example value. 
I do know the exact range of the French answers and the Dutch answers.","AnswerCount":3,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":39,"Q_Id":71527299,"Users Score":1,"Answer":"If there aren't any overlapping values you could simply split the data frame into two data frames based on where the dutch values \"stop\" and the french \"start\" rename the value columns of those two data frames Q2_NL and so on to simply Q2 ... and then concatenate those frames again into one.","Q_Score":0,"Tags":"python,pandas,dataframe","A_Id":71527352,"CreationDate":"2022-03-18T12:51:00.000","Title":"Transform DataFrame: place values in right columns as new rows","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I`m trying to make a research in which the observations of my dataset are represented by matrices (arrays composed of numbers, similar to how images for deep learning are represented, but mine are not images) of different shapes.\nWhat I`ve already tried is to write those arrays as lists in one column of a pandas dataframe and then save this as a csv\\excel. After that I planned just to load such a file and convert those lists to arrays of appropriate shapes and then to convert a set of such arrays to a tensor which I finally will use for training the deep model in keras.\nBut it seems like this method is extremely inefficient, cause only 1\/6 of my dataset has already occupied about 6 Gb of memory (pandas saved as csv) which is huge and I won't be able to load it in RAM (I'm using google colab to run my experiments).\nSo my question is: is there any other way of storing a set of arrays of different shapes, which won`t occupy so much memory? Maybe I can store tensors directly somehow? Or maybe there are some ways to store pandas in some compressed types of files which are not so heavy?","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":69,"Q_Id":71545778,"Users Score":0,"Answer":"Are you storing purely (or mostly) continuous variables? If so, maybe you could reduce the accuracy (i.e., from float64 to float32) these variables if you don't need need such an accurate value per datapoint.\nThere's a bunch of ways in reducing the size of your data that's being stored in your memory, and the what's written is one of the many ways to do so. Maybe you could break the process that you've mentioned into smaller chunks (i.e., storage of data, extraction of data), and work on each chunk\/stage individually, which hopefully will reduce the overall size of your data!\nOtherwise, you could perhaps take advantage of database management systems (SQL or NoSQL depending on which fits best) which might be better, though querying that amount of data might constitute yet another issue.\nI'm by no means an expert in this but I'm just explaining more of how I've dealt with excessively large datasets (similar to what you're currently experiencing) in the past, and I'm pretty sure someone here will probably give you a more definitive answer as compared to my 'a little of everything' answer. 
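To make the first suggestion a bit more concrete (just a minimal sketch, assuming your observations are float64 NumPy arrays and single precision is acceptable for your use case): import numpy as np; arr32 = arr.astype(np.float32) roughly halves the memory of each array, and np.save('obs_0001.npy', arr32) (or np.savez_compressed for a group of arrays) stores it as a compact binary file instead of text in a csv -- the file name and variable names here are only placeholders.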
All the best!","Q_Score":1,"Tags":"python,arrays,pandas,tensorflow,keras","A_Id":71545973,"CreationDate":"2022-03-20T10:06:00.000","Title":"How to store a set of arrays for deep learning not consuming too much memory (Python)?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I`m trying to make a research in which the observations of my dataset are represented by matrices (arrays composed of numbers, similar to how images for deep learning are represented, but mine are not images) of different shapes.\nWhat I`ve already tried is to write those arrays as lists in one column of a pandas dataframe and then save this as a csv\\excel. After that I planned just to load such a file and convert those lists to arrays of appropriate shapes and then to convert a set of such arrays to a tensor which I finally will use for training the deep model in keras.\nBut it seems like this method is extremely inefficient, cause only 1\/6 of my dataset has already occupied about 6 Gb of memory (pandas saved as csv) which is huge and I won't be able to load it in RAM (I'm using google colab to run my experiments).\nSo my question is: is there any other way of storing a set of arrays of different shapes, which won`t occupy so much memory? Maybe I can store tensors directly somehow? Or maybe there are some ways to store pandas in some compressed types of files which are not so heavy?","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":69,"Q_Id":71545778,"Users Score":1,"Answer":"Yes, Avoid using csv\/excel for big datasets, there are tons of data formats out there, for this case I would recommend to use a compressed format like pd.Dataframe.to_hdf, pd.Dataframe.to_parquet or pd.Dataframe.to_pickle.\nThere are even more formats to choose and compression options within the functions (for example to_hdf takes the argument complevel that you can set to 9 ).","Q_Score":1,"Tags":"python,arrays,pandas,tensorflow,keras","A_Id":71546044,"CreationDate":"2022-03-20T10:06:00.000","Title":"How to store a set of arrays for deep learning not consuming too much memory (Python)?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have written a script which deploys a Dashboard using plotly-dash. It has graphs, data for which is coming from the excel file located on the PC. This data is stored in the excel file which will be updated on a daily basis. What can I do for the app to get updated with the new data without me redeploying it every day? Maybe you can give some advice or ideas?\nP.S. 
The dashboard is currently deployed using Heroku.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":109,"Q_Id":71561170,"Users Score":0,"Answer":"If your app reads the file in as part of a callback based on something like the pathname (dcc.Location), then you could just refresh the page.","Q_Score":0,"Tags":"python,plotly-dash,dashboard,live-update","A_Id":71565243,"CreationDate":"2022-03-21T16:46:00.000","Title":"Plotly\/dash Dashboard is not live updating - Python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I got and LSTM that gives me output (4,32,32) i pass it to the Linear Layer(hidden size of LSTM, num_classes=1) and it gives me an output shape (4,32,1). I am trying to solve a wake word model for my AI assistant.\nI have 2 classes i want to predict from. 0 is not wake up and 1 is the wake up AI.\nMy batch size is 32. But the output is (4,32,1). Isnt it should be 32,1 or something like that so i will know that there is one prediction for 1 audio mfcc?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":185,"Q_Id":71566905,"Users Score":1,"Answer":"Not quite. You need to reshape your data to (32, 1) or (1, 32) in order for your linear layer to work. You can achieve this by adding a dimension with torch.unsqueeze() or even directly with torch.view(). If you use the unsqueeze function, the new shape should be (32, 1). If you use the view function, the new shape should be (1, 32).","Q_Score":1,"Tags":"python,pytorch,lstm,recurrent-neural-network","A_Id":71566924,"CreationDate":"2022-03-22T04:26:00.000","Title":"How to correctly combine LSTM with Linear layer","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I had trained a weight file to detect an object and another weight file to detect another specific object using yolov5. If i want to detect both objects in a single images, can i ,like, use both weight files together? Or is there a way to combine the both trained files into a single one, without training the datasets again together?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":26,"Q_Id":71567440,"Users Score":0,"Answer":"Actually, no. There is no way to aggregate models, trained to detect different objects into one. You can sequentially detect objects by first and second model. Proper approach is to train model again with two classes.","Q_Score":0,"Tags":"python,object-detection,yolov5","A_Id":71567616,"CreationDate":"2022-03-22T05:46:00.000","Title":"Is there any way to detect two kinds of objects from an image or video using two seperate trained weight files in **YoloV5**?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have two numpy arrays with 0s and 1s in them. 
How can I find the indexes with 1 in the first array and 0 in the second?\nI tried np.logical_and\nBut got error message (builtin_function_or_method' object is not subscriptable)","AnswerCount":5,"Available Count":2,"Score":0.1194272985,"is_accepted":false,"ViewCount":41,"Q_Id":71574066,"Users Score":3,"Answer":"Use np.where(arr1==1) and np.where(arr2==0)","Q_Score":0,"Tags":"python,numpy","A_Id":71574144,"CreationDate":"2022-03-22T14:43:00.000","Title":"2 different specified elements from 2 numpy arrays","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have two numpy arrays with 0s and 1s in them. How can I find the indexes with 1 in the first array and 0 in the second?\nI tried np.logical_and\nBut got error message (builtin_function_or_method' object is not subscriptable)","AnswerCount":5,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":41,"Q_Id":71574066,"Users Score":0,"Answer":"tow numpy array given in problem.\narray1 and array2\njust use\none_index=np.where(array1==1)\nand\nzero_index=np.where(array2==0)","Q_Score":0,"Tags":"python,numpy","A_Id":71574387,"CreationDate":"2022-03-22T14:43:00.000","Title":"2 different specified elements from 2 numpy arrays","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I installed Tensorflow using pyenv. But whenever i import it gives me this error.I am using Debian in raspberry pi4.My python version is 3.7.12 and tensorflow version is 2.5.0.\n''' pi@raspberrypi:~\/project $ python\nPython 3.7.12 (default, Mar 22 2022, 14:27:41)\n[GCC 10.2.1 20210110] on linux\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n\n\n\nimport tensorflow\nRuntimeError: module compiled against API version 0xe but this version of numpy is 0xd\nTraceback (most recent call last):\nFile \"\", line 1, in \nFile \"\/home\/pi\/.pyenv\/versions\/3.7.12\/lib\/python3.7\/site-packages\/tensorflow\/init.py\", line 41, in \nfrom tensorflow.python.tools import module_util as _module_util\nFile \"\/home\/pi\/.pyenv\/versions\/3.7.12\/lib\/python3.7\/site-packages\/tensorflow\/python\/init.py\", line 40, in \nfrom tensorflow.python.eager import context\nFile \"\/home\/pi\/.pyenv\/versions\/3.7.12\/lib\/python3.7\/site-packages\/tensorflow\/python\/eager\/context.py\", line 37, in \nfrom tensorflow.python.client import pywrap_tf_session\nFile \"\/home\/pi\/.pyenv\/versions\/3.7.12\/lib\/python3.7\/site-packages\/tensorflow\/python\/client\/pywrap_tf_session.py\", line 23, in \nfrom tensorflow.python._pywrap_tf_session import *\nImportError: SystemError: returned a result with an error set\n'''","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":188,"Q_Id":71575274,"Users Score":0,"Answer":"The error message tries to say that Tensorflow needs a recent version of numpy. 
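A quick way to confirm which version is actually being picked up (just a version check, nothing TensorFlow-specific): python3 -c 'import numpy; print(numpy.__version__)' or pip3 show numpy.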
So,try to upgrade numpy pip3 install --upgrade numpy","Q_Score":0,"Tags":"python,tensorflow,debian","A_Id":71601231,"CreationDate":"2022-03-22T16:05:00.000","Title":"Tensorflow shows errors after importing using python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"(apologies about formatting, this is my first question, and when finalizing question, there was no back button?)\nmy categorical columns that i KNOW are categorical and even object_cols CONFIRMS this are:\n'x34', 'x35', 'x41', 'x45', 'x68', 'x93'. All 6 of them are present in xtrain and xvalid. But why is x41 being kicked out by the issubset operation? Even though we can clearly see x41 is present in object_cols_val\nwhy is the issubset messing up and throwing out x41?\nwhat is this doing:\n[col for col in object_cols if set(xvalid[col]).issubset(set(xtrain[col]))]\nI thought it's checking each column from object_cols in xvalid, then checking to see if it's a subset of xtrain, WHICH IT IS. Ugh, why is it treating x41 differently? (probably not related but x41 has a $ and numbers, but why would that matter? as long as the column is present in both sets?)\nall categorical columns\nobject_cols_train=[col for col in xtrain.columns if xtrain[col].dtype =='object']\nprint(\"object_cols are:\",object_cols)\nobject_cols_val=[col for col in xvalid.columns if xvalid[col].dtype =='object']\nprint(\"object_cols_val are:\",object_cols_in_val)\n\"good\" columns that can safely be ordinal encoded\ngood_label_cols=[col for col in object_cols if set(xvalid[col]).issubset(set(xtrain[col]))]\nprint(\"good_label_cols are:\",good_label_cols)\n\"bad\" problematic columns that should be dropped (for now, but i believe we should NEVER drop)\nbad_label_cols=list(set(object_cols)-set(good_label_cols))\nprint(\"bad_label_cols are:\",bad_label_cols)\n\noutputs:\n\n\nobject_cols are: ['x34', 'x35', 'x41', 'x45', 'x68', 'x93']\nobject_cols_val are: ['x34', 'x35', 'x41', 'x45', 'x68', 'x93']\ngood_label_cols are: ['x34', 'x35', 'x45', 'x68', 'x93']\nbad_label_cols are: ['x41']\n\nI'm still beginner\/intermediate, i tried separating out the sets to see what they look like, but cant because 'col'.\nI tried:\nxtrain[col]\nset(xtrain[col])\nset(xvalid[col]).issubset(set(xtrain[col]))\nI KNOW what xtrain['x41'] and xvalid['x41'] look like.\nMaybe i should include here:\nxtrain['x41'].head(),xvalid['x41'].head()\n\noutput:\n\n(22449 $-996.73\n39178 $-361.51\n33715 $851.5\n36010 $-765.51\n13370 $-1391.9\nName: x41, dtype: object,\n34320 $412.48\n27355 $-473.03\n18144 $-208.31\n20740 $-434.41\n10805 $203.53\nName: x41, dtype: object)","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":11,"Q_Id":71580804,"Users Score":0,"Answer":"embarrassed to admit that it took wayyy tooo looonnnggg to bring my mind at peace.\nissubset is taking EACH VALUE within x41 column and comparing in xtrain and xvalid. the thing is, x41 has 17000 unique values, so obviously when doing the split, the values will not be in BOTH sets. Thus, it is NOT a subset, because not all xvalid['x41'] values are in xtrain['x41'].\nphew. 
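A quick way to see it for yourself, using the same split: missing = set(xvalid['x41']) - set(xtrain['x41']) and then len(missing) -- any non-empty result is enough to make issubset() return False, and with roughly 17000 unique dollar values split between the two sets it will practically never be empty.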
life makes sense again.","Q_Score":0,"Tags":"python,machine-learning,set","A_Id":71589539,"CreationDate":"2022-03-23T01:28:00.000","Title":"why is issubset knowingly reducing one of my columns","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am working with the YOLOv3 model for an object detection task. I am using pre-trained weights that were generated for the COCO dataset, however, I have my own data for the problem I am working on. According to my knowledge, using those trained weights as a starting point for my own model should not have any effect on the performance of the model once it is trained on an entirely different dataset (right?).\nMy question is: will the model give \"honest\" results if I train it multiple times and test it on the same test set each time, or would it have better performance since it has already been exposed to those test images during an earlier experiment? I've heard people say things like \"the model has already seen that data\", does that apply in my case?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":38,"Q_Id":71607963,"Users Score":0,"Answer":"For hyper-parameter selection, evaluation of different models, or evaluating during training, you should always use a validation set.\nYou are not allowed to use the test set until the end!\nThe whole purpose of test data is to get an estimation of the performance after the deployment. When you use it during training to train your model or evaluate your model, you expose that data. For example, based on the accuracy of the test set, you decide to increase the number of layers.\nNow, your performance will be increased on the test set. However, it will come with a price!\nYour estimation on the test set becomes biased, and you no longer be able to use that estimation to talk about data that your model sees after deployment.\nFor example, You want to train an object detector for self-driving cars, and you exposed the test set during training. Therefore, you can not use the accuracy on the test set to talk about the performance of the object detector when you put it on a car and sell it to a customer.\nThere is an old sentence related to this matter:\n\nIf you torture the data enough, it will confess.","Q_Score":0,"Tags":"python,performance,pytorch,artificial-intelligence,yolo","A_Id":71609016,"CreationDate":"2022-03-24T18:51:00.000","Title":"Is the performance of a deep learning model affected if it has \"seen\" the same test images before?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to load an already trained NER model, which was loading normally until today, but I'm getting the following error, either importing the trained model or importing pt_core_news_lg:\nnlp4 = spacy.load('\/content\/gdrive\/My Drive\/spacy_NER4')\nValueError: Cannot create vectors table with dimension 0. If you're using pre-trained vectors, are the vectors loaded?\nI'm on Google Colab, following the installations:\n!pip install spacy==2.3.4\n!python -m spacy download pt_core_news_lg\nWhen I import my model, it generates this error. 
Does anyone have a tip or solution to this problem?\nIf I install spacy-nightly it throws another error:\nOSError: [E053] Could not read config.cfg from \/content\/gdrive\/My Drive\/space_NER4\/config.cfg\nHowever, when loading pt_core_news_lg, it loads normally","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":44,"Q_Id":71619453,"Users Score":0,"Answer":"I solved this error by changing google account. I simply imported all my templates into the other account and it worked. However, the reason for the error of not loading in the account, I did not find","Q_Score":0,"Tags":"python,spacy","A_Id":71700772,"CreationDate":"2022-03-25T15:28:00.000","Title":"Error loading already trained ner spacy model","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using ARIMA to forecast the time series of some medical data. I was wondering if I can take the ARIMA model I fit to my data and get some numbers that describe just the trend and seasonality separately. This would be useful for me because it would allow me to see what my model's trend rate is without seasonality affecting the results. Please let me know if you have any questions. Thanks.\nI was unable to find anything from a google search and have idea where to start. I looked into seasonal decompose but that seems to get trend and seasonality of my actual data, not the model fit to the data.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":44,"Q_Id":71622905,"Users Score":0,"Answer":"I think that if you want to know the trend and the seasonality of your model you should first make prediction on a large range of date using .forecast(bignumber). Then on this prediction you could do decomposition using statsmodels.tsa.seasonal.seasonal_decompose. Like that you will have a clear idea of the trend and the seasonality learned by your ARIMA model. After, if you want to estimate the expression of your trend you can train a linear or polynomial model on the trend decomposed.","Q_Score":0,"Tags":"python,time-series,statsmodels,forecasting,arima","A_Id":71636637,"CreationDate":"2022-03-25T20:44:00.000","Title":"Can I break out my ARIMA model into trend and seasonality specific components?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using faiss indexflatIP to store vectors related to some words. I also use another list to store words (the vector of the nth element in the list is nth vector in faiss index). 
I have two questions:\n\nIs there a better way to relate words to their vectors?\nCan I update the nth element in the faiss?","AnswerCount":3,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":300,"Q_Id":71627943,"Users Score":1,"Answer":"You can do both.\n\n\nIs there a better way to relate words to their vectors?\n\n\nCall index.add_with_ids(vectors, ids)\nSome index types support the method add_with_ids, but flat indexes don't.\nIf you call the method on a flat index, you will receive the error add_with_ids not implemented for this type of index\nIf you want to use IDs with a flat index, you must use index2 = faiss.IndexIDMap(index)\n\n\nCan I update the nth element in the faiss?\n\n\nIf you want to update some encodings, first remove them, then add them again with add_with_ids\nIf you don't remove the original IDs first, you will have duplicates and search results will be messed up.\nTo remove an array of IDs, call index.remove_ids(ids_to_replace)\nNota bene: IDs must be of np.int64 type.","Q_Score":1,"Tags":"python,word-embedding,faiss","A_Id":71927179,"CreationDate":"2022-03-26T12:10:00.000","Title":"Update an element in faiss index","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"pd.set_option(\"precision\", 2)\npd.options.display.float_format = '{:.2f}'.format\nIm not able to figure out what these code line do","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":17,"Q_Id":71628260,"Users Score":1,"Answer":"These code lines fix float numbers precision to two decimal places for pandas output. I belive it's done because your data is banking data, which contains a lot of different money amounts, which should be displayed with 2 decimal places (because there are 100 cents in a dollar)","Q_Score":1,"Tags":"python,pandas,data-science,data-analysis","A_Id":71628396,"CreationDate":"2022-03-26T13:00:00.000","Title":"can anyone explain me these pandas code, Im looking at a EDA project on a Banking Data","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm working on a graph theory problem with a given disconnected unweighted, undirected graph (given an edge list). The following operation can be made on the graph:\nA set of edges represented by two vertex pairs (i, j) and (x, y) can be switched into (i, y) and (x, j)\nThe following are the tasks that need to be accomplished:\n\nDetermine if it is possible to connect the graph.\nIf yes, find the minimum number of operations to connect the graph.\nOutput the edges used per operation in the following order: i j x y\n\nAn additional constraint is that a vertex cannot be connected to itself. Vertices can be up to n = 10^5\nI have already implemented the switch function defined above, however, my current solution is very inefficient and may not be applicable to all possible inputs. It basically checks for any four vertices with multiple connections and applies the switch operation, then runs a DFS (depth-first search) algorithm to check if the graph is connected or not, so it runs a DFS every time it makes an operation. Additionally, this doesn't deal with the minimum operations, it just does operations until the graph becomes connected. 
Are there any implementation tips or algorithms that can help in solving the problem, preferably in Python?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":179,"Q_Id":71635245,"Users Score":1,"Answer":"Say your graph G has n vertices, e edges, and k > 1 components.\nA solution is possible iff there are no isolated vertices and e >= n-1. It will take k-1 switches.\nIf e < n-1 then there aren't enough edges for a connected graph to exist. If there's an isolated vertex, switch operations can't affect it so it will remain isolated.\nOtherwise, repeatedly perform the following switch: One edge should belong to a component where its removal won't disconnect the component. This is guaranteed to exist if there are multiple components and e >= n-1. The other edge can belong to any other component (and it doesn't matter if its removal would disconnect the component). Performing the switch operation will merge these components, reducing the total number of components by one.\nWe perform k-1 switches, reducing the number of components from k to 1, at which point the graph is connected.\n--- O(V+E) approach ---\n\nConfirm there are no isolated vertices and that E >= V-1.\nUse BFS to split the input graph G into its components. While doing this, keep track of every edge that completes a cycle (connects to an already visited vertex). Call these 'swappable edges'\nRepeatedly perform swap operations where one edge is swappable, and the other is an arbitrary edge of some other component.\nNote that if the other edge is also swappable, then both new edges after the swap are on a cycle. Choose one of them (arbitrarily) to add to the list of swappable edges.\nEach swap reduces the number of swappable edges by one and the number of components by one. Keep going until you're down to one component.","Q_Score":1,"Tags":"python,algorithm,graph-theory,shortest-path","A_Id":71637163,"CreationDate":"2022-03-27T09:44:00.000","Title":"Given a disconnected graph, find the minimum operations to rearrange the vertices such that the graph becomes connected","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm currently trying to use a number of medical codes to find out if a person has a certain disease and would require help as I tried searching for a couple of days but couldn't find any. Hoping someone can help me with this. Considering I've imported excel file 1 into df1 and excel file 2 into df2, how do I use excel file 2 to identify what disease does the patients in excel file 1 have and indicate them with a header? Below is an example of what the data looks like. 
I'm currently using pandas Jupyter notebook for this.\nExcel file 1:\n\n\n\n\nPatient\nPrimary Diagnosis\nSecondary Diagnosis\nSecondary Diagnosis 2\nSecondary Diagnosis 3\n\n\n\n\n\nAlex\n50322\n50111\n\n\n\n\n\nJohn\n50331\n60874\n50226\n74444\n\n\n\nPeter\n50226\n74444\n\n\n\n\n\nPeter\n50233\n88888\n\n\n\n\n\n\n\nExcel File 2:\n\n\n\n\nPrimary Diagnosis\nMedical Code\n\n\n\n\nDiabetes Type 2\n50322\n\n\nDiabetes Type 2\n50331\n\n\nDiabetes Type 2\n50233\n\n\nCardiovescular Disease\n50226\n\n\nHypertension\n50111\n\n\nAIDS\n60874\n\n\nHIV\n74444\n\n\nHIV\n88888\n\n\n\n\nIntended output:\n\n\n\n\nPatient\nPositive for Diabetes Type 2\nPositive for Cardiovascular Disease\nPositive for Hypertension\nPositive for AIDS\nPositive for HIV\n\n\n\n\nAlex\n1\n1\n0\n0\n0\n\n\nJohn\n1\n1\n0\n1\n1\n\n\nPeter\n1\n1\n0\n0\n1","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":60,"Q_Id":71638833,"Users Score":0,"Answer":"Maybe you could convert your excel file 2 to some form of key value pair and then replace the primary diagnostics column in file 1 with the corresponding disease name, later apply some form of encoding like one-hot or something similar to file 1. Not sure if this approach would definitely help, but just sharing my thoughts.","Q_Score":0,"Tags":"python,excel,pandas,jupyter-notebook,jupyter","A_Id":71638966,"CreationDate":"2022-03-27T17:49:00.000","Title":"How do i use medical codes to determine what disease a person have using jupyter?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have two NumPy arrays saved in .npy file extension. One contains x_train data and other contains y_train data.\nThe x_train.npy file is 5.7GB of size. I can't feed it to the training by loading the whole array to the memory.\nEvery time I try to load it to RAM and train the model, Colab crashes before starting the training.\nIs there a way to feed large Numpy files to tf.fit()\nfiles I have:\n\n\"x_train.npy\" 5.7GB\n\"y_train.npy\"","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":25,"Q_Id":71646169,"Users Score":0,"Answer":"Depending on how much RAM your device has, it may not be possible from a hardware point of view.","Q_Score":0,"Tags":"python,numpy,tensorflow,dataloader,tf.dataset","A_Id":71646207,"CreationDate":"2022-03-28T10:43:00.000","Title":"How to feed large NumPy arrays to tf.fit()","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have an image with which consists of 8 bits for each pixel, how I can create new picture that consists of 5 bits for each pixel using python and OpenCV?\nI know that in an RGB image, each pixel is represented by three 8 bit numbers associated to the values for Red, Green, Blue respectively, but I can't figure it out how I can create an image from 8 bits for each pixel to a new image with 5 bits of each pixel.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":91,"Q_Id":71679794,"Users Score":0,"Answer":"You can rescale the value of the pixels i.e. 
multiply every pixel value with 32\/256 (or just divide by 8) if you want to generate a mapping value on 5 bit scale for your image.","Q_Score":0,"Tags":"python,opencv,image-processing","A_Id":71679895,"CreationDate":"2022-03-30T15:10:00.000","Title":"Create an image with 5 bits for each pixel","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"for the equation Ax = b, let A = USV.t, i need to calculate inverse of (S.T@S). I noticecd that using np.linalg.inv() and np.linalg.pinv() gives extremely different results. np.allclose() infact returns false.\nI want to know why this is happening, any mathematical insight? maybe due to some property of A? here A is a non-linear function of a dynamic time series.\nBasically when can you expect pinv() and inv() to give very different results?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":87,"Q_Id":71719948,"Users Score":0,"Answer":"kind of figured it out. np.linalg.pinv, works by SVD decomposition and if A = USVt\nthen pinv(A) = V S^-1 Ut, and the shape of U and V are changed such that S^-1 is either mxm or nxn matrix. also, there is a cutoff for singular values, where less than cutoff are treated as zero. so if there are many small singular values, many rows\/columns of V\/U will be ignored and as such inv() and pinv() will give significantly different results.","Q_Score":0,"Tags":"python,numpy,linear-algebra,svd","A_Id":71764360,"CreationDate":"2022-04-02T18:01:00.000","Title":"When will numpy.linalg.inv() and numpy.linalg.pinv() give very different values?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am working on building a hashtag recommendation system. I am looking into what are the best ways I can evaluate the system.\nThe problem statement is: For a given hashtag I need to recommend most relevant (3 or 5) hashtags to the user.\nThe dataset contains post id in each row. And each row contains the hashtag contained in the post.\n\n\n\n\n\npost_id\nhashtags\n\n\n\n\n1\n100001\n#art #artgif #fanart #digitalArt\n\n\n\n\nThese are the steps I have followed.\n\nPreprocessed the hashtag data.\nTrained a fastText model on the entire hashtag corpus.\nGenerate word embeddings of all the hastags.\nUse K Nearest Neighbor to recommend hashtags.\n\nI am trying to evaluate the model using MAP@K.\nSo for each unique hashtag I check what are the top 3 or top 5 recommendations from the model and then compare with what are the actual hashtags that occurred with those hashtags.\nI am using MAP@K to evaluate the recommendations and treating the recommendation like a ranking task. Since a user has a finite amount of time and attention, so we want to know not just three tags they might like, but also which are most liked or which we are most confident of. For this kind of task we want a metric that rewards us for getting lots of \u201ccorrect\u201d or relevant recommendations, and rewards us for having them earlier on in the list (higher ranked). 
Hence MAP@K (K=3 or 5) [Not finalised the value of K].\nBelow table shows how I am evaluating my recommendation for each hashtag.\n\n\n\n\n\npost_id\nquery_hashtag\nhashtags\nrecommended_hashtags\n\n\n\n\n1\n100001\n#art\n#art #artgif #fanart #digitalArt\n#amazingArt #artistic #artgif\n\n\n1\n100001\n#artgif\n#art #artgif #fanart #digitalArt\n#fanArt #artistic #artgif\n\n\n1\n100001\n#fanart\n#art #artgif #fanart #digitalArt\n#art #wallart #fans\n\n\n1\n100001\n#digitalArt\n#art #artgif #fanart #digitalArt\n#crypto #nft #artgif\n\n\n\n\n\nI am basically looking for answers to 4 questions.\n\nAm I moving in the right direction to evaluate the hashtag recommendations?\nShould Calculate the MAP@K on the entire dataset (which I cam currently doing) or split the dataset into training and testing set and calculate the metric. In case I decide to split the dataset. Should I also restrict the hashtags to be seen by the model from the\ntesting data? I am unable to figure this out.\nWhat value of MAP@K is good enough for 5 recommendations, I am getting approximately 0.12 for MAP@5\nAny other evaluation metric that can help me to understand the quality of recommendations","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":46,"Q_Id":71725340,"Users Score":0,"Answer":"Answers:\n\nperhaps, read-on\n\"cross-validation\" tests like MAP@k require that the data is split into \"test\" and \"training\" data. save 20% of the data for the \"test\" part then train the model on the rest. For the \"test\" set get a hashtag and make the query of the model. For every time the query returns a tag associated with the \"test\" datum you have a positive result. This allows you to calculate MAP@k. You can perform subsequent splits to use all data and combine the results but this is usually not necessary.\nthere is no fixed \"good\" for MAP@k. Find MAP@k for a random dataset as well as using your dataset to create \"popular\" hashtags. Using random and popular tags will give you 2 more MAP@k results. These should be significantly lower that the recommender MAP@k. Also the MAP@k for recs can be used as a baseline for future improvements, like changes to word embeddings. Better than the baseline means you have have a better recommender.\nresults with humans are the best metric since a recommender is trying to guess what humans are interested in. This requires an A\/B test for 2 variants, like random and recs -- or no recs and recs. Set your test up with where the app has no recs or random recs. This will be the \"A\" part and the \"B\" will be using your recs. If you get significantly more clicks using \"B\" you have clearly improved results for you app -- this assumes your app considers more clicks to the the thing to optimize. 
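On judging whether the extra clicks are significant: one simple option is a two-proportion z-test on clicks versus impressions for the two variants -- a sketch, assuming you have those four counts available:\nfrom statsmodels.stats.proportion import proportions_ztest\nstat, pvalue = proportions_ztest(count=[clicks_a, clicks_b], nobs=[views_a, views_b])\nA small p-value suggests the lift from B is real rather than noise.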
If you want to optimize time-on-site, then replace your metric for the A\/B test.","Q_Score":0,"Tags":"python,evaluation,recommendation-engine,mahout-recommender,recommender-systems","A_Id":71730386,"CreationDate":"2022-04-03T11:12:00.000","Title":"How to evaluate hashtag recommendation system using MAP@K and MAR@K?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a series of data points which form a curve I do not have an equation for, and for which i have not been able to satisfyingly calculate an equation with either libreoffice or the online curve fitting tools in the first 2 pages of google results.\nI would like the equation for the curve and ideally a python implementation of calculating y values for a given x value along that curve in case there are unexpected hoops to jump through. Failing that I would like any more elegant python solution than a list of elif statements incrementing y if x is high enough for it to increase by a whole number, which is the ugly solution of last resort - my immediate plans do not require decimal precision.\nThe curve crosses the zero line at 10, and every whole number incrementation of y requires x to be incremented by one more whole number than the previous, so y1 is reached at x11, y2 at x13, y3 at x16 etc, with the curve bending in the other direction in the negatives such that y-1 is at x9, y-2 is at x7 etc. I suspect i am missing something obvious as far as finding the curve equation when i already have this knowledge.\nIn addition to trying to use libreoffice calc and several online curve-fitting websites to no avail, i have tried slicing the s-curve (I have given up on searching the term sigmoid function as all my results are either related to neural nets or expect my y values to never exceed +-1) into two logarythmic curves, which almost works - 5 *(np.log(x) - 11) gets something frustratingly close to the top half of the curve, but which i ultimately haven't been able to use - in addition to crossing the number line at 9 it produced some odd behaviour when I returned round() rounded y values directly, displaying results in the negative 40s when returned directly, but seeming to work fine when those numbers are fed into other calculations.\nIf somebody can give me two working logarythms that round to the right numbers for x values between 0 and 50 that is good enough for this project.\nThank you for your time and patience.\n-EDIT-\nthese are triangular numbers apparently, x-10 is equal to the number of dots in a triangle with y dots on each side, what I need is the inverse of the triangular number formula. Thank you to everyone who commented.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":85,"Q_Id":71732601,"Users Score":0,"Answer":"What you're looking for are a class of functions called \"Sigmoid functions\". They have a characteristic S-shape. Go to Wolfram and play around with some common Sigmoid funcs, remembering that the \"a\" in a function, f(x-a), shifts the entire curve left or right, and appending a value \"b\" to the function, f(x-a) + b will shift the curve up and down. Using a coefficient of \"c\", f(c*x - a) + b here acts as a scalar. 
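If you then want Python to estimate the constants for you, scipy can fit such a curve directly (a sketch, assuming your data points are in arrays xs and ys and that a shifted\/scaled logistic is close enough for your purposes):\nimport numpy as np\nfrom scipy.optimize import curve_fit\ndef f(x, a, b, c):\n    return c \/ (1 + np.exp(-(x - a))) + b  # logistic shifted by a, lifted by b, scaled by c\npopt, pcov = curve_fit(f, xs, ys)\nAfter the fit, f(any_x, *popt) evaluates the fitted curve at any x; if the default fit does not converge, pass a starting guess with the p0 argument.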
That should get you where you want to be in short time.\nExample: (1\/(1 + C*exp(-(x + A)))) + B","Q_Score":0,"Tags":"python,curve-fitting,curve","A_Id":71744703,"CreationDate":"2022-04-04T05:54:00.000","Title":"Creating an s-curve based on data points","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've tried to perform static quantization with a trained EfficientNetB0. Once I apply the torch.quantization.convert to transform the model from float32 to int8, an error occurs, specifically: NotImplementedError: Could not run 'aten::silu.out' with arguments from the 'QuantizedCPU' backend. I wonder if anyone has run into the same error and been able to resolve it. I have also tried with mobilenet and I get an error of the style (not the same).\nThank you very much in advance.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":63,"Q_Id":71733167,"Users Score":0,"Answer":"Silu, Leaky ReLu is not easy to be implemented in 8bit quantization and a lot of frameworks are not able to implement it without model degradation (256 numbers represent for the whole output range).\nHowever, there is a trick to quantize this layer in 8-bit\n1\/ You can try to inference the training dataset and measure the output activation range of this layer to replicate the boundary from this function. This is usually called PostQuantize\n2\/ Reduce the complexity of the activation function. SiLu x*sigmoid(x) => HardSwish x*relu6(x + 3)*0.166666667. This is the idea of ReLu => ReLu6 where the output is bounded from 0-6.","Q_Score":0,"Tags":"python","A_Id":71733766,"CreationDate":"2022-04-04T07:00:00.000","Title":"Static quantization on efficientNet","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have run the same code(with packages I needed) before and it worked, not sure what's happening now. This show the error,\nAttributeError: module 'PIL.Image' has no attribute 'Resampling'. Probably it's small issue, but I can't figure it out, I am working in databricks.","AnswerCount":4,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":6700,"Q_Id":71738218,"Users Score":0,"Answer":"Same happened when I upgraded some module. I just restarted runtime and it helped.","Q_Score":5,"Tags":"python,python-imaging-library,resampling","A_Id":71937406,"CreationDate":"2022-04-04T13:36:00.000","Title":"Module PIL has not attribute \"Resampling\"","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"While working with the matplotlib, when I got to the colormap, I realized that the default colormap is jet colormap, but I heard that this choice is not very good. 
Can someone tell me the reason or is there a reason that this was chosen by default?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":44,"Q_Id":71742116,"Users Score":1,"Answer":"the most commonly used \"jet\" colormap \u00a0(also known as \"rainbow\") is a poor choice for a variety of reasons.\n\nWhen printed in black and white, it doesn't work.\nFor colorblind persons, it doesn't work properly.\nNot linear in color space, so it\u2019s hard to estimate numerical values from the resulting image.\n\nAnd 'jet' is no more the default colormap in matplotlib. 'Viridis' is the new default colormap from Matplotlib 2.0 onwards.","Q_Score":2,"Tags":"python,matplotlib","A_Id":71742329,"CreationDate":"2022-04-04T18:31:00.000","Title":"Default colormap in Matplotlib","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"i m searching for something (library,function ecc..) in Python to generate a random discrete trajectory in the 2-D space.\nFor example: by providing the dimensions lenght and width of the plane and a starting point (x,y) i need to generate a sequence of points that represent the movement of an object (E.G. a human walking) over a random path.\nAre you aware of any such library or tool that helps accomplishing this?\nI have tried searching for something like that, but without success, I was searching for a shortcut\/an easy to implement method","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":45,"Q_Id":71751160,"Users Score":1,"Answer":"I don\u2019t know any tools that can make that, however you can easily make a function that generate randoms points on a plane (representing a path). If you don\u2019t want points to be too far away from the previous one, you generate a random point in a specific area around the point.","Q_Score":1,"Tags":"python","A_Id":71751424,"CreationDate":"2022-04-05T11:42:00.000","Title":"Movement\/Trajectory generation in Python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need to find the point that will minimise the sum of Manhattan distances from a list of points.\nso given a list lets say [[x0, y0], [x1, y1] ...] (not sorted) I have to find the point to minimise the manhattan distance from that list of points. I understand the question but am having trouble with how i can complete this in O(n) time.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":74,"Q_Id":71764035,"Users Score":1,"Answer":"You can find the median of a list of numbers in linear time.\nTaking the x-coordinates and y-coordinates separately, find the median of each. If you need integer coordinates, round half-values to the nearest integer.\nThe distance minimizing point (DMP) is (median of x-values, median of y-values). There may be multiple DMPs, but this will be one of them.\nWhy? 
Well, along either axis, if there are more points in one direction than the other, say p to the left and q to the right, p < q, then moving 1 to the right will increase the distance to p points by 1 and reduce the distance to q points by 1, so reduce the sum of Manhattan distances to points by q-p.","Q_Score":0,"Tags":"python,algorithm","A_Id":71766537,"CreationDate":"2022-04-06T09:12:00.000","Title":"How to find coordinate to minimise Manhattan distance in linear time?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have multiple CT datasets in Dicom format all with varying number of slices or 2D CT images.\nExample:\nDataset 1 Shape: (512 x 512) x 100\nDataset 2 Shape: (512 x 512) x 130\nDataset 3 Shape: (512 x 512) x 122\nHow can I resize the data such that the depth (number of slices) is the same for al datasets?\nThe idea being this data will be passed into a 2D CNN with input shape: [slices, 512, 512, channels 1]\nThanks for the help","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":36,"Q_Id":71770891,"Users Score":0,"Answer":"IMHO the short answer is you can't, but also you shouldn't even try.\n\nClinical data is like that. Even for a scan of the same anatomical region (say pelvis), each scan (depending on clinical protocol, organization's protocols, slice thickness, technician decisions, patient symptoms, ..., ...) will have a varying number of slices.\n\nIf you try to train an algorithm based on a fixed number of slices you are guaranteed to develop an algorithm that may work for your training\/test data, but will absolutely fail in real clinical use.\n\nI would suggest you google why AI algorithms fail in clinical use so often to get an understanding of how AI algorithms developed without a) broad clinical understanding, b) technical understanding of the data, c) extensive and broad training data and d) understanding of clinical workflows will almost always fail\n\nYou could, in theory, try to normalize the data's dimensions based on anatomy your looking at, but then you need to be able to correctly identify the anatomy you're looking at, which itself is a big problem. 
...and even then, every patient has different dimensions and anatomical shape.\n\nYou need to train with real data, the way it is, and with huge training sets that will cover all technical, clinical and acquisition variability to ensure you don't end up with something that only works 'in the lab', but will fail completely when it hits the real world.","Q_Score":0,"Tags":"python,conv-neural-network,image-preprocessing,pydicom,medical-imaging","A_Id":71799249,"CreationDate":"2022-04-06T17:02:00.000","Title":"Resize the Depth of CT data in Python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a point cloud and meshes (vertices=points of the point cloud).\nI want to project the point cloud with a certain virtual camera.\nHere, since the point cloud is sparse, the rendered result includes the points which should be occluded by foreground objects.\nTo resolve this issue, I want to use mesh information to identify which points should be occluded.\nIs there any smart way to do this in python?\nKind advice will be greatly appreciated.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":193,"Q_Id":71778597,"Users Score":1,"Answer":"After hours of searching, I conclude that I have to re-implement a novel rendering pipeline to achieve my goal.\nSo, instead of this, I use a mesh-based renderer to render a depth map.\nAnd then I simply project the points of the point cloud with a projection matrix.\nHere, I use the depth map to check whether the point fits with the depth or not.\nIf the projected point is the one that should be occluded, then the depth of the point would be larger than the depth map value at the corresponding pixel.\nSo, such points should be ignored while rendering.\nI know that this is a less elegant and inefficient trick but anyway it works very well :)","Q_Score":1,"Tags":"python,render,mesh,point-clouds","A_Id":71795823,"CreationDate":"2022-04-07T08:16:00.000","Title":"Projection of point cloud on 2D image based on mesh information","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to create a script in Python to be used with Notepad++'s PythonScript plugin. So far I've gotten Pandas working correctly in the environment and I've read in a CSV and sorted it the way I desired, the end goal is to remove all of the text in Notepad (which I know how to work with at this point) and write in the text from my Pandas sorted CSV.\nThe issue is that when I write the text from that CSV to the console to check it, Pandas has reformated my CSV to make it easier to look at, it removes all of the quotes from the fields and adjusts the tab sizes (my files are tab delimited, with some tabs having different length). 
I need my CSV to be the exactly the same just sorted differently, If anyone can help it would be greatly appreciated.\nSome statements I'm using:\n(csv is a String containing all of the text in my CSV file)\npanda_csv = pd.read_csv(csv, sep=\"\\t\")\nsorted = panda_csv.sort_values(by=[\"Name\"], ascending=True, inplace=False)\nconsole.write(sorted.to_string())","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":41,"Q_Id":71783681,"Users Score":0,"Answer":"Since your original file seems to be tab-delimited, you can use the following to write output with a tab separator.\nsorted.to_csv('output.csv', sep ='\\t')","Q_Score":0,"Tags":"python,pandas,csv","A_Id":71783747,"CreationDate":"2022-04-07T14:04:00.000","Title":"How to read a CSV with Pandas but raw?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am looking to develop a machine learning model that would predict staff performance (e.g. staff ID 12345 will sell 15 insurance products next month.) I don't want to input staff ID into the training dataset because it will skew results. However I do need to be able to associate each staff with their predicted performance once the model is functional.\nIs the only way to go about this to develop the model excluding staff detail, then for prediction passing in a dataframe w\/o staff ID, then associate the model output with staff detail by index \/ instance order?\nIt just seems like a round-about way for doing this.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":29,"Q_Id":71795286,"Users Score":1,"Answer":"I think so. That is the only way I can think of too. Because you need to know you should not include the staff ID as the training data in your training model.\nSince you have used the Pandas module, you can easily search for which staff you want by using the DataFrame. Don't worry. I think it is a quite straightforward and fast way to map your predictions back to the staff IDs.\nSorry for not providing a new and better way. But I don't think you need to worry too much about the existing solutions, because I can't think of any bad effects like runtime. 
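If it helps to see the shape of that, here is a rough pandas sketch (the column names are just placeholders and model stands for whatever estimator you trained):\nX = df.drop(columns=['staff_id'])\ndf['predicted_units'] = model.predict(X)\nBecause the rows are never reordered, each prediction stays aligned with its staff_id, so something like df[df['staff_id'] == 12345] pulls out one person's forecast.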
Hope it is helpful for you.","Q_Score":0,"Tags":"python,pandas","A_Id":71799994,"CreationDate":"2022-04-08T10:15:00.000","Title":"Advice re: retaining client ID when training machine learning model","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is it possible to train a neural network using a subset of class labels?\nFor eg, I have a set of cifar10 images and I intend to train on [0-3,4-6,7-9] class labels, will it affect testing accuracy?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":21,"Q_Id":71814590,"Users Score":0,"Answer":"obviously --if you measure accuracy over the full set of labels-- as your network will never be able to predict unseen classes reliably","Q_Score":0,"Tags":"python,deep-learning,pytorch","A_Id":71817082,"CreationDate":"2022-04-10T07:30:00.000","Title":"Pytorch - training on a subset of class labels, does it affect testing accuracy?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I don't know if this question has been covered earlier, but here it goes - I have a notebook that I can run manually using the 'Run' button in the notebook or as a job.\nThe runtime for running the notebook directly is roughly 2 hours. But when I execute it as a job, the runtime is huge (around 8 hours). The piece of code which takes the longest time is calling an applyInPandas function, which in turn calls a pandas_udf. The pandas_udf trains an auto_arima model.\nCan anyone help me figure out what might be happening? I am clueless.\nThanks!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":54,"Q_Id":71824867,"Users Score":0,"Answer":"When running a notebook as a Job, you have to define a \"job cluster\" (in the contrast with an \"interactive cluster\" where you can attach to the notebook and hit run). There is a possible delay when the \"job cluster\" has to be spun up, but this usually only takes less than 10 minutes. Other than that, makes sure your job cluster's spec is the same as your interactive cluster (i.e. same worker's type, worker's size, autoscaling, etc).","Q_Score":0,"Tags":"python,pyspark,databricks,pmdarima,pandas-udf","A_Id":71848105,"CreationDate":"2022-04-11T08:35:00.000","Title":"Databricks notebook runs faster when triggered manually compared to when run as a job","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"as the title says really. I've managed to implement a very simple LSTM model that takes an input sequence and outputs a given float value.\nFirstly, I'd like to know if it's possible to get answers taken from a set of possible answers. E.g. if I know the answer should be in [1,2,3] to output the answer as being 1.\nSecondly, if possible I'd like the output to be a probability distribution on the possible answers, e.g. [0.5,0.3,0.2].\nI've implemented my simple LSTM model in Python using the various keras packages. 
Any pointers to the right direction to learn about how to implement this would be great!","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":26,"Q_Id":71829227,"Users Score":2,"Answer":"LSTM is basically one type of recurrent neural network which provide many to many functionality. For that you need to add final dense layer with same number of input class in softmax layer, so you will get exact probability for each input class.","Q_Score":0,"Tags":"python,machine-learning,keras,lstm","A_Id":71839746,"CreationDate":"2022-04-11T14:03:00.000","Title":"Is it possible for LSTM to output a list of probabilities from a given list of possible outputs?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Assume we have a list of numbers (samples)\n\ndata = [0,0,1,2,3]\n\nI would like to fit a probability mass function for this dataset, in such a way that if I do something like\n\npmf.fit(data)\n\nand by executing something like\n\npmf.eval(0)\n\nI get\n\n0.2\n\nas return\nand\nby executing\n\npmf.eval(-1)\n\nI get\n\n0\n\nas return.\nNote that I am working with a discrete random variable here, so I am not fitting a pdf...","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":20,"Q_Id":71832022,"Users Score":0,"Answer":"I finally figured out myself\n\nrandom_array = [0,0,1,2,3]\n\n\nunique, counts = np.unique(random_array, return_counts=True)\n\n\nrandom_variable = sp.stats.rv_discrete(a = 0, b = np.inf, values = (unique, counts\/np.sum(counts)))","Q_Score":0,"Tags":"python,statistics,probability,data-fitting","A_Id":71832404,"CreationDate":"2022-04-11T17:34:00.000","Title":"Is there any easy way to fit probability mass function to a given dataset?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to write a model and have two input tensors of shape = (None, 8, 384) and I need to select them based on index in their second position and combine them to get eight tensors of size (None, 2, 384).\nFor example, suppose T1 has a size of (None, 8, 384), which corresponds to the first variable with 8 cities and 384 days. T2 has a size of (None, 8, 384), which corresponds to the second variable with 8 cities and 384 days.\nI want to select the first city (None, 1, 348) from both T1 and T2 and combine them to make a new tensor of size (None, 2, 384).","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":25,"Q_Id":71838780,"Users Score":0,"Answer":"column_indices = tf.concat([tf.gather(T1, [0], axis=1),tf.gather(T2, [0], axis=1)], axis=1)","Q_Score":1,"Tags":"python,tensorflow,tensor","A_Id":71839095,"CreationDate":"2022-04-12T07:35:00.000","Title":"How to get specific index (Column) in Tensors and merge them using TensorFlow","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I used 3 different algorithms (Linear Regression, Logistics Regression, Decision Tree) to solve the same prediction problem and I have to compare their error measures. 
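[Editor's note] Picking up the LSTM answer above (outputting a probability distribution over a fixed set of answers): a minimal Keras sketch of the suggested final Dense/softmax layer. The sequence length, feature size and the three-class output are made-up values for illustration only.

import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

num_classes = 3             # e.g. the possible answers [1, 2, 3]
seq_len, n_features = 20, 8

model = keras.Sequential([
    keras.Input(shape=(seq_len, n_features)),
    layers.LSTM(32),
    layers.Dense(num_classes, activation="softmax"),  # one probability per class
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

x = np.random.rand(1, seq_len, n_features).astype("float32")
probs = model.predict(x)          # e.g. [[0.5, 0.3, 0.2]]
predicted_class = probs.argmax()  # index of the most likely answer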
The problem at first was that the MAE, MSE, and RMSE values kept changing with each run, it was really problematic for me. The suggested solution was to use random_state.\nThe \"random_state\" argument works for Logistic Regression and Decision Tree but Linear Regression doesn't take this argument. In that case, how do I keep the error measure values from changing? Is there any alternative to \"random_state\" for Linear Regression?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":28,"Q_Id":71845879,"Users Score":0,"Answer":"The answer is simple : you don't need it since there is no local optima to stuck in with different random seeds\nbecause generally in logistic regression problems; there is a global optimum.","Q_Score":0,"Tags":"python,python-3.x,machine-learning,data-science,linear-regression","A_Id":71847954,"CreationDate":"2022-04-12T16:04:00.000","Title":"Alternative to \"random_state\" for Linear Regression?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"As part of a university research project, I scraped job posts for 4 professions in Germany. Because I could not get enough job posts in only 1 language in the time frame I have, I decided to scrape for both English and German posts.\nI already went through the whole NLP workflow with both the English and the German text (tokenize, lemmatize, POS, stopwords,...) using different tools due to the language being different.\nNow I would need to extract the most common skills required for each profession and differences between them.\nI realize that this is a problem I should have predicted, but now I have two corpuses in two different languages which have to be analyzed together.\nWhat do you suggest is the best way to reach a scientifically sound end result with input data in two languages?\nSo far, no good solution came to my mind:\n\ntranslate the German input to English and treat with the rest\ntranslate the German input after processing word by word\nmanually map English and German words","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":28,"Q_Id":71859184,"Users Score":0,"Answer":"I work at a company that analyses news agency data in various languages. All our analytics process English texts only. Foreign language input is machine translated \u2014 this gives good results.\nI would suggest that for job adverts this should also work, as it is a very restricted domain. You're not looking at literature or peotry where it would cause a real problem.","Q_Score":0,"Tags":"python,web-scraping,nlp,nltk,multilingual","A_Id":71860218,"CreationDate":"2022-04-13T14:26:00.000","Title":"Best practice for dealing with NLP input in multiple languages for combined text analysis?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Where is the linear predictors (eta) located in the statsmodels.GLM class?\nIf a fitted model mdl = sm.GLM(Y, X, family = family()).fit() is equal to R's mdl <- glm.fit(X, Y, family = family()), then R's eta can be found mdl$linear.predictors. 
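[Editor's note] A side note on the "Alternative to random_state for Linear Regression" question above: LinearRegression itself is deterministic, so if the error measures still change between runs the randomness usually comes from the train/test split. A small sketch, assuming scikit-learn's train_test_split is being used (the original post does not show the split code):

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error, mean_squared_error

np.random.seed(0)                      # only to make the toy data itself repeatable
X = np.random.rand(200, 3)
y = X @ np.array([1.5, -2.0, 0.7]) + np.random.rand(200)

# fixing random_state here makes every run use the same split,
# so the MAE/MSE/RMSE of all three models become comparable across runs
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

model = LinearRegression().fit(X_tr, y_tr)
pred = model.predict(X_te)
print(mean_absolute_error(y_te, pred), mean_squared_error(y_te, pred) ** 0.5)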
But i can't seem to find eta in statsmodels.\nRight now i calculate them by X @ mdl.params, which seems a bit tedious","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":25,"Q_Id":71861561,"Users Score":2,"Answer":"eta is not a very descriptive name. The internal name in statsmodels is linpred.\nThe linear predictor including offset and exposure can be obtained using the results predict method\nresults_glm.predict(..., linear=True)\nor md1.predict in your case.\nOffset can be set to zero using the keyword to obtain the linear predictor without offset, similar for exposure.","Q_Score":0,"Tags":"python,statsmodels,glm","A_Id":71862739,"CreationDate":"2022-04-13T17:23:00.000","Title":"Where is eta in statsmodels GLM?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a df:\n\n\n\n\nmonth\nA\nB\nC\nD\n\n\n\n\n1994-07\n1\n2\nNAN\nNAN\n\n\n1994-08\n5\n2\n3\n4\n\n\n1994-09\n1\n2\n1\n1\n\n\n1994-10\n1\n2\n3\n1\n\n\n1994-11\n1\nNAN\n3\n1\n\n\n1995-07\n1\n2\n2\n4\n\n\n1995-08\n1\n2\n3\n4\n\n\n\n\nI want, for each column, to get the product of a rolling window of size 5, ignoring NAN values. Which means, in this case:\n\n\n\n\nmonth\nA\nB\nC\nD\n\n\n\n\n1994-11\n5\n16\n27\n4\n\n\n1995-07\n5\n16\n54\n16\n\n\n1995-08\n1\n16\n54\n16\n\n\n\n\nFor D(1994-11), for example, I would get 4 (4111), and C (1995-07) results in 54 (2331*3). I have tried:\ndf = df.rolling(window=5,axis=0).apply(prod(min_count=1))\nIt is an attempt of applying the function product from pandas.\nBut I get the error \"NameError: name 'prod' is not defined\"","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":32,"Q_Id":71895665,"Users Score":0,"Answer":"In case somebody comes to this: I have solved in the following way:\ndf = df.rolling(window=5, min_periods=1).apply(lambda x: np.prod(1 + x) - 1)","Q_Score":1,"Tags":"python,pandas,product,rolling-computation","A_Id":71896047,"CreationDate":"2022-04-16T16:34:00.000","Title":"Compute product with rolling window","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a 3D MR image as a NIfTI file (.nii.gz). I also have a 'mask' image as a NIfTI file, which is just a bunch of 0s and 1s. The 1s in this mask image represent the region of the 3D MR image I am interested in.\nI want to retrieve the intensities of the pixels in the 3D MRI image which exist in the mask (i.e. are 1s in the mask image file). The only intensity feature I have found is sitk.MinimumMaximumImageFilter which isn't too useful since it uses the entire image (instead of a particular region), and also only gives the minimum and maximum of said image.\nI don't think that the GetPixel() function helps me in this case either, since the 'pixel value' that it outputs is different to the intensity which I observe in the ITK-SNAP viewer. 
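[Editor's note] To make the statsmodels answer above ("Where is eta in statsmodels GLM?") concrete, a small sketch. Depending on the statsmodels version the keyword is linear=True or which="linear"; the manual X @ params route from the question always works and is shown for comparison.

import numpy as np
import statsmodels.api as sm

np.random.seed(0)
X = sm.add_constant(np.random.rand(100, 2))
y = np.random.poisson(lam=np.exp(X @ [0.3, 0.5, -0.2]))

res = sm.GLM(y, X, family=sm.families.Poisson()).fit()

eta_manual = X @ res.params               # linear predictor computed by hand
eta_api = res.predict(X, which="linear")  # use linear=True on older statsmodels versions
print(np.allclose(eta_manual, eta_api))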
Is this correct?\nWhat tool or feature could I use to help in this scenario?","AnswerCount":2,"Available Count":1,"Score":0.2913126125,"is_accepted":false,"ViewCount":37,"Q_Id":71896315,"Users Score":3,"Answer":"use itk::BinaryImageToStatisticsLabelMapFilter","Q_Score":0,"Tags":"python,itk,simpleitk,nifti","A_Id":71994179,"CreationDate":"2022-04-16T18:06:00.000","Title":"Getting the intensities of a certain region of an MR image","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Editing this to reflect addition work:\nSituation\nI have 2 pandas dataframes of Twitter search tweets API data in which I have a common data key, author_id.\nI'm using the join method.\nCode is:\ndfTW08 = dfTW07.join(dfTW04uf, on='author_id', how='left', lsuffix='', rsuffix='4')\nResults\nWhen I run that, everything comes out as expected, except that all the other dataframe (dfTW04uf) values come in as NaN. Including the values for the other dataframe's author_id column.\nAssessment\nI'm not getting any error messages, but have to think it's something about the datatypes. The other dataframe is a mix of int64, object, bool, and datetime datatypes. So it seems odd they'd all be unrecognized.\nAny suggestions on how to troubleshoot this greatly appreciated.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":41,"Q_Id":71906218,"Users Score":0,"Answer":"Couldn't figure out the NaN issue using join, but was able to merge the databases with this:\ncallingdf.merge(otherdf, on='author_id', how='left', indicator=True)\nThen did sort_values and drop_duplicates to get the final list I wanted.","Q_Score":0,"Tags":"python,pandas,dataframe,join","A_Id":71919399,"CreationDate":"2022-04-17T23:27:00.000","Title":"`join` method importing `other` dataframe values as `NaN`","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Editing this to reflect addition work:\nSituation\nI have 2 pandas dataframes of Twitter search tweets API data in which I have a common data key, author_id.\nI'm using the join method.\nCode is:\ndfTW08 = dfTW07.join(dfTW04uf, on='author_id', how='left', lsuffix='', rsuffix='4')\nResults\nWhen I run that, everything comes out as expected, except that all the other dataframe (dfTW04uf) values come in as NaN. Including the values for the other dataframe's author_id column.\nAssessment\nI'm not getting any error messages, but have to think it's something about the datatypes. The other dataframe is a mix of int64, object, bool, and datetime datatypes. So it seems odd they'd all be unrecognized.\nAny suggestions on how to troubleshoot this greatly appreciated.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":41,"Q_Id":71906218,"Users Score":0,"Answer":"You can use merge instead of join since merge had everything join does but with more \"power\". (anything you can do with join you can do with merge)\nI am assuming the NaN is coming up since the results aren't being discarded when you asked the first join to use on author ID and then include suffixes fo x an y. 
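[Editor's note] Relating to the "Getting the intensities of a certain region of an MR image" question above: besides the label-statistics filter suggested in the answer, a simple numpy-based route with SimpleITK is sketched below (the file names are placeholders). sitk.LabelStatisticsImageFilter is another option when summary statistics rather than raw intensities are needed.

import SimpleITK as sitk
import numpy as np

image = sitk.ReadImage("mri.nii.gz")
mask = sitk.ReadImage("mask.nii.gz")

img_arr = sitk.GetArrayFromImage(image)    # z, y, x array of intensities
mask_arr = sitk.GetArrayFromImage(mask)    # same shape, 0/1 labels

region_intensities = img_arr[mask_arr == 1]    # 1-D array of voxel values inside the mask
print(region_intensities.min(), region_intensities.mean(), region_intensities.max())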
When you left join with merge you are discarding the non matches without any x and y suffixes.","Q_Score":0,"Tags":"python,pandas,dataframe,join","A_Id":71919568,"CreationDate":"2022-04-17T23:27:00.000","Title":"`join` method importing `other` dataframe values as `NaN`","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Traceback (most recent call last):\nFile \"D:\\Miniconda3\\envs\\ppy39\\lib\\site-packages\\flask\\app.py\", line 2073, in wsgi_app\nresponse = self.full_dispatch_request()\nFile \"D:\\Miniconda3\\envs\\ppy39\\lib\\site-packages\\flask\\app.py\", line 1518, in full_dispatch_request\nrv = self.handle_user_exception(e)\nFile \"D:\\Miniconda3\\envs\\ppy39\\lib\\site-packages\\flask\\app.py\", line 1516, in full_dispatch_request\nrv = self.dispatch_request()\nFile \"D:\\Miniconda3\\envs\\ppy39\\lib\\site-packages\\flask\\app.py\", line 1502, in dispatch_request\nreturn self.ensure_sync(self.view_functions[rule.endpoint])(**req.view_args)\nFile \"C:\\Users\\admin\\Desktop\\VScode\\WorkProjects\\2022\\Product_Classification\\retention_ml.py\", line 169, in output_result\nresult_28 = xgboost_reg_281.predict(data[col_reg_28])\nFile \"D:\\Miniconda3\\envs\\ppy39\\lib\\site-packages\\xgboost\\sklearn.py\", line 1047, in predict\nif self._can_use_inplace_predict():\nFile \"D:\\Miniconda3\\envs\\ppy39\\lib\\site-packages\\xgboost\\sklearn.py\", line 983, in _can_use_inplace_predict\npredictor = self.get_params().get(\"predictor\", None)\nFile \"D:\\Miniconda3\\envs\\ppy39\\lib\\site-packages\\xgboost\\sklearn.py\", line 636, in get_params\nparams.update(cp.class.get_params(cp, deep))\nFile \"D:\\Miniconda3\\envs\\ppy39\\lib\\site-packages\\xgboost\\sklearn.py\", line 633, in get_params\nparams = super().get_params(deep)\nFile \"D:\\Miniconda3\\envs\\ppy39\\lib\\site-packages\\sklearn\\base.py\", line 205, in get_params\nvalue = getattr(self, key)\nAttributeError: 'XGBModel' object has no attribute 'callbacks'","AnswerCount":2,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":1159,"Q_Id":71912084,"Users Score":2,"Answer":"Check your xgboost library version. I loaded a model saved from xgboost==1.5.0 env to a xgboost==1.6.0 env and got the same error when operating on the model. I downgraded xgboost to 1.5.0 and everything worked fine. I suspect the model saving format is changing since 1.6.0 as it gives warning about me loading a binary model file using pickle dump.","Q_Score":1,"Tags":"python,flask,scikit-learn,xgboost","A_Id":71920893,"CreationDate":"2022-04-18T12:54:00.000","Title":"AttributeError: 'XGBModel' object has no attribute 'callbacks'","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Say I have multiple images\/plots produced by seaborn or matplotlib in a Python Jupyter Notebook. 
How do I save all of them into one PDF file in one go?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":102,"Q_Id":71938230,"Users Score":0,"Answer":"in notebook : file => Download as => PDF ...\nor\nyou can import your file in google drive, and open it with google colab then : file => print => save it as a pdf.","Q_Score":1,"Tags":"python,matplotlib,jupyter-notebook,jupyter","A_Id":71938562,"CreationDate":"2022-04-20T10:39:00.000","Title":"Jupyter Notebook Save Multiple Plots As One PDF","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"So, I have a metadata dataset with labels and their descriptions.\nA sample from the dataset looks like the following:\n\n\n\n\nLabel\nDescriptions\n\n\n\n\nRelease Date\nDate of formal issuance\n\n\nLanguage\nThe language of the dataset\n\n\n\n\nI want to train a ML model which learns the relationship between Label (Input X) and Descriptions (Target Y-categories) and then, can predict the category or the description of an unseen Label from the given list of Categories \/ Descriptions. Here we assume, the new label would be similar, in spelling or in meaning, to one of the labels used in the training model.\nUnfortunately, most of the algorithms try to map the description (which is usually a text document, review etc) to one of the categories (positive or negative etc)\nWould be great to get some help here, as to which algorithm would help me solve this problem.\nThanks in advance!!","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":19,"Q_Id":71965472,"Users Score":1,"Answer":"I don't think it is possible: it can't be framed as a classification task nor a translation\/transformation one, in fact at a high level the description is a better explaination of the label, tailored with external (domain?) knowledge that cannot be expressed in any model I know of.\nBesides that, I don't think you have the necessary data amount and variability to express a sufficent generalization over the output.","Q_Score":0,"Tags":"python,text-classification","A_Id":71965790,"CreationDate":"2022-04-22T08:12:00.000","Title":"Is there a ML Algorithm to map labels (Single or max a couple of words) to description (a text telling more about a label)","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have clustered the pixels of an image into clusters of different sizes and shapes. I want to max pool each cluster as fast as possible because the max pooling happens in one layer of my CNN.\nTo clarify:\nInput is a batch of images with the following shape [batch_size, height of image, width of image, number of channels]. I have clustered each image before I start training my CNN. So for each image I have a ndarray of labels with shape [height of image, width of image].\nHow can I max pool over all pixels of an image that have the same label for all labels? I understand how to do it with a of for loop but that is painstakingly slow. 
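[Editor's note] On the "Save Multiple Plots As One PDF" question above: besides exporting the whole notebook as the answer suggests, matplotlib's PdfPages gives a programmatic route; a minimal sketch (the output file name is arbitrary):

import numpy as np
import matplotlib.pyplot as plt
from matplotlib.backends.backend_pdf import PdfPages

x = np.linspace(0, 10, 100)

with PdfPages("all_plots.pdf") as pdf:
    for k in range(3):                  # one page per figure
        fig, ax = plt.subplots()
        ax.plot(x, np.sin(x + k))
        ax.set_title(f"plot {k}")
        pdf.savefig(fig)                # appends the current figure as a new PDF page
        plt.close(fig)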
I am searching for a fast solution that ideally can max pool over every cluster of each image in less than a second.\nFor implementation, I use Python3.7 and PyTorch.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":24,"Q_Id":71967972,"Users Score":0,"Answer":"I figured it out. torch_scatter. scatter_max(img, cluster_labels) outputs the max element from each cluster and removes the for loop from my code.","Q_Score":0,"Tags":"python-3.x,pytorch,conv-neural-network,k-means,max-pooling","A_Id":72337879,"CreationDate":"2022-04-22T11:25:00.000","Title":"How can I speed up max pooling clusters of different sizes and shapes of an image?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Normally I would use the code torch.save(model.state_dict(), 'test.pth') to save the best model based on the performance of the validation set.\nIn the training phase, I print the loss and Accuracy in the last epoch and I got Loss:0.38703016219139097 and Accutacy:86.9.\nHowever, When I load the model which I just got from the training phase to print the loss and Accuracy, I would get the same Accuracy and different loss: 0.38702996191978456.\nWhy would that happen? I try different datasets and neural networks, but get the same result.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":34,"Q_Id":71968887,"Users Score":0,"Answer":"If I've understood correctly, at the end of each epoch, you print the training accuracy\/loss, and also save the model if it beats the current best model on the validation set. Is that it?\nBecause if my understanding of the situation is correct, then it is perfectly normal. Your \"best\" model in regards of the TRAINING accuracy\/loss is under no obligation to also be the best in regards of the VALIDATION accuracy\/loss. (One of the best examples of this is the overfitting phenomenon)","Q_Score":1,"Tags":"python,machine-learning,deep-learning,pytorch","A_Id":71969008,"CreationDate":"2022-04-22T12:40:00.000","Title":"Why the accuracy and loss number is different compared with the train phase if I load a trained model?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am observing that if a is a list (or a numpy array) with elements [1,2,3] and I ask for a[1:-1:-1], then I get the empty list. I would expect to get [2,1] assuming that the slicing spans the indexes obtainable decrementing from 1 to -1 excluding the last value (that is excluding -1), that is indexes 1 and 0.\nThe actual behavior may have some justification but makes things more complex than expected when one needs to take a subarray of an array a starting from some generic index i to index i+m (excluded) in reverse order. One would tend to write a[i+m-1:i-1:-1] but this suddenly breaks if i is set to 0. The fact that it works for all i but zero looks like a nasty inconsistency. 
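[Editor's note] Expanding slightly on the torch_scatter answer above ("How can I speed up max pooling clusters of different sizes and shapes of an image?"): a minimal sketch of scatter_max pooling per cluster label, with made-up tensor sizes; real code would flatten each image of the batch the same way.

import torch
from torch_scatter import scatter_max

# hypothetical single image: 3 channels, 4x4 pixels, and a cluster label per pixel
channels, height, width = 3, 4, 4
img = torch.rand(channels, height * width)       # flatten the spatial dimensions
labels = torch.arange(height * width) % 5        # cluster id of every pixel (5 clusters)

# max over all pixels that share a label, independently per channel:
# pooled has shape [channels, num_clusters]
pooled, argmax = scatter_max(img, labels, dim=1)
print(pooled.shape)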
Obviously, there are workarounds:\n\none could write a[i+m-1-n:i-1-n:-1] offsetting everything by -n where n is the array length; or\none could write a[i:i+m][::-1].\n\nHowever, in case 1 the need to know the array length appears rather unnatural and in case 2 the double indexing appears as a not very justified overhead if the slicing is done in a tight loop.\n\nIs there any important reason that I am missing for which it is important that the behavior is as it is?\n\nHas this issue been considered by the NumPy community?\n\nIs there some better workaround than those I came up with?","AnswerCount":4,"Available Count":2,"Score":0.049958375,"is_accepted":false,"ViewCount":71,"Q_Id":71980382,"Users Score":1,"Answer":"List[1:-1:-1] means List[start index : end index : jump]\nIndexing in List:\n\n\n\n\nNumber\n1\n2\n3\n\n\n\n\nIndex\n0\n1\n2\n\n\nIndex\n-3\n-2\n-1\n\n\n\n\nSo, if we take list a[1,2,3] and find list of a[1:-1:-1] means starting index = 1, ending index = -1, jump = -1\nSo, list traversing through the\n\nindex 1 (i.e. number=2) to index -1 (i.e. number=3) but jump = -1 (means backward position)\n\n\nSo, return an empty list i.e. []","Q_Score":2,"Tags":"python,arrays,numpy,numpy-slicing","A_Id":71980899,"CreationDate":"2022-04-23T14:04:00.000","Title":"Why does a[1:-1:-1] with a=[1,2,3] return []?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am observing that if a is a list (or a numpy array) with elements [1,2,3] and I ask for a[1:-1:-1], then I get the empty list. I would expect to get [2,1] assuming that the slicing spans the indexes obtainable decrementing from 1 to -1 excluding the last value (that is excluding -1), that is indexes 1 and 0.\nThe actual behavior may have some justification but makes things more complex than expected when one needs to take a subarray of an array a starting from some generic index i to index i+m (excluded) in reverse order. One would tend to write a[i+m-1:i-1:-1] but this suddenly breaks if i is set to 0. The fact that it works for all i but zero looks like a nasty inconsistency. Obviously, there are workarounds:\n\none could write a[i+m-1-n:i-1-n:-1] offsetting everything by -n where n is the array length; or\none could write a[i:i+m][::-1].\n\nHowever, in case 1 the need to know the array length appears rather unnatural and in case 2 the double indexing appears as a not very justified overhead if the slicing is done in a tight loop.\n\nIs there any important reason that I am missing for which it is important that the behavior is as it is?\n\nHas this issue been considered by the NumPy community?\n\nIs there some better workaround than those I came up with?","AnswerCount":4,"Available Count":2,"Score":0.1488850336,"is_accepted":false,"ViewCount":71,"Q_Id":71980382,"Users Score":3,"Answer":"-1 as an index has a special meaning [1], it's replaced with the highest possible = last index of a list.\nSo a[1:-1:-1] becomes a[1:2:-1] which is empty.\n[1] Actually, all negative indices in Python work like that. 
-1 means the last element of a list, -2 the second-to-last, -3 the one before that and so on.","Q_Score":2,"Tags":"python,arrays,numpy,numpy-slicing","A_Id":71980474,"CreationDate":"2022-04-23T14:04:00.000","Title":"Why does a[1:-1:-1] with a=[1,2,3] return []?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a data set that has the following columns. funciton: pd.melt()\nyears name date m1 m2 m3 m4 m5 m6 \u2026. to m12\nI set me variable name to month and try to include m1-m12, but I just cant get it to work. it will instead put everything in the new week column which looks like\nweek\nyear\nname\ndate\nm1\nm2\nI don't want the week year name date, is there a way to just put m1-m12 in like indexing? i have tried it it didn't work","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":30,"Q_Id":71992998,"Users Score":0,"Answer":"sample output\nmonth\nm1\nm2\nm3\n...\nm12\nhere is the answer i come up with using iloc!\nsorry for asking a easy question that I can figure out myself\npd.melt(.......value_vars=billboard.iloc[-12:])","Q_Score":0,"Tags":"python,pandas,indexing,jupyter-lab,pandas-melt","A_Id":71993162,"CreationDate":"2022-04-24T22:55:00.000","Title":"How do I include many columns in pd.melt function instead of just typing each and everyone out?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am new to RASA. I gone through updated documentation Rasa 3 but I don't know how to pre-process the message of the user before nlu-model.\ne.g., if user enter hi, so i want to read that text before any action taken by rasa like tokenization etc.\nIf anyone can please guide me for this.\nEDIT: I want to capture user text in rasa itself, before any other pipeline action, so that I can do my own processing. (for learning purpose)","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":31,"Q_Id":71993870,"Users Score":0,"Answer":"In such scenario, you can handle the user message from front end (chatbot widget), specifically from JS script.","Q_Score":2,"Tags":"python,rasa,rasa-nlu,rasa-core","A_Id":71994843,"CreationDate":"2022-04-25T02:16:00.000","Title":"RASA preprocessing, user entered text","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have 8Khz data for voice recognition model training, but the model does not support 8Khz, so I want to upsample it to 16Khz. How can I upsample through the scipy library?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":42,"Q_Id":71995979,"Users Score":0,"Answer":"This is going to get dirty, fast.\nYou need an interpolation filter. It upsamples the input, then low-pass filters it back to the original passband. This preserves the original signal and does not add \"interesting\" artifacts to it.\nStart by Googling \"interpolation filter\".\nGood luck. You're going to need it. 
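[Editor's note] For the 8 kHz to 16 kHz question above ("Upsampling Audio data"): scipy already ships a polyphase interpolation filter, which is one concrete way to do what this answer describes; a short sketch (the WAV file name is a placeholder):

from scipy.io import wavfile
from scipy.signal import resample_poly

rate, data = wavfile.read("speech_8khz.wav")    # rate == 8000
upsampled = resample_poly(data, up=2, down=1)   # interpolate by 2, low-pass filtered internally
wavfile.write("speech_16khz.wav", rate * 2, upsampled.astype(data.dtype))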
(Yes, I've been down this route, a little bit, but that was some years ago and the code is most emphatically not releasable.)","Q_Score":0,"Tags":"python,audio,resampling","A_Id":71996096,"CreationDate":"2022-04-25T07:36:00.000","Title":"Upsampling Audio data","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a labeled dataset and I am going to develop a classifier for a multilabel classification problem (ex: 5 labels). I have already developed BERT, and CNN, but I was wondering if I could use RL for text classification as well.\nAs I know, using RL we can use a smaller training dataset\nI am looking for a python code for RL.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":42,"Q_Id":72007160,"Users Score":1,"Answer":"Reinforcement learning is a different thing from BERT or CNN. It is not actually a technique or a model, it is a type of problem(hidden markov models), and the set of techniques used to solve that problem.\nMore precisely, Reinforcement Learning it the class of problems where you have\n\nAn agent\nwho has to chooses actions to take\nThose actions will change its state and give it a reward\nWhere your goal is to maximize the reward.\n\nThis fits very well with game AI, or robotics applications for example.\nBut in your case, you want to develop a classifier from a labeled dataset. That is not a reinforcement learning problem, it is supervised learning","Q_Score":0,"Tags":"python,reinforcement-learning,multilabel-classification","A_Id":72007195,"CreationDate":"2022-04-26T00:46:00.000","Title":"Are there examples of using reinforcement learning for multi label text classification?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am working on an inventory forecasting model and I require specific data in order to train and test the model. Currently, I am trying to use one year worth of data to build a basic linear regression model to predict for the following year.\nWhat I am having trouble with is removing outliers from my dataframe that contains 2 different types of outliers (\"quantity\" and \"dates\"), and I am only trying to remove the outliers using \"quantity\".","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":30,"Q_Id":72009873,"Users Score":0,"Answer":"You can remove the outliers by comparing them to the mean or median (I suggest using the median). 
Divide the distance between each value and the median by the distance between the maximum and median values if it is greater than a threshold value (eg 0.98, It depends on your data and only you can select it) Delete that data.\nFor example, if you set your threshold to 1, the farthest data will be deleted.","Q_Score":1,"Tags":"python,pandas,dataframe,linear-regression","A_Id":72010211,"CreationDate":"2022-04-26T07:22:00.000","Title":"How do I remove outliers from a dataframe that contains floating integers in Y-axis and dates in X-axis?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"The following code is giving me some unexpected output. In summary, I am defining a Dictionary (dict2) and then creating a series out of it. Then, I am re-assigning a new value to the Math course and the Science course using the Series' method. Only the value for Science changes (and for Math it is unchanged). Can you please help me understand why? Thank you.\nEdit: My goal is to understand why this is not working as expected, rather than actually reassigning a value to Math. I've also added the code here instead of the screenshot. Thank you.\ndict2 = {'Maths': 60, 'Science': 89, 'English': 76, 'Social Science': 86}\nmarks_series = pd.Series(dict2)\nprint(marks_series)\nmarks_series.Maths = 65\nmarks_series.Science = 90\nprint (marks_series)","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":28,"Q_Id":72016959,"Users Score":0,"Answer":"I restarted my notebook and that fixed the issue. I still wonder why it happened in the first place. But that's for another day.","Q_Score":0,"Tags":"python,pandas,numpy,methods,series","A_Id":72017741,"CreationDate":"2022-04-26T15:53:00.000","Title":"Can't Change the First Value in a Series Using Index as a Method in Python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"The following code is giving me some unexpected output. In summary, I am defining a Dictionary (dict2) and then creating a series out of it. Then, I am re-assigning a new value to the Math course and the Science course using the Series' method. Only the value for Science changes (and for Math it is unchanged). Can you please help me understand why? Thank you.\nEdit: My goal is to understand why this is not working as expected, rather than actually reassigning a value to Math. I've also added the code here instead of the screenshot. 
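[Editor's note] A small pandas sketch of the median-distance rule described in the outlier answer just above ("How do I remove outliers from a dataframe..."); the quantity column comes from that question and the 0.98 threshold from the answer, while the numbers are invented.

import pandas as pd

df = pd.DataFrame({"quantity": [5, 7, 6, 8, 5, 400, 6, 7]})

median = df["quantity"].median()
max_dist = df["quantity"].max() - median            # distance from the median to the farthest value
ratio = (df["quantity"] - median).abs() / max_dist  # 0 at the median, 1 at the extreme

threshold = 0.98
df_clean = df[ratio <= threshold]                   # keep rows that are not extreme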
Thank you.\ndict2 = {'Maths': 60, 'Science': 89, 'English': 76, 'Social Science': 86}\nmarks_series = pd.Series(dict2)\nprint(marks_series)\nmarks_series.Maths = 65\nmarks_series.Science = 90\nprint (marks_series)","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":28,"Q_Id":72016959,"Users Score":0,"Answer":"According to replays and info , your code working correctly and the problem may be case of importing pandas and your Env","Q_Score":0,"Tags":"python,pandas,numpy,methods,series","A_Id":72020946,"CreationDate":"2022-04-26T15:53:00.000","Title":"Can't Change the First Value in a Series Using Index as a Method in Python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm doing a convolutional neural network classification and all my training tiles (1000 of them) are in geotiff format. I need to get all of them to a numpy array, but I only found code that will do it for one tiff file at a time.\nIs there a way to convert a whole folder of tiff files at once?\nThanks!","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":33,"Q_Id":72021328,"Users Score":0,"Answer":"Try using a for loop to go through your folder","Q_Score":0,"Tags":"python,numpy,deep-learning,conv-neural-network,geotiff","A_Id":72021363,"CreationDate":"2022-04-26T22:43:00.000","Title":"Is there a way to convert multiple tiff files to numpy array at once?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to make a face recognition for employees as work. I already have system that gets image from cameras and outputs face embeddings (128-dimensional vectors). So my next step, as far as I understand, is to compare these embeddings with the one stored somewhere in database and find one with nearest distance.\nThe problem is that I want to enable machine learning for this. Initially, on like every tutorial, only one photo of employee is used to create a reference embedding. But what if a want to store multiple embeddings for one person? For example, maybe this person came with glasses, or slightly changed appearance so that my system no longer recognises it. I want to be able to associate multiple embeddings with one person or another, creating a collection of embeddings for each employee, I think this would improve recognition system. And if in future my system will show me that there's unknown person, I could tell it that this embedding corresponds to specific person.\nIs there any database that can store (maybe as array) or associate multiple vectors per person? I've looked into Milvus, FAISS, but didn't find anything about that.\nI use Python 3.9 with OpenCV3, Tensorflow and Keras for creating embeddings.","AnswerCount":3,"Available Count":3,"Score":0.0665680765,"is_accepted":false,"ViewCount":76,"Q_Id":72029394,"Users Score":1,"Answer":"If your embeddings come from different dimensions of a person, such as a face and a voiceprint. 
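[Editor's note] On the "convert multiple tiff files to numpy array at once" question above: the for-loop the answer suggests only takes a few lines; a sketch assuming the tifffile package and tiles of identical shape (rasterio would be the GeoTIFF-aware alternative).

import glob
import numpy as np
import tifffile

paths = sorted(glob.glob("tiles/*.tif"))               # every training tile in the folder
tiles = np.stack([tifffile.imread(p) for p in paths])  # shape: (n_tiles, height, width[, bands])
print(tiles.shape)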
Then it makes sense to store two vector fields in milvus, one for the face vector and one for the voiceprint vector.","Q_Score":1,"Tags":"python,tensorflow,keras,face-recognition,deepface","A_Id":72124073,"CreationDate":"2022-04-27T13:12:00.000","Title":"What is the best approach for storing multiple vectors per person for face recognition","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to make a face recognition for employees as work. I already have system that gets image from cameras and outputs face embeddings (128-dimensional vectors). So my next step, as far as I understand, is to compare these embeddings with the one stored somewhere in database and find one with nearest distance.\nThe problem is that I want to enable machine learning for this. Initially, on like every tutorial, only one photo of employee is used to create a reference embedding. But what if a want to store multiple embeddings for one person? For example, maybe this person came with glasses, or slightly changed appearance so that my system no longer recognises it. I want to be able to associate multiple embeddings with one person or another, creating a collection of embeddings for each employee, I think this would improve recognition system. And if in future my system will show me that there's unknown person, I could tell it that this embedding corresponds to specific person.\nIs there any database that can store (maybe as array) or associate multiple vectors per person? I've looked into Milvus, FAISS, but didn't find anything about that.\nI use Python 3.9 with OpenCV3, Tensorflow and Keras for creating embeddings.","AnswerCount":3,"Available Count":3,"Score":0.0665680765,"is_accepted":false,"ViewCount":76,"Q_Id":72029394,"Users Score":1,"Answer":"Maybe you can store one id for one person with different vectors in milvus","Q_Score":1,"Tags":"python,tensorflow,keras,face-recognition,deepface","A_Id":72121639,"CreationDate":"2022-04-27T13:12:00.000","Title":"What is the best approach for storing multiple vectors per person for face recognition","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to make a face recognition for employees as work. I already have system that gets image from cameras and outputs face embeddings (128-dimensional vectors). So my next step, as far as I understand, is to compare these embeddings with the one stored somewhere in database and find one with nearest distance.\nThe problem is that I want to enable machine learning for this. Initially, on like every tutorial, only one photo of employee is used to create a reference embedding. But what if a want to store multiple embeddings for one person? For example, maybe this person came with glasses, or slightly changed appearance so that my system no longer recognises it. I want to be able to associate multiple embeddings with one person or another, creating a collection of embeddings for each employee, I think this would improve recognition system. 
And if in future my system will show me that there's unknown person, I could tell it that this embedding corresponds to specific person.\nIs there any database that can store (maybe as array) or associate multiple vectors per person? I've looked into Milvus, FAISS, but didn't find anything about that.\nI use Python 3.9 with OpenCV3, Tensorflow and Keras for creating embeddings.","AnswerCount":3,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":76,"Q_Id":72029394,"Users Score":0,"Answer":"1- you can store many embeddings for a person. when you have a face to compare, then you will compare it to the many images of each person. then, find the average of the similarity and decide they are same person or different.\n2- if you have many facial images for a person, then you will find embeddings for each photo then find an average embedding. suppose that you have 10 images for a person, you find 128-d embeddings for all of 10 photos. thereafter, you will find the average of each dimension and finally you will have one 128-d embedding.\ni recommend you to store your embeddings in spotify annoy, facebook faiss, nmslib or elasticsearch. you can find implementations of deepface library for python with those vector databases with a basic google search.","Q_Score":1,"Tags":"python,tensorflow,keras,face-recognition,deepface","A_Id":72143469,"CreationDate":"2022-04-27T13:12:00.000","Title":"What is the best approach for storing multiple vectors per person for face recognition","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have 3 scripts at the end of each script i have a dataframe results and i want to run this 3 scrips from one script and to show results (3 dataframes) that i will regroupe in one dataframe.\nIf you know how to run this 3 scripts at the same time and get results in one file (Dataframe)","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":29,"Q_Id":72032280,"Users Score":0,"Answer":"In scripts make sure you run them inside if __name__ == __main__: block(so you don't run then while importing). Turn those scripts into functions (or classes, depending on the structure of your code) and then import them to the main python file. Then write their results to one file inside the main script.","Q_Score":0,"Tags":"python","A_Id":72032349,"CreationDate":"2022-04-27T16:25:00.000","Title":"How run multiple scripts from one scripts and retrieve results","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using the tf_Agents library for contextual bandits usecase.\nIn this usecase predictions (daily range between 20k and 30k predictions, 1 for each user) are made daily (multiple times a day) and training only happens on all the predicted data from 4 days ago (Since the labels for predictions takes 3 days to observe).\nThe driver seems to replay only the batch_size number of experience (Since max_step length is 1 for contextual bandits). 
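[Editor's note] A small numpy sketch of the averaging idea in the face-embedding answer above (several 128-d embeddings per employee collapsed into one, then compared by cosine similarity); the arrays here are random placeholders for real embeddings and the 0.5 threshold is arbitrary.

import numpy as np

rng = np.random.default_rng(0)

# say 10 stored embeddings for one employee, each 128-dimensional
stored = rng.normal(size=(10, 128))
reference = stored.mean(axis=0)            # one averaged 128-d embedding per person

probe = rng.normal(size=128)               # embedding of the face currently seen by the camera

cos_sim = probe @ reference / (np.linalg.norm(probe) * np.linalg.norm(reference))
is_match = cos_sim > 0.5                   # threshold is application-specific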
Also the replay buffer has the same constraint only handling batch size number of experiences.\nI wanted to use checkpointer and save all the predictions (experience from driver which are saved in replay buffer) from the past 4 days and train only on the first of the 4 days saved on each given day.\nI am unsure how to do the following and any help is greatly appreciate.\n\nHow to (run the driver) save replay buffer using checkpoints for the entire day (a day contains, say, 3 predictions runs and each prediction will be made on 30,000 observations [say batch size of 16]). So in this case I need multiple saves for each day\nHow to save the replay buffers for past 4 days (12 prediction runs ) and only retrieve the first 3 prediction runs (replay buffer and the driver run) to train for each day.\nUnsure how to handle the driver, replay buffer and checkpointer configurations given the above #1, #2 above","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":38,"Q_Id":72032751,"Users Score":1,"Answer":"On the Replay Buffer I don't think there is any way to get that working without implementing your own RB class (which I wouldn't necessarily recommend). Seems to me like the most straight forward solution for this is to take the memory inefficiency hit and have two RB with a different size of max_length. One of the two is the one given to the driver to store episodes and then rb.as_dataset(single_determinsitic_pass=true) is used to get the appropriate items to place in the memory of the second one used for training. The only thing you need to checkpoint of course is the first one.\nNote: I'm not sure off-the-top-of-my head how exactly single_determinsitic_pass works, you may want to check that in order to determine which portion of the returned dataset corresponds to the day you want to train from. I also have the suspicion that probably the portion corresponding to the last day shifts, because if I don't remember wrong the RB table that stores the experiences works with a cursor that once reached the maximum length starts overwriting from the beginning.\nNeither RB needs to know about the logic of how many prediction runs there are, in the end your code should manage that logic and you might want to keep track (maybe in a pickle if you want to save this) how many predictions correspond to each day so that you know which ones to pick.","Q_Score":0,"Tags":"python,tensorflow,machine-learning,bandit,tf-agent","A_Id":72049599,"CreationDate":"2022-04-27T17:05:00.000","Title":"How to use the replay buffer in tf_agents for contextual bandit, that predicts and trains on a daily basis","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a huge set of numbers with a defined order. The logic in simple terms would look like this:\n\ndata['values'] = [1,1,3,4,4,-9,10]\ndata['order'] = [1,2,3,4,5,6,7]\nExpectedSum = 0\n\nWhat I wish to return is the original order and values of biggest possible subset of values that we can get with total sum equal 0.\nFor this case one optimal solution would be\n\nsolution['values'] = [1,1,3,4,-9]\nsolution['order'] = [1,2,3,4,6]\n\nThe sum could be also achieved by replacing 4th order number with 5th order number, however, one optimal solution is enough. 
The goal is to reach maximum possible size of subset with total sum =0.\nWas looking for variations of Knapsack problem and maximum subarray algorithms but none met my needs.\nAny hints or directions appreciated.","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":71,"Q_Id":72034742,"Users Score":1,"Answer":"Maybe I'm missing something (it's getting late), but if we denote a subset with k elements as S(k) and we have N elements in total, you could:\n\nsee if S(N) sums to 0; if so, that's the largest subset\nif not, see if any of S(N-1) sums to 0; there are N such sets\nif not, see if any of S(N-2) does; there are N*(N-1) such sets\n\nand so on. This is a \"brute force\" solution and probably far from optimal, but if the largest subset is expected to be relatively large (close to N in size) it shouldn't be too bad. Note that each step can utilize the sums computed in the preceding step.\nYour solution[order] seems to be the indices to the solution subset. It can be done of course, but I'm not sure why you need to get both the values and their indices? It's kind of redundant.\nFinally, while doable in pure Python, the NumPy library might be useful for this kind of problem.","Q_Score":1,"Tags":"python,algorithm,math,dynamic-programming,pseudocode","A_Id":72035141,"CreationDate":"2022-04-27T20:12:00.000","Title":"Largest subset from set of numbers (also negative) having sum equal 0","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am totally new to machine learning, after going through many tutorials I am bit confused over which python version is most stable for libraries like tensorflow and keras ?\nSome are suggesting python 3.7 while some are telling to use latest one. Which one should I use, any suggestions? Please help!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":44,"Q_Id":72038914,"Users Score":2,"Answer":"anywhere from Python 3.6\u20133.9 should work fine, the version doesn't differ too much.","Q_Score":0,"Tags":"python,tensorflow,keras","A_Id":72038976,"CreationDate":"2022-04-28T06:23:00.000","Title":"Which python version should I use for machine learning?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am running a classification model in Microsoft Azure using the pyspark.ml.classification library with RandomForestClassifier.\nMy question:\nI know in sklearn.ensemble.RandomForestClassifier you can specify the n_jobs parameter to configure number of jobs to run in parallel.\nWhen using pyspark.ml.classification.RandomForestClassifier in Azure, I find that each job is run separately. It first runs, Job 1, when done it runs Job 2 etc.\nIs there a way to specify the number of jobs to run in parallel in the pyspark.ml.classification.RandomForestClassifier function?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":28,"Q_Id":72040474,"Users Score":0,"Answer":"The Spark job you're describing does not have the same meaning with sklearn's job (which defining the parallelism via n_jobs parameter).\nSpark does run your classifier in parallel (in the background). 
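[Editor's note] A direct translation of the brute-force idea in the zero-sum-subset answer above into Python; it is only workable for small inputs, since the number of combinations explodes quickly.

from itertools import combinations

values = [1, 1, 3, 4, 4, -9, 10]

def largest_zero_sum_subset(values):
    n = len(values)
    indexed = list(enumerate(values, start=1))      # keep the original 1-based order
    for size in range(n, 0, -1):                    # try the biggest subsets first
        for combo in combinations(indexed, size):
            if sum(v for _, v in combo) == 0:
                return [i for i, _ in combo], [v for _, v in combo]
    return [], []

order, subset = largest_zero_sum_subset(values)
print(order, subset)    # [1, 2, 3, 4, 6] and [1, 1, 3, 4, -9]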
The \"Job 1\" and \"Job 2\" etc is more about running some sequential steps, one after another, and each of them still running with multiple executors behind the scene.","Q_Score":0,"Tags":"python,azure,pyspark,parallel-processing,random-forest","A_Id":72049183,"CreationDate":"2022-04-28T08:34:00.000","Title":"Is Microsoft Azure running jobs in parallel automatically?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have fit a GAM model in h2o with several gam variables (P-splines) using the h2o.estimators.gam package. I'd like to get a table with the factor loading for every level of each gam variable. For example, one of my variables is age, and I need a table of the coefficient for each age.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":29,"Q_Id":72048657,"Users Score":1,"Answer":"Right now we do not support categorical columns in generating splines. It is on our roadmap though.","Q_Score":1,"Tags":"python,h2o,gam,bspline,h2o.ai","A_Id":72062440,"CreationDate":"2022-04-28T18:30:00.000","Title":"Is there a way to create a factor table for spline terms using GAMs in Python's h2o","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am installing matplotlib with pip install matplotlib on a juypter hub instance.\nWhen I go to import I get the following error:\n\n----> 4 import matplotlib.plot as plt\n5 import seaborn as sns\nModuleNotFoundError: No module named 'matplotlib.plot'\n\nI couldn\u2019t find anything on here. I have tried to manually install via command line on Linux from git but that didn\u2019t seem to work either. Any help would be great.\nThank you.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":20,"Q_Id":72061349,"Users Score":1,"Answer":"The proper command is import matplotlib.pyplot. If everything is correctly installed it shoud work.","Q_Score":0,"Tags":"python,matplotlib","A_Id":72061490,"CreationDate":"2022-04-29T16:59:00.000","Title":"Plot attribute missing from matplotlib","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"filtered_df = df[~df.index.isin(df_to_remove)]\nWhat does this ~ reduction mean?\nFound it in answers to the task? Was written by smn smart","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":27,"Q_Id":72070246,"Users Score":1,"Answer":"~ Operator performs a not logical operation bitwise, which means it takes a set of data and negates the condition that you are performing on your dataframe.\nIn your case, df.index.isin(df_to_remove) will return a certain set of values, like [True, False, True...]\nWith ~ operator, the output would be the logical negation, [False, True, False...] 
or just the negation of the original condition.","Q_Score":1,"Tags":"python,pandas","A_Id":72070300,"CreationDate":"2022-04-30T16:30:00.000","Title":"What ~ in isin pandas method means&","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"We have a pandas DataFrame df and a set of values set_vals.\nFor a particular column (let's say 'name'), I would now like to compute a new column which is True whenever the value of df['name'] is in set_vals and False otherwise.\nOne way to do this is to write:\ndf['name'].apply(lambda x : x in set_vals)\nbut when both df and set_vals become large this method is very slow. Is there a more efficient way of creating this new column?","AnswerCount":3,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":40,"Q_Id":72076003,"Users Score":1,"Answer":"The real problem is the complexity of df['name'].apply(lambda x : x in set_vals) is O(M*N) where M is the length of df and N is the length of set_vals if set_vals is a list (or another type for which the search complexity is linear).\nThe complexity can be improved to O(M) if set_vals is hashed (turned into dict type) and the search complexity will be O(1).","Q_Score":0,"Tags":"python,pandas,dataframe","A_Id":72076058,"CreationDate":"2022-05-01T10:46:00.000","Title":"Check for a column in pandas dataframe for all elements if they are in a set of values","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Premise\nI am working on a private project in python using numpy. My program works with hexagons, which edges have certain features.\nI currently represent these hexagons by numpy arrays with one entry for each edge.\nThere is a need to compare hexagons for equality, which is special because some differentiable features are considered equal. Also hexagons should be equal if they are the same just in another rotation (numpy array is barrel-rolled by some amount).\nTo solve the first problem, edges are represented by bytes, each bit marking an \"equal\" feature. Now two edges can simply be compared by bitwise_and and any value non zero prooves equality.\nTo solve the second problem equality is checked by comparing all six rotations of a hex aginst the other hex.\nAll arrays I describe here are of dtype=uint8, so 255 is the neutral element for bitwise_and.\nProblem\nEach arrangement of a hexagon has a value associated with it. What I mean by arrangement is how features of the hexagon are arranged, for example two adjacent edges have one feature, the other four an other. The specific features dont matter for this value. So the hex [4, 4, 2, 8, 8, 8] has the same value associated with it as hex [2, 4, 4, 1, 2, 2] (notice rotation by one and substitution of values).\nAdditionally edges can have \"don't care\" as a feature (represented by all ones in binary to make the equality check work as intended). 
In that case I need to find the associated values of all compatible hexagons, no matter the feature of that particular edge.\nWhat I need is a clever representation of these \"meta\"-hexagons, which just care about arrangement not type of features.\nI then need a way for a given hexagon to find all meta-hexagons that could describe its arrangement.\nIdeas\nOne idea is to represent the meta-hexes like every other hexagon, using numbers to differentiate each unique feature. A meta-hex like [1, 1, 2, 1, 2, 2] would need to match any hex [x, x, y, x, y, y].\nHow to find all meta-hexes for a given hexagon with \"dont care\"-edges?\nA: One possibility would be to create all possible meta-hexagons for a given hexagon. The problem with this approach is that the number of possibilities can be quite large. For example, a common hexagon in my application is one with five adjacent \"don't care\"-edges (only one important feature). The number of features is only about 5 (it's actually more, but some features are mutually exclusive so 5 independent features is a good approximation) but even then 5^5=3125 (minus a couple because of equality under rotation) seems quite a lot. The advantage of this approach would be that I don't need any equality checks against the meta-hexes and could use a dictionary to access the values (for example using the numpy bytes as key).\nB: Another possibility would be to save the meta-hexes in a numpy array, which allows fast comparisons against all meta-hexes at once. In that case one could leave \"don't care\"-edges as they were (all bits one) and would only need to transform the given features into meta-hex representation. So [2, 8, 8, 255, 255, 255] would become something like [1, 2, 2, 255, 255, 255] and comparison could be done with bitwise_and and a nonzero check again to make the \"dont care\" edges match anything. The problem in this case is that the hexagon wouldn't match meta-hexes like [2, 3, 3, 1, 1, 1] where the features are simply numbered differently, even though it should. So all possible numbering schemes and rotations would have to be created. Even with rules such that numbers are increased by one from one feature to the next, it would be a couple dozen possible representations.\nQuestions\n\nIn general, is there a way to represent polygons so that two polygons with different rotations but otherwise equal, can be identified as such without having to compare all possible rotations?\n\nWhich numpy functions should I look into to implement my idea in A. Replacing all \"dont cares\" with all possible features sounds like permutations to me?\n\nFor my approach in B, is there a way to further reduce the amount of hexagons I have to create?\n\n\nAny help is appreciated, I've thought about this for three days now, going back and forth in my head. So even remotely related links and reading material or just keywords to lookup and\/or add to the tags of this question are happily received.\nAlso I'm new here, so any tips regarding stack overflow are welcome!\nIf you've read this far, thank you for your time!","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":41,"Q_Id":72092211,"Users Score":1,"Answer":"It seems like you are chasing performance here... I'll answer some of the questions keeping that in mind but be warned: premature optimization is always a bad idea. 
If it is optimization you are looking for it is almost always best to write any solution and then take it from there.\nIn general you can win in terms of time complexity if you are willing to deal with a bit of extra memory.\n\nFor each hexagon create a copy of it but permuted such that the smallest \"edge\" is first. This has a little bit of startup cost and a bit of extra memory but then the comparison is easy as you only need to compare one array. (if the smallest element is repeated you can come up with some heuristic to create a unique order)\n\n2.3. I would create \"mask\" arrays that each hexagon has. This mask array is then used by your comparison function to decide what comparison rules to check for. If you just want code neatness then this just means creating a hexagon object which contains all these extra arrays and then overloading the __eq__property.\nIf you want all this to be fast then you probably have no choice but to implement this \"comparison\" operation in C and then having your python code call it.","Q_Score":0,"Tags":"python,arrays,algorithm,numpy","A_Id":72107281,"CreationDate":"2022-05-02T20:44:00.000","Title":"How do I efficiently represent and compare my data-type (hexagons)?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have two large datasets. Let's say few thousands rows for V dataset with 18 columns. I would need to find correlations between individual rows (e.g., row V125 is similar to row V569 across the 18 columns). But since it's large I don't know how to filter it after. Another problem is that I have B dataset (different information on my 18 columns) and I would like to find similar pattern between the two datasets (e.g., row V55 and row B985 are similar, V3 is present only if B45 is present, etc...). Is there a way to find out? I'm open to any solutions. PS: this is my first question so let me know if it needs to be edited or I'm not clear. Thank you for any help.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":70,"Q_Id":72095659,"Users Score":0,"Answer":"Row V125 is a value, perhaps you meant row 125. If two rows are the same, you can use the duplicate function for pandas or find from the home menu in excel.\nFor the second question, this can be done using bash or the windows terminal for large datasets, but the simplest would be to merge the two datasets. For datasets of few thousand rows, this is very quick. 
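Concretely, something along these lines might work (df_v and df_b are placeholder names for your two dataframes; in recent pandas versions concat is preferred over append):\nimport pandas as pd\nwithin_v = df_v[df_v.duplicated(keep=False)]  # rows of V that repeat across all 18 columns\ncombined = pd.concat([df_v, df_b], ignore_index=True)\nacross = combined[combined.duplicated(keep=False)]  # rows that occur more than once after combining the two datasets\n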
If you are using a pandas dataframe, you can then use the append function to merge them and find the duplicates.","Q_Score":0,"Tags":"python,statistics,data-science,correlation","A_Id":72102325,"CreationDate":"2022-05-03T06:42:00.000","Title":"Data science: How to find patterns and correlations in two datasets in python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a pandas dataframe such as:\n\n\n\n\ngroup\nmonth\nvalue\n\n\n\n\n1\n1\n2\n\n\n1\n2\n2\n\n\n1\n3\n3\n\n\n2\n1\n7\n\n\n2\n2\n8\n\n\n2\n3\n8\n\n\n3\n1\n9\n\n\n3\n2\n0\n\n\n3\n3\n1\n\n\n\n\nAnd I want to calculate a new column ('want' in the below) equal to the value where month == 2, per group, as shown below:\n\n\n\n\ngroup\nmonth\nvalue\nwant\n\n\n\n\n1\n1\n2\n2\n\n\n1\n2\n2\n2\n\n\n1\n3\n3\n2\n\n\n2\n1\n7\n8\n\n\n2\n2\n8\n8\n\n\n2\n3\n8\n8\n\n\n3\n1\n9\n0\n\n\n3\n2\n0\n0\n\n\n3\n3\n1\n0\n\n\n\n\nAnyone able to help?","AnswerCount":4,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":33,"Q_Id":72097545,"Users Score":0,"Answer":"Guess I could just create a groupby df (groupby group where mth == 2) then merge back to it.\nWill just go with that instead of attempting to do via a groupby.apply route!","Q_Score":1,"Tags":"python,pandas","A_Id":72097589,"CreationDate":"2022-05-03T09:45:00.000","Title":"How do I return a value based on another column per group in pandas","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I used tf.stop_gradient() to turn off the gradient calculation for some of the weights in my neural network. Unfortunately, tf.GradientTape().gradient() assigns the gradient for those weights as None, which does not work with optimizer.apply_gradients. The workaround is to afterwards assign zeros to those gradients.\nIs there a better work around?\nIs it possible to have tf.GradientTape().gradient() automatically replace None with zeros? Alternatively, is there a way to get the optimizer to work with None in the gradients list?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":21,"Q_Id":72104379,"Users Score":0,"Answer":"If you are turning off the gradients for some weights, GradientTape().gradient() automatically sets the gradient for those weights to be None which are not compatible with an optimizer. However, they can be replaced by zeros by setting unconnected_gradients as such:\ntape.gradient(loss, weights, unconnected_gradients=tf.UnconnectedGradients.ZERO)\nThis is then compatible with the optimizer.","Q_Score":0,"Tags":"python,tensorflow","A_Id":72113077,"CreationDate":"2022-05-03T19:13:00.000","Title":"TensorFlow: Can \"None\" in gradients be automatically replaced with zeros or used in an optimizer?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to reproduct the code of cross modal focal loss cvpr2021. But I ran into some difficulties and I don't know where to find the solution. 
The difficulties are the following.\n\nFile \"\/data\/run01\/scz1974\/chenjiawei\/bob.paper.cross_modal_focal_loss_cvpr2021\/src\/bob.io.stream\/bob\/io\/stream\/stream_file.py\", line 117, in get_stream_shape\ndescriptor = self.hdf5_file.describe(data_path)\nRuntimeError: HDF5File - describe ('\/HOME\/scz1974\/run\/yanghao\/fasdata\/HQ-WMCA\/MC-PixBiS-224\/preprocessed\/face-station\/26.02.19\/1_01_0064_0000_00_00_000-48a8d5a0.hdf5'): C++ exception caught: 'Cannot find dataset BASLER_BGR' at \/HOME\/scz1974\/run\/yanghao\/fasdata\/HQ-WMCA\/MC-PixBiS-224\/preprocessed\/face-station\/26.02.19\/1_01_0064_0000_00_00_000-48a8d5a0.hdf5:''","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":31,"Q_Id":72126159,"Users Score":0,"Answer":"The instructions assumes that you have obtained the raw dataset which has all the data channels. The preprocessed files only contains grayscale and swir differences. If you want to use grayscale and one of the swir differences as two channels you can skip the preprocessing part as given in the documentation.","Q_Score":0,"Tags":"python-bob","A_Id":72127830,"CreationDate":"2022-05-05T11:06:00.000","Title":"RuntimeError: HDF5File - describe","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a GeoDataframe of about 3200 polygons, and another GeoDataframe of about 26,000 points. I want to get a third GeoDataframe of only the polygons that contain at least one point. This seems like it should be a simple sjoin, but geopandas.sjoin(polygons, points, predicate='contains') returns a GeoDataframe with more polygons than I started with (and very near the number of input points). Examining this GeoDataframe shows that there seem to be some duplicate polygons, perhaps explaining why I have more polygons than I expected. How do I find only the polygons that contain any point without duplicates?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":29,"Q_Id":72134132,"Users Score":0,"Answer":"I found a workaround, although I feel like it's not the best solution. 
My polygons have a unique ID column on which I was able to remove duplicates:\ngeopandas.sjoin(polygons, points, predicate='contains').drop_duplicates(subset=['UNIQUE_ID'], keep='first')","Q_Score":0,"Tags":"python,pandas,geopandas","A_Id":72145076,"CreationDate":"2022-05-05T22:09:00.000","Title":"How do I find all the polygons of a GeoDataframe that contain any point of another GeoDataframe in GeoPandas?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"when I import statsmodels.api as sm it gives the error import statsmodels.api as sm\nBut if I only import statsmodels as sm then it does not give any error\nbut a few days ago import statsmodels.api as sm was also working\nand I also tried pip install statsmodels --force-reinstall --user But it did not fix the problem\nAnd also my python file is not named statsmodels.py or statsmodels.ipynb","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":18,"Q_Id":72136331,"Users Score":0,"Answer":"after I reloaded vs code after running pip install statsmodels --force-reinstall --user it fixed my problem","Q_Score":0,"Tags":"python,statsmodels","A_Id":72136393,"CreationDate":"2022-05-06T04:54:00.000","Title":"Error While Importing statsmodels.api \"AttributeError: module 'scipy' has no attribute '_lib'\"","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"So this may seem like a simple question, but every question I've checked isn't exactly approaching the problem in the same way I am.\nI'm trying to bin the timestamps of a dataframe into specific buckets. I want to be able to count every minute of a dataframe starting from the first row until the last. I then want to turn that counted minute into a bucket (starting from 1 going to n). I then want to count every row what second it was for the timestamp of that row until the end of the bin.\nHere is an example of what I want it to look like:\n\n\n\n\ntime_bin\nseconds_in_bin\ntime\n\n\n\n\n1\n1\n2022-05-05 22:12:59\n\n\n1\n2\n2022-05-05 22:13:00\n\n\n1\n3\n2022-05-05 22:13:01\n\n\n1\n4\n2022-05-05 22:13:02\n\n\n\n\nI'm currently working in python and am trying to do this in pandas with my data. I feel like this problem is much easier than I think it is and I'm just not thinking of the right solution, but some help would be appreciated.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":41,"Q_Id":72166499,"Users Score":0,"Answer":"I am not sure I quite get what you are going for here but wouldn't this be equivalent to getting the rank of seconds?\nAs far as I understand it, binning has to do with putting together an interval (fixed or not) and counting the number of items in it. 
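For what it's worth, if the goal is a one-minute bucket id plus a running second counter inside each bucket, a rough sketch could be (assuming df has a datetime64 column named time and roughly one row per second):\ndf = df.sort_values(\"time\")\ndf[\"time_bin\"] = ((df[\"time\"] - df[\"time\"].iloc[0]).dt.total_seconds() \/\/ 60).astype(int) + 1  # minutes elapsed since the first row, starting at 1\ndf[\"seconds_in_bin\"] = df.groupby(\"time_bin\").cumcount() + 1  # position of the row inside its minute bucket\n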
If you could please elaborate on this I'll do my best to help with a more plausible answer.","Q_Score":0,"Tags":"python,pandas,dataframe","A_Id":72166573,"CreationDate":"2022-05-09T02:45:00.000","Title":"Bin rows by time with pandas","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"What is the basic difference between these two loss functions? I have already tried using both the loss functions.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":46,"Q_Id":72167344,"Users Score":0,"Answer":"BCEloss is the Binary_Cross_Entropy loss.\ntorch.nn.functional.binary_cross_entropy calculates the actual loss inside the torch.nn.BCEloss()","Q_Score":0,"Tags":"python,pytorch,loss-function","A_Id":72167501,"CreationDate":"2022-05-09T05:30:00.000","Title":"torch.nn.BCEloss() and torch.nn.functional.binary_cross_entropy","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"img_height,img_width=180,100 batch_size=32 train_ds = tf.keras.preprocessing.image_dataset_from_directory(data_dir1,validation_split=0.01,subset=\"training\",seed=123,image_size=(img_height, img_width),batch_size=batch_size)\nOutput: Found 1376 files belonging to 4 classes.\nUsing 1363 files for training.\nhow can I get the total number of classes in a variable?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":39,"Q_Id":72167658,"Users Score":0,"Answer":"label_map = (train.ds.class_indices)","Q_Score":0,"Tags":"python,tensorflow,keras","A_Id":72168046,"CreationDate":"2022-05-09T06:15:00.000","Title":"how to obtain the number of classes using tf.keras.preprocessing.image_dataset_from_directory?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"A noob question.\nAs I understand, the pipeline library of scikit learn is a kind of automation helper, which brings the data through a defined data processing cycle. But in this case I don't see any sense in it.\nWhy can't I implement data preparation, model training, score estimation, etc. via functional or OOP programming in python? For me it seems much more agile and simple, you can control all inputs, adjust complex dynamic parameter grids, evaluate complex metrics, etc.\nCan you tell me, why should anyone use sklearn.pipelines? Why does it exist?","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":40,"Q_Id":72183414,"Users Score":0,"Answer":"I have used pipelines recently for data exploration purposes.\nI wanted to random search different pipelines.\nThis could be at least one reason to use pipelines.\nBut you are right pipelines aren't verry useful for many other purposes.","Q_Score":2,"Tags":"python,scikit-learn,data-science,pipeline","A_Id":72183513,"CreationDate":"2022-05-10T09:00:00.000","Title":"Scikit learn pipelines. 
Does it make any sense?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm running one kernel to learn a Tensorflow model and that's using my GPU. Now, in the same conda environment, I would like to evaluate another model learned before, and the model is also a Tensorflow one. I'm sure I can run two kernels with the same conda environment mostly but I'm not sure when using GPU. Now if I run a kernel using Tensorflow, can it affect a kernel running early somehow, especially in terms of GPU usage?\nMy environment: Windows10, tensorflow2.1, python3.7.9","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":14,"Q_Id":72185529,"Users Score":0,"Answer":"This is not the best answer though, I realized that I can evaluate my model in another conda environment that has another version of Tensorflow. In this environment, my CUDA and CUDNN versions are not compatible with the version of Tensorflow, so my GPU was not used. In this sense, I evaluated a model without stopping or affecting learning a model in the running kernel.","Q_Score":0,"Tags":"python,tensorflow,kernel,gpu,conda","A_Id":72189933,"CreationDate":"2022-05-10T11:27:00.000","Title":"Can two kernels access the same conda environment at the same time even when using GPU?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Good day,\nI am quite new to Bokeh and I'm making a few charts. I want to bold only part of the text in the title of the chart.\n\nAs an example, I want to take this:\n\n\"Number of schools in District B in 2022\"\n\nAnd turn it into this:\n\n\"Number of schools in District B in 2022\"\n\n\nIs there a way to do that with a Bokeh chart and maybe some LaTeX?\nThanks!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":23,"Q_Id":72186948,"Users Score":1,"Answer":"Basic plot titles are rendered directly on the HTML canvas, which has very simple text rendering options. Bokeh only exposes one set of text properties for standard titles. Accordingly, what you are asking for is not possible using standard titles.\nFor now, your best option is to not use a plot title at all. Instead. Put a Div above the plot in a column, and use the Div for a \"fancy title\" (since can contain arbitrary HTML to look however you like).\nIn the near future (Bokeh 3.0) LaTeX for plot titles will also be an option.","Q_Score":0,"Tags":"python,latex,bokeh","A_Id":72189901,"CreationDate":"2022-05-10T13:09:00.000","Title":"Bokeh bold some text in title","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to translate coordinates from one picture (Res: 311, 271) to another picture (Res: 1920, 1080).\nThe coordinates don't need to be accurate in the 2nd picture, it just needs to be the same vector relative to the center of the images\nDon't know if that makes sense...\nEdit:\nSo far I've tried to calculate the difference between the center of the first image and the coordinates and then apply them to the bigger image. 
However this doesn't seem to work very consistently.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":38,"Q_Id":72187971,"Users Score":0,"Answer":"You'll need to use trigonometry.\nSay there's some object in the image you're trying to get the vector for. Given the x and y distances from the center of the original image, you can tabulate the angle and hypotenuse distance. Simply use the same angle and scale the hypotenuse distance with the new size image.","Q_Score":0,"Tags":"python","A_Id":72188084,"CreationDate":"2022-05-10T14:13:00.000","Title":"Translating coordinates of two pictures","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Newbie here, I recon this may be a very foolish question. I am simultaneously running on cuda, in two distinct processes, a simple 3-layer MLP neural network over two different datasets.\nSince these two processes are practically using the same script, they are both creating variables with the same names, so my question is: is each process completely isolated from the other, or is there any way that by running one process after the other I will be overwriting my variables, e.g. my variable x, pertaining to the dataset #1's feature vector I'm giving the model in the first process will be overwritten with the dataset #2's feature vector once I start process 2, therefore influencing my first process's model's predictions?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":29,"Q_Id":72194176,"Users Score":1,"Answer":"is each process completely isolated from the other, or is there any way that by running one process after the other I will be overwriting my variables, e.g. my variable x, pertaining to the dataset #1's feature vector I'm giving the model in the first process will be overwritten with the dataset #2's feature vector once I start process 2, therefore influencing my first process's model's predictions?\n\nThe processes are isolated from each other. One process will not overwrite variables in another process that happen to have the same \"name\".","Q_Score":0,"Tags":"python,linux,process,cuda","A_Id":72236285,"CreationDate":"2022-05-11T00:26:00.000","Title":"Can two processes running simultaneously share a variable?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to do demand sensing for a dataset. Presently I have 157 weeks of data(~3years) and I have to predict next month(8 weeks).In the training dataset, I'm using 149weeks as a train and the last 8 weeks as Val to get the best hyperparameters. But I have observed that in the pred result, there's a huge gap in wmapes between Val and pred. I'm not sure if im overfitting because Val wmape is good.\nthe aim is to get best parameters such that the result pred will good for last month(last 4 weeks\/8weeks).\nnote: there is a gap in train and pred i.e. 
if the train is till 31st jan22, pred will start from 1st mar22.\nHow can I overcome this problem?\nDetails: dataset: timeseries , algo: TCNmodel(darts lib),lang:python.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":42,"Q_Id":72197029,"Users Score":0,"Answer":"How you should split the data depends on some factors:\nIf you have a seasonal influence over a year, you can take a complete year for validation and two years for training.\nIf your data can be predicted from the last n-weeks, you can take some random n-week splits from the full dataset.\nWhats more important here is that I think there's an error in your training pipeline. You should have a sliding window over n-weeks over the full training data and always predict the next 8 weeks from every n-week sequence.","Q_Score":0,"Tags":"python,deep-learning,time-series,hyperparameters,azureml","A_Id":72200405,"CreationDate":"2022-05-11T07:28:00.000","Title":"Get best parameters for time series without losing data","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"so i want to use scipy to minimize a function. In my application i am required to do a function evaluation every time the gradient is required.\nI undersand that i can pass a function that will return both, functionvalue and gradient, when i set the arg jac=True. However, sometimes i assume that this procedure will compute gradients when they are not required, e.g. for linesearch, which is very expensive. Is there any way to pass an argument to evaluate the function and an argument to evaluate function and gradient?\nEDIT:\ni also dont want do compute gradient and functionvalue independently by passing fun and jac since then the fun evaluation inside of jac is often wasted.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":30,"Q_Id":72206589,"Users Score":0,"Answer":"You can pass a callable to the jac argument and it will be used to compute the gradient, while fun will still be called to compute the function value.","Q_Score":0,"Tags":"python,scipy,minimize","A_Id":72208127,"CreationDate":"2022-05-11T19:25:00.000","Title":"Using Scipy.Optimize.Minimize efficiently when compting gradient requires function evaluation","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using PPO stable baselines in Google Colab with Tensorboard activated to track the training progress but after around 100-200K timesteps tensorboard stops updating even with the model still training (learning), does anyone else have this issue and know a fix for it?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":20,"Q_Id":72221554,"Users Score":0,"Answer":"stable baselines doesnt seem to run well on CoLab because of the need to downgrade to tensorflow 1.6 which doesnt run well with tensorboard so instead I used to the newer stable baselines3 with current tensorflow version and tensorboard works fine.","Q_Score":0,"Tags":"python,reinforcement-learning,tensorboard","A_Id":72222603,"CreationDate":"2022-05-12T20:08:00.000","Title":"Tensorboard stops updating in Google Colab during learning with stable baselines","Data Science and Machine Learning":1,"Database and 
SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've been looking around here and on the Internet, but it seems that I'm the first one having this question.\nI'd like to train an ML model (let's say something with PyTorch) and write it to an Apache Kafka cluster. On the other side, there should be the possibility of loading the model again from the received array of bytes. It seems that almost all the frameworks only offer methods to load from a path, so a file.\nThe only constraint I'm trying to satisfy is to not save the model as a file, so I won't need a storage.\nAm I missing something? Do you have any idea how to solve it?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":30,"Q_Id":72223191,"Users Score":1,"Answer":"One reason to avoid this is that Kafka messages have a default of 1MB max. Therefore sending models around in topics wouldn't be the best idea, and therefore why you could instead use model files, stored in a shared filesystem, and send URIs to the files (strings) to download in the consumer clients.\nFor small model files, there is nothing preventing you from dumping the Kafka record bytes to a local file, but if you happen to change the model input parameters, then you'd need to edit the consumer code, anyway.\nOr you can embed the models in other stream processing engines (still on local filesystems), as linked in the comments.","Q_Score":0,"Tags":"python,apache-kafka,scikit-learn,pytorch","A_Id":72223262,"CreationDate":"2022-05-12T23:58:00.000","Title":"Send and load an ML model over Apache Kafka","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"User program failed with ImportError: cannot import name '_joblib_parallel_args' from 'sklearn.utils.fixes' (\/azureml-envs\/azureml_39c082289e18c74c5b8523a75d2c0d1e\/lib\/python3.8\/site-packages\/sklearn\/utils\/fixes.py)\nAnyone know why? Is there a workaround or a fix?","AnswerCount":3,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":444,"Q_Id":72225714,"Users Score":1,"Answer":"Try\npip uninstall scikit-learn\npip install scikit-learn==1.0.2","Q_Score":1,"Tags":"python,azure,visual-studio-code,scikit-learn,joblib","A_Id":72250062,"CreationDate":"2022-05-13T07:14:00.000","Title":"I am having this issue in imports in Visual Studio and azure","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have 130 GB csv.gz file in S3 that was loaded using a parallel unload from redshift to S3. Since it contains multiple files i wanted to reduce the number of files so that its easier to read for my ML model(using sklearn).\nI have managed to convert multiple from from S3 to a spark dataframe (called spark_df) using :\nspark_df1=spark.read.csv(path,header=False,schema=schema)\nspark_df1 contains 100s of columns (features) and is my time series inference data for millions of customers IDs. 
Since it is a time series data, i want to make sure that a the data points of 'customerID' should be present in same output file as I would be reading each partition file as a chunk.\nI want to unload this data back into S3.I don't mind smaller partition of data but each partitioned file SHOULD have the entire time series data of a single customer. in other words one customer's data cannot be in 2 files.\ncurrent code:\ndatasink3=spark_df1.repartition(1).write.format(\"parquet\").save(destination_path)\nHowever, this takes forever to run and the ouput is a single file and it is not even zipped. I also tried using \".coalesce(1)\" instead of \".repartition(1)\" but it was slower in my case.","AnswerCount":3,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":48,"Q_Id":72237180,"Users Score":1,"Answer":"This code worked and the time to run reduced to 1\/5 of the original result. Only thing to note is that make sure that the load is split equally amongst the nodes (in my case i had to make sure that each customer id had ~same number of rows)\nspark_df1.repartition(\"customerID\").write.partitionBy(\"customerID\").format(\"csv\").option(\"compression\",\"gzip\").save(destination_path)","Q_Score":0,"Tags":"python,amazon-s3,pyspark,apache-spark-sql","A_Id":72239368,"CreationDate":"2022-05-14T03:41:00.000","Title":"How to EFFICIENTLY upload a a pyspark dataframe as a zipped csv or parquet file(similiar to.gz format)","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have dataset with quite a lot data missing which stores hourly data for several years. I would now to implement a seasonal filling method where I need the best data I have for two following years (2*8760 entries). This means the least amount of data missing (or least amount of nan values) for two following years. I then need then the end time and start time of this period in datetime format. My data is stored in a dataframe where the index is the hourly datetime. How can I achieve this?\nEDIT:\nTo make it a bit clearer I need to select all entries (values and nan values) from a time period of of two years (or of 2*8760 rows) where the least amount of nan values occur.","AnswerCount":1,"Available Count":1,"Score":0.537049567,"is_accepted":false,"ViewCount":18,"Q_Id":72258335,"Users Score":3,"Answer":"You can remove all the NAN values from your data by using df = df.dropna()","Q_Score":1,"Tags":"python,pandas,datetime,missing-data,data-preprocessing","A_Id":72258486,"CreationDate":"2022-05-16T11:17:00.000","Title":"How can I select with least amount of nan values for a certain time period in panda?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Normally we predict a target using static user features, here I am making serial predictions with increasing user behavior data.\nI use 7 days as a period, and predict the user whether to be converted at the end of the period on a daily basis, using the feature data as of yesterday. 
I predict 6 times in total starting from day 2.\nBut this will have a problem, I'm using a logistic regression-scorecard model to train the complete historical data(of day 7s since the period's data is all available), and each feature will get a coefficient. But in production environment, feature data come day-by-day, eg. in day 2, I have feature_day1, to predict if_buy for the 1st time; in day 3, I have feature_day1 and feature_day2, to predict if_buy for the 2nd time. Technically, I can get a predicted result by setting the features of future days to null, but I doubt the correctness. How to design this model so I can make predictions properly every day?\n\n\n\n\nnote\ndate\nday_no\nuser_id\nif_buy\nfeature_day1\nfeature_day2\nfeature_day3\nfeature_day4\nfeature_day5\nfeature_day6\nfeature_day7\n\n\n\n\ncomplete_entry\n20220501\n7\n1000\n0\n9\n5\n9\n3\n2\n7\n6\n\n\ncomplete_entry\n20220501\n7\n1001\n1\n5\n4\n4\n9\n10\n10\n7\n\n\ncomplete_entry\n20220508\n7\n1010\n1\n1\n6\n3\n7\n3\n0\n2\n\n\ncomplete_entry\n20220508\n7\n1011\n0\n9\n6\n3\n10\n7\n2\n2\n\n\nto_predict_1\n20220509\n1\n1200\n?\n6\n\n\n\n\n\n\n\n\nto_predict_2\n20220510\n2\n1200\n?\n6\n8\n\n\n\n\n\n\n\nto_predict_3\n20220511\n3\n1200\n?\n6\n8\n1\n\n\n\n\n\n\nto_predict_4\n20220512\n4\n1200\n?\n6\n8\n1\n5\n\n\n\n\n\nto_predict_5\n20220513\n5\n1200\n?\n6\n8\n1\n5\n9\n\n\n\n\nto_predict_6\n20220514\n6\n1200\n?\n6\n8\n1\n5\n9\n6\n\n\n\ncomplete_entry\n20220515\n7\n1200\n0\n6\n8\n1\n5\n9\n6\n8","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":15,"Q_Id":72273810,"Users Score":0,"Answer":"Using a predicted value would introduce bias in your models.\nFor me you should train 7 different models :\n\nOne with feature_day1 only\nOne with feature_day1 and feature_day2\nOne with feature_day1, feature_day2 and feature_day3\netc ...\nThe last one with all the feature_day1 to feature_day7\n\nThen you have to investigate how to weight all your models to get the best predictions. I assume that your first model with only feature_day1 won't be the best, so give it low weight or don't use it at all.\nYou have to test different weights anyway","Q_Score":0,"Tags":"python,machine-learning,logistic-regression,feature-engineering","A_Id":72274604,"CreationDate":"2022-05-17T12:09:00.000","Title":"Logistic regression with chronologically available features","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have been working with neural networks for a few months now and I have a little mystery that I can't solve on my own.\nI wanted to create and train a neural network which can identify simple geometric shapes (squares, circles, and triangles) in 56*56 pixel greyscale images. If I use images with a black background and a white shape, everything work pretty well. The training time is about 18 epochs and the accuracy is pretty close to 100% (usually 99.6 % - 99.8%).\nBut all that changes when I invert the images (i.e., now a white background and black shapes). The training time skyrockets to somewhere around 600 epochs and during the first 500-550 epochs nothing really happens. 
The loss barely decreases in those first 500-550 epochs and it just seems like something is \"stuck\".\nWhy does the training time increase so much and how can I reduce it (if possible)?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":39,"Q_Id":72286749,"Users Score":1,"Answer":"Color inversion\n\nYou have to essentially \u201cswitch\u201d WxH pixels, hence touching every possible pixel during augmentation for every image, which amounts to lots of computation.\nIn total it would be DxWxH operations per epoch (D being size of your dataset).\nYou might want to precompute these and feed your neural network with them afterwards.\n\nLoss\n\nIt is harder for neural networks as white is encoded with 1, while black is encoded with 0. Inverse giving us 0 for white and 1 for black.\nThis means most of neural network weights are activated by background pixels!\nWhat is more, every sensible signal (0 in case of inversion) is multiplied by zero value and has not effect on final loss.\nWith hard {0, 1} encoding neural network tries to essentially get signal from the background (now 1 for black pixels) which is mostly meaningless (each weight will tend to zero or almost zero as it bears little to no information) and what it does instead is fitting distribution to your labels (if I predict 1 only I will get smaller loss, no matter the input).\n\nExperiment if you are bored\n\nTry to encode your pixels with smooth values, e.g. white being 0.1 and black being 0.9 (although 1.0 might work okayish, although more epochs might be needed as 0 is very hard to obtain via backprop) and see what the results are.","Q_Score":2,"Tags":"python,machine-learning,neural-network,pytorch","A_Id":72296256,"CreationDate":"2022-05-18T09:42:00.000","Title":"Massive neural network training time increase by inverting images in a data set","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a dataframe with 500+ columns and I want to just store the values from certain columns that contain the string \"valid\" in their names and store them in a new empty list.\nI have used df1=df.filter(regex='valid').values.tolist()\nMy earlier method was - df1=[df['1valid'],df['2valid'],...df['nvalid']] \nI'm unable to differentiate between the two. Any help would be appreciated.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":27,"Q_Id":72288534,"Users Score":0,"Answer":"df.filter(regex='valid') returns DataFrame whose column contains pattern valid. df.values return a Numpy representation of the DataFrame. numpy.tolist() convert the Numpy array to Python list.\ndf1=[df['1valid'],df['2valid'],...df['nvalid']] is a list of Series.","Q_Score":0,"Tags":"python,pandas,dataframe","A_Id":72288830,"CreationDate":"2022-05-18T11:41:00.000","Title":"Append Dataframe columns as a list","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to create rules for rounding a datetime column. 
So far I can only find how to round either to the current year or round up to the next year using df['Year'] = df['Year] + pd.offsets.YearBegin(0) to round up, and df['Year'] = df['Year] + pd.offsets.YearBegin(-1) to round to current year.\nHowever, I am trying to round as follows:\nIf df['Year'] is in the last quarter of the year (> Sep 30th), then round up to next year, otherwise leave as current year.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":22,"Q_Id":72295360,"Users Score":0,"Answer":"I seem to have found an answer to my own question. If I'm not mistaken the following should work with all dates:\ndf['Year'] = np.where(df['Year'].dt.month < 10, df['Year'].dt.year, df['Year'].dt.year + 1)","Q_Score":0,"Tags":"python,pandas,datetime","A_Id":72295577,"CreationDate":"2022-05-18T20:06:00.000","Title":"Pandas rounding up to next year if above a certain date","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have installed miniforge on my mac , in that using a 'local env' on jupyter notebook. I can't change the numpy version (need to downgrade to 1.19.5) on this kernel, have tried:\n(1)pip install numpy==1.19.5 &\n(2)conda install -c conda-forge numpy=1.19.5.\nnumpy version seems to be changing easily on conda-ipython3 kernel, but my project is running on 'local env'\nvery new to all this, still learning. Please help","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":29,"Q_Id":72303328,"Users Score":0,"Answer":"first make sure that your local environment is activated by running: ...\/{your venv folder path}\/Scripts\/activate. Because if you install numpy on the wrong virtual environment then it won't work.\nThen uninstall numpy by running pip uninstall numpy. Then install the numpy version you want.","Q_Score":1,"Tags":"python,macos,numpy,pip,virtualenv","A_Id":72303828,"CreationDate":"2022-05-19T10:59:00.000","Title":"how to change numpy version with miniforge","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"How do you measure with features of your dataframe are important for your Kmeans model?\nI'm working with a dataframe that has 37 columns of which 33 columns are of categorical data.\nThese 33 data columns go through one-hot-encoding and now I have 400 columns.\nI want to see which columns have an impact on my model and which don't.\nIs there a method for this or do I loop this?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":29,"Q_Id":72305273,"Users Score":0,"Answer":"For categorical values there is K-Modes and for mixed (categorical and continuous values) there is K-Prototype. That might be worth trying and potentially easier to evaluate. 
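As a rough sketch with the third-party kmodes package (pip install kmodes; df and the cluster count are placeholders):\nfrom kmodes.kmodes import KModes\nkm = KModes(n_clusters=10, init=\"Huang\", n_init=5)\nlabels = km.fit_predict(df.astype(str).values)  # fit directly on the raw categorical columns\nFor mixed numeric and categorical data, KPrototypes from kmodes.kprototypes works the same way but takes a categorical= list of column indices in fit_predict.\n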
You wouldn't use one-hot encoding there though.","Q_Score":0,"Tags":"python,data-science,cluster-analysis,k-means,one-hot-encoding","A_Id":72305443,"CreationDate":"2022-05-19T13:16:00.000","Title":"Kmeans clustering measuring important features","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to create ML Model to identify if the Transaction is fraud or not.\nEach row represents one Transaction. I understand that this ML Model can be built. What the model will be missing is the behaviour when Multiple Transactions are done within short duration. How do I capture that behaviour? If 1st transaction for a card happens at 10 am and other transaction happens at 10.01 am then that Transaction is generally Fraud. But my model is missing that. Please help","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":16,"Q_Id":72310049,"Users Score":0,"Answer":"Add another column(s) to your data which is \"time since last transaction\" and\/or perhaps \"number of transactions in the previous n mins\". You could experiment on different values of n or even include multiple.\nThis seems like it would capture the information required? Hope that helps!","Q_Score":0,"Tags":"python,machine-learning,data-science","A_Id":72316024,"CreationDate":"2022-05-19T19:21:00.000","Title":"How to create Credit Card Fraud Detection model so that it captures dependency in the observations","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a data frame and around 10-12 columns. One of the column is the student number e.g. 1234567 and the other is an identifier e.g passport numbers, license number . How can I find that each student has a unique identifier. Like student 1234567 has identifier ABC5679K only. Also I want to store the students who are tagged with duplicate identifier. For e.g. If student 1234567 also has identifier ABC3408T, I want to know those.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":24,"Q_Id":72315550,"Users Score":0,"Answer":"df.groupby([\"student_name\"])[\"passport_number\"].nunique() > 1\nYou can use the groupby and nunique function to help you identify repeats. Hope this answer your question.","Q_Score":0,"Tags":"python,dataframe,duplicates,data-manipulation","A_Id":72315699,"CreationDate":"2022-05-20T08:07:00.000","Title":"How to find that each student number belongs to unique student?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"index\nvariable\nvalue\n\n\n\n\n0\nA\nup\n\n\n1\nA\ndown\n\n\n2\nA\nup\n\n\n3\nA\nup\n\n\n4\nB\ndown\n\n\n5\nB\nup\n\n\n6\nB\ndown\n\n\n7\nB\nup\n\n\n8\nC\nup\n\n\n9\nC\nup\n\n\n10\nC\ndown\n\n\n11\nC\ndown\n\n\n12\nD\nup\n\n\n13\nD\ndown\n\n\n14\nD\ndown\n\n\n15\nD\nup\n\n\n\n\nFor example, I want to draw a boxplot by using seaborn to show values in (variable =A and variable=B). 
How can I solve it?\nAS before, the row table contain attributes{\"variableA\", \"variableB\",\"variableC\",\"variableD\",\"value\"}\nso I can use :\nsns.boxplot(x=df[\"variableA\"],y=df[\"variableB],order=[\"up\",\"down\"])\nAnd now I got a melt table(tidy dataframe). How to draw the same picture?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":39,"Q_Id":72326649,"Users Score":0,"Answer":"You can select B and A from variable by\ndf.loc[(df.variable == \"A\") & (df.variable == \"B\")]\nThen Transpose the df by:\ndf_T = df.loc[(df.variable == \"A\") | (df.variable == \"B\")].T\nThen sns:\nsns.boxplot(x='variable', y = 'value', data = df_T)","Q_Score":0,"Tags":"python,dataframe,matplotlib,seaborn","A_Id":72326775,"CreationDate":"2022-05-21T04:26:00.000","Title":"How to use seaborn to draw boxplot by choosing specific row?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Numpy ndarray must have all elements of the same type and all sub-arrays on the same level must be of the same length. Those properties are also properties of C multidimensional arrays. Is it the case that numpy ndarray have those properties purely because it is implemented on top of C array? Are those properties really required to create a fast multidimensional array implementation?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":45,"Q_Id":72331363,"Users Score":3,"Answer":"Is it the case that numpy ndarray have those properties purely because it is implemented on top of C array?\n\nNo. Using C internally is not really what cause Numpy to make this choice. Indeed, Numpy array are just a raw contiguous memory buffer internally (allocated dynamically). Numpy does not actually make use of C array in its own implementation. It only uses C pointers. Views are Python objects referencing the buffer and olding some meta information like the strides, shape, type, etc. Python users always operate on view as raw buffers cannot be directly read\/written.\nIn fact, it is not very difficult to create jagged array in C (ie. arrays containing arrays of variable size), but one need to to that manually (ie. not directly supported by the standard). For 2D jagged array, this is typically done by allocating an array of T* items and then performing N allocation for the N sub-arrays. The sub-arrays are not guaranteed to be contiguously stored.\nThe point is jagged arrays are not efficient because of memory fragmentation\/diffusion and non-contiguity. Additionally, many features provided by Numpy would not be (efficiently) possible with jagged arrays. For example, creating sub-view for other view with a stride would be tricky. The operations working on multiple axis (eg. np.sum(2D_array, axis=0)) would have to be redefined so it make sense with jagged array. It would also make the implementation far more complex.\nAs a result, they choose not to implement jagged array but only ND-array. Please note that Numpy have been initially created for scientists and especially physicists which rarely need jagged array but care about high-performance. 
Jagged arrays can be implemented relatively efficiently by allocating 2 Numpy arrays: 1 array concatenating all lines and a slice-based array containing the start\/end offsets.\n\nAre those properties really required to create a fast multidimensional array implementation?\n\nHaving homogeneous types is critical for performance. Dynamic typing force a type check for each item which is expensive. Additionally, dynamic typing often requires an additional expensive indirection (eg. the array only store pointers\/references and not the object itself) and the access to the objects cause memory fragmentation\/diffusion. Such operations are very expensive compared to basic numerical ones like addition\/subtraction\/multiplication. Furthermore, the life cycle of the object must certainly be carefully controlled (eg. garbage collection) liek CPython does. In fact, a CPython list of list behave like that and is pretty inefficient. You can make Numpy array of objects that are Numpy arrays but this is also inefficient.\nAs for the rectangular arrays, it is dependent of the use-case, but this is at least critical for matrix multiplication and matrix-vector products (as BLAS operates on contiguous arrays possibly with a stride between lines), as well as operations not working on the most contiguous dimension (compilers can make more aggressive optimizations with a constant stride). Not to mention the additional overheads specified above (eg. additional checks and memory fragmentation\/diffusion).","Q_Score":3,"Tags":"python,c,numpy,multidimensional-array,numpy-ndarray","A_Id":72331924,"CreationDate":"2022-05-21T16:24:00.000","Title":"Is numpy ndarray homogeneous and rectangular (sub-arrays must be the same length) because it uses C array under the hood?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a dataset that is imbalanced and wanted to use techniques such as SMOTE, ADASYN etc, to balance it out.\nWould it be acceptable to use Doc2vec and then incorporate SMOTE to the training sample?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":11,"Q_Id":72337834,"Users Score":1,"Answer":"The best way to know if SMOTE (or some other augmentation technique) might help with your particular data, goals, & classification-algorithms is to try it, and see if it improves results on your relevant evaluations, compared to not using it.\nIt's \"acceptable\" if it works; there's no other outside\/1st-principles to judge its potential applicability, without trying it.","Q_Score":0,"Tags":"python,doc2vec","A_Id":72340778,"CreationDate":"2022-05-22T13:07:00.000","Title":"would it be possible to combine Balancing techniques with Doc2vec","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"AFAIK, unlike SMOTE, RandomUnderSampler selects a subset of the data. But I am not quite confident to use it for categorical data.\nSo, is it really applicable for categorical data?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":16,"Q_Id":72339926,"Users Score":0,"Answer":"Under\/Over sampling has nothing to do with features. 
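For example, a minimal sketch with imbalanced-learn (X stands for your feature table, categorical columns included, and y for the class labels):\nfrom imblearn.under_sampling import RandomUnderSampler\nX_res, y_res = RandomUnderSampler(random_state=0).fit_resample(X, y)  # whole rows are kept or dropped; the feature values themselves are never modified\n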
It relies on targets and under\/oversamples majority\/minority class, no matter whatheter it is composed of continuous variables, categorical ones, or elephants :)","Q_Score":0,"Tags":"python,machine-learning,classification,imbalanced-data","A_Id":72340545,"CreationDate":"2022-05-22T17:38:00.000","Title":"Can I use RandomUnderSampler for categorical data as well?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm training a text classifier for binary classification. In my training data, there are null values in the .csv file in the text portion, and there are also null values in my test file. I have converted both files to a dataframe (Pandas). This is a small percentage of the overall data (less than 0.01).\nKnowing this - is it better to replace the null text fields with an empty string or leave it as as empty? And if the answer is replace with empty string, is it \"acceptable\" to do the same for the test csv file before running it against the model?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":18,"Q_Id":72341819,"Users Score":0,"Answer":"IMO, if dropping the null values is not an option, you could replace the nulls with the most frequent words.\nJust make sure that you do this separately for each set, meaning what words are most frequent in the training set, and what words are most frequent in the test set, as they may differ.\nAnother option, is to replace the nulls with something like IGNORE TEXT.","Q_Score":0,"Tags":"python,sentiment-analysis,naivebayes","A_Id":72342084,"CreationDate":"2022-05-22T22:55:00.000","Title":"Machine Learning Question on missing values in training and test data","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have dataframe as follows:\n\n\n\n\nEmployee\nSalary\n\n\n\n\nTony\n50\n\n\nAlan\n45\n\n\nLee\n60\n\n\nDavid\n35\n\n\nSteve\n65\n\n\nPaul\n48\n\n\nMicky\n62\n\n\nGeorge\n80\n\n\nNigel\n64\n\n\nJohn\n42\n\n\n\n\nThe question is to identify:\n\nTop 30% gets a value \u201chigh\u201d\nThe next 40% gets \u201caverage\u201d\nthe Rest as \"Low\"\n-and put it in a new column as the corresponding value\n\nit would be easy to identify top N of them but top 30% is something I am unable to understand how to go about the %. Can anyone help me with python code for this??","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":36,"Q_Id":72344900,"Users Score":1,"Answer":"If you think about what a percentage actually is, it only shows the proportion of something. It depends on the amount of people in your list.\nTherefore, the top 30% can actually be translated into a number of people.\nAssume your data has N employees. 
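A minimal sketch of the head-count idea developed in the rest of this answer (the Employee and Salary values are taken from the question):

import pandas as pd

df = pd.DataFrame({"Employee": ["Tony", "Alan", "Lee", "David", "Steve",
                                "Paul", "Micky", "George", "Nigel", "John"],
                   "Salary": [50, 45, 60, 35, 65, 48, 62, 80, 64, 42]})

n = len(df)
n_high = int(round(0.30 * n))   # top 30% translated into a head-count
n_avg = int(round(0.40 * n))    # next 40%

df = df.sort_values("Salary", ascending=False).reset_index(drop=True)
df["Band"] = "Low"
df.loc[:n_high - 1, "Band"] = "High"
df.loc[n_high:n_high + n_avg - 1, "Band"] = "Average"
print(df)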
Taking the top 30% salaries is the same as taking the 30xN\/100 people that have the biggest wage.\nIf you order your data, then the only thing you actually have to do is set \"high\" for these 30xN\/100 people, \"average\" for the next 40xN\/100, and \"low\" for the rest.","Q_Score":0,"Tags":"python","A_Id":72345035,"CreationDate":"2022-05-23T07:43:00.000","Title":"How to identify top 30% salary in python dataframe","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm having terrible trouble with my Deep Learning projects. My google colab files mostly fail to save. The status shows\n\nSaving Changes...\n\nHowever, it never succeeds. After a while:\n\nAutomatic document saving has been pending for n minutes. Reloading\nmay fix the problem. Save and reload the page.\n\nReloading is not the remedy; after reloading, the problem is not solved. I really don't know what to do with it. Any ideas?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":25,"Q_Id":72376799,"Users Score":0,"Answer":"You can check disk space or permissions; sometimes Drive and Colab become unstable.\nAlso check whether it can be saved while the kernel is free and not busy.","Q_Score":0,"Tags":"python,keras,deep-learning,google-colaboratory","A_Id":72386661,"CreationDate":"2022-05-25T11:29:00.000","Title":"Google Colab IPython notebook files often fail to be saved","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm currently developing a clustering model and seeking something a bit novel.\nI have looked at an initial clustering into 5 clusters and then applying another run to cluster these into 2 each (so 10 total).\nResults are similar but definitely not the same as if I ran just once for 10 clusters, rather than 5 and then 2.\nIs there any obvious difference or benefit \/ drawback to such an approach? I cannot find much academia on this and potentially with good reason.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":18,"Q_Id":72377538,"Users Score":0,"Answer":"Think about generalizing your approach by searching for optimal numbers of clusters in both steps (combined) to minimize the number of clusters while maximizing your coverage. This is an objective where your method benefits.","Q_Score":0,"Tags":"python,machine-learning,cluster-analysis,k-means,hierarchical-clustering","A_Id":72501911,"CreationDate":"2022-05-25T12:19:00.000","Title":"Is there benefit in multiple levels of clustering?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a NumPy array of shape (4809, 200, 31) and I want to extract the following array out of it: shape (4809, 200, 1). 
so I want to extract one column from axis=2 from the source array which will have three-axis (0,1,2).","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":13,"Q_Id":72386394,"Users Score":0,"Answer":"Its as simple as doing\nA[:,:,0:1].\nThanks @Mark for the answer.","Q_Score":0,"Tags":"python,numpy,numpy-ndarray","A_Id":72386647,"CreationDate":"2022-05-26T03:32:00.000","Title":"Spliting a 3D numpy array on desired axis","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a small medical dataset (200 samples) that contains only 6 cases of the condition I am trying to predict using machine learning. So far, the dataset is not proving useful for predicting the target variable and is resulting in models with 0% recall and precision, probably due to the scarcity of the minority class.\nHowever, in order to learn from the dataset, I applied Feature Selection techniques to deduct what features are useful in predicting the target variable and see if this supports or contradicts previous literature on the matter.\nWhen I reran my models using the reduced dataset, this still resulted in 0% recall and precision. So the prediction performance has not improved using feature selection. But the features returned by the applying Feature Selection have given me more insight into the data.\nSo my question is, is the purpose of Feature Selection:\n\nto improve prediction performance\nor can the purpose be identifying relevant features in the prediction and learning more about the dataset\n\nSo in other words, is Feature Selection just a tool to achieve improved performance, or can it be an end in itself?\nThank you.","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":47,"Q_Id":72389648,"Users Score":3,"Answer":"In short, both answers are correct.\nFeature selection has two main purposes:\n\nIt reduces the number of features in the dataset. This reduces the model training time and reduces the chance of overfitting.\nIt helps you understand the data i.e. which features in the dataset are the most important.\n\nHence, I would not expect feature selection to help when training your model, unless you are overfitting the training data.","Q_Score":0,"Tags":"python,machine-learning,feature-selection,dimensionality-reduction","A_Id":72390973,"CreationDate":"2022-05-26T09:30:00.000","Title":"What is the main purpose of Feature Selection?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a small medical dataset (200 samples) that contains only 6 cases of the condition I am trying to predict using machine learning. So far, the dataset is not proving useful for predicting the target variable and is resulting in models with 0% recall and precision, probably due to the scarcity of the minority class.\nHowever, in order to learn from the dataset, I applied Feature Selection techniques to deduct what features are useful in predicting the target variable and see if this supports or contradicts previous literature on the matter.\nWhen I reran my models using the reduced dataset, this still resulted in 0% recall and precision. So the prediction performance has not improved using feature selection. 
But the features returned by the applying Feature Selection have given me more insight into the data.\nSo my question is, is the purpose of Feature Selection:\n\nto improve prediction performance\nor can the purpose be identifying relevant features in the prediction and learning more about the dataset\n\nSo in other words, is Feature Selection just a tool to achieve improved performance, or can it be an end in itself?\nThank you.","AnswerCount":2,"Available Count":2,"Score":0.1973753202,"is_accepted":false,"ViewCount":47,"Q_Id":72389648,"Users Score":2,"Answer":"Great answer from Tom. I will add another motivation: it helps the model with learning more from small datasets (which is one aspect of overfitting). In an ML task where you do not have a stretchable budget for more data points, feature selection can be one of your best tools.","Q_Score":0,"Tags":"python,machine-learning,feature-selection,dimensionality-reduction","A_Id":72391030,"CreationDate":"2022-05-26T09:30:00.000","Title":"What is the main purpose of Feature Selection?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"In sklearn.roc_curve the results returned are fpr, tpr and thresholds.\nDespite drop_intermediate set to False, the shapes of fpr, tpr and thresholds\nchange with random states.\nWhy is that?\nAs an example, I have:\n\ntest_labels and predicted_probabilities are (158,).\nfpr, tpr and thresholds are (149,), in another run they are (146,).","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":24,"Q_Id":72423299,"Users Score":2,"Answer":"the internal algorithm eliminates repeated scores from thresholds, so if you have repeated entries whose scores are exactly equal then they will be removed.","Q_Score":0,"Tags":"python,scikit-learn","A_Id":72423526,"CreationDate":"2022-05-29T11:40:00.000","Title":"Why does sklearn.roc_curve return varying shapes for fpr and tpr and thresholds with varying random states?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"In Python, I have a vector v of 300 elements and an array arr of 20k 300-dimensional vectors. 
How do I quickly get the indices of the k closest elements to v from the array arr?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":34,"Q_Id":72424998,"Users Score":0,"Answer":"Since 20k vectors of dimension 300 is still a fairly small amount of data, computing the distance to every vector, sorting, and just taking the first k is not an expensive operation (usually; it depends on how many thousand times per second you need to do this).\nSo sorted() is your friend; use the key= keyword argument, sorted_indices = sorted(range(len(arr)), key=\u2026), with a key function that returns the Euclidean distance between arr[i] and v.\nThen use the classic slicing syntax sorted_indices[:k] to select the first k.","Q_Score":0,"Tags":"python,arrays,numpy,scipy,euclidean-distance","A_Id":72425032,"CreationDate":"2022-05-29T15:33:00.000","Title":"Argmin with the Euclidean distance condition","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to solve an optimization problem with 4 variables.\nI have to give a constraint to scipy.optimize;\nthe constraint is x[1] < x[2] < x[3] < x[4].\nIs there any methodology to solve this problem in scipy.optimize?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":24,"Q_Id":72436763,"Users Score":0,"Answer":"You can do a variable transformation, for example,\ny[1] = x[1]\ny[2] = x[2]-x[1]\ny[3] = x[3]-x[2]\ny[4] = x[4]-x[3]\nThen you can use constraints like y[2] > 0, y[3] > 0, etc.","Q_Score":1,"Tags":"python,optimization,scipy,scipy-optimize,minimization","A_Id":72437783,"CreationDate":"2022-05-30T15:33:00.000","Title":"Scipy optimise minimize","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need help with the logic shown in the image below (the image itself is not reproduced here); I want to achieve it in Python and I am a newbie in Python.\nAny help is appreciated.","AnswerCount":2,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":53,"Q_Id":72447343,"Users Score":2,"Answer":"We can easily solve this problem by using recursion. The idea is to start from the top-left cell of the matrix and recur for the next node (the immediate right or immediate bottom cell), and keep doing that for every visited cell until the destination is reached. Also maintain a path array to store the nodes in the current path, and update the path array (including the current node) whenever any cell is visited. 
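A short sketch of that recursion (since the original image is not available, I assume a 3x3 grid of cell labels and only right/down moves):

def print_paths(grid, r=0, c=0, path=None):
    # Start at the top-left cell, recurse to the right and bottom neighbours,
    # and print the accumulated path once the bottom-right cell is reached.
    path = (path or []) + [grid[r][c]]
    rows, cols = len(grid), len(grid[0])
    if r == rows - 1 and c == cols - 1:
        print(path)
        return
    if c + 1 < cols:
        print_paths(grid, r, c + 1, path)
    if r + 1 < rows:
        print_paths(grid, r + 1, c, path)

grid = [[1, 2, 3],
        [4, 5, 6],
        [7, 8, 9]]
print_paths(grid)   # prints the 6 possible right/down paths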
Now, whenever the destination (bottom-right corner) is reached, print the path array.","Q_Score":0,"Tags":"python,python-3.x,dataframe,python-2.7","A_Id":72448119,"CreationDate":"2022-05-31T11:55:00.000","Title":"Want algorithm to find shortest path in grid which 3*3 dimension","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have been using monotonically_increasing_id() for a long time and I just discovered a weird behaviour, I need explanations please.\nSo I have a df with 20 000 lines.\nI added an Id column:\nval df2 = df.withColumn(\"Id\", monotonically_increasing_id().cast(\"int\"))\nAnd surprise, I didnt get monotonically increasing ids, I found Id=1 on 5 different rows, Id=2 on 2 rows ....\nSo I thought maybe it was because of Spark distributing my dataframe, to be sure I did the following:\nval df2 = df.coalesce(1).withColumn(\"Id\", monotonically_increasing_id().cast(\"int\"))\nAnd the weird behaviour disappeared.\nAre my thoughts right ? Doesn't monotonically_increasing_id() manage dataframes repartitions automatically ?\nWhy didn't I encounter this behaviour previously, I always worked with much bigger dataframes and never did I have this error.\nThanks","AnswerCount":1,"Available Count":1,"Score":-0.1973753202,"is_accepted":false,"ViewCount":45,"Q_Id":72449657,"Users Score":-1,"Answer":"This may have to do with integer overflow as monotonically_increasing_id returns a long datatype.","Q_Score":1,"Tags":"python,dataframe,apache-spark","A_Id":72449836,"CreationDate":"2022-05-31T14:28:00.000","Title":"monotonically_increasing_id function behaviour explanation in Spark","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I had run the following OneHotEncoder code on jupyter notebook, it's fine:\nColumnTransformer(transformers=[('col_tnf',OneHotEncoder(sparse=False,drop='first'),0,1,3,8,11])],remainder='passthrough')\nIt's running and gives the output,\nwhile the same, I am running using PyCharm as a Streamlit app, its throwing error as\nAttributeError: 'OneHotEncoder' object has no attribute '_infrequent_enabled'","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":38,"Q_Id":72451591,"Users Score":0,"Answer":"Issue is resolve. You need to update the sklearn version same as the version you are using in jupyter notebook","Q_Score":2,"Tags":"python,scikit-learn,pycharm,streamlit","A_Id":72451728,"CreationDate":"2022-05-31T16:57:00.000","Title":"steamlit results in 'AttributeError: 'OneHotEncoder' object has no attribute '_infrequent_enabled''","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"i have a few questions regarding H2O AI. As per my understanding, h2o AI powers Auto ML functionality. but need to integrate my own python jupyetr ML model. 
so my questions are,\n\nCan we use H2O AI without Auto ML and with our own python jupyter ML algorithm?\nIf yes, can we integrate that own manual scripted ML with Snowflake?\nIf we can integrate our own scripted ml algorithm with snowflake, what are the advantages of doing it that way? instead of an own manually-created python ML algorithm?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":30,"Q_Id":72456723,"Users Score":1,"Answer":"H2O.ai offers a bunch of ML solutions: h2o-3, driverless ai, hydrogen torch to name the main ones.\nDriverless AI is AutoML driven, the user has, however, an option to provide a custom recipe (in Python) to customize it. Driverless AI has Snowflake integration.\nH2O-3 is a framework that implements a collection of popular ML algorithms. H2O-3 also integrates an AutoML solution utilizing the built-in algos. There is no option to integrate a 3rd party solution into H2O-3 AutoML and to extend H2O-3 algos other than by coding in Java (small Python customizations can be made by providing eg. custom loss function in GBM).","Q_Score":0,"Tags":"python,snowflake-cloud-data-platform,h2o,automl,h2o.ai","A_Id":72463879,"CreationDate":"2022-06-01T05:17:00.000","Title":"H2O AI with own python machine learning model integration with snowflake","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Can anyone help with this error please:\nScripts\\gw_kd_tree.py\", line 89, in build_kdtree\nnode = KDTreeNode [(point=point_list[median], left=build_kdtree(point_list[0:median], depth+1), right=build_kdtree(point_list[median+1:], depth+1))]\nTypeError: list indices must be integers or slices, not float","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":20,"Q_Id":72479003,"Users Score":0,"Answer":"The only variable used to index is median thus the error probably triggering because median was calculated to be a float. This is unusable for indexing because you can't grab position 4.5 in a list, there is no half position in a list.\nTo solve this all you have to do is put int(median) wherever it appears or cast it to an int before this line","Q_Score":0,"Tags":"python,scripting,maya","A_Id":72479037,"CreationDate":"2022-06-02T15:53:00.000","Title":"# TypeError: list indices must be integers or slices, not float #","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to install tensorflow, to do so I use the following:\nconda install -c conda-forge\/label\/cf201901 tensorflow\nHowever when I import tensorflow the following error raises up: ModuleNotFoundError: No module named 'tensorflow.python.tools'. I took a look at other questions here but the solutions didn't work for me. 
Can you help?\nI'm using python 3.7.1 and conda 4.12.0","AnswerCount":2,"Available Count":1,"Score":0.2913126125,"is_accepted":false,"ViewCount":25,"Q_Id":72486882,"Users Score":3,"Answer":"By default Tensorflow will be installed on GPU.\nTo install on CPU run this command pip install tensorflow-cpu\nIf that doesn't work try pip install tensorflow\nIf you are using anaconda environment, you can try conda install tensorflow\nI hope this will help you.","Q_Score":0,"Tags":"python,tensorflow,anaconda,conda","A_Id":72486963,"CreationDate":"2022-06-03T08:36:00.000","Title":"tensorflow: ModuleNotFoundError: No module named 'tensorflow.python.tools'?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Am building a model using K nearest neighbours. The model predicts if someone has cancer or not. 1 for true, 0 for false. I need the model other than predicting presence of cancer or not giving a 0 or 1,how can i make the model also show the probability of the prediction being 1?\nEdit:Am doing a project and it specifies i use the K nearest Neighbour classifier","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":36,"Q_Id":72487146,"Users Score":0,"Answer":"You must use a regressor instead of a classifier, meaning that a regression model can give you a probability of someone having a concern or not and the probability between the two values of 0 and 1, 0~1 (0~100%).","Q_Score":0,"Tags":"python,pandas,numpy,scikit-learn","A_Id":72487230,"CreationDate":"2022-06-03T08:58:00.000","Title":"How to make a model predict probability","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a half precision input that may be an array scalar x = np.float32(3). I'd like to add one to it: y = x + 1. However y will be float64 instead of float32.\nI can only think of two workarounds:\n\nconvert the input to 1d array: x = np.float32([3]) so that y = x + 1 is float32\nconvert 1 into lower precision: y = np.float32(3) + np.float16(1) is float32\n\nHowever, I have a lot of functions, so the above fixes require me to add if-else statements to each function... Are there any better ways? Thanks!","AnswerCount":3,"Available Count":1,"Score":-0.0665680765,"is_accepted":false,"ViewCount":24,"Q_Id":72490315,"Users Score":-1,"Answer":"0x5 \"Adding Integer to half-float not producing the expected result\" Why is half the size?\n0x6a0100 \"float64 cannot be cast to numpy.complex64\" in ufuncs. Numpy should have known\nWe are going through a type conversion uncertainty since numpy 1.13. It was discussed in 0x67 \"Quick fix for integer operation with half dtype in NumPy\". A decision was made to resolve as follows: \"compatibility with Matlab, always convert to float16 before operation\".\nThe bug reported in 0x6e \"sum(a) where a = float32(1) is float64\" backtracked that decision, but without a clear understanding that:\nThe issue is with how datatypes propagate through scalar inputs. That's a bigger issue than just summing. Mixing scalars with arrays is always a gray area, as you experienced. 
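One practical way to stay out of that gray area is to cast the plain Python number to the scalar's own dtype before operating, which is essentially the asker's second workaround wrapped in a helper (a sketch of mine, not something prescribed by NumPy):

import numpy as np

def add_keep_dtype(x, value):
    # Convert the Python number to x's dtype first so the result keeps
    # that precision instead of being upcast.
    return x + np.asarray(value, dtype=x.dtype)

print(add_keep_dtype(np.float16(3), 1).dtype)   # float16
print(add_keep_dtype(np.float32(3), 1).dtype)   # float32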
In some contexts (deconte abd deduce) such a mix should raise, but there is no consensus how np should handle them (see 0x75 \"Array scalar artifact at a ufunc boundary\"). Until that's resolved..\nMatlab's upcasting, because it does it to 16, is not a good one for numpy. That upcasting is especially problematic for product, and might be the reason why numpy issues sometimes suggest that, but \"matlab doesn't need to be revised because mathematicians are used to this surprise\", which also means matlab is used by these mathematicians with warnings, and \"doesn't need to be revised because C was defined this way\", which also means C is used on floats as if they are integers to avoid the surprise.","Q_Score":1,"Tags":"python,numpy","A_Id":72490317,"CreationDate":"2022-06-03T13:25:00.000","Title":"numpy: how to keep datatype of half precision array scalar input when adding a number","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I hope this isn\u2019t off topic, I am not really sure which forum to use for a question like this:\nI have a series of datapoints of about an hour in time from a sensor that retrieves data 20 times per second. Along with it I receive timestamps of a periodic event in this data in the format of %Y-%m-d %H:%M:%S.%f, which looks e.g. like this 2019-05-23 17:50:34.346000.\nI now created a method to calculate these periodic events myself and was wondering how I could evalute my methods accuracy. My calculations are sometimes bigger and sometimes smaller by a few milliseconds compared to the actual timestamp. But when I run my own calculated timestamp against the actual timestamp by using pythons scipy.stats.pearsonr(x,y) method, I always receive a correlation of nearly 1. I assume that\u2018s because these small differences in the order of millisenconds don\u2018t seem relevant in an hour of data. But how could I evaluate the accuracy of two timestamps a reasonable way? Are there better metrics to use than the correlation?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":11,"Q_Id":72490580,"Users Score":1,"Answer":"It seems that you are trying to compute a linear statistical correlation (pearson) for something that is, by nature, a timeseries data. This will not tell you much and drawing a conclusion based on the results is dangerous.\nIt so happens that your two vectors x and y are growing linearly in the same direction which is not surprising given that they are timestamps.\nLet's take an example for stationary data and time series data:\nTime series data:\nYour sensor starts giving measurements at time t1 and continues to do so until time t2 is reached. You compute the periodic event's timestamp using your own method then compare it to the actual timestamp. However, there is no reliable way using linear statistical correlations to see if the two are related and how related are they.\nStationary data :\nNow consider the same sensor giving measurements, but now instead of computing your periodic events all at once, take a single event and compute it multiple times using your empirical data using different measurements (so forget about any notion of time at this point (i.e. repeat the measurement multiple times). The result can be averaged and an error on the mean can be computed (see info on standard error). 
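A tiny sketch of that "average plus error on the mean" step (the repeated offsets below are made-up numbers):

import numpy as np

# Repeated estimates (in seconds) of the timing error for one single event.
repeats = np.array([0.012, 0.015, 0.011, 0.014, 0.013, 0.016])

mean = repeats.mean()
sem = repeats.std(ddof=1) / np.sqrt(len(repeats))   # standard error of the mean
print(f"offset = {mean:.4f} s +/- {sem:.4f} s")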
This can now be compared to your single event. Based on the error, you get a rough feel for how good or bad your method is.\nI would recommend the following:\n\nYou have your ground truth answer (say, the periodic event) y_truth. You compute a vector of the periodic events based on your sensor and your own method, mapped as a function f(sensor_input) = y_measured\n\nNow you have two vectors, one measured and one that is ground truth. In each of those vectors, you have an indicator of the periodic events such as an id. I would repeat the whole set of measurements, on all id's, tens of times.\n\nFor each 'id' I would compute whatever measurement you are looking for (either a timestamp or time in seconds or whatever...) then I would subtract the two timestamps: |y_truth - y_measured|. These are called residuals, in other words your error.\n\nNow averaging all the residuals of all the id's gives you something called mean absolute error (1\/n * sum(|y_truth - y_measured|)), which you can very confidently use to report how much error, in a unit of time (seconds for example), your method produces.","Q_Score":0,"Tags":"python,statistics,timestamp,correlation","A_Id":72491026,"CreationDate":"2022-06-03T13:47:00.000","Title":"Correlation between timestamps","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to train a model and I've changed the runtime to include GPU (all my work is on Colab). I'm printing the predicted output of my model to make sure it's working properly; at first it was predicting the output just fine, however after the runtime disconnected once, it started predicting '0's and has been ever since. I've tried changing accounts, using VPNs, changing runtime types but without an accelerator it predicts the output once then proceeds with 'nan'. Am I missing some sort of restriction to Colab's GPU usage besides the 12 hour limit?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":8,"Q_Id":72499545,"Users Score":0,"Answer":"It's likely that some sort of training instability, for example caused by invalid data, an edge case in your data ingestion code or loss function, or exploding gradients, has caused some of your model weights to become NaN.","Q_Score":0,"Tags":"python,machine-learning,neural-network,computer-vision,artificial-intelligence","A_Id":72500593,"CreationDate":"2022-06-04T11:57:00.000","Title":"Does Colab's GPU predict '0's after using the resources over a long period of time?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is it possible to use matplotlib.pyplot.loglog with log binning?","AnswerCount":1,"Available Count":1,"Score":-0.1973753202,"is_accepted":false,"ViewCount":25,"Q_Id":72503111,"Users Score":-1,"Answer":"Maybe use the functions set_xscale() or set_yscale(), and semilogx() or semilogy(). 
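For the log-binning part of the question, a hedged sketch (using np.logspace for the bin edges is my assumption, not something stated in this answer):

import numpy as np
import matplotlib.pyplot as plt

data = np.random.lognormal(mean=0.0, sigma=1.5, size=10_000)

# Logarithmically spaced bin edges, then log scales on both axes.
bins = np.logspace(np.log10(data.min()), np.log10(data.max()), 30)
plt.hist(data, bins=bins)
plt.xscale("log")
plt.yscale("log")
plt.show()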
If you have to set both axes in the logarithmic scale, we use the function loglog().","Q_Score":0,"Tags":"python,numpy,matplotlib","A_Id":72503160,"CreationDate":"2022-06-04T20:59:00.000","Title":"How to have logarithmic bins in a Python loglog plot","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Images are represented as matrices. Is there a practical way to make sort of frame around the content of the image? (in a monoton color)","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":28,"Q_Id":72520178,"Users Score":0,"Answer":"Theres a lot of ways to do that. I think the easiest way is just to add an image with everything transparent except for the borders and then draw it on top of the screen every frame.","Q_Score":0,"Tags":"python,image","A_Id":72521380,"CreationDate":"2022-06-06T15:44:00.000","Title":"Make a frame to an image in python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am new to machine learning, but I have decent experience in python. I am faced with a problem: I need to find a machine learning model that would work well to predict the speed of a boat given current environmental and physical conditions. I have looked into Scikit-Learn, Pytorch, and Tensorflow, but I am having trouble finding information on what type of model I should use. I am almost certain that linear regression models would be useless for this task. I have been told that non-parametric regression models would be ideal for this, but I am unable to find many in the Scikit Library. Should I be trying to use regression models at all, or should I be looking more into Neural Networks? I'm open to any suggestions, thanks in advance.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":48,"Q_Id":72533096,"Users Score":0,"Answer":"I think multi-linear regression model would work well for your case. I am assuming that the input data is just a bunch of environmental parameters and you have a boat speed corresponding to that. For such problems, regression usually works well. 
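A minimal multi-linear regression sketch with scikit-learn (the three environmental features and the synthetic data are placeholders, not from the question):

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# Pretend columns: wind speed, wave height, engine rpm -> boat speed.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = 2.0 * X[:, 0] - 0.5 * X[:, 1] + 0.1 * X[:, 2] + rng.normal(scale=0.1, size=500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LinearRegression().fit(X_train, y_train)
print("held-out R^2:", model.score(X_test, y_test))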
I would not recommend you to use neural networks unless you have a lot of training data and the size of one input data is also quite big.","Q_Score":0,"Tags":"python,machine-learning,data-science,artificial-intelligence","A_Id":72533499,"CreationDate":"2022-06-07T14:35:00.000","Title":"Suggestions for nonparametric machine learning models","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Trying to convert the batch normalization layer from Tensorlayer version 1.11.1 to Tensorflow 2 and getting different outputs from this layer during inference using the same pretrained model.\nTensorlayer 1.11.1\ntensorlayer.layers.BatchNormLayer(network, is_train=False, name=\"batch_norm\") \nTensorflow 2.8.0\ntf.keras.layers.BatchNormalization(trainable=False, momentum=0.9, axis=3, epsilon=1e-05, gamma_initializer=tf.random_normal_initializer(mean=1.0, stdev=0.002))(network)\nWhat am I missing to get the BatchNorm output to match?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":130,"Q_Id":72552100,"Users Score":0,"Answer":"The TF1 model I had was in NPZ format.\nThe weights from Tensorlayer are saved in the order of:\nbeta, gamma, moving mean, variance.\nIn TF2, the batch norm layer is in the order of:\ngamma, beta, moving mean, variance.\nIf the order of the weights for beta and gamma are reversed when moving from TF1 to TF2 it solves the issue.","Q_Score":0,"Tags":"python,tensorflow,keras","A_Id":72973077,"CreationDate":"2022-06-08T20:56:00.000","Title":"Convert batch normalization from Tensorlayer tf1.x to TF2 keras","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Originally, my input dataset had blank spaces. But I have cleaned it, and checked with:\ndf.isnull().sum()\nAnd everthing is 0.\nNow, after fitting my dataset into the LinearRegression model and about to make predictions, it's bringing the above error.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":65,"Q_Id":72552953,"Users Score":1,"Answer":"Since you did mention that the error is happening during the prediction time, I would suggest that you make the testing data go through the same pipeline as the training data.\nFor example:\nraw training input -> preprocessing -> training input\nIt is necessary the test data also goes through the same preprocessing.","Q_Score":0,"Tags":"python,pandas,numpy,data-science,linear-regression","A_Id":72553342,"CreationDate":"2022-06-08T22:41:00.000","Title":"Input contains NaN, infinity or a value too large for dtype('float64') LinearRegression: but there are no empty values","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm generating PSF-free images, so no atmosphere and no diffraction, and the images I'm getting out have stars in \"quantized\" positions. I'm wondering if there is an option in GalSim to prevent this, i.e. to have a more sinc-like distribution of the photons, so the behaviour of photons landing somewhere between pixels is taken into account. 
If there isn't an option for this, I suppose I would need to create my own sinc-function PSF and implement it around the drawImage() step?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":20,"Q_Id":72553128,"Users Score":1,"Answer":"Stars are inherently supposed to look like point sources if you don't have any PSF at all (no atmosphere, no diffraction). They are a delta function in that case, so all of the photons should fall into a single pixel. GalSim is doing exactly what you are asking it to do.\nIt sounds like you actually do want to have a PSF; I suggest using the galsim.Airy class, representing a diffraction-limited PSF.","Q_Score":0,"Tags":"python,simulation,galsim","A_Id":72589220,"CreationDate":"2022-06-08T23:11:00.000","Title":"Is there a simple way to prevent GalSim from shooting all the photons from a star into a single pixel?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to run a Python script in my terminal (mac) that takes a csv file as input. At the beginning of the Pyton script, a package named cvxpy is imported. When running the code with data in the terminal I get the error:\nImportError: No module named cvxpy.\nI'm feeling it's a directory fault, but I don't know how to fix this (e.g. how to get the Python script and python packaga in the same directory)\nSomebody got a clue?\nThanks.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":59,"Q_Id":72560729,"Users Score":0,"Answer":"You need to have the module installed.\nTo install it, type : pip3 install cvxpy\nIf you already have it installed, please double check by typing pip3 list","Q_Score":0,"Tags":"python,terminal,package","A_Id":72560782,"CreationDate":"2022-06-09T12:58:00.000","Title":"Import python package from terminal","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I run into this error in google colabs running cells.\nfrom sklearn.feature_extraction.text import TfidVectorizer\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn import preprocessing\nimport pandas as pd\nimport json\nimport pickle\nImportError Traceback (most recent call last)\n in ()\n----> 1 from sklearn.feature_extraction.text import TfidVectorizer\nImportError: cannot import name 'TfidVectorizer' from 'sklearn.feature_extraction.text' (\/usr\/local\/lib\/python3.7\/dist-packages\/sklearn\/feature_extraction\/text.py)","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":580,"Q_Id":72562377,"Users Score":0,"Answer":"You have it misspelled. 
Try from sklearn.feature_extraction.text import TfidfVectorizer","Q_Score":0,"Tags":"python,scikit-learn","A_Id":72562436,"CreationDate":"2022-06-09T14:47:00.000","Title":"ImportError: cannot import name 'TfidVectorizer' from 'sklearn.feature_extraction.text'","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"When preforming image co-registration of multiple subjects, how should we select the reference image?\n\nCan a randomly selected image form one dataset could be the reference image for an image from the other dataset?\nIf we do that, should all the images belonging to the reference image dataset be co-registered with the reference image as well?\n\nI couldn't find any material in this area. Could someone please advice?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":92,"Q_Id":72566603,"Users Score":1,"Answer":"I'm not sure exactly what you mean by the term \"dataset\", but I will assume you are asking about co-registering multiple images from different patients (i.e. multiple 3D images per subject).\nTo answer your questions:\n\nIf there are no obvious choices about which image is best, then a random choice is fine. If you have e.g. a CT and an MRI for each subject, then co-registration using the CT images is likely going to give you better results because of intrinsic image characteristics (e.g. less distortion, image value linked to physical quantity).\nI suppose that depends on what you want to do, but if it is important to have all imaging data in the same co-registered reference space then yes.\n\nAnother option is to try and generate an average image, and then use that as a reference to register other images to. Without more information about what you are trying to achieve it's hard to give any more specific advice.","Q_Score":1,"Tags":"python-3.x,registration,medical-imaging,simpleitk","A_Id":72654397,"CreationDate":"2022-06-09T21:02:00.000","Title":"3D Image co-registration between multiple subjects","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a datatable\nDataTable(columns=columns, editable=True, selectable=True, autosize_mode='fit_columns', visible=False, height_policy='fit', index_position=None, width=60, margin=(5,5,5,0)).\nEven when it only has a few rows and everything is visible, when I try to edit the (only) column, a horizontal and vertical scroll bar appear. 
How can I get rid of both of them, especially the vertical one?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":89,"Q_Id":72570789,"Users Score":0,"Answer":"I probably cheated, but when I put non-editable columns next to the editable column then no scroll bars appear.","Q_Score":0,"Tags":"python,dataframe,datatable,scrollbar,bokeh","A_Id":72683863,"CreationDate":"2022-06-10T07:49:00.000","Title":"Why does a vertical scroll bar appear when I edit a datatable in bokeh?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm new at coding and feel like to really understand it, I have to truly grasp the concepts.\nQuality of life edit:\nWhy do we do df[df['col a'] == x] INSTEAD of df['col a'] == x when making a search? I understand that with the second expression I would be looking at the values in the column that equal x, but I'd love to know what wrapping it in df[...] does for the code.\nI would love to know the difference between those two and what I am actually doing when I nest the column inside df[...].\nany help is appreciated thank you so much!","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":348,"Q_Id":72576630,"Users Score":0,"Answer":"So we use df[df['col a'] == x] instead of just df['col a'] == x because the inner expression df['col a'] == x is essentially asking for a boolean Series: True or False for each row, depending on whether the condition is met (you can try this on your df and you will see that when you do not put it inside df[...], it only lists df['col a'] == x as a series of True and False). Pandas first asks \"what are you asking for?\", then answers \"here is a series of True\/False based on what you asked\", and finally, when you wrap it in df[...], \"here is the dataframe restricted to only the rows where that series is True\".\nDoes that help clear up what it is doing? Basically it is just pandas trying to be as efficient as possible. Also, as you learn more, you can combine multiple conditions, df[(df['col a'] == x) & (df['col b'] == y)], which would be hard to write and keep together if you only used df['col a'] for your search","Q_Score":0,"Tags":"python,pandas,list,nested","A_Id":72576693,"CreationDate":"2022-06-10T15:22:00.000","Title":"Difference between df[df['col a']] and df['col a']?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"So I have an excel sheet that contains in this order:\nSample_name | column data | column data2 | column data ... n\nI also have a .txt file that contains\nSample_name\nWhat I want to do is filter the excel file for only the sample names contained in the .txt file. My current idea is to go through each column (excel sheet) and see if it matches any name in the .txt file, if it does, then grab the whole column. However, this seems like an inefficient way to do it. I also need to do this using python. I was hoping someone could give me an idea on how to approach this better. 
Thank you very much.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":59,"Q_Id":72579505,"Users Score":1,"Answer":"Excel PowerQuery should do the trick:\n\nLoad .txt file as a table (list)\nLoad sheet with the data columns as another table\nMerge (e.g. Left join) first table with second table\nOptional: adjust\/select the columns to be included or excluded in the resulting table\n\nIn Python with Pandas\u2019 data frames the same can be accomplished (joining 2 data frames)\nP.S. Pandas supports loading CSV files and txt files (as a variant of CSV) into a data frame","Q_Score":0,"Tags":"python,excel","A_Id":72580893,"CreationDate":"2022-06-10T20:25:00.000","Title":"how to filter a .csv\/.txt file using a list from another .txt","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to do some clustering using the alghorithm k-means but I'm getting this error:ValueError: could not convert string to float: 'M'.\nI think this happens because my variable is categorical one and clustering only allows continuous variables.\nWhat should I do to the variable to make it continuous. Converting it using a dictionary is not a good idea because it makes no sense to say that M>F for example.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":538,"Q_Id":72595880,"Users Score":0,"Answer":"K-means clustering is going to need numbers in order to compute the centers of the clusters in the space defined by the variables. You can just decide to define M as 0 and F as 1, or the opposite.\nHere M being greater than F or the opposite doesn't really matter as long as it gives the opportunity for the algorithm to separate the different data points in space in order to cluster them.\nHowever, if the clusters that are being looked for are not supposed to be subgroups of the different genders, there are going to be some problems with the fact of trying to use this feature and I would advise to only use continuous variables in that case.","Q_Score":0,"Tags":"python,dataframe,k-means,hierarchical-clustering","A_Id":72595973,"CreationDate":"2022-06-12T21:07:00.000","Title":"K-means clustering: ValueError: could not convert string to float","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm not quite sure how to phrase this question, so let me illustrate with an example.\nLet's say you have a Pandas dataframe called store_df with a column called STORE_NUMBER. There are two ways to access a given column in a Pandas dataframe:\nstore_df['STORE_NUMBER']\nand\nstore_df.STORE_NUMBER\nNow let's say that you have a variable called column_name which contains the name of a column in store_df as a string. If you run\nstore_df[column_name]\nAll is well. But if you try to run\nstore_df.column_name\nPython throws an AttributeError because it is looking for a literal column named \"column_name\" which doesn't exist in our hypothetical dataframe.\nMy question is: Is there a way to look up columns dynamically using second syntax (dot notation)? 
Not so much because there is anything wrong with the first syntax (list notation), but because I am curious if there is some advanced feature of Python that allows users to replace variable names with their value as another variable (in this case a state variable of the dataframe). I know there is the exec function but I was wondering if there was a more elegant solution. I tried\nstore_df.{column_name}\nbut received a SyntaxError.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":63,"Q_Id":72607729,"Users Score":0,"Answer":"Would getattr(df, 'column_name_as_str') be the kind of thing you're looking for, perhaps?","Q_Score":0,"Tags":"python,pandas","A_Id":72607788,"CreationDate":"2022-06-13T18:50:00.000","Title":"Replacing variable name with literal value in Python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Can you please help me understanding how KNN regressor works:\n\nHow does KNN look for 5 nearest neighbours, when there are several predictors?\nDoes it look for K nearest neighbours for each predictor separately and then somehow combines the results together? If so, then why wouldn't it be possible for example to look for K1 neighbours on predictor P1, but K2 predictors on predictor P2 etc...Why is it \"K\" rather than an \"array of Ks\", where the length of the array equals the number of predictors?\n\nKNN is sensitive to the scale of the predictors, therefore MinMaxScaler is recommended (Python) to be used. Does it mean, that essentially I can leverage this property to my benefit, for example by increasing the scale of certain predictor that I want KNN to give a priority to.\n\n\nThank you","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":28,"Q_Id":72609174,"Users Score":0,"Answer":"kNN would in the case of multiple predictors look at the Euclidian distance between vectors in the predictor space. E.g., if you have three predictors x1, x2, and x3, all data points will be a point in the 3-dimensional space. To measure the distance you simply compute $d=\\sqrt{(p_1-x_1)^2+(p_2-x_2)^2+(p_3-x_3)^2}$, and use that to find the neighbors.\n\nYou can definitely influence the distance measurements by scaling differently. However, this should probably be done with some care and I would use something like cross-validation to make sure the assumptions work as expected.\n\n\nHope this helps!","Q_Score":0,"Tags":"python,artificial-intelligence,knn","A_Id":74070899,"CreationDate":"2022-06-13T21:21:00.000","Title":"KNN regressor algorithm when there are several predictors","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"how to install numpy and pandas in python 2.7.9 version using command line.\nplease help with complete installing process.\nI have tried in windows 10 OS , but it's not installed . 
Showing syntax error.\nI have used command in python 2.7.9 as below\npip install numpy \/ python -m pip install numpy\npip install pandas \/ python -m pip install pandas","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":233,"Q_Id":72613168,"Users Score":0,"Answer":"First I recommend you upgrade Python Package otherwise try to find which NumPy version supports python 2.7.9 then install like this pip install numpy==1.9.2\n[https:\/\/stackoverflow.com\/questions\/28947345\/which-numpy-library-is-compatible-with-python-2-7-9]","Q_Score":0,"Tags":"python","A_Id":72613441,"CreationDate":"2022-06-14T07:41:00.000","Title":"how to install numpy and pandas in python 2.7.9 version","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a dataframe of patient records as rows and various features as columns.\nI'd like to be able to search through this dataframe based on one feature and display the rows that correspond to that search in a Dash DataTable.\nWhat's the best way to do this? I understand DataTables have their own native filtering function, but is it possible to filter based on a user-entered value from an input field or dropdown value?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":682,"Q_Id":72623726,"Users Score":0,"Answer":"Yes, completely doable. You need a callback for that. It will take as input the value of the dropdown, or input field, and its output will be the data prop for the table component. Inside the callback, you can filter your dataframe based on the input value.\nDepending on where you load the data for the table from, you may want to put the original, unfiltered data in something like a dcc.Store and pass that to your callback as a state value, to avoid having to make network or database calls repeatedly.","Q_Score":0,"Tags":"python,dataframe,datatable,plotly-dash","A_Id":72625009,"CreationDate":"2022-06-14T21:46:00.000","Title":"Dash - search and display results in datatable","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have to plot a large amount of data in python (a list of size 3 million) any method\/libraries to plot them easily since matplotlib does not seem to work.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":92,"Q_Id":72623783,"Users Score":0,"Answer":"I use quite intensively matplotlib in order to plot arrays of size n > 10**6.\nYou can use plt.xscale('log') which allow you to display your results.\nFurthermore, if your dataset shows great disparity in value, you can use plt.yscale('log') in order to plot them nicely if you use the plt.plot() function.\nIf not (ie you use imshow, hist2d and so on) you can write this in your preamble :\nfrom matplotlib.colors import LogNorm and just declare the optional argument norm = LogNorm().\nOne last thing : you shouldn't use numpy.loadtxt if the size of the text file is greater than your available RAM. In that case, the best option is to read the file line by line, even if it take more time. 
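A small sketch of the line-by-line reading plus log-scale plotting described here (data.txt is a hypothetical single-column file):

import numpy as np
import matplotlib.pyplot as plt

values = []
with open("data.txt") as fh:      # read line by line instead of np.loadtxt
    for line in fh:
        values.append(float(line))
y = np.asarray(values)

plt.plot(y, ",")                  # pixel markers keep millions of points drawable
plt.yscale("log")                 # log scale if the values span many magnitudes
plt.show()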
You can speed up the process with from numba import jit and declare @jit(nopython=True, parallel =True) .\nWith that in mind, you should be able to plot in a reasonably short time array of size of about ten millions.","Q_Score":0,"Tags":"python,matplotlib,plot","A_Id":72624406,"CreationDate":"2022-06-14T21:52:00.000","Title":"Easy way for plotting large amount of data in python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am training a CNN model to classify simple images (squares and crosses) and everything works just fine when I use the cpu but when I use the gpu everything works until the training starts and i get this error:\n2022-06-15 04:25:49.158944: I tensorflow\/stream_executor\/cuda\/cuda_dnn.cc:384] Loaded cuDNN version 8401\nAnd then the program just stops.\nDoes anyone have an idea how to fix this?","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":135,"Q_Id":72625342,"Users Score":2,"Answer":"if you use pycharm, you can select the \"Emulate terminal in output console\" option to print detailed error information.\nRun->Edit Configration->Execution ->Emulate terminal in output console\nOn windows, maybe CUDA is missing zlibwapi.dll file, and you can download it and move it to bin of cuda.","Q_Score":1,"Tags":"python,tensorflow,keras","A_Id":73670094,"CreationDate":"2022-06-15T02:35:00.000","Title":"Training CNN model using keras works with CPU but not with GPU","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am facing some difficulties using merge function in Pandas. I am looking for some kind of Vlookup formula to assist me on this. However, I couldn't solve my problem.\nMy data is huge and I couldn't share here due to confidential. However, I try to came up with similar data here.\n\n\n\n\nOld Code\nNew Code\nName\nInvoice Date\n\n\n\n\n1001011\nNA\nCheese Cake\n02\/02\/2021\n\n\n1001012\nNA\nCoffee\n03\/05\/2021\n\n\n1001011\nNA\nCheese Cake\n30\/05\/2021\n\n\nNA\n2002093\nJasmine Tea\n21\/08\/2021\n\n\nNA\n2002042\nCookies\n31\/12\/2021\n\n\nNA\n2002080\nCoffee\n09\/01\/2022\n\n\nNA\n2002093\nJasmine Tea\n05\/05\/2022\n\n\nNA\n2002058\nCheese Cake\n07\/06\/2022\n\n\n\n\nI would like to have a COST Column input in my table above. However, the cost is very by invoice date (Also take note on the changing of product code). 
We have 2 cost table.\nFor year 2021:\n\n\n\n\nOld Code\nNew Code\nName\nJan-21\nFeb-21\nMar-21\nApr-21\nMay-21\nJune-21\nJul-21\nAug-21\nSep-21\nOct-21\nNov-21\nDec-21\n\n\n\n\n1001011\n2002058\nCheese Cake\n50\n51\n50\n53\n54\n52\n55\n53\n50\n52\n53\n53\n\n\n1001012\n2002080\nCoffee\n5\n6\n5\n6\n6\n5\n7\n5\n6\n5\n6\n6\n\n\n1001015\n2002093\nJasmine Tea\n4\n3\n3\n4\n4\n3\n5\n3\n3\n3\n3\n4\n\n\n1001020\n2002042\nCookies\n20\n20\n21\n20\n22\n20\n21\n20\n22\n20\n21\n22\n\n\n\n\nAnd also for Year 2022:\n\n\n\n\nOld Code\nNew Code\nName\nJan-22\nFeb-22\nMar-22\nApr-22\nMay-22\nJune-22\nJul-22\nAug-22\nSep-22\nOct-22\nNov-22\nDec-22\n\n\n\n\n1001011\n2002058\nCheese Cake\n52\n52\n55\n55\n56\n52\nNA\nNA\nNA\nNA\nNA\nNA\n\n\n1001012\n2002080\nCoffee\n5\n6\n5\n6\n6\n6.5\nNA\nNA\nNA\nNA\nNA\nNA\n\n\n1001015\n2002093\nJasmine Tea\n4\n3\n3\n5\n5\n5.5\nNA\nNA\nNA\nNA\nNA\nNA\n\n\n1001020\n2002042\nCookies\n22\n22\n23\n23\n23.5\n23\nNA\nNA\nNA\nNA\nNA\nNA\n\n\n\n\nSo basically, I would like to have my cost column in my first Data Frame to reflect the correct costing for different Year and different Month.\nExample:\nInvoice Date Costing for 03\/05\/2021 = May_2021\nWould you mind to assist me on this?\nHighly Appreciated.\nThank you very much","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":74,"Q_Id":72630426,"Users Score":0,"Answer":"You need to have the month and code number on both sides when merging, so:\n\nCreate a year-month column in the invoice dataframe that is consistent with the cost table\nCombine two cost tables\nMerge with new code and old code respectively\n\n\n\nimport pandas as pd\nimport io\nimport datetime\n\ninvoice_data_text = '''Old Code New Code Name Invoice Date\n1001011 NA Cheese Cake 02\/02\/2021\n1001012 NA Coffee 03\/05\/2021\n1001011 NA Cheese Cake 30\/05\/2021\nNA 2002093 Jasmine Tea 21\/08\/2021\nNA 2002042 Cookies 31\/12\/2021\nNA 2002080 Coffee 09\/01\/2022\nNA 2002093 Jasmine Tea 05\/05\/2022\nNA 2002058 Cheese Cake 07\/06\/2022\n'''\n\ncost_2021_text = '''\nOld Code New Code Name Jan-21 Feb-21 Mar-21 Apr-21 May-21 June-21 Jul-21 Aug-21 Sep-21 Oct-21 Nov-21 Dec-21\n1001011 2002058 Cheese Cake 50 51 50 53 54 52 55 53 50 52 53 53\n1001012 2002080 Coffee 5 6 5 6 6 5 7 5 6 5 6 6\n1001015 2002093 Jasmine Tea 4 3 3 4 4 3 5 3 3 3 3 4\n1001020 2002042 Cookies 20 20 21 20 22 20 21 20 22 20 21 22\n'''\n\ncost_2022_text = '''\nOld Code New Code Name Jan-22 Feb-22 Mar-22 Apr-22 May-22 June-22 Jul-22 Aug-22 Sep-22 Oct-22 Nov-22 Dec-22\n1001011 2002058 Cheese Cake 52 52 55 55 56 52 NA NA NA NA NA NA\n1001012 2002080 Coffee 5 6 5 6 6 6.5 NA NA NA NA NA NA\n1001015 2002093 Jasmine Tea 4 3 3 5 5 5.5 NA NA NA NA NA NA\n1001020 2002042 Cookies 22 22 23 23 23.5 23 NA NA NA NA NA NA\n'''\n\n# Prepare\ninvoice_df = pd.read_csv(io.StringIO(invoice_data_text),sep=\"\\t\",parse_dates=[\"Invoice Date\"])\ncost21 = pd.read_csv(io.StringIO(cost_2021_text),sep='\\t')\ncost22 = pd.read_csv(io.StringIO(cost_2022_text),sep='\\t')\n\n# Create Month column for merging\ninvoice_df[\"Month\"] = invoice_df[\"Invoice Date\"].map(lambda x:datetime.datetime.strftime(x,\"%b-%y\"))\n\n# Combine two cost tables\ncost21_stack = cost21.set_index(list(cost21.columns[:3])).stack().reset_index(name=\"Cost\")\ncost22_stack = cost22.set_index(list(cost22.columns[:3])).stack().reset_index(name=\"Cost\")\ncost_table = pd.concat([cost21_stack,cost22_stack]).rename({\"level_3\":\"Month\"},axis=1)\n\n# Merge with new code and old code respectively\nold_code_result = 
pd.merge(invoice_df[pd.isna(invoice_df[\"Old Code\"]) == False], cost_table[[\"Old Code\",\"Month\",\"Cost\"]], on=[\"Old Code\",\"Month\"] ,how=\"left\")\nnew_code_result = pd.merge(invoice_df[pd.isna(invoice_df[\"New Code\"]) == False], cost_table[[\"New Code\",\"Month\",\"Cost\"]], on=[\"New Code\",\"Month\"] ,how=\"left\")\n\n# Combine result\npd.concat([old_code_result,new_code_result])","Q_Score":0,"Tags":"python,pandas,dataframe,merge,vlookup","A_Id":72654088,"CreationDate":"2022-06-15T11:21:00.000","Title":"Python - Anyone mind to assist in this Pandas Dataframe problem? URGENT","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a data frame sorted by a column; and I need to perform a binary search to find the first value equal or greater than a specified value.\nIs there any way to do this efficiently in Spark?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":79,"Q_Id":72637404,"Users Score":0,"Answer":"What you want is not possible. It is a bulk processing framework in which JOINs play a prevalent role using different techniques.\nNo where in the docs have I seen or read elsewhere of a binary search. That I did at University with in-memory Pascal structures.","Q_Score":0,"Tags":"python,apache-spark,pyspark","A_Id":72637969,"CreationDate":"2022-06-15T20:21:00.000","Title":"How to perform binary search on a preordered DataFrame?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Let imagine we have a game with 4 players. And after playing game, we will get ranking of 4 players based on their score, rank 1 is the best, rank 4 is the worst. I have created a model for predicting player ranking. In detail, I have created 2 models for predict who will be in rank1 and rank2 of the game:\n\nmodel A predict probabilities for who win in rank 1.\nmodel B predict probabilities for who win in rank 2.\n\nAnd all of probability outputs will be in this matrix:\n\n\n\n\nPlayerID\nRank1(prob)\nRank2(prob)\n\n\n\n\nPlayerA\n0.7\n0.8\n\n\nPlayerB\n0.2\n0.05\n\n\nPlayerC\n0.1\n0.1\n\n\nPlayerD\n0.1\n0.05\n\n\n\n\nBased on above table, how can I calculate probability for this event: \"Player A and Player B will be in first 2 ranks\" ?\nPlease help","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":183,"Q_Id":72640602,"Users Score":0,"Answer":"The Premise\nFirst of all, you have a bug in your premise. You predict a probability of 0.7 for PlayerA to be at first place and you have a prediction for the same PlayerA to be ranked 2nd at the same game with a probability of 0.8. A value of 1 means full certainty, a value of 0 means full certainty of the negation. Now, your\n0.7 + 0.8 = 1.5\nwhich violates the basic framework of boolean algebra, as summing the probabilities of distinct outcomes for the same event you get a higher value than the maximum supported value of 1.\nAlso, there should be some probability of PlayerA being ranked lower than 2., so we should have\nP(first) + P(second) + P(lower) = 1\nfor any player. 
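The Spark answer in the record above says a literal binary search is not exposed; what it leaves unstated is the usual substitute, a filter followed by an ordered `limit(1)`, which lets Spark prune a sorted or partitioned source. This is a sketch of that alternative (my substitution, not the answerer's approach), with a toy column name and threshold:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("first-ge-example").getOrCreate()

# Toy pre-sorted data; "value" stands in for the column the frame is ordered by.
df = spark.createDataFrame([(1,), (3,), (7,), (12,), (20,)], ["value"])

threshold = 7
# Keep rows at or above the threshold, then take the smallest one.
first_ge = (df.filter(F.col("value") >= threshold)
              .orderBy("value")
              .limit(1))
first_ge.show()  # the first value equal to or greater than 7
```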
If this is false (and in your case it is false), then the premise is incorrect.\nAnother problem with the premise can be seen from the fact that summing Rank1(prob) we get 1.1, even though, summing Rank2(prob) we get the expected value of 1.\nBut let's focus on Rank1(prob) at this point.\nWe know as an absolutely certain fact (probability of 1) that one of the four players will be ranked 1, which means that their probability should have a sum that is exactly equal to 1. Since it is 1.1 (0.7 + 0.2 + 0.1 + 0.1) in your case, we see another problem with your premise. So, first things first: you need to fix your premise to make sure that they are corresponding to reality and they do not violate the basic framework of boolean logic (violating this framework is an absolutely sure sign of not being in line with reality)\nLogics and Probability\nIn probability calculation, applying logics is not difficult. For example, if you are interested to know whether p(X) AND p(y) is true, then you can compute it like this:\np(X AND Y) = p(X) * p(Y)\nExplanation: The probability itself is a conjunction already with the surety (value of 1), as p(X) = 1 * p(X). 0 <= p(X) <= 1 is the full problem-space when you calculate the result of logical AND with p(Y), hence you compute a further conjunction, resulting in p(X) * p(Y)\nIn the case of logical OR\nComputing the disjunction is as\np(X OR Y) = p(X) + p(Y) - p(X AND Y) = p(X) + p(Y) - p(X) * p(Y)\nExplanation: Intuitively, the result of the logical OR should be the sum of the cases, but there is a caveat: p(X AND Y) is already included as a possibility both into p(X) and p(Y), so it appears twice (in a hidden manner) when you compute p(X) + p(Y), so, as a result, you need to subtract it to make sure that it's computed into the result exactly once.\nComputing your formula\nWe are interested to know whether PlayerA will be first and PlayerB will be second or PlayerB will be first and PlayerA will be second.\nSince your premises have some bugs, I will not use your values. Instead of that, I will denote Rank1(A) as the probability that PlayerA will be ranked first and so on.\nSo:\np(Rank1(A) AND Rank2(B)) = Rank1(A) * Rank2(B) (1)\nSimilarly:\np(Rank1(B) AND Rank2(A)) = Rank1(B) * Rank2(A) (2)\nSo:\np((Rank1(A) AND Rank2(B)) OR (Rank1(B) AND Rank2(A))) = p(Rank1(A) AND Rank2(B)) + p(Rank1(B) AND Rank2(A)) - p((Rank1(A) AND Rank2(B)) AND (Rank1(B) AND Rank2(A)))\nWe know that p((Rank1(A) AND Rank2(B)) AND (Rank1(B) AND Rank2(A))) is exactly 0, because it is a self-contradiction, because it assumes PlayerA to be ranked first and second at the same time and it similarly assumes PlayerB to be ranked first and second at the same time. 
So:\np(Rank1(A) AND Rank2(B)) + p(Rank1(B) AND Rank2(A)) - p((Rank1(A) AND Rank2(B)) AND (Rank1(B) AND Rank2(A))) = p(Rank1(A) AND Rank2(B)) + p(Rank1(B) AND Rank2(A)) - 0 = p(Rank1(A) AND Rank2(B)) + p(Rank1(B) AND Rank2(A))\nLet's apply formula (1) and (2) at the same time:\np(Rank1(A) AND Rank2(B)) + p(Rank1(B) AND Rank2(A)) = Rank1(A) * Rank2(B) + Rank1(B) * Rank2(A)","Q_Score":2,"Tags":"python,scipy,probability","A_Id":72685590,"CreationDate":"2022-06-16T04:50:00.000","Title":"How to calculate the combination of matrix of probabilities (win rate ranking in game)","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am having issues with running yolov5 on colab. I was able to run the code fine when I had I had more classes, and a slightly smaller dataset, but now I have decreased the amount of classes and 70 instances when the overall one has 3400 instances. Now I am getting this error.\nterminate called after throwing an instance of 'c10::CUDAError'\nOther times I will get\n cuda assertion index >= -sizes[i] && index < sizes[i] && index out of bounds\nany idea what could be causing this and how to fix it?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1296,"Q_Id":72645027,"Users Score":0,"Answer":"The issue was that I was not outputting any labels that I said existed. So for example if I claimed that there would be labels \"0\",\"1\",\"2\" in the training dataset. There was no instances of label \"2\".","Q_Score":0,"Tags":"python,pytorch,google-colaboratory,yolov5","A_Id":72645265,"CreationDate":"2022-06-16T11:22:00.000","Title":"issue terminate called after throwing an instance of 'c10::CUDAError'","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I had two dataframes that are being read from two almost identical .csv using pd.read_csv().\nWhen I use .loc[index1] on one of them it returns a Dictionary such as:\ncol1 val1\ncol2 val2\ncol3 val3\nName: (index1), dtype: object\nBut with the other I've realized it actually returns a Dataframe. Some operations such as df1[col1] = df2[col2] + constant will through errors.\nTo make it even harder I'm actually using MultiIndex. I'm getting this error:\nCannot handle a non-unique multi-index!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":49,"Q_Id":72645620,"Users Score":0,"Answer":"I've figured out that .loc returns a Dataframe or an Dictionary-like object depending on if there are duplicated indexes. 
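As a concrete check of the closing formula above, here is a tiny numeric sketch following the answer's own expression. The probabilities are invented, internally consistent placeholders rather than the figures from the question (which the answer flags as inconsistent):

```python
# Hypothetical rank probabilities for players A and B.
rank1 = {"A": 0.6, "B": 0.2}   # P(player is ranked 1st)
rank2 = {"A": 0.25, "B": 0.5}  # P(player is ranked 2nd)

# P(A first AND B second) + P(B first AND A second); the overlap term is 0
# because the two orderings are mutually exclusive.
p_top_two = rank1["A"] * rank2["B"] + rank1["B"] * rank2["A"]
print(p_top_two)  # 0.6*0.5 + 0.2*0.25 = 0.35
```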
This condition is not explained in the pandas documentation or I've not find it.\nIf the index are actually unique try using something along this code:\ndf.reset_index().drop_duplicates(subset=[\"index1\"]).set_index([\"index1\"])\nor just df.drop_duplicates(subset=[\"index1\"]) after reading the csv but before setting the index","Q_Score":0,"Tags":"python,pandas,dataframe","A_Id":72645621,"CreationDate":"2022-06-16T12:10:00.000","Title":"Dataframe.loc returns dictionary or a Dataframe [Solved] (Cannot handle a non-unique multi-index!)","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using Pandas ExcelWriter to create an excel file from a dataframe. I have also applied formatting on the excel file like Font size, font colour etc\nNow I am trying to convert the excel to CSV using to_csv method.\nAfter conversion, the CSV file is not retaining any formatting done previously.\nMy question is how do I retain formatting in CSV ?","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":196,"Q_Id":72656844,"Users Score":2,"Answer":"CSV cannot store formatting. If you want that, save as an excel file. (Or of course other outputs that save formatting - including HTML - but have other feature drawbacks - it depends on what you need.)","Q_Score":1,"Tags":"python-3.x,pandas,dataframe,export-to-csv,pandas.excelwriter","A_Id":72656926,"CreationDate":"2022-06-17T09:01:00.000","Title":"Pandas to_csv not retaining formatting","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am new to singularity concept but successfully created singularity image for running alpha fold tool. I am encountering below-mentioned error.\nI would like to request if anyone could explain how to troubleshoot the error or any related information that may help to combat it.\nThank you in advance.\nsingularity run --nv alphafold220.sif --fasta_paths=\/home\/igib\/AF_singualrity\/test.fasta\n**\n*> \/sbin\/ldconfig.real: Can't create temporary cache file\n\n\/etc\/ld.so.cache~: Read-only file system Traceback (most recent call\nlast): File \"\/app\/alphafold\/run_alphafold.py\", line 37, in \nfrom alphafold.model import data File \"\/app\/alphafold\/alphafold\/model\/data.py\", line 19, in \nfrom alphafold.model import utils File \"\/app\/alphafold\/alphafold\/model\/utils.py\", line 22, in \nimport haiku as hk File \"\/opt\/conda\/lib\/python3.7\/site-packages\/haiku\/init.py\", line 17,\nin \nfrom haiku import data_structures File \"\/opt\/conda\/lib\/python3.7\/site-packages\/haiku\/data_structures.py\",\nline 17, in \nfrom haiku._src.data_structures import to_immutable_dict File \"\/opt\/conda\/lib\/python3.7\/site-packages\/haiku\/_src\/data_structures.py\",\nline 30, in \nfrom haiku._src import utils File \"\/opt\/conda\/lib\/python3.7\/site-packages\/haiku\/_src\/utils.py\", line 24,\nin \nimport jax File \"\/opt\/conda\/lib\/python3.7\/site-packages\/jax\/init.py\", line 108, in\n\nfrom .experimental.maps import soft_pmap File \"\/opt\/conda\/lib\/python3.7\/site-packages\/jax\/experimental\/maps.py\",\nline 25, in \nfrom .. 
import numpy as jnp File \"\/opt\/conda\/lib\/python3.7\/site-packages\/jax\/numpy\/init.py\", line\n16, in \nfrom . import fft File \"\/opt\/conda\/lib\/python3.7\/site-packages\/jax\/numpy\/fft.py\", line 17, in\n\nfrom jax._src.numpy.fft import ( File \"\/opt\/conda\/lib\/python3.7\/site-packages\/jax\/_src\/numpy\/fft.py\", line\n19, in \nfrom jax import lax File \"\/opt\/conda\/lib\/python3.7\/site-packages\/jax\/lax\/init.py\", line\n330, in \nfrom jax._src.lax.fft import ( File \"\/opt\/conda\/lib\/python3.7\/site-packages\/jax\/_src\/lax\/fft.py\", line\n144, in \nxla.backend_specific_translations['cpu'][fft_p] = pocketfft.pocketfft AttributeError: module 'jaxlib.pocketfft' has no\nattribute 'pocketfft'*\n\n**","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":281,"Q_Id":72657925,"Users Score":0,"Answer":"Singularity images run on a read-only file system, with the exception being directories that have been mounted from the host OS.\nYou can enable a tmpfs overlay when running by using the --writable-tmpfs flag. Note that the max size of the tmpfs overlay is the size of \/dev\/shm, which can be smaller than expected in some cloud VMs.","Q_Score":0,"Tags":"python,python-3.x,anaconda,job-scheduling,singularity-container","A_Id":72685970,"CreationDate":"2022-06-17T10:26:00.000","Title":"Runing alphafold job in singularity","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I want to now the numbers of headers my csv file contains (between 0 and ~50). The file itself is huge (so not reading the complete file for this is mandatory) and contains numerical data.\nI know that csv.Sniffer has a has_header() function, but that can only detect 1 header.\nOne idea I had is to recursivly call the has_header funcion (supposing it detects the first header) and then counting the recursions. I am sure though, there is a much smarter way.\nGoogling was kind of a pain, since no matter what you search, if it includes \"count\" and \"csv\" at some point, you get all the \"count rows in csv\" results :D\nClarification:\nWith number of headers I mean number of rows containing information which is not data. There is no general rule for the headers (could be text, floats, or white spaces) and it may be a single line of text. The data itself however is only floats. For me this was super clear, because I've been working with these files for a long time, but forgot this isn't the normal case.\nI hoped there was a easy and smart builtin function from Numpy or Pandas, but it doesn't seem so.\nInspired by the comments so far, I think my best bet is to\n\nread 100 lines\ncount number of separators in each line\ndetermine most common number of separators per line\nComing from the end of 100 lines, find first line with different amount of separators, or isn't floats. That line is the last header line.","AnswerCount":5,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":569,"Q_Id":72661086,"Users Score":0,"Answer":"Well, I think that you could get the first line of the csv file and then split it by a \",\". That will return an array with all the headers in it. 
Now you can just count them with len.","Q_Score":0,"Tags":"python,pandas,csv","A_Id":72661396,"CreationDate":"2022-06-17T14:41:00.000","Title":"Python: count headers in a csv file","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"After we've created VideoCapture with cv2.VideoCapture(filename) how can we retrieve filename? Looks like get() method with propId is not what I'm looking for","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":149,"Q_Id":72667884,"Users Score":0,"Answer":"sorry, but you cannot retrieve the filename from a cv2.VideoCapture.\n(also, webcams or ip captures wont even have one)\nsince the filename is in your code, you need to cache it in a variable instead.","Q_Score":0,"Tags":"python,opencv","A_Id":72668303,"CreationDate":"2022-06-18T08:35:00.000","Title":"How to retrieve filename from OpenCV's VideoCapture() in Python?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a classic panda data frame made of ID and Text. I would like to get just one column and therefore i use the typical df[\"columnname\"]. But at this point it becomes a Pandas Series. Is there a way to make a new dataframe with just that single column?\nI'm asking this is because if I cast the Pandas series in a string (columnname = columnname.astype (\"string\")) and I save it in a text file, I see that it only saves the first sentence of each line and not the entire textual content, as I would like.\nIf there are any other solution, I'm open to learn :)","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":214,"Q_Id":72669988,"Users Score":0,"Answer":"Try this: pd.DataFrame(dfname[\"columnname\"])","Q_Score":0,"Tags":"python,pandas,csv","A_Id":72676251,"CreationDate":"2022-06-18T14:22:00.000","Title":"Pandas Read csv just read a line of a row","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. 
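The sampling heuristic sketched in the header-counting question (read a bounded number of lines, then treat the first all-float row as the start of the data block) can be prototyped directly. This is a simplified sketch rather than a tested utility: it only checks float-parsability, the filename is hypothetical, and a comma separator is assumed.

```python
from itertools import islice

def count_header_rows(path, sep=",", sample=100):
    """Guess how many leading non-data rows a purely numeric CSV has."""
    with open(path, "r", encoding="utf-8") as f:
        lines = list(islice(f, sample))

    def is_data_row(line):
        fields = line.rstrip("\n").split(sep)
        try:
            [float(x) for x in fields]   # every field must parse as a float
            return True
        except ValueError:
            return False

    # Walk from the top; the first all-float row marks the end of the headers.
    for i, line in enumerate(lines):
        if is_data_row(line):
            return i           # rows 0..i-1 are header rows
    return len(lines)          # no data row found within the sample

# print(count_header_rows("measurements.csv"))  # hypothetical file
```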
This behaviour is the source of the following dependency conflicts.\ndaal4py 2021.5.0 requires daal==2021.4.0, which is not installed.\nmxnet 1.7.0.post2 requires numpy<1.17.0,>=1.8.2, but you have numpy 1.18.5 which is incompatible.\nd2l 0.17.5 requires numpy==1.21.5, but you have numpy 1.18.5 which is incompatible.\nd2l 0.17.5 requires requests==2.25.1, but you have requests 2.18.4 which is incompatible.","AnswerCount":1,"Available Count":1,"Score":-0.1973753202,"is_accepted":false,"ViewCount":9542,"Q_Id":72672196,"Users Score":-1,"Answer":"Try adding --use-deprecated=legacy-resolver after your pip install commands\nfor example:\n\n!pip install -r\nrequirements.txt --use-deprecated=legacy-resolver","Q_Score":1,"Tags":"python,numpy","A_Id":73220802,"CreationDate":"2022-06-18T19:31:00.000","Title":"ERROR: pip's dependency resolver does not currently take into account all the packages that are installed","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"After some extensive research I have figured that\nParquet is a column-oriented data file format designed for efficient data storage and retrieval. It provides efficient data compression and encoding schemes with enhanced performance to handle complex data in bulk.\nHowever, I am unable to understand why parquet writes multiple files when I run df.write.parquet(\"\/tmp\/output\/my_parquet.parquet\") despite supporting flexible compression options and efficient encoding.\nIs this directly related to parallel processing or similar concepts?","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":559,"Q_Id":72676423,"Users Score":1,"Answer":"It's not just for parquet but rather a spark feature where to avoid network io it writes each shuffle partition as a 'part...' file on disk and each file as you said will have compression and efficient encoding by default.\nSo Yes it is directly related to parallel processing","Q_Score":1,"Tags":"python,pyspark,parquet","A_Id":72678563,"CreationDate":"2022-06-19T11:19:00.000","Title":"Why do Parquet files generate multiple parts in Pyspark?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"After some extensive research I have figured that\nParquet is a column-oriented data file format designed for efficient data storage and retrieval. It provides efficient data compression and encoding schemes with enhanced performance to handle complex data in bulk.\nHowever, I am unable to understand why parquet writes multiple files when I run df.write.parquet(\"\/tmp\/output\/my_parquet.parquet\") despite supporting flexible compression options and efficient encoding.\nIs this directly related to parallel processing or similar concepts?","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":559,"Q_Id":72676423,"Users Score":1,"Answer":"Lots of frameworks make use of this multi-file layout feature of the parquet format. 
So I\u2019d say that it\u2019s a standard option which is part of the parquet specification, and spark uses it by default.\nThis does have benefits for parallel processing, but also other use cases, such as processing (in parallel or series) on the cloud or networked file systems, where data transfer times may be a significant portion of total IO. in these cases, the parquet \u201chive\u201d format, which uses small metadata files which provide statistics and information about which data files to read, offers significant performance benefits when reading small subsets of the data. This is true whether a single-threaded application is reading a subset of the data or if each worker in a parallel process is reading a portion of the whole.","Q_Score":1,"Tags":"python,pyspark,parquet","A_Id":72680097,"CreationDate":"2022-06-19T11:19:00.000","Title":"Why do Parquet files generate multiple parts in Pyspark?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have to compare each distribution of measurement n, with all other measurements. I have about 500 measurements and 5000 distributions per measurement, so that's a lot of comparisons. I have the data in one csv file:\n\n\n\n\n\ndistribution 1\ndistribution 2\n\n\n\n\nmeasurement 1\n[10,23,14,16,28,19,28]\n[4,1,3,2,5,8,4,2,4,6]\n\n\nmeasurement 2\n[11,23,24,10,27,19,27]\n[9,2,5,2,5,7,3,2,4,1]\n\n\n\n\nas you can imagine the file is huge and as I have to do many comparisons I run it in parallel and the RAM consumption is insane. If I split the file and only open sample by sample, it's a bit better, but still not good and also it's not very efficient.\nMy idea was to create some kind of database and query only the cells needed, but have never done it, so I don't know if that will be RAM heavy and fairly efficient.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":39,"Q_Id":72678371,"Users Score":1,"Answer":"This probably has something to do with destroying objects. The way to limit RAM usage would be to limit the number of threads. Then you don't start every comparison at the beginning and then solve them by four (assuming you have four threads per process) to end an hour later to let the garbage collector start destroying objects of the solved cases.\nI am just spitballing here. A bit of code would be helpful. Maybe you are already doing that?","Q_Score":1,"Tags":"python,memory,ram","A_Id":72679294,"CreationDate":"2022-06-19T15:58:00.000","Title":"Looking for RAM efficient way to compare many distributions in parallel in Python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"When I'm trying to import the 'skfuzzy' module, I get this error. I installed the scikit-fuzzy package and I can see it among installed packages (using the 'pip list' command). I tried installing and re-installing it several times with various commands ('pip install'; 'pip3 install'; 'pip3 -U install') but nothing helped. Other modules such as numpy and matplotlib work fine. 
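Both Parquet answers above tie the number of `part-*` files to the number of write partitions. The snippet below makes that relationship visible; the output paths are placeholders:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("parquet-parts").getOrCreate()
df = spark.range(1_000_000)

# Four partitions -> four part-*.parquet files inside the output directory.
df.repartition(4).write.mode("overwrite").parquet("/tmp/output/four_parts")

# coalesce(1) forces a single part file, at the cost of a single writing task.
df.coalesce(1).write.mode("overwrite").parquet("/tmp/output/one_part")
```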
Also, after the installation I get this warning:\n\"WARNING: The script f2py.exe is installed in 'C:\\Users\\anton\\AppData\\Local\\Packages\\PythonSoftwareFoundation.Python.3.9_qbz5n2kfra8p0\\LocalCache\\local-packages\\Python39\\Scripts' which is not on PATH.\nConsider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.\"\nIs this connected to my problem? How can I fix it?","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":508,"Q_Id":72678873,"Users Score":0,"Answer":"According to the warning, try and do the following:\n\nWindows + R\nType sysdm.cpl\nGo to Advance Tab and click on Environment Variables\nIn User variables [preferably] click on PATH\nClick on New and add C:\\Users\\anton\\AppData\\Local\\Packages\\PythonSoftwareFoundation.Python.3.9_qbz5n2kfra8p0\\LocalCache\\local-packages\\Python39\\Scripts to PATH\n\nThis will add the scripts to your environment variables. Hope this helps!","Q_Score":0,"Tags":"python,module,package","A_Id":72678962,"CreationDate":"2022-06-19T17:13:00.000","Title":"ModuleNotFoundError: No module named 'skfuzzy'","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"When I'm trying to import the 'skfuzzy' module, I get this error. I installed the scikit-fuzzy package and I can see it among installed packages (using the 'pip list' command). I tried installing and re-installing it several times with various commands ('pip install'; 'pip3 install'; 'pip3 -U install') but nothing helped. Other modules such as numpy and matplotlib work fine. Also, after the installation I get this warning:\n\"WARNING: The script f2py.exe is installed in 'C:\\Users\\anton\\AppData\\Local\\Packages\\PythonSoftwareFoundation.Python.3.9_qbz5n2kfra8p0\\LocalCache\\local-packages\\Python39\\Scripts' which is not on PATH.\nConsider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.\"\nIs this connected to my problem? How can I fix it?","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":508,"Q_Id":72678873,"Users Score":0,"Answer":"I installed the scikit-fuzzy using the \"easy_install -U scikit-fuzzy\" command instead of pip, and it did remove the error.","Q_Score":0,"Tags":"python,module,package","A_Id":72679114,"CreationDate":"2022-06-19T17:13:00.000","Title":"ModuleNotFoundError: No module named 'skfuzzy'","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to change the positions of spellers to a simple format. By changing RW to forward or CM to midfielder. Only there are several values \u200b\u200bin a cell. 
How do I combine or drop the other values \u200b\u200bin the cell?\n\n\n\n\nplayer\nplayer_positions\n\n\n\n\nmessi\nRW, ST, CF\n\n\nRonaldo\nST,LW\n\n\n\n\nhow do i change RW, ST, CF just simple to Forward?\nAm trying:\ndf.replace(to_replace=r'^RW', value='Forward', regex=True)\nbut then i get:\n\n\n\n\nplayer\nplayer_positions\n\n\n\n\nmessi\nForward, ST, CF\n\n\nRonaldo\nST,LW","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":42,"Q_Id":72683983,"Users Score":1,"Answer":"You can add everything in the replace statement.\ndf = df.replace(to_replace=r'^RW, ST, CF', value='Forward', regex=True)\nor\ndf = df.replace(to_replace=r'^RW\\D*', value='Forward', regex=True)","Q_Score":0,"Tags":"python,pandas,dataframe,jupyter-notebook","A_Id":72684141,"CreationDate":"2022-06-20T08:07:00.000","Title":"combine multiple variables in cell to one variable","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a list of XY co-ordinates, and I am looking for a way to sort them as they appear on the plot with top-to-bottom, left-to-right precedence.\nI tried sorting them as a tuple in Python but it didn't work.\nHere are the (normalized) co-ordinates:\n(0.48425699105850684, 0.4852200502470339)\n(0.8003207976544613, 0.1794844315136523)\n(0.663158173206857, 0.19739922702645016)\n(0.26770425263394393, 0.20288883507443173)\n(0.5214529814719886, 0.2032096846467844)\n(0.4768268032594222, 0.3875097802042241)\n(0.5400594055964151, 0.5870619715600098)\n(0.5445470099105095, 0.8064964338255158)","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":185,"Q_Id":72691300,"Users Score":0,"Answer":"I eventually ended up using the product of the X and Y coordinates as the sorting key. It worked for my test cases!","Q_Score":1,"Tags":"python,math,geometry,coordinates","A_Id":72692990,"CreationDate":"2022-06-20T18:09:00.000","Title":"Is there a way to sort a set of co-ordinates in a top-to-bottom, left-to-right precedence as they appear?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"celery.conf.update(result_serializer='pickle') uses pickle for serializing results generated by Celery tasks. Is there a way to tell which serializer (JSON, pickle, etc...) 
to be used at the individual task level?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":61,"Q_Id":72695668,"Users Score":0,"Answer":"As far as I know, that is not possible.","Q_Score":0,"Tags":"python,rabbitmq,celery","A_Id":72712131,"CreationDate":"2022-06-21T05:18:00.000","Title":"Task specific result_serializer in Celery","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"How to define an empty 2 dimensional list instead of data = [[\"\",\"\"],[\"\",\"\"],[\"\",\"\"],[\"\",\"\"]]\nFor larger number of elements","AnswerCount":5,"Available Count":1,"Score":0.0399786803,"is_accepted":false,"ViewCount":89,"Q_Id":72697236,"Users Score":1,"Answer":"lis = [[] for _ in range(3)]\ntry it","Q_Score":1,"Tags":"python,dataframe,empty-list","A_Id":72697568,"CreationDate":"2022-06-21T08:03:00.000","Title":"How to define an empty 2 dimensional list instead of data = [[\"\",\"\"],[\"\",\"\"],[\"\",\"\"],[\"\",\"\"]]","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have been encountering this message after trying to import numpy, pandas, matplotlib, and seaborn all by themselves. I am not sure how to fix this. Any suggestions?\nI am using Python 3.8.8, matplotlib 3.3.4, pandas 1.2.4, numpy 1.20.1, seaborn 0.11.1.\nI have recently updated my Anaconda navigator to 2.1.0. Would this possibly have caused any issues?\nIn the shell command, after trying to import each of those packages individually, I see this message:\nIntel MKL FATAL ERROR: Cannot load libmkl_intel_thread.1.dylib.","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":634,"Q_Id":72703299,"Users Score":1,"Answer":"Solution: I reinstalled Anaconda Navigator.","Q_Score":0,"Tags":"python,pandas,numpy,anaconda,conda","A_Id":72704362,"CreationDate":"2022-06-21T15:16:00.000","Title":"How to fix error: \"The kernel appears to have died. It will restart automatically.\" message?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"This happened out of the blue, I was able to import cv2 but now I get 'AttributeError: partially initialized module 'cv2' has no attribute 'gapi_wip_gst_GStreamerPipeline' (most likely due to a circular import)' error when I import it. The things I tried:\n1-uninstalling and installing opencv.\n2-In cmd, I typed \"pip list\" and opencv-python package is listed. I ran \"python\" command and tried importing cv2 but I get the same error. Please help.","AnswerCount":7,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":54688,"Q_Id":72706073,"Users Score":0,"Answer":"I changed my anaconda environment but it caused some other bugs. I just uninstall anaconda and installed it. 
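For the empty 2-D list question above, a nested comprehension generalises the literal `[["",""], ...]` to any size and avoids the aliasing surprise of multiplying a nested list; the sizes here are arbitrary:

```python
rows, cols = 4, 2

# Each row is built independently, so mutating one cell touches only that row.
data = [["" for _ in range(cols)] for _ in range(rows)]
data[0][0] = "x"
print(data)      # [['x', ''], ['', ''], ['', ''], ['', '']]

# The tempting shortcut below repeats references to the *same* inner list.
aliased = [[""] * cols] * rows
aliased[0][0] = "x"
print(aliased)   # [['x', ''], ['x', ''], ['x', ''], ['x', '']]
```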
It works now","Q_Score":37,"Tags":"python,opencv","A_Id":72718108,"CreationDate":"2022-06-21T19:17:00.000","Title":"AttributeError: partially initialized module 'cv2' has no attribute 'gapi_wip_gst_GStreamerPipeline' (most likely due to a circular import)","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"This happened out of the blue, I was able to import cv2 but now I get 'AttributeError: partially initialized module 'cv2' has no attribute 'gapi_wip_gst_GStreamerPipeline' (most likely due to a circular import)' error when I import it. The things I tried:\n1-uninstalling and installing opencv.\n2-In cmd, I typed \"pip list\" and opencv-python package is listed. I ran \"python\" command and tried importing cv2 but I get the same error. Please help.","AnswerCount":7,"Available Count":3,"Score":0.0285636566,"is_accepted":false,"ViewCount":54688,"Q_Id":72706073,"Users Score":1,"Answer":"Upgrading opencv solved the issue for me: !pip install opencv-python==4.6.0.66","Q_Score":37,"Tags":"python,opencv","A_Id":73304124,"CreationDate":"2022-06-21T19:17:00.000","Title":"AttributeError: partially initialized module 'cv2' has no attribute 'gapi_wip_gst_GStreamerPipeline' (most likely due to a circular import)","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"This happened out of the blue, I was able to import cv2 but now I get 'AttributeError: partially initialized module 'cv2' has no attribute 'gapi_wip_gst_GStreamerPipeline' (most likely due to a circular import)' error when I import it. The things I tried:\n1-uninstalling and installing opencv.\n2-In cmd, I typed \"pip list\" and opencv-python package is listed. I ran \"python\" command and tried importing cv2 but I get the same error. Please help.","AnswerCount":7,"Available Count":3,"Score":0.0285636566,"is_accepted":false,"ViewCount":54688,"Q_Id":72706073,"Users Score":1,"Answer":"As of February 2023, had the same error with opencv-python version 3.4.4.19. Upgrading to version 3.4.5.20 solved the problem.","Q_Score":37,"Tags":"python,opencv","A_Id":75506069,"CreationDate":"2022-06-21T19:17:00.000","Title":"AttributeError: partially initialized module 'cv2' has no attribute 'gapi_wip_gst_GStreamerPipeline' (most likely due to a circular import)","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have pre build machine learning model (saved as pickle file) to predict classification.\nMy question is when I use new dataset to predict using Pickle file do I need do all preprocessing steps (like transformation and encoding) to the new testing dataset or can I use raw data set.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":147,"Q_Id":72706415,"Users Score":0,"Answer":"Yes, You will have to perform all the preprocessing on the test dataset as well. 
Such as scaling, one hot encoding etc.","Q_Score":0,"Tags":"python,machine-learning,data-science,pickle,data-science-experience","A_Id":72724390,"CreationDate":"2022-06-21T19:52:00.000","Title":"Predict a data using Pickle file","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I created a 3D scatter plot using Plotly. I want to use tableau to visualize the plot so that it can be updated in realtime as data gets updated?\nCan we use Tabpy to show visualizations generated from Plotly?\nAs per my knowledge, Tabpy script can work only when return type is real, int or string. If my script return as figure will it work?\nAny help is much appreciated.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":45,"Q_Id":72708854,"Users Score":1,"Answer":"No it does not work with visualization. As you mentioned it only return values.","Q_Score":0,"Tags":"python,plotly,tabpy,scatterplot3d","A_Id":73272381,"CreationDate":"2022-06-22T01:51:00.000","Title":"Can we use Tabpy to show visualizations generated from Plotly?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Why do we use zip in optimizer.apply_gradients(zip(grads,self.generator.trainable_variables)) ?\nHow does it work?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":83,"Q_Id":72719084,"Users Score":0,"Answer":"When computing gradients using tape.gradient() it returns the gradient for weight and bias as list of lists.\nExample:\n\ngrads= [ [ [1,2,3] , [1] ], [ [2,3,4] , [2] ] ] #Op from tape.gradient()\nshould interpret as [ [ [W],[B] ], [ [W],[B] ] ]\n\nConsider this as trainable_weights or Initialized_weights\n\ntrainable_weights= [ [ [2,3,4] , [0] ], [ [1,5,6],[8] ] ]\n\nSo Zip will take the first values of both variables and zip them for the optimizer to minimize it.\nThe Zipped zip(grads,trainable_weights) values will look like this.\n\n[ [1, 2, 3], [1] ], [ [2, 3, 4], [0] ]\n[ [2, 3, 4], [2] ], [ [1, 5, 6], [8] ]","Q_Score":0,"Tags":"python,tensorflow,keras","A_Id":72719085,"CreationDate":"2022-06-22T16:34:00.000","Title":"What does zip in optimizer.apply_gradients(zip(grads, self.generator.trainable_variables)) do?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm running docker-compose that has a php front end for uploading files, python watchdog for monitoring uploads via php and pandas for processing the resulting excel files (and later passed to a neo4j server).\nMy issue is that when pd.read_excel is reached in python, it just hangs with idle CPU. The read_excel is reading a local file. There are no resulting error messages. When i run the same combo on my host, it works fine. Using ubuntu:focal for base image for the php\/python\nAnyone run into a similar issue before or what could be the cause? 
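The zip explanation above can be made concrete with a miniature custom training step; the model, optimizer, and random data are throwaway placeholders:

```python
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
model.build(input_shape=(None, 3))          # create the weights up front
optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)

x = tf.random.normal((8, 3))
y = tf.random.normal((8, 1))

with tf.GradientTape() as tape:
    loss = tf.reduce_mean(tf.square(model(x) - y))

# tape.gradient returns one gradient tensor per trainable variable, in order.
grads = tape.gradient(loss, model.trainable_variables)

# zip pairs each gradient with the variable it belongs to, producing the
# (gradient, variable) tuples that apply_gradients expects.
optimizer.apply_gradients(zip(grads, model.trainable_variables))
```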
Thanks!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":38,"Q_Id":72721159,"Users Score":0,"Answer":"Fixed,\nI wasn't properly logging python exceptions and was missing openpyxl module.\nA simple pip install openpyxl fixed it.","Q_Score":0,"Tags":"python-3.x,pandas,docker,docker-compose","A_Id":72736445,"CreationDate":"2022-06-22T19:40:00.000","Title":"Panda's Read_Excel function stalling in Docker Container","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am having trouble because of\ncannot import name '_ClassNamePrefixFeaturesOutMixin' from 'sklearn.base' (C:\\Users\\yunhu\\anaconda3\\lib\\site-packages\\sklearn\\base.py)\nand I have no clue how to solve this problem. I uninstalled and then installed sklearn again\nbut It still does not work.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":727,"Q_Id":72725758,"Users Score":0,"Answer":"I had the same thing trying to import SMOTE in a Jupyter notebook (after having just installed imblearn).\nRestarting the kernel solved it.","Q_Score":0,"Tags":"python,scikit-learn","A_Id":72867877,"CreationDate":"2022-06-23T06:54:00.000","Title":"cannot import name '_ClassNamePrefixFeaturesOutMixin' from 'sklearn.base' (C:\\Users\\yunhu\\anaconda3\\lib\\site-packages\\sklearn\\base.py)","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I know how to add leading zeros for all values in pandas column. But my pandas column 'id' involves both numeric character like '83948', '848439' and Alphanumeric character like 'dy348dn', '494rd7f'. What I want is only add zeros for the numeric character until it reaches to 10, how can we do that?","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":34,"Q_Id":72736848,"Users Score":1,"Answer":"I understand that you want to apply padding only on ids that are completely numeric. In this case, you can use isnumeric() on a string (for example, mystring.isnumeric()) in order to check if the string contains only numbers. If the condition is satisfied, you can apply your padding rule.","Q_Score":0,"Tags":"python,pandas,leading-zero","A_Id":72736895,"CreationDate":"2022-06-23T21:37:00.000","Title":"Adding leading zeros for only numeric character id","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Have a few questions regarding SnowPark with Python.\n\nWhy do we need Snowpark when we already have Snowflake python connector(freely) that can use to connect to Python jupyter with Snowflake DW?\n\nIf we use snowpark and connect with Local jupyter file to run ML model. 
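The isnumeric-then-pad rule from the leading-zeros answer above fits in a boolean mask plus `str.zfill`; the column name and width of 10 follow the question, while the sample ids are invented:

```python
import pandas as pd

df = pd.DataFrame({"id": ["83948", "848439", "dy348dn", "494rd7f"]})

# Pad only the purely numeric ids to 10 characters; leave alphanumerics alone.
mask = df["id"].str.isnumeric()
df.loc[mask, "id"] = df.loc[mask, "id"].str.zfill(10)
print(df)
#            id
# 0  0000083948
# 1  0000848439
# 2     dy348dn
# 3     494rd7f
```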
Is it use our local machine computing power or Snowflake computing power?If its our local machine computing power how can we use Snowflake computing power to run the ml model?","AnswerCount":5,"Available Count":1,"Score":0.0798297691,"is_accepted":false,"ViewCount":915,"Q_Id":72755915,"Users Score":2,"Answer":"Using the existing Snowflake Python Connector you bring the Snowflake data to the system that is executing the Python program, limiting you to the compute and memory of that system. With Snowpark for Python, you are bringing your Python code to Snowflake to leverage the compute and memory of the cloud platform.","Q_Score":1,"Tags":"python-3.x,snowflake-cloud-data-platform,snowpark","A_Id":72773066,"CreationDate":"2022-06-25T17:23:00.000","Title":"Snowflake SnowPark Python -Clarifications","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"How can i passs a datetime format on a column with str such as June 13, 1980 (United States)\ni tried df['format_released'] = pd.to_datetime(df['released'], format='%m\/%d\/%Y')\ngot this error\n\ntime data 'June 13, 1980 (United States)' does not match format '%m\/%d\/%Y' (match)","AnswerCount":2,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":69,"Q_Id":72758325,"Users Score":2,"Answer":"The correct format is: pd.to_datetime(pd.to_datetime(df['released'], format='%B %d, %Y')\nFor the full name, you need to specify %B for the format.\nYou don't need the value \"(United States)\" in the string.","Q_Score":2,"Tags":"python,jupyter-notebook","A_Id":72758347,"CreationDate":"2022-06-26T01:28:00.000","Title":"time data 'June 13, 1980 (United States)' does not match format '%m\/%d\/%Y' (match)","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have to develop a android application which will capable of doing predictions on device meaning I have to perform every action on the device itself\nThe application has to extract features from audio and feed them to tensorflow lite model for prediction\nFor training the model, I extracted the features from audio using Librosa, but I am not able to find a suitable framework which can help me extract features from audio like librosa and make prediction using tflite model\nI found out that I can do something using Ironpython or python.net in unity but I am still confused about how to achieve it.\nSo my question is whether there is way to run the python script written on android device with unity.\nAlso if there are other frameworks, that can help me achieve my goal of on-device prediction, ,I will welcome those suggestions","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":68,"Q_Id":72761172,"Users Score":1,"Answer":"It was not feasible to accomplish the task using unity effectively.\nI solved the problem using chaquopy plugin for android studio, this enables you to use python with java or you can code the whole android application in python using chaquopy.","Q_Score":1,"Tags":"python,android,unity3d,audio","A_Id":72775590,"CreationDate":"2022-06-26T11:29:00.000","Title":"Using python for obtaining features from audio in unity","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and 
Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've been looking for the .csv file from my directory, but pandas can't quite figure out where is it even when I already specified the entire directory. I use Jupyter Notebook in my android phone and I'm coding python but I'm still new to this.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":75,"Q_Id":72765998,"Users Score":0,"Answer":"you can add path over here....\nfilenames = glob.glob(path + \"*.csv\")","Q_Score":0,"Tags":"android,python-3.x,pandas,csv,jupyter","A_Id":72767814,"CreationDate":"2022-06-27T00:28:00.000","Title":"How do I find the directory of a csv using pandas in Jupyter Android?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a CSV file where each row has date and time and temperature, where the date and time are UTC.\nI can plot temperature as a function of time, but the X axis shows the UTC time.\nWhat is the best way to generate a plot where the X axis shows my local time (which is currently Pacific Daylight Time)?\nThe current Python code is as follows, so currently I am just plotting temperature as a function of the UTC time and am not using date, which is in a separate column.\nI have done some searching and reading, but so far have not found anything that directly addresses this problem. I am a newbie in Pandas, so I am sure that a straightforward solution is out there somewhere.\n\nimport pandas as pd\nimport matplotlib.pyplot as plt\ndf = pd.read_csv('LFEM_data-.csv',skiprows=1)\nxvalues='Time'\nyvalues = ['Temperature']\ndf.plot(x=xvalues, y=yvalues)\nplt.show()","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":232,"Q_Id":72780440,"Users Score":0,"Answer":"The following worked:\n\nimport pandas as pd\nimport matplotlib.pyplot as plt\ndf = pd.read_csv('LFEM_data-.csv',skiprows=1)\ndf['DateTime'] = df['Date'] + ' ' + df['Time']\ndf['DateTime'] = pd.to_datetime(df['DateTime'])\nts_utc = df['DateTime'].dt.tz_localize('UTC')\ndf['DateTime_local'] = ts_utc.dt.tz_convert('America\/Tijuana')\nxvalues='DateTime_local'\nyvalues = ['Temperature']\ndf.plot(x=xvalues, y=yvalues)\nplt.show()","Q_Score":0,"Tags":"python,pandas,time-series,utc","A_Id":72781669,"CreationDate":"2022-06-28T03:29:00.000","Title":"How to plot local date and time vs temperature from CSV file that uses UTC for date and time","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am converting multiple log-mel spectrograms from .wav files to images.\nI want to destroy as little information as possible as I plan to use the resulting images for a computer vision task.\nTo convert the data to an image format, I currently use a simple sklearn.MinMaxScaler((0, 255)).\nTo fit this scaler, I use the minimal and the maximal energy of all frequencies on all my spectrograms.\nShould I scale my spectrograms with minimal and maximal energy for each specific frequency?\nDoes it make sense to have different frequencies with different scaling features?","AnswerCount":1,"Available 
Count":1,"Score":1.2,"is_accepted":true,"ViewCount":726,"Q_Id":72785857,"Users Score":1,"Answer":"Spectrograms are tricky to use as input to computer vision algorithms, specially to neural networks, due to their skewed, non-normal distribution nature. To tackle this you should:\n\nNormalized the input: transform the values either with a simple log(1+c) (first option) or a box-cox transformation (second option), which should expand low values and compress high ones, making the distribution more Gaussian.\nThen bring the transformed values into an interval suitable for your use case. In the case of CNNs a MinMaxScaler should be good enough for this, but change the interval to [0, 1], i.e. sklearn.MinMaxScaler((0, 1)). For classic computer vision, this could be sklearn.MinMaxScaler((0, 255))\n\nSo,\n\nShould I scale my spectrograms with minimal and maximal energy for\neach specific frequency?\n\nYes, once the normalization is done\nand\n\nDoes it make sense to have different frequencies with different\nscaling features?\n\nIt depends. For CNNs your input data needs to be consistent for good results. For classic computer vision approaches, could be, depending on what you want to do with it","Q_Score":2,"Tags":"python,normalization,scaling,spectrogram,frequency-analysis","A_Id":72934638,"CreationDate":"2022-06-28T11:47:00.000","Title":"Normalize a melspectrogram to (0, 255) with or without frequency scaling","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"What would cause pandas to set a column type to 'object' when the values I have checked are strings? I have explicitly set that column to \"string\" in the dtypes dictionary settings in the read_excel method call that loads in the data. I have checked for NaN or NULL etc, but haven't found any as I know that may cause an object type to be set. I recall reading string types need to set a max length but I was under the impression that pandas sets that to the max length of the column.\nEdit 1:\nthis seems to only happen in fields holding email addresses. While I don't think this has an effect, would the @ character be triggering this behavior?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":42,"Q_Id":72791372,"Users Score":0,"Answer":"The dtype object comes from NumPy, it describes the type of element in a ndarray. Every element in an ndarray must have the same size in bytes. For int64 and float64, they are 8 bytes. But for strings, the length of the string is not fixed. 
So instead of saving the bytes of strings in the ndarray directly, Pandas uses an object ndarray, which saves pointers to objects; because of this the dtype of this kind ndarray is object.","Q_Score":0,"Tags":"python,pandas,numpy","A_Id":72791454,"CreationDate":"2022-06-28T18:18:00.000","Title":"pandas dtypes column coercion","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Using Napari Image Analysis GUI to run the Allen Cell Segmenter (no response in Napari github or Allen Cell Forum, thought I'd try here) and getting the following error when I attempt to run the watershed for cutting function:\nImportError: cannot import name 'watershed' from 'skimage.morphology' (C:\\Users\\Murryadmin\\anaconda3\\envs\\napari-env\\lib\\site-packages\\skimage\\morphology_init_.py)\n\nc:\\users\\murryadmin\\anaconda3\\envs\\napari-env\\lib\\site-packages\\aicssegmentation\\core\\utils.py(449)watershed_wrapper()\n-> from skimage.morphology import watershed, dilation, ball\n\nAnyone have any potential fixes for this?\nThanks","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1688,"Q_Id":72793916,"Users Score":3,"Answer":"watershed was moved from skimage.morphology to skimage.segmentation in version 0.17. There was a pointer from morphology to the new function in segmentation in 0.17 and 0.18, but it was removed in 0.19. The Allen Cell Segmenter needs to be updated to match the more modern scikit-image version, so I would raise an issue in their GitHub repository if I were you.\nDowngrading to scikit-image 0.18 could fix the Allen Cell Segmenter itself, but unfortunately napari requires 0.19+.","Q_Score":1,"Tags":"python,image,scikit-image,python-napari","A_Id":72796604,"CreationDate":"2022-06-28T22:44:00.000","Title":"How can I import watershed function from scikit-image?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using xgboost with python in order to perform a binary classification in which the class 0 appears roughly 9 times more frequently than the class 1. I am of course using scale_pos_weight=9. However, when I perform the prediction on the testing data after training the model using train_test_split, I obtain a y_pred with twice the elements belonging to the class 1 than it should (20% instead of 10%). How can I correct this output? I thought the scale_pos_weight=9 would be enough to inform the model the expected proportion.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":58,"Q_Id":72795022,"Users Score":0,"Answer":"Your question seems sketchy: what is y_pred?\n+Remember you are better to run a grid search or Bayesian optimizer to figure out the best scores.","Q_Score":0,"Tags":"python,xgboost,imbalanced-data","A_Id":74901739,"CreationDate":"2022-06-29T02:09:00.000","Title":"Imbalanced classification with xgboost in python with scale_pos_weight not working properly","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I recently installed anaconda and was using jupyter notebook to write my code. 
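The object-versus-string behaviour described above is easy to reproduce; the email values are invented and the nullable "string" dtype requires pandas 1.0+:

```python
import pandas as pd

df = pd.DataFrame({"email": ["a@example.com", "longer.address@example.org"]})

# Text columns default to the NumPy-backed object dtype (pointers to Python str).
print(df["email"].dtype)     # object

# Opting into the dedicated nullable string dtype keeps the values the same
# but records the intent explicitly.
df["email"] = df["email"].astype("string")
print(df["email"].dtype)     # string
```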
I also installed Visual Studio code and ran my jupyter files (.ipynb) in VSC.\nWhenever I try to import pandas in VSC within a jupyter file (.ipynb), I get an error that says ModuleNotFoundError: No module named 'pandas'. However, when I run the same file in Chrome on the Jupyter notebook website, I get no such error and my file is able to import pandas.\nHow can I fix the problem?","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":544,"Q_Id":72795608,"Users Score":0,"Answer":"This is due to the fact that when you open an .ipynb file in jupyter-notebook, it is running in a conda environment with pandas installed. And it is running on VSC in a Windows or OS environment, which may not have pandas installed.\nOn cmd run, pip install pandas then import it on VSC.","Q_Score":0,"Tags":"python,pandas,jupyter-notebook,jupyter","A_Id":72795726,"CreationDate":"2022-06-29T03:51:00.000","Title":"\"No module named pandas\" error on visual studio code - NO ISSUE on jupyter notebook","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I recently installed anaconda and was using jupyter notebook to write my code. I also installed Visual Studio code and ran my jupyter files (.ipynb) in VSC.\nWhenever I try to import pandas in VSC within a jupyter file (.ipynb), I get an error that says ModuleNotFoundError: No module named 'pandas'. However, when I run the same file in Chrome on the Jupyter notebook website, I get no such error and my file is able to import pandas.\nHow can I fix the problem?","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":544,"Q_Id":72795608,"Users Score":0,"Answer":"Thanks for the above comments. On cmd run (pip show pandas), it actually showed pandas was installed.\nHowever, the reason was because the selected interpreter was a non-conda version, which can be changed in the top right of VCS. Hope this helps anyone who has a similar issue!","Q_Score":0,"Tags":"python,pandas,jupyter-notebook,jupyter","A_Id":72809811,"CreationDate":"2022-06-29T03:51:00.000","Title":"\"No module named pandas\" error on visual studio code - NO ISSUE on jupyter notebook","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"This is day 1 of my journey into python (day 0 was a right pita).\nI have am using Azure DataBricks (Python\/Pyspark) and ADLS Gen2 Storage container.\nWithin my container I have the below partition structure. 
Which is data stored post ADF Pipeline.\nARCHIVE\/[YEAR]\/[Month]\/[Day]\/[Time]\/[approx 150 files].parquet (account.parquet, customer.parquet, sales.parquet etc)\nWhat I would like to achieve is to be able to do is to traverse the container and for example any files where the filemask contains \"account\" send to the accountdf.\nThis would allow me to be able to compare the data frame with the data in the synapse pool to ensure there are no gaps within the data.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":41,"Q_Id":72797133,"Users Score":0,"Answer":"If all account, customer, sales are under one date time partition, then you can use\naccountdf = spark.read.parquet(\"wasbs:\/\/@.blob.core.windows.net\/account*.parquet\")","Q_Score":0,"Tags":"python,pyspark,parquet,azure-data-lake-gen2","A_Id":72797263,"CreationDate":"2022-06-29T07:06:00.000","Title":"PySpark - Pull all files into a dataframe based off filemask","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Hi I have files in a S3 bucket\nMyBucket\/object\/file 1.csv, file 2.csv, file 3.csv,\nI have loaded this data into single dataframe and need to do some transformation based on columns.Then I want to write to transform column values now I want to overwrite the files back in to same file1.csv, file2.csv,file3.csv.\nWhen I give overwrite commands its creating another file in same folder and loading values\nHow to write function or code using python and spark or scala","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":33,"Q_Id":72807141,"Users Score":0,"Answer":"Well, I'm not sure if my answer is the best, but I hope it is.\nBasically To write output to file, Spark repects hadoop config, which is mapreduce.output.basename\nDefault value should be something like part-00000.\nYou can adjust this config but can't make exactly same as your file name convention.\nSo you have to write and rename to your file name convention.\nSo procedure is simple.\n\nWrite file to the path.\nRename output file to original name(may delete old and rename)","Q_Score":0,"Tags":"python,scala,apache-spark,amazon-s3,pyspark","A_Id":72810939,"CreationDate":"2022-06-29T19:55:00.000","Title":"Overwrite in to same partition files after transformation based on the filename using spark","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Hi I have files in a S3 bucket\nMyBucket\/object\/file 1.csv, file 2.csv, file 3.csv,\nI have loaded this data into single dataframe and need to do some transformation based on columns.Then I want to write to transform column values now I want to overwrite the files back in to same file1.csv, file2.csv,file3.csv.\nWhen I give overwrite commands its creating another file in same folder and loading values\nHow to write function or code using python and spark or scala","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":33,"Q_Id":72807141,"Users Score":0,"Answer":"Whenever you are saving a file in spark it creates directory then part files are created.\nyou can limit part files from many files to 1 using coalesce(1), but you can't control the directory 
creation.\ndf2.coalesce(1).write.mode(\"overwrite\").csv(\"\/dir\/dir2\/Sample2.csv\")\nit will create one directory namely Sample2.csv and will create one part file.\nI hope it cleared your doubt.","Q_Score":0,"Tags":"python,scala,apache-spark,amazon-s3,pyspark","A_Id":72816120,"CreationDate":"2022-06-29T19:55:00.000","Title":"Overwrite in to same partition files after transformation based on the filename using spark","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm currently building a docker image that can be used to deploy a deep learning application. The image is fairly large with a size roughly 6GB. Since the deployment time is affected by the size of docker container, I wonder if there are some of best practices to reduce the image size of ml-related applications.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":73,"Q_Id":72811155,"Users Score":1,"Answer":"First, keep the data (if any) apart from the image (in volumes for example).Also, use .dockerignore to ignore files you don't want in your image.\nNow some techniques:\nA first technique is to use multistage builds. For example, an image just to install dependencies and another image that starts from the first one and run the app.\nA second technique is to minimize the number of image layers. Each RUN , COPY and FROM command creates a different layer. Try to combine commands in a single one using linux operators (like &&).\nA third technique is to take profit of the caching in docker image builds. Run every command you can before copying the actual content into the image. For exemple, for a python app, you might install dependencies before copying the contents of the app inside the image.","Q_Score":0,"Tags":"python,docker,machine-learning,deep-learning","A_Id":72811405,"CreationDate":"2022-06-30T06:25:00.000","Title":"What are some of the best practices to reduce the size of ml-related docker image?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a date column in pandas data frame as following\n\n\n\n\nDate\nDepartment Cash flow\n\n\n\n\nFriday, 1 April 2022\n1550\n\n\nThursday, 26 August 2021\n2550\n\n\nWednesday, 9 September 2020\n1551\n\n\n\n\nI want to remove the days on the left of actual dates including the comma as in the Date column so that it looks as in\n\n\n\n\nDate\nDepartment Cash flow\n\n\n\n\n1 April 2022\n1550\n\n\n26 August 2021\n2550\n\n\n9 September 2020\n1551\n\n\n\n\nThis will help me organise the data as per the chronology in the dates.","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":59,"Q_Id":72829222,"Users Score":1,"Answer":"It depends on the data type of your date. 
Is it a string or a datetime format?\nIf its a string you can use slicing methods, otherwise you can use the datetime library to stringify your date and then slice it.","Q_Score":0,"Tags":"python,pandas,dataframe,datetime","A_Id":72829364,"CreationDate":"2022-07-01T12:24:00.000","Title":"Removing days from date columns in pandas dataframe","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am looking for a Python datastructure that functions as a sorted list that has the following asymptotics:\n\nO(1) pop from beginning (pop smallest element)\nO(1) pop from end (pop largest element)\n>= O(log n) insert\n\nDoes such a datastructure with an efficient implementation exist? If so, is there a library that implements it in Python?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":47,"Q_Id":72835881,"Users Score":1,"Answer":"A regular red\/black tree or B-tree can do this in an amortized sense. If you store pointers to the smallest and biggest elements of the tree, then the cost of deleting those elements is amortized O(1), meaning that any series of d deletions will take time O(d), though individual deletions may take longer than this. The cost of insertions are O(log n), which is as good as possible because otherwise you could sort n items in less than O(n log n) time with your data structure.\nAs for libraries that implement this - that I\u2019m not sure of.","Q_Score":1,"Tags":"python,data-structures,time-complexity","A_Id":72835896,"CreationDate":"2022-07-02T01:37:00.000","Title":"Efficient Sorted List Python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"In numpy if we want to raise a matrix A to power N (but raise it as defined in mathematics, in linear algebra in particular), then it seems we need to use this function\nnumpy.linalg.matrix_power\nIsn't there a simpler way? Some Python symbol\/operator?\nE.g. I was expecting A**N to do this but it doesn't.\nSeems that A**N is raising each element to power N, and not the whole matrix to power N (in the usual math sense). So A**N is some strange element-wise raising to power N.\nBy matrix I mean of course a two-dimensional ndarray.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":60,"Q_Id":72839191,"Users Score":0,"Answer":"numpy.linalg.matrix_power is the best way as far as I know. You could use dot or * in a loop, but that would just be more code, and probably less efficient.","Q_Score":2,"Tags":"python,numpy","A_Id":72839234,"CreationDate":"2022-07-02T12:51:00.000","Title":"Raise matrix to power N as in maths","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to find a method of duplicating all of the data in a row for every month between dates. 
Start date and end date.\nThis is the dataset:\n\n\n\n\nID\nStart\nEnd\n\n\n\n\n1007\n2022-03-01\n2022-08-01\n\n\n1008\n2019-11-01\n2020-02-01\n\n\n\n\nWhat I would like to do is repeat the row, incrementing the date, every month between the start and end values.\nExample outcome:\n\n\n\n\nID\nStart\nEnd\n\n\n\n\n1007\n2022-03-01\n2022-08-01\n\n\n1007\n2022-04-01\n2022-08-01\n\n\n1007\n2022-05-01\n2022-08-01\n\n\n1007\n2022-06-01\n2022-08-01\n\n\n1007\n2022-07-01\n2022-08-01\n\n\n1007\n2022-08-01\n2022-08-01\n\n\n1008\n2019-11-01\n2020-02-01\n\n\n1008\n2019-12-01\n2020-02-01\n\n\n1008\n2020-01-01\n2020-02-01\n\n\n1008\n2020-02-01\n2020-02-01\n\n\n\n\nThanks","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":57,"Q_Id":72850040,"Users Score":0,"Answer":"you can move in all row data and check data_start is preset start duplicated and when present the date_end can you exit the loop\nThanks","Q_Score":0,"Tags":"python,pandas,date","A_Id":72850064,"CreationDate":"2022-07-03T21:28:00.000","Title":"Pandas duplicate data between 2 dates","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have plotted a 3D radiation plot using Python, with theta on the x-axis and phi on the y-axis and magnitudes along z. I initially used numpy.meshgrid to create the 2d array for thetas and phis. Now how can I find the peak points( main lobe and side lobes) from this graph?\nfind_peak function of the scipy.signal library seems to deal with 1d array only.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":98,"Q_Id":72855208,"Users Score":0,"Answer":"Try to use maximum_filter from scipy.ndimage.filters, or even just a simple thresholding could do the trick, provided prior smoothing\/transformations like erosion\/dilation.","Q_Score":0,"Tags":"python","A_Id":72856738,"CreationDate":"2022-07-04T10:21:00.000","Title":"How to find peaks in python for a 3D plot?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"My dataset is composed of records of music streamings from users.\nI have around 100 different music genres and I would like to cluster them depending on the distribution of ages of listeners.\nTo be more clear, ages of users are divided into \"blocks\" (A1: 0-10 years; A2: 11-20 years,..., A6: 61+) and thus an example of the data I would like to cluster is the following:\nPop: 0.05 A2; 0.3 A3; 0.35 A3; 0.2 A4; 0.05 A5; 0.05 A6\nRock: 0.05 A2; 0.2 A3; 0.2 A3; 0.1 A4; 0.15 A5; 0.1 A6\nI would like to obtain clusters of genres with similar distributions.\nHow can I do this in Python? Can I just treat each genre as a datapoint in a 6-dimensional space or should I use something more refined? For example, can I use a custmized distance for distirbutions in a clustering algorithm?\nThank you","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":104,"Q_Id":72855249,"Users Score":0,"Answer":"If you have prior knowledge to design your distance function with, all algorithms from scipy.cluster.hierarchy should support that.\nMy opinion: you should be fine with classic clustering methods from the problem statement, at least one (KMeans, Spectral, DBSCAN ... 
with proper parameters) should do the trick.","Q_Score":1,"Tags":"python,cluster-analysis,distribution","A_Id":72856831,"CreationDate":"2022-07-04T10:24:00.000","Title":"Clustering discrete distributions in Python?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am working on a project where products in production have a defect, but in very rare cases. For example 1\/1,000,000 products have a defect.\nHow could I generate data, in R, Python, or Excel, that would represent samples from this distribution?","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":38,"Q_Id":72868358,"Users Score":1,"Answer":"In R you could do: sample(c(1, rep(0, (1e6)-1)), size = 10)\nYou can adjust the sizing parameter accordingly. With size=10 you'll get 10 samples: [1] 0 0 0 0 0 0 0 0 0 0\nIt'll take a while before you see a 1 with this probability of 1\/1e6.","Q_Score":0,"Tags":"python,r,excel,statistics,statistics-bootstrap","A_Id":72868439,"CreationDate":"2022-07-05T11:04:00.000","Title":"Generate sample data according to a supposed proportion","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am working with a dataset. As a precautionary measure, I created a back-up copy using the following command.\nOrig. Dataframe = df\ndf_copy = df.copy(deep = True)\nNow, I dropped a few columns from the original dataframe (df) by mistake using inplace = True.\nI tried to undo the operation, but no use.\nSo, the question is how to get my original dataframe (df) from the copied dataframe (df_copy)?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":109,"Q_Id":72871156,"Users Score":0,"Answer":"You cannot restore it. Code like the below doesn't work.\ndf = df_copy.copy(deep = True)\nEvery variable which references the original df keeps that reference after the operation above.","Q_Score":0,"Tags":"python,pandas,dataframe,copy,drop","A_Id":72871423,"CreationDate":"2022-07-05T14:30:00.000","Title":"How to undo the changes made in original DataFrame?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to calculate the number of nodes a tree will have at a given depth if the binary tree is not balanced.\nI know that in the case of a perfectly balanced tree, you can use 2^d to calculate the number of nodes, where d is the depth of the tree.\nAssume there is a binary tree. At the root level, it only has one node. Also, assume that the root node only has one child instead of 2. So at the next depth, there is only one node instead of 2, which means that at the next depth, there will be only two nodes instead of 4. In the next depth, there will be eight instead of 16.\nSo yeah, is there any way I can foretell the number of nodes there will be at a given depth based on the number of nodes present or not present in the previous depth?\nAny kind of answer would do if there is a mathematical formula that will help. 
If you know a way I could do it iteratively in breadth-first search order in any programming language that would help too.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":115,"Q_Id":72876079,"Users Score":0,"Answer":"If you know the number of nodes at depth \ud835\udc51 is \ud835\udc5b, then the number of nodes at depth \ud835\udc51 + 1 lies between 0 and 2\ud835\udc5b. The minimum of 0 is reached when all those nodes at depth \ud835\udc5b happen to be leaves, and the maximum of 2\ud835\udc5b is reached when all those nodes at depth \ud835\udc5b happen to have two children each.","Q_Score":1,"Tags":"python,algorithm,math,binary-tree,nodes","A_Id":72878184,"CreationDate":"2022-07-05T22:13:00.000","Title":"How to calculate the number of nodes in an unbalanced binary tree at a given depth","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Could you tell me please if there is a suitable quantizing method in the following case (preferrably implemented in python)?\nThere is an input range where majority of values are within +-2 std from mean, while some huge outliers are present.\nE.g. [1, 2, 3, 4, 5, 1000]\nQuantizing it to output range of e.g. 0-255 would result in loss of precision because of huge outlier 1000 (1, 2, 3, 4, 5 will all become 0).\nHowever, it is important to keep precision for those values which are within several std from mean.\nThrowing away the outliers or replacing them with NaN is not acceptable. They should be kept in some form. Roughly, using example above, output of quantization should be something like [1, 2, 3, 4, 5, 255]\nThank you very much for any input.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":201,"Q_Id":72894055,"Users Score":1,"Answer":"I can think of 2 answers to your question.\n\nYou write \"huge outlier\". The term outlier suggest that this number does not really fit the data. If you really have evidence that this observation is not representative (say because the measurement device was broken temporarily), then I would omit this observation.\nAlternatively, such high values might occur because this variable can truly span a large range of outcomes (e.g. an income variable with Elon Musk in the sample). In this situation I would consider a transformation of the input, say take the logarithm of the numbers first. This would transform your list [1,2,3,4,5,1000] to [0,0.69,1.10,1.39,1.61,6.91]. These values are already closer together.\n\nHowever, regardless of choices 1 or 2, it is probably best to anyways compare the outcomes with and without this outlier. 
You really want to avoid your conclusions being driven by this single observation.","Q_Score":0,"Tags":"python,precision,outliers,quantization,data-transform","A_Id":72894260,"CreationDate":"2022-07-07T07:46:00.000","Title":"Method to quantize a range of values to keep precision when signficant outliers are present in the data","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a data flowing in from STDF file format , which is testing machines output file format used by semiconductor manufacturing industry\nI need to read the file in python and analyze machine output downtime and other details uploaded in the file\nI googled for solutions in Github and other platform , there is no bug free modules available in python and also not documented properly to implement the codes with the existing modules","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":699,"Q_Id":72897110,"Users Score":0,"Answer":"I wrote a commercial module STDF QuickChange that will transform STDF into more usable formats such as CSV. The primary output format has one row per unit and one column per test. It's not python but you could execute it from python and then load the csv in with python. If you are loading datalog data and want the limits also, there are options to store the limits in the first rows.","Q_Score":0,"Tags":"python,pandas,dataframe,file,analytics","A_Id":73333779,"CreationDate":"2022-07-07T11:35:00.000","Title":"How to transfer data from STDF file to Pandas dataframe in python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"When I make a contour plot with Python, I have been using a set_aspect function. Because this allows me to avoid image\/contour-lines distortion caused by changing both axis.\n\n: for example, ax1.set_aspect('equal', 'box-forced')\n\nBut, I just found that the option 'box-forced' is not valid in Python3.\n\nSo my question is, is there any alternative of the 'box-forced' option in Python3? I want exactly the same effect as the ax1.set_aspect('equal', 'box-forced') in Python2.\n\nThanks!","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":66,"Q_Id":72898548,"Users Score":0,"Answer":"I just found that there is a function plt.axis('scaled').\nIt seems that this does what I wanted in a recent version (3.7) of Python3.","Q_Score":0,"Tags":"python-3.x,matplotlib,contour,aspect-ratio","A_Id":72898665,"CreationDate":"2022-07-07T13:17:00.000","Title":"The x and y axis scaling in Contour plot","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to compare two dataframes (df and df2) using .eq(), but it gives me false. I'm sure about the values:\n\nprint(df['ano'])\n0 2021\nName: ano, dtype: int64\n\n\nprint(df2['ano'])\n0 2020\n1 2019\n2 2019\n3 2018\n4 2017\n... \n89 2020\n90 2017\n91 2018\n92 2021\n93 2021\nName: ano, Length: 94, dtype: int64\n\n\nprint(df['ano'].eq(df2['ano']))\n0 False\n1 False\n2 False\n3 False\n4 False\n... 
\n89 False\n90 False\n91 False\n92 False\n93 False\nName: ano, Length: 94, dtype: bool","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":53,"Q_Id":72901872,"Users Score":0,"Answer":"Solution:\ndf = df.drop_duplicates()\nx = 0\ny = 0\ndf = df.reset_index()\ndf2 = df2.reset_index()\nwhile(x < len(df)):\n    while(y < len(df2)):\n        if((df.at[x, 'a'] == df2.at[y, 'a']) & (df.at[x, 'b'] == df2.at[y, 'b']) & (df.at[x, 'c'] == df2.at[y, 'c'])):\n            print('found')\n        y += 1\n    x += 1\n    y = 0","Q_Score":0,"Tags":"python-3.x,pandas,dataframe","A_Id":72917331,"CreationDate":"2022-07-07T17:13:00.000","Title":"Problem with .eq() when comparing dataframes","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"For example, some numpy functions want me to specify datatype, like np.fromiter(). What am I supposed to choose for floats so that it is consistent with everything else I have?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":87,"Q_Id":72910434,"Users Score":0,"Answer":"Use float, the Python built-in type. NumPy understands that (and will use C doubles under the hood, which is exactly what Python itself does as well).","Q_Score":0,"Tags":"python,numpy","A_Id":72910840,"CreationDate":"2022-07-08T11:00:00.000","Title":"What is the default datatype for numpy floats?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"How can I generate a random field with GStools for my correlation function: c(x)= (x)^(a-2)\/(x^2 + 1)^(a\/2) for different values of x and a=0.5?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":41,"Q_Id":72926277,"Users Score":0,"Answer":"This is not possible with GSTools, since your covariance model is unbound, which refers to an infinite variance of the random field. GSTools only supports random fields with finite mean and variance.","Q_Score":2,"Tags":"python,r","A_Id":73461400,"CreationDate":"2022-07-10T05:11:00.000","Title":"generate random number with GStools","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"While trying to do ndimage.convolve on a big numpy.memmap, an exception occurs:\nException has occurred: _ArrayMemoryError\nUnable to allocate 56.0 GiB for an array with shape (3710, 1056, 3838) and data type float32\nSeems that convolve creates a regular numpy array which won't fit into memory.\nCould you tell me please if there is a workaround?\nThank you for any input.","AnswerCount":2,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":73,"Q_Id":72928565,"Users Score":2,"Answer":"Scipy and Numpy often create new arrays to store the output value returned. This temporary array is stored in RAM even when the array is stored on a storage device and accessed with memmap. There is an output parameter to control that in many functions (including ndimage.convolve). However, this does not prevent internal in-RAM temporary arrays from being created (though such arrays are not very frequent and often not huge). 
There is not much more you can do if the output parameter is not present or a big internal is created. The only thing to do is to write your own implementation that does not allocate huge in-RAM array. C modules, Cython and Numba are pretty good for this. Note that doing efficient convolutions is far from being simple when the kernel is not trivial and there are many research paper addressing this problem.","Q_Score":0,"Tags":"python,numpy,convolution,ndimage,numpy-memmap","A_Id":72928780,"CreationDate":"2022-07-10T12:44:00.000","Title":"Performing ndimage.convolve on big numpy.memmap: Unable to allocate 56.0 GiB for an array","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to detect the unique\/foreign objects in a conveyor. The problem is in our case is, we don't know which type of featured object is passes through conveyor along with raw material. I am familiar with object detection techniques such as yolov and detectron which can detect object based on the feature of object that we annotate. But in our case we don't know the feature of object.\nI am wondering for some generic object proposal models for detection. Please give some idea is there any pre-trained unsupervised models which suits for this? or some methods or algorithm that what can i go with?. I hope i had explained my problem as much enough. Thanks in advance.","AnswerCount":2,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":258,"Q_Id":72935129,"Users Score":2,"Answer":"I think I understood well your issue...\nIf you do not want to train an object detection model because you may do not have the bounding boxes corresponding to the objects, you have several options. However, I do not think there is a pretrained model that fits on your problem since you should fine-tune it, and therefore, you should have some annotations.\n\nOne think you could do, as Virgaux Pierre said, you could use some classic clustering segmentation.\nOn the other hand, you could use a weakly-supervised approach which it only needs image-level labels, instead of the bounding boxes. This approach could fit well if you do not need high mAP. You could use CAM, GradCAM or other techniques to obtain activation maps. 
Furthermore, this approaches are easy to implement with a simple NN and some forward\/backward hooks.\n\nHope it helps.","Q_Score":2,"Tags":"python,object-detection,unsupervised-learning","A_Id":72935735,"CreationDate":"2022-07-11T07:35:00.000","Title":"Unsupervised object detection","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to use XGBRegressor for my data but keep getting the above error when doing a model.fit.\nI have tried:\nnp.any(np.isnan(df))\nnp.all(np.isfinite(df))\nwhich are both true.\nI tried getting rid of the inf and null values using:\ndf.replace([np.inf, -np.inf], np.nan, inplace=True)\ndf.fillna(0, inplace=True)\nbut the error still occurs.\nnp.all(np.isfinite(df)) is still showing true.\nMost errors I found on the website says \"Input contains..\" and not \"Label contains..\"","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":210,"Q_Id":72937490,"Users Score":0,"Answer":"This is a long shot, but I was having a similar error and couldn't figure it out. It turns out I was doing a log transform right before I tossed my data into the regressor, and I had negative values in my output that were going to infinity. I didn't catch it because I looked for NAs\/infinite values before it hit the log transform part of the pipeline.","Q_Score":0,"Tags":"python,pandas,numpy,xgboost","A_Id":72968427,"CreationDate":"2022-07-11T11:01:00.000","Title":"XGBoostError:[18:46:19] D:\\Build\\xgboost\\xgboost-1.6-1.git\\src\\data\\data.cc:487:Check failed: valid: Label contains NaN, infinity or a value too large","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Typically the forward function in nn.module of pytorch computes and returns predictions for inputs happening in the forward pass. Sometimes though, intermediate computations might be useful to return. For example, for an encoder, one might need to return both the encoding and reconstruction in the forward pass to be used later in the loss.\nQuestion: Can Pytorch's nn.Module's forward function, return multiple outputs? Eg a tuple of outputs consisting predictions and intermediate values?\nDoes such a return value not mess up the backward propagation or autograd?\nIf it does, how would you handle cases where multiple functions of input are incorporated in the loss function?\n(The question should be valid in tensorflow too.)","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":223,"Q_Id":72940912,"Users Score":2,"Answer":"\"The question should be valid in Tensorflow too\", but PyTorch and Tensorflow are different frameworks. I can answer for PyTorch at least.\nYes you can return a tuple containing any final and or intermediate result. 
And this does not mess up back propagation since the graph is saved implicitly from the tensors outputs using callbacks and cached tensors.","Q_Score":0,"Tags":"python,pytorch,autograd","A_Id":72944357,"CreationDate":"2022-07-11T15:22:00.000","Title":"Forward function with multiple outputs?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a matrix m = np.array([[3,4], [5,6], [7,5]]) and a vector v = np.array([1,2]) and these two tensors can be multiplied.\nFor multiplication of the above two tensors, no. of columns of m must be equal to no. of rows of v\nThe shapes of m and v are (3,2) and (2,) respectively.\nHow is the multiplication possible, if m has 3 rows and 2 columns whereas v has 1 row and 2 columns?","AnswerCount":2,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":214,"Q_Id":72965968,"Users Score":2,"Answer":"In NumPy, I would recommend not thinking too much about \"rows\" and \"columns\"\nAn array in numpy can have any number of dimensions - you can make 1-dimensional, 2-dimensional or 100-dimensional arrays. A 100-dimensional array doesn't have \"rows\" and \"columns\" and neither does a 1-dimensional array.\nThe simple rule for multiplying 1- or 2-dimensional arrays is: the last axis \/ dimension of the first array has to have the same size as the first axis \/ dimension of the second array.\nSo you can multiply:\n\na (3, ) array by a (3, 2) array\na (3, 2) array by a (2, 3) array\na (2, 3) array by a (3, ) array","Q_Score":1,"Tags":"python,numpy,numpy-ndarray,tensor","A_Id":72966392,"CreationDate":"2022-07-13T12:02:00.000","Title":"Matrix by Vector multiplication using numpy dot product","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have retail sales data of the whole Germany, for example beer revenue. Now I want to find a way to divide that number into 596 cities of Germany based on the GDP per capita of each city and consumer spending of each city. So after that I can have the beer revenue of each single city in Germany.\nMy assumption is: city beer = city consumer spending * x + city GDP per cap * y. and then sum of city beer = national beer\nCould you please advice which kind of algorithm or a way to do it in Python?\nThank you so much.","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":60,"Q_Id":72976377,"Users Score":0,"Answer":"Your assumption is not so good. Some cities may spend a bigger fraction of their total consumation in beer.\nI think a better assumption is that it's a variable fraction of the total consumation in beer, let's say, city i consumes a fraction xi of national consumation in beer, where xi is somehow dependent on the GDP and on the city consumation.\nTo find xi, firstly scale the GDP to be in [delta, 1-delta], where delta is a positive quantity very close to zero, and keep their relative order. To do that, consider the biggest GDP is GDPmax and the minimum GDP is GDPmin. 
Then, map each GDPi to\nscaleGDPi = [(GDPi - GDPmin) * (1 - 2 * delta)\/(GDPmax-GDPmin)] + delta.\nIn a similar way, also scale the consumation to be in [delta, 1 - delta].\nThen, consider xi = scaleGDPi * scaleConsumationi * x and you get (city beer)i = scaleGDPi * scaleConsumationi * x * national beer\nBy imposing that the sum of city beer is equal to national beer, you get:\nx = 1 \/ (sum scaleGDPi * scaleConsumationi).\nSo, city beer = (scaleGDPi * scaleConsumationi * national beer)\/(sum scaleGDPi * scaleConsumationi).\nI think this would be a more adequate modelization of your problem.","Q_Score":0,"Tags":"python","A_Id":72977168,"CreationDate":"2022-07-14T06:53:00.000","Title":"Divide a number to many groups based on several factor using Python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am performing multi-class text classification using BERT in python. The dataset that I am using for retraining my model is highly imbalanced. Now, I am very clear that the class imbalance leads to a poor model and one should balance the training set by undersampling, oversampling, etc. before model training.\nHowever, it is also a fact that the distribution of the training set should be similar to the distribution of the production data.\nNow, if I am sure that the data thrown at me in the production environment will also be imbalanced, i.e., the samples to be classified will likely belong to one or more classes as compared to some other classes, should I balance my training set?\nOR\nShould I keep the training set as it is as I know that the distribution of the training set is similar to the distribution of data that I will encounter in the production?\nPlease give me some ideas, or provide some blogs or papers for understanding this problem.","AnswerCount":3,"Available Count":2,"Score":0.0665680765,"is_accepted":false,"ViewCount":273,"Q_Id":72976599,"Users Score":1,"Answer":"P(label | sample) is not the same as P(label).\nP(label | sample) is your training goal.\nIn the case of gradient-based learning with mini-batches on models with large parameter space, rare labels have a small footprint on the model training. So, your model fits in P(label).\nTo avoid fitting to P(label), you can balance batches.\nOverall batches of an epoch, data looks like an up-sampled minority class. The goal is to get a better loss function that its gradients move parameters toward a better classification goal.\nUPDATE\nI don't have any proof to show this here. It is perhaps not an accurate statement. With enough training data (with respect to the complexity of features) and enough training steps you may not need balancing. But most language tasks are quite complex and there is not enough data for training. That was the situation I imagined in the statements above.","Q_Score":2,"Tags":"python,nlp,text-classification,multiclass-classification","A_Id":72980134,"CreationDate":"2022-07-14T07:12:00.000","Title":"Is it necessary to mitigate class imbalance problem in multiclass text classification?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am performing multi-class text classification using BERT in python. 
The dataset that I am using for retraining my model is highly imbalanced. Now, I am very clear that the class imbalance leads to a poor model and one should balance the training set by undersampling, oversampling, etc. before model training.\nHowever, it is also a fact that the distribution of the training set should be similar to the distribution of the production data.\nNow, if I am sure that the data thrown at me in the production environment will also be imbalanced, i.e., the samples to be classified will likely belong to one or more classes as compared to some other classes, should I balance my training set?\nOR\nShould I keep the training set as it is as I know that the distribution of the training set is similar to the distribution of data that I will encounter in the production?\nPlease give me some ideas, or provide some blogs or papers for understanding this problem.","AnswerCount":3,"Available Count":2,"Score":0.0665680765,"is_accepted":false,"ViewCount":273,"Q_Id":72976599,"Users Score":1,"Answer":"This depends on the goal of your classification:\n\nDo you want a high probability that a random sample is classified correctly? -> Do not balance your training set.\nDo you want a high probability that a random sample from a rare class is classified correctly? -> balance your training set or apply weighting during training increasing the weights for rare classes.\n\nFor example in web applications seen by clients, it is important that most samples are classified correctly, disregarding rare classes, whereas in the case of anomaly detection\/classification, it is very important that rare classes are classified correctly.\nKeep in mind that a highly imbalanced dataset tends to always predicting the majority class, therefore increasing the number or weights of rare classes can be a good idea, even without perfectly balancing the training set..","Q_Score":2,"Tags":"python,nlp,text-classification,multiclass-classification","A_Id":72979672,"CreationDate":"2022-07-14T07:12:00.000","Title":"Is it necessary to mitigate class imbalance problem in multiclass text classification?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have 6 points with their coordinates in the Cartesian plane, XY, placed on one side of a circle. Using the least square method it is relatively easy to fit a circle to those 6 points and to find the radius and the center of the fitted circle again in XY coordinates..\nHowever, I also have Altitude Azimuth coordinates for those 6 points, because those points are on the sky, so I was wondering is it possible to fit a curve to those curved coordinates and then find the center of that circle.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":94,"Q_Id":72981025,"Users Score":0,"Answer":"Project your points on the unit sphere and compute the best fitting plane. The normal vector of the plane points towards the center of that circle. 
The radius of your circle will be equal to sqrt(1-d\u00b2) if d is the distance between the plane and the origin or acos(d) if you want the angle between the center and a point of the circle (since we're doing spherical geometry).\nEDIT : do an orthogonal regression because if you don't, the z-axis could be favored over the others or vice-versa.","Q_Score":1,"Tags":"python,math,optimization,physics,curve-fitting","A_Id":72994215,"CreationDate":"2022-07-14T13:06:00.000","Title":"How to find center of the circle if the data points are in curved coordinates system(Horizontal - AltAz)","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"My problem is as follows. I have a 2d image of some tissue and a 3d stack of the same region of the tissue and plus more tissue that does not go into my 2d image. Now, the 3d stack is slightly rotated with respect to the 2d image, but also has some local deformation, so I can't simply apply a rigid rotation transformation. I can scroll through the 3d stack and find individual features that are common to the 2d image. I want to apply a nonlinear transformation such that in the end I can find my source 2d image as a flat plane in the 2d stack.\nMy intuition is that I should use thin plate spline for this, may the scipy RBF interpolator, but my brain stops working when I try to implement it. I would use as input arguments let's say 3 points (x1, y1, 0), (x2, y2, 0) and (x3, y3, 0) with some landmarks on the 2d image and then (x1', y1', z1'), (x2', y2', z2') and (x3', y3', z3') for the corresponding points into the 3d stack. And then I get a transformation but how do I actually apply this to an image? The bit that confuses me is that I'm working with a 3D matrix of intensities, not a meshgrid.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":123,"Q_Id":72985698,"Users Score":0,"Answer":"scipy RBF is designed to interpolate scattered data, it's just a spline interpolator. To warp a domain, however, you need to find another library or write TPS (thin plate spline) yourself; scipy doesn't do it. I recommend you check VTK, for example. You feed your landmark information of the reference image and the target image to a vtkThinPlateSplineTransform object. Then you can get the transformation matrix and feed it to a vtkImageReslice object, which warps your image accordingly.","Q_Score":1,"Tags":"python,scipy,interpolation,spline","A_Id":74988616,"CreationDate":"2022-07-14T19:25:00.000","Title":"Thin plate spline interpolation of 3D stack python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've tried finding information regarding this online but the word overwrite does not show up at all in the official Tensorflow documentation and all the Stack Overflow questions are related to changing the number of copies saved by the model.\nI would just like to know whether or not the save function overwrites at all. If I re-train a model and would like to re-run the save function will the newer model load in when I use the load_model function? Or will it be a model that is trained on the same data twice? 
Do older iterations get stored somewhere?","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":64,"Q_Id":72985903,"Users Score":0,"Answer":"You can use\nmodel.save('.\/model.h5')\nwhich will save the model to a file\nand\nmodel = tf.keras.models.load_model('.\/model.h5')\nto load the model","Q_Score":0,"Tags":"python,tensorflow","A_Id":72986147,"CreationDate":"2022-07-14T19:45:00.000","Title":"Does the TensorFlow save function automatically overwrite old models? If not, how does the save\/load system work?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've tried finding information regarding this online but the word overwrite does not show up at all in the official Tensorflow documentation and all the Stack Overflow questions are related to changing the number of copies saved by the model.\nI would just like to know whether or not the save function overwrites at all. If I re-train a model and would like to re-run the save function will the newer model load in when I use the load_model function? Or will it be a model that is trained on the same data twice? Do older iterations get stored somewhere?","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":64,"Q_Id":72985903,"Users Score":0,"Answer":"I think Eyal's answer is a good point to start. However, if you want to be sure you can let your program delete the previous model or change it's name on the fly. I also observed different results when deleting a model and not, but this could also be effects of the different training process, due to random initialization and updating the weights.","Q_Score":0,"Tags":"python,tensorflow","A_Id":74481711,"CreationDate":"2022-07-14T19:45:00.000","Title":"Does the TensorFlow save function automatically overwrite old models? If not, how does the save\/load system work?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I used OpenCV's connectedComponentsWithStats function to do connected component labeling on an image. I would now like to be able to access all the pixels that have a certain label, or check the label that is assigned to a particular pixel while iterating through the image. How can I do this? I plan on iterating through the image using a nested for loop.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":126,"Q_Id":72988695,"Users Score":1,"Answer":"connectedComponents* literally gives you a \"labels map\". You look up the pixel's position in there and you get the label for that pixel.\nIf you need a mask for one specific label, you calculate mask = (labels_map == specific_label)\nDo not \"iterate\" through images. Python loops are slow. Whatever you do, consider how to express that with library functions (numpy, OpenCV, ...). 
There are ways to speed up python loops but that's advanced and likely not the right solution for your problem.","Q_Score":0,"Tags":"python,numpy,opencv,pixel,connected-components","A_Id":72993128,"CreationDate":"2022-07-15T03:15:00.000","Title":"How do you access a pixel's label ID that is given to each pixel after connected component labeling?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've done research and can't find anything that has solved my issue. I need a python script to read csv files using a folder path. This script needs to check for empty cells within a column and then display a popup statement notifying users of the empty cells. Anything helps!!","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":282,"Q_Id":72997237,"Users Score":0,"Answer":"Use the pandas library\npip install pandas\nYou can import the excel file as a DataFrame and check each cell with loops.","Q_Score":1,"Tags":"python,csv","A_Id":72997258,"CreationDate":"2022-07-15T16:43:00.000","Title":"Python script to check csv columns for empty cells that will be used with multiple excels","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"In pandas, how can I filter for rows where ALL values are higher than a certain threshold?\nSay I have a table that looks as follows:\n\n\n\n\nCity\nBird species one\nBird species two\nBird Species three\nBird species four\n\n\n\n\nA\n7\n11\n13\n16\n\n\nB\n11\n12\n13\n14\n\n\nC\n20\n21\n22\n23\n\n\nD\n8\n6\n4\n5\n\n\n\n\nNow I only want to get rows that have ALL COUNTS greater than 10. Here that would be Row B and Row C.\nSo my desired output is:\n\n\n\n\nCity\nBird species one\nBird species two\nBird Species three\nBird species four\n\n\n\n\nB\n11\n12\n13\n14\n\n\nC\n20\n21\n22\n23\n\n\n\n\nSo, even if a single values is false I want that row dropped. Take for example in the example table, Row A has only one value less than 10 but it is dropped.\nI tried doing this with df.iloc[:,1:] >= 10 which creates a boolean table and if I do df[df.iloc[:,1:] >= 10] it gives me table that shows which cells are satisfying the condition but since the first column is string all of it labelled false and I lose data there and turns out the cells that are false stay in there as well.\nI tried df[(df.iloc[:,2:] >= 10).any(1)] which is the same as the iloc method and does not remove the rows that have at least one false value.\nHow can I get my desired output? Please note I want to keep the the first column.\nEdit: The table above is an example table, that is a scaled down version of my real table. My real table has 109 columns and is the first of many future tables. Supplying all column names by hand is not a valid solution at all and makes scripting unfeasible.","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":1613,"Q_Id":72998419,"Users Score":1,"Answer":"df[(df[df.columns[1:]]>x).all(axis=1)] where x should be replaced with the values one wants to test turns out to be the easiest answer for me. This makes it possible to parse the dataframe without having to manually type out the column names. 
This also assumes that all of your columns other than the first one are integers. Please make note of the other answer above that tells you how to make note of dtypes if you have mixed data.\nI only slightly changed Rodrigo Laguna answer above.","Q_Score":1,"Tags":"python,pandas,dataframe","A_Id":72998940,"CreationDate":"2022-07-15T18:43:00.000","Title":"In pandas, how can I filter for rows where ALL values are higher than a certain threshold? And keep the index columns with the output?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"hello im on the path of learning the python and i am struggling to understand this problem can you please help me to solve this problem\nPrint out the 50th row of np_baseball.\nwhy the answer for this command is [49, :]\nFrom my perspective if the asking for the 50th it should be just [49] why there is additional :\nWill be extremely glad for your respond","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":166,"Q_Id":73004655,"Users Score":0,"Answer":"baseball is available as a regular list of lists\nImport numpy package\nimport numpy as np\nCreate np_baseball (2 cols)\nnp_baseball = np.array(baseball)\nPrint out the 50th row of np_baseball\nprint(np_baseball[49:50])\nSelect the entire second column of np_baseball: np_weight_lb\nnp_weight_lb=np_baseball[:,1]\nPrint out height of 124th player\nprint(np_baseball[123, 0])","Q_Score":0,"Tags":"python,numpy,2d,subset","A_Id":75346242,"CreationDate":"2022-07-16T13:36:00.000","Title":"Subsetting 2D NumPy Arrays","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am working on a project where I am combining 300,000 small files together to form a dataset to be used for training a machine learning model. Because each of these files do not represent a single sample, but rather a variable number of samples, the dataset I require can only be formed by iterating through each of these files and concatenating\/appending them to a single, unified array. With this being said, I unfortunately cannot avoid having to iterate through such files in order to form the dataset I require. As such, the process of data loading prior to model training is very slow.\nTherefore my question is this: would it be better to merge these small files together into relatively larger files, e.g., reducing the 300,000 files to 300 (merged) files? I assume that iterating through less (but larger) files would be faster than iterating through many (but smaller) files. Can someone confirm if this is actually the case?\nFor context, my programs are written in Python and I am using PyTorch as the ML framework.\nThanks!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":448,"Q_Id":73006936,"Users Score":0,"Answer":"Usually working with one bigger file is faster than working with many small files.\nIt needs less open, read, close, etc. 
functions which need time to\n\ncheck if file exists,\ncheck if you have privilege to access this file,\nget file's information from disk (where is beginning of file on disk, what is its size, etc.),\nsearch beginning of file on disk (when it has to read data),\ncreate system's buffer for data from disk (system reads more data to buffer and later function read() can read partially from buffer instead of reading partially from disk).\n\nUsing many files it has to do this for every file and disk is much slower than buffer in memory.","Q_Score":0,"Tags":"python,machine-learning,pytorch,iteration,dataloader","A_Id":73011459,"CreationDate":"2022-07-16T19:02:00.000","Title":"Is it more beneficial to read many small files or fewer large files of the exact same data?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"So I installed vscode and anaconda on macos. I'm not a very experienced programmer and I can't figure out what is wrong. When I try to import numpy or pandas it says module not found. Any help to get this working is appreciated?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":46,"Q_Id":73013473,"Users Score":0,"Answer":"In VSCode you need to specify the interpreter you want to use to run your code. To do so, either click the button next to \"Python\" in the bottom right of the UI or search (CMD shift P) \"Python: select interpreter,\" then select the right interpreter \/ environment you want to use.","Q_Score":0,"Tags":"python,pandas,numpy,visual-studio-code,anaconda","A_Id":73013544,"CreationDate":"2022-07-17T16:23:00.000","Title":"VSCode, Anaconda on MacOS. Module not found (Pandas and Numpy for eg.)","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I stored some financial market data in a Polars DataFrame. As for analysis, it is is fast to run some groupby(\"date\").agg() action.\nBut in a realtime scenario , the new data is coming time by time, I don't want to concat the new data with old data again and again, it is slow and use a lot of memory. 
So is there a blazing fast way to spilt the old data DataFrame into small DataFrame groupby datetime column which stored in a vector or hashmap, so when the new data comes, I just push the new into vector for future calculation?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":135,"Q_Id":73019936,"Users Score":0,"Answer":"Polars has a DataFrame::partition_by function for this.","Q_Score":0,"Tags":"dataframe,rust,python-polars,rust-polars","A_Id":73020125,"CreationDate":"2022-07-18T09:17:00.000","Title":"How to spilt a big DataFrame into Vec by group in Polars","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Trying to read excel table that looks like this:\n\n\n\n\n\nB\nC\n\n\n\n\nA\ndata\ndata\n\n\ndata\ndata\ndata\n\n\n\n\nbut read excel doesn't recognizes that one column doesn't start from first row and it reads like this:\n\n\n\n\nUnnamed : 0\nB\nC\n\n\n\n\nA\ndata\ndata\n\n\ndata\ndata\ndata\n\n\n\n\nIs there a way to read data like i need? I have checked parameters like header = but thats not what i need.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":147,"Q_Id":73023645,"Users Score":0,"Answer":"You can skip automatic column labeling with something like pd.read_excel(..., header=None)\nThis will skip random labeling.\nThen you can use more elaborate computation (e.g. first non empty value) to get the labels such as\ndf.apply(lambda s: s.dropna().reset_index(drop=True)[0])","Q_Score":1,"Tags":"python,excel,pandas","A_Id":73041155,"CreationDate":"2022-07-18T14:04:00.000","Title":"pandas read excel without unnamed columns","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"using pydiffmap, I could find a nice low dimension Manifold in my data, and extract what seems to be meaningful low dimension components.\nI would like now to reverse the operator, and project my data back to my original high dimensional space keeping only these few important dimensions I could identify.\nFirst, is this mathematically possible? And if so how to do it?\nThanks a lot!","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":78,"Q_Id":73025089,"Users Score":0,"Answer":"I just went into the Diffusion algorithm behind the package, and realized that there is no guarantee that you can go from a vector in the diffusion space back into the data space.\nThis is because the diffusion space represent the distances to the original data points. 
So if at least two points are different, the null vector in the diffusion space (at distance 0 of all original data points in the data space) will have no equivalent in the data space.\nHope this can help someone else!","Q_Score":0,"Tags":"python,reverse,dimensionality-reduction","A_Id":73101038,"CreationDate":"2022-07-18T15:45:00.000","Title":"pydiffmap: How to reverse Diffusion Map Embedding and reconstruct original variables from several principal components?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Check application requirements \nCompile platform \nRun '\/usr\/bin\/python3 -m pythonforandroid.toolchain create --dist_name=demo3 --bootstrap=sdl2 --requirements=python3,kivy,pillow,kivymd,asyncio,bleak --arch armeabi-v7a --copy-libs --color=always --storage-dir=\"\/content\/.buildozer\/android\/platform\/build-armeabi-v7a\" --ndk-api=21 --ignore-setup-py --debug' \nCwd \/content\/.buildozer\/android\/platform\/python-for-android \n\n[INFO]: Will compile for the following archs: armeabi-v7a \n[INFO]: Found Android API target in $ANDROIDAPI: 27 \n[INFO]: Available Android APIs are (27) \n[INFO]: Requested API target 27 is available, continuing.\n[INFO]: Found NDK dir in $ANDROIDNDK: \/root\/.buildozer\/android\/platform\/android-ndk-r19c \n[INFO]: Found NDK version 19c \n[INFO]: Getting NDK API version (i.e. minimum supported API) from user argument \nTraceback (most recent call last): \nFile \"\/usr\/lib\/python3.7\/runpy.py\", line 193, in _run_module_as_main\n\"main\", mod_spec) \nFile \"\/usr\/lib\/python3.7\/runpy.py\", line 85, in _run_code\nexec(code, run_globals) \nFile \"\/content\/.buildozer\/android\/platform\/python-for-android\/pythonforandroid\/toolchain.py\", line 1294, in \nmain() \nFile \"\/content\/.buildozer\/android\/platform\/python-for-android\/pythonforandroid\/entrypoints.py\", line 18, in main\nToolchainCL() \nFile \"\/content\/.buildozer\/android\/platform\/python-for-android\/pythonforandroid\/toolchain.py\", line 728, in init\ngetattr(self, command)(args) \nFile \"\/content\/.buildozer\/android\/platform\/python-for-android\/pythonforandroid\/toolchain.py\", line 144, in wrapper_func\nuser_ndk_api=self.ndk_api) \nFile \"\/content\/.buildozer\/android\/platform\/python-for-android\/pythonforandroid\/build.py\", line 423, in prepare_build_environment\nself.ccache = sh.which(\"ccache\") \nFile \"\/usr\/local\/lib\/python3.7\/dist-packages\/sh-1.14.3-py3.7.egg\/sh.py\", line 1524, in call\nreturn RunningCommand(cmd, call_args, stdin, stdout, stderr) \nFile \"\/usr\/local\/lib\/python3.7\/dist-packages\/sh-1.14.3-py3.7.egg\/sh.py\", line 788, in init\nself.wait() \nFile \"\/usr\/local\/lib\/python3.7\/dist-packages\/sh-1.14.3-py3.7.egg\/sh.py\", line 845, in wait\nself.handle_command_exit_code(exit_code) \nFile \"\/usr\/local\/lib\/python3.7\/dist-packages\/sh-1.14.3-py3.7.egg\/sh.py\", line 869, in handle_command_exit_code \nraise exc \nsh.ErrorReturnCode_1:\nRAN: \/usr\/bin\/which ccache\nSTDOUT:\nSTDERR:\n\nCommand failed: \/usr\/bin\/python3 -m pythonforandroid.toolchain create --dist_name=demo3 --bootstrap=sdl2 --requirements=python3,kivy,pillow,kivymd,asyncio,bleak --arch armeabi-v7a --copy-libs --color=always --storage-dir=\"\/content\/.buildozer\/android\/platform\/build-armeabi-v7a\" --ndk-api=21 --ignore-setup-py --debug\n\nENVIRONMENT: \nCUDNN_VERSION = '8.0.5.39' 
\nPYDEVD_USE_FRAME_EVAL = 'NO' \nLD_LIBRARY_PATH = '\/usr\/local\/nvidia\/lib:\/usr\/local\/nvidia\/lib64' \nCLOUDSDK_PYTHON = 'python3' \nLANG = 'en_US.UTF-8' \nENABLE_DIRECTORYPREFETCHER = '1' \nHOSTNAME = 'ca63256296ed' \nOLDPWD = '\/' \nCLOUDSDK_CONFIG = '\/content\/.config' \nUSE_AUTH_EPHEM = '1' \nNVIDIA_VISIBLE_DEVICES = 'all' \nDATALAB_SETTINGS_OVERRIDES = '{\"kernelManagerProxyPort\":6000,\"kernelManagerProxyHost\":\"172.28.0.3\",\"jupyterArgs\":[\"--ip=172.28.0.2\"],\"debugAdapterMultiplexerPath\":\"\/usr\/local\/bin\/dap_multiplexer\",\"enableLsp\":true}' \nENV = '\/root\/.bashrc' \nPAGER = 'cat' \nNCCL_VERSION = '2.7.8' \nTF_FORCE_GPU_ALLOW_GROWTH = 'true' \nJPY_PARENT_PID = '41' \nNO_GCE_CHECK = 'False' \nPWD = '\/content' \nHOME = '\/root' \nLAST_FORCED_REBUILD = '20220712' \nCLICOLOR = '1' \nDEBIAN_FRONTEND = 'noninteractive' \nLIBRARY_PATH = '\/usr\/local\/cuda\/lib64\/stubs' \nGCE_METADATA_TIMEOUT = '3' \nGLIBCPP_FORCE_NEW = '1' \nTBE_CREDS_ADDR = '172.28.0.1:8008' \nTERM = 'xterm-color' \nSHELL = '\/bin\/bash' \nGCS_READ_CACHE_BLOCK_SIZE_MB = '16' \nPYTHONWARNINGS = 'ignore:::pip._internal.cli.base_command' \nMPLBACKEND = 'module:\/\/ipykernel.pylab.backend_inline' \nCUDA_VERSION = '11.1.1' \nNVIDIA_DRIVER_CAPABILITIES = 'compute,utility' \nSHLVL = '1' \nPYTHONPATH = '\/env\/python' \nNVIDIA_REQUIRE_CUDA = ('cuda>=11.1 brand=tesla,driver>=418,driver<419 '\n'brand=tesla,driver>=440,driver<441 brand=tesla,driver>=450,driver<451') \nTBE_EPHEM_CREDS_ADDR = '172.28.0.1:8009' \nCOLAB_GPU = '0' \nGLIBCXX_FORCE_NEW = '1' \nPATH = '\/root\/.buildozer\/android\/platform\/apache-ant-1.9.4\/bin:\/opt\/bin:\/usr\/local\/nvidia\/bin:\/usr\/local\/cuda\/bin:\/usr\/local\/sbin:\/usr\/local\/bin:\/usr\/sbin:\/usr\/bin:\/sbin:\/bin:\/tools\/node\/bin:\/tools\/google-cloud-sdk\/bin' \nLD_PRELOAD = '\/usr\/lib\/x86_64-linux-gnu\/libtcmalloc.so.4' \nGIT_PAGER = 'cat' \n_ = '\/usr\/local\/bin\/buildozer' \nPACKAGES_PATH = '\/root\/.buildozer\/android\/packages' \nANDROIDSDK = '\/root\/.buildozer\/android\/platform\/android-sdk' \nANDROIDNDK = '\/root\/.buildozer\/android\/platform\/android-ndk-r19c' \nANDROIDAPI = '27' \nANDROIDMINAPI = '21' \n\nBuildozer failed to execute the last command\nThe error might be hidden in the log above this error\nPlease read the full log, and search for it before\nraising an issue with buildozer itself.\nIn case of a bug report, please add a full log with log_level = 2","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":344,"Q_Id":73029750,"Users Score":0,"Answer":"There is a problem with the python-for-android build.py file which they currently fixing(something related to sh, and shoudl be fixed soon)\nmeanwhile as a workaround it is suggested to uncomment p4a.branch = master in the buildozer spec file and change the \"master\" to \"develop\"","Q_Score":1,"Tags":"python,android,buildozer","A_Id":73044331,"CreationDate":"2022-07-18T23:58:00.000","Title":"Buildozer stopped working all of a sudden despite always uploading the same files to Colab","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"If there is no predefined column types(nominal\/interval) stored and some of variables are encoded as 1,2,3... in place of actual Categories (e.g. Good, better, bad....) 
if we see, automatically it may be classified as interval variables but actually they are nominal variables that are encoded.\nIs there any way to identify such variables?\nI thought of cardinality but threshold becomes an issue here please suggest some other solution.\nI'm good with python solution but if someone can give idea on SAS will be helpful :)","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":253,"Q_Id":73031357,"Users Score":0,"Answer":"as a Data Analyst, its your call to consider the categorical column as nominal or ordinal (depending on the data).\n\nif nominal data --> use dummy variable.(or one hot encoding)\n\nif ordinal data --> use map() function for label-encoding.\n\nif nominal data and cardinality is high --> encoding according to frequency count (lets say there are 30 different categories in a column, there are 1000 rows , 3 categories have high frequency count ,so these will be in separate 3 categories, other 17 have very low, so put all these 17 in 1 single category. ie. There will be only 4 categories, not 30).\n\n\napart from object type(string) columns, To identify categorical variables:\nfrequency count plays very important role for numeric columns.","Q_Score":0,"Tags":"python,pandas,statistics,sas,sas-macro","A_Id":73037116,"CreationDate":"2022-07-19T05:03:00.000","Title":"Classify variables in Nominal\/ordinal\/interval\/binary in case user inputs not provided?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a table that looks like this\n\n\n\n\nindex\nGroup\nrank\nValues\n\n\n\n\n0\na\n2\n344.0\n\n\n1\na\n3\nNaN\n\n\n2\nb\n1\n455.0\n\n\n3\na\n1\nNaN\n\n\n4\nb\n2\nNaN\n\n\n\n\nI want to group data by 'Group', then sort according to 'rank' and then bfill only for rank == 1. 
The dataset is very big so I want to avoid loops.\nI tried\ntemp[temp['rank']<=2].sort_values('rank', ascending = True).groupby('Group').bfill(axis='rows', inplace = True)\nbut this gives me\n\"backfill() got an unexpected keyword argument 'axis'\"","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":45,"Q_Id":73032109,"Users Score":0,"Answer":"df.sort_values(by = 'rank', inplace = True)\ndf = df.assign(Values1 = lambda x: x['rank'] == 1).fillna(method = 'bfill')\ndf.groupby(by = 'Group')['Values']","Q_Score":1,"Tags":"python,pandas,dataframe,group-by","A_Id":73032590,"CreationDate":"2022-07-19T06:30:00.000","Title":"Bfill on Groupby object","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to train my custom license plate using Yolov5, but I have a problem.\nMy problem is that my dataset is separated for each character and I have no idea how to make annotation file suitable for Yolo which is as follows.\nBecause what I've seen so far, for triainig, you definitely need the entire license plate, which can be used to label each of the characters.\nAnd my question is, if I train these images, can I achieve a license plate recognition system?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":163,"Q_Id":73034619,"Users Score":0,"Answer":"With Yolov5, you can achieve a good licence plate detection system, Yolov5 wont be reach the succes to recognite the licence plates itself. After the detection(with Yolov5) you can extract the information from bounding boxes and use it for recognition.","Q_Score":0,"Tags":"python,yolo,yolov5,lpr","A_Id":73080455,"CreationDate":"2022-07-19T09:38:00.000","Title":"train custom plate recognition using yolo","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to retrieve the industry from occupation values. 
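A small sketch, assuming a toy frame shaped like the table in the question above, of one way to do the group-wise backfill that the answer hints at; the column names follow the question, everything else is illustrative.

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "Group": ["a", "a", "b", "a", "b"],
    "rank": [2, 3, 1, 1, 2],
    "Values": [344.0, np.nan, 455.0, np.nan, np.nan],
})

# Sort so that within each group rank 1 comes first, then backfill within the group;
# rank-1 rows pick up the next available value from the same group.
df = df.sort_values(["Group", "rank"])
df["Values_filled"] = df.groupby("Group")["Values"].bfill()

# If the fill should only apply to rank-1 rows, keep the original value elsewhere.
df["Values_filled"] = df["Values_filled"].where(df["rank"].eq(1), df["Values"])
print(df)
```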
There are around 50 distinctive values, some of them are straightforward for example: 'Financial services professional','Consultant', 'Lawyer'; some are very non specific like 'Member of management', 'Entrepreneur' or 'Independent Gentleman'...\nIs there a way for me to sort the readable data into categories like 'Law', 'Financial Services' and all the rest into 'Other'?\nThank you so much in advance!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":70,"Q_Id":73036157,"Users Score":0,"Answer":"If you only have 50 distinctive values, the most straightforward way is for you to create the categories manually.\nIf you are doing this as a project to improve your data science and programming skills you can read on how to do text classification with BERT or other transformer models","Q_Score":0,"Tags":"python,classification,prediction,categorization","A_Id":73036239,"CreationDate":"2022-07-19T11:35:00.000","Title":"Is there a way to classify industry category through occupation name?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Given\n\nDataFrame contains column df['words']\n\nNeed to make sure that there is no word that isn't English or Hebrew & that there is no number in the input :\nfor example: wrong words:\npla!n, *, \/, ?, mouna\u7b11, ~,!, ad\u05e7\u05e8, etc..\nfor example: good words:\nplan, mountain, \u05d0\u05e8\u05d8\u05d9\u05e7, ok...\nin python alone.\nthanks","AnswerCount":1,"Available Count":1,"Score":-0.1973753202,"is_accepted":false,"ViewCount":35,"Q_Id":73041435,"Users Score":-1,"Answer":"\"\".isalpha() \u05d0\u05ea\u05d4 \u05d9\u05db\u05d5\u05dc \u05dc\u05d4\u05e9\u05ea\u05de\u05e9 \u05d1\nyou can use \"\".isalpha()","Q_Score":0,"Tags":"python,pandas,hebrew,non-english","A_Id":73041573,"CreationDate":"2022-07-19T17:49:00.000","Title":"detected if a word in column from Dataframe contains any value that is not Hebrew or English","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"What is the best way to check if a dataframe is a Pandas.Dataframe or pandas.Series?","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":1834,"Q_Id":73055221,"Users Score":1,"Answer":"Another way of doing it:\ntype(your_object)","Q_Score":2,"Tags":"python,pandas,dataframe,spyder","A_Id":73055264,"CreationDate":"2022-07-20T16:33:00.000","Title":"check if pandas dataframe is dataframe or series?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"What is the best way to check if a dataframe is a Pandas.Dataframe or pandas.Series?","AnswerCount":2,"Available Count":2,"Score":0.4621171573,"is_accepted":false,"ViewCount":1834,"Q_Id":73055221,"Users Score":5,"Answer":"to expand on Ryan's comment:\nisinstance(df,pd.DataFrame) will return True if it is a dataframe. 
to check if it is a series, it would be isinstance(df,pd.Series).","Q_Score":2,"Tags":"python,pandas,dataframe,spyder","A_Id":73055401,"CreationDate":"2022-07-20T16:33:00.000","Title":"check if pandas dataframe is dataframe or series?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have been trying to understand RNNs better and am creating an RNN from scratch myself using numpy. I am at the point where I have calculated a Loss but it was suggested to me that rather than do the gradient descent and weight matrix updates myself, I use pytorch .backward function. I started to read some of the documentation and posts here about how it works and it seems like it will calculate the gradients where a torch tensor has requires_grad=True in the function call.\nSo it seems that unless create a torch tensor, I am not able to use the .backward. When I try to do this on the loss scalar, I get a 'numpy.float64' object has no attribute 'backward' error. I just wanted to confirm. Thank you!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":118,"Q_Id":73057998,"Users Score":1,"Answer":"Yes, this will only work on PyTorch Tensors.\nIf the tensors are on CPU, they are basically numpy arrays wrapped into PyTorch Tensors API (i.e., running .numpy() on such a tensor returns exactly the data, it can modified etc.)","Q_Score":0,"Tags":"python,pytorch,recurrent-neural-network,backpropagation","A_Id":73058741,"CreationDate":"2022-07-20T20:51:00.000","Title":"Can I use pytorch .backward function without having created the input forward tensors first?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've been trying to run a certain cell in Google Colab for a while now and keep running into the same issue. The cell runs for about 20-25 mins and terminates the code and restarts the runtime due to running out of memory\/RAM, which causes all variables to be lost. I first deleted variables that would be re-initialized in the next iteration by calling \"del\". After deleting the variable I called the gc.collect() function. Once that didn't work, I noticed that there were some data structures that increased every iteration (a couple of lists). I removed the lists and wrote the information to a csv file instead. I then read in the information\/csv file after the for loop and obtained the information that way, instead of appending to a list every iteration in the for loop. However, that didn't solve the issue either. I do not have Colab Pro+, I am utilizing the free version.\nAny assistance would be greatly appreciated. 
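Relating to the `.backward()` answer above: a minimal sketch showing that the loss has to be a PyTorch scalar built from tensors with `requires_grad=True` before autograd can run; all of the array values here are made up.

```python
import numpy as np
import torch

# A plain numpy float has no autograd machinery, so .backward() is not available on it.
x_np = np.array([1.0, 2.0, 3.0])

# Wrap the data in tensors; requires_grad=True tells autograd to track operations on w.
w = torch.tensor([0.5, -0.2, 0.1], requires_grad=True)
x = torch.tensor(x_np, dtype=torch.float32)

loss = ((w * x).sum() - 1.0) ** 2  # a torch scalar built from tracked tensors
loss.backward()                    # populates w.grad

print(w.grad)
```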
Thanks!","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":1196,"Q_Id":73058853,"Users Score":1,"Answer":"I first deleted variables that would be re-initialized in the next iteration by calling \"del\"\n\nIf that variable is quickly reassigned to a new value, deleting it won't do anything.\n\nI then read in the information\/csv file after the for loop and obtained the information that way, instead of appending to a list every iteration in the for loop\n\nIf the end result is the same amount of information stored in variables, then this won't do anything either.\nWithout seeing your actual code, all I can say is \"your variables are too big\".","Q_Score":0,"Tags":"python,google-colaboratory,ram","A_Id":73058943,"CreationDate":"2022-07-20T22:41:00.000","Title":"Running Out of RAM - Google Colab","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm using an excel sheet with many different dataframes on it. I'd like to import those dataframes but separately. For now When i import the excel_file with pandas, it creates one single dataframe full of blanks where the dataframe are delimited. How can I create a different dataframe for each on of them?\nThanks","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":30,"Q_Id":73064527,"Users Score":0,"Answer":"If you're using the pandas.read_excel() function, you can simply use the usecols parameter to specify which columns you want to include in each dataframe. Only downside would be you'd need to do a read_excel call for each of the dataframes you want to read in.","Q_Score":2,"Tags":"python,excel,pandas","A_Id":73064574,"CreationDate":"2022-07-21T10:25:00.000","Title":"How to separate many dataframes from one excel Sheet Pandas","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Age\nGender\nBusinessTravel\nDepartment\nDistance\nEducation\nMaritalStatus\nSalary\nYearsWorked\nSatisfaction\n\n\n\n\n41\nFemale\nFrequent\nSales\n12\n5\nMarried\n5000\n4\n4\n\n\n24\nMale\nRarely\nHR\n22\n4\nSingle\n3400\n1\n3\n\n\n\n\nSatisfaction - Scale from 1 to 5, 5 is the most satisfied.\nDistance - Distance from home to workplace\nAbove is a sample of the data.\nWould Kmeans or Kmodes be appropriate for such a dataset?\nThank you for any answers in advance.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":51,"Q_Id":73088018,"Users Score":0,"Answer":"Kmean clustering would not be ideal as it cannot handle discrete data","Q_Score":1,"Tags":"python,scikit-learn,data-science,cluster-analysis","A_Id":73099194,"CreationDate":"2022-07-23T04:22:00.000","Title":"What is the best python approach\/model for clustering dataset with many discrete and categorical variables?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"So I am trying to solve a linear programming problem with around 10,000 binary variables using the PULP python library. 
It's taking me a lot of time to solve the problem.\nI was wondering if there is anyway for the code to use GPUs available in Colab to solve these linear programming issues.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":189,"Q_Id":73096443,"Users Score":1,"Answer":"GPUs have little or no advantage for general large, sparse LP and MIP models. Apart from some academic exercises on highly structured problems, there are few or no solvers available that use GPUs. The underlying problem is that GPUs are really good for data-parallel problems (SIMD architecture). Large, sparse LPs are different.","Q_Score":0,"Tags":"python,gpu,google-colaboratory,pulp","A_Id":73096778,"CreationDate":"2022-07-24T07:11:00.000","Title":"Using Colab GPU for PULP Python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have tried in combination\ndf2 = df.loc[['Entity'] != ('World')]\ndf2.loc[['Code'].str.contains('',na=False)]\nresult from memory\nshowing code NaN\nWorld removed\nboth have succeeded my needs the problem is combining them together. it seems to just not want to work. In one column titled 'Code' in the data the continents came with 'NaN' so I filtered that out using 'df.loc[['Code'].str.contains('',na=False)]' and it worked but then combined with \"df2 = df.loc[['Entity'] != ('World')]\" I apologise for wasting anyones time this is for an assessment and the thought of it is bugging me out. Did i do anything wrong or misread the purpose of a function?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":64,"Q_Id":73107848,"Users Score":0,"Answer":"To check your missing values you could use function isnull() or notnull() to filter, then correct them.\nTo remove and replace null values in Pandas DataFrame You could use \u2018fillna()\u2019. This is commonly replaced with 0 but depends on your DataFrame.","Q_Score":0,"Tags":"python,pandas,dataframe,jupyter-notebook,jupyter","A_Id":73108263,"CreationDate":"2022-07-25T10:39:00.000","Title":"Is there a function to filter out NaN\/Na and a word in panda?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm creating a bar chart using Bokeh. My chart renders fine initially, but when I add in the following line (to rotate the X labels):\np.xaxis.major_label_orientation = 1.2\nThe chart becomes blank. Why is this occurring?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":14,"Q_Id":73113123,"Users Score":0,"Answer":"It turns out that this occurred because my x axis labels were too long. When I shortened the labels, the chart reappeared with the rotated labels. (Increasing the height of the figure might be another way to solve this issue.)","Q_Score":0,"Tags":"python,bokeh","A_Id":73113124,"CreationDate":"2022-07-25T17:23:00.000","Title":"Adding in major_label_orientation value makes chart blank","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"So, I would like to know if there is a way to delete an line already plotted using matplotlib. 
But here is the thing:\nI'm plotting within a for loop, and I would like to check before plotting if the line that I'm about to draw is already on the figure, and if so, I would like to delete it.\nI was told an idea about plotting it anyways but with the same color of the background, but again, to do this I would have to check if the line already exists. Any idea how to do this?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":64,"Q_Id":73117106,"Users Score":0,"Answer":"Any idea how to do this?\n\n\nDuring iteration, before making a new line\n\ncheck if x and y coordinates of the new line are the same as any of the lines already contained in the plot's Axes..","Q_Score":0,"Tags":"python,matplotlib","A_Id":73117510,"CreationDate":"2022-07-26T02:21:00.000","Title":"Checking if a line was already drawn in matplotlib and deleting it","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm working on a project that needs to update a CSV file with user info periodically. The CSV is stored in an S3 bucket so I'm assuming I would use boto3 to do this. However, I'm not exactly sure how to go about this- would I need to download the CSV from S3 and then append to it, or is there a way to do it directly? Any code samples would be appreciated.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":356,"Q_Id":73117401,"Users Score":1,"Answer":"Ideally this would be something where DynamoDB would work pretty well (as long as you can create a hash key). Your solution would require the following.\n\nDownload the CSV\nAppend new values to the CSV Files\nUpload the CSV.\n\nA big issue here is the possibility (not sure how this is planned) that the CSV file is updated multiple times before being uploaded, which would lead to data loss.\nUsing something like DynamoDB, you could have a table, and just use the put_item api call to add new values as you see fit. Then, whenever you wish, you could write a python script to scan for all the values and then write a CSV file however you wish!","Q_Score":0,"Tags":"python,amazon-web-services,csv,amazon-s3,boto3","A_Id":73129306,"CreationDate":"2022-07-26T03:21:00.000","Title":"Writing to a CSV file in an S3 bucket using boto 3","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have an issue with a dataframe I try to pivot. The error message says that it contains duplicate entires. However I have checked the file and there are no duplicates (checked with df.duplicated, in Excel and manually). As I am running out of ideas, is there a way to know in which line in the dataframe is causing the error to throw? 
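A minimal sketch of the idea in the matplotlib answer above: before plotting inside the loop, compare the new line's coordinates with the lines already attached to the Axes; the curve and loop here are invented.

```python
import numpy as np
import matplotlib.pyplot as plt

fig, ax = plt.subplots()

def plot_if_new(ax, x, y, **kwargs):
    """Plot (x, y) only if no existing line on `ax` has the same coordinates."""
    x, y = np.asarray(x), np.asarray(y)
    for line in ax.get_lines():
        if np.array_equal(line.get_xdata(), x) and np.array_equal(line.get_ydata(), y):
            # An already-drawn duplicate could instead be deleted with line.remove().
            return None
    return ax.plot(x, y, **kwargs)

xs = np.linspace(0, 1, 50)
for _ in range(3):                 # the loop tries to draw the same curve repeatedly
    plot_if_new(ax, xs, xs ** 2)   # only the first call actually adds a line

print(len(ax.get_lines()))         # 1
```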
The Python error message is unfortuneately not very clear...\nThe code itself is working with another dataframe so I assume my code should be fine...","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":324,"Q_Id":73119031,"Users Score":0,"Answer":"a\nb\nc\n\n\n\n\n54545\n3\n8\n\n\n54545\n2\n16\n\n\n54545\n1\n64\n\n\n\n\nThe idea is to generate a Pivot out of it with B being the columns, column A is going to be the index and C is the value of the columns.\ndf = df_2.pivot(index='A', columns=\"B\", values='C').reset_index()\nHope it is understandable what I want to do.","Q_Score":1,"Tags":"python,pandas,pivot","A_Id":73119480,"CreationDate":"2022-07-26T07:04:00.000","Title":"Pivot - Index contains duplicate entries, cannot reshape - which line in the dataframe is causing the error?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"My website is currently using Heroku-18 stack, which is deprecated. I therefore need to redeploy my site to have it up to date (Heroku-22 stack) but I'm getting errors when trying. The log mentions numpy related errors numerous times, so I assume it could be the source of my problem.\nI've already looked online for some solutions but none of them have worked. I notably tried upgrading pip, changing the python version in my runtime.txt file, reinstalling numpy but nothing worked.\nBefore redeploying my website, the python version in runtime.txt was python-3.7.0. It is currently set to python-3.9.13.\nNumpy is installed and the version is 1.18.1.\nHere are some of the errors I'm getting:\n\n! [remote rejected] master -> master (pre-receive hook declined)\nerror: failed to push some refs to 'https:\/\/git.heroku.com\/mywebsite.git'\n\n\nerror: Command \"gcc -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -fPIC -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE=1 -D_LARGEFILE64_SOURCE=1 -Inumpy\/core\/include -Ibuild\/src.linux-x86_64-3.9\/numpy\/core\/include\/numpy -Inumpy\/core\/src\/private -Inumpy\/core\/src -Inumpy\/core -Inumpy\/core\/src\/npymath -Inumpy\/core\/src\/multiarray -Inumpy\/core\/src\/umath -Inumpy\/core\/src\/npysort -I\/app\/.heroku\/python\/include\/python3.9 -Ibuild\/src.linux-x86_64-3.9\/numpy\/core\/src\/private -Ibuild\/src.linux-x86_64-3.9\/numpy\/core\/src\/npymath -Ibuild\/src.linux-x86_64-3.9\/numpy\/core\/src\/private -Ibuild\/src.linux-x86_64-3.9\/numpy\/core\/src\/npymath -Ibuild\/src.linux-x86_64-3.9\/numpy\/core\/src\/private -Ibuild\/src.linux-x86_64-3.9\/numpy\/core\/src\/npymath -c numpy\/random\/mtrand\/mtrand.c -o build\/temp.linux-x86_64-3.9\/numpy\/random\/mtrand\/mtrand.o -MMD -MF build\/temp.linux-x86_64-3.9\/numpy\/random\/mtrand\/mtrand.o.d\" failed with exit status 1\n\n\nERROR: Failed cleaning build dir for numpy\nremote: Failed to build numpy\n\n\nERROR: Failed building wheel for numpy\n\nHow can I fix these errors?\nAlso, could it be someting else non numpy-related that causes the failure of the deployment?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":133,"Q_Id":73122354,"Users Score":0,"Answer":"I finally managed to solve the issue. Many dependencies were outdated. 
I had to use python-3.9.14 version, upgrade psycopg2-binary to v2.9.3 and scipy to v1.6.","Q_Score":0,"Tags":"python,numpy,heroku","A_Id":73847517,"CreationDate":"2022-07-26T11:13:00.000","Title":"Can't deploy my website onto Heroku - is Numpy causing the issue?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"The difference between coalesce and repartition is fairly straightforward. If I were to coalesce a DataFrame to 1 partition and write it to a storage service (Azure Blob\/ AWS S3 etc), would the entire DataFrame be sent to the driver and then to the storage service; or would an executor send it directly?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":170,"Q_Id":73134060,"Users Score":3,"Answer":"The Spark official documentation describes it as follows:\n\nIf you\u2019re doing a drastic coalesce, e.g. to numPartitions = 1, this\nmay result in your computation taking place on fewer nodes than\nyou like (e.g. one node in the case of numPartitions = 1).\n\nFrom the above it can be inferred that it should be an executor send it directly.","Q_Score":1,"Tags":"python,apache-spark,pyspark","A_Id":73134142,"CreationDate":"2022-07-27T07:47:00.000","Title":"Does coalesce(1) bring all the data to the driver?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have two arrays using the MNIST dataset. First array shape is (60000,28,28) and the second array is (60000,).\nIs it possible to combine these and make a new array that is (60000,28,28,1)? I've tried reshaping, resizing, inserting, concatenating and a bunch of other methods to no avail!\nWould really appreciate some help! TIA!","AnswerCount":3,"Available Count":1,"Score":0.1325487884,"is_accepted":false,"ViewCount":247,"Q_Id":73143929,"Users Score":2,"Answer":"It seems like you might have misunderstood how numpy arrays work or how they should be used.\nEach dimension(except for the inner most dimension) of a an array is essentially just an array of arrays. So for your example with dimension (60000, 28, 28). You have an array with 60000 arrays, which in turn are arrays with 28 arrays. The final array are then a array of 28 objects of some sort.(Integers in the mnist dataset I think).\nYou can convert this into a (60000, 28, 28, 1) by using numpys expand_dims method like so:\nnew_array = numpy.expand_dims(original_array, axis=-1)\nHowever, this will only make the last array be an array of 1 objects, and will not include the other array in any way.\nFrom what I can read from your question it seems like you want to map the labels of the mnist dataset with the corresponding image. You could do this by making the object of the outermost dimension a tuple of(image<28x28 numpy array>, label), but this would remove the numpy functionality of the array. 
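A short sketch of the `expand_dims` call described in the MNIST answer above, using small random stand-ins in place of the real arrays.

```python
import numpy as np

images = np.random.rand(600, 28, 28)      # stand-in; the real MNIST array is (60000, 28, 28)
labels = np.random.randint(0, 10, 600)    # stand-in label vector of matching length

# Add a trailing channel axis: (N, 28, 28) -> (N, 28, 28, 1).
images_4d = np.expand_dims(images, axis=-1)   # equivalent to images[..., np.newaxis]

print(images_4d.shape)   # (600, 28, 28, 1)
# The labels stay in their own array; images_4d[i] and labels[i] describe the same sample.
```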
The best course of action is probably to keep it as is and using the index of an image to check the label.","Q_Score":0,"Tags":"python,numpy,multidimensional-array,mnist","A_Id":73153409,"CreationDate":"2022-07-27T19:55:00.000","Title":"Insert new column from 1D array to Numpy 3D array","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need to drop duplicate based on the length of the column \"Employee History\". The column with the longest length should be kept\nNote: (there are many, many more columns, but this is the 2 columns that matter for this case)\n\n\n\n\n\nCompany ID\nEmployee History\n\n\n\n\n\n\n253\n462106-27\n2021: 21, 2022: 26\n\n\n\n\n264\n181831-33\n2019: 20, 2020: 60, 2021: 172, 2022: 225\n\n\n\n\n338\n181831-33\n2019: 20, 2020: 60, 2021: 172\n\n\n\n\n3481\n462106-27\n2021: 21","AnswerCount":3,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":72,"Q_Id":73148116,"Users Score":1,"Answer":"First, sort the data set by the length of \"Employee History\". Then insert every row into a OrderedDict using the \"Company ID\" as key and other columns as value. Finally, restore the dict to table.\nNote: from python 3.7, regular dicts are guaranteed to be ordered, too.","Q_Score":0,"Tags":"python,pandas","A_Id":73148263,"CreationDate":"2022-07-28T06:22:00.000","Title":"drop duplicate based on column length","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am new to NLP and i am confused about the embedding.\nIs it possible, if i already have trained GloVe embeddings \/ or Word2Vec embeddings and send these into Transformer? Or does the Transformer needs raw data and do its own embedding?\n(Language: python, keras)","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":370,"Q_Id":73159747,"Users Score":0,"Answer":"If you train a new transformer, you can do whatever you want with the bottom layer.\nMost likely you are asking about pretrained transformers, though. Pretrained transformers such as Bert will have their own embeddings of the word pieces. In that case, you will probably get sufficient results just by using the results of the transformer.","Q_Score":0,"Tags":"python,stanford-nlp,word2vec,word-embedding,transformer-model","A_Id":73189136,"CreationDate":"2022-07-28T22:15:00.000","Title":"Transformers (Attention is all you need) with Word2Vec or GloVe?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have set of two image namely main and noise and the maximum value is found using np.max(image_array) function. This will return the values for\n\nmaximum of main image = 1344.056\nmaximum of noise image = 34.46\n\nBut, when I subtract both image using np.subtract(main,noise) and now, if I find the maximum value of subtracted image, the value is found to be\n\nMaximum of subtracted image = 1342.312\n\nNow, why the main image is loosing its maximum value from 1344 to 1342 ? 
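A hedged sketch of the "keep the longest Employee History per Company ID" de-duplication asked about above. It uses a pandas sort plus `drop_duplicates` rather than the OrderedDict route described in the answer; column names follow the question, the helper `_len` column is an invented intermediate.

```python
import pandas as pd

df = pd.DataFrame({
    "Company ID": ["462106-27", "181831-33", "181831-33", "462106-27"],
    "Employee History": [
        "2021: 21, 2022: 26",
        "2019: 20, 2020: 60, 2021: 172, 2022: 225",
        "2019: 20, 2020: 60, 2021: 172",
        "2021: 21",
    ],
})

# Sort so the longest history comes first, then keep the first row per company.
out = (
    df.assign(_len=df["Employee History"].str.len())
      .sort_values("_len", ascending=False)
      .drop_duplicates(subset="Company ID", keep="first")
      .drop(columns="_len")
)
print(out)
```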
Any help in figuring out the issue is appreciated.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":62,"Q_Id":73165938,"Users Score":2,"Answer":"Probably the pixel where the maximun value occurred for the image at 1344.056 had a value around 2 in the noise. Thus, when you substracted both then you get a maximun of 1342.312.\nIf you are substracting both values I supposed your goal is to remove noise from the image and then the one you call image is actually image_and_noise. So, if this is correct the image maximum is 1342.312 and the 2 that was removed belonged to the noise.","Q_Score":0,"Tags":"python,numpy,image-processing,max,subtraction","A_Id":73166112,"CreationDate":"2022-07-29T11:23:00.000","Title":"why subtracting an image from noise (having very low intensity), causes image to loose its maximum value (intensity)","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"!python train.py --img 415 --batch 16 --epochs 30 --data dataset.yaml --weights yolov5s.pt --cache\nafter writing this coded it is showing the error\npython: can't open file 'train': [Errno 2] No such file or directory","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":256,"Q_Id":73167576,"Users Score":0,"Answer":"Make sure you cd to correct directory~\nIf you are using colab, run this command @cd yolov5","Q_Score":0,"Tags":"python,tensorflow,image-processing,object-detection,yolov5","A_Id":73188287,"CreationDate":"2022-07-29T13:31:00.000","Title":"while working on yolov5 algorithm I am training the dataset the train.py file in in the yolov5 folder is not finding","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"When I was some pyspark code, it required to me to install a Python module called fuzzywuzzy (that I used to apply the leiv distance)\nThis is a python libraries and seems that pyspark doesn't have the module installed... so, How can I install this module inside Pyspark??","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":246,"Q_Id":73177943,"Users Score":1,"Answer":"You'd use pip as normal, with the caveat that Spark can run on multiple machines, and so all machines in the Spark cluster (depending on your cluster manager) will need the same package (and version)\nOr you can pass zip, whl or egg files using --py-files argument to spark-submit, which get unbundled during code execution","Q_Score":0,"Tags":"python,apache-spark,pyspark","A_Id":73177961,"CreationDate":"2022-07-30T17:59:00.000","Title":"How to install external python libraries in Pyspark?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Suppose I have some weights [w_1,w_2,...,w_n] and I have the following conditions:\n\na < w_i < b for each i\nw_1 + w_2 + ... 
+ w_n = 1\n\nIs there a way to transform (squeeze) my original weights to obey these rules?\nAny help would be hugely appreciated!","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":48,"Q_Id":73179444,"Users Score":0,"Answer":"The conditions may fail to be compatible, and there can be different ways to cope:\n\ndeclare \"no solution\",\n\nrelax type 1 or type 2 constraint,\n\ncompute a \"best fit\" by assigning a penalty to a constraint that is not fulfilled.\n\n\nDivide every w by the sum of all w to achieve the condition 2. Then if you are lucky, the conditions 1 may hold by chance.","Q_Score":0,"Tags":"python,math,optimization,statistics,weighted","A_Id":73182035,"CreationDate":"2022-07-30T22:26:00.000","Title":"How to transform a list of weights, given that they must sum to 1 and obey bounds?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm currently working in jupyter notebook, and I am getting the error that cos is not defined, I imported pandas and numpy. Is there any other library I am missing?","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":360,"Q_Id":73179671,"Users Score":1,"Answer":"You don't have to import numpy or pandas (which as far as I am aware doesn't implement any general-purpose cosine function) to calculate cosine. You can import math and then use math.cos().","Q_Score":0,"Tags":"python,jupyter-notebook","A_Id":73179796,"CreationDate":"2022-07-30T23:15:00.000","Title":"How to get cosine in jupyter notebook?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"In order to download OpenCV on through Anaconda prompt, I run the following:\nconda install -c conda-forge opencv\nHowever, whenever I try to download, there I get the messages of\nfailed with initial frozen solve. Retrying with flexible solve. Failed with repodata from current_repodata.json, will retry with next repodata source\nThis would continue as the prompt tries to then diagnose what conflicts there are in my system that could prevent OpenCV from downloading.\nI kept my laptop on over night, but when I woke up in the morning, there was still diagnosing for potential conflicts going on. I'm not too sure what to do at this point. 
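A small sketch of the "divide every weight by the total" suggestion in the answer above, plus a check that the bound constraints still hold afterwards; the weights and the bounds a, b are made-up numbers.

```python
import numpy as np

w = np.array([0.8, 1.3, 2.1, 0.4])   # original weights (illustrative)
a, b = 0.05, 0.6                     # assumed per-weight bounds

w_scaled = w / w.sum()               # condition 2: the scaled weights now sum to 1

# Condition 1 may or may not survive the rescaling, so it has to be checked.
if np.all((w_scaled > a) & (w_scaled < b)):
    print("both conditions satisfied:", w_scaled)
else:
    print("bounds violated after rescaling; a constrained fit is needed:", w_scaled)
```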
I just started trying again, but the same issues are being experienced.\nI am trying to download OpenCV so that I can import cv2 to work on machine learning projects for object\/image detection.\nI have also tried pip install -c anaconda opencv but am having the same issues.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1609,"Q_Id":73183197,"Users Score":1,"Answer":"Please note that to import cv2, the library\/package to install is called opencv-python.\nFrom Jupyter notebook, you can try !pip install opencv-python\nIf you're using anaconda, you can try conda install -c conda-forge opencv-python","Q_Score":3,"Tags":"python,opencv,anaconda,conda,anaconda3","A_Id":73183355,"CreationDate":"2022-07-31T12:12:00.000","Title":"Opencv not installing on Anaconda prompt","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am currently working on a multi-layer 1d-CNN. Recently I shifted my work over to an HPC server to train on both CPU and GPU (NVIDIA).\nMy code runs beautifully (albeit slowly) on my own laptop with TensorFlow 2.7.3. The HPC server I am using has a newer version of python (3.9.0) and TensorFlow installed.\nOnto my problem: The Keras callback function \"Earlystopping\" no longer works as it should on the server. If I set the patience to 5, it will only run for 5 epochs despite specifying epochs = 50 in model.fit(). It seems as if the function is assuming that the val_loss of the first epoch is the lowest value and then runs from there.\nI don't know how to fix this. The function would reach lowest val_loss at 15 epochs and run to 20 epochs on my own laptop. On the server, training time and epochs is not sufficient, with very low accuracy (~40%) on test dataset.\nPlease help.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":150,"Q_Id":73201539,"Users Score":0,"Answer":"For some reason, reducing my batch_size from 16 to 8 in model.fit() allowed the EarlyStopping callback to work properly. I don't know why this is though. But it works now.","Q_Score":0,"Tags":"python,tensorflow,keras,hpc,early-stopping","A_Id":73251022,"CreationDate":"2022-08-02T03:02:00.000","Title":"Keras Earlystopping not working, too few epochs","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have looked at several questions and tried their respective answers, but I cannot seem to understand why VSCode is unable to find the sklearn module.\nI use a virtual conda environment called ftds, in which I have scikit-learn successfully show up when I run conda list. In jupyter notebook, I use the same ftds environment and sklearn works fine. In VSCode, I keep getting the ModuleNotFoundError: No module named 'sklearn' error message.\nI ensure I have activated my conda environment using conda activate ftds prior to running my code. I also ensure that scikit-learn was successfully installed into the ftds environment using conda. I have the latest version, which is version 1.1.1 at the time of this question.\nFor further information, I am using MacOS Monterey (Version 12.5). Has anyone had the same issue? 
I am only able to find those who had issues with jupyter notebook, which is the opposite of my problem.\nI have already selected the ftds environment as the python interpreter in VSCode. Other packages like pandas, numpy, etc. are all functioning as normal.","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":626,"Q_Id":73208411,"Users Score":0,"Answer":"If you are sure you have installed the sklearn package, but you still get the ModuleNotFoundError error message. In most cases you must not choose the corresponding interpreter. Or your sklearn package is not installed in the current python environment.\nPlease use the pip show sklearn command to view the installation information of sklearn. Make sure to choose the correct interpreter.\nOr activate the environment you want to use and install the sklearn package using the pip install sklearn command.","Q_Score":1,"Tags":"python,visual-studio-code,scikit-learn,anaconda","A_Id":73216657,"CreationDate":"2022-08-02T13:26:00.000","Title":"sklearn module not found when using VSCode, but works fine in Jupyter Notebook?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a Faster R CNN detector which I've trained with pytorch lightning on a quite noisy, but large, dataset. I would expect that after 1 epoch of training, the model would only output labels in the dataset, in my case 0 to 56. However, it is giving me labels such as 64 and 89. What is going on here? Where is it coming up with these labels it was never trained on?\nCan't share any code because this problem probably relates to my dataset, not my code. With the COCO pretrained model, it works fine.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":42,"Q_Id":73208711,"Users Score":0,"Answer":"The problem was not my data or my model. The problem is the pytorch nn.module.load_state_dict() method. This method has a argument strict which is supposed to allow users to load a state_dict without the exact same weight keys, but it actually causes the loaded model to be completely wrong. I highly recommend against using strict=False when loading a model with load_state_dict() in pytorch.","Q_Score":0,"Tags":"python,pytorch,computer-vision,pytorch-lightning,faster-rcnn","A_Id":73223974,"CreationDate":"2022-08-02T13:48:00.000","Title":"Pytorch model outputting labels it was not trained on","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a DF which has the following Schema :\n\no_orderkey --- int32\no_custkey --- int32\no_orderstatus --- object\no_totalprice --- object\no_orderdate --- object\no_orderpriority --- object\no_clerk --- object\no_shippriority --- int32\no_comment --- object\n\nHere the total price is actually a float(Decimals) and the order date is date time.\nBut on using df.convert_dtypes or df.infer_objects, its not automatically convering them into float\/int and date time.\nIs there any way to automatically read and convert the column data type into the correct one? 
For example in case we do not know the schema of such a data frame beforehand, how would we read and convert the data type to the correct one, without using a regex method to go through every object in the DF.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":29,"Q_Id":73216568,"Users Score":0,"Answer":"Pandas tries to use the right datatype when read the data. However, if, for example, the totalprice column has string, it doesn't make sense for you to convert it to float. You also cannot force pandas to convert it to float, it will just report errors, which is the correct behaviour!\nYou have to use regex to clean up the string, then you can safely convert the column to float.","Q_Score":0,"Tags":"python,dataframe,schema","A_Id":73220745,"CreationDate":"2022-08-03T05:17:00.000","Title":"Autoconversion of data types in python in a dataframe","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have to train a neural net for semantic segmentation of kidney and his tumor, starting from the dataset available from the kiTS 19 Challenge.\nIn this dataset, I have 100 CT scans for the training set, with a great variety in terms of size and pixel spacing.\nStudying several approaches on the internet, I found that it is a good practice to decide a unique set of pixel spacing that has to be the same for all the volumes (e.g. new_spacing = [2. , 1.5, 1.5]); by resampling the volumes to this new spacing, of course their dimensions will change according to this formula: new_size = original_size*(original_spacing\/new_spacing).\nWhat I did until now was using the scipy.ndimage.zoom in order to resample the volume to the desired new_spacing and new_size computed, then padding or cropping the obtained volume to the desired dimension (the dimensions for the input of the NN, which in my case are (n_slice, 512,512)).\nThe problem is that this approach is really time-consuming, I'd need a faster way to do what I need to, is there any?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":165,"Q_Id":73220578,"Users Score":1,"Answer":"You can use itkResampleImageFilter, it is available in C++ which is the fastest. If you know about C++, you can use the Cpp version. Otherwise, you can use ResampleImageFilter in simpleItk which is available in many different languages. 
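Relating to the dtype-conversion answer above: a hedged sketch of cleaning object columns and coercing them to numeric and datetime types. The column names come from the question's schema; the sample values and the whitespace-stripping rule are assumptions.

```python
import pandas as pd

df = pd.DataFrame({
    "o_totalprice": ["1234.56", " 78.90 ", "100"],
    "o_orderdate": ["1996-01-02", "1996-12-01", "1997-02-10"],
})

# Clean the strings first, then convert; errors="coerce" turns anything that
# still cannot be parsed into NaN / NaT instead of raising.
df["o_totalprice"] = pd.to_numeric(df["o_totalprice"].str.strip(), errors="coerce")
df["o_orderdate"] = pd.to_datetime(df["o_orderdate"], errors="coerce")

print(df.dtypes)   # float64, datetime64[ns]
```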
Note that you should do this step as preprocessing and before NN.","Q_Score":4,"Tags":"python,resolution,resampling,semantic-segmentation,simpleitk","A_Id":74985029,"CreationDate":"2022-08-03T10:59:00.000","Title":"What is the fastest and easiest way to resample a set of CT scans to same pixel spacing and volume size?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to add OpenCV to my python by using pip install but i get the error\n##'pip' is not recognized as an internal or external command,\noperable program or batch file.\nwhen i use the echo %PATH% i get this\n##C:\\Program Files (x86)\\Common Files\\Oracle\\Java\\javapath;C:\\Users\\jashp\\AppData\\Local\\Programs\\Python\\Python39;C:\\Program Files\\NVIDIA Corporation\\NVIDIA NvDLISR;C:\\Program Files\\dotnet;C:\\Program Files (x86)\\Common Files\\Oracle\\Java\\javapath;C:\\Users\\jashp\\AppData\\Local\\Programs\\Python\\Python39;C:\\Program Files\\NVIDIA Corporation\\NVIDIA NvDLISR;C:\\Program Files\\dotnet;C:\\Program Files (x86)\\Common Files\\Oracle\\Java\\javapath;C:\\ProgramData\\Oracle\\Java\\javapath;C:\\Windows\\system32;C:\\Windows;C:\\Windows\\System32\\Wbem;C:\\Windows\\System32\\WindowsPowerShell\\v1.0;C:\\Windows\\System32\\OpenSSH;C:\\Program Files (x86)\\NVIDIA Corporation\\PhysX\\Common;C:\\Program Files\\NVIDIA Corporation\\NVIDIA NvDLISR;C:\\Program Files\\dotnet;C:\\Users\\jashp\\AppData\\Local\\Microsoft\\WindowsApps;C:\\Python34\\Scripts;;C:\\Python34\\Scripts\nI even tried C:\\Users\\jashp>setx PATH \"%PATH%;C:\\pip\" and got\n##SUCCESS: Specified value was saved.\nthen i tried C:\\Users\\jashp>pip install numpy and got\n'pip' is not recognized as an internal or external command,\noperable program or batch file.\nThe path to my Python is -C:\\Users\\jashp\\AppData\\Roaming\\Python","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":58,"Q_Id":73221449,"Users Score":0,"Answer":"You need to add the path of your pip installation to your PATH system variable. By default, pip is installed to C:\\Python34\\Scripts\\pip (pip now comes bundled with new versions of python), so the path \"C:\\Python34\\Scripts\" needs to be added to your PATH variable.\nTo check if it is already in your PATH variable, type echo %PATH% at the CMD prompt\nTo add the path of your pip installation to your PATH variable, you can use the Control Panel or the set command. For example:\nset PATH \"%PATH%;C:\\Python34\\Scripts\"\nNote: According to the official documentation, \"variables set with set variables are available in future command windows only, not in the current command window\". In particular, you will need to start a new cmd.exe instance after entering the above command in order to utilize the new environment variable.","Q_Score":0,"Tags":"python","A_Id":73221544,"CreationDate":"2022-08-03T12:07:00.000","Title":"How to install OpenCV on python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a 1 by 400 array in Python. I want to send this output in the format of dataframe (df) to LabVIEW. 
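Related to the 'pip' is not recognized issue above: invoking pip through the interpreter itself sidesteps PATH problems entirely. A minimal sketch (the package name is just an example):

import subprocess
import sys

# Runs pip with the exact interpreter executing this script,
# so the package lands in the same environment.
subprocess.check_call([sys.executable, "-m", "pip", "install", "opencv-python"])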
Is LabVIEW able to receive the array as a dataframe (df)?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":231,"Q_Id":73221602,"Users Score":0,"Answer":"If you have recent (2019 and up) version of LabVIEW, you can use the Python node, call the python function and get the return values of your function.\nAlternately, a message\/transport library such as 0MQ can connect between the two languages (assuming two processes running simultaneously).\nAs for Nathan's answer, LabVIEW has JSON libraries that can parse JSON strings.","Q_Score":0,"Tags":"python,arrays,pandas,dataframe,labview","A_Id":73418071,"CreationDate":"2022-08-03T12:18:00.000","Title":"Sending Dataframe from Python to LabVIEW","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"In my dataframe, I have some Null values. I want to calculate the correlation, so does my Null values affect my correlation value or shall I replace the Null values with 0 and then find the correlation?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":70,"Q_Id":73232044,"Users Score":0,"Answer":"You can't calculate correlation with null values in your dataset.\nYou need to impute your columns to get rid of the null values.\nDon't replace the null values with 0. Use mean or median of the columns to replace the null values as it will be more related to the data in the columns as compared to 0","Q_Score":0,"Tags":"python,statistics,data-science,data-analysis","A_Id":73232273,"CreationDate":"2022-08-04T07:38:00.000","Title":"Correlation for Null Values","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am facing a very strange issue : json.dumps(np.int64(1), default=lambda o: o.__dict__) returns : AttributeError: 'numpy.int64' object has no attribute '__dict__'\nWhile json.dumps(np.float64(1), default=lambda o: o.__dict__) returns correctly : '1.0'\nOnly difference is going from int64 to float64... 
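A small sketch of the imputation suggested in the correlation answer above, using column medians rather than 0 before computing correlations (the column names and values are hypothetical):

import pandas as pd

df = pd.DataFrame({"a": [1.0, 2.0, None, 4.0], "b": [2.0, None, 6.0, 8.0]})
# Replace nulls with each column's median, then correlate
filled = df.fillna(df.median(numeric_only=True))
print(filled.corr())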
Any suggestion ?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":246,"Q_Id":73234696,"Users Score":1,"Answer":"numpy.float64 inherits from float, so json.dumps handles instances like a regular float, and doesn't use your default callback.\nnumpy.int64 doesn't inherit from int (and cannot reasonably do so due to conflicting semantics and memory layout), so json.dumps tries to use your default callback, which fails because numpy.int64 instances don't have a __dict__.","Q_Score":0,"Tags":"python,numpy,dictionary","A_Id":73234743,"CreationDate":"2022-08-04T10:57:00.000","Title":"AttributeError: 'numpy.int64' object has no attribute '__dict__' but float64 works","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Given String;- \"\\NA*(0.0001,0.,NA,0.99999983,0.02) \\EVENT=_Schedule185 \\WGT=_WEEKS\"\nOutput = EVENT=_Schedule185","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":17,"Q_Id":73240427,"Users Score":0,"Answer":"You can use string extract\ndf['col'].str.extract('(EVENT=_\\S*) ')","Q_Score":1,"Tags":"python,pandas,jupyter-notebook","A_Id":73240506,"CreationDate":"2022-08-04T17:59:00.000","Title":"Extract sub String from column in Pandas","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Edit 1 - We have created a python script which will read a data from excel\/csv using pandas and then, will be cleaning it. After cleansing of the data, it will connect to snowflake server and append the data in a table which is already available in snowflake. Now the question is -\nIn this process of transferring data from python to snowflake. But would I need to ensure that columns names in pandas dataframe should be same (case-sensitive) as column names in snowflake?\nOr, any case would work to push the data?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":75,"Q_Id":73252779,"Users Score":0,"Answer":"There are many steps involved in importing data into a snowflake:\n\nOpening the Data Load Wizard:\na.Click on Database -> Tables\nb.Click on\n\nTable Row to select it and Load Data\nTable Name to select it and Load Table\n\n\nSelecting a Warehouse:\na. Select a Warehouse from the dropdown list to include any warehouse on which you have the USAGE privilege. Snowflake will use this warehouse to load data into the table.\nb. Click Next\nSelecting a Source Files:\nThe users can load the local machine or cloud storage data like AWS S3, Google Cloud Storage, and Azure.\na.Local Machine:\ni. Load files from the computer\nii. Select one or more files and click on Open\niii. 
Click on Next\nCloud Storage:\n1.Existing Stage: (i) Select the name of the existing stage and then select the Next button\nNew Stage:\nClick the plus (+) symbol beside the Stage dropdown list.\nSelect the location where your files are located: Snowflake or any one of the supported cloud storage services, and click the Next button.\nComplete the fields that describe your cloud storage location.\nClick the Finish button.\nSelect your new named stage from the Stage dropdown list.\nClick the Next button.\n\nSelect File Format: Select a named set of options that describes the format of the data files.\nExisting Name Format:\nSelect the name of the existing file from the dropdown list.\nClick on the Next Button.\nNew File Format:\nBeside the dropdown list, select the (+) button.\nUpdate the fields according to the files\u2019 format.\nClick on Finish.\nSelect the new named file format from the dropdown list.\nClick on Next\nSelect the Load Options\nSpecify how Snowflake should behave if there are errors in the data files.\nClick on the Load button. This will prompt Snowflake to load the data in the selected table in the required warehouse.\nClick on Ok.\n\n\nI guess you, it helps you a lot.","Q_Score":0,"Tags":"python,sql,snowflake-connector","A_Id":73253128,"CreationDate":"2022-08-05T16:26:00.000","Title":"Data Transfer - Python to Snowflake","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a set of 'n' data points and 'd' possible cluster centers that are known a priori. I need to pick the \"best\" 'k' (with the value of 'k' also known) out of those 'd' cluster centers so that clustering the 'n' data points over those 'k' cluster centers yields the minimum total cumulative distance.\nFurthermore, the number of data points associated to each of the k chosen clusters should be soft-balanced, but that's not a hard requirement.\nOne approximate solution I thought of would be to first blindly cluster the data points (e.g., Gaussian Mixture clustering with cluster number = k), and then pick the k known cluster centers that minimize their cumulative distance from those found empirically with GM clustering.\nOr, of course, there's always the brute force approach of trying all the possible combinations of picking k out of d centers and then computing the cumulative distance of the set.\nMagnitudes of the parameters, if that can help:\n\nn~10^2\nd~10^1\nk~10^1\n\nNOTE1: non-optimal but fast solutions are preferred, as this should run close to real-time.\nNOTE2: I'm currently using Python, but I don't necessarily need canned solutions\nThanks a lot!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":72,"Q_Id":73258349,"Users Score":0,"Answer":"Here is a greedy algorithm that runs in O(d^2) time complexity and has good performance in practice. I couldn't prove it's optimal (probably it's not).\nLet the d cluster centers be vertices of a graph that we will build. For each one of them, find its closest one, link them by an edge and update the degree of both vertices. This procedure has O(d^2) time complexity. In the end of it, you will have an adjacency list representing the graph as well as an array telling the degree of each vertex.\nPut the vertices in a priority queue (with the degree in the previous graph as the priority criteria). 
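Looping back to the numpy.int64 JSON question above, a minimal sketch of a default callback that converts NumPy scalars explicitly instead of relying on __dict__:

import json
import numpy as np

def np_default(obj):
    # Fallback used by json.dumps for types it cannot serialize natively
    if isinstance(obj, np.integer):
        return int(obj)
    if isinstance(obj, np.floating):
        return float(obj)
    if isinstance(obj, np.ndarray):
        return obj.tolist()
    raise TypeError(f"Object of type {type(obj)} is not JSON serializable")

print(json.dumps({"a": np.int64(1), "b": np.float64(1)}, default=np_default))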
Now, iteratively run the following procedure: take the element from the top of the priority queue. Mark it as taken out of the graph, insert it in the set of k best clusters and decrease the degree of all its neighbors. If the set of k best clusters got k elements, stop the procedure. Else, continue it. This procedure has O(d log d) time complexity. In the end of it, you will have a set with the k best clusters.","Q_Score":0,"Tags":"python,classification,k-means","A_Id":73270604,"CreationDate":"2022-08-06T08:36:00.000","Title":"Find k out of d closest cluster centers to a set of n points","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"matplotlib.pyplot.gcf()\nGets the current figure.\nBut if you have multiple figures, how do you know which one is the current one?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":46,"Q_Id":73265590,"Users Score":1,"Answer":"According to the documentations, gcf() will get the current figure from pyplot figure stack. As stack works in LIFO(Last in first out) manner. The current figure will be that figure which you have made most recently.","Q_Score":0,"Tags":"python,matplotlib","A_Id":73266071,"CreationDate":"2022-08-07T07:16:00.000","Title":"The \"Current Figure\" is often mentioned in matplotlib, but what actually IS the current figure?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Now I need to randomize an existing tensor of m*m and make sure all rows or columns will not stay in the original position, the shuffle() function provided by python can only achieve the randomization function, but can not guarantee that all elements are moved. Alternatively, a diagonal matrix can be randomly disordered to ensure that there are no zeros on the diagonal and that element 1 appears only once in each row and column, and multiplying this matrix with the original matrix will also achieve the above function, but how to generate such a matrix is also a problem. If anyone knows how to solve this problem, please reply to me, I would be very grateful!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":23,"Q_Id":73277432,"Users Score":0,"Answer":"Representing a shuffling via indices or via a binary matrix are equivalent. In both cases you may have instances that are mapped back to themselves: either by having 1's on the diagonal of the binary matrix or by shuffled indices that equal their relative position.","Q_Score":1,"Tags":"python-3.x,matrix,random,pytorch,shuffle","A_Id":73278510,"CreationDate":"2022-08-08T11:50:00.000","Title":"How to ensure that all rows or columns are shifted while shuffling the tensor\uff1f","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I had a csv file of 33GB but after converting to HDF5 format the file size drastically reduced to around 1.4GB. I used vaex library to read my dataset and then converted this vaex dataframe to pandas dataframe. 
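For the row-shuffling question above, a minimal sketch that rejection-samples a permutation with no fixed points (a derangement), so every row is guaranteed to move; the tensor size is illustrative:

import torch

def random_derangement(m: int) -> torch.Tensor:
    # Draw random permutations until no index stays in its original place
    while True:
        p = torch.randperm(m)
        if bool((p != torch.arange(m)).all()):
            return p

x = torch.randn(5, 5)
perm = random_derangement(x.shape[0])
shuffled = x[perm]   # rows permuted, none left in its original position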
This conversion of vaex dataframe to pandas dataframe did not put too much load on my RAM.\nI wanted to ask what this process (CSV-->HDF5-->pandas dataframe) did so that now pandas dataframe did not take up too much memory instead of when I was reading the pandas dataframe directly from CSV file (csv-->pandas dataframe)?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":76,"Q_Id":73287450,"Users Score":0,"Answer":"I highly doubt it is anything to do with compression.. in fact I would assume the file should be larger in hdf5 format especially in the presence of numeric features.\nHow did you convert from csv to hdf5? Is the number of columns and rows the same\nAssuming you converting it somehow with vaex, please check if you are not looking at a single \"chunk\" of data. Vaex will do things in steps and then concatenate the result in a single file.\nAlso if some column are of unsupported type they might not be exported.\nDoing some sanity checks will uncover more hints.","Q_Score":0,"Tags":"python,pandas,csv,hdf5,vaex","A_Id":73296290,"CreationDate":"2022-08-09T06:37:00.000","Title":"What did the HDF5 format do to the csv file?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have DataFrame containing values about shops and categories in one column.\n\n\n\n\nDate\nSpent\n...\nCategory\/Shop\n\n\n\n\n2022-08-04\n126.98\n...\nSupermarkets\n\n\n2022-08-04\nNaN\n...\nShopName\n\n\n2022-08-04\n119.70\n...\nSupermarkets\n\n\n2022-08-04\nNaN\n...\nShopName\n\n\n\n\n...\nI need to separate last column into to columns:\n\n\n\n\nDate\nSpent\n...\nCategory\nShop\n\n\n\n\n2022-08-04\n126.98\n...\nSupermarkets\nShopName\n\n\n2022-08-04\n119.70\n...\nSupermarkets\nShopName\n\n\n\n\nHow can this be done?\nWe can assume that every second row in the Category\/Shop column contains the name of the store that needs to be moved to a new column.","AnswerCount":4,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":79,"Q_Id":73288227,"Users Score":0,"Answer":"Apply the pandas series str. split() function on the \u201cAddress\u201d column and pass the delimiter (comma in this case) on which you want to split the column. Also, make sure to pass True to the expand parameter.","Q_Score":2,"Tags":"python,pandas","A_Id":73288457,"CreationDate":"2022-08-09T07:48:00.000","Title":"How to separate column in dataframe pandas","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is there any way to compress Images through LZMA using Python. The pre-existing package can compress text files, but I cannot find any way to do so with Images. 
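For the Category/Shop question above, an alternative sketch that relies only on the stated assumption that every second row holds the shop name (rather than splitting a delimited string):

import pandas as pd

df = pd.DataFrame({
    "Date": ["2022-08-04"] * 4,
    "Spent": [126.98, None, 119.70, None],
    "Category/Shop": ["Supermarkets", "ShopName", "Supermarkets", "ShopName"],
})

categories = df.iloc[::2].reset_index(drop=True)   # rows carrying the category
shops = df.iloc[1::2].reset_index(drop=True)       # rows carrying the shop name
result = categories.rename(columns={"Category/Shop": "Category"})
result["Shop"] = shops["Category/Shop"].values
print(result)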
Any help is great!\nThank you!","AnswerCount":3,"Available Count":1,"Score":-0.1325487884,"is_accepted":false,"ViewCount":235,"Q_Id":73288359,"Users Score":-2,"Answer":"Of course there is, first read the pictures with img = cv2.imread() using opencv, then the img value becomes a numpy list anyway, then convert this numpy list to a python list and you can save the lists in db or anywhere as text.","Q_Score":0,"Tags":"python,compression,image-compression,lzma","A_Id":73288436,"CreationDate":"2022-08-09T07:59:00.000","Title":"Image Compression using LZMA Python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"please could any one help!\nTrying to import matplotlib I get this ImportError: *cannot import name 'artist' from 'matplotlib'*.\nI removed the package (*py -m pip uninstall matplotlib*) and reinstall it but I still get the same error.\nThanks!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":46,"Q_Id":73288777,"Users Score":0,"Answer":"One of the common reasons why Python packages cannot be imported is because of its virtual environments. Basically when you install a package manager, it creates some root environment for you and you by default install all packages there. Now this issue may arise if you download a package into another VE, then in your code editor you should pick the one where it lies. I recommend you to check if your package is installed and pick that environment in the IDE, however from what I know it generates a different error message, something like the import couldn't be resolved. Did it help you?","Q_Score":0,"Tags":"python,pandas,matplotlib,importerror","A_Id":73289280,"CreationDate":"2022-08-09T08:32:00.000","Title":"ImportError after importing matplotlib","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am getting the following error:\n\n'ascii' codec can't decode byte 0xf4 in position 560: ordinal not in range(128)\n\nI find this very weird given that my .csv file doesn't have special characters. Perhaps it has special characters that specify header rows and what not, idk.\nBut the main problem is that I don't actually have access to the source code that reads in the file, so I cannot simply add the keyword argument encoding='UTF-8'. I need to figure out which encoding is compatible with codecs.ascii_decode(...). I DO have access to the .csv file that I'm trying to read, and I can adjust the encoding to that, but not the source file that reads it.\nI have already tried exporting my .csv file into Western (ASCII) and Unicode (UTF-8) formats, but neither of those worked.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":88,"Q_Id":73295461,"Users Score":0,"Answer":"Fixed. Had nothing to do with unicode shenanigans, my script was writing a parquet file when my Cloud Formation Template was expecting a csv file. 
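A hedged sketch for the LZMA image question above: read the pixels with OpenCV, compress the raw bytes with the standard-library lzma module, and restore the array from its shape and dtype (the file name is a placeholder):

import lzma
import cv2
import numpy as np

img = cv2.imread("photo.png")                  # hypothetical image file
compressed = lzma.compress(img.tobytes())      # lossless LZMA over the raw pixel buffer

restored = np.frombuffer(lzma.decompress(compressed), dtype=img.dtype)
restored = restored.reshape(img.shape)
assert np.array_equal(img, restored)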
Thanks for the help.","Q_Score":1,"Tags":"python-unicode","A_Id":73297897,"CreationDate":"2022-08-09T16:50:00.000","Title":"Fix Unicode Decode Error Without Specifying Encoding='UTF-8'","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to add an image to a fixed position in a plotly figure; however, the only way i've been able to find so far is using add_layout_image, but i don't really think it's a very practical way, since the source of the image must be an url (it's basically meant for dash apps).\nIs there any simple way to embed an image, from, let's say.. a numpy array in a fixed position in a plotly fig?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":220,"Q_Id":73309823,"Users Score":0,"Answer":"Thanks for the answers, i nailed it using add_layout_image() and using the image converted into a PIL image from the np array.","Q_Score":1,"Tags":"python,plotly","A_Id":73339906,"CreationDate":"2022-08-10T16:34:00.000","Title":"Add image in a plotly plot from numpy array (python)","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Example i have 4 column in my dataframe,\ni want to use jaro similarity for col: A,B vs col: C,D containing strings\nCurrently i am using it between 2 columns using\ndf.apply(lambda x: textdistance.jaro(x[A], x[C]),axis = 1))\nCurrently i was comparing with names\n|A|C |result|\n|--| --- | --- |\n|Kevin| kenny |0.67|\n|Danny |Danny|1|\n|Aiofa |Avril|0.75|\nI have records over 100K in my dataframe\nCOLUMN A -contains strings of person name\nCOLUMN B -contains strings of city\nCOLUMN C -contains strings of person name (to compare with)\nCOLUMN D -contains strings of city (to compare with)\nExpected Output\n|A|B|C|D |result|\n|--|--|---| --- | --- |\n|Kevin|London| kenny|Leeds |0.4|\n|Danny |Dublin|Danny|dublin|1|\n|Aiofa|Madrid |Avril|Male|0.65|","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":83,"Q_Id":73313174,"Users Score":0,"Answer":"df.apply(lambda x: textdistance.jaro(x['A'] + x['B'], x['C'] + x['D']),axis = 1))\nthank you DarrylG","Q_Score":0,"Tags":"python,pandas,jaro-winkler","A_Id":73359930,"CreationDate":"2022-08-10T22:00:00.000","Title":"I am working on Jaro wrinkler similarity, and I am able to use between 2 columns, but how do I use it with 2 pairs of columns","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Exactly what the title says. If I pass [0,0,0] into cv2.projectPoints() as rvec, whats the orientation? What direction is it facing?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":148,"Q_Id":73315040,"Users Score":0,"Answer":"A rotation vector's length encodes the amount of rotation in radians.\nA vector of length 0 encodes a rotation of nothing at all.\nSuch a \"rotation\" has no unique axis. It's an identity transformation.\nthe rvec argument to cv::projectPoints() does not represent a camera angle\/orientation. It represents the orientation of the model points, at the model's position. 
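Following the accepted plotly answer above, a small sketch that converts a NumPy array to a PIL image and anchors it at a fixed paper position with add_layout_image (the coordinates and sizes are arbitrary):

import numpy as np
import plotly.graph_objects as go
from PIL import Image

arr = (np.random.rand(60, 60, 3) * 255).astype("uint8")
pil_img = Image.fromarray(arr)

fig = go.Figure(go.Scatter(x=[0, 1, 2], y=[0, 1, 0]))
fig.add_layout_image(
    dict(source=pil_img, xref="paper", yref="paper",
         x=0.02, y=0.98, sizex=0.25, sizey=0.25,
         xanchor="left", yanchor="top", layer="above")
)
fig.show()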
The tvec positions the model's points in front of the camera.\nIf your points are already relative to the camera's coordinate frame, then you must use zero rotation and zero translation.","Q_Score":0,"Tags":"python,opencv","A_Id":73318212,"CreationDate":"2022-08-11T03:44:00.000","Title":"What is the default orientation of the camera in OpenCV projectPoints()?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I understand that to drop a column you use df.drop('column name', axis=1, inplace =True)\nThe file format is in .csv file\nI want to use above syntax for large data sets and more in robust way\nsuppose I have 500 columns and I want to keep column no 100 to 140 using column name not by indices and rest want to drop , how would I write above syntax so that I can achieve my goal and also in 100 to 140 column , I want to drop column no 105, 108,110 by column name","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":304,"Q_Id":73317209,"Users Score":0,"Answer":"Instead of using a string parameter for the column name, use a list of strings refering to the column names you want to delete.","Q_Score":0,"Tags":"python,pandas,dataframe","A_Id":73317252,"CreationDate":"2022-08-11T07:58:00.000","Title":"python dataframe pandas drop multiple column using column name","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Can I call indexer.search_transactions with a group id? Or otherwise, search for multiple transactions by group id.\nThe Python SDK doesn't like the group id format: algosdk.error.IndexerHTTPError: invalid input: unable to parse base32 digest data 'txid': illegal base32 data at input byte 0","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":72,"Q_Id":73326794,"Users Score":2,"Answer":"You cannot search by group ID.\nYou would want to search by txid, find the block it's in, and find the group ID, then fetch that block and identify all transactions that contain the same group ID.","Q_Score":2,"Tags":"python,algorand","A_Id":73326809,"CreationDate":"2022-08-11T20:55:00.000","Title":"Calling indexer.search_transactions with a group ID (Algorand)","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Remove duplicates based on multiple criteriaRemove duplicates based on multiple criteria","AnswerCount":4,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":88,"Q_Id":73328951,"Users Score":0,"Answer":"You can use sort_values with key = lambda x: x != '' to order the rows by whether the verified column has a value. 
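For the column-dropping question above, a sketch that keeps a positional block of columns but excludes a few by name; the file and the specific column names are hypothetical:

import pandas as pd

df = pd.read_csv("wide.csv")                 # hypothetical 500-column file
block = list(df.columns[99:140])             # columns 100 to 140 (1-based) by position
exclude = {"col105", "col108", "col110"}     # names to drop inside that block
keep = [name for name in block if name not in exclude]
df = df[keep]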
Then giving the parameter keep = 'first' to df.duplicated will keep a row with a nonempty value, if any exist.","Q_Score":2,"Tags":"python,pandas,dataframe","A_Id":73330182,"CreationDate":"2022-08-12T03:25:00.000","Title":"Remove duplicates based on multiple criteria","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am coding my own models for a time but I saw huggingface and started using it. I wanted to know whether I should use the pretrained model or train model (the same hugging face model) with my own dataset. I am trying to make a question answering model.\nI have dataset of 10k-20k questions.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":140,"Q_Id":73334654,"Users Score":2,"Answer":"The state-of-the-art approach is to take a pre-trained model that was pre-trained on tasks that are relevant to your problem and fine-tune the model on your dataset.\nSo assuming you have your dataset in English, you should take a pre-trained model on natural language English. You can then fine-tune it.\nThis will most likely work better than training from scratch, but you can experiment on your own. You can also load a model without the pre-trained weights in Huggingface.","Q_Score":2,"Tags":"python,nlp,huggingface-transformers,nlp-question-answering","A_Id":73334712,"CreationDate":"2022-08-12T13:05:00.000","Title":"What is better custom training the bert model or use the model with pretrained data?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm new to PySpark and I see there are two ways to select columns in PySpark, either with \".select()\" or \".withColumn()\".\nFrom what I've heard \".withColumn()\" is worse for performance but otherwise than that I'm confused as to why there are two ways to do the same thing.\nSo when am I supposed to use \".select()\" instead of \".withColumn()\"?\nI've googled this question but I haven't found a clear explanation.","AnswerCount":3,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":405,"Q_Id":73347065,"Users Score":1,"Answer":"@Robert Kossendey You can use select to chain multiple withColumn() statements without suffering the performance implications of using withColumn. Likewise, there are cases where you may want\/need to parameterize the columns created. You could set variables for windows, conditions, values, etcetera to create your select statement.","Q_Score":0,"Tags":"python,pyspark","A_Id":74839901,"CreationDate":"2022-08-13T19:15:00.000","Title":"PySpark Data Frames when to use .select() Vs. 
.withColumn()?","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Eventually I hope to build an algorithm in Python to do this (I'm newer to the language), but I'm hoping to find a direction of a study to help me understand the ideas behind it\u2014I can tell at least it's related to combinatorics.\nIf we had six elements, {a, b, c, d, e, f}, we can identify 15 different pairs that could be made where order doesn't matter (n = 6, k = 2, combination).\nThey'd be:\nab, ac, ad, ae, af,\nbc, bd, be, bf, cd,\nce, cf, de, df, ef\nHowever, what I'm interested in doing is identifying the different sets of pairs. Brute force, they seem to be:\n\n{ab, cd, ef}\n{ab, ce, df}\n{ab, cf, de}\n{ac, bd, ef}\n{ac, be, df}\n{ac, bf, de}\n{ad, bc, ef}\n{ad, be, cf}\n{ad, bf, ce}\n{ae, bc, df}\n{ae, bd, cf}\n{ae, bf, cd}\n{af, bc, de}\n{af, bd, ce}\n{af, be, cd}\n\nPresuming no error on my part, there'd also be 15 lists, with 3 (or n\/2) pairs\/entries, where the order of pairings and the order within pairings doesn't matter. As noted, I'm hoping to eventually create some code that would construct these lists of pairs.\nStarting points are appreciated!","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":74,"Q_Id":73355022,"Users Score":0,"Answer":"In your set of characters, each character would have 5 possible pairs if it isn't paired with itself: ab ac ad.... After you get all possible pairs for a, you can then move onto b, you would loop through the list once again but this time omitting the a as ab has already been found. You can repeat this and each time omitting the letters before itself until you are on the last letter. After this to get your 'sets of pairs', you can just loop through your pairs and adding them to a new list.","Q_Score":0,"Tags":"python,combinatorics","A_Id":73355124,"CreationDate":"2022-08-14T20:47:00.000","Title":"Identifying the Unique Sets of Pairs taken from a Set (Order not Important)","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"My Environment in VS 2022 is Python 3.8 (same as the Python version installed on my system), I installed the pandas package using pip install pandas, and it said successfully installed. But when I import pandas I get the \"import pandas could not be resolved from source\" report.\nany help will be appreciated.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":594,"Q_Id":73367874,"Users Score":0,"Answer":"Version mismatches problem:\nFirst you need to make sure that the python3 interpreter version from terminal version is the same with your python version selection in VSCode.\n\nOpen terminal\nType 'python3'; you'll see your python version (Eg. version x)\nOpen your IDE VSCode\nOpen any\/current folder that related to python project on IDE VSCode\nCheck your python version at the bottom right on IDE VSCode (Eg. 
version\ny)\nChange or switch VSCode python version from (version y) to (version x)","Q_Score":0,"Tags":"python,python-3.x,pandas,visual-studio","A_Id":73367948,"CreationDate":"2022-08-16T01:28:00.000","Title":"import pandas in visual studio(2022)","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have 2 time series which each have a record every 30 seconds with a difference of about 21 seconds\n;\n\nts1 starts at 12:30:00\nAnd the second record at 12:30:30\n\n\nts2 starts at 12:30:21\nAnd the second record at 12:30:51\n\nWhat is the best way to merge them without losing information I want to have the same index for both","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":48,"Q_Id":73372556,"Users Score":0,"Answer":"You can have two separate columns for ts1 and ts2, use pd.concat() which with a default 'outer' join method, and resample with ffill(), if necessary.","Q_Score":1,"Tags":"python,pandas,merge,time-series,data-analysis","A_Id":73372919,"CreationDate":"2022-08-16T10:31:00.000","Title":"Resampling 2 time series","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm doing solar panel power generation forecast using 1 year and 3 months data and I want to forecast 1 month of this. The data is in 15 minutes periods.\nMy question is, if I want to make a monthly forecast how many train should I use to get a good prediction? The last 3 months, 6 months or all the data?\nAnd for testing? How many months or weeks should I take?\nThanks and any insight would be appreciated.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":36,"Q_Id":73380726,"Users Score":1,"Answer":"Not sure if you are looking for the train_test_split, or the model input?\nIf it's the first, I suggest you use all data. Use the first 3\/4 for training and the rest for testing. Also you might want to use rolling windows.\nOn the other hand, if you are looking for the model input, the answer is highly dependent on your data set. I can put some assumptions out there that may help you.\n\nA solar power time series can be expected to have a very strong daily seasonality - in 15min periods you will see that nicely.\n\nDepending on the location you might also see a yearly seasonality, e.g. lower power generation in winter.\n\nI would not expect to see a weekly, monthly, or other seasonalities.\n\nSince your time series is only 1y3m you will also most likely not see a general trend in the power generation.\n\n\nThus, your model should address these two seasonalities: daily and yearly. I would expect the daily power generation to have a strong autocorrelation (weather tomorrow is most likely same as today). Therefore, you might not need a very long history for that. Perhaps only one or two months to forecast the following month. 
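A minimal sketch of the concat/resample approach suggested above for the two offset 30-second series; the timestamps follow the question and the values are made up:

import pandas as pd

ts1 = pd.Series([1, 2, 3],
                index=pd.date_range("2022-01-01 12:30:00", periods=3, freq="30s"))
ts2 = pd.Series([10, 20, 30],
                index=pd.date_range("2022-01-01 12:30:21", periods=3, freq="30s"))

merged = pd.concat({"ts1": ts1, "ts2": ts2}, axis=1)   # outer join keeps every timestamp
aligned = merged.resample("30s").ffill()               # common 30 s grid, forward-filled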
However, if you have a strong yearly seasonality you might need longer training data to capture the rising and falling trend correctly.","Q_Score":0,"Tags":"python,time-series,forecasting","A_Id":73381165,"CreationDate":"2022-08-16T21:48:00.000","Title":"Train_test_split for time series forecasting","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I was trying to do semantic segmentation using Detectron2, but some tricky errors occurred when I ran my program. It seems there might be some problems in my environment.\nDoes anyone know how to fix it?\n\nImportError: cannot import name 'is_fx_tracing' from 'torch.fx._symbolic_trace' (\/home\/eric\/anaconda3\/envs\/detectron_env\/lib\/python3.9\/site-packages\/torch\/fx\/_symbolic_trace.py)","AnswerCount":2,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":2115,"Q_Id":73408083,"Users Score":2,"Answer":"@Mohan Ponduri's solution worked for me. Thanks. Seems to be the problem with new Detectron2 installation. Also I was able to use detectron2 on conda environment, just in case any one wants to know","Q_Score":4,"Tags":"python,pytorch,torch","A_Id":73438638,"CreationDate":"2022-08-18T18:48:00.000","Title":"ImportError about Detectron2","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"No matter what I install, it's either opencv-python, opencv-contrib-python or both at the same time, I keep getting the \"No module named 'cv2'\" error. I couldn't find an answer here. Some say that only opencv-python works, others say opencv-contrib-python works. I tried everything but nothing seems to work. I'm trying to use the aruco function and I know it belongs to the contrib module.\nAny tip? Thanks","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":398,"Q_Id":73409909,"Users Score":0,"Answer":"I would recommend using conda and creating a new enviroment. Then try sudo pip3 install opencv-python if youre using python3 or you can try sudo pip install opencv-python if you're using python2. This worked for me.\nAnother tip is to always check that you have the newest version of pip pip install --upgrade pip","Q_Score":0,"Tags":"python,opencv,aruco","A_Id":73414093,"CreationDate":"2022-08-18T22:03:00.000","Title":"No module named 'cv2' issue with opencv and contrib modules","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"No matter what I install, it's either opencv-python, opencv-contrib-python or both at the same time, I keep getting the \"No module named 'cv2'\" error. I couldn't find an answer here. Some say that only opencv-python works, others say opencv-contrib-python works. I tried everything but nothing seems to work. I'm trying to use the aruco function and I know it belongs to the contrib module.\nAny tip? Thanks","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":398,"Q_Id":73409909,"Users Score":0,"Answer":"Did you try to restart your code editor? 
I often need to close the file or whole VScoode and reopen for it to see the library I just installed.\nAnother problem could be that you're installing cv2 to a python verse that youre not using on your code editor..\nIf youre using python2.9 on your code editor but you install cv2 on your terminal with \"pip3.10 install opencv-python\" command, your code editor wont find it","Q_Score":0,"Tags":"python,opencv,aruco","A_Id":73409977,"CreationDate":"2022-08-18T22:03:00.000","Title":"No module named 'cv2' issue with opencv and contrib modules","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I would like to add information about paragraphes and headings on my spacy document.\nFor this I have added beacon in between the paragraphs text and heading (such as < p_start > for the beginning of a paragraph).\nFor this I have placed the custom functions that detect these beacon and tagged the spans as paragraphs and\/or headings, after the tokenizer, but before the tok2vec component. Thus the pipeline tokenize the text, the tag the spans, then apply the regular pipeline components.\nNow I have a problem as I don't want these beacons to be processed in my final doc. However I couldn't find a way, either to remove such token during the pipeline processing, or even substitute them into whitespace.\nSo is there a way to change these tagged documents, in order to remove only the non relevant tokens ?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":143,"Q_Id":73418943,"Users Score":0,"Answer":"There is no way to change the text of a spaCy Doc after it's created. That's a design decision to avoid data loss.\nIf you need to modify the contents of a Doc after creating it, what you can do is create a new Doc and pass it to a different pipeline. If you pass a Doc rather than a string as input to nlp then tokenization will be skipped.","Q_Score":1,"Tags":"python,spacy","A_Id":73439957,"CreationDate":"2022-08-19T15:19:00.000","Title":"spacy - removing a token from doc while preserving span attributes","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a large csv file (90000 rows, 20000 columns). I want to read certain rows from it in Python using an index list. But i don't want to load the entire file using something like \"read_csv\" as this will fill up my memory and the program will crash. IS there any way to do it without loading the entire file?\nI am open to changing the format of the input file(.csv) if that will help my case.","AnswerCount":4,"Available Count":1,"Score":0.049958375,"is_accepted":false,"ViewCount":395,"Q_Id":73419113,"Users Score":1,"Answer":"to read certain rows from csv file use skiprows argument of pandas.read_csv function:\n\nskiprows list-like, int or callable, optional\nLine numbers to skip\n(0-indexed) or number of lines to skip (int) at the start of the file.\nIf callable, the callable function will be evaluated against the row\nindices, returning True if the row should be skipped and False\notherwise. 
An example of a valid callable argument would be lambda x: x in [0, 2].","Q_Score":0,"Tags":"python,python-3.x,csv,file","A_Id":73420122,"CreationDate":"2022-08-19T15:34:00.000","Title":"Reading certain lines from a file in python without loading it in memory","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I was wondering why I get funny behaviour using a csv file that has been \"changed\" in excel.\nI have a csv file of around 211,029 rows and pass this csv file into pandas using a Jupyter-notebook\nThe simplest example I can give of a change is simply clicking on the filter icon in excel saving the file, unclicking the filter icon and saving again (making no physical changes in the data).\nWhen I pass my csv file through pandas, after a few filter operations, some rows go missing.\nThis is in comparison to that of doing absolutely nothing with the csv file. Leaving the csv file completely alone gives me the correct number of rows I need after filtering compared to \"making changes\" to the csv file.\nWhy is this? Is it because of the number of rows in a csv file? Are we supposed to leave csv files untouched if we are planning to filter through pandas anyways?\n(As a side note I'm using Excel on a MacBook.)","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":40,"Q_Id":73427692,"Users Score":2,"Answer":"Excel does not leave any file \"untouched\". It applies formatting to every file it opens (e.g. float values like \"5.06\" will be interpreted as date and changed to \"05 Jun\"). Depending on the expected datatype these rows might be displayed wrongly or missing in your notebook.\nBetter use sed or awk to manipulate csv files (or a text editor for smaller files).","Q_Score":0,"Tags":"python,excel,pandas,csv,jupyter-notebook","A_Id":73428065,"CreationDate":"2022-08-20T14:57:00.000","Title":"funny behaviour when editing a csv file in excel and then doing some data filtering in pandas","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"How can i add a list or a numpy array as a column to a Dask dataframe? 
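Building on the skiprows answer above, a sketch that loads only a handful of data rows from a large CSV without reading the whole file into memory; the file name and row numbers are examples:

import pandas as pd

wanted = {5, 1200, 88000}            # 0-based positions of the data rows to keep
df = pd.read_csv(
    "big.csv",                       # hypothetical 90k-row file
    skiprows=lambda i: i != 0 and (i - 1) not in wanted,   # keep header plus wanted rows
)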
When i try with the regular pandas syntax df['x']=x it gives me a TypeError: Column assignment doesn't support type list error.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":224,"Q_Id":73428007,"Users Score":0,"Answer":"I finally solved it just casting the list into a dask array with dask.array.from_array(), which i think it's the most direct way.","Q_Score":0,"Tags":"python,dask","A_Id":73428559,"CreationDate":"2022-08-20T15:41:00.000","Title":"Add list or numpy array as column to a dask dataframe","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am working with python-jira, I want to get the attachments(only excel or csv) data from an issue into a readable format(say pandas df for example) without downloading the files.\nIs there any ways to do it?\nThanks!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":24,"Q_Id":73435062,"Users Score":0,"Answer":"The simplest way is to just download it. Otherwise you have to access the Jira file system (on-prem Jira only), work out where the file is in the file system, and download it.","Q_Score":0,"Tags":"python,jira,filereader,python-jira","A_Id":73447800,"CreationDate":"2022-08-21T13:49:00.000","Title":"How to read a jira attachement content without downloading it in python?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Going crazy trying to need a number ID from each person in a pdf file.\nThe situation: in a pdf file, have a lot of people that received some money. i have to extract which ones received x money in a specific date.\ni used cpf id that looks like: 000.000.000-00\nCPF is an identification document that has an unique number for each brazilian person.\nThe code is ok but when the name of person have more than 5 names, the ID called by CPF break a line, being like:\n234.234.234-\n23\nand the ones who have their CPF's in this \\n, cant be found because the regex don't cover it. i tried everything n nothing works.\nI'm using this code in regex: r\"\\d{3}[\\.]\\d{3}[\\.]\\d{3}[-](\\s?\\d{0,2})\"\nEdit 1:\nI realized that the problem wasn't in the regex but its in the text format received from the function.\nThe text are being collected like: ' 00,0 Benef\u00edcio Saldo Conta Aldair Souza Lima) 143.230.234- Valor Mobilidade 12 '\nThe last 2 digits of cpf are showing up in the end of the text string. I looked and debugged the code and seems like the line break in the PDF is causing all this trouble.\nI changed the regex to find people by name but there's no name pattern cause they are so different.\nI'm thinking in some way that i can make a regex to match: \\d{3}[.]\\d{3}[.]\\d{3}[-]\nthan after N caracthers i match:\n'\\s\\d\\s' (' 12 ' from the example) cause the last 2 digits always have this 2 blank espaces, one before and one after.\nIs there some way that I can do it? 
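A sketch of the dask.array.from_array approach the poster above says worked; it assumes the array's chunk sizes line up with the dataframe's partition lengths (here two partitions of three rows each):

import numpy as np
import pandas as pd
import dask.array as da
import dask.dataframe as dd

ddf = dd.from_pandas(pd.DataFrame({"a": range(6)}), npartitions=2)
x = np.arange(6)

# Plain lists/arrays are rejected for column assignment; a dask array with
# chunks matching the partitions (3 rows each here) is accepted.
ddf["x"] = da.from_array(x, chunks=3)
print(ddf.compute())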
Help me guys plz","AnswerCount":2,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":70,"Q_Id":73435740,"Users Score":2,"Answer":"Specific to your 00,0 Benef\u00edcio Saldo Conta Aldair Souza Lima) 143.230.234- Valor Mobilidade 12 example:\n(\\d{3}\\.\\d{3}\\.\\d{3}-)[^\\d]*?(\\d{2})\nIt first matches and captures the 000.000.000- part: (\\d{3}\\.\\d{3}\\.\\d{3}-)\nThen matches but does not capture anything that's not digits: [^\\d]*?\nThen matches and captures two more digits: (\\d{2})\nNot the best implementation, since the results are returned in two separate groups, but hope this helps.","Q_Score":0,"Tags":"python,regex","A_Id":73435874,"CreationDate":"2022-08-21T15:21:00.000","Title":"regex extract text pdf","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I apply H2OAutoML for classification purposes.\nIn the first step, I developed several normalization techniques for my data. Then, I would want to apply H2OAutoML to MY normalized datasets and compare the results depending on various normalizing techniques.\nBut, H2OAutoML first normalized the data and then ran the model on it; I would want to skip the normalization step in H2OAutoML, but I cannot set the option.\nI'm curious if it's possible to deactivate the normalization phase and evaluate my data.\nFor instance, the standardize option is available in H2OGeneralizedLinearEstimator(standardize = False), but not in H2oautoML (H2OAutoML received an unusual keyword argument'standardize').\nThanks for your help and time!","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":32,"Q_Id":73438565,"Users Score":1,"Answer":"Based on this R help \"https:\/\/www.rdocumentation.org\/packages\/h2o\/versions\/3.36.1.2\/topics\/h2o.automl\" the preprocessing option is NULL by default (preprocessing = NULL).\nTherefore, my data is not normalized by H2OAutoML.","Q_Score":0,"Tags":"python,automl","A_Id":73503868,"CreationDate":"2022-08-21T22:10:00.000","Title":"skipping pre-processing step in H2OAutoML","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using in Keras the flow_from_directory method to load images from specific categories, where folders in my train path correspond to the different classes. This works all fine.\nI am now in the situation that I\u2019d like to remove specific image files during analysis due to annotation errors. At the same time, to maintain the raw data I would not like to resort to removing the images from disk.\nOn the other hand, creating a new file structure without the removed images would undesirably add disk memory.\nI was wondering if Keras offers a way to enter a simple list of image filenames that should be ignored during the training stage, such as [example1.jpg, example9.jpg, example18.jpg]. 
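A quick runnable check of the pattern suggested in the regex answer above against the sample line from the question, stitching the two captured groups back into one CPF:

import re

text = "00,0 Benefício Saldo Conta Aldair Souza Lima) 143.230.234- Valor Mobilidade 12 "
match = re.search(r"(\d{3}\.\d{3}\.\d{3}-)[^\d]*?(\d{2})", text)
if match:
    cpf = match.group(1) + match.group(2)
    print(cpf)   # 143.230.234-12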
Is there anywhere during ImageDataGenerator or flow_from_directory a parameter where this can be done\u00a0?\nAny advice would be welcome\u00a0!","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":27,"Q_Id":73446651,"Users Score":1,"Answer":"I did the same thing but i changed my strategy, i would think it would be helpful for you too; i tried out to make a dataframe for the images and the labels and then remove the rows for the images that don't wanna to flow into model and then instead of flow from directory, i used flow from dataframe\nby this way, you could even remove images based on particular labels, size, created time, modified time, and etc.","Q_Score":0,"Tags":"python,image-processing,keras","A_Id":73446850,"CreationDate":"2022-08-22T14:21:00.000","Title":"Online removal of images from Keras pipeline","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"How to save a dataframe with the same dataframe name in python?\nIn order to do that I need to extract the name of the dataframe and use it as a text to save to csv file. like dataframe.to_csv('name of the dataframe'+'.csv). My question is how to extract the name of the dataframe.\nExample:\nI have a dataframe that is randomly generated as XX_20135.\nXX_20135.to_csv('XX_20135.csv') so in the output I will have the csv name as XX_20135. I don't know the name of the df in advance as it is generated randomly.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":84,"Q_Id":73451039,"Users Score":0,"Answer":"Isn't the dataframe name the same as variable name? If so, the variable name is fixed, isn't it? If it is fixed, you know it when running the code.","Q_Score":0,"Tags":"python,pandas,dataframe,extract,export-to-csv","A_Id":73451080,"CreationDate":"2022-08-22T21:11:00.000","Title":"How to save a dataframe with the same dataframe name in python?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I tried run my *.m script (written by Matlab R2018b) from spyder (anaconda3 & python version 3.10 on windows 10). I got compatibility problem of python & Matlab. I installed trial Matlab R2022a but it didn't solve my problem. 
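A sketch of the flow_from_dataframe workaround described in the Keras answer above; the file names, directory, and label column are placeholders for whatever your annotation table contains:

import pandas as pd
from tensorflow.keras.preprocessing.image import ImageDataGenerator

labels = pd.DataFrame({
    "filename": ["img001.jpg", "example1.jpg", "img002.jpg"],
    "class": ["cat", "cat", "dog"],
})
exclude = {"example1.jpg", "example9.jpg", "example18.jpg"}
labels = labels[~labels["filename"].isin(exclude)]   # skip mislabelled images, keep files on disk

gen = ImageDataGenerator(rescale=1.0 / 255)
train_it = gen.flow_from_dataframe(labels, directory="train/",
                                   x_col="filename", y_col="class",
                                   target_size=(224, 224), class_mode="categorical")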
I also reduced the python version from 3.10 to 3.6 to be compatible with Matlab R2018b, as I saw in some advices in stack overflow.\nHowever, this version reduction took too much time on my laptop and didn't solve the problem.\nI am using Computer Vision, Image Processing, Optimization, and Statistics & Machine Learning tools boxes of Matlab R2022a (trial version) or R2018b.\nMany thanks for your helpful comments.\nBest regards,","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":154,"Q_Id":73452730,"Users Score":0,"Answer":"Python has an initial configuration so that the system recognizes it, and we can call it.\nDid you set the Python 3.6 environment in the setting?","Q_Score":0,"Tags":"python,matlab,windows-10,spyder,anaconda3","A_Id":73470391,"CreationDate":"2022-08-23T03:10:00.000","Title":"How to run a Matlab R2018b script in Spyder (python 3.10) from Anaconda3?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need to filter a column of strings types to only return values \u200b\u200bthat are equal to 'NORMAL'. Applying the following codes returned different dataframes:\n1 - df = df[(df['column_name'] == 'NORMAL')]\n2 - df = df.drop(df[(df['column_name'] != 'NORMAL'].index)\n3 - df = df.drop(df[(~df['column_name'].str.contains('NORMAL'))].index)\nThe resulting dataframe at 2 and 3 are equal but different from 1. The expected dataframe is made by example 1.\nAm I missing something or is there a logical difference between the codes ?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":41,"Q_Id":73476569,"Users Score":0,"Answer":"As commented by @mozway, the difference in the return between the 3 example codes was in the index, when resetting the index the problem was solved.","Q_Score":0,"Tags":"python,pandas,comparison","A_Id":73478059,"CreationDate":"2022-08-24T16:22:00.000","Title":"Different behavior with the same logic applied in pandas dataframe","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"so i've looked online at a far exmaples but they all seem to assume the data is in order.\nSo Row 1 in Both files has the same information.\nIn my case Row 1 File X has an IP and DNS. The idea is to check if this IP address can be found in any of the rows in File Y.\nIdeally I'd get a list of IP addresses not found in File Y.\nI tried to import the files into Pandas but thats about where my knowledge ends.\nEdit: Sample\nFile 1\ndns,ip\nwhat.dont.cz.,12.34.21.90\n........\nFile 2\nip,dns\n1.32.20.25, sea.ocean.cz\n........\n12.34.21.90 what.dont.cz\n..........","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":47,"Q_Id":73486822,"Users Score":0,"Answer":"I ended up using cli53 and that provided me with a pretty clean list of all Records in our zone. I then used find and replace to add a comma to all the values and imported this into excel.\nThat was the best solution for my particular use case.","Q_Score":0,"Tags":"python,pandas,csv,sorting","A_Id":73622637,"CreationDate":"2022-08-25T11:39:00.000","Title":"Comparing 2 CSV files with Domain and IP. Rows are in different order. 
Reading Row 1 File X compare with all Rows in File Y","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"We have a Hive metatstore with object store as warehouse setup . External table is created over data present in minio. My requirement is to read data from this table in pandas or dask. Currently I am doing in a crude way by accessing the metadata of the table and extracting location of data and then reading that location to create a dataframe.\nPlease suggest any other way for it , which will help me support more user given queries .","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":78,"Q_Id":73502903,"Users Score":0,"Answer":"Dask has the ability to read parquet files, as you have found, so accessing them directly will be the most efficient thing you can do.\nThere is also the possibility of reading dataframes from SQL queries, which you could push to hive, and get it to do the extraction for you; but in this case the compute is being done by hive, and you will get very poor throughput of the results to dask.\nThere have been some attempts in the past for dask to directly interact with the hive metastore and achieve exactly the kind of workflow you are after, but I don't think any of those became generally released or usable","Q_Score":0,"Tags":"python,pandas,dask","A_Id":73503215,"CreationDate":"2022-08-26T14:53:00.000","Title":"Reading from Hive external table in Pandas or DASK","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have installed numpy using\npip install numpy\nIn the current directory.\nBut when I try to import it in my jupyter notebook, it gives an error.\nModuleNotFoundError Traceback (most recent call last)\n~\\AppData\\Local\\Temp\/ipykernel_17100\/2172125874.py in \n----> 1 import numpy\nModuleNotFoundError: No module named 'numpy'\nPlease help me resolve this problem. I have tried uninstalling numpy and re-installing, yet it does not work at all. I am using a Windows 10 system.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":39,"Q_Id":73508449,"Users Score":1,"Answer":"First, pip install numpy will install NumPy package in python\/site-package, not the current directory. You can type pip show numpy in the terminal to check its information(path, version...).\nSecondly, Maybe the interpreter you choose in Jupyter notebook is not the same as the one you installed numpy on. You might need to check that.\nTo check Whether it has numpy. You might use pip list to check that, in case pip corresponds to the interpreter you wanna check.\nHope this will help you.","Q_Score":3,"Tags":"python,numpy,jupyter-notebook","A_Id":73509246,"CreationDate":"2022-08-27T04:50:00.000","Title":"I have installed numpy, yet it somehow does not get imported in my jupyter notebook","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am new to Deep learning and would like to understand on the below points. 
Can you please help.\n\nIf I give number of epochs as 100 to train and try to evaluate the model, does it take the best epoch model or the final model after 100 epochs.\n\nIn history, I am seeing loss and val loss. Do the model try to minimize only the training loss and just show the val_loss for our reference, similar to the metrics it shows.\n\nIf I use Keras Tuner (RandomSearch), there is an objective function. I am confused whether the model try to reduce the loss provided during compile or the loss provided in the tuner objective.\n\n\nCan you please clarify on the above points.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":94,"Q_Id":73516836,"Users Score":1,"Answer":"The high value for epoch will only lead into high accuracy and lowest loss for training dataset, but the important thing that you should watch during training on data is the val_loss and val_metric;\nIn most cases if the model continue training on the data will cause overfitting on the validation data (the validation data are not seen by the model and just evaluatted by model after an epoch) so the high value for epochs wont lead into better model\nso the most important thing to notice is the val_loss and discontinue or break model training if you notice continuos increasing in val_loss; so you could implement a callback (EarlyStopping) to stop model training whenever the increasing in val_loss is watched.","Q_Score":0,"Tags":"python,tensorflow,keras,keras-tuner","A_Id":73517151,"CreationDate":"2022-08-28T07:38:00.000","Title":"Keras Hyper tuning - Final model state","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"where employee ID is the column name and then int64 is the datatype.\nemployee_id: int64\nRange: 1.34 - 2.07\nMean: 1.71\nStandard deviation: 0.11\nMedian: 1.71\n(this is just an example as I am new to learning the data science side of python and I want to get more organized with my code. Thanks)","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":15,"Q_Id":73522676,"Users Score":0,"Answer":"I think what you\u2019re looking for is df.describe() where df is your pandas dataframe. If that\u2019s not what you\u2019re looking for try df.info().","Q_Score":1,"Tags":"python,pandas","A_Id":73522840,"CreationDate":"2022-08-28T22:21:00.000","Title":"Is there a way utilizing pandas in python to input display the column name and also the datatype. Also how would is be possible to display info under","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I had a list of Arabic and English elements, I transfer it into a dataframe BUT the issue is I have all values in One single column, I want to move the records that contains English words to another column:\nso what I have now:\n\n\n\n\nCOLUMN 1\n\n\n\n\n\u0647\u0644\u0627\n\n\n\u0627\u0644\u0633\u0644\u0627\u0645\n\n\nWELCOMING\n\n\n\u0634\u064a \u0627\u062e\u0631\n\n\n\n\nTHE OUTPUT THAT I WANT IS:\n\n\n\n\nCOLUMN 1\nCOLUMN 2\n\n\n\n\n\u0647\u0644\u0627\nwelcoming\n\n\n\u0627\u0644\u0633\u0644\u0627\u0645\nothers eng. 
words\n\n\n\n\nhope its clear..","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":41,"Q_Id":73538275,"Users Score":0,"Answer":"You can check for each entry if the first character is part of ASCII. If so, move to new column.\nDisclaimer: Only works if one language contains no ASCII at all and the second language only contains ASCII-Characters","Q_Score":2,"Tags":"python,pandas,dataframe,jupyter","A_Id":73538323,"CreationDate":"2022-08-30T06:52:00.000","Title":"If df records is in English move it to another column using python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have two different dataframes and I need to add informatio of one dataframe into another basede on a column where they share the same values.\nSomething like this:\nDF1:\n\n\n\n\nInvoices\nClient\nProduct\nProduct type\n\n\n\n\n00000001\nAAAAAA\nA1a1\n\n\n\n\n\nDF2:\n\n\n\n\nProduct\nProduct type\nProduct description\n\n\n\n\nA1a1\nType A1\ndescription of the product\n\n\n\n\nThe first Dataframe is a list of all invoices over the last year, which has one row for each product in that invoice, I need to add the \"Product type\" from DF2 on DF1 for each product.\nI've tried to use the merge function but it adds the column and that's not what I need to do.\nI need to compare the \"Product\" columns on both DFs and when the value is the same populate DF1 \"Product\" with DF2 \"Product\" value.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":26,"Q_Id":73560521,"Users Score":0,"Answer":"df3 = pd.merge(df1, df2, how =\"left\", on \"Product\")","Q_Score":0,"Tags":"python-3.x,pandas","A_Id":73560540,"CreationDate":"2022-08-31T18:41:00.000","Title":"How can I populate a column of a dataframe with information of another dataframe?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"i know we can use say df['col'] >= 1 to return a booleen mask of true\/false values for a specific pandas df column, but is there a way to filter for data types?\nI'd like to filter out NaN values in a column that has both string and NaN values.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":12,"Q_Id":73563131,"Users Score":0,"Answer":"You can find the NaN's with df['col'].isna(). Returns a boolean mask.","Q_Score":0,"Tags":"python,pandas","A_Id":73563142,"CreationDate":"2022-09-01T00:21:00.000","Title":"booleen mask for datatypes using pandas?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"After I have loaded all data I needed and did the mapping, I used the following code to extract n-elements using the take() function and get the data in rdd format.\nprint(data.take(10))\nBut if I want to take all data (it could be thousands or more rows) what code shall I write to extract all data?\nThank you in advance.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":65,"Q_Id":73568536,"Users Score":2,"Answer":"The take will accept only Int. 
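For illustration of the take()/collect() distinction discussed in this answer, a minimal PySpark sketch (the local session and the data are made up, not from the original post):

```python
# Minimal sketch, assuming a local Spark installation; values are illustrative.
from pyspark.sql import SparkSession

spark = SparkSession.builder.master("local[*]").appName("take-vs-collect").getOrCreate()
rdd = spark.sparkContext.parallelize(range(1000)).map(lambda x: x * 2)

print(rdd.take(10))        # brings only 10 elements back to the driver
print(len(rdd.collect()))  # brings *all* elements to the driver; safe only for small results

spark.stop()
```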
This .take() method should only be used if the resulting array is expected to be small, as all the data is loaded into the driver\u2019s memory.\nIf you are trying to enter any number beyond the Integer range, it will give an error, like\n\"error: type mismatch;\nfound: Long\nrequired: Int\"\nthis is not useful for millions of data. Useful when the result is small or in other words the number or size is an integer.\nYou can use other actions like collect(), to get all the data in the RDD","Q_Score":1,"Tags":"python,apache-spark,rdd","A_Id":73571340,"CreationDate":"2022-09-01T11:22:00.000","Title":"How to get all data in rdd pipeline in Spark?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Say when we have a randomly generated 2D 3x2 Numpy array a = np.array(3,2) and I want to change the value of the element on the first row & column (i.e. a[0,0]) to 10. If I do\na[0][0] = 10\nthen it works and a[0,0] is changed to 10. But if I do\na[np.arange(1)][0] = 10\nthen nothing is changed. Why is this?\nI want to change some columns values of a selected list of rows (that is indicated by a Numpy array) to some other values (like a[row_indices][:,0] = 10) but it doesn't work as I'm passing in an array (or list) that indicates rows.","AnswerCount":2,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":199,"Q_Id":73573156,"Users Score":4,"Answer":"a[x][y] is wrong. It happens to work in the first case, a[0][0] = 10 because a[0] returns a view, hence doing resul[y] = whatever modifies the original array. However, in the second case, a[np.arange(1)][0] = 10, a[np.arange(1)] returns a copy (because you are using array indexing).\nYou should be using a[0, 0] = 10 or a[np.arange(1), 0] = 10","Q_Score":0,"Tags":"python,numpy,numpy-ndarray","A_Id":73573177,"CreationDate":"2022-09-01T17:06:00.000","Title":"Change Numpy array values in-place","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using sklearn's IterativeImputer with a RandomForestRegressor to impute my data. Considering Random Forests do not need their data scaled, I cannot give the argument \"tol\" a value, because it will not be in any meaningful units. How do I nonetheless force IterativeImputer to continue iterating?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":23,"Q_Id":73598679,"Users Score":0,"Answer":"To figure this out, we should compute a given rank order, so we can calculate the distance between each node and the rank order. I could guess that as our goal is a list of ranking, the value for. The RandomForestRegressor you using has one huge exception: if there are no nonzero values for the first and last moments it should return True. And indeed it does. The first moment is if one excludes the endpoints which have no data.\nFinally, the last moment is when the centroid is derived from all data:\nI got my data. 
I tried to use randomForestRegressor","Q_Score":0,"Tags":"python,scikit-learn,imputation","A_Id":73598807,"CreationDate":"2022-09-04T10:38:00.000","Title":"Force sklearn IterativeImputer to continue iterating","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"The model has been trained, tested, and I have saved both the checkpoint and the model. After the training is complete, I load the stored model in the code. Do I need to retrain the model if I close the Jupyter Notebook?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":41,"Q_Id":73603033,"Users Score":0,"Answer":"The short answer is... no!\nActually saving a model (under format .pt basically) will store all the trained parameters under a pickle dictionary somewhere in persistent memory (CD or SSD). Closing a notebook only clean allocated memory (RAM or video RAM) but not persistent memory so you will lose an hypothetical variable model but not the saved parameters you trained. Note that the architecture is not store (only the weights and the name of the layers)","Q_Score":0,"Tags":"python,tensorflow,jupyter-notebook,torch","A_Id":73603633,"CreationDate":"2022-09-04T21:49:00.000","Title":"Will I have to repeat the training procedure if I close the Jupyter notebook after the model has finished training and I've saved it?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am a pretty amateur data science student and I am working on a project where I compared two servers in a team based game but my two datasets are formatted differently from one another. One column for instance would be first blood, where one set of data stores this information as \"blue_team_first_blood\" and is stored as True or False where as the other stores it as just \"first blood\" and stores integers, (1 for blue team, 2 for red team, 0 for no one if applicable)\nI feel like I can code around these difference but whats the best practice? should I take the extra step to make sure both data sets are formatted correctly or does it matter at all?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":37,"Q_Id":73612523,"Users Score":0,"Answer":"Data cleaning is usually the first step in any data science project. 
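As a hedged sketch of the "transform to a consistent format first" advice in this answer (the column names and the 0/1/2 scheme come from the question, the frames themselves are toy stand-ins):

```python
import pandas as pd

server_a = pd.DataFrame({"blue_team_first_blood": [True, False, True]})
server_b = pd.DataFrame({"first_blood": [1, 2, 0]})   # 1 = blue, 2 = red, 0 = nobody

# Map the boolean column onto the integer scheme. This assumes a first blood always
# occurred in server A's games; otherwise the False case would be ambiguous.
server_a["first_blood"] = server_a["blue_team_first_blood"].map({True: 1, False: 2})

combined = pd.concat([server_a[["first_blood"]], server_b], ignore_index=True)
print(combined)
```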
It makes sense to transform the data into a consistent format before any further processing steps.","Q_Score":0,"Tags":"python,sql,pandas,data-science,data-analysis","A_Id":73620360,"CreationDate":"2022-09-05T16:51:00.000","Title":"About Data Cleaning","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"What is the best way to append data using matching column names from two different data frames with differing dimensions?\nScenario:\nDf1 = 350(rows)x2778(columns)\nDf2 = 321x2910\nDf1 has <2778 columns with the exact same name as <2910 columns in Df2.\n-It could be 500 columns in each data frame as an example that have equivalent names\nWhat I want to do:\nAppend data from df2 to df1 where the column names match.\nE.x.: df1's data is present in matching column and has df2's data appended to the end of the column, put underneath it so to say.\nIf the col names don't match, the data frame that lacks the matching name should have the name attached as a new column with NA's filling the missing space.\nE.x.: df1 doesn't have a column df2 has, so the column is added while maintaining order of processing.\nI've tried to do this using Pandas in Python but got Index duplication errors (probably the columns). I'm looking at R now but I want to know if anyone has a simple solution.\nThanks!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":52,"Q_Id":73627201,"Users Score":0,"Answer":"Check out merge() from base r or bind_rows() from dplyr.","Q_Score":1,"Tags":"python,r,pandas,dataframe,append","A_Id":73627775,"CreationDate":"2022-09-06T20:00:00.000","Title":"Appending data with unequal data frame dimensions","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I use MultiHeadAttention layer in my transformer model (my model is very similar to the named entity recognition models). Because my data comes with different lengths, I use padding and attention_mask parameter in MultiHeadAttention to mask padding. If I would use the Masking layer before MultiHeadAttention, will it have the same effect as attention_mask parameter? Or should I use both: attention_mask and Masking layer?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":335,"Q_Id":73636196,"Users Score":0,"Answer":"The masking layer keeps the input vector as it and creates a masking vector to be propagated to the following layers if they need a mask vector ( like RNN layers). 
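A minimal sketch of the two masking routes being compared here (shapes and values are illustrative; whether MultiHeadAttention picks up a propagated Keras mask automatically depends on the Keras version, so the explicit attention_mask route is the unambiguous one):

```python
import tensorflow as tf

x = tf.random.uniform((2, 5, 8))                      # (batch, seq_len, features)
pad = tf.constant([[1, 1, 1, 0, 0],
                   [1, 1, 0, 0, 0]])                  # 1 = real token, 0 = padding
x = x * tf.cast(pad, tf.float32)[:, :, tf.newaxis]    # zero out the padded steps

mha = tf.keras.layers.MultiHeadAttention(num_heads=2, key_dim=8)

# Route 1: build the (batch, query_len, key_len) mask yourself and pass it explicitly.
attn_mask = tf.cast(pad[:, tf.newaxis, :] * pad[:, :, tf.newaxis], tf.bool)
out_explicit = mha(query=x, value=x, attention_mask=attn_mask)

# Route 2: a Masking layer attaches a Keras mask to the tensor; recent Keras versions
# can translate it into an attention mask automatically, older ones silently ignore it.
masked = tf.keras.layers.Masking(mask_value=0.0)(x)
out_masked = mha(query=masked, value=masked)
```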
you can use it if you implement your own model.If you use models from huggingFace, you can use a masking layer for example if you you want to save the mask vector for future use, if not the masking operations are already built_in, so there is no need to add any masking layer at the beginning.","Q_Score":5,"Tags":"python,tensorflow,keras,transformer-model","A_Id":73671754,"CreationDate":"2022-09-07T13:14:00.000","Title":"Masking layer vs attention_mask parameter in MultiHeadAttention","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've been doing competitive programming (USACO) for a couple of months now, in which there are time constraints you cannot exceed. I need to create a large matrix, or 2d array, the dimensions being 2500x2500, in which each value is [0,0]. Using list comprehension is taking too much time, and I needed an alternative (you cannot import modules so numpy isn't an option). I had initially done this:\ngrid = [[[0,0] for i in range(2500)] for i in range(2500)]\nbut it was taking far too much time, so I tried:\n grid= [[[0,0]]*2500]*2500,\nwhich gives the same result initially, but whenever I try to change a value, for example:\ngrid[50][120][0]= 1, it changes the 0th index position of every [0,0] to False in the entire matrix instead of the specific coordinate at the [50][120] position, and this isn't the case when I use list comprehension. Does anybody know what's happening here? And any solution that doesn't involve a crazy run time? I started python just a couple of months before competitive programming so I'm not super experienced.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":85,"Q_Id":73643416,"Users Score":2,"Answer":"grid = [[[0,0] for i in range(2500)] for i in range(2500)]\ntakes around 2.1 seconds on my PC, timing with PowerShell's Measure-Command. Now if the data specifications are strict, there is no magical way to make this faster. However, if the goal is to make this representation generate faster there is a better solution: use tuple instead of list for the inner data (0, 0).\ngrid = [[(0, 0) for i in range(2500)] for i in range(2500)]\nThis snippet generates the same informational value in under quarter of the time (0.48 s). Now what you have to consider here is what comes next. When updating these values in the grid, you need to always create a new tuple to replace the old one - which will always be slower than just updating the list value in the original sample code. This is because tuple doesn't support item assignment operation. Replacing a single value is still as easy as grid[50][120] = (1, grid[50][120][1]).\nFast generation - slow replacement, might be handy if there aren't tons of value changes. 
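To make the aliasing pitfall and the tuple trade-off concrete, a small self-contained sketch (the grid is shrunk from 2500 to 4 purely for readability):

```python
n = 4  # 2500 in the original problem

aliased = [[[0, 0]] * n] * n          # every row and cell refer to the same objects
aliased[1][2][0] = 1                  # ...so this "change" shows up across the grid

safe = [[[0, 0] for _ in range(n)] for _ in range(n)]
safe[1][2][0] = 1                     # only one cell changes

fast = [[(0, 0) for _ in range(n)] for _ in range(n)]  # tuples: quicker to build
fast[1][2] = (1, fast[1][2][1])       # but each update allocates a new tuple
```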
Hope this helps.","Q_Score":2,"Tags":"python,arrays,list,indexing,list-comprehension","A_Id":73648353,"CreationDate":"2022-09-08T03:24:00.000","Title":"Creating a very large 2D array without a major impact to code run time in Python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am wondering if there is a fast way to rearrange the rows of a csv using pandas so that it could match the order of the rows in another csv that have the same data, but arranged differently. To be clear, these two csvs have the same data in the form of several numeric features spread across several columns. I tried doing loops that matches each row of data with its counterpart by comparing the values in multiple columns, but this prove too slow for my purposes.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":16,"Q_Id":73646115,"Users Score":0,"Answer":"You should use pandas DataFrame:\n\n\"read_csv\"__both files.\nConvert both to \"DataFrame\".\nUse \"merge\".\n\"to_csv\"__use to save.\n\nshare your data..","Q_Score":0,"Tags":"python,pandas,dataframe","A_Id":73646343,"CreationDate":"2022-09-08T08:43:00.000","Title":"Rearranging CSV rows to match another CSV based on data in multiple columns","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"My df had a lot of categorical variables, so I used\npd.get_dummies()\nto be able to train my Random Forest and Logistic Regression models. Everything worked fine, then I asked myself: which components affect the models prediction the most? I thought about using PCA, but I have dummies binary variables, so I don't know if it has interpretability due to the number of variables I have being dummies. I also tried using\nRF.feature_importances_\nbut it's the same; I only have thousands of columns with data where each one influences very little, losing data interpretability. Is there any method to calculate the importance of each variable being dummie? I've seen some discussion on stackoverflow about this. Some say PCA can be used, others say it loses interpretability. I do not look for papers that propose methods. In case there is a solution, I would like it to be implemented in python to use it","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":25,"Q_Id":73656495,"Users Score":0,"Answer":"In general, I'd be really careful about imparting meaning to feature sensitivity (classic correlation is not causation argument), but one major problem is your categories are one hot encoded and spread out, so you need to pick them back up. 
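A minimal sketch of "picking the dummies back up", i.e. folding one-hot importances back onto their source columns (the toy data, the prefix-before-underscore convention, and the model settings are illustrative assumptions, not from the original post):

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

raw = pd.DataFrame({"city": ["Paris", "Rome", "Paris", "Oslo"],
                    "size": ["S", "L", "M", "S"]})
y = [0, 1, 0, 1]

X = pd.get_dummies(raw)                      # -> "city_Paris", "size_S", ...
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

imp = pd.Series(model.feature_importances_, index=X.columns)
# Sum dummy importances per original column (prefix before the first "_").
per_original = imp.groupby(imp.index.str.split("_").str[0]).sum()
print(per_original.sort_values(ascending=False))
```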
How you do that somewhat depends on the data and whether you're trying to get importance of an overall category or a label that appears across categories.\nI can't write any code because you haven't given any code.","Q_Score":0,"Tags":"python,machine-learning","A_Id":73656932,"CreationDate":"2022-09-09T01:37:00.000","Title":"How to calculate most important features for Random Forest and Logistic Regression with dummies variables?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am working on a sentiment analysis project, where the backbone is ofc a model. This was developed using sklearn's off the shelf solutions (MLP) trained with my data. I would like to \"save\" this model and use it again in JavaScript.\nAdam\nI have looked at pickle for python but I'm not sure how i could use this for JS. This is a chrome extension I am developing so I would rather not set up and server. I should add this is course work, so spending money is a no!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":68,"Q_Id":73663673,"Users Score":0,"Answer":"After some research I pretty much determined its not possible using sklearn in JS. My solution was to use keras and use tensorflow JS.\nAlternatively, I have learnt the maths behind the network and \"raw\" code it using no libraries. This took a lot longer than just converting everything to keras although.","Q_Score":1,"Tags":"javascript,python,scikit-learn,sentiment-analysis","A_Id":74995572,"CreationDate":"2022-09-09T14:27:00.000","Title":"Sklearn Model to JS","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm trying to perform classification on some images from Tiny ImageNet dataset. I'm creating a training dataset and test dataset where the features are the numpy representation of the image. However, it seems like there are RGB images which are of shape (64x64x3) and black & white images which are only one channel (64x64); so I can't simply flatten the collection of images into a 1d array as they give different sizes. What's the standard way of dealing with this discrepancy? Do I pad with 0's?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":65,"Q_Id":73668883,"Users Score":1,"Answer":"Two simple approaches come to mind:\n\nYou can either convert all RGB images to grayscale\nYou can also convert all grayscale images to RGB\n\nYou then have a uniform shape for your input.\nIn any case, OpenCV can handle both operations using cv2.cvtColor() and either cv2.COLOR_RGB2GRAY or cv2.COLOR_GRAY2RGB.\nI'm certain there are more complex ways to represent an image independent of its color space, but I'd start with either of the two above.\nEdit: Bear in mind that if you convert a RGB image to grayscale and then back to RGB that they will differ. 
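A short sketch of unifying channel counts with the cv2.cvtColor call mentioned in this answer (the synthetic 64x64 arrays stand in for the Tiny ImageNet samples):

```python
import numpy as np
import cv2

gray = np.random.randint(0, 256, (64, 64), dtype=np.uint8)      # single-channel image
rgb = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)    # three-channel image

def to_rgb(img):
    # Promote single-channel images so every sample ends up (64, 64, 3).
    if img.ndim == 2:
        return cv2.cvtColor(img, cv2.COLOR_GRAY2RGB)
    return img

batch = [to_rgb(gray), to_rgb(rgb)]
flat = np.stack(batch).reshape(len(batch), -1)   # uniform 64*64*3 feature vectors
print(flat.shape)
```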
However, if you plan on using image augmentation there's a good chance it won't impact the model too much.","Q_Score":0,"Tags":"python,machine-learning,image-processing,computer-vision,classification","A_Id":73671629,"CreationDate":"2022-09-10T02:20:00.000","Title":"How to deal with different types of images for image classification (Black & White & RGB) ImageNet","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a dataset with 150+ features, and I want to separate them as text, categories and numerics. The categorical and text variables are having the Object data type. How do we distinguish between a categorical and text variable? Is there any threshold value for the categorical variable?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":141,"Q_Id":73672354,"Users Score":0,"Answer":"There is no clear distinction between categories and text. However, if you want to understand if a particular feature is categorical you can do a simple test.\ne.g. if you are using pandas, you can use value_counts() \/ unique() for a feature. If the number of results are comparable to the size of the dataset, this is not a categorical field.\nSimilarly for numerics too.. But in numerics it may be Ordinal, meaning there is a clear ordering. e.g., size of t-shirts.","Q_Score":0,"Tags":"python-3.x,data-science,data-cleaning,feature-selection,feature-engineering","A_Id":73718768,"CreationDate":"2022-09-10T13:54:00.000","Title":"identifying categorical variables in a dataset","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm not sure what the reason might be for having to specify the name of the dataframe twice when selecting rows using conditional statements in Pandas. For example, if I have a dataframe df:\n\n\n\n\nname\nage\n\n\n\n\nAlice\n31\n\n\nBob\n21\n\n\n\n\nwhen I want to select rows with people over 30 I have to write\nover_thirty = df[df.age > 30]. Why not simply df['age' > 30]]?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":32,"Q_Id":73673330,"Users Score":0,"Answer":"so if you write df[age>3] it will give you output in true or false. I am sure which you not needed","Q_Score":0,"Tags":"python,pandas,dataframe,syntax","A_Id":73673737,"CreationDate":"2022-09-10T16:09:00.000","Title":"Why do I have to specify the dataframe twice while selecting rows with logical statements in Pandas?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have following dataframe, where date was set as the index col,\n\n\n\n\ndate\nrenormalized\n\n\n\n\n2017-01-01\n6\n\n\n2017-01-08\n5\n\n\n2017-01-15\n3\n\n\n2017-01-22\n3\n\n\n2017-01-29\n3\n\n\n\n\nI want to append 00:00:00 to each of the datetime in the index column, make it like\n\n\n\n\ndate\nrenormalized\n\n\n\n\n2017-01-01 00:00:00\n6\n\n\n2017-01-08 00:00:00\n5\n\n\n2017-01-15 00:00:00\n3\n\n\n2017-01-22 00:00:00\n3\n\n\n2017-01-29 00:00:00\n3\n\n\n\n\nIt seems I got stuck for no solution to make it happen.... 
It will be great if anyone can help...\nThanks\nAL","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":55,"Q_Id":73677560,"Users Score":1,"Answer":"When your time is 0 for all instances, pandas doesn't show the time by default (although it's a Timestamp class, so it has the time!). Probably your data is already normalized, and you can perform delta time operations as usual.\nYou can see a target observation with df.index[0] for instance, or take a look at all the times with df.index.time.","Q_Score":0,"Tags":"python,pandas,date,datetime","A_Id":73677666,"CreationDate":"2022-09-11T07:29:00.000","Title":"How to append hour:min:sec to the DateTime in pandas Dataframe","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"For my ML-project I have 1d-data with multiple channels (n>2, variable). After data acquisition I noticed that the data in one channel was completely unusable, and therefore would decrease the accuracy of my trained model. Still, I did not want to remove the channel entirely from my model and re-write it into a model with (n-1)-channels, as it would receive future data with n channels during classification, which would break a modified model.\nInstead, I wanted to have the option of telling my model to ignore data coming from one channel during both training and evaluation, such that it would look and behave like a model with n channels, but would internally only use n-1 channels. Is that possible for pytorch-based neural networks? And if yes, how would I approach that?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":32,"Q_Id":73690412,"Users Score":0,"Answer":"The quick-and-dirty solution would be to simply add a pre-processing stage where you multiply this un-wanted channel by zero.","Q_Score":0,"Tags":"python,pytorch","A_Id":73690807,"CreationDate":"2022-09-12T13:51:00.000","Title":"Ignoring one\/multiple channels in pytorch-model during training and evaluation","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to come up with a calculation that creates a column that comes up with a number that shows density for that specific location in a 5 mile radius, i.e if there are many other locations near it or not. I would like to compare these locations with themselves to achieve this.\nI'm not familiar with the math needed to achieve this and have tried to find a solution for some time now.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":99,"Q_Id":73693706,"Users Score":1,"Answer":"Ok, i'm not super clear with what your problem may be but i will try to give you my approach.\nLet's first assume that the area you are querying for points is small enough to be considered flat hence the geo coordinates of your area will basically be cartesian coordinates.\nYou choose your circle's center as (x,y) and then you have to find which of your points are within radius of your cirle: in cartesian coordinates being inside of a circle means that the distance of the points from your center are smaller than a given radius. 
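A self-contained sketch of the "count points within the radius, divide by the circle's area" idea described here, using a haversine distance so the 5-mile radius is in real miles (the coordinates are made up):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"lat": [40.710, 40.720, 40.800, 40.713],
                   "lon": [-74.000, -74.010, -73.950, -74.002]})

def haversine_miles(lat1, lon1, lat2, lon2):
    r = 3958.8  # Earth radius in miles
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = np.sin((lat2 - lat1) / 2) ** 2 + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2
    return 2 * r * np.arcsin(np.sqrt(a))

radius = 5.0
area = np.pi * radius ** 2
counts = [
    # subtract 1 so a point does not count itself as a neighbour
    (haversine_miles(row.lat, row.lon, df.lat.values, df.lon.values) <= radius).sum() - 1
    for row in df.itertuples()
]
df["density_per_sq_mile"] = np.array(counts) / area
print(df)
```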
You save those points in your choice of data structure and the density will probably be the number of your points divided by the area of the circle.\nI hope i understood the problem correctyl!","Q_Score":1,"Tags":"python,pandas,latitude-longitude,kernel-density","A_Id":73693806,"CreationDate":"2022-09-12T18:24:00.000","Title":"How do I find the density of a list of points given latitude and longitude in a 5 mile radius in Python Pandas?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"i have a data that range between 0 - 2,9.\nI need it to be normalized with MinMaxScaler with 0 - 1 range.\nI need to transform 2.9 into 0 and 0 into 1. My solution is to subtract all numbers with 2.9 and make it absolute. But is there any other way more efficient than that ? I'm using sklearn for normalization","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":57,"Q_Id":73698493,"Users Score":0,"Answer":"All you have to do is to multiply your data by -1 prior to normalization. With that your max value (2.9) becomes the new min value (-2.9) that gets normalized to 0 and your min (0) becomea the new max that gets normalized to 1.","Q_Score":0,"Tags":"python,pandas,scikit-learn,bigdata,data-mining","A_Id":73698830,"CreationDate":"2022-09-13T06:39:00.000","Title":"How to Reversed Normalization in MinMaxScaler","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I was importing a Swin-Transformer like this, which used to work the last week:\npip install tfswin\nfrom tfswin import SwinTransformerLarge224\nSince today I get the following error:\n\"----> 3 from keras.mixed_precision import global_policy\nImportError: cannot import name 'global_policy' from 'keras.mixed_precision' (\/usr\/local\/lib\/python3.7\/dist-packages\/keras\/mixed_precision\/init.py)\"\nI tried pip installing those packages as well as setting global policy with set_global_policy('float 32'). Nothing seems to work. Is it likely this is going to work again tomorrow ? 
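Returning briefly to the MinMaxScaler inversion described a little earlier: a minimal sketch of the sign-flip trick (the data values are illustrative):

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

x = np.array([[0.0], [1.5], [2.9]])
flipped = MinMaxScaler().fit_transform(-x)   # 2.9 -> 0.0 and 0.0 -> 1.0
print(flipped.ravel())
```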
Im a bit time pressured because it's a master thesis and this was the first Swin import that worked for me.\nTF version is 2.10.0","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":50,"Q_Id":73701970,"Users Score":1,"Answer":"Fixed it with !pip install keras==2.9.0.","Q_Score":0,"Tags":"python,tensorflow,keras","A_Id":73702473,"CreationDate":"2022-09-13T11:13:00.000","Title":"cannot import name 'global_policy' from 'keras.mixed_precision' since today","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am working on with 'ERA5-Land monthly averaged data from 1950 to present' for total precipitation and evaporation.\nI found out that the number of values in the dataset is 1,306,863,104.\n[1306863104 values with dtype=float32]\nHowever, the total dimension of the dataset is 86418013600 (time, latitude, longitude respectively) =5,601,830,400.\nHave you guys ever worked with such data?\nI want to calculate SPEI index (using climate_indices package), but the error code is like that they cannot reshape 1,306,863,104 data into (864,1801,3600). Because of this issue, I am stuck in..\nPlease help me.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":19,"Q_Id":73702964,"Users Score":0,"Answer":"I solved the problem by myself.\nIt was because of the dtype.\nDefault dtype of built-in function was int type.\nThe number of value exceeds the range that integer can represent.\n5,601,830,400 > 2^32\nTherefore, I set the dtype to float.\nThen, It worked well.","Q_Score":0,"Tags":"python,dataset,dimension","A_Id":75273204,"CreationDate":"2022-09-13T12:23:00.000","Title":"What if the number of values in dataset is different from total dimension of dataset in python?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to get the ProXAS_v2.43 running for the evaluation of QEXAFS data. I installed the necessary packages the manual provided, but when I try to start the program I get the following error: ImportError: cannot import name 'donaich' from 'lmfit.lineshapes' (C:\\Users\\sq0346\\Anaconda3\\lib\\site-packages\\lmfit\\lineshapes.py)\nAll packages required listed by conda search , should be present.\nMainly: Pandas, Scipy, Numpy-indexed, Xraylarch\nFull error:\n\nFile\n~\\Anaconda3\\envs\\py38\\Lib\\site-packages\\ProQEXAFS-GUI-master\\ProXAS-2.43\\ProXAS_v2.43.py:9\nin \nimport tkinter, time, os, psutil, subprocess, sys, shutil, ast, codecs, re, larch, gc, peakutils.peak, itertools\nFile ~\\Anaconda3\\lib\\site-packages\\larch_init_.py:47 in \nfrom . import builtins\nFile ~\\Anaconda3\\lib\\site-packages\\larch\\builtins.py:21 in \nfrom . 
import math\nFile ~\\Anaconda3\\lib\\site-packages\\larch\\math_init_.py:4 in\n\nfrom .utils import (linregress, realimag, as_ndarray,\nFile ~\\Anaconda3\\lib\\site-packages\\larch\\math\\utils.py:11 in\n\nfrom .lineshapes import gaussian, lorentzian, voigt\nFile ~\\Anaconda3\\lib\\site-packages\\larch\\math\\lineshapes.py:16 in\n\nfrom lmfit.lineshapes import (gaussian, lorentzian, voigt, pvoigt, moffat,\nImportError: cannot import name 'donaich' from 'lmfit.lineshapes'\n(C:\\Users\\sq0346\\Anaconda3\\lib\\site-packages\\lmfit\\lineshapes.py)\n\nUpdating XRaylrach to version 0.9.60 resolved it, but produced a new error:\n\nFile\n~\\Anaconda3\\Lib\\site-packages\\ProQEXAFS-GUI-master\\ProXAS-2.43\\ProXAS_v2.43.py:9\nin \nimport tkinter, time, os, psutil, subprocess, sys, shutil, ast, codecs, re, larch, gc, peakutils.peak, itertools\nFile ~\\Anaconda3\\lib\\site-packages\\larch_init_.py:48 in \nfrom .version import date, version, release_version\nImportError: cannot import name 'release_version' from\n'larch.version'\n(C:\\Users\\sq0346\\Anaconda3\\lib\\site-packages\\larch\\version.py)","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":40,"Q_Id":73705557,"Users Score":0,"Answer":"Update xraylarch to its latest version. That will fix the misspelled import.","Q_Score":0,"Tags":"python-3.x,anaconda,importerror","A_Id":73710282,"CreationDate":"2022-09-13T15:23:00.000","Title":"ProQEXAFS: ImportError: cannot import name 'donaich' from 'lmfit.lineshapes'","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I created a model with CatBoostRegressor. My dataset is 74274 rows \u00d7 24 columns.I am using encoding and min-max scaler.\nThe more I raise the n_estimators value in my model, the better the model score. What is the end of this? How do I decide where to stop? That way I guess it goes forever. Is being high good or bad? Where should the stopping point be?\nmodel = CatBoostRegressor(n_estimators=3000,verbose=False)\nmodel = CatBoostRegressor(n_estimators=10000,verbose=False)\nmodel = CatBoostRegressor(n_estimators=20000,verbose=False)\n.\n.\n.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":34,"Q_Id":73706923,"Users Score":0,"Answer":"Which set are you checking the score upon?\nIf that's a train set, the score will likely keep increasing because of overfitting.\nFor a validation set, the score should stop increasing at a certain point once the order of model complexity becomes comparable to the sample size.\nSklearn\/skopt cross validation routines such as GridSearchCV() should aid you in automating the best hyperparameters' selection.","Q_Score":0,"Tags":"python,scikit-learn,model,regression","A_Id":73713322,"CreationDate":"2022-09-13T17:15:00.000","Title":"Are high n_estimators good?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have two byte arrays - one from mic and one from soundcard of same duration (15 seconds). They have different formats (sample rate of mic = 44100, n_frames = 1363712; sample rate of stereo = 48000, n_frames=1484160). 
I had assumed resampling would help (16k desired) but they are still of differing lengths and can't simply be combined (added - am assuming adding tensors will result in mixed audio).\nI can't see a built in method for mixing audio, but perhaps I'm overlooking something.\nI see that sox_effects is included, but none of the effects listed seem relevant - although I know sox can mix audio.\nAm I barking up the wrong tree with torchaudio?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":17,"Q_Id":73718299,"Users Score":1,"Answer":"Mixing audio is simply taking sum or average of source waveforms, so TorchAudio does not provide a specialized method, but users are expected to do the operation with pure PyTorch Tensor operation.\nNow the problem you need to think is how to handle the different lengths, i.e. how to make them the same length.\nYou can cut the long one to align it to the short one, or zero-pad the short one to align it to the long one.","Q_Score":0,"Tags":"python,sox,torchaudio","A_Id":74141349,"CreationDate":"2022-09-14T14:08:00.000","Title":"Is it possible to mix two mono audio tensors of different length (number of frames) in torchaudio?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm using ArangoDB 3.9.2 for search task. The number of items in dataset is 100.000. When I pass the entire dataset as an input list to the engine - the execution time is around ~10 sec, which is pretty quick. But if I pass the dataset in small batches one by one - 100 items per batch, the execution time is rapidly growing. In this case, to process the full dataset takes about ~2 min. Could you explain please, why is it happening? The dataset is the same.\nI'm using python driver \"ArangoClient\" from python-arango lib ver 0.2.1\nPS: I had the similar problem with Neo4j, but the problem was solved using transactions committing with HTTP API. Does the ArangoDB have something similar?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":38,"Q_Id":73725440,"Users Score":0,"Answer":"Every time you make a call to a remote system (Neo4J or ArangoDB or any database) there is overhead in making the connection, sending the data, and then after executing your command, tearing down the connection.\nWhat you're doing is trying to find the 'sweet spot' for your implementation as to the most efficient batch size for the type of data you are sending, the complexity of your query, the performance of your hardware, etc.\nWhat I recommend doing is writing a test script that sends the data in varying batch sizes to help you determine the optimal settings for your use case.\nI have taken this approach with many systems that I've designed and the optimal batch sizes are unique to each implementation. 
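A rough sketch of the batch-size probing test script suggested in this answer, assuming the current python-arango client API and a local ArangoDB with a collection named "items" (host, credentials, and document contents are all placeholders):

```python
import time
from arango import ArangoClient

db = ArangoClient(hosts="http://localhost:8529").db("test", username="root", password="")
col = db.collection("items")
docs = [{"value": i} for i in range(100_000)]

for batch_size in (100, 1000, 2000, 5000, 10000):
    start = time.perf_counter()
    for i in range(0, len(docs), batch_size):
        col.insert_many(docs[i:i + batch_size])
    print(batch_size, round(time.perf_counter() - start, 2), "s")
    col.truncate()   # reset between runs so each trial starts from an empty collection
```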
It totally depends on what you are doing.\nSee what results you get for the overall load time if you use batch sizes of 100, 1000, 2000, 5000, and 10000.\nThis way you'll work out the best answer for you.","Q_Score":0,"Tags":"python,arangodb","A_Id":73739997,"CreationDate":"2022-09-15T03:52:00.000","Title":"Query execution time with small batches vs entire input set","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a dataframe which includes both numeric and non numeric values (It includes some special characters like -, space etc). I want to encode that Non Numeric value to run corr(). Non numeric Column name eg: 'Department', 'Location', etc. I used Label Encoder(). But it shows a TypeError;\nTypeError: '<' not supported between instances of 'int' and 'str'\nI used this code :\nle = preprocessing.LabelEncoder()\nX_train['Department'] = le.fit_transform(X_train['Department'])","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":24,"Q_Id":73725866,"Users Score":0,"Answer":"If the data is not ordinal, I wouldn't use LabelEncoder with corr(), as that will yield false insight.\npd.getdummies(X_train['Department']) has been adequate for using pd.DataFrame.corr() for me. It will create as many columns as there are classifications, and mark 1 for each row where the classification matches the column label, otherwise 0.\nThe other issue is possibly mixed datatypes in 'Department', which can be fixed with df['Department'] = df['Department'].astype('str'). It's probably most efficient to do this before your train-test split.","Q_Score":0,"Tags":"python,dataframe,random-forest,sklearn-pandas,encoder","A_Id":73732430,"CreationDate":"2022-09-15T05:06:00.000","Title":"Is there any way to encode Non Numeric values in a dataframe column","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is it correct to use 'accuracy' as a metric for an imbalanced data set after using oversampling methods such as SMOTE or we have to use other metrics such as AUROC or other presicion-recall related metrics?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":39,"Q_Id":73726929,"Users Score":0,"Answer":"You can use accuracy for the dataset after using SMOTE since now it shouldn't be imbalanced as far as I know. You should try the other metrics though for a more detailed evaluation (classification_report_imbalenced combines some metrics)","Q_Score":0,"Tags":"python,imbalanced-data","A_Id":73727121,"CreationDate":"2022-09-15T07:11:00.000","Title":"imbalabced data set score after smote","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"i have a MILP with ~3000 binaries, 300000 continuous variables and ~1MM constraints. I am trying to solve this on the VM how long could it potentially take on a 16 core 128 gig machine? also what are the general limits of creating problems using pulp that cplex solver can handle on such a machine? 
any insights would be appreciated","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":41,"Q_Id":73738378,"Users Score":0,"Answer":"It is impossible to answer either question sensibly. There are some problems with only a few thousand variables that are still unsolved 'hard' problems and others with millions of variables that can be solved quite easily. Solution time depends hugely on the structure and numerical details of your problem and many other non-trivial factors.","Q_Score":0,"Tags":"python,cplex,pulp","A_Id":73741039,"CreationDate":"2022-09-15T23:33:00.000","Title":"scaling MILP using pulp and cplex","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"i have a MILP with ~3000 binaries, 300000 continuous variables and ~1MM constraints. I am trying to solve this on the VM how long could it potentially take on a 16 core 128 gig machine? also what are the general limits of creating problems using pulp that cplex solver can handle on such a machine? any insights would be appreciated","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":41,"Q_Id":73738378,"Users Score":0,"Answer":"The solution time is not just a function of the number of variables and equations. Basically, you just have to try it out. No one can predict how much time is needed to solve your problem.","Q_Score":0,"Tags":"python,cplex,pulp","A_Id":73738627,"CreationDate":"2022-09-15T23:33:00.000","Title":"scaling MILP using pulp and cplex","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is there a difference between pandas sum() function and SQL SUM(...) function. I'm using tables with around 100k rows. My current test runs were not good. The runtime was always different with both being not predictable (problem might be my bad wifi...)\nIt will run on a server later, but maybe someone knows it already and I don't have to pay for my server now.\nThanks in advance!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":68,"Q_Id":73746573,"Users Score":0,"Answer":"It might be hard to get a clear answer without actual tests because it depends so much on what machines are used, what you are willing to pay for each part, ...\nHowever, aggregating the data in SQL gives you less network traffic, which can be valuable a lot of the time.","Q_Score":0,"Tags":"python,sql,pandas,sum","A_Id":73746660,"CreationDate":"2022-09-16T14:42:00.000","Title":"Pandas sum vs. SQL sum","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a network that's pretty much UNet. However, the model crashed when I feed in input size of 3x1x1 (channel =3, height =1, width=1) since the first max pooling (with kernel size =2 and stride =2) will reduce the dimension into 3x0x0.\nHow do I modify Unet model such that it can take my 3x1x1 input and handle arbitrary number of poolings? 
Any help is appreciated!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":43,"Q_Id":73798440,"Users Score":0,"Answer":"One must normalize sizes of images with preprocessing, see torchvision.transforms.functional.resize.","Q_Score":0,"Tags":"python,deep-learning,pytorch,conv-neural-network,unet-neural-network","A_Id":73799029,"CreationDate":"2022-09-21T09:29:00.000","Title":"Modify UNet to take an arbitrary input dimension?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am new to using reinforcement learning, I only read the first few chapters in R.Sutton (so I have a small theoretical background).\nI try to solve a combinatorial optimization problem which can be broken down to:\nI am looking for the optimal configuration of points (qubits) on a grid (quantum computer).\nI already have a cost function to qualify a configuration. I also have a reward function.\nRight now I am using simulated annealing, where I randomly move a qubit or swap two qubits.\nHowever, this ansatz is not working well for more than 30 qubits.\nThat's why I thought to use a policy, which tells me which qubit to move\/swap instead of doing it randomly.\nReading the gym documentation, I couldn't find what option I should use. I don't need Q-Learning or deep reinforcement learning as far as I understood since I only need to learn a policy?\nI would also be fine using Pytorch or whatever. With this little amount of information, what do you recommend to chose? More importantly, how can I set my own value function?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":49,"Q_Id":73802607,"Users Score":1,"Answer":"There are two categories of RL algorithms.\nOne category like Q-learning, Deep Q-learning and other ones learn a value function that for a state and an action predicts the estimated reward that you will get. Then, once you know for each state and each action what the reward is, your policy is simply to select for each state the action that provides the biggest reward. Thus, in the case of these algorithms, even if you learn a value function, the policy depends on this value function.\nThen, you have other deep rl algorithms where you learn a policy directly, like Reinforce, Actor Critic algorithms or other ones. You still learn a value function, but at the same time you also learn a policy with the help of the value function. 
The value function will help the system learn the policy during training, but during testing you do not use the value function anymore, but only the policy.\nThus, in the first case, you actually learn a value function and act greedy on this value function, and in the second case you learn a value function and a policy and then you use the policy to navigate in the environment.\nIn the end, both these algorithms should work for your problem, and if you say that you are new to RL, maybe you could try the Deep Q-learning from the gym documentation.","Q_Score":0,"Tags":"python,reinforcement-learning","A_Id":73805659,"CreationDate":"2022-09-21T14:34:00.000","Title":"How to set your own value function in Reinforecement learning?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have two dataframes, df1 and df2, and a fairly complicated set of logical statements that I have to run as a separate function to merge them. That function returns a pair of indices for the row in df1 and the row in df2, that looks right now like\nmatches = [[1,2,7,14], [1,2,7,14], [3,8]]\nsomething like that so that matches[idx] has a list of indices in df2 to merge with the row df1.loc[idx], so rows 0 and 1 in df1 would merge with rows 1,2,7,14 in df2, and on.\nHow would I merge df1 with df2 on these lists? The logic is prohibitive to try to run through pandas in terms of speed, so I have to start with these lists of matches between the dataframes.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":33,"Q_Id":73820409,"Users Score":0,"Answer":"Per @MYousefi, this was the solution:\n\nTry\npd.concat([df1, pd.Series(matches, name='match')], axis=1).explode('match').merge(df2, left_on='match', right_index=True)\nShould work for numerical indices.","Q_Score":0,"Tags":"python,pandas,dataframe","A_Id":73884805,"CreationDate":"2022-09-22T20:36:00.000","Title":"Merging pandas dataframes based on lists of paired indices","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am working on a project where I am given a round 200 readings\/featured columns and based on those reading there are some attributes about 60(columns) of them ranked from 0-5. now I have about 1000 rows from the featured readings and only 100 from the attributes. I am looking to use a model that I can train the data with the 100 attributes filled out and then predict on the remaining 900 attributes rows from the featured data given.\nAre there are any recommendations for the best approach or even better a similar project I can reference?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":35,"Q_Id":73841442,"Users Score":0,"Answer":"I was able to figure it out I just ran a loop to train on each dependant var separately. if you have a big dataset like 300,000 using random forest take about 2.5- 3 seconds per dependant var and then used the missing data as a df to find predictions and append. 
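A hedged sketch of the per-target training loop described in this answer (the synthetic frames stand in for the 200 readings and 60 attribute columns; only the first 100 rows carry labels):

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
features = pd.DataFrame(rng.normal(size=(1000, 5)))           # stand-in for the readings
targets = pd.DataFrame(rng.integers(0, 6, size=(100, 3)),     # stand-in for the attributes
                       index=features.index[:100])

preds = pd.DataFrame(index=features.index)
for col in targets.columns:
    model = RandomForestRegressor(n_estimators=100, random_state=0)
    model.fit(features.loc[targets.index], targets[col])      # train on the labelled rows
    preds[col] = model.predict(features)                      # predict the unlabelled rows too
print(preds.shape)
```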
if you need more explanation let me know","Q_Score":0,"Tags":"python,machine-learning,prediction","A_Id":73858904,"CreationDate":"2022-09-25T00:50:00.000","Title":"Predict Values based on readings Best approach","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"In a data frame, when the value of a given column is below 10, I need to change all the values till the end of the column to 5.\nso let's say these are the values of the column:\n\n\n\n\nA\n\n\n\n\n134\n\n\n413\n\n\n12\n\n\n81\n\n\n9\n\n\n483\n\n\n93\n\n\n30\n\n\n\n\nI would need it to become:\n\n\n\n\nA\n\n\n\n\n134\n\n\n413\n\n\n12\n\n\n81\n\n\n5\n\n\n5\n\n\n5\n\n\n5\n\n\n\n\nI apologize if I didn't explain this well, I'm new to coding. Thanks for all the help!","AnswerCount":4,"Available Count":1,"Score":-0.049958375,"is_accepted":false,"ViewCount":58,"Q_Id":73841981,"Users Score":-1,"Answer":"you coutd try:\nmyDataFrame = np.where(myDataFrame < 10, 5, myDataFrame)\nthis checks where the value is lower than ten. if it is it sets it to five, else it just sets the value to what it already was.","Q_Score":2,"Tags":"python,pandas,dataframe,numpy","A_Id":73842059,"CreationDate":"2022-09-25T04:05:00.000","Title":"How do I change the value in a column given a Boolean (in Python)?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am reading a parquet file with polars and would like to convert a column called datetime from type datetime[ms, America\/New_York] to datetime[ns,UTC].\nI can take the column out and do it in pandas, use tz_convert and add the column back to polars dataframe but would be nice if there was a way to do it in polars :)","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":180,"Q_Id":73843103,"Users Score":1,"Answer":"As of polars 0.14.14 there is:\npl.col(\"datetime\").dt.with_time_zone which sets a timezone without modifying the underlying timestamp.\npl.col(\"datetime\").dt.cast_time_zone which modifies the underlying timestamp by correcting from the current timezone the the given timezone.","Q_Score":1,"Tags":"python,datetime,timezone,python-polars","A_Id":73846791,"CreationDate":"2022-09-25T08:36:00.000","Title":"Is there a way to convert a pl.Series of type pl.Datetime from one timezone to another?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to find the root of a 2D nonlinear equation.\nI wrote: res = mpm.findroot(f=func1, x0=x01, tol=1.e-5, solver='MDNewton', J=JAKOB1)\nbut I get this message: ValueError: could not recognize solver\nAs per the documentation of findroot, MDNewton is an acceptable solver.\nWhere is my mistake?\nscipy.optimize.fsolve works fine, but I need the increased accuracy of mpmath.\nThanks a lot for any help!","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":53,"Q_Id":73846430,"Users Score":0,"Answer":"All I had to do was to write mdnewton, all small letters. 
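For illustration, a call using the lowercase solver name that this answer points to; the two-equation system func1, the starting point, and the precision setting are made-up placeholders:

import mpmath as mpm

mpm.mp.dps = 30                      # work at higher precision than float64

def func1(x, y):
    # placeholder 2D nonlinear system with a root at (1, 1)
    return [x**2 + y - 2, x*y - 1]

root = mpm.findroot(func1, (mpm.mpf('1.1'), mpm.mpf('0.9')), solver='mdnewton')
print(root)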
No idea, why but it worked.","Q_Score":1,"Tags":"python,mpmath","A_Id":73852074,"CreationDate":"2022-09-25T17:17:00.000","Title":"How to use solver in mpmath.findroot","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"The title pretty much says it all, I have a df with 40+ dimension which I'd like to process into the Umap algorithm in order to have a 2-d output.\nI would like to know if it is possible to weight the input columns differently for the purpose of studying the possible different Umap outcomes.\nThank you for your time\nP.S. I work in python","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":78,"Q_Id":73866396,"Users Score":1,"Answer":"Why not simply applying UMAP to A:\nA = X*W\nwhere X is your Nx40 matrix and W=diag(w) is a 40x40 diagonal matrix of weights w=[w1, w2,..., w40]?\nConsider using normalized weights wi, i=1,2,...,40 such that sum(w) == 1, to distribute normally your information.","Q_Score":0,"Tags":"python,dimensionality-reduction,runumap","A_Id":73898189,"CreationDate":"2022-09-27T10:51:00.000","Title":"How to perform Weighted dimensionality reduction with Umap","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"We currently have a working application that is ready to post and get data from an API which displays results of predicted disease (purpose of the ML model). Right now we don't have an exact idea on how to make the .ipynb communicate with the application provided we have large data for training the model.\nWe have 2 .ipynb files Model.py and Predict.py. One performing the required pre-processing, split (for train, test and validation), train and save the model. Predict uses the saved model and classifies the user input.\nThe main concern is how do we send the data from User's end-point(Flutter Application) to Predict.py and get the result data back to the user on the application.\nWe have considered the idea of hosting the model with prediction somewhere, but do not know on how to proceed further.\nThis is my first encounter with handling Deep Learning with Flutter Application. Any kind of information on proceeding forward will be very helpful.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":300,"Q_Id":73869152,"Users Score":0,"Answer":"First, .ipynb files are Jupyter Notebooks.\nSecond, do you have your API ready ? Is there a server dedicated to it or is there only the flutter app, that is ony the front of your application.\nIf you do not have an API, you have to create one (using whatever framework you want).\nTo facilitate things, create it in Python, and you can directly import your model as a Python module and use it.","Q_Score":1,"Tags":"python,flutter,rest,tensorflow,hosting","A_Id":73869441,"CreationDate":"2022-09-27T14:18:00.000","Title":"How do I integrate a Flutter application with ipynb\/colab notebook?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm not used to the TextVectorization Encoder Layer. I created my vocabulary manually before. 
I was wondering how one can save a Keras Model which uses the TextVectorization layer. When I tried to do it with simply model.save() and later models.load_model() I was prompted with this error:\nAssertionError: Found 1 Python objects that were not bound to checkpointed values, likely due to changes in the Python program. Showing 1 of 1 unmatched objects: []","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":76,"Q_Id":73887948,"Users Score":0,"Answer":"I've solved my problem by using another version of Keras. If someone faces a similar issue I can recommend to use a different (most of the time newer) version of Keras.\nAs I already said in my comment. I can't really recommend Keras and or Tensorflow right now. I've started a big NLP project some time ago (half a year). And since then Keras had multiple updates. Their documents changed like 2 times. And the old examples are not there anymore. The new way to create Text Tokens is quite nice but their example uses Masking_zero=True. Which basically means that It will pad the sequences for you and following layers will ignore the zero. That sounds nice but masking is not compatible with Cuda which makes training larger models a time consuming job because it's not hardware accelerated with the GPU. And most NLP models are quite large.","Q_Score":1,"Tags":"python,tensorflow,keras,text-classification","A_Id":74070916,"CreationDate":"2022-09-28T21:25:00.000","Title":"Save TextVectorization Model to load it later","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to calculate the correlation between 2 multi-index dataframes(a and b) in two ways:\n1)calculate the date-to-date correlation directly with a.corr(b) which returns a result X\n2)take the mean values for all dates and calculate the correlation\na.mean().corr(b.mean()) and I got a result Y.\nI made a scatter plot and in this way I needed both dataframes with the same index.\nI decided to calculate:\na.mean().corr(b.reindex_like(a).mean()) and I again achieved the value X.\nIt's strange for me because I expected to get 'Y'. I thought that the corr function reindex the dataframes one to another. If not, what is this value Y I am getting?\nThanks in advance!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":19,"Q_Id":73893910,"Users Score":0,"Answer":"I have found the answer - when I do the reindex, I cut most of the values. One of the dataframes consists of only one value per date, so the mean is equal to this value.","Q_Score":0,"Tags":"python,pandas,dataframe,correlation,multi-index","A_Id":73894681,"CreationDate":"2022-09-29T10:20:00.000","Title":"Weird correlation results","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a dataframe similar to this yet, but a lot bigger (3000x3000):\n\n\n\n\n\nA\nB\nC\nD\nE\n\n\n\n\nW\n3\n1\n8\n3\n4\n\n\nX\n2\n2\n9\n1\n1\n\n\nY\n5\n7\n1\n3\n7\n\n\nZ\n6\n8\n5\n8\n9\n\n\n\n\nwhere the [A,B,C,D,E] are the column names and [W,X,Y,Z] are the rows indexs.\nI want to compare every cell with its surrounding cells. 
If the cell has a greater value than its neighbor cell value, create a directed edge (using networkX package) from that cell to its smaller value neighbor cell. For example:\nexamining cell (X,B), we should add the following:\nG.add_edge((X,B), (W,B)) and G.add_edge((X,B), (Y,C)) and so on for every cell in the dataframe.\nCurrently I am doing it using two nested loops. However this takes hours to finish and a lot of resources (RAM).\nIs there any more efficient way to do it?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":83,"Q_Id":73899460,"Users Score":0,"Answer":"If you want to have edges in a networkx graph, then you will not be able to avoid the nested for loop.\nThe comparison is actually easy to optimize. You could make four copies of your matrix and shift each one step into each direction. You are then able to vectorize the comparison by a simple df > df_copy for every direction.\nNevertheless, when it comes to creating the edges in your graph, it is necessary for you to iterate over both axes.\nMy recommendation is to write the data preparation part in Cython. Also have a look at graph-tools which at its core is written in C++. With that much edges you will probably also get performance issues in networkx itself.","Q_Score":0,"Tags":"python,dataframe,networkx","A_Id":73901070,"CreationDate":"2022-09-29T17:20:00.000","Title":"Compare every cell in a dataframe with its surrounding cells","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a 8x10 2D array, A. I need another 2D array that stores only the row index of each element of A. How do I go about doing so? Appreciate your help!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":41,"Q_Id":73901631,"Users Score":0,"Answer":"I think I got it!\nC = np.array(np.where(A))[0].reshape(10, 8)","Q_Score":0,"Tags":"python,arrays,numpy","A_Id":73901838,"CreationDate":"2022-09-29T21:08:00.000","Title":"How to return the row index of each element from a 2D array?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a three-layer MLP classifier model.(Input layer - hidden layer - output layer).\nAnd I'd like to calculate the signed distance of data points from the decision boundary.\nIn the case of SVM or logistic regression, getting the signed distance is not that difficult.\nBut how about the MLP?\nI'd like to check \"how far is a new data from the decision boundary\", without the true label of the new data.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":73,"Q_Id":73912918,"Users Score":0,"Answer":"Basically, the output of the classifier model represents the signed distance from the decision boundary if you don't use sigmoid activation in the output layer. 
If you do use sigmoid activation in the output layer, you can just use inverse sigmoid to find the signed distance using the following formula.\nIf p is the output of the classifier model with sigmoid activation in the final layer,\nsigned_distance = -ln((1 \/ (p + 1e-8)) - 1)","Q_Score":0,"Tags":"python,math,deep-learning,linear-algebra,mlp","A_Id":73913154,"CreationDate":"2022-09-30T18:53:00.000","Title":"Can I get signed distance from decision boundary for MLP model?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"How to get all the duplicate values of one specific column in dataframe?\nI want to only check values on one column but it's getting output with table or data.\nI want to count the number the times each value is repeated","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":91,"Q_Id":73915741,"Users Score":0,"Answer":"Use df['column'].value_counts().","Q_Score":0,"Tags":"python,database,dataframe,duplicates,analysis","A_Id":73915752,"CreationDate":"2022-10-01T04:07:00.000","Title":"Regarding counting of duplicate values in data","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using a Keras model to calculate Shapley values so I need to make a lot of predictions. After some time, let's say 15 minutes, the script just stops running. The progress bar doesn't update and I can see that the GPU is not used as before. The script doesn't fail or something. If I come back after a few hours, switch on the screen and clicks on the prompt it starts working again.\nI'm running the python script in an anaconda prompt. For the predictions I just use model.predict_on_batch(). I have switched off the sleep mode of my desktop. Running on Windows 11, python 3.9 and keras 2.10.0\nWhat can I do so that the script keeps running?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":54,"Q_Id":73916727,"Users Score":0,"Answer":"Turns out automatic switching off the screen leads to the script to stop. 
Not really what I expected","Q_Score":0,"Tags":"python,keras","A_Id":73936451,"CreationDate":"2022-10-01T08:03:00.000","Title":"Keras script stops running temporarily","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have installed libaries with pip but still get errors while importing them.\nModuleNotFoundError Traceback (most recent call last)\nCell In [1], line 2\n1 import random\n----> 2 import keras\n3 import tensorflow as tf\n4 import pandas as pd\nModuleNotFoundError: No module named 'keras'","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":91,"Q_Id":73929128,"Users Score":0,"Answer":"This may be caused by the inconsistency between your environment and the pip installation path.\nWhen you use the conda environment, you can use the conda install keras command to install.","Q_Score":0,"Tags":"python,visual-studio-code,jupyter-notebook,anaconda","A_Id":73931402,"CreationDate":"2022-10-02T20:29:00.000","Title":"importing python libraries like keras, numpy etc","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I don't really like working in Jupyter enviroment and prefer raw Python. Is it posible to plot graphs directly in VSC via .py file? If I put my code to Jupyter cell in Jupyter file (.ipynp) the visualization works fine directly in VSC.\nI use plotly or matplotlib, but i am willing to learn other packages if needed.\nIs there for example some extencion to VSC that I could use? Or is it posible to plot to jpg\/html file that I could open with file explorer?\nEdit:\nI made a newbie mistake. I didn't use command plt.show() to plot the matplotlib graph.\nBut GNU plotter suggested by Charudatta is also great solution","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":106,"Q_Id":73987214,"Users Score":1,"Answer":"try GNU plotter works great for plotting","Q_Score":0,"Tags":"python,matplotlib,visual-studio-code,plotly,visualization","A_Id":73987231,"CreationDate":"2022-10-07T12:32:00.000","Title":"How to plot graph in python (.py) without Jupyter cells in Visual Studio Code?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I don't really like working in Jupyter enviroment and prefer raw Python. Is it posible to plot graphs directly in VSC via .py file? If I put my code to Jupyter cell in Jupyter file (.ipynp) the visualization works fine directly in VSC.\nI use plotly or matplotlib, but i am willing to learn other packages if needed.\nIs there for example some extencion to VSC that I could use? Or is it posible to plot to jpg\/html file that I could open with file explorer?\nEdit:\nI made a newbie mistake. 
I didn't use command plt.show() to plot the matplotlib graph.\nBut GNU plotter suggested by Charudatta is also great solution","AnswerCount":2,"Available Count":2,"Score":-0.0996679946,"is_accepted":false,"ViewCount":106,"Q_Id":73987214,"Users Score":-1,"Answer":"I don't think you can, you can't display images in a terminal","Q_Score":0,"Tags":"python,matplotlib,visual-studio-code,plotly,visualization","A_Id":73987239,"CreationDate":"2022-10-07T12:32:00.000","Title":"How to plot graph in python (.py) without Jupyter cells in Visual Studio Code?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"When I use partition = la.find_partition(G) and then len(partition) command on my graph, I get 402303 communities as the result.\nI only want to have 15 communities, not 402303. \nIs there a way to find specific size communities in leidenalg library?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":42,"Q_Id":73990415,"Users Score":1,"Answer":"find_partition() has a second required parameter, the partition_type. You must make an appropriate choice here, and also set the parameters of the partition type suitably.\nA possible reason that you got so many partitions is that you may have chosen CPMVertexPartition and left the resolution parameter at the default of 1. The larger this value the more communities will be returned. With CPMVertexPartition a good starting point for experimentation is the graph density.\nOr you can use RBConfigurationVertexPartition in which case a good starting point is 1.0 (which corresponds to maximizing the classic modularity).","Q_Score":1,"Tags":"python,graph,igraph","A_Id":73991723,"CreationDate":"2022-10-07T17:01:00.000","Title":"Specific numbers of communities in leidenalg library igraph","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm taking socket stream from android app and sending it to server where we need to take each frame and run 4 object detection models. And to run all four models at the same time I'm using threads Library of python. Problem is that when we call one thread (i.e one model) it takes 1sec for 1 iteration but when I call 4 threads it should take 1sec because of parallel processing but it is taking around 3sec.\nCan anyone help me with this whether I'm using threads in a wrong way r is their any way to check whether parallel is happening or not or any alternate for this work","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":33,"Q_Id":74012027,"Users Score":0,"Answer":"Threading doesn't really means parallel processing if you want to run models parallely use multi processing instead of multi threading. If your 1st thread has started running at 1.00s then other thread might start at 1.001s and not 1.00s. 
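A minimal sketch of the multiprocessing route suggested here; the four run_model_* functions are hypothetical stand-ins for the real detection models, and loading the heavy models inside each worker is left out for brevity:

from multiprocessing import Pool

def run_model_1(frame): ...
def run_model_2(frame): ...
def run_model_3(frame): ...
def run_model_4(frame): ...

def run_all(frame):
    # each model runs in its own process, so Python's GIL does not serialize the work
    with Pool(processes=4) as pool:
        async_results = [pool.apply_async(m, (frame,))
                         for m in (run_model_1, run_model_2, run_model_3, run_model_4)]
        return [r.get() for r in async_results]

if __name__ == "__main__":
    outputs = run_all(frame=None)    # replace None with a decoded frame from the socket stream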
If you want to start processes together at 1.00s use multi processing.","Q_Score":0,"Tags":"python,multithreading,parallel-processing,socket.io,object-detection","A_Id":74013409,"CreationDate":"2022-10-10T08:23:00.000","Title":"Is there any way to run multiple detection models (with tracking) at the same time?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is the sklearn.model_selection.train_test_split(shuffle=False) method appropriate for times series forocast? Or one should never use this method to perform the train and test set split when dealing with time series?\nMany people argue that train_test_split should not be used because it does the split randomly, which leads to data leakeage. However, if the setting of shuffle=False is precisely to define that there should be no data leak between the training and test sets, why not use train_test_split(shuffle=False) for time series?\nI know about the TimeSeriesSplit, but I would like to understand, still, if it is correct to use train_test_split(shuffle=False) for time series.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":363,"Q_Id":74025273,"Users Score":0,"Answer":"Here's a simple explanation of why it's not a good idea to shuffle data when working with time series:\nImagine that you have a bunch of data points that represent things that happened over time. Each data point has a time stamp, like a date, that tells you when it happened.\nNow, imagine that you mix up all of the data points and put them in a different order. This is like shuffling a deck of cards. It might seem like a good idea because it helps you check if your model is working well, but it's actually not a good idea because the order of the data points is important.\nThe data points are like a story, and the order they happened in is important to understanding the story. If you shuffle the data points, it's like telling the story out of order, and it doesn't make sense anymore. It's hard for your model to learn from the data if the data doesn't make sense.\nSo, when you're working with time series data, it's important to keep the data points in the order that they happened. This way, your model can learn from the data and make good predictions.","Q_Score":2,"Tags":"python,time-series","A_Id":75041182,"CreationDate":"2022-10-11T08:38:00.000","Title":"Is train_test_split(shuffle=False) appropriate for time series?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"My understanding of SLSQP is that, when it is iterating towards a solution, it simultaneously works to reduce constraint violations and minimize the given function. Since these are two side-by-side processes, I would expect there to be someway to set the tolerance for constraint violation and the tolerance for function minimization separately. Yet the SLSQP documentation doesn't indicate any way to set these two tolerances separately.\nFor example, in one minimization I may be ok with letting the constraints be violated to the order of 1e-2 while minimizing, yet in another minimization I would want the constraints to be violated with less than 1e-15 of precision. 
Is there a way to set this?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":42,"Q_Id":74034139,"Users Score":0,"Answer":"Found a solution. Instead of using an equality constraint, can change this to an inequality constraint where the constraint, instead of being set to 0, can be set to be less than desired tolerance.","Q_Score":0,"Tags":"python,scipy,scipy-optimize,scipy-optimize-minimize","A_Id":74034209,"CreationDate":"2022-10-11T21:09:00.000","Title":"How to set objective\/constraint violation tolerance in Scipy SLSQP?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"The simple definition of my problem is as follows,\nThere are 3 start nodes[S] and 7[V] nodes that are to be visited.\nThere is a distance matrix for all the nodes comprising of distance of all the nodes from each other.\nThere will be a vehicle travelling from each start node to visit different nodes and return to their start node respectively. I need to minimize the overall distance covered by all three vehicles together.\nCondition- all nodes that are to be visited[V] need to be visited once.\nEvery vehicle must return to their start node at the end of their trip.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":13,"Q_Id":74046490,"Users Score":0,"Answer":"'Greedy' algorithm:\n\nAssign nodes to the closest start node\nApply travelling salesman algorithm three times, once to each start node and its assigned nodes.\n\nOptimize:\n\nchange random V to another start node\napply 'greedy' and keep change if better\nrepeat until all changes exhausted","Q_Score":0,"Tags":"python,graph,routes","A_Id":74046556,"CreationDate":"2022-10-12T18:29:00.000","Title":"Routing algorithm to minimization total cost","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a variable of type .It is 2dim Array Python\nQ = [[0.5 0.5 ] [0.99876063 0.99876063]]\nMy question is how to extract 0.998 and 0.998 from the last row and save them into 2 different variables ?","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":31,"Q_Id":74063073,"Users Score":0,"Answer":"Try this,\na, b = Q[1][0], Q[1][1]","Q_Score":0,"Tags":"python,arrays,multidimensional-array","A_Id":74063094,"CreationDate":"2022-10-14T00:15:00.000","Title":"How to extract values separated by white space in Python arrays","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Given a numpy array 'a', a[0,0] and a[0][0] return the same result, so how do I choose them and what is the difference between them?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":29,"Q_Id":74079014,"Users Score":0,"Answer":"Assuming that a is 2D, a[0] will return the first row. You can then index into that by column, which is what you're doing with a[0][0]. Both options return the upper left element. 
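A quick check of the equivalence being described (array contents are arbitrary):

import numpy as np

a = np.arange(12).reshape(3, 4)
print(a[0, 0], a[0][0])      # both print 0
print(a[0, 0] == a[0][0])    # True
# a[0][0] first builds the row a[0] as an intermediate array and then indexes it,
# whereas a[0, 0] indexes once, which is why the single call is typically a bit faster.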
The single indexer call (aka, [0,0]) is likely more performant, if that's all you're doing, but it can be convenient to iterate through the rows and work with them individually.","Q_Score":0,"Tags":"python,numpy","A_Id":74079129,"CreationDate":"2022-10-15T11:27:00.000","Title":"Difference between a[0,0] and a[0][0] in numpy array?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I just started learning computer vision(newbie to neutral network)\nIf I want to make detect whether a human holding an umbrella or not with a pre-trained human model with an intel open vino model,\nFirst, I train umbrella images\nSecond, Convert TensorFlow model to the Intel Open Vino model\nThird, I am pretty lost here\nI am not sure how to run my model with the pre-trained model. For example, what I want at the end is that if a human is holding an umbrella, then (human holding an umbrella with a rectangular box) and\nif not, it says no human holding umbrella... in a box.","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":122,"Q_Id":74085158,"Users Score":1,"Answer":"You will structure your problem, first.\nThink about something like this:\nto read the image (or decode a frame from a video, capture a frame from a camera), and run an inference using the person-detection-model.\nIf you get at least one output (and checking the confidence-level, e.g. see whether it is 0.8 (80%) or higher), then you could run another inference using your trained umbrella-detection-model.\nIf you again get at least one output, checking confidence-level again - then you (only) know there is at least one person and at least one umbrella in the image.\nBut you cannot be sure if a person (at least one, could be many) is holding it in its hand - there could be many persons being detected and many umbrellas being detected.","Q_Score":1,"Tags":"python-3.x,tensorflow,computer-vision,openvino","A_Id":74120310,"CreationDate":"2022-10-16T06:55:00.000","Title":"Run Tensorflow model with Intel Open Vino Model Zoo","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am a Windows user. I run Jupyter Notebook from Anaconda Navigator on a newly created environment. 
Pandas were working fine until yesterday gives me an error for import pandas.\nImportError: cannot import name 'registry'\nThe version shown on Anaconda Navigator - 0.24.1\nThe version shown on Jupyter Notebook - 1.1.5 (!pip show pandas)\nPython version - 3.6.1(Anaconda3 64-bit)\nI have tested clearing kernel and restarting the Anaconda app and my PC.\nI did not do any changes to pandas.\nIm working on a VDI(virtual environment)","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":48,"Q_Id":74105647,"Users Score":1,"Answer":"You can try updating pandas using pip3 install --upgrade pandas","Q_Score":1,"Tags":"python,pandas,dataframe,jupyter-notebook,anaconda","A_Id":74106680,"CreationDate":"2022-10-18T04:15:00.000","Title":"ImportError: cannot import name 'registry' - import pandas error(Anaconda Navigator) Jupyter Notebook","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"In a data frame with 6 column a,b,c,d,e,f\ni want to sort a,b,c by a (ascending) and d,e,f by f (descending)","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":29,"Q_Id":74110918,"Users Score":0,"Answer":"I don't really know the easy way out but you could use this until someone point it out.\ndf_desc=self.orderbook_agreg_btc[[\"bids_qty\",\"bids_price\",\"exchange_name_bid\"]].sort_values([\"bids_price\"],ascending= False)\ndf_asc=self.orderbook_agreg_btc[[\"asks_qty\",\"asks_price\",\"exchange_name_ask\"]].sort_values([\"asks_price\"],ascending= True)\ndf = df_desc.append(df_asc)","Q_Score":0,"Tags":"python,pandas,dataframe","A_Id":74111093,"CreationDate":"2022-10-18T12:23:00.000","Title":"Sort part datframe decreasing and an another part increasing","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"A lot of functions in NetworkX are mentioned like this in the reference. What does this mean starting the square bracket with a comma.\ne.g. clustering(G[, nodes, weight]) node_redundancy(G[, nodes]) etc. without any first argument.\nIs this python syntax or networkx convention?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":20,"Q_Id":74111020,"Users Score":1,"Answer":"clustering(G[, nodes, weight]) simply means that the function clustering() takes one required argument G, and optionally two other arguments - nodes and weight. This is fairly standard notation in documentation, regardless of language.","Q_Score":0,"Tags":"python,networkx","A_Id":74111077,"CreationDate":"2022-10-18T12:30:00.000","Title":"Understanding syntax `clustering(G[, nodes, weight])`","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to run my first kubeflow pipeline in GCP. This basically ingest data from a BigQuery data frame then sends that to a dataset where the next component in the pipeline pulls the dataset and runs that data inside a PyMC model. 
But I'm getting errors, because the code does not recognize the dataset as a dataframe.\nI've tried: df = pd.DataFrame(input_data) but that errors out.\nHas anyone had success converting a GCP kubeflow dataset into a pandas dataframe?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":69,"Q_Id":74113959,"Users Score":0,"Answer":"I figured this out....I needed to put a .output on the input_data variable inside the DataFrame function.","Q_Score":0,"Tags":"python,pandas,kubernetes,google-cloud-platform","A_Id":74152872,"CreationDate":"2022-10-18T15:55:00.000","Title":"How to convert a GCP kubeflow dataset into a pandas dataframe","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have not been able to come up with a better title, it's a really simple issue though, I just don't know what to call it exactly.\nI have a database of horses simplified here:\n\n\n\n\nhorse_name\nstable_name\n\n\n\n\nHorse1\nStable1\n\n\n\n\nI am only interested in further analyzing records which feature stables that own many horses so I wanted to filter out the small stables (ones with less than 10 horses).\nWhat I've tried:\nAttempt 1:\nStep 1: df['Stable'].value_counts() > 10 -> gives me boolean values, I inteded to use this to only query the part of the database that satisfied this condition.\nStep 2: df[df['Stable'].value_counts() > 10] -> I wrap this in another df, hoping I get the result that I want, but I don't, I get a key error.\nAttempt 2:\nStep 1: df['Stable'].value_counts().sort_values(ascending=False).head(21) -> a little clunky, but by trial and error, I figured out there are 21 stables with more than 10 horses, and this query returned just those stables. All I needed now is to filter the database out using this result.\nStep 2: df[df['Stable'].value_counts().sort_values(ascending=False).head(21)] -> same issue, returns a key error.\nI also tried: df[df['Stable'] in df['Stable'].value_counts() > 10] again, that didn't work, and I don't think I'll sleep today.\nCan anyone explain why this is happening in a way that I can understand? And how should this be done instead?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":35,"Q_Id":74118560,"Users Score":0,"Answer":".value_counts() returns a series where it counts the unique values of the values in the column.\nTry this:\ndf[df['Stable'] > 10]","Q_Score":0,"Tags":"python,pandas","A_Id":74118586,"CreationDate":"2022-10-18T23:56:00.000","Title":"Selecting rows based on a '>' condition of the iteration of one of the columns","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Because of the huge amount of data, I want to train my data on GPU. My model is also based on Numpy. How can I modify the data and model to speed up my calculation?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":28,"Q_Id":74121033,"Users Score":0,"Answer":"Numpy does not have native support GPU. 
However, you can use library like Numba, which is utilized for big datasets and GPU usage.","Q_Score":0,"Tags":"python,machine-learning,gpu","A_Id":74121108,"CreationDate":"2022-10-19T06:55:00.000","Title":"How to train Numpy-based data on GPU?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm parsing every XBRL files from the SEC through EDGAR in order to retrieve some data (in json format on python).\nI have no problem parsing those files. My problem lies in the structure of the XBRL files provided by the SEC, i noticed that some companies use some tags and others dont. Some will use \"Revenues\" while others won't have any tags pertaining to revenues, i have the same issue with \"ShortTermBorrowings\"...\nIs there a list of XBRL tags from the SEC that are used throughout all companies ?\nThank's","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":102,"Q_Id":74122589,"Users Score":0,"Answer":"I would not rely solely on any list of tags the SEC or anyone else provides.\nI'd also check the source data for the tags actually being used.\nI'd also ask:\nHow can I create a list of all tags used throughout all SEC Edgar filings, for each \"filing type\" (10K, 10Q, Form 3, Form 4, Dorm 5, Form 13F, etc.)?","Q_Score":1,"Tags":"python,xbrl,edgar","A_Id":74413591,"CreationDate":"2022-10-19T09:00:00.000","Title":"Inconsistent tags between XBRL files from the SEC (EDGAR)","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm using XGBoost model to predict attacks, But I get 100% accuracy, I tried Random Forest as well, and same, I get 100%. How can I handle this ovrefitting problem ?\nThe steps I followed are:\nData cleaning\nData splitting\nFeature scaling\nFeature selection\nI even tried to change this order, but still get the same thing.\nDo you have any idea how to handle this? Thanks","AnswerCount":2,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":45,"Q_Id":74131790,"Users Score":2,"Answer":"Overfitting occurs when your model becomes too complex for its task. Simply said instead of learning patterns in your data, the model will be able to learn every case it is presented in the training set by heart.\nTo avoid this, you will have to choose a model that is less complex, in your case reduce the depth of your trees. Split your data in separate train, validation and test sets, then train different models of different complexities. When you evaluate these models, you will notice that its predictive capabilities on the training set will increase with complexity. Initially its capabilities on the validation set will follow until a point is reached where no more increase on the validation set can be achieved. 
On the contrary, it will likely decrease beyond this point, because you are starting to overfit.\nUse this data to find a suitable model, then finally evaluate the model you decided to use by using the test set you have kept aside until now.","Q_Score":0,"Tags":"python,security,classification,random-forest,xgboost","A_Id":74132053,"CreationDate":"2022-10-19T20:54:00.000","Title":"How can I handle overfitting of a model?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a pandas df as below where the scores of two players are tabulated. I want to calculate the sum of each game of each player where each game is scored consecutively. For example the first game played by A has a total score of 12, the second game played by A has a total score of 10, the first game played by B has a total score of 4 etc. How can I do this pandas way (vectorised or groupby etc) please?\ndf_players.groupby(\"Player\").sum(\"Score\")\ndoes only give overall total score and not for each game individually.\nMany thanks.\n\n\n\n\nPlayer\nScore\n\n\n\n\nA\n10\n\n\nA\n2\n\n\nB\n1\n\n\nB\n3\n\n\nA\n3\n\n\nA\n7\n\n\nB\n2","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":38,"Q_Id":74132365,"Users Score":0,"Answer":"You don't have Game in your DataFrame ... I assume the first two scores in your table are for Player A in Game #1 but I'm just guessing that since you said you expected the result to be 12. There is no way to figure this out from the data you provided. Add a column for Game to the DataFrame and then group by player and game ... the by= parameter of groupby() can take a list of columns to group by.","Q_Score":1,"Tags":"python,pandas","A_Id":74132439,"CreationDate":"2022-10-19T21:59:00.000","Title":"Summing each play of a player in a pandas dataframe","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to delete the contents of a column but would like to keep the column.\nFor instance I have a table like.\n\n\n\n\nNumbers1\nNumbers2\nNumbers3\nNumbers4\nNumbers5\n\n\n\n\nfive\nfour\nthree\ntwo\ntwo\n\n\nsix\nseven\neight\nnine\nten\n\n\nnine\nseven\nfour\ntwo\ntwo\n\n\nseven\nsix\nfive\nthree\none\n\n\n\n\nI would like to remove all the contents of column b but I want to keep column Numbers2\nthe desired output be like\n\n\n\n\nNumbers1\nNumbers2\nNumbers3\nNumbers4\nNumbers5\n\n\n\n\nfive\n\nthree\ntwo\ntwo\n\n\nsix\n\neight\nnine\nten\n\n\nnine\n\nfour\ntwo\ntwo\n\n\nseven\n\nfive\nthree\none\n\n\n\n\nkindly help\nThankyou","AnswerCount":4,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":33,"Q_Id":74141926,"Users Score":0,"Answer":"First, you could delete the column with df = df.drop('Numbers2', axis=1)\nSecond, replace the column with df['Numbers2'] = \"\"","Q_Score":1,"Tags":"python,pandas","A_Id":74142015,"CreationDate":"2022-10-20T14:50:00.000","Title":"how to remove the contents of a column without deleting the column in pandas","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a function of some 
parameters that will return a probability. How can I set scipy's minimize to terminate as soon as finds some parameters that will return a probability below a certain threshold (even if it is a \"large\" probability like 0.1 or so)?\nThanks a lot!","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":117,"Q_Id":74154972,"Users Score":0,"Answer":"You can use the callback argument to minimize. This is a function that is called at each iteration of the minimization. You can use this to check the value of the function and terminate the minimization if it is below the threshold.","Q_Score":1,"Tags":"python,scipy,scipy-optimize-minimize","A_Id":74154996,"CreationDate":"2022-10-21T14:12:00.000","Title":"Terminate scipy minimize as soon as function values is below threshold","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"ImportError: C extension: dlopen(mach-o file, but is an incompatible architecture (have (x86_64), need (arm64e))) not built. If you want to import pandas from the source directory, you may need to run 'python setup.py build_ext --force' to build the C extensions first.\nHow to roslove this problem\uff1f\nThank you for your help.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":27,"Q_Id":74168503,"Users Score":0,"Answer":"It has been solved, it is a problem with the virtual environment","Q_Score":0,"Tags":"python","A_Id":74178317,"CreationDate":"2022-10-23T03:37:00.000","Title":"An error occurred while importing the package\uff08m1 macbook pro)","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a pandas dataframe that looks something like this:\n\n\n\n\nID\nValue\n\n\n\n\n00001\nvalue 1\n\n\n00001\nvalue 2\n\n\n00002\nvalue 3\n\n\n00003\nvalue 4\n\n\n00004\nvalue 5\n\n\n00004\nvalue 6\n\n\n\n\nWhat I want to do is remove it so that I am left with this:\n\n\n\n\nID\nValue\n\n\n\n\n00001\nvalue 1\n\n\n00002\nvalue 3\n\n\n00003\nvalue 4\n\n\n00004\nvalue 5\n\n\n\n\nWhat's the best way to achieve this?","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":52,"Q_Id":74171463,"Users Score":0,"Answer":"df.drop_duplicates(subset='id', keep=\"first\")","Q_Score":0,"Tags":"python-3.x,pandas","A_Id":74171485,"CreationDate":"2022-10-23T13:19:00.000","Title":"How do I remove rows on duplicated ids?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am a bit confused about a behavior of my code.\nI have an image tensor with values in range [0, 255] to which I have added some Gaussian noise so that the resulting tensor has values in larger and now continuous range, e.g. ca. [-253.234, 581.613].\nThis tensor should then be visualized via plt.imshow(...).\nFor this and other purposes, I would like to cast the tensor to a uint type. 
However, I encountered some weird differences between the following approaches and I would like to identify the right approach:\n\nplt.imshow(image.astype(np.uint32))\nplt.imshow(image.astype(np.uint8))\nplt.imshow(np.clip(image.astype(np.uint32), 0, 255))\n\nApproach (1) leads to the expected \"Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers).\" warning. And I assume that this image is then clipped like np.clip to values in the range [0, 255].\nApproaches (2) and (3) lead to values in range [0, 255] so no exception is thrown but their mean values differ.\nApproaches (1) and (3) lead to the same visualization, while (2) leads to a different image (e.g. slightly darker and more noisy).\nI am currently clueless about why this happens. Is converting to uint32 and then clipping different from converting to uint8 in the first place?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":89,"Q_Id":74178516,"Users Score":1,"Answer":"if you have any negative values in the image, then casting to uint32 is or uint8 will create different results.","Q_Score":0,"Tags":"python,matplotlib,visualization,noise,unsigned-integer","A_Id":74179066,"CreationDate":"2022-10-24T08:38:00.000","Title":"Why is image.astype(uint8) different from np.clip(image.astype(uint32), 0, 255) in Python?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to install pytorch-geometric for a deep-learning project. Torch-sparse is throwing segmentation faults when I attempt to import it (see below). Initially I tried different versions of each required library, as I thought it might be a GPU issue, but I've since tried to simplify by installing cpu-only versions.\n\nPython 3.9.12 (main, Apr 5 2022, 06:56:58) \n[GCC 7.5.0] :: Anaconda, Inc. on linux\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n>>> import torch\n>>> import torch_scatter\n>>> import torch_cluster\n>>> import torch_sparse\nSegmentation fault (core dumped)\n\nAnd the same issue, presumably due to torch_sparse, when importing pytorch_geometric:\nPython 3.9.12 (main, Apr 5 2022, 06:56:58) \n[GCC 7.5.0] :: Anaconda, Inc. 
on linux\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n>>> import torch_geometric\nSegmentation fault (core dumped)\n\nI'm on an Ubuntu distribution:\nDistributor ID: Ubuntu\nDescription: Ubuntu 22.04.1 LTS\nRelease: 22.04\nCodename: jammy\n\nHere's my (lightweight for DL) conda installs:\n# Name Version Build Channel\n_libgcc_mutex 0.1 main \n_openmp_mutex 5.1 1_gnu \nblas 1.0 mkl \nbrotlipy 0.7.0 py310h7f8727e_1002 \nbzip2 1.0.8 h7b6447c_0 \nca-certificates 2022.07.19 h06a4308_0 \ncertifi 2022.9.24 py310h06a4308_0 \ncffi 1.15.1 py310h74dc2b5_0 \ncharset-normalizer 2.0.4 pyhd3eb1b0_0 \ncpuonly 2.0 0 pytorch\ncryptography 37.0.1 py310h9ce1e76_0 \nfftw 3.3.9 h27cfd23_1 \nidna 3.4 py310h06a4308_0 \nintel-openmp 2021.4.0 h06a4308_3561 \njinja2 3.0.3 pyhd3eb1b0_0 \njoblib 1.1.1 py310h06a4308_0 \nld_impl_linux-64 2.38 h1181459_1 \nlibffi 3.3 he6710b0_2 \nlibgcc-ng 11.2.0 h1234567_1 \nlibgfortran-ng 11.2.0 h00389a5_1 \nlibgfortran5 11.2.0 h1234567_1 \nlibgomp 11.2.0 h1234567_1 \nlibstdcxx-ng 11.2.0 h1234567_1 \nlibuuid 1.0.3 h7f8727e_2 \nmarkupsafe 2.1.1 py310h7f8727e_0 \nmkl 2021.4.0 h06a4308_640 \nmkl-service 2.4.0 py310h7f8727e_0 \nmkl_fft 1.3.1 py310hd6ae3a3_0 \nmkl_random 1.2.2 py310h00e6091_0 \nncurses 6.3 h5eee18b_3 \nnumpy 1.23.3 py310hd5efca6_0 \nnumpy-base 1.23.3 py310h8e6c178_0 \nopenssl 1.1.1q h7f8727e_0 \npip 22.2.2 py310h06a4308_0 \npycparser 2.21 pyhd3eb1b0_0 \npyg 2.1.0 py310_torch_1.12.0_cpu pyg\npyopenssl 22.0.0 pyhd3eb1b0_0 \npyparsing 3.0.9 py310h06a4308_0 \npysocks 1.7.1 py310h06a4308_0 \npython 3.10.6 haa1d7c7_0 \npytorch 1.12.1 py3.10_cpu_0 pytorch\npytorch-cluster 1.6.0 py310_torch_1.12.0_cpu pyg\npytorch-mutex 1.0 cpu pytorch\npytorch-scatter 2.0.9 py310_torch_1.12.0_cpu pyg\npytorch-sparse 0.6.15 py310_torch_1.12.0_cpu pyg\nreadline 8.1.2 h7f8727e_1 \nrequests 2.28.1 py310h06a4308_0 \nscikit-learn 1.1.2 py310h6a678d5_0 \nscipy 1.9.1 py310hd5efca6_0 \nsetuptools 63.4.1 py310h06a4308_0 \nsix 1.16.0 pyhd3eb1b0_1 \nsqlite 3.39.3 h5082296_0 \nthreadpoolctl 2.2.0 pyh0d69192_0 \ntk 8.6.12 h1ccaba5_0 \ntqdm 4.64.1 py310h06a4308_0 \ntyping_extensions 4.3.0 py310h06a4308_0 \ntzdata 2022e h04d1e81_0 \nurllib3 1.26.12 py310h06a4308_0 \nwheel 0.37.1 pyhd3eb1b0_0 \nxz 5.2.6 h5eee18b_0 \nzlib 1.2.13 h5eee18b_0 \n\nAny help would be greatly appreciated!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":693,"Q_Id":74180286,"Users Score":0,"Answer":"I've found a combination of packages that works for me - hopefully someone else will have this issue at some point and be able to reproduce the steps from me talking to myself here. The full process for getting stuff working was:\n\nFresh conda environment with forced Python=3.9 (conda create -n ENVNAME python=3.9)\nActivate that environment\nInstall basic python packages (conda install numpy pandas matplotlib scikit-learn)\nCheck CUDA version if working with a GPU (nvidia-smi in terminal prints these details for NVIDIA cards)\nInstall Pytorch using their suggested conda command (conda install pytorch torchvision torchaudio cudatoolkit=CUDA_VERSION -c pytorch -c conda-forge). This had to go through the env solving process on my machine.\nInstall pytorch geometric (or just torch sparse if that's all you need) with conda install pyg -c pyg. 
Again this had a solving process.\nCheck that torch_sparse imports without fault\n\nHere's the conda list for this working combination of packages:\n# Name Version Build Channel\n_libgcc_mutex 0.1 main \n_openmp_mutex 5.1 1_gnu \nblas 1.0 mkl \nbottleneck 1.3.5 py39h7deecbd_0 \nbrotli 1.0.9 h5eee18b_7 \nbrotli-bin 1.0.9 h5eee18b_7 \nbrotlipy 0.7.0 py39hb9d737c_1004 conda-forge\nbzip2 1.0.8 h7f98852_4 conda-forge\nca-certificates 2022.9.24 ha878542_0 conda-forge\ncertifi 2022.9.24 py39h06a4308_0 \ncffi 1.14.6 py39he32792d_0 conda-forge\ncharset-normalizer 2.1.1 pyhd8ed1ab_0 conda-forge\ncryptography 37.0.2 py39hd97740a_0 conda-forge\ncudatoolkit 11.6.0 hecad31d_10 conda-forge\ncycler 0.11.0 pyhd3eb1b0_0 \ndbus 1.13.18 hb2f20db_0 \nexpat 2.4.9 h6a678d5_0 \nffmpeg 4.3 hf484d3e_0 pytorch\nfftw 3.3.9 h27cfd23_1 \nfontconfig 2.13.1 h6c09931_0 \nfonttools 4.25.0 pyhd3eb1b0_0 \nfreetype 2.11.0 h70c0345_0 \ngiflib 5.2.1 h7b6447c_0 \nglib 2.69.1 h4ff587b_1 \ngmp 6.2.1 h58526e2_0 conda-forge\ngnutls 3.6.13 h85f3911_1 conda-forge\ngst-plugins-base 1.14.0 h8213a91_2 \ngstreamer 1.14.0 h28cd5cc_2 \nicu 58.2 he6710b0_3 \nidna 3.4 pyhd8ed1ab_0 conda-forge\nintel-openmp 2021.4.0 h06a4308_3561 \njinja2 3.0.3 pyhd3eb1b0_0 \njoblib 1.1.1 py39h06a4308_0 \njpeg 9e h7f8727e_0 \nkiwisolver 1.4.2 py39h295c915_0 \nkrb5 1.19.2 hac12032_0 \nlame 3.100 h7f98852_1001 conda-forge\nlcms2 2.12 h3be6417_0 \nld_impl_linux-64 2.38 h1181459_1 \nlerc 3.0 h295c915_0 \nlibbrotlicommon 1.0.9 h5eee18b_7 \nlibbrotlidec 1.0.9 h5eee18b_7 \nlibbrotlienc 1.0.9 h5eee18b_7 \nlibclang 10.0.1 default_hb85057a_2 \nlibdeflate 1.8 h7f8727e_5 \nlibedit 3.1.20210910 h7f8727e_0 \nlibevent 2.1.12 h8f2d780_0 \nlibffi 3.3 he6710b0_2 \nlibgcc-ng 11.2.0 h1234567_1 \nlibgfortran-ng 11.2.0 h00389a5_1 \nlibgfortran5 11.2.0 h1234567_1 \nlibgomp 11.2.0 h1234567_1 \nlibiconv 1.17 h166bdaf_0 conda-forge\nlibllvm10 10.0.1 hbcb73fb_5 \nlibpng 1.6.37 hbc83047_0 \nlibpq 12.9 h16c4e8d_3 \nlibstdcxx-ng 11.2.0 h1234567_1 \nlibtiff 4.4.0 hecacb30_0 \nlibuuid 1.0.3 h7f8727e_2 \nlibwebp 1.2.4 h11a3e52_0 \nlibwebp-base 1.2.4 h5eee18b_0 \nlibxcb 1.15 h7f8727e_0 \nlibxkbcommon 1.0.1 hfa300c1_0 \nlibxml2 2.9.14 h74e7548_0 \nlibxslt 1.1.35 h4e12654_0 \nlz4-c 1.9.3 h295c915_1 \nmarkupsafe 2.1.1 py39h7f8727e_0 \nmatplotlib 3.5.2 py39h06a4308_0 \nmatplotlib-base 3.5.2 py39hf590b9c_0 \nmkl 2021.4.0 h06a4308_640 \nmkl-service 2.4.0 py39h7f8727e_0 \nmkl_fft 1.3.1 py39hd3c417c_0 \nmkl_random 1.2.2 py39h51133e4_0 \nmunkres 1.1.4 py_0 \nncurses 6.3 h5eee18b_3 \nnettle 3.6 he412f7d_0 conda-forge\nnspr 4.33 h295c915_0 \nnss 3.74 h0370c37_0 \nnumexpr 2.8.3 py39h807cd23_0 \nnumpy 1.23.3 py39h14f4228_0 \nnumpy-base 1.23.3 py39h31eccc5_0 \nopenh264 2.1.1 h780b84a_0 conda-forge\nopenssl 1.1.1q h7f8727e_0 \npackaging 21.3 pyhd3eb1b0_0 \npandas 1.4.4 py39h6a678d5_0 \npcre 8.45 h295c915_0 \npillow 9.2.0 py39hace64e9_1 \npip 22.2.2 py39h06a4308_0 \nply 3.11 py39h06a4308_0 \npycparser 2.21 pyhd8ed1ab_0 conda-forge\npyg 2.1.0 py39_torch_1.12.0_cu116 pyg\npyopenssl 22.0.0 pyhd8ed1ab_1 conda-forge\npyparsing 3.0.9 py39h06a4308_0 \npyqt 5.15.7 py39h6a678d5_1 \npyqt5-sip 12.11.0 py39h6a678d5_1 \npysocks 1.7.1 pyha2e5f31_6 conda-forge\npython 3.9.13 haa1d7c7_2 \npython-dateutil 2.8.2 pyhd3eb1b0_0 \npython_abi 3.9 2_cp39 conda-forge\npytorch 1.12.1 py3.9_cuda11.6_cudnn8.3.2_0 pytorch\npytorch-cluster 1.6.0 py39_torch_1.12.0_cu116 pyg\npytorch-mutex 1.0 cuda pytorch\npytorch-scatter 2.0.9 py39_torch_1.12.0_cu116 pyg\npytorch-sparse 0.6.15 py39_torch_1.12.0_cu116 pyg\npytz 2022.1 py39h06a4308_0 
\nqt-main 5.15.2 h327a75a_7 \nqt-webengine 5.15.9 hd2b0992_4 \nqtwebkit 5.212 h4eab89a_4 \nreadline 8.2 h5eee18b_0 \nrequests 2.28.1 pyhd8ed1ab_1 conda-forge\nscikit-learn 1.1.2 py39h6a678d5_0 \nscipy 1.9.1 py39h14f4228_0 \nsetuptools 63.4.1 py39h06a4308_0 \nsip 6.6.2 py39h6a678d5_0 \nsix 1.16.0 pyhd3eb1b0_1 \nsqlite 3.39.3 h5082296_0 \nthreadpoolctl 2.2.0 pyh0d69192_0 \ntk 8.6.12 h1ccaba5_0 \ntoml 0.10.2 pyhd3eb1b0_0 \ntorchaudio 0.12.1 py39_cu116 pytorch\ntorchvision 0.13.1 py39_cu116 pytorch\ntornado 6.2 py39h5eee18b_0 \ntqdm 4.64.1 py39h06a4308_0 \ntyping_extensions 4.4.0 pyha770c72_0 conda-forge\ntzdata 2022e h04d1e81_0 \nurllib3 1.26.11 pyhd8ed1ab_0 conda-forge\nwheel 0.37.1 pyhd3eb1b0_0 \nxz 5.2.6 h5eee18b_0 \nzlib 1.2.13 h5eee18b_0 \nzstd 1.5.2 ha4553b6_0","Q_Score":2,"Tags":"python,ubuntu,pytorch,conda,pytorch-geometric","A_Id":74192025,"CreationDate":"2022-10-24T11:15:00.000","Title":"Segmentation fault when importing torch-sparse (installing pytorch-geometric)","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a large dataframe which combines data from multiple excel (xlsx) files. The problem is every column with decimal values is seperated with a dot.I need to replace every dot with a comma. I have already tried using the replace function, but the issue some columns also contains string values. So my question is, how do I replace dot with comma on each column in my dataframe and also keep the string values?\nExample:\nColumn a: \n14.01 -> 14,01 \nNo data (keep)","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":42,"Q_Id":74181374,"Users Score":0,"Answer":"This is probably your default language setting for Office tool is US or UK where . is used a decimal denoter where as in languages like German it is a ,. If you are using Libre Office, you can go to Tools -> Language -> For all text -> More and change the default decimal separator key. If you are using Microsoft excel, there should be something similar. Afterwards save the excel and then open it back in pandas. Voila.","Q_Score":0,"Tags":"python,pandas","A_Id":74181711,"CreationDate":"2022-10-24T12:52:00.000","Title":"Replace dot with comma using pandas on dataframe","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a 2d numpy array psi with shape (nx,ny). I want to create a new array phi of the same shape where for each element phi[i][j] I need to evaluate an expression containing psi[i][j] and neighboring elements psi[i-1][j],psi[i+1][j],psi[i][j+1] and psi[i][j-1],except for edge cases where any of these neighbors are not in the bounds of psi, treat that element as 0 in the expression.\nI can implement this using nested for loops and checking for boundary conditions, but I would like to perform this operation as time efficient as possible. I've tried by assigning\nphi[1:-1,1:-1] = f(psi[1:-1,1:-1], psi[0:-2,1:-1], psi[2:,1:-1], psi[1:-1,0:-2], psi[1:-1,2:])\nbut this does not cover edge cases which get messy, so if there were some conditional way to only reference when within bounds else just be 0 it might work. 
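As a quick sanity check for the conda-based pytorch-geometric fix described above, a minimal sketch (assuming the environment built with those steps) that confirms the imports which previously segfaulted and reports the installed versions:

    import torch
    import torch_sparse          # the import that used to segfault
    import torch_geometric

    print(torch.__version__)            # e.g. 1.12.1
    print(torch.cuda.is_available())    # True if the CUDA build sees the GPU
    print(torch_geometric.__version__)  # e.g. 2.1.0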
Or, of course, if there is an even more time efficient way that would be better.","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":33,"Q_Id":74186157,"Users Score":0,"Answer":"This problem smells like finite differences. Your best bet is to write a (fast, possibly recursive) loop for the inner points, and then loop over the boundary points separately, imposing the desired boundary conditions there. Obviously, the other way around also works: start by assigning boundary points, then loop over inner points.\nThat said, if you are having issues with speed (probably because your grid is gigantic), you may want to do a few optimizations, as 2d arrays in python are S L O W:\n\ntry reversing the order of looping: in python (NumPy, in case you are using that), 2d arrays are traversed by rows first. You may want to experiment with that at least.\n\ntry allocating your 2d thing as a big 1d chunk where its unique index is n = i + nx * j, with i,j your original 2d indices. Again, experiment with running the new index n along rows vs columns first.\n\n\nThese two suggestions combined should give you a massive speedup.","Q_Score":0,"Tags":"python,arrays,numpy,indexing,conditional-statements","A_Id":74186365,"CreationDate":"2022-10-24T19:56:00.000","Title":"Indexing numpy arrays only if within bounds","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a 2d numpy array psi with shape (nx,ny). I want to create a new array phi of the same shape where for each element phi[i][j] I need to evaluate an expression containing psi[i][j] and neighboring elements psi[i-1][j],psi[i+1][j],psi[i][j+1] and psi[i][j-1],except for edge cases where any of these neighbors are not in the bounds of psi, treat that element as 0 in the expression.\nI can implement this using nested for loops and checking for boundary conditions, but I would like to perform this operation as time efficient as possible. I've tried by assigning\nphi[1:-1,1:-1] = f(psi[1:-1,1:-1], psi[0:-2,1:-1], psi[2:,1:-1], psi[1:-1,0:-2], psi[1:-1,2:])\nbut this does not cover edge cases which get messy, so if there were some conditional way to only reference when within bounds else just be 0 it might work. Or, of course, if there is an even more time efficient way that would be better.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":33,"Q_Id":74186157,"Users Score":0,"Answer":"I've realized that using numpy array operations is definitely the way to go to make the code faster. Pairing this with np.pad to add zeros to the edges of a matrix makes this fairly simple.","Q_Score":0,"Tags":"python,arrays,numpy,indexing,conditional-statements","A_Id":74330382,"CreationDate":"2022-10-24T19:56:00.000","Title":"Indexing numpy arrays only if within bounds","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using gym==0.26.0 and I am trying to make my environment render only on each Nth step. So that my nn is learning fast but that I can also see some of the progress as the image and not just rewards in my terminal. 
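For the neighbour-indexing question above, a minimal sketch of the np.pad idea mentioned in the second answer; f here is only a stand-in for the actual expression:

    import numpy as np

    def apply_stencil(psi, f):
        p = np.pad(psi, 1)             # zero-pad so out-of-bounds neighbours read as 0
        return f(p[1:-1, 1:-1],        # psi[i, j]
                 p[:-2, 1:-1],         # psi[i-1, j]
                 p[2:, 1:-1],          # psi[i+1, j]
                 p[1:-1, :-2],         # psi[i, j-1]
                 p[1:-1, 2:])          # psi[i, j+1]

    psi = np.arange(12.0).reshape(3, 4)
    phi = apply_stencil(psi, lambda c, u, d, l, r: c + u + d + l + r)  # example expression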
With the newer versions of gym, it seems like I need to specify the render_mode when creating but then it uses just this render mode for all renders.\nHow to make the env.render() render it as \"human\" only for each Nth episode? (it seems like you order the one and only render_mode in env.make)","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":382,"Q_Id":74191935,"Users Score":0,"Answer":"My solution is to create a new 'human' env to be used on Nth step.","Q_Score":1,"Tags":"python,machine-learning,deep-learning,reinforcement-learning,openai-gym","A_Id":75612037,"CreationDate":"2022-10-25T09:32:00.000","Title":"gym env.render() on Nth step","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need to find the minimum distance from a point (X,Y) to a curve defined by four coefficients C0, C1, C2, C3 like y = C0 + C1X + C2X^2 + C3X^3\nI have used a numerical approach using np.linspace and np.polyval to generate discrete (X,Y) for the curve and then the shapely 's Point, MultiPoint and nearest_points to find the nearest points, and finally np.linalg.norm to find the distance.\nThis is a numerical approach by discretizing the curve.\nMy question is how can I find the distance by analytical methods and code it?","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":130,"Q_Id":74207528,"Users Score":1,"Answer":"You need to differentiate (x - X)\u00b2 + (C0 + C1 x + C2 x\u00b2 + C3 x\u00b3 - Y)\u00b2 and find the roots. But this is a quintic polynomial (fifth degree) with general coefficients so the Abel-Ruffini theorem fully applies, meaning that there is no solution in radicals.\nThere is a known solution anyway, by reducing the equation (via a lengthy substitution process) to the form x^5 - x + t = 0 known as the Bring\u2013Jerrard normal form, and getting the solutions (called ultraradicals) by means of the elliptic functions of Hermite or evaluation of the roots by Taylor.\n\nPersonal note:\nThis approach is virtually foolish, as there exist ready-made numerical polynomial root-finders, and the ultraradical function is uneasy to evaluate.\nAnyway, looking at the plot of x^5 - x, one can see that it is intersected once or three times by and horizontal, and finding an interval with a change of sign is easy. With that, you can obtain an accurate root by dichotomy (and far from the extrema, Newton will easily converge).\nAfter having found this root, you can deflate the polynomial to a quartic, for which explicit formulas by radicals are known.","Q_Score":0,"Tags":"python,geometry,shapely","A_Id":74209808,"CreationDate":"2022-10-26T12:10:00.000","Title":"Finding the minimum distance from a point to a curve","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a dataframe containing 1440 rows and 8 columns. I have a column names 'number' taking values from 0 to 12. 
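For the point-to-cubic-curve question above, a sketch of the numerical route suggested in the answer: differentiate the squared distance, find the real roots of the resulting quintic with a polynomial root-finder, and keep the closest critical point:

    import numpy as np
    from numpy.polynomial import Polynomial

    def min_distance_to_cubic(X, Y, C0, C1, C2, C3):
        # squared distance d2(x) = (x - X)^2 + (C0 + C1 x + C2 x^2 + C3 x^3 - Y)^2
        d2 = Polynomial([-X, 1])**2 + Polynomial([C0 - Y, C1, C2, C3])**2
        roots = d2.deriv().roots()                  # roots of the quintic derivative
        xs = roots.real[np.abs(roots.imag) < 1e-9]  # keep the real critical points
        return float(np.sqrt(d2(xs).min()))

    print(min_distance_to_cubic(2.0, 1.0, 0.0, 1.0, 0.0, 0.0))  # point (2,1) to y = x -> ~0.707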
And I have another column named \"correct\" which takes value of True or False.\nWhat I would like to do is a line chart with on my x axis'number' which as I said are ranging from 0 to 12 and on my y axis the number of \"correct\" taking the value True.\nWhat I want is to regroup all of the cases when the 'number' is 0 and number of true, for example.\nI tried to create a small example but I think I'm not precise enough.\nI tried a lot of things with grouby and other things","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":35,"Q_Id":74227430,"Users Score":0,"Answer":"Thanks a lot, I have been able to draw what I wanted but just for my graph to be a bit more precise. How could I add my axis like on x I would like 'number' and on the top left of my graph something saying what's being presented like the name 'correct' because I'm plotting the number of 'correct' ?","Q_Score":0,"Tags":"python,matplotlib","A_Id":74253640,"CreationDate":"2022-10-27T19:36:00.000","Title":"I want to draw a line chart with a sort of condition","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am new to Fasttext I am using python-wheel(v0.9.2) module with python3.10\nI trained a text classification model\nwhen I run a\nmodel.test(\"datasets\\dataset1.txt\")\nI except an output like:\n(nbr of samples, precision, recall)\nI get\n(1, 1.0, 1.1146408069999442e-05)","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":19,"Q_Id":74229174,"Users Score":0,"Answer":"The most simple explanation may be: (1, 1.0, 1.1146408069999442e-05) are the accurately-reported (sample_count, precision, recall) for your model & file.\nWhat makes you sure it's not? Are you sure training succeeded on properly-formatted data?\nHow did you train the model, on what data? What progress\/success was reported in training?\nWhat's inside your dataset1.txt file - such as type & quantity of data? Are you sure it's formatted correctly for the test() operation \u2013 with the proper delimiters of fields, tokens, and lines?\nCan you show a few representative lines of the training & test data?\n(If you need to add such details, it'll be best to edit your question, so there's plen ty of space\/formatting-options to make them clear.)","Q_Score":0,"Tags":"python,python-3.x,fasttext","A_Id":74247089,"CreationDate":"2022-10-27T23:13:00.000","Title":"model.test output is not (nbr of samples, precision, recall)","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have multiple variables in my data frame with negative and positive values. Thus I'd like to normalize\/scale the variables between -1, 1. I didnt find a working solution. Any suggestions? Thanks a lot!\nI scaled other variables with the sklearn MinMaxScaler 0, 1. 
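For the line-chart question above, a small sketch (a toy frame stands in for the 1440-row one) that counts the True values of 'correct' per 'number' and labels the x axis and legend as asked:

    import pandas as pd
    import matplotlib.pyplot as plt

    df = pd.DataFrame({"number": [0, 0, 1, 1, 2, 2],
                       "correct": [True, False, True, True, False, True]})

    counts = df.groupby("number")["correct"].sum()   # sum of booleans = count of True
    ax = counts.plot(marker="o")
    ax.set_xlabel("number")
    ax.set_ylabel("count of correct == True")
    ax.legend(["correct"], loc="upper left")
    plt.show()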
Didn't find an additional -1, 1 solution there.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":74,"Q_Id":74232723,"Users Score":0,"Answer":"Min max scaler uses a mathematical formula that converts values between 0,1 not -1,1\nif you want values between -1,1 try sklean's StandardScaler.\nHope this helps.","Q_Score":0,"Tags":"python,pandas,normalization","A_Id":74233307,"CreationDate":"2022-10-28T08:43:00.000","Title":"Data frame normalization center = 0 solution (-1, 1)?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"how to use pretrained model on 512x512 images on real images with different sizes like Yolo object detection on real image ?\nCNNs require fixed image sizes, so how do you manage to use the models on real images larger than the inputs?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":38,"Q_Id":74245656,"Users Score":0,"Answer":"If it is just about the image size, you could resize your image to have same size as model input. When you receive the output, assuming that you have bounding boxes or locations etc, you can always rescale them back to original image size. Many ML\/DL frameworks provide this functionality","Q_Score":0,"Tags":"python,pytorch,conv-neural-network","A_Id":74245680,"CreationDate":"2022-10-29T13:22:00.000","Title":"how to use pretrained model CNN with real images various size?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I know I can change the data type of columns by passing the column names, for example\ndf = df.astype({'col1': 'object', 'col2': 'int'})\nbut what if I want to change multiple columns by a given range? My df contains 50+ columns so I don't want to change them all by name. I want to set columns 17 to 41 as ints and I've tried a few variations, for example:\ndf = df.astype([17:41], 'int64')\nbut can't get the syntax to work. Any ideas?","AnswerCount":2,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":45,"Q_Id":74246826,"Users Score":2,"Answer":"You can access columns by index (position).\ndf.iloc[:,16:42] = df.iloc[:,16:42].apply(pd.to_numeric)","Q_Score":1,"Tags":"python,pandas","A_Id":74246846,"CreationDate":"2022-10-29T16:11:00.000","Title":"Changing datatype of multiple columns by range","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am finetuning a transformer model and during the training cycle, evaluating it at each epoch. The best model is selected based on the highest evaluation accuracy among all epochs. Once the training cycle is completed and the best model is dumped to the disk, I try to regenerate that validation accuracy. I am unable to regenerate the exact validation accuracy reported by the training phase. I am getting a 3% to 4% drop in accuracy on the same evaluation data.\n(For regeneration, I am calling the same evaluation function and passing it model and dataset. 
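On the [-1, 1] scaling question above: StandardScaler standardizes to zero mean and unit variance but does not bound values to [-1, 1]; MinMaxScaler itself accepts a feature_range argument, so a sketch of that route (with toy data) is:

    import pandas as pd
    from sklearn.preprocessing import MinMaxScaler

    df = pd.DataFrame({"a": [-5.0, 0.0, 3.0], "b": [10.0, 20.0, 40.0]})  # toy data
    scaler = MinMaxScaler(feature_range=(-1, 1))      # maps each column's min/max to -1/1
    df[["a", "b"]] = scaler.fit_transform(df[["a", "b"]])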
Nothing else changed for evaluation accuracy regeneration)","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":137,"Q_Id":74259529,"Users Score":0,"Answer":"Are you sure you are saving a checkpoint at each time you evaluate during training? At the end of training, when loading the best model, you will load the best saved checkpoint. If there is no checkpoint for the best model version, you will end up loading some other version, which might explain the drop in accuracy.","Q_Score":3,"Tags":"python,pytorch,huggingface-transformers,transformer-model","A_Id":75743205,"CreationDate":"2022-10-31T06:36:00.000","Title":"Why accuracy of finetune transformer model is less when evaluated after loading from disk, than during training?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using FastAPI for Machine Learning model deployment in two cases based on the nature of the data we feed as inputs (features) which it an array of json. Thus, if inputs are matching method1 we execute the according model to it otherwise we apply method2 and execute the trained model for this case.\nHow can I achieve this process using FastAPI ? (process of verifieng the input data and apply the matching model for that data)","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":44,"Q_Id":74265837,"Users Score":1,"Answer":"You can create a pydantic scheme as a dependency, that includes all the possible fields (Optional) for both data types and check incoming data by special field into it. Also you can use different routes. Could you show JSON samples?","Q_Score":0,"Tags":"python,api,machine-learning,fastapi,pydantic","A_Id":74273832,"CreationDate":"2022-10-31T15:48:00.000","Title":"FastAPI inputs check format and execute according method for that given format","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am working on an inventory simulation model. I have this global list variable called current_batches which consists of objects of a custom class Batch and is used to keep track of the current inventory. While running the simulation, a number of functions use this current_batches variable and modify it following certain events in the simulation.\nInside one function, I need to copy this variable and do some operations with the objects of the obtained copy, without modifying the objects of the original list. I used copy.deepcopy() and it works, but it is very slow and I will be running the simulation for many products with many iterations. 
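For the FastAPI routing question above, a rough sketch of the suggested single pydantic schema with optional fields; the field names and the dispatch rule are illustrative, not taken from the actual data:

    from typing import List, Optional
    from fastapi import FastAPI
    from pydantic import BaseModel

    app = FastAPI()

    class Features(BaseModel):
        field_a: Optional[float] = None   # only present in the first input format (hypothetical)
        field_b: Optional[float] = None   # only present in the second input format (hypothetical)

    @app.post("/predict")
    def predict(items: List[Features]):
        if all(item.field_a is not None for item in items):
            return {"model": "method1"}   # run the model trained for the first format
        return {"model": "method2"}       # otherwise fall back to the second model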
Therefore, I was wondering if there is a way to copy this (global) list variable without using copy.deepcopy().\nI briefly looked at the pickle module, but it was unclear to me whether this module was useful in my situation.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":70,"Q_Id":74275236,"Users Score":0,"Answer":"If you need a full deepcopy which takes time because the copied object is large I see no way to avoid it.\nI suggest you speed things up creating a current_batches_update object where you save the modifications only and adjust the logic of the code to get values not present in current_batches_update object from the current_batches one. This way you can avoid making a full copy keeping the ability to get all the values.\nAnother option would be to equip current_batches with the ability to store two versions for some of its values, so you can store the special modified ones as a second version in current_batches and allow version (=1 or =2)` as parameter in the function for retrieving the values designed to deliver version 1 if there is no requested version 2.","Q_Score":0,"Tags":"python,performance,class,object,copy","A_Id":74275430,"CreationDate":"2022-11-01T11:30:00.000","Title":"Copying a list of custom class objects without using .deepcopy()","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Below is all of the error, I think it must be some config or version wrong\n2022-11-01 19:43:58 [scrapy.crawler] INFO: Overridden settings:\n{'BOT_NAME': 'spider2022', 'NEWSPIDER_MODULE': 'spider2022.spiders', 'REQUEST_FINGERPRINTER_IMPLEMENTATION': '2.7', 'ROBOTSTXT_OBEY': True, 'SPIDER_MODULES': ['spider2022.spiders'], 'TWISTED_REACTOR': 'twisted.internet.asyncioreactor.AsyncioSelectorReactor', 'USER_AGENT': 'Mozilla\/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit\/537.36 ' '(KHTML, like Gecko) Chrome\/70.0.3538.102 Safari\/537.36'}\npackages\/scrapy\/downloadermiddlewares\/retry.py\", line 25, in \nfrom twisted.web.client import ResponseFailed\nFile \"\/Users\/zhangyiran\/opt\/anaconda3\/lib\/python3.9\/site-packages\/twisted\/web\/client.py\", line 24, in \nfrom twisted.internet.endpoints import HostnameEndpoint, wrapClientTLS\nFile \"\/Users\/zhangyiran\/opt\/anaconda3\/lib\/python3.9\/site-packages\/twisted\/internet\/endpoints.py\", line 63, in \nfrom twisted.python.systemd import ListenFDs\nFile \"\/Users\/zhangyiran\/opt\/anaconda3\/lib\/python3.9\/site-packages\/twisted\/python\/systemd.py\", line 18, in \nfrom attrs import Factory, define\nModuleNotFoundError: No module named 'attrs'\n(venv) (base) zhangyiran@zhangyirandeair spider2022 % ``\n\"","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":192,"Q_Id":74283869,"Users Score":0,"Answer":"In one code I used import - 'from scrapy.crawler import CrawlerProcess'. Then, in other spiders, I began to have the same problem as you. When I commented out the import CrawlerProcess - the problem went away.","Q_Score":0,"Tags":"python,scrapy","A_Id":74943392,"CreationDate":"2022-11-02T02:52:00.000","Title":"why my scrapy not run? 
I'm a new user of scrapy and it cannot create csv","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am relatively new to web development and very new to using Web2py. The application I am currently working on is intended to take in a CSV upload from a user, then generate a PDF file based on the contents of the CSV, then allow the user to download that PDF. As part of this process I need to generate and access several intermediate files that are specific to each individual user (these files would be images, other pdfs, and some text files). I don't need to store these files in a database since they can be deleted after the session ends, but I am not sure the best way or place to store these files and keep them separate based on each session. I thought that maybe the subfolders in the sessions folder would make sense, but I do not know how to dynamically get the path to the correct folder for the current session. Any suggestions pointing me in the right direction are appreciated!","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":28,"Q_Id":74292386,"Users Score":0,"Answer":"I was having this error \"TypeError: expected string or Unicode object, NoneType found\" and I had to store just a link in the session to the uploaded document in the db or maybe the upload folder in your case. I would store it to upload to proceed normally, and then clear out the values and the file if not 'approved'?","Q_Score":0,"Tags":"python,web2py","A_Id":74293312,"CreationDate":"2022-11-02T16:12:00.000","Title":"Using temporary files and folders in Web2py app","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am relatively new to web development and very new to using Web2py. The application I am currently working on is intended to take in a CSV upload from a user, then generate a PDF file based on the contents of the CSV, then allow the user to download that PDF. As part of this process I need to generate and access several intermediate files that are specific to each individual user (these files would be images, other pdfs, and some text files). I don't need to store these files in a database since they can be deleted after the session ends, but I am not sure the best way or place to store these files and keep them separate based on each session. I thought that maybe the subfolders in the sessions folder would make sense, but I do not know how to dynamically get the path to the correct folder for the current session. 
Any suggestions pointing me in the right direction are appreciated!","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":28,"Q_Id":74292386,"Users Score":0,"Answer":"If the information is not confidential in similar circumstances, I directly write the temporary files under \/tmp.","Q_Score":0,"Tags":"python,web2py","A_Id":74326905,"CreationDate":"2022-11-02T16:12:00.000","Title":"Using temporary files and folders in Web2py app","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am trying to read in a large amount of Avro files in subdirectories from s3 using spark.read.load on databricks. I either get an error due to the max result size exceeding spark.driver.maxResultSize, or if I increase that limit, the driver runs out of memory.\nI am not performing any collect operation so I'm not sure why so much memory is being used on the driver. I wondered if it was something to do with an excessive number of partitions, so I tried playing around with different values of spark.sql.files.maxPartitionBytes, to no avail. I also tried increasing memory on the driver and using a bigger cluster.\nThe only thing that seemed to help slightly was specifying Avro schema beforehand rather than inferring; this meant the spark.read.load finished without error, however memory usage on the driver was still extremely high and the driver still crashed if I attempted any further operations on the resulting DataFrame.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":241,"Q_Id":74306482,"Users Score":0,"Answer":"I discovered the problem was the spark.sql.sources.parallelPartitionDiscovery.parallelism option. This was set too low for the large number of files I was trying to read, resulting in the driver crashing. Increased the value of this and now my code is working.","Q_Score":0,"Tags":"python,apache-spark,pyspark,databricks","A_Id":74316285,"CreationDate":"2022-11-03T16:28:00.000","Title":"Why am I getting out of memory error on spark driver when trying to read lots of Avro files? No collect operation happening","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've trawled through stack overflow, several youtube videos and can't for the life of me work this out.\nI've unpackaged and pulled from git, all files are where they need to be as far as the installation for Stable Diffusion goes - but when I go to run I get two errors, one being the pip version. I upgraded via 'pip install --upgrade pip' and though the version updated, I'm still getting the below error.\nThe other issue is that pytorch doesn't seem to have installed. I've added it to the requirements.txt and run 'pip install -r requirements.txt' which doesn't seem to work either. 
I also downloaded 1.12.1+cu113 and ran pip install \"path\/\" and received the error \"ERROR: torch-1.12.1+cu113-cp39-cp39-win_amd64.whl is not a supported wheel on this platform.\"\nError received below:\nstderr: ERROR: Could not find a version that satisfies the requirement torch==1.12.1+cu113 (from versions: none)\nERROR: No matching distribution found for torch==1.12.1+cu113\nWARNING: You are using pip version 20.1.1; however, version 22.3 is available.\nYou should consider upgrading via the 'C:\\Users\\XXX\\Downloads\\STABLE\\stable-diffusion-webui\\venv\\Scripts\\python.exe -m pip install --upgrade pip' command.\nAny help would be greatly appreciated, I've tried my best to be self-sufficient so I'm putting it to the people who may know how to help.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":762,"Q_Id":74313444,"Users Score":0,"Answer":"Same problem with python 3.8. I install python3.10 and fixed.\nFor mac:\nbrew install python@3.10","Q_Score":0,"Tags":"python,pip,stable-diffusion","A_Id":75530853,"CreationDate":"2022-11-04T07:19:00.000","Title":"Attempting to install Stable Diffusion via Python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a huge \".csv\" file which corresponds to an one-day data file.\nEvery half-hour of the day, data was recorded during ten minutes. In the file, each half-hour is separated by a text tag, such as \"zzzz\" or \"system sleep\", depending on the .csv.\nI would like to split the CSV breaking it into the 48 half-hour blocks, and save each half-hour .csv in a new foder that includes 48 smaller csv. files. I am sure there are ways to do this but I cannot find the way. Because each half-hour values do not have exactly the same number of rows, I cannot split this data according to row numbers.\nThe file will look something like the following (I made a shortened example):\n\n\n\n\nID\nDay\nTime\nRec\nvalue\n\n\n\n\nA1\n2018\/1\/30\n00:00\n1\n251\n\n\nA1\n2018\/1\/30\n00:01\n2\n368\n\n\nA1\n2018\/1\/30\n00:02\n3\n430\n\n\nsystem sleep.\n\n\n\n\n\n\nA1\n2018\/1\/30\n00:30\n1\n195\n\n\nA1\n2018\/1\/30\n00:31\n2\n876\n\n\nA1\n2018\/1\/30\n00:32\n3\n864\n\n\nsystem sleep.\n\n\n\n\n\n\nA1\n2018\/1\/30\n01:00\n1\n872\n\n\nA1\n2018\/1\/30\n01:01\n2\n120\n\n\nA1\n2018\/1\/30\n01:02\n3\n208\n\n\nsystem sleep.\n\n\n\n\n\n\n(...)\n(...)\n(...)\n(...)\n(...)\n\n\nA1\n2018\/1\/30\n23:39\n10\n002\n\n\n\n\nAnd so it goes for the whole day. Please note my actual data has up to 7000 values per half-hour*.\nI would like to split it for each \"system sleep\" (or each time such text appears in the first column); and save the new .csv files in a new folder. Also, if possible, I would like to keep the header (first row) for all the half-hour blocks\/new csv'. Ideally, I'd also like the file to be saved after the first time value\/row of each block (\"Time\") -but I guess it would still work if it was saved as 1, 2, 3, 4.\nCan anyone help me? I usually work with R language, but if it's easily done in another language such as python (I found many answers in python but not exaclty what I need), I wouldn't mind giving it a try eventhough I have no experience with it (but if I know R, it should be doable). 
Thank you very much..!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":47,"Q_Id":74342924,"Users Score":0,"Answer":"What I did was to use the function\nwhich(startsWith(df$treeID, \"system sleep.\")),\nthis retrieved me all the columns that started with this value.\nThen using the function slice and the previous column I can cut the dataframe. However, I can only do it one by one (selecting with [1], [2], etc. the row and slicing the rest). So I am trying to build a loop now.\nIf you want more details send me a message.\nThank you.","Q_Score":0,"Tags":"python,r,csv,split,intervals","A_Id":74358129,"CreationDate":"2022-11-07T06:54:00.000","Title":"Split a huge csv based on time intervals depending on inner text tags","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have xyz axis for gyro and accelerometer data, and i want to detect between whether the travel path was circular or square\nHave not tried anything, want initial ideas","AnswerCount":1,"Available Count":1,"Score":-0.1973753202,"is_accepted":false,"ViewCount":17,"Q_Id":74346346,"Users Score":-1,"Answer":"The way I would approach this would be:\nFirst compute some characteristics about my data such as:\n\ncentroid of my points\nmax distance between two points\n...\n\nThen create reference shapes with that data, for example a circle with its center being the computed centroid of my points, and its diameter being the max distance,...\nThen try to find how close to each reference shape is every point in my path to compute the standard deviation between each reference shape and my path. This might be more or less difficult depending on how complicated your reference shapes are.\nFinally I would just pick the shape with the smallest standard deviation.\nThis might not be very optimal though, since it involves quite a lot of computation, especially if you have a lot of point in your path.","Q_Score":0,"Tags":"python,matplotlib","A_Id":74346551,"CreationDate":"2022-11-07T12:08:00.000","Title":"How can i distinguish between square and circular shapes from accelerometer and gyroscope sensor data using python?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I will like to delete all empty row in a column\nprices_dataframe[prices_dataframe['postcode'].isnull()]\nThis only seems to be showing me the empty rows not deleting it.","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":37,"Q_Id":74346672,"Users Score":0,"Answer":"Is Null only returns the empty rows it does not drop them. 
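For the half-hour CSV-splitting question above, a pandas sketch of the same marker-based idea, assuming the "system sleep." text lands in the first column (called ID here) and that each block is saved under a name taken from its first Time value:

    import pandas as pd
    from pathlib import Path

    df = pd.read_csv("day.csv")                                    # the one-day file
    is_marker = df["ID"].astype(str).str.startswith("system sleep")
    df["block"] = is_marker.cumsum()                               # 0, 0, ..., 1, 1, ..., 2, ...
    out = Path("half_hours"); out.mkdir(exist_ok=True)

    for _, g in df[~is_marker].groupby("block"):
        name = str(g["Time"].iloc[0]).replace(":", "")             # e.g. "0030"
        g.drop(columns="block").to_csv(out / f"{name}.csv", index=False)  # header kept per file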
Your empty rows should contain NaN so you can use `prices_dataframe.dropna(inplace=True)\nTo drop them.\nIf your rows don't contain NaN you can first replace the empty rows with NaN\nprices_dataframe.replace('', np.nan, inplace=True) and then drop them","Q_Score":0,"Tags":"python,data-cleaning,rowdeleting","A_Id":74346743,"CreationDate":"2022-11-07T12:33:00.000","Title":"how to delete for an empty row in a column in python csv","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have multiple csv files present in hadoop folder. each csv files will have the header present with it. the header will remain the same in each file.\nI am writing these csv files with the help of spark dataset like this in java\ndf.write().csv(somePath)\nI was also thinking of using coalsec(1) but it is not memory efficient in my case\nI know that this write will also create some redundant files in a folder. so need to handle that also\nI want to merge all these csv files into one big csv files but I don't want to repeat the header in the combined csv files.I just want one line of header on top of data in my csv file\nI am working with python to merging these files. I know I can use hadoop getmerge command but it will merge the headers also which are present in each csv files\nso I am not able to figure out how should I merge all the csv files without merging the headers","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":126,"Q_Id":74364964,"Users Score":1,"Answer":"coalesce(1) is exactly what you want.\nSpeed\/memory usage is the tradeoff you get for wanting exactly one file","Q_Score":0,"Tags":"python,csv,apache-spark,hadoop,hdfs","A_Id":74377309,"CreationDate":"2022-11-08T17:36:00.000","Title":"merge multiple csv files present in hadoop into one csv files in local","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am porting the Matlab code to Python. In Matlab, indices start at 1, but in python, they start at 0. Is there any way to set the first index as 1 through a command line flag?\nIt will be very useful for programming during index iteration.","AnswerCount":5,"Available Count":1,"Score":0.0399786803,"is_accepted":false,"ViewCount":84,"Q_Id":74365495,"Users Score":1,"Answer":"As far as Python is concerned, there cannot be changes in the Indexing part. It always starts with 0 (Zero) only and progresses onwards.\nHope this helps you.","Q_Score":0,"Tags":"python,python-3.x,indexing","A_Id":74365617,"CreationDate":"2022-11-08T18:23:00.000","Title":"Python index starts at 0. Any possibility to set index value as 1 for any terminal?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have got an excel file from work which I amended using pandas. It has 735719 rows \u00d7 31 columns, I made the changes necessary and allocated them to a new dataframe. Now I need to have this dataframe in an Excel format. I have checked to see that in jupyter notebooks the ont_dub works and it shows a dataframe. 
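For the HDFS CSV-merge question above, a PySpark sketch of the accepted coalesce(1) suggestion (the paths are placeholders); because Spark writes a single part file, the header appears exactly once:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()
    df = spark.read.option("header", True).csv("hdfs:///data/parts/")   # hypothetical input folder

    (df.coalesce(1)                       # one partition -> one output file
       .write.option("header", True)      # single header line on top of the data
       .mode("overwrite")
       .csv("hdfs:///data/merged/"))      # hypothetical output folder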
So I use the following code ont_dub.to_excel(\"ont_dub 2019.xlsx\") which I always use.\nHowever normally this would only take a few seconds, but now it has been 40 minutes and it is still calculating. Sidenote I am working in a onedrive folder from work, but that hasn't caused issues before. Hopefully someone can see the problem.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":54,"Q_Id":74366492,"Users Score":0,"Answer":"Usually, if you want to save such high amount of datas in a local folder. You don't utilize excel. If I am not mistaken excel has a know limit of displayable cells and it wasnt built to display and query such massive amounts of data (you can use pandas for that). You can either utilize feather files (a known quick save alternative). Or csv files, which are built for this sole purpose.","Q_Score":0,"Tags":"python,excel,pandas,dataframe,jupyter-notebook","A_Id":74366619,"CreationDate":"2022-11-08T20:00:00.000","Title":"Writing dataframe to Excel takes extremely long","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Create a row that sums the rows that do not have a data in all the columns.\nI'm working on a project that keeps throwing dataframes like this:\n\n\n\n\n1\n2\n3\n4\n5\n\n\n\n\n\n108.864\n\nINTERCAMBIADORES DE\n1123.60 210.08 166.71 1333.68\n\n\n\n\n\nCALOR 8419500300\n\n\n\n\n147.420 5.000\nPZ\n1A0181810000\n81039.25 15149.52 19237.754880 96188.77\n\n\n\n147.420\n\nINTERCAMBIADORES DE\n3882.25 725.75 921.60 4608.00\n\n\n\n\n\nCALOR 8419500300\n\n\n\n\n566.093 12.000\nPZ\n1A0183660000\n66187.40 12374.29 6546.806709 78561.68\n\n\n\n566.093\n\nINTERCAMBIADORES DE\n3170.76 592.80 313.63 3763.56\n\n\n\n\n\nCALOR 8419500300\n\n\n\n\n3.645 1.000\nPZ\n1A0185890000\n836.64 159.69 996.330339 996.33\n\n\n\n3.645\n\nINTERCAMBIADORES DE\n40.08 7.65 47.73 47.73\n\n\n\n\n\nCALOR 8419500300\n\n\n\n\n131.998 3.000\nPZ\n1A0190390000\n32819.41 6135.17 12984.858315 38954.57\n\n\n\n131.998\n\nINTERCAMBIADORES DE\n1572.24 293.91 622.05 1866.15\n\n\n\n\n\nCALOR 8419500300\n\n\n\n\n123.833 3.000\nPZ\n1A0190790000\n54769.36 10238.84 21669.402087 65008.21\n\n\n\n123.833\n\nINTERCAMBIADORES DE\n2623.77 490.50 1038.09 3114.27\n\n\n\n\n\nCALOR 8419500300\n\n\n\n\n115.214 2.000\nPZ\n1A0195920000\n54642.66 10215.05 32428.851279 64857.70\n\n\n\n115.214\n\nINTERCAMBIADORES DE\n2617.70 489.36 1553.53 3107.06\n\n\n\n\nThis is going to insert a sql database, I don't know how to add the empty rows with the row that has all the information.\nNOTE: Spacing Empty cells is variable","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":19,"Q_Id":74367617,"Users Score":0,"Answer":"Question is unclear: please provide code you've tried, the error message you're getting, and expected output.","Q_Score":0,"Tags":"python,pandas,dataframe,data-science","A_Id":74370323,"CreationDate":"2022-11-08T21:58:00.000","Title":"Create a row that sums the rows that do not have a data in all the columns pandas","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Short description: two computers in the same network, in the new one only those python scripts work that use native packages.\nI have Pycharm in my old 
computer and it has worked fine. Now I got a new computer, installed the most recent version of Python and Pycharm, then opened one of my old projects. Both the old and the new computer are in the same network and the project is on a shared folder. So I did the following:\n\nFile - Open - selected the project. Got a message that there is no interpreter\nAdd local interpreter - selected the latest Python 311 exe. So location of the venv is the same as in the old computer (because it's a network folder) but Base interpreter is pointing to the C drive of my new computer.\nPyCharm creates a virtual environment and the code runs fine.\nI select another project which uses imported packages such as pandas. Again, same steps as above, add local interpreter. Venv is created.\nI go to File - Setting - Project and see that pip, setuptools and wheel are listed as Packages. If I double click one of these, I can re-install and get a note that installation is succesful, so nothing seems to be wrong in the connection (after all, both the old and the new computer are in the same network.\nI click the plus sign to add a new one, search pandas. Installation fails. Same thing if I try e.g. numpy.\n\nError message has lots of retrying, then \"could not find the version that satisfies the requirement pandas (from versions: none\", \"not matching distribution found for pandas\" (pip etc. have the latest versions).\nAfter few hours of googling for solutions, I have tried the following:\n\nComplety uninstall and reinstall python and PyCharm. Checked that PATH was included in the installation.\nTried launching pip command from shell\nChanged http proxy to auto-detect\nTyped 'import pandas' in PyCharm, then used the dropdown in the yellow bulb but there is no install option\nStarted a new project in the new computer, tried to install pandas\n\nAll failed. I'm surprised that changing computers is this difficult. Please let me know if there are other options than staying in the old computer...","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":34,"Q_Id":74377753,"Users Score":1,"Answer":"If you want to use venv in the network, please use SSH interpreter. Pycharm supports this method. Shared folders are not a recommended usage, For pycharm, it will consider this as a local file. If the file map is not downloaded locally, it will make an error.\nAnother way is to reinstall the project environment on the new computer through requirement.txt. Reasonable use of requirements.txt can effectively avoid many project bugs caused by environment migration or different dependent versions. Before installing some scientific module such as pandas, it is recommended to install visual studio build tools, such as gcc ...","Q_Score":1,"Tags":"python,python-3.x,pandas,pycharm,windows-10","A_Id":74377937,"CreationDate":"2022-11-09T15:50:00.000","Title":"PyCharm cannot install packages","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Short description: two computers in the same network, in the new one only those python scripts work that use native packages.\nI have Pycharm in my old computer and it has worked fine. Now I got a new computer, installed the most recent version of Python and Pycharm, then opened one of my old projects. 
Both the old and the new computer are in the same network and the project is on a shared folder. So I did the following:\n\nFile - Open - selected the project. Got a message that there is no interpreter\nAdd local interpreter - selected the latest Python 311 exe. So location of the venv is the same as in the old computer (because it's a network folder) but Base interpreter is pointing to the C drive of my new computer.\nPyCharm creates a virtual environment and the code runs fine.\nI select another project which uses imported packages such as pandas. Again, same steps as above, add local interpreter. Venv is created.\nI go to File - Setting - Project and see that pip, setuptools and wheel are listed as Packages. If I double click one of these, I can re-install and get a note that installation is succesful, so nothing seems to be wrong in the connection (after all, both the old and the new computer are in the same network.\nI click the plus sign to add a new one, search pandas. Installation fails. Same thing if I try e.g. numpy.\n\nError message has lots of retrying, then \"could not find the version that satisfies the requirement pandas (from versions: none\", \"not matching distribution found for pandas\" (pip etc. have the latest versions).\nAfter few hours of googling for solutions, I have tried the following:\n\nComplety uninstall and reinstall python and PyCharm. Checked that PATH was included in the installation.\nTried launching pip command from shell\nChanged http proxy to auto-detect\nTyped 'import pandas' in PyCharm, then used the dropdown in the yellow bulb but there is no install option\nStarted a new project in the new computer, tried to install pandas\n\nAll failed. I'm surprised that changing computers is this difficult. Please let me know if there are other options than staying in the old computer...","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":34,"Q_Id":74377753,"Users Score":0,"Answer":"This took a while but here is what happened. Package installation did not work in project settings. Neither did it work when you select Python Packages tab at the bottom of the screen. The only thing that worked was to select the Terminal tab and manually install (pip install) there. We use a trusted repository but for other users, the easier package installation methods work. Not sure why they do not for me but at least there is this manual workaround.","Q_Score":1,"Tags":"python,python-3.x,pandas,pycharm,windows-10","A_Id":74571549,"CreationDate":"2022-11-09T15:50:00.000","Title":"PyCharm cannot install packages","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I was learning how to make graphs with python pandas. But I couldn't understand how this code works.\nfig , ax = plt.subplots( ) ax = tips[['total_bill','tip']].plot.hist(alpha=0.5, bins=20, ax=ax)\nI couldn't understand why the code words only when there is fig infront of ax.\nAlso I have no idea what 'ax=ax' means.\nI found everywhere but I couldn't find the answer...","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":12,"Q_Id":74381751,"Users Score":0,"Answer":"Pandas is using the library matplotlib to do the plotting. 
Try to read up a bit about how matploltib works, it will help you understand this code a bit.\nGenerally, plotting with matplotlib involves a figure and one or more axes. A figure can be thought of as a frame where multiple plots can be created inside. Each plot consists of an axes object which contains your x- and y-axis and so on.\nWith the command plt.subplots(), you create in a single function a figure object and one or more axes objects. If you pass no parameters to the function, just a single axes object will get created that is placed on the figure object. The figure and axes are returned as a tuple by the function in the form of (figure, axes). You are unpacking that tuple with the first line into the variable fig and ax.\nThen, when you call the plotting function on your pandas data, you tell the function on which axes object to do the plotting. This is what the parameter ax means in that function. So you are telling the function to use your axes object that your variable ax is assigned to by setting the parameter ax to ax (ax = ax).\nDoing ax = tips[['total_bill','tip']].plot... is redundant. The plotting function returns the axes object on which the plotting was performed by pandas. However, you are just overwriting your already existing axes with the returned axes, which in this case are the same object. This would only be needed if you don't pass the ax parameter to the plotting function, in which case pandas would create a brandnew figure and axes object for you and return the axes object in case you want to do any further tweaks to it.","Q_Score":0,"Tags":"python,pandas,dataframe,graph,series","A_Id":74381853,"CreationDate":"2022-11-09T21:47:00.000","Title":"Why does python pandas need fix infront of ax to draw a graph?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Say I have two arrays X=[A,B,C] and Y=[D,E,F], where each element is a 3 by 3 matrix. I would like to make an array Z=[AD,BE,CF] without using for loop. What should I do?\nI have tried using np.tensordot(X,Y,axis=1) but it returns 9 products [[AD,AE,AF],[BD,BE,BF],[CD,CE,CF]]. the troublesome thing is that the matrix size for each element must be the same as the array length, say for 3 by 3 matrix, X and Y should have 3 elements each.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":26,"Q_Id":74384495,"Users Score":0,"Answer":"It turns out the answer is incredibly simple. 
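For the fig, ax question above, a compact sketch (with a toy stand-in for the tips data) showing that ax=ax simply tells pandas which Axes to draw on, and that the same Axes can then be tweaked further:

    import pandas as pd
    import matplotlib.pyplot as plt

    tips = pd.DataFrame({"total_bill": [10.3, 21.0, 23.7, 15.5],
                         "tip": [1.5, 3.0, 3.5, 2.0]})   # toy stand-in

    fig, ax = plt.subplots()                             # one Figure containing one Axes
    tips[["total_bill", "tip"]].plot.hist(alpha=0.5, bins=20, ax=ax)  # draw onto that Axes
    ax.set_title("total_bill and tip")                   # further tweaks use the same Axes
    plt.show()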
I just used np.matmul (X,Y) to achieve the result I wanted.","Q_Score":0,"Tags":"python,numpy,matrix,tensor","A_Id":74398030,"CreationDate":"2022-11-10T05:24:00.000","Title":"How to do some tensor multiplication without using for loop in python?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a lot of categorical columns and want to convert values in those columns to numerical values so that I will be able to apply ML model.\nNow by data looks something like below.\nColumn 1- Good\/bad\/poor\/not reported\ncolumn 2- Red\/amber\/green\ncolumn 3- 1\/2\/3\ncolumn 4- Yes\/No\nNow I have already assigned numerical values of 1,2,3,4 to good, bad, poor, not reported in column 1 .\nSo, now can I give the same numerical values like 1,2,3 to red,green, amber etc in column 2 and in a similar fashion to other columns or will doing that confuse model when I implement it","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":60,"Q_Id":74398311,"Users Score":1,"Answer":"You can do this for some of the rated columns by using df[colname].map({})or LabelEncoder() .\nThey will change each categorical data to numbers, so there is a weight between them, which means if poor is one and good is 3, as you can see, there is a difference between them. You want the model to know it, but if it's just something like colors, you know there is no preference in colors, and green is no different from blue .so it is better not to use the same method and use get_dummies in pandas.","Q_Score":0,"Tags":"python,machine-learning,encoding,data-science,categorical","A_Id":74411380,"CreationDate":"2022-11-11T05:21:00.000","Title":"Convert Categorical features to Numerical","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm using a custom library to transform a 1D signal to a 2D representation. The output it's printed through plt.imshow(), used by a function inside of the library. I have the result but i don't want to save the picture locally. There is a way to get as a PIL image what is being used by plt.imshow?\nEDIT: The answer is yes, as @Davide_sd is pointing out ax.images[idx].get_array() can be used to retrieve the data","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":54,"Q_Id":74399566,"Users Score":1,"Answer":"You can use ax.images[idx].get_array() to retrieve the data, after which you can use it on PIL. ax is the axes where the image has been plotted. idx is the index of the image you are interested: if you have plotted a single image, then idx=0.","Q_Score":0,"Tags":"python,matplotlib,python-imaging-library,imshow","A_Id":74399647,"CreationDate":"2022-11-11T08:00:00.000","Title":"Take the image printed with plt.imshow as a variable","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm studying k-anonymization and the mondrian algorithm proposed by LeFevre. 
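For the tensor-multiplication question above, the np.matmul call from the answer treats the leading axis as a batch dimension, so each 3x3 matrix in X is multiplied only with the matching one in Y:

    import numpy as np

    X = np.random.rand(3, 3, 3)      # stack [A, B, C]
    Y = np.random.rand(3, 3, 3)      # stack [D, E, F]

    Z = np.matmul(X, Y)              # Z[k] = X[k] @ Y[k], i.e. Z = [AD, BE, CF]
    assert np.allclose(Z[1], X[1] @ Y[1])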
In it, LeFevre says that at one point in his algorithm, we have to choose a feature in the Dataframe depending on which feature has the largest range of normalized values.\nFor example, if I have the feature Age in my dataset with the values:\n[13, 15, 24, 30], I understand that the range is 13-30, but as soon as you make it normalized wouldn't it always be [0-1]?\nI know that the question seems strange, but I couldn't find anything on the internet nor on the paper itself that documented more what he meant.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":31,"Q_Id":74420861,"Users Score":0,"Answer":"It depends on a normalization technique but yes. If we use min max it will always be between [0,1]. What you can do is split that variable into segments and the normalized your data. However you use minx-max normalization, the minimum value of that feature gets transformed into a 0, and the maximum value gets a 1. Maybe a\nmean normalization could give you a different result in that case.","Q_Score":0,"Tags":"python,anonymity","A_Id":74421556,"CreationDate":"2022-11-13T12:19:00.000","Title":"What is a normalized range of values?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have two df's: one has a date in the first column: all dates of the last three years and the second column are names of participants, other columns are information.\nIn the second df, I have some dates on which we did tests in the first column, then second column the names again and more columns information.\nI would like to combine the two dateframes that in the first dataframe the information from the second will be added but for example if we did one test on 2-9-2020 and the same test for the same person on 16-9-2022 then from 2-9-202 until 16-9-2022 i want that variable and after that the other.\nI hope it's clear what i mean.\ni tried\ndata.merge(data_2, on='Date' & 'About')\nbut that is not possible to give two columns for on.","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":34,"Q_Id":74421250,"Users Score":0,"Answer":"With Python and Pandas, you can join on 2 variables by using something like:\ndf=pd.merge(df,df2,how=\"left\",on=['Date','About']) # can be how=\"left\" or \"inner\",\"right\",\"outer\"","Q_Score":0,"Tags":"python,pandas,dataframe","A_Id":74421321,"CreationDate":"2022-11-13T13:05:00.000","Title":"How do I combine two dataframes on two columns?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm currently working with a pandas data frame and need to save data via CSV for different categories.so I thought to maintain one CSV and add separate sheets to each category. As per my research via CSV, we can't save data for multiple sheets. is there any workaround for this? 
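On the normalized-range question above, one common reading (an interpretation, not a quote from the paper) is that min-max normalization is done once over the whole attribute, so the full column spans [0, 1] but any subset, such as a Mondrian partition, can span a narrower interval, and that narrower width is what gets compared across features:

    import numpy as np

    age = np.array([13, 15, 24, 30])
    norm = (age - age.min()) / (age.max() - age.min())   # full column: spans exactly [0, 1]

    partition = norm[:2]                                  # e.g. a partition holding 13 and 15
    print(partition.max() - partition.min())              # ~0.118, i.e. (15 - 13) / (30 - 13)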
I need to keep the format as CSV(cant use excel)","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":37,"Q_Id":74442155,"Users Score":3,"Answer":"No.\nA CSV file is just a text file, it doesn't have a standard facility for \"multiple sheets\" like spreadsheet files do.\nYou could save each \"sheet\" as a separate file, but that's about it.","Q_Score":0,"Tags":"python,pandas,csv","A_Id":74442168,"CreationDate":"2022-11-15T07:49:00.000","Title":"Is there any workaround to save csv with multiple sheets in python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Thanks to everyone reading this.\nI'm a beginner to pytorch. I now have a .pt file and I wanna print the parameter's shape of this module. As I can see, it's a MLP model and the size of input layer is 168, hidden layer is 32 and output layer is 12.\nI tried torch.load() but it returned a dict and I don't know how to deal with it. Also, I wanna print the weight of input layer to hidden layer(that maybe a 168*32 matrix) but I don't know how to do that. Thanks for helping me!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":20,"Q_Id":74442412,"Users Score":0,"Answer":"The state dictionary does not contain any information about the structure of the forward logic of its corresponding nn.Module. Without prior knowledge about its content, you can't tell which key of the dict contains the first layer of the module... it's possibly the first one but this method is rather limited if you want to go beyond just the first layer. You can inspect the content of the nn.Module but you won't be able to extract much more from it, without having the actual nn.Module class at your disposal.","Q_Score":0,"Tags":"python,pytorch","A_Id":74442695,"CreationDate":"2022-11-15T08:14:00.000","Title":"How to print the model's parameters'shape and print the parameters while loading a .pt file?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I would like to create a CNN model in Python and I have organized my data in such a way that I have 100 csv files with different sizes (all of them have 141 colunms but some have 33 rows and others have 70 rows). All of those files can be categorized in 6 different categories. All the examples that I have seen so far for buiding a CNN model are using either just one dataset in pandas or using several images of the same size. So the question would be, Can I use my data for creating a CNN model in this fashion? If yes, Can anyone give me some tricks or\/and tips of how to?\nThanks a lot in advance!\nI have seen some Tensorflow or PyTorch examples but I dont know how to use them with my data","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":64,"Q_Id":74457562,"Users Score":0,"Answer":"It depends on the reason that the data are separated in different files in the first place and what you want to achieve.\nIf each file contains observations for a different entity AND you want to predict observations about EACH specific known entity, you can build a model for each entity. 
In this case, the entities with more training data will of course have better results.\nStill, if the difference between those entities can be described with numerical values, depending on the exact problem, you can also try adding those to the training data and then concatenating everything. In this case, the added features will make the final classification to better classify the observations of each different entity without building 100 models. Note however that this could work only if the \"qualities\" of the entities actually affect the observations in some degree, otherwise (if the observations are randomly distributed amongst the entities) the results will probably be worse.\nIf however the observations of different entities are needed to train a model that works for any entity (including unknown ones), the data can be concatenated to a single table (pandas DataFrames were mentioned in the question) and then train your model with this combined dataset.","Q_Score":1,"Tags":"python,tensorflow,pytorch,conv-neural-network","A_Id":74492920,"CreationDate":"2022-11-16T08:50:00.000","Title":"Using several csv files of different sizes to build a CNN model in Python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a set of images in which I need to detect which of them needs a perspective transform. The images might be plain documents or photos taken with phone cameras with perspective and I need to perform perspective transform on those. How can I detect which need perspective transform in opencv?\nI can do perspective transform, however, I'm not capable of detecting when an image needs to suffer a perspective transform.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":55,"Q_Id":74473938,"Users Score":0,"Answer":"This could be a possible approach:\n\nTake a reference picture (which does not require a perspective transform).\nDefine four points of interest- (x1,y1) (x2,y2) (x3,y3) (x4,y4) in your reference image. Consider these points as your destination points.\nNow in every other image that you want to check if a perspective transform is necessary, you will detect the same points of interest in those images. Lets call them source points.\nNext you have to check if the source points match your destination points. 
Also you will have to check if the dimensions (width & height) match.\nIf neither of the two matches (the points or the dimensions), there's a need for perspective transform.","Q_Score":0,"Tags":"python,opencv,computer-vision","A_Id":74519688,"CreationDate":"2022-11-17T10:36:00.000","Title":"How to detect when an image needs perspective transform?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I would like to change so True = False or more exact change so True = 0 and False = 1 is there a way to do this?\nI have a dataframe and would like to df.groupby('country',as_index=False).sum() and see how many False values there is in each country\nI have tried df['allowed'] = --df['allowed'] (allowed is the column with True and False values) to swap them but it didn't work","AnswerCount":3,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":37,"Q_Id":74476574,"Users Score":0,"Answer":"Swapping booleans is easy with df[\"neg_allowed\"] = ~df['allowed']","Q_Score":0,"Tags":"python,pandas,boolean","A_Id":74476698,"CreationDate":"2022-11-17T13:47:00.000","Title":"Is there a way to change True to False in python?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"What is the meaning of:\nshape=(1, 224, 224, 3)\nI mean what are all the values specifying given here for shape?","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":51,"Q_Id":74477420,"Users Score":0,"Answer":"When the shape is of length 4, it means that you have a \"4D-tensor\". A 4D-tensor is a group of 3D-tensors. For instance if A is a 4D-tensor, A[0] is a 3D-tensor that is the first element of this group. Here the first number 1 means that your group is only composed of one 3D-tensor. Then, you guess that a 3D-tensor is a group of 2D-tensors (also called matrices). Here your 3D-tensor is composed of 224 2D-tensors (the second number). Then each 2D-tensor is composed of 224 1D-tensors (vectors) of length 3.\nIn your particular case you can also (more simply) view your data as a group composed of one RGB image of size 224*224. Each pixel has 3 values (red, green, blue intensity).","Q_Score":0,"Tags":"python,numpy-ndarray","A_Id":74477591,"CreationDate":"2022-11-17T14:47:00.000","Title":"What are the 4 values passed in shape for ndarray in numPy?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm using colcon for creating ROS2 package. 
And I can't build any package because of error \"No module named 'numpy.core._multiarray_umath'\"\nwhen i do colcon build command, the terminal says next:\n`Original error was: No module named 'numpy.core._multiarray_umath'\nI've already tried update numpy\npip install numpy --upgrade\nIt didn't help(","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":53,"Q_Id":74478962,"Users Score":0,"Answer":"can you show your full terminal output?\nyou can also try with colcon ignore for the location you have got error and the colcon build","Q_Score":0,"Tags":"python-3.x,ros","A_Id":75181478,"CreationDate":"2022-11-17T16:29:00.000","Title":"Whet using colcon, I get an error \"No module named 'numpy.core._multiarray_umath' \"","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have the data of banner clicks by minute.\nI have the following data: hour, minute, and was the banner clicked by someone in that minute. There are some other features (I omitted them in the example dataframe). I need to predict will be any clicks on banner for all following minutes of this hour.\nFor example I have data for the first 11 minutes of an hour.\n\n\n\n\nhour\nminute\nis_click\n\n\n\n\n1\n1\n0\n\n\n1\n2\n0\n\n\n1\n3\n1\n\n\n1\n4\n0\n\n\n1\n5\n1\n\n\n1\n6\n0\n\n\n1\n7\n0\n\n\n1\n8\n0\n\n\n1\n9\n1\n\n\n1\n10\n1\n\n\n1\n11\n0\n\n\n\n\nMy goal is to make prediction for 12, 13 ... 59, 60 minute.\nIt will be real-time model that makes predictions every minute using the latest data.\nFor example, I made the prediction at 18:00 for the next 59 minutes (until 18:59). Now it is 18:01 and I get the real data about clicks at 18:00, so I want to make more precise prediction for following 58 minutes (from 18:02 to 18:59). And so on.\nMy idea was to mask-out the passed minutes with -1\nI created the example of 11 minutes.There are targets:\n\n\n\n\nminute\ntarget vector\n\n\n\n\n1\n-1 0 1 0 1 0 0 0 1 1 0\n\n\n2\n-1 -1 1 0 1 0 0 0 1 1 0\n\n\n3\n-1 -1 -1 0 1 0 0 0 1 1 0\n\n\n4\n-1 -1 -1 -1 1 0 0 0 1 1 0\n\n\n5\n-1 -1 -1 -1 -1 0 0 0 1 1 0\n\n\n6\n-1 -1 -1 -1 -1 -1 0 0 1 1 0\n\n\n7\n-1 -1 -1 -1 -1 -1 -1 0 1 1 0\n\n\n8\n-1 -1 -1 -1 -1 -1 -1 -1 1 1 0\n\n\n9\n-1 -1 -1 -1 -1 -1 -1 -1 -1 1 0\n\n\n10\n-1 -1 -1 -1 -1 -1 -1 -1 -1 -1 0\n\n\n11\n-1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1\n\n\n\n\nHowever it seems strange to me to train a model to predict this mask values of -1. I think for neural network it will be not obvious that these -1 are just a padding.\nThe another idea was to use a current minute as a feature and ,therefore, to predict always the sequence of 60 - minute length and then cut the extra prediction. However, the input will have different lengths anyway, so it does not solve the problem.\nSo how I should preprocess the data to use LSTM? Should I use described above padding so all vectors will be have the same length of 60? Is there any better solution?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":51,"Q_Id":74482446,"Users Score":1,"Answer":"An RNN (or LSTM) will return an output for every input, as well as the final hidden state (and cell state for LSTM). So one possible solution: Pad your input of future minutes with with a different token and use an embedding layer with 3 embeddings (0, 1, 2 where 2 represents unseen value). 
For example, at timestep 3 the input = [0, 0, 1, 2, 2, 2,...2].\nAfter this goes through an embedding layer, each token will be mapped to some embedding dimension (e.g. 16) and this would be passed to the LSTM. So the input size for your LSTM would be 16 and the hidden size would be one (so that you get a scalar output for every timestep of the input). Then you pass this output through a sigmoid so each prediction is between 0 and 1 and use binary cross entropy between the predictions and targets as your loss function. Additionally, since you probably don't care how accurate the predictions are for the minutes you've already seen, you could ignore their contribution to the loss.","Q_Score":1,"Tags":"python,deep-learning,pytorch,lstm,recurrent-neural-network","A_Id":74486820,"CreationDate":"2022-11-17T21:43:00.000","Title":"How to train real-time LSTM with input and output of varying length?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"TorchVision Detection models have a weights and a weights_backbone parameter. Does using pretrained weights imply that the model uses pretrained weights_backbone under the hood? I am training a RetinaNet model and um unsure which of the two options I should use and what the differences are.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":53,"Q_Id":74489594,"Users Score":0,"Answer":"The difference is pretty simple: you can either choose to do transfer learning on the backbone only or on the whole network.\nRetinaNet from Torchvision has a Resnet50 backbone. You should be able to do both of:\n\nretinanet_resnet50_fpn(weights=RetinaNet_ResNet50_FPN_Weights.COCO_V1)\nretinanet_resnet50_fpn(backbone_weights=ResNet50_Weights.IMAGENET1K_V1)\n\nAs implied by their names, the backbone weights are different. The former were trained on COCO (object detection) while the latter were trained on ImageNet (classification).\nTo answer your question, pretrained weights implies that the whole network, including backbone weights, are initialized. However, I don't think that it calls backbone_weights under the hood.","Q_Score":1,"Tags":"python,pytorch,torchvision,retinanet","A_Id":74490746,"CreationDate":"2022-11-18T12:19:00.000","Title":"TorchVision using pretrained weights for entire model vs backbone","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Say I have m objects and I want to pick which n will be chosen (where m and n are both known). I could run multi-label classification and get the probability that each of the m is chosen and take the n most likely, but that ignores the correlation between items. I'm wondering if there is a modeling approach (ideally in Keras?) that considers the correlations.\nFor example, suppose a soccer team has 18 players and I'm trying to predict which 11 will start the next game. The 11 players who are individually most likely to start do not necessarily comprise the most likely group of 11 players to start. 
For instance, maybe the team has two goalkeepers, each of whom has a 50% chance of starting, but no configuration will start both of them.\nOne option is to predict the set of 11 directly, but that would be multiclass categorization problem with (18 choose 11) cases... Any thoughts on better routes?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":23,"Q_Id":74493406,"Users Score":0,"Answer":"Seems kind of similar to a language model where you want to predict the most likely sentence. If you have the output probabilities for all words, you wouldn't just pick the n likeliest since the sentence would probably make no sense. Instead you condition it on the words you've already chosen.\nSo in your case, the input would include the already selected players. Each pass through the model you add the player with the highest output to the team. To increase the quality you may also want to use beam search, where you keep the best k teams each pass through.","Q_Score":0,"Tags":"python,keras,deep-learning,neural-network,classification","A_Id":74495801,"CreationDate":"2022-11-18T17:34:00.000","Title":"Multi-label classification predicting exactly n out of m options","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using Pandas to Convert CSV to Parquet and below is the code, it is straight Forward.\nimport pandas as pd\ndf = pd.read_csv('path\/xxxx.csv')\nprint(df)\ndf.to_parquet('path\/xxxx.parquet')\nProblem\nIn a String for Example :- David,Johnson. If there is a , getting error saying there is a problem in the data.\nIf i remove the , the CSV File is converting to Parquet.\nAny suggesions, need help\nThanks\nMadhu\nIf i remove the , the CSV File is converting to Parquet","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":38,"Q_Id":74494249,"Users Score":0,"Answer":"Do you need to keep comma in the name of the file? Otherwise you can do input='David,Johnson', output=input.replace(',','_'). I don't think it is generally a good practice to have comma in your file names.","Q_Score":0,"Tags":"python,pandas,dataframe,pip,pyarrow","A_Id":74495042,"CreationDate":"2022-11-18T18:59:00.000","Title":"Pandas Converting CSV to Parquet - String having , not able to convert","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"i am getting this error upon running my code.\ntext = str(text.encode(\"utf-8\"))\nAttributeError: 'float' object has no attribute 'encode'\nI tried to convert my data into string using df['Translated_message']=df['Translated_message'].values.astype('string')\nbut that doesnt worked.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":25,"Q_Id":74503855,"Users Score":0,"Answer":"Text is a float. 
Cast it to str before encoding.","Q_Score":0,"Tags":"python,pandas,numpy,nltk","A_Id":74503876,"CreationDate":"2022-11-19T21:22:00.000","Title":"Python, Twitter Sentiment analysis","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am attempting to subset a pandas DatFrame df with a list L that contains only the column names in the DataFrame that I am interested in. The shape of df is (207, 8440) and the length of L is 6894. When I subset my dataframe as df[L] (or df.loc[:, L]), I get a bizarre result. The expected shape of the resultant DataFrame should be (207, 6894), but instead I get (207, 7092).\nIt seems that this should not even be possible. Can anyone explain this behavior?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":19,"Q_Id":74509554,"Users Score":0,"Answer":"[moving from comment to answer]\nA pandas dataframe can have multiple columns with the exact same name. If this happens, passing a list of column names can return more columns than the size of the list.\nYou can check if the dataframe has duplicates in the column names using {col for col in df.columns if list(df.columns).count(col) > 1}. This will return a set of every column that comes up more than once.","Q_Score":0,"Tags":"python,pandas","A_Id":74509766,"CreationDate":"2022-11-20T15:45:00.000","Title":"Subsetting pandas dataframe with list returns an apparently incorrectly sized resultant dataframe","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am training a yolox model and using wandb (weight & biases library) to follow training evolution. My problem is that when I am loading wandb library (version 0.13.5) I get an error message, which is:\nwandb: ERROR Failed to sample metric: Not Supported\nThe surprising thing is that when I run the exact same code on google collab (that has the library version), it works perfectly (problem: can't have unlimited GPU access on collab). So I have to find out how to avoid this error.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":571,"Q_Id":74520555,"Users Score":1,"Answer":"Engineer from W&B here! Would it be possible for you to share the console log so that we can find the line where the error originates?","Q_Score":4,"Tags":"python,tensorboard,yolo,wandb","A_Id":74546796,"CreationDate":"2022-11-21T14:35:00.000","Title":"weights & biases : ERROR Failed to sample metric: Not Supported","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to run a Streamlit app importing pickle files and a DataFrame. 
The pathfile for my script is :\n\n\/Users\/myname\/Documents\/Master2\/Python\/Final_Project\/streamlit_app.py\n\nAnd the one for my DataFrame is:\n\n\/Users\/myname\/Documents\/Master2\/Python\/Final_Project\/data\/metabolic_syndrome.csv\n\nOne could reasonably argue that I only need to specify df = pd.read_csv('data\/df.csv') yet it does not work as the Streamlit app is unexpectedly not searching in its directory:\n\nFileNotFoundError: [Errno 2] No such file or directory: '\/Users\/myname\/data\/metabolic_syndrome.csv'\n\nHow can I manage to make the app look for the files in the good directory (the one where it is saved) without having to use absolute pathfiles ?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":36,"Q_Id":74530787,"Users Score":0,"Answer":"In which directory are you standing when you are running your code?\nFrom your error message I would assume that you are standing in \/Users\/myname\/ which makes python look for data as a subdirectory of \/Users\/myname\/.\nBut if you first change directory to \/Users\/myname\/Documents\/Master2\/Python\/Final_Project and then run your code from there I think it would work.","Q_Score":0,"Tags":"python,streamlit","A_Id":74557723,"CreationDate":"2022-11-22T10:03:00.000","Title":"Streamlip app not searching files in the good directory","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to find a way to search inside the uploaded files.\nIf a user uploads a pdf, CSV, word, etc... to the system, the user should be able to search inside the uploaded file with the keywords.\nIs there a way for that or a library?\nor\nmaybe should I save the file as a text inside a model and search from that?\nI will appreciate all kind of reccommendation.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":30,"Q_Id":74533417,"Users Score":1,"Answer":"Well, if you save the file text in the db and then search it, that seems to be a practical idea.\nBut I feel there might be a decrease in performance.\nOr maybe, if you upload the file to an S3 bucket, you can use the presigned url to fetch the file once it is uploaded and then perform the search operation.","Q_Score":0,"Tags":"python,django","A_Id":74544307,"CreationDate":"2022-11-22T13:28:00.000","Title":"How to search inside an uploaded document?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have two data frames that I want to merge on a same column name but the values can have different variations of a values.\nExamples. 
Variations of a value :\n\n\n\n\nVariations\n\n\n\n\nUSA\n\n\nUS\n\n\nUnited States\n\n\nUnited States of America\n\n\nThe United States of America\n\n\n\n\nAnd let's suppose the data frames as below:\ndf1 =\n\n\n\n\ncountry\ncolumn B\n\n\n\n\nIndia\nCell 2\n\n\nChina\nCell 4\n\n\nUnited States\nCell 2\n\n\nUK\nCell 4\n\n\n\n\ndf2 =\n\n\n\n\nCountry\nclm\n\n\n\n\nUSA\nval1\n\n\nCH\nval2\n\n\nIN\nval3\n\n\n\n\nNow how do I merge such that the United States is merged with USA?\nI have tried DataFrame merge but it merges only on the matched values of the column name.\nIs there a way to match the variations and merge the dataframes?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":34,"Q_Id":74555443,"Users Score":0,"Answer":"Use .count to count how many times United States is stated in the list and then make an if command to see if united stated is listed more than once in the list. Do it to all of the other options and make a final if command to check if either any of them are in the list to output the value that you want.","Q_Score":0,"Tags":"python,pandas,dataframe","A_Id":74556675,"CreationDate":"2022-11-24T03:52:00.000","Title":"How to merge two dataframes with different variations of a column values?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"To be more specific the error variance of the x value is half of the variance of error in y.\nI looked over sklearn and couldn't find a function which takes the error variance of x into account.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":43,"Q_Id":74560695,"Users Score":0,"Answer":"for anyone who might find it useful,\nthe lecturer told us the answer, it required using DEMING REGRESSION","Q_Score":1,"Tags":"python,machine-learning,scikit-learn","A_Id":75195087,"CreationDate":"2022-11-24T12:20:00.000","Title":"How to use Linear regression when my **X** values are normally distributed?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"On a RFC model, I am trying to figure out how the feature importances change my classification when i am perturbing my data, like\nfeatures(no perturbation)= features(perturbed data)-features(perturbation)\nThen using the features(no perturbation) on my already fit model.\nDo you if it is possible to manually set or change the feature importances of an RFC model ? I tried looking for but no results.\nThank you.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":16,"Q_Id":74575103,"Users Score":0,"Answer":"The general convention in scikit-learn code is that attributes that are inferred from your data \/ training end with _. feature_importances_ attributes respects that convention as well. 
They represent impurity-based importances and they are computed \/ inferred from your training set statistics.\nYou have the option to act on the weight you give to the different samples through sample_weight argument as well as weighting your classes through class_weight parameter.","Q_Score":0,"Tags":"python,machine-learning,scikit-learn,random-forest","A_Id":74576747,"CreationDate":"2022-11-25T15:45:00.000","Title":"Random Forest Classifier: Set feature importances?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a data file that I'm cleaning, and the source uses '--' to indicate missing data.\nI ultimately need to have this data field be either an integer or float. But I am not sure how to remove the string.\nI specified the types in a type_dict statement before importing the csv file.\n6 of my 8 variables correctly came in as an integer or float. Of course, the two that are still objects are the ones I need to fix.\nI've tried using the df = df.var.str.replace('--', '')\nI've tried using the df.var.fillna(df.var.mode().values[0], inplace=True)\n(and I wonder if I need to just change the values '0' to '--')\nMy presumption is that if I can empty those cells in some fashion, I can define the variable as an int\/float.\nI'm sure I'm missing something really simple, have walked away and come back, but am just not figuring it out.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":43,"Q_Id":74578696,"Users Score":0,"Answer":"OK, we figured out two options to make this work:\nsolution 1:\ndf = df.replace(r'^--$', np.nan, regex=True)\nsolution 2 (a simplified version of #1):\ndf = df.replace(r'--', np.nan)\nBoth gave the expected output of empty cells when I exported the csv into a spreadsheet. And then when I reimported that intermediate file, I had floats instead of strings as expected.","Q_Score":1,"Tags":"python,pandas,replace,nan,dtype","A_Id":74579480,"CreationDate":"2022-11-25T23:47:00.000","Title":"Replacing a string with NaN or 0","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need to create a row if Current End date compared to Start date from next row are discontinuous by each Employee Number. 
The dataframe looks like this:\n\n\n\n\nEmployee Number\nStart Date\nEnd Date\n\n\n\n\n001\n1999-11-29\n2000-03-12\n\n\n001\n2000-03-13\n2001-06-30\n\n\n001\n2001-07-01\n2002-01-01\n\n\n002\n2000-09-18\n2000-10-05\n\n\n002\n2000-10-06\n2001-06-30\n\n\n002\n2004-05-01\n2005-12-31\n\n\n002\n2008-01-01\n2008-11-25\n\n\n\n\nA Continuous flag column needs to identify these discontinuous values:\n\n\n\n\nEmployee Number\nStart Date\nEnd Date\nContinuous Flag\nExplanation\n\n\n\n\n001\n1999-11-29\n2000-03-12\nY\n2000-03-13 is 1d after 2000-03-12\n\n\n001\n2000-03-13\n2001-06-30\nY\n2001-07-01 is 1d after 2001-06-30\n\n\n001\n2001-07-01\n2002-01-01\nNaN\nmissing 2023-01-01 End Date row\n\n\n002\n2000-09-18\n2000-10-05\nY\n2000-10-06 is 1d after 2000-10-05\n\n\n002\n2000-10-06\n2001-06-30\nN\n2004-05-01 is not 1d after 2001-06-30\n\n\n002\n2004-05-01\n2005-12-31\nN\n2008-01-01 is not 1d after 2005-12-31\n\n\n002\n2008-01-01\n2008-11-25\nNaN\nmissing 2023-01-01 End Date row\n\n\n\n\nThen, for those rows that are 'N', a row needs to be inserted with the discontinuous dates to make them continuous in between rows. If there is no next row, use '2023-01-01' by default. Here is the expected output:\n\n\n\n\nEmployee Number\nStart Date\nEnd Date\nContinuous Flag\n\n\n\n\n001\n1999-11-29\n2000-03-12\nY\n\n\n001\n2000-03-13\n2001-06-30\nY\n\n\n001\n2001-07-01\n2002-01-01\nY\n\n\n001\n2002-01-02\n2023-01-01\nNaN\n\n\n002\n2000-09-18\n2000-10-05\nY\n\n\n002\n2000-10-06\n2001-06-30\nY\n\n\n002\n2001-07-01\n2004-04-30\nY\n\n\n002\n2004-05-01\n2005-12-31\nY\n\n\n002\n2006-01-01\n2007-12-31\nY\n\n\n002\n2008-01-01\n2008-11-25\nY\n\n\n002\n2008-11-26\n2023-01-01\nNaN\n\n\n\n\nI tried idx for loop without success","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":38,"Q_Id":74584590,"Users Score":1,"Answer":"Plan A: (Filling in gaps)\n\nCreate a table of all possible dates (in the desired range). (This is easy to do on the fly in MariaDB by using a seq_..., but messier in MySQL.)\nSELECT ... FROM that-table-of-dates LEFT JOIN your-table ON ...\n\nAs for filling in the gaps with values before (or after) the given hole. I don't understand the goals.\nPlan B: (Simply discovering gaps)\nDo a \"self-join\" of the table with itself. For this you must have consecutive ids. Since you don't have such, I am not sure what to do.\nThen check whether the (end_date + INTERVAL 1 DAY) of one row matches the start_date of the 'next' row.\nPlan C: (requires MySQL 8.0 or MariaDB 10.2)\nUse LAG() (or `LEAD() windowing functions to compare a value in one row to the previous (or next) row.\nThis may be the simplest way to set the \"continuous flag\".\nBe sure to check for discontinuity in EmployeeId as well as INTERVAL 1 DAY as mentioned above.","Q_Score":0,"Tags":"python,pandas,dataframe,date,indexing","A_Id":74584968,"CreationDate":"2022-11-26T18:17:00.000","Title":"Create row from previous and next rows if date are discontinuous","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am currently using pandas to read an \"output.csv\" file by specifying the filepath as :\ndf = pd.read_csv(r'C:\\users\\user1\\desktop\\project\\output.csv')\nWhile this works perfectly fine on my local machine, is there a way I can code this so anyone who runs the script can use it? 
I want to be able to hand this script to coworkers who have no knowledge of Python and have them be able to run it without manually changing the username in the path.\nI have tried using os.path to no avail:\ndf = pd.read_csv(os.path.dirname('output.csv'))\nSOLUTION: df = pd.read_csv('output.csv'). Simple, embarrassing, and a wonderful building block to learn from. Thank you all.","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":33,"Q_Id":74587277,"Users Score":0,"Answer":"If you're shipping out the output.csv in the same directory as the python script, you should be able to reference it directly pd.read_csv('output.csv').\nIf your need to get the full path + filename for the file, you should use os.path.abspath(__file__).\nFinally, if your output.csv is in a static location in all your coworkers computers and you need to get the username, you can use os.getlogin() and add it to the path.\nSo there's a bunch of solutions here depending on your exact problem.","Q_Score":0,"Tags":"python,pandas","A_Id":74587354,"CreationDate":"2022-11-27T03:27:00.000","Title":"How can I specify a file without listing the entire file path?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I noticed that when I implement\/serve some opencv drawing functions within flask, they are slower compared to when running just the opencv stuff and run imshow. I am thinking this might be due to the fact that when the application (flask) is started it serves as the parent thread which then creates child threads for each request\/context and thus creates more cpu overhead for executing cv2 calls.\nIs it possible to serve flask app separately from the actual services the API is serving like cv2.putText() etc? If so, what is the better design for optimized cv2 calls?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":28,"Q_Id":74610383,"Users Score":0,"Answer":"The solution we were able to come up is to make the opencv process pinned to one CPU. It improved the operation drastically. 
It might be that the memory pipeline is now being utilized by a single core, which avoids forking it to the other core.\npsutil.cpu_affinity()","Q_Score":0,"Tags":"python,multithreading,opencv,multiprocessing","A_Id":74884646,"CreationDate":"2022-11-29T07:20:00.000","Title":"Opencv Drawing functions are slow when used within Flask","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have 2 pandas dataframes and trying to compare if 2 of their columns are equal then update the rest of the dataframe if not append the new data so concat or something like that .\ni tried this amongst other stuff\n\nif demand_history[['Pyramid Key','FCST_YR_PRD']] == azlog_3[['Pyramid Key','FCST_YR_PRD']]:\ndemand_history['DMD_ACTL_QTY'] ==azlog_3['DMD_ACTL_QTY']","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":25,"Q_Id":74620034,"Users Score":0,"Answer":"demand_hist_sku_date = demand_history['Pyramid Key'] + demand_history['FCST_YR_PRD']\nazlog_3_sku_date = azlog_3['Pyramid Key']+ azlog_3['FCST_YR_PRD']\ndemand_history.loc[demand_hist_sku_date.isin(azlog_3_sku_date), 'DMD_ACTL_QTY' ] = azlog_3['DMD_ACTL_QTY']","Q_Score":0,"Tags":"python,pandas,dataframe,merge","A_Id":74620763,"CreationDate":"2022-11-29T20:39:00.000","Title":"how to compare if 2 columns in pandas dataframe are equal then update the rest of the dataframe","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"thinking about a problem\u2026 should you standardize two predictors that are already on the same scale (say kilograms) but may have different ranges? The model is a KNN\nI think you should because the model will give the predictor eith the higher range more importance in calculating distance","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":17,"Q_Id":74623115,"Users Score":0,"Answer":"It is better to standardize the data even though it is on the same scale. Standardizing would reduce the distance (specifically Euclidean), which would help the weights not vary much from the points initial to them. Having hugely separated distances would rather involve more calculation. Also, the distance calculation done in KNN requires feature values to be scaled, so scaling is always preferred.","Q_Score":0,"Tags":"python,machine-learning,knn,standardization","A_Id":74624060,"CreationDate":"2022-11-30T05:05:00.000","Title":"Standardize same-scale variables?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Pandas not working in AWS GLUE 4.0 version:-\nI tried importing pandas in AWS Glue 4.0 but getting following error, pandas is working in AWS Glue 3.0 version but not in 4.0.\nModuleNotFoundError: No module named '_bz2'","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":183,"Q_Id":74623358,"Users Score":0,"Answer":"I have also encountered this issue and have contacted AWS support about it. It appears that it is an AWS issue and is happening to anyone who uses it. 
They are currently working on a fix.","Q_Score":0,"Tags":"python,pandas,amazon-web-services,aws-glue","A_Id":74731358,"CreationDate":"2022-11-30T05:46:00.000","Title":"Pandas not working in AWS GLUE 4.0 version","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I try to run a script which starts like this:\nimport os, sys, subprocess, argparse, random, transposer, numpy, csv, scipy, gzip\nBUT, I got this error:\nImportError: cannot import name 'transpose'\nI work on slurm cluster. Should I install transposer? I work with conda as we don't have permission to install on cluster. But, there is no conda env for that.\nWould you please help on this? Thanks","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":49,"Q_Id":74634768,"Users Score":0,"Answer":"pip install transposer ? worked for me.","Q_Score":0,"Tags":"python,conda,transpose","A_Id":74634804,"CreationDate":"2022-11-30T22:12:00.000","Title":"ImportError: cannot import name 'transpose'","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"As an example,\nWe have two algorithms that utilize the same dataset and the same train and test data:\n1 - uses k-NN and returns the accuracy;\n2 -applies preprocessing before k-NN and adds a few more things, before returning the accuracy.\nAlthough the preprocessing \"is a part of\" algorithm number 2, I've been told that we cannot compare these two methods because the experiment's conditions have changed as a result of the preprocessing.\nGiven that the preprocessing is only exclusive to algorithm no. 2, I believe that the circumstances have not been altered.\nWhich statement is the correct one?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":30,"Q_Id":74656128,"Users Score":1,"Answer":"It depends what you are comparing.\n\nif you compare the two methods \"with preprocessing allowed\", then you don't include the preprocessing in the experiment; and in principle you should test several (identical) queries;\n\nif you compare \"with no preprocessing allowed\", then include everything in the measurement.","Q_Score":0,"Tags":"python,algorithm,machine-learning,comparison,theory","A_Id":74656443,"CreationDate":"2022-12-02T12:54:00.000","Title":"Does the preprocessing of one algorithm change the conditions of the experiment?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"For some odd reason when I do \u201cimport sklearn\u201d it says ModuleNotFound or something like that. 
Can anyone please help?\nI tried going online and using bash to fix it but still didn\u2019t work.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":150,"Q_Id":74657703,"Users Score":0,"Answer":"open a shell in the workspace with ctrl-shift-s\non mac command-shift-s command prompt and run this command, it will install scikit\n\npip install scikit-learn","Q_Score":0,"Tags":"python,replit,replit-database","A_Id":74657771,"CreationDate":"2022-12-02T14:58:00.000","Title":"Import sklearn doesn\u2019t exist on my replit","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have two series of stock prices (containing date, ticker, open, high, low, close) and I'd like to know how to combine them to create a dataframe just like the way Yahoo!Finance does. Is it possible?\n\"Join and merge\" don't seem to work","AnswerCount":1,"Available Count":1,"Score":0.537049567,"is_accepted":false,"ViewCount":29,"Q_Id":74659398,"Users Score":3,"Answer":"Use pd.concat([sr1, sr2], axis=1) if neither one of join and merge work.","Q_Score":0,"Tags":"python,pandas,dataframe,yahoo-finance,yahoo-api","A_Id":74659488,"CreationDate":"2022-12-02T17:27:00.000","Title":"Combining series to create a dataframe","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have trained and saved an xgboost regressor model in Jupyter Notebook (Google Colab) and tried to load it in my local machine without success. I have tried to save and load the model in multiple formats: .pkl using pickle library, .sav using joblib library or .json.\nWhen I load the model in VS Code, I get the following error:\n\nraise XGBoostError(py_str(_LIB.XGBGetLastError()))\nxgboost.core.XGBoostError: [10:56:21] ..\/src\/c_api\/c_api.cc:846: Check\nfailed: str[0] == '{' (\n\nWhat is the problem here?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":133,"Q_Id":74662799,"Users Score":1,"Answer":"The issue was a mismatch between the two versions of xgboost when saving the model in Google Colab (xgboost version 0.9) and loading the model in my local Python environment (xgboost version 1.5.1).\nI managed to solve the problem by upgrading my xgboost package to the latest version (xgboost version 1.7.1) both on Google Colab and on my local Python environment. 
I resaved the model and re-loaded it using the newly saved file.\nNow the loading works well without any errors.\nI will leave my post here on Stackoverflow just in case it may be useful for someone else.","Q_Score":0,"Tags":"python,visual-studio-code,data-science,xgboost","A_Id":74663263,"CreationDate":"2022-12-03T00:02:00.000","Title":"XGBoost Error when saving and loading xgboost model using Pickle, JSON and JobLib","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Help me, I am try to convert CSLA method from R to Python from this paper \"DOI 10.1186\/s12953-016-0107-\" and R code available at \"https:\/\/github.com\/tystan\/clsa\".","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":21,"Q_Id":74685282,"Users Score":0,"Answer":"Thank you, I have solved my problem.","Q_Score":0,"Tags":"python,algorithm,baseline","A_Id":74686577,"CreationDate":"2022-12-05T08:30:00.000","Title":"How to use the continuous line segment algorithm (CSLA) method to substract baseline in Python?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have two lists of coordinates:\n\n[37.773972, -122.431297]\n\n\n[37.773972, -122.45]\n\nI want to create a list of tuples like so:\n\n[(37.773972, -122.431297), (37.773972, -122.45)]\n\nI've tried using zip but that merges the two.\nthanks","AnswerCount":4,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":55,"Q_Id":74689329,"Users Score":0,"Answer":"list1 = [37.773972, -122.431297]\nlist2 = [37.773972, -122.45]\ntup=[tuple(list1),tuple(list2)]\nprint(tup)","Q_Score":0,"Tags":"python","A_Id":74690145,"CreationDate":"2022-12-05T13:55:00.000","Title":"Create list of tuples from two lists","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Ive a question about python plotting. 
How to make a frequency plot for example how to see some behaviours\u2019 frequency ( coded as True or False) based on time of the dat (coded as AM or PM)\nMy ideal plot will be like this but indicates frequency of some behaviour that varies by time of the dat","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":34,"Q_Id":74692243,"Users Score":0,"Answer":"Here is the code:\nimport pandas as pd\nfrom matplotlib import pyplot as plt\nplt.rcParams[\"figure.figsize\"] = [7.50, 3.50]\nplt.rcParams[\"figure.autolayout\"] = True\nfig, ax = plt.subplots()\ndf = pd.DataFrame({'numbers': [2, 4, 1, 4, 3, 2, 1, 3, 2, 4]})\ndf['numbers'].value_counts().plot(ax=ax, kind='bar', xlabel='numbers', ylabel='frequency')\nplt.show()","Q_Score":0,"Tags":"python","A_Id":74692256,"CreationDate":"2022-12-05T17:41:00.000","Title":"How to plot a frequency plot use python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"In a general case, is there a way in VSC that each time we launch the debugger we pre-load the session with objects coming from a file? For example, dataframes? Because each time each launches it has no objects at all and needs the running code to start creating the objects.\nSo the challenge is when we already have some files on hard-drive with heavy objects, but when we launch the debugg I can't tell it to just pre-load those objects stored in those files, say dataframes or lists or connection objects or whatever.\nTo be more specific. If there's a python code that has two sections of code:\n\nSection 1: Code we know works perfectly and makes heavy calculations to create some\nobjects\nSection 2: Code that takes those objects and performs operations. We want to debug this section. We also know no code line in this section interacts in any way with the code or stacks of Section1. It simply takes the objects created from section 1\n\nExample: Section 1 queries an enormous table and puts it as a dataframe, sorts it, filters, etc... Then Section 2 just needs that dataframe and makes some statistics.\nIs there a way that we just launch from section 2 and we load that dataframe we have stored in a csv? Or do we need to launch always from section 1, run the connection, get the dataframe from a query (which takes a lot of time) and then finally arrive to section 2 to start debugging?\nNote. I could just make a .py file having section 2 code, and hard-coding on it at the begging to just read the csv I have. But is there a fancier way to do this without having to make another .py file for debugging and manually writing code to it, and then debugging that .py file?\nThe question is: Launch VSC python debugger telling it to load python objects from files in folders, rather than launching the session with no objects. 
Waiting for the code to create them","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":41,"Q_Id":74708925,"Users Score":0,"Answer":"There is no way to convert csv files to python objects before debugging since all Python objects are in-memory.\nIf you don't want to set them in your code, I would suggest using an environment variable for it, and set it by adding \"env\" in your launch.json.","Q_Score":0,"Tags":"python,debugging","A_Id":74781889,"CreationDate":"2022-12-06T20:53:00.000","Title":"VSC pre-load session with heavy objects from file instead of creating them on running code","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've got multiple excels and I need a specific value but in each excel, the cell with the value changes position slightly. However, this value is always preceded by a generic description of it which remains constant in all excels.\nI was wondering if there was a way to ask Python to grab the value to the right of the element containing the string \"xxx\".","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":51,"Q_Id":74716983,"Users Score":1,"Answer":"Try iterating over the excel files (I guess you loaded each as a separate pandas object?)\nsomething like for df in [dataframe1, dataframe2...dataframeN].\nThen you could pick the column you need (if the column stays constant), e.g. - df['columnX'] and find which index it has:\ndf.index[df['columnX']==\"xxx\"]. Maybe it will make sense to add .tolist() at the end, so that if \"xxx\" is a value that repeats more than once, you get all occurrences in a list.\nThe last step would be to take the index+1 to get the value you want.\nHope it was helpful.\nIn general I would highly suggest being more specific in your questions and providing code \/ examples.","Q_Score":0,"Tags":"python,pandas,web-scraping","A_Id":74717113,"CreationDate":"2022-12-07T12:49:00.000","Title":"Is there a Python pandas function for retrieving a specific value of a dataframe based on its content?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have pyarrow table which have column order ['A', 'B', 'C', 'D'] I want to change the order of this pyarrow table to ['B', 'D', 'C', 'A'] can we reorder pyarrows table like pandas dataframe ?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":93,"Q_Id":74718300,"Users Score":0,"Answer":"cols = ['B', 'A']\ndf = df[cols]\n\n\n\n\nB\nA\n\n\n\n\n4\n1\n\n\n5\n2","Q_Score":0,"Tags":"python,pandas,dataframe,pyarrow","A_Id":76297010,"CreationDate":"2022-12-07T14:34:00.000","Title":"how to reorder columns in pyarrow table","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"If there's a 8G-RAM GPU, and has loaded a model that takes all the 8G RAM, is it possible to run multiple model prediction\/inference in parallel?\nor you can only run a prediction at a same time period","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":30,"Q_Id":74741987,"Users 
Score":0,"Answer":"If your single model uses all 8gb of RAM, then it would not be possible to run another model in parallel using the same resources. You would have to allocate more memory or schedule the second model to run afterwards.","Q_Score":0,"Tags":"python,deep-learning,quantization","A_Id":74742025,"CreationDate":"2022-12-09T10:43:00.000","Title":"Is there a way for a single GPU and model to run deep learning model prediction\/inference in parallel","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am new to python and am struggling with this code. I have a csv file and am trying to create a function. The file, personal_info.csv , has a few columns with one labeled house_area and one named full_name. I am trying to create a code that will find the house with the largest area and return the name of person who owns it.\nI am also not allowed to import anything besides the csv file, so I cannot use pandas.\nHere's what some of the data looks like:\n\n\n\n\nhouse_area\nfull_name\n\n\n\n\n40.132\nJohn Smith\n\n\n85.832\nAnna Lee\n\n\n38.427\nEmma Jones\n\n\n\n\nSo in this small sample I'm trying to find the house with the largest area (85.832) and print the person's name, Anna Lee. Except the actual file has a lot more rows","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":33,"Q_Id":74748676,"Users Score":0,"Answer":"One simple way you can do this is by creating a variable called to track the largest house area. Let's call this largest_area and set it to the value 0 (assuming all house areas in your CSV are greater than 0).\nThen using, the csv library, go through each row in the CSV file, grab the house_area, and compare it to the largest_area variable you created. 
If it is greater, update largest_area, otherwise ignore and continue through the CSV file.\nAfter you have finished going through the CSV file, the greatest area should be in your largest_area variable.","Q_Score":0,"Tags":"python,csv,multiple-columns,area","A_Id":74748796,"CreationDate":"2022-12-09T22:04:00.000","Title":"Finding a house with the largest area and returning who lives there (no pandas)","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am working on some data and i am required to carry out a person-fit statistical analysis in R.\nWhat is the python equivalent module (that can be imported into Jupyter notebook) of person-fit statistical analysis in R?\nI looked up google and saw goodness of fit but this is not the same as person-fit analysis","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":21,"Q_Id":74752397,"Users Score":0,"Answer":"Chuks, unfortunately as far as I know there isn't any direct equivalent in Python to person-fit stat analysis used in R.","Q_Score":0,"Tags":"python,r,jupyter-notebook","A_Id":74752585,"CreationDate":"2022-12-10T11:14:00.000","Title":"Python equivalent of Person-fit Statistics in R","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is it possible to use .resample() to take the last observation in a month of a weekly time series to create a monthly time series from the weekly time series? I don't want to sum or average anything, just take the last observation of each month\nThank you.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":17,"Q_Id":74755671,"Users Score":0,"Answer":"Is the 'week' field as week of year, a date or other?\nIf it's a datetime, and you have datetime library imported , use .dt.to_period('M') on your current date column to create a new 'month' column, then get the max date for each month to get the date to sample ( if you only want the LAST date in each month ? )\nLike max(df['MyDateField'])\nSomeone else is posting as I type this, so may have a better answer :)","Q_Score":0,"Tags":"python,pandas-resample","A_Id":74755721,"CreationDate":"2022-12-10T18:42:00.000","Title":"Using .resemple() in python to go from weekly to monthly time series","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"input:\nimport pandas\noutput:\n\nModuleNotFoundError: No module named 'pandas'\n\nI installed the package with the command line - pip3 install pandas,\nmy python version is 3.10.7\nThe source of the installed panda package: c:\\users\\kfirs\\appdata\\local\\programs\\python\\python310\\lib\\site-packages","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":61,"Q_Id":74763471,"Users Score":0,"Answer":"not sure if this will help but in the command prompt type pip --version check to make sure you're on a version that is fluent with pandas, don't know much about pandas but I assume you should try and do the same with pandas. 
My knowledge is limited but try installing on the same drive as python as this could possibly be why things are not working. 3rd sometimes I have issues with windows after installing pip packs so try restarting your pc sometimes my imports right after installing don't work and this usually fixes but only if it's truly installed where it needs to be and the version needed. Hope I could help.","Q_Score":0,"Tags":"python,python-3.x,pandas","A_Id":74766124,"CreationDate":"2022-12-11T18:34:00.000","Title":"I installed the pandas package and I'm still getting the same error: No module named 'pandas'","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a 2x2 matrix of distances from a depth sensor.\nThe matrix is cropped so only the points we are interested in is in the frame(All the points in the cropped image contains the object).\nMy question is how can we determine if this object is flat or not?\nThe depth image is acquired from Realsense d435. I read the depth image and then multiply it by depth_scale.\nThe object is recognized using AI for the rgb image that is aligned with the depth image.\nAnd I have 4 points on the object. So, all the distances in that rectangle contains the distance of the object from the sensor.\nMy first idea was standard deviation of all the points. But then this falls apart if the image is taken from an angle. (since the standard deviation won't be 0)\nFrom an angle the distance of a flat object is changing uniformly on the y axis. Maybe somehow, we can use this information?\nThe 2x2 matrix is a numpy array in python. Maybe there are some libraries which do this already.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":88,"Q_Id":74764056,"Users Score":0,"Answer":"You can define a surface by choosing three of the four 3D points.\nEvaluate the distance from the remaining point to the surface.\nHow to choose the three points is... it may be good to choose the pattern that maximizes the area of the triangle.","Q_Score":0,"Tags":"python,numpy,opencv,computer-vision,realsense","A_Id":74766928,"CreationDate":"2022-12-11T20:00:00.000","Title":"How to determine if an object is flat or not from depth image?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to better understand how two groups of documents relate to one another through topic modeling. I have performed similarity scoring on them and would like to try and peer deeper into how these documents relate through topic modeling. Rather than just observing the most relevant topics for each document using LDA, is there a method where I could have a model trained on both documents combined as a single corpus and visualize what topics have the most relevance to both documents combined?\nI tried just running LDA on a combined corpus but it returned topics that were clearly divided in relevance between the two different underlying documents of origin. Instead, I want to see what smaller topics the two documents overlap with the most.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":29,"Q_Id":74764918,"Users Score":0,"Answer":"There's no one method for doing this in Gensim. 
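For the depth-sensor flatness question above, a small sketch of the plane-fit idea in that answer: build a plane from three of the four 3-D points and measure how far the fourth point lies from it. The example coordinates and the 5 mm threshold are assumptions to tune for your sensor noise.

```python
import numpy as np

def flatness(p1, p2, p3, p4):
    """Distance of p4 from the plane spanned by p1, p2, p3 (3-D points in metres)."""
    p1, p2, p3, p4 = (np.asarray(p, dtype=float) for p in (p1, p2, p3, p4))
    normal = np.cross(p2 - p1, p3 - p1)                 # plane normal
    return abs(np.dot(normal, p4 - p1)) / np.linalg.norm(normal)

# hypothetical corner points of the object; a small distance suggests a flat surface
d = flatness([0, 0, 1.00], [0.1, 0, 1.01], [0, 0.1, 1.00], [0.1, 0.1, 1.01])
print(d < 0.005)   # threshold is an assumption
```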
But once you've trained a topic-model (such as LDA) on the combined corpus of all documents, you could do things like:\n\ncompare any two documents, by comparing their topics\ntally top-N topics for all documents in one of the original corpuses, and then top-N topics for all documents the 2nd original corpus, then contrast those counts\ntreat the original two corpuses as two giant composite documents, calculate the topics of those two synthetic documents, and compare their topics","Q_Score":0,"Tags":"python,nlp,gensim,topic-modeling","A_Id":74774972,"CreationDate":"2022-12-11T22:06:00.000","Title":"Is there a method in Gensim to find the most relevant topics between two corpuses?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am working on a project topic modeling tweets using the tweetopic Python library. I want to understand what the parameter \"n_components\" for the tweetopic.dmm.DMM class is. I see in the documentation it's described as the \"Number of mixture components in the model.\" I'm new to topic modeling, so am not quite sure what that means.\nThank you!\nHere is my code:\ntweetopic.dmm.DMM(n_components=10, n_iterations=100, alpha: float = 0.1, beta: float = 0.1)","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":15,"Q_Id":74768359,"Users Score":0,"Answer":"Tweetopic is like any other sklearn-compatible topic model. In all of sklearn's topic models you specify the number of topics with n_components.\nI might change the documentation so that this gets clearer. It says mixture components, because DMM is a mixture-model, meaning that it assumes that all texts come from a mixture of distributions, and each distribution (component) can be thought of as a topic.\nI hope I could be of help :)","Q_Score":0,"Tags":"python,topic-modeling,tweets","A_Id":75137195,"CreationDate":"2022-12-12T08:22:00.000","Title":"What is the 'n_components' parameter for tweetopic.dmm.DMM class?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to optimize four input parameters on a numerical model. I have an input file where I have these parameters. I run an application in Python using subprocess and obtain the results on csv files. I run these simulations around 300 times to have some Monte Carlo simulations, obtaining a range of possible values to compare with real data (20 points that follow a Weibull distribution) I have.\nWhich optimization algorithm can I use with the goodness of fit from the quartiles between numerical results and real data (this is the OF) to get optimal initial parameters?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":44,"Q_Id":74783890,"Users Score":0,"Answer":"Regression models are fit on training data using linear regression and local search optimization algorithms.\nModels like linear regression and logistic regression are trained by least squares optimization, and this is the most efficient approach to finding coefficients that minimize error for these models.\nNevertheless, it is possible to use alternate optimization algorithms to fit a regression model to a training dataset. 
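A hedged sketch of the last option in the Gensim answer above: train one LDA model on the combined corpus, treat each original corpus as a single composite document, and compare their topic mixtures. The token lists are placeholders standing in for your own preprocessing.

```python
from gensim.corpora import Dictionary
from gensim.models import LdaModel
from gensim.matutils import cossim

corpus_a = [["oil", "price", "market"], ["market", "trade", "oil"]]   # placeholder tokens
corpus_b = [["price", "stock", "trade"], ["stock", "market", "index"]]

dictionary = Dictionary(corpus_a + corpus_b)
bows = [dictionary.doc2bow(doc) for doc in corpus_a + corpus_b]
lda = LdaModel(bows, id2word=dictionary, num_topics=4, random_state=0)

# topic mixture of each corpus treated as one giant document
mix_a = lda.get_document_topics(dictionary.doc2bow(sum(corpus_a, [])), minimum_probability=0.0)
mix_b = lda.get_document_topics(dictionary.doc2bow(sum(corpus_b, [])), minimum_probability=0.0)
print(cossim(mix_a, mix_b))   # 1.0 = identical topic profile, 0.0 = no overlap
```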
This can be a useful exercise to learn more about how regression functions and the central nature of optimization in applied machine learning. It may also be required for regression with data that does not meet the requirements of a least squares optimization procedure.","Q_Score":1,"Tags":"python,optimization,statistics","A_Id":74785585,"CreationDate":"2022-12-13T11:09:00.000","Title":"Optimizing simulation input parameters to fit statistical data in Python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am looking for guidance to generate multiple classifier training data from document. e.g. if particular document has three sections with each 10 pages in each sections. (total 30 pages)\nI am looking for open source library, where I can pass on document (explicitly specifying section 1, section 2 and section 3 pages) then it can give me list of important words to be used as training data to identify \"section 1\" vs \"section 2\" vs \"section 3\". (multiple classification)","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":37,"Q_Id":74789812,"Users Score":0,"Answer":"I had this quite a long time ago and I am not sure if it will help you at all but a book called \"Deep Learning with Python\" by Fran\u00e7ois Chollet 2018 could give you some clues in terms of how to generate such data samples from your document. However, the drawback might be that you would have to prepare such a document in a certain way before you can generate data samples. My comment is based on the fact that I have read something about it a long time ago so I could misremember it. Good luck!","Q_Score":0,"Tags":"python,nlp,classification","A_Id":74789944,"CreationDate":"2022-12-13T19:09:00.000","Title":"generating multi classifier training data from document","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"One of my friend said I should study about GUI for turning on the webcam in that system, Is it correct? or any other solution.\nI made image detection system using opencv in python but its now switching on the camera, can anyone please tell what can be the issue","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":29,"Q_Id":74811020,"Users Score":0,"Answer":"Based on your description, an image input source is essential for an image detection system, and there are many methods to open an image as a source, like cv2.imread(), and camera image source is also acceptable in OpenCV, so the detection system turned on the camera is quite reasonable.\nThe suggestion about WebCam is good if you want to run code in the local but get photos in the remote, else it is unnecessary to use WebCam because OpenCV can use your local camera. 
A GUI is a choice but not a must.\nIf you don't want it to open your camera, but just read a picture in your local disk, then you can remove those codes in the system controlling the camera as an image input source and add some codes to make a change for the source from camera to your Disk.","Q_Score":0,"Tags":"web-config,python-3.6,runtime.exec","A_Id":74811273,"CreationDate":"2022-12-15T11:45:00.000","Title":"I made image detection system using opencv in python but its now switching on the camera, can anyone please tell what can be the issue?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've got a Keras subclassed model in Tensorflow which stays at a constant GPU memory usage throughout an epoch, then when starting a new epoch it appears to be allocating a whole new set of memory for that epoch.\nIs this normal, expected behaviour?\nCurrently, I'm getting OOM on only my third epoch, and I'm not sure what sort of data needs to be retained after the previous epoch other than the loss. If it is expected behaviour, what large quantity data exactly does need to be retained (e.g. does Tensorflow need to store historic weights for some reason?)\nI've not included any code as I'm asking this as more of a general question about Tensorflow and CNN model behaviour.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":61,"Q_Id":74827510,"Users Score":1,"Answer":"My instinct is that you might see increases in the first two epochs but you should generally have steady state after that.\nOff-handedly, you might want to compare weights between epochs and so get 2N memory that way.\nMaybe there's an out of control snapshot mechanism?","Q_Score":0,"Tags":"python,tensorflow,keras","A_Id":74827563,"CreationDate":"2022-12-16T16:54:00.000","Title":"Should GPU memory be increasing substantially after every epoch in Tensorflow?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"When I run this code on google colab.\nfrom google.cloud import aiplatform\nThe following error occurred\nImportError: cannot import name 'WKBWriter' from 'shapely.geos' (\/usr\/local\/lib\/python3.8\/dist-packages\/shapely\/geos.py)\nDoes anyone know how to solve this problem?\nI was working fine on 2022\/12\/16, but today it is not working.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":970,"Q_Id":74831594,"Users Score":0,"Answer":"Running into a similiar issue. So far, I am able to tell that google.cloud actions will not run if I have shapley files installed. 
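To make the OpenCV answer above concrete, here is a short sketch contrasting the two image sources it mentions: reading a still image from disk (no camera is ever opened) versus grabbing a frame from the local camera. The file path is a placeholder.

```python
import cv2

# Option 1: read a still image from disk -- the camera is never touched
frame = cv2.imread("test.jpg")        # placeholder path

# Option 2: grab a single frame from the local camera (device 0)
cap = cv2.VideoCapture(0)
ok, frame = cap.read()
cap.release()                         # always release the device when done

# either way, `frame` can then be fed into the detection code
```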
When I delete the shapley files on my computer I am able to run google.cloud methods","Q_Score":3,"Tags":"python,google-cloud-platform,google-ai-platform","A_Id":74844335,"CreationDate":"2022-12-17T03:27:00.000","Title":"cannot import name 'WKBWriter' from 'shapely.geos' when import google cloud ai platform","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a numpy array:\na = np.array([-1,2,3,-1,5,-2,2,9])\nI want to only keep values in the array which occurs more than 2 times, so the result should be:\na = np.array([-1,2,-1,2])\nIs there a way to do this only using numpy?\nI have a solution using a dictionary and dictionary filtering, but this is kind of slow, and I was wondering if there was a faster solution only using numpy.\nThanks !","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":64,"Q_Id":74840692,"Users Score":0,"Answer":"thanks a lot!\nAll answers solved the problem, but the solution from Matvey_coder3 was the fastest.\nKR","Q_Score":3,"Tags":"python,arrays,numpy","A_Id":74841256,"CreationDate":"2022-12-18T10:39:00.000","Title":"Remove first occurence of elements in a numpy array","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to install TensorFlow in Python. I am getting the following error message, I tried uninstalling NumPy and re-installing NumPy but still getting the same error message. Can someone please help me to resolve this issue?\nAttributeError: module 'numpy' has no attribute 'typeDict'","AnswerCount":2,"Available Count":4,"Score":0.0996679946,"is_accepted":false,"ViewCount":12831,"Q_Id":74852225,"Users Score":1,"Answer":"Spacy still hasnt upgraded to latest numpy versions. I degraded numpy to 1.21 and that worked.","Q_Score":13,"Tags":"python,numpy,tensorflow,tensor","A_Id":76297230,"CreationDate":"2022-12-19T14:59:00.000","Title":"AttributeError: module 'numpy' has no attribute 'typeDict'","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to install TensorFlow in Python. I am getting the following error message, I tried uninstalling NumPy and re-installing NumPy but still getting the same error message. Can someone please help me to resolve this issue?\nAttributeError: module 'numpy' has no attribute 'typeDict'","AnswerCount":2,"Available Count":4,"Score":0.0,"is_accepted":false,"ViewCount":12831,"Q_Id":74852225,"Users Score":0,"Answer":"I was able to solve this by upgrading the scipy package to 1.10.","Q_Score":13,"Tags":"python,numpy,tensorflow,tensor","A_Id":76241493,"CreationDate":"2022-12-19T14:59:00.000","Title":"AttributeError: module 'numpy' has no attribute 'typeDict'","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to install TensorFlow in Python. 
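For the "keep values that occur at least twice" NumPy question above, a pure-NumPy sketch using np.unique and np.isin, which avoids the dictionary-based loop the asker found slow. The threshold of two matches the example in that question.

```python
import numpy as np

a = np.array([-1, 2, 3, -1, 5, -2, 2, 9])
values, counts = np.unique(a, return_counts=True)
repeated = values[counts >= 2]          # values appearing at least twice, as in the example
result = a[np.isin(a, repeated)]        # keeps original order and duplicates
print(result)                           # [-1  2 -1  2]
```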
I am getting the following error message, I tried uninstalling NumPy and re-installing NumPy but still getting the same error message. Can someone please help me to resolve this issue?\nAttributeError: module 'numpy' has no attribute 'typeDict'","AnswerCount":2,"Available Count":4,"Score":0.0,"is_accepted":false,"ViewCount":12831,"Q_Id":74852225,"Users Score":0,"Answer":"You have to degrade your Numpy and pandas version, everything depends on the version that tensorflow supports.\nNo other solution for now","Q_Score":13,"Tags":"python,numpy,tensorflow,tensor","A_Id":76169952,"CreationDate":"2022-12-19T14:59:00.000","Title":"AttributeError: module 'numpy' has no attribute 'typeDict'","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to install TensorFlow in Python. I am getting the following error message, I tried uninstalling NumPy and re-installing NumPy but still getting the same error message. Can someone please help me to resolve this issue?\nAttributeError: module 'numpy' has no attribute 'typeDict'","AnswerCount":2,"Available Count":4,"Score":0.0,"is_accepted":false,"ViewCount":12831,"Q_Id":74852225,"Users Score":0,"Answer":"I had the same issue. I restarted the kernel and the issue was gone. Try restarting your kernel if you have the correct version of tensorflow and numpy.","Q_Score":13,"Tags":"python,numpy,tensorflow,tensor","A_Id":75739427,"CreationDate":"2022-12-19T14:59:00.000","Title":"AttributeError: module 'numpy' has no attribute 'typeDict'","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have an excel file that I have imported into Python using pandas and it has two columns, purchase price and sales price. They are both number values. I want to use python to automatically do the math for me to find the difference between the two, in this case I want it to be Sales Price minus Purchase Price. Is it possible to write a script for this? Thanks in advance for any help.","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":40,"Q_Id":74854264,"Users Score":0,"Answer":"import pandas as pd\nRead the Excel file into a pandas DataFrame\ndf = pd.read_excel('file.xlsx')\nFind the difference between the two columns\ndf['difference'] = df['column1'] - df['column2']\nPrint the resulting DataFrame\nprint(df)","Q_Score":0,"Tags":"python,excel","A_Id":74854346,"CreationDate":"2022-12-19T17:59:00.000","Title":"Using Python to find the difference between two columns of numbers from an excel file","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"For a university project I\u2019m trying to see the relation oil production\/consumption and crude oil price have on certain oil stocks, and I\u2019m a bit confused about how to sort this data.\nI basically have 4 datasets-\n-Oil production\n-Oil consumption\n-Crude oil price\n-Historical price of certain oil company stock\nIf I am trying to find a way these 4 tables relate, what is the recommended way of organizing the data? 
Should I manually combine all this data to a single Excel sheet (seems like the most straight-forward way) or is there a more efficient way to go about this.\nI am brand new to PyTorch and data, so I apologise if this is a very basic question. Also, the data can basically get infinitely larger, by adding data from additional countries, other stock indexes, etc. So is there a way I can organize the data so it\u2019s easy to add additional related data?\nFinally, I have the month-to-month values for certain data (eg: oil production), and day-to-day values for other data (eg: oil price). What is the best way I can adjust the data to make up for this discrepancy?\nThanks in advance!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":34,"Q_Id":74860293,"Users Score":0,"Answer":"You can use pandas.DataFrame to create 4 dataframes for each dataset, then proceed with combining them in one dataframe by using merge","Q_Score":1,"Tags":"python,database,machine-learning,deep-learning,pytorch","A_Id":74860362,"CreationDate":"2022-12-20T08:29:00.000","Title":"How to forecast data based on variables from different datasets?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"My dataset looks as follows:\n\n\n\n\nCountry code\nValue\n\n\n\n\nIRL\n10\n\n\nIRL\n12\n\n\nIRL\n11\n\n\nFRA\n15\n\n\nFRA\n16\n\n\nIND\n9\n\n\nIND\n11\n\n\nUSA\n19\n\n\nUSA\n4\n\n\nHUN\n30\n\n\nHUN\n1\n\n\nHUN\n31\n\n\nHUN\n11\n\n\n\n\nI am attempting to extract rows with specific country codes using the .loc function, however this doesn't seem to work when multiple strings are added into the function.\nMy code looks as follows:\nsubset = df.loc[df[\"Country Code\"] == (\"IRL\", \"FRA\", \"IND\")]\nWhen I do this, my code doesn't return an error, but rather gives me an empty subset, so I am curious, what is wrong with my syntax, and what is my current code actually doing?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":27,"Q_Id":74867895,"Users Score":0,"Answer":"df[\"Country Code\"] == (\"IRL\", \"FRA\", \"IND\") checks for equality between the tuple (\"IRL\", \"FRA\", \"IND\") and each item in the column Country Code - which is why it doesn't error out and would give you nothing (as none of the values in your column is a tuple).\nyou want to use pd.Series.isin i.e. df[\"Country Code\"].isin((\"IRL\", \"FRA\", \"IND\")) instead","Q_Score":0,"Tags":"python,pandas,syntax,syntax-error","A_Id":74867928,"CreationDate":"2022-12-20T19:16:00.000","Title":"Syntax issue in creating subsets based on column contents pandas","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"As the title suggests, how can I obtain the feature importances from a OneVsRestClassifier model?\nI tried using model.feature_importances_ but the error message was\n\n\"OneVsRestClassifier' object has no attribute 'feature_importances_\"\n\nTried searching from the internet but was not able to find any clue.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":25,"Q_Id":74871861,"Users Score":0,"Answer":"OneVsRestClassifier() basically builds as much binary classifiers as there are classes. 
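Expanding on the merge suggestion for the oil production/price question above: one way to reconcile the monthly-versus-daily granularity is to convert every frame's date to a monthly period before merging. All column names and values below are placeholders.

```python
import pandas as pd

# placeholder frames: production is monthly, price is daily
production = pd.DataFrame({"date": pd.to_datetime(["2022-01-01", "2022-02-01"]),
                           "oil_production": [100, 110]})
price = pd.DataFrame({"date": pd.to_datetime(["2022-01-03", "2022-01-17", "2022-02-07"]),
                      "crude_price": [80.0, 82.0, 85.0]})

# collapse the daily series to one value per month (mean is an assumption)
price_monthly = (price.assign(month=price["date"].dt.to_period("M"))
                      .groupby("month", as_index=False)["crude_price"].mean())
production = production.assign(month=production["date"].dt.to_period("M"))

merged = production.merge(price_monthly, on="month", how="inner")
print(merged[["month", "oil_production", "crude_price"]])
```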
Each has its own set of importances (assuming the base classifier supports them), showing the importance of features to distinguish a certain class from all others when generalizing on the train set. Those can be accessed with .estimators_[i].feature_importances_.\nAlternatively, you may study other sorts of feature importances, like sklearn.inspection.permutation_importance, which are universally applicable.","Q_Score":0,"Tags":"python,scikit-learn","A_Id":74872649,"CreationDate":"2022-12-21T05:58:00.000","Title":"Feature Importance from OneVsRestClassifier","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to merge multiple dataframes to a master dataframe based on the columns in the master dataframes. For Example:\nMASTER DF:\n\n\n\n\nPO ID\nSales year\nName\nAcc year\n\n\n\n\n10\n1934\nxyz\n1834\n\n\n11\n1942\nabc\n1842\n\n\n\n\nSLAVE DF:\n\n\n\n\nPO ID\nYr\nAmount\nYear\n\n\n\n\n12\n1935\n365.2\n1839\n\n\n13\n1966\n253.9\n1855\n\n\n\n\nRESULTANT DF:\n\n\n\n\nPO ID\nSales Year\nAcc Year\n\n\n\n\n10\n1934\n1834\n\n\n11\n1942\n1842\n\n\n12\n1935\n1839\n\n\n13\n1966\n1855\n\n\n\n\nNotice how I have manually mapped columns (Sales Year-->Yr and Acc Year-->Year) since I know they are the same quantity, only the column names are different.\nI am trying to write some logic which can map them automatically based on some criteria (be it column names or the data type of that column) so that user does not need to map them manually.\nIf I map them by column name, both the columns have different names (Sales Year, Yr) and (Acc Year, Year). So to which column should the fourth column (Year) in the SLAVE DF be mapped in the MASTER DF?\nAnother way would be to map them based on their column values but again they are the same so cannot do that.\nThe logic should be able to map Yr to Sales Year and map Year to Acc Year automatically.\nAny idea\/logic would be helpful.\nThanks in advance!","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":75,"Q_Id":74872940,"Users Score":0,"Answer":"Generally this is impossible as there is no solid\/consistent factor by which we can map the columns.\nThat being said what one can do is use cosine similarity to calculate how similar one string (in this case the column name) is to other strings in another dataframe.\nSo in your case, we'll get 4 vectors for the first dataframe and 4 for the other one. Now calculate the cosine similarity between the first vector(PO ID) from the first dataframe and first vector from second dataframe (PO ID). This will return 100% as both the strings are same.\nFor each and every column, you'll get 4 confidence scores. Just pick the highest and map them.\nThat way you can get a makeshift logic through which you can map the column although there are loopholes in this logic too. 
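A small runnable illustration of the OneVsRestClassifier answer above: each per-class binary estimator exposes its own feature_importances_ (assuming the base estimator supports them, e.g. a random forest).

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.multiclass import OneVsRestClassifier

X, y = load_iris(return_X_y=True)
ovr = OneVsRestClassifier(RandomForestClassifier(random_state=0)).fit(X, y)

# one binary classifier per class, each with its own importances
for cls, est in zip(ovr.classes_, ovr.estimators_):
    print(cls, est.feature_importances_)
```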
But it is better than nothing as that way the number of columns to be mapped by the user will be less as compared to mapping them all manually.\nCheers!","Q_Score":0,"Tags":"python,pandas,dataframe,merge,data-preprocessing","A_Id":74890716,"CreationDate":"2022-12-21T08:05:00.000","Title":"Automatically Map columns from one dataframe to another using pandas","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I use Pyrosm for parsing *.osm.pbf files.\nOn their websites it says \"When should I use Pyrosm? However, pyrosm is better suited for situations where you want to fetch data for whole city or larger regions (even whole country).\"\nHowever when I try to parse to big .osm.pbf files, I get memory problems.\nIs there a solution for that, e.g. like chunking in pandas?\nOr do I need to split up the file, if yes, how?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":85,"Q_Id":74873546,"Users Score":0,"Answer":"So my solution is to split the files up via osmium-tools.","Q_Score":2,"Tags":"python,parsing,bigdata,openstreetmap","A_Id":75024791,"CreationDate":"2022-12-21T09:01:00.000","Title":"How to handle very big OSMdata with Pyrosm","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"df.hist() gives you histograms for all variables.\ndf.hist(column='age') gives you a histogram for just age.\nWhat if I want histograms for all variables except one? Do I have to do them separately? And what's the difference between using df.hist() and the Matplotlib version anyway?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":25,"Q_Id":74875442,"Users Score":0,"Answer":"Save the column that you want to exclude in a variable:\nexclude = [\"age\"]\nAnd the plot the histogram accordingly:\ndf.loc[:, df.columns.difference(exclude)].hist(figsize=(15, 10));\nThis should solve your problem.","Q_Score":0,"Tags":"python,pandas,matplotlib,histogram","A_Id":74875522,"CreationDate":"2022-12-21T11:32:00.000","Title":"How do I create histograms for all variables except one in Python?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have created an environment and have python and all other packages installed in it. Openssl is also available when I check using conda list. But unfortunately, I realized pytorch is missing when I check the list of installed packages. When I try to download the pytorch I get the following error.\nCondaSSLError: Encountered an SSL error. 
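To illustrate the confidence-score idea in the column-mapping answer above, here is a sketch that scores column-name similarity with character n-gram TF-IDF and cosine similarity. As that answer warns, names alone cannot disambiguate "Yr" versus "Year", so treat the scores as suggestions for a human to confirm.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

master_cols = ["PO ID", "Sales year", "Name", "Acc year"]
slave_cols = ["PO ID", "Yr", "Amount", "Year"]

vec = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 3)).fit(master_cols + slave_cols)
scores = cosine_similarity(vec.transform(slave_cols), vec.transform(master_cols))

for col, row in zip(slave_cols, scores):
    print(col, dict(zip(master_cols, row.round(2))))   # confidence score per candidate
```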
Most likely a certificate verification issue.\nException: HTTPSConnectionPool(host='repo.anaconda.com', port=443): Max retries exceeded with url: \/pkgs\/main\/win-64\/current_repodata.json (Caused by SSLError(\"Can't connect to HTTPS URL because the SSL module is not available.\"))","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":103,"Q_Id":74882814,"Users Score":0,"Answer":"I think this problem is related to ssl updates\nrun the below code in terminal and try again;\n\nconda update --all --no-deps certifi","Q_Score":0,"Tags":"python,pytorch,openssl","A_Id":74882885,"CreationDate":"2022-12-21T23:39:00.000","Title":"Error with openssl when trying to install pytorch","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a dataframe that has an animals column with different animals (say [\"cat\", \"dog\", \"lion\"]) as rows and a value corresponding to each animal. There are 10 unique animals and 50 entries of each. The animals are not in any particular order.\nI want to split the dataframe into two with one containing 40 of each animal and another containing 10 of each animal. That is one dataframe should contain 40 cats, 40 dogs etc and the other dataframe with 10 cats, 10 dogs etc.\nAny help would be greatly appreciated.\nI have tried to sort by unique values but it did not work. I am not very familiar with Pandas yet and this is the first time I am using it.\nEdit:\nAdding a better example of what I need\n\n\n\n\nAnimal\nvalue\n\n\n\n\ndog\n12\n\n\ncat\n14\n\n\ndog\n10\n\n\ncat\n40\n\n\ndog\n90\n\n\ndog\n80\n\n\ncat\n30\n\n\ndog\n20\n\n\ncat\n20\n\n\ncat\n23\n\n\n\n\nI want to separate this into 2 data frames. In this example the first dataframe would have 3 of each animal and the other one would have 2 of each animal.\n\n\n\n\nAnimal\nvalue\n\n\n\n\ndog\n12\n\n\ndog\n10\n\n\ndog\n90\n\n\ncat\n14\n\n\ncat\n40\n\n\ncat\n30\n\n\n\n\n\n\n\nAnimal\nvalue\n\n\n\n\ndog\n80\n\n\ndog\n20\n\n\ncat\n20\n\n\ncat\n23","AnswerCount":4,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":51,"Q_Id":74891455,"Users Score":2,"Answer":"Does this work? df.groupby('animal', group_keys=False).apply(lambda x: x.sample(frac=0.2)) You could then remove these rows from your original dataframe to create the one with 40 of each animal.","Q_Score":0,"Tags":"python,pandas,dataframe","A_Id":74891684,"CreationDate":"2022-12-22T16:44:00.000","Title":"Split dataframe based on number of rows with a column value","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I read somewhere suggesting that in case there are multiple features(multi linear model) no feature scaling is needed because co-efficient takes care of that.\nBut for single feature(simple linear model); feature scaling is needed.\nIs this how python scikilt learn works or I read something wrong?\nNeed answer from someone who has tested both with and without feature scaling in simple linear regression","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":26,"Q_Id":74894560,"Users Score":0,"Answer":"Scaling is used when we want to scale the features in a particular range. 
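For the animal-split question above, a runnable version of the groupby/sample approach in that answer, with a random_state added so the 40/10 split is reproducible. The sample frame is synthetic.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({"Animal": np.repeat(["cat", "dog", "lion"], 50),
                   "value": rng.random(150)})

small = df.groupby("Animal", group_keys=False).apply(lambda g: g.sample(frac=0.2, random_state=0))
large = df.drop(small.index)                 # everything not drawn into `small`

print(small["Animal"].value_counts())        # 10 per animal
print(large["Animal"].value_counts())        # 40 per animal
```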
In particular algorithms, the model will be sensitive to outliers so it is recommended to scale the features in a particular range. Algorithms like distance-based need feature scale. It also depends on data not in particular for any dataset such as multiple linear regression or linear regression. Sometimes features scaling is not recommended as the data points will shift from a particular range to a normal distribution range as it will lead to an impact on modelling.","Q_Score":0,"Tags":"python-3.x,machine-learning,data-science","A_Id":74896528,"CreationDate":"2022-12-22T22:44:00.000","Title":"Simple linear regressions vs multiple linear regression model scaling","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm having a problem with a pytorch-ignite classification model. The code is quite long, so I'd like to first ask if anyone can explain this behavior in theory.\nI am doing many classifications in a row. In each iteration, I select a subset of my data randomly and perform classification. My results were quite poor (accuracy ~ 0.6). I realized that in each iteration my training dataset is not balanced. I have a lot more class 0 data than class 1; so in a random selection, there tends to be more data from class 0.\nSo, I modified the selection procedure: I randomly select a N data points from class 1, then select N data points from class 0, then concatenate these two together (so the label order is like [1111111100000000] ). Finally, I shuffle this list to mix the labels before feeding it to the network.\nThe problem is, with this new data selection, my gpu runs out of memory within seconds. This was odd since with the first data selectin policy the code ran for tens of hours.\nI retraced my steps: Turns out, if I do not shuffle my data in the end, meaning, if I keep the [1111111100000000] order, all is well. If I do shuffle the data, I need to reduce my batch_size by a factor of 5 or more so the code doesn't crash due to running out of gpu memory.\nAny idea what is happening here? Is this to be expected?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":29,"Q_Id":74904070,"Users Score":0,"Answer":"I found the solution to my problem. But I don't really understand the details of why it works:\nWhen trying to choose a batch_size at first, I chose 90. 64 was slow, I was worried 128 was going to be too large, and a quick googling let me to believe keeping to powers of 2 shouldn't matter much.\nTurns out, it does matter! At least, when your classification training data is balanced. As soon as I changed my batch_size to a power of 2, there was no memory overflow. 
In fact, I ran the whole thing on a batch_size of 128 and there was no problem :)","Q_Score":0,"Tags":"python,deep-learning,pytorch,gpu,classification","A_Id":74924835,"CreationDate":"2022-12-23T21:20:00.000","Title":"Data shuffling changes gpu memory use drastically","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"What is the difference between flip() and flipud() in NumPy?\nBoth functions do the same things so which one should I use?","AnswerCount":2,"Available Count":1,"Score":-0.0996679946,"is_accepted":false,"ViewCount":69,"Q_Id":74959110,"Users Score":-1,"Answer":"flipud can only flip an array along the vertical axis and flip will flip along a given axis. Very similiar.","Q_Score":0,"Tags":"python,arrays,numpy,numpy-ndarray","A_Id":74959149,"CreationDate":"2022-12-30T07:24:00.000","Title":"What is the difference between flip() and flipud() in NumPy?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have the following dataframe:\n\n\n\n\ncountry\ncoin\n\n\n\n\nUSA\ncoin1\n\n\nUSA\ncoin2\n\n\nMexico\ncoin3\n\n\n\n\nEach coin is unique, and it can change the country. For example:\n\n\n\n\ncountry\ncoin\n\n\n\n\nUSA\ncoin1\n\n\nMexico\ncoin2\n\n\nMexico\ncoin3\n\n\n\n\nWhat I'm trying to find is a way to see which lines have changed. My desired output:\n\n\n\n\ncountry\ncoin\n\n\n\n\nMexico\nCoin2","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":47,"Q_Id":74995290,"Users Score":2,"Answer":"You could use concat to combine them, and then use drop_duplicates to get the difference. For example:\nconcat([df1,df2]).drop_duplicates(keep=False)\nEDIT:\nTo get just the one row, you can get the negation of everything common between the two dataframes by turning applying list to them and using .isin to find commonalities.\ndf1[~df1.apply(list,1).isin(df2.apply(list,1))]","Q_Score":1,"Tags":"python,dataframe","A_Id":74995495,"CreationDate":"2023-01-03T15:21:00.000","Title":"Get the differences from two dataframes","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"So I created a machine learning model to make predictions on future output at work. So far its 97% accurate.\nI wanted to predict the output using the date along with 2 other inputs and since you can't use datetime directly in regression models.\nI converted the date column using ordinal encoding, will I then be able to use the date as an input then?\nOr is there a better method?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":51,"Q_Id":74997541,"Users Score":0,"Answer":"Ordinal encoding is't the best approach for handling date\/time data, especially if in your data occurs seasonality or trends. 
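A three-line demonstration of the flip versus flipud answer above: flipud is exactly flip along axis 0, while flip can also reverse other axes or all of them.

```python
import numpy as np

a = np.arange(6).reshape(2, 3)
print(np.array_equal(np.flipud(a), np.flip(a, axis=0)))   # True: flipud == flip along axis 0
print(np.flip(a, axis=1))                                 # flip left/right; flipud cannot do this
print(np.flip(a))                                         # no axis given: reverses every axis
```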
Depending on your problem, you could extract a lot of different features from dates, e.q:\n\nyear, month, day ....\nhour, minute, second ....\nday of week\nseason\nholiday\netc ...\n\nWhat should you use exactly highly depends on your problem, you should first investigate your data, maybe plot your predicted variable against dates and search for patterns which can help you then achieve best prediction results.","Q_Score":0,"Tags":"python,pandas,machine-learning,regression","A_Id":75003657,"CreationDate":"2023-01-03T18:51:00.000","Title":"Machine Learning predictions using dates","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Assuming that I have monthly datasets showing like these:\ndf1\n\n\n\n\ncompany\ndate\nact_call\nact_visit\npo\n\n\n\n\nA\n2022-10-01\nYes\nNo\nNo\n\n\nB\n2022-10-01\nYes\nNo\nYes\n\n\nC\n2022-10-01\nNo\nNo\nNo\n\n\nB\n2022-10-02\nNo\nYes\nNo\n\n\nA\n2022-10-02\nNo\nYes\nYes\n\n\n\n\ndf2\n\n\n\n\ncompany\ndate\nact_call\nact_visit\npo\n\n\n\n\nD\n2022-11-01\nYes\nNo\nNo\n\n\nB\n2022-11-01\nYes\nNo\nYes\n\n\nC\n2022-11-01\nYes\nYes\nNo\n\n\nD\n2022-11-02\nNo\nYes\nNo\n\n\nA\n2022-11-02\nNo\nYes\nYes\n\n\n\n\nI want to compare the two dataframes and count several conditions:\n\nthe number of company that exists in both dataframes.\n\nthe number of company that exists in both dataframes that has at least one act_call as 'Yes' and act_visit as 'Yes' in df2, but has po as 'No' in df1.\n\n\nFor the 1st condition, I've tried using pandas.Dataframe.sum() and pandas.Dataframe.count_values() but they didn't give the results that I want.\nFor the 2nd condition, I tried using this code:\n(((df1[['act_calling', 'act_visit']].eq('yes'))&(df2['po'].eq('no'))).groupby(df2['company_name']).any().all(axis = 1).sum())\nbut, I'm not sure that the code above will only count the company that exists in both dataframes.\nThe expected output is this:\n\n3, (A, B, C)\n\n1, (C)\n\n\nI'm open to any suggestions. Thank u in advance!","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":52,"Q_Id":75001090,"Users Score":0,"Answer":"To See The Companies That Are In Both Data Frames\n1st part\ncombined_dataframe1=df1[df2['company'].isin(df1['company'])]\ncombined_dataframe1['company']\n2nd part\nTo see the company that satisfies your conditions\ncombined_dataframe2=df2[df2['company'].isin(df1['company'])]\njoined_dataframe=pd.merge(combined_dataframe1,combined_dataframe2, on='company',how='outer')\nAs per your condition\nfinal_dataframe=joined_dataframe[joined_dataframe.columns][joined_dataframe['po_x']=='n0'}[joined_dataframe['act_call_yes']=='yes'][joined_dataframe['act_visit_y']=='yes']\nprint(final_dataframe)","Q_Score":0,"Tags":"python,pandas,dataframe,count,compare","A_Id":75002160,"CreationDate":"2023-01-04T04:50:00.000","Title":"Comparing and Count Values from 2 (or More) Different Pandas Dataframes Based on Certain Conditions","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need to connect data frame and dict like this . 
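A quick sketch of the date-feature extraction listed in the answer above, using the pandas .dt accessor on a toy column; which features you keep depends on the seasonality in your own data.

```python
import pandas as pd

df = pd.DataFrame({"date": pd.to_datetime(["2021-01-04", "2021-07-19", "2022-12-25"])})
df["year"] = df["date"].dt.year
df["month"] = df["date"].dt.month
df["day"] = df["date"].dt.day
df["dayofweek"] = df["date"].dt.dayofweek        # 0 = Monday
df["quarter"] = df["date"].dt.quarter
df["is_month_end"] = df["date"].dt.is_month_end
print(df)
```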
the number of frames for each cell is different\n,so the number of \"0\",\"1\"and so on is different .Total number of cells 16.How can","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":20,"Q_Id":75006406,"Users Score":0,"Answer":"To combine a pandas data frame with a dictionary, you can use the pandas.DataFrame.from_dict() function. This function takes a dictionary as input and returns a pandas data frame.\nFor example, you can create a dictionary with keys as column names and values as data for each column, and then pass this dictionary to the from_dict function to create a data frame:\nimport pandas as pd\ndata = {'col1': [1, 2, 3], 'col2': [4, 5, 6]}\ndf = pd.DataFrame.from_dict(data)\nprint(df)","Q_Score":0,"Tags":"python,pandas,dataframe","A_Id":75006507,"CreationDate":"2023-01-04T13:47:00.000","Title":"How to connect pandas data frame and dict?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to store a numpy array to a file. This array contains thousands of float probabilities which all sum up to 1. But when I store the array to a CSV file and load it back, I realise that the numbers have been approximated, and their sum is now some 0.9999 value. How can I fix it?\n(Numpy's random choice method requires probabilities to sum up to 1)","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":92,"Q_Id":75009650,"Users Score":0,"Answer":"Due to floating point arithmetic errors, you can get tiny errors in what seem like ordinary calculations. However, in order to use the choice function, the probabilities don't need to be perfect.\nOn reviewing the code in the current version of Numpy as obtained from Github, I see that the tolerance for the sum of probabilities is that sum(p) is within sqrt(eps) of 1, where eps is the double precision floating point epsilon, which is approximately 1e-16. So the tolerance is about 1e-8. (See lines 955 and 973 in numpy\/random\/mtrand.pyx.)\nFarther down in mtrand.pyx, choice normalizes the probabilities (which are already almost normalized) to sum to 1; see line 1017.\nMy advice is to ensure that all 16 digits are stored in the csv, then when you read them back, the error in the sum will be much smaller than 1e-8 and choice will be happy. I think other people commenting here have posted some advice about how to print all digits.","Q_Score":0,"Tags":"python,numpy,csv,floating-point,probability","A_Id":75023082,"CreationDate":"2023-01-04T18:19:00.000","Title":"How can I store float probabilities to a file so exactly that they sum up to 1?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am currently reading from dropbox offline using pyspark on my local machine using this code\npre_test_quiz_df = spark \\ .read \\ .option('header', 'true') \\ .csv('\/Users\/jamie\/Dropbox\/Moodle\/Course uptake\/data use\/UserDetails.csv')\nWhile working on from a server I am not able to read dropbox on my local machine. 
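Following the advice in the probabilities-to-CSV answer above, a sketch that writes 17 significant digits (enough to round-trip a float64) and renormalises on load before calling choice. File name and array contents are placeholders.

```python
import numpy as np

p = np.random.default_rng(0).random(1000)
p /= p.sum()

np.savetxt("probs.csv", p, fmt="%.17g")   # 17 significant digits round-trip float64 exactly
q = np.loadtxt("probs.csv")

q /= q.sum()                              # optional belt-and-braces renormalisation
sample = np.random.default_rng(0).choice(len(q), size=5, p=q)
print(sample)
```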
Is there a way to read the same file but from the dropbox on my browser.\nHave tried reading with pandas and converting to pyspark dataframe although it did not work.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":22,"Q_Id":75014600,"Users Score":0,"Answer":"I found a work around. I didn't find any direct way of doing this, so the next alternative was using the dropbox API, which works pretty well. You can check their documentation or youtube on how to set up the API.","Q_Score":0,"Tags":"python,apache-spark,pyspark,apache-spark-sql","A_Id":75072952,"CreationDate":"2023-01-05T06:32:00.000","Title":"How to read dropbox online using pyspark","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a dataframe (more than 1 million rows) that has an open text columns for customer can write whatever they want.\nMisspelled words appear frequently and I'm trying to group comments that are grammatically the same.\nFor example:\n\n\n\n\nID\nComment\n\n\n\n\n1\nI want to change my credit card\n\n\n2\nI wannt change my creditt card\n\n\n3\nI want change credit caurd\n\n\n\n\nI have tried using Levenshtein Distance but computationally it is very expensive.\nCan you tell me another way to do this task?\nThanks!","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":33,"Q_Id":75016854,"Users Score":0,"Answer":"Levenshtein Distance has time complexity O(N^2).\nIf you define a maximum distance you're interested in, say m, you can reduce the time complexity to O(Nxm). The maximum distance, in your context, is the maximum number of typos you accept while still considering two comments as identical.\nIf you cannot do that, you may try to parallelize the task.","Q_Score":2,"Tags":"python,dataframe,nlp,misspelling,write-error","A_Id":75016968,"CreationDate":"2023-01-05T10:14:00.000","Title":"How can I resolve write errors that I have in my data?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a dataframe with timeseries data\n\n\n\n\nTimestamp\nValues\n\n\n\n\n10-26-22 10.00 AM\n1\n\n\n10-26-22 09.04 AM\n5\n\n\n10.26-22 10.06 AM\n6\n\n\n--------\n--------\n\n\n10-27-22 3.32 AM\n9\n\n\n10-27-22 3.36 PM\n5\n\n\n10-27-22 3.31 PM\n8\n\n\n--------\n--------\n\n\n10-27-22 3.37 AM\n8.23\n\n\n10-28-22 4.20 AM\n7.2\n\n\n\n\nI tried to sort the timestamp column into ascending order by :\ndf.sort_values(\"Timestamp\", ascending = True, inplace= True)\nbut this code is not working. I want to get the data like this:\n\n\n\n\nTimestamp\nValues\n\n\n\n\n10-26-22 09.04 AM\n1\n\n\n10-26-22 10.00 AM\n5\n\n\n10-26-22 10.06 AM\n6\n\n\n--------\n--------\n\n\n10-27-22 3.31 AM\n9\n\n\n10-27-22 3.32 PM\n5\n\n\n10-27-22 3.36 PM\n8\n\n\n------\n--------\n\n\n10-27-22 3.37 AM\n8.23\n\n\n10-28-22 4.20 AM\n7.2","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":30,"Q_Id":75018953,"Users Score":0,"Answer":"I guess you'll need to drill down to the timestamp then convert the format before using the sort_values function on the dataframe..\nYou should look through the documentation. 
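For the misspelled-comments question above, a standard-library sketch of the "cap the distance you care about" idea: compare each comment only against existing group representatives and accept a match above a similarity cutoff. The 0.8 cutoff is an assumption to tune, and for a million rows you would normally bucket comments first (e.g. by length or leading words) so each one is compared against far fewer candidates.

```python
import difflib

comments = ["I want to change my credit card",
            "I wannt change my creditt card",
            "I want change credit caurd",
            "please close my account"]

groups = []                                           # list of [representative, members]
for c in comments:
    reps = [g[0] for g in groups]
    match = difflib.get_close_matches(c, reps, n=1, cutoff=0.8)
    if match:
        groups[reps.index(match[0])][1].append(c)     # close enough: join that group
    else:
        groups.append([c, [c]])                       # otherwise start a new group

for rep, members in groups:
    print(rep, "->", members)
```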
This is scarcely implemented.","Q_Score":0,"Tags":"python,pandas,dataframe,sorting,time-series","A_Id":75019067,"CreationDate":"2023-01-05T13:12:00.000","Title":"How to arrange time series data into ascending order","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have dataset for indoor localization.the dataset contain columns for wireless access point about 520 column with the RSSI value for each one .the problem is each row of the dataset has values of one scan for a signals that can be captured by a device and the maximum number of wireless access point that can be captured about only 20 ( the signal can be from 0dbm which is when the device near the access point and minus 100dbm when the device far from the access point but it can capture the signal) the rest of access points which are out of the range of the device scan they have been compensated with a default value of 100 positive.these value (100 dbm ) about 500 column in each row and have different columns when ever the location differ .the question is how to deal with them?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":120,"Q_Id":75042895,"Users Score":0,"Answer":"One option to deal with this issue, you could try to impute (change) the values that are out of range with a more reasonable value. There are several approaches you could take to do this:\n\nReplacing the out-of-range values with the mean or median of the in-range values\nUsing linear interpolation to estimate the missing values based on the surrounding values\n\nThe choice will depend on the goal of your machine learning model and what you want to achieve.","Q_Score":0,"Tags":"python-3.x,machine-learning,deep-learning,localization,data-processing","A_Id":75044928,"CreationDate":"2023-01-07T18:42:00.000","Title":"how to deal with out of range values in dataset (RSSI values)","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"numpy.zeros((100,100,3))\nWhat does number 3 denotes in this tuple?\nI got the output but didn't totally understand the tuple argument.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":16,"Q_Id":75046165,"Users Score":0,"Answer":"This piece of code will create a 3D array with 100 rows, 100 columns, and in 3 dimensions.","Q_Score":1,"Tags":"python,numpy","A_Id":75046408,"CreationDate":"2023-01-08T07:35:00.000","Title":"what does the third number in the tuple argument denotes in numpy.zeros((100,100,3)) function?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to run TensorFlow on a Linux machine (ubuntu).\nI've created a Conda env and installed the required packages but I think that there's something wrong with my versions:\nUpdated versions\n\ncudatoolkit 11.6.0 cudatoolkit 11.2.0\ncudnn 8.1.0.77\ntensorflow-gpu 2.4.1\npython 3.9.15\n\nRunning nvcc -V results\n\nnvcc: NVIDIA (R) Cuda compiler driver Copyright (c) 2005-2022 NVIDIA\nCorporation Built on Mon_Oct_24_19:12:58_PDT_2022 Cuda compilation\ntools, release 12.0, V12.0.76 
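For the timestamp-sorting question above: the sort fails because the column holds text, so parse it into real datetimes first. The format string below assumes every stamp follows the mm-dd-yy hh.mm AM/PM pattern shown in that question.

```python
import pandas as pd

df = pd.DataFrame({"Timestamp": ["10-26-22 10.00 AM", "10-26-22 09.04 AM", "10-27-22 3.32 PM"],
                   "Values": [1, 5, 9]})

df["Timestamp"] = pd.to_datetime(df["Timestamp"], format="%m-%d-%y %I.%M %p")
df.sort_values("Timestamp", ascending=True, inplace=True)
print(df)
```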
Build\ncuda_12.0.r12.0\/compiler.31968024_0\n\nand running python3 -c \"import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))\" returns an empty list.\nSeems that release 12.0 is the problem here, but I'm not sure and it's not my machine that I'm running on so I don't want to make big changes on my own.\nAlso, from TensorFlow's site, it seems that tensorflow-2.4.0 should run with python 3.6-3.8 and CUDA 11.0 but the versions I mentioned are the versions that the Conda choose for me.\nI know that similar questions have been asked before, but I couldn't find an answer that works for me.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":71,"Q_Id":75058447,"Users Score":0,"Answer":"What finally worked for me was to create a new env from scratch using conda create --name tensorflow-gpu and then adding the other deps to it. Creating a new env and then installing tensorflow-gpu didn't worked.","Q_Score":0,"Tags":"python,linux,tensorflow,conda","A_Id":75071243,"CreationDate":"2023-01-09T14:06:00.000","Title":"Tensorflow 2.4.1 can't find GPUs","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am getting below error when running mlflow app\n\nraise AttributeError(\"module {!r} has no attribute \" AttributeError:\nmodule 'numpy' has no attribute 'object'\n\nCan someone help me with this","AnswerCount":5,"Available Count":1,"Score":0.0399786803,"is_accepted":false,"ViewCount":32334,"Q_Id":75069062,"Users Score":1,"Answer":"Instead of numpy.object:\nyou should use object or numpy.object_.","Q_Score":16,"Tags":"python,python-3.x,numpy,kubernetes,dockerfile","A_Id":76322209,"CreationDate":"2023-01-10T11:08:00.000","Title":"module 'numpy' has no attribute 'object'","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"How we can use custom mean and var in standard_scaler? I need to calculate mean and var for all data in the dataset (train set+test set) and then use these values to standardize the train set and test set (and later input data) separately. How can I do this?\nI couldn't find any example of it.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":39,"Q_Id":75086268,"Users Score":0,"Answer":"The simplest one is the best one!\nI found that the normal StandardScaler is the best answer to my question.\nStandardScaler(with_mean=False,with_std=False) that means mean=0 and var=1.\nThese values is fix for train set, test set and input data. so it's OK!","Q_Score":0,"Tags":"python,machine-learning,deep-learning","A_Id":75100062,"CreationDate":"2023-01-11T16:34:00.000","Title":"custom mean and var for standard_scaler","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have PDF drawings of target locations on a map. Each target location has a constant value next to it. 
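For the custom mean/var StandardScaler question above, the most direct sketch: fit one scaler on the stacked train and test data so both splits share the same statistics, then transform each split separately. Note this deliberately lets the test set influence the statistics, which is what the question asks for; the arrays below are synthetic.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X_train, X_test = rng.normal(size=(80, 3)), rng.normal(size=(20, 3))

scaler = StandardScaler().fit(np.vstack([X_train, X_test]))   # mean/var from ALL data
X_train_std = scaler.transform(X_train)
X_test_std = scaler.transform(X_test)

print(scaler.mean_, scaler.var_)   # the shared statistics applied to both splits
```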
Let's say \"A\"\nI want to add an increasing value say \"101\"+1 next to each A so that I can give each location a unique identifier.\nThis way a crew member can say \"at location 103\" and I know where on the map he\/she is.\nright now I am manually editing PDFs to add these values which sucks, wondering if I can automate\nI am using PyPDF2 and reportlab but struggling to get the location of each \"A\" and to print the new values","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":38,"Q_Id":75089574,"Users Score":0,"Answer":"Consider using PyMuPDF instead. Will let you find correct locations including whatever text font properties plus color.\nAt each identified location boundary box, append your unique id ..., or add an appropriate annotation as KJ indicated.","Q_Score":1,"Tags":"python","A_Id":75092776,"CreationDate":"2023-01-11T22:15:00.000","Title":"PDF editing. Add an increasing number next to a specific value","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have df1 with around 3,67,000 rows.\ndf2 has 30k rows.\nTheir common columns are first_name, middle_name and last_name, where first name and last name are exact matches, and middle_name has some constraints.\nThe matched df has 20k rows.\nI want to make a dataframe containing df2-matched (30k-20k= 10k rows).\nEssentially, I want to find the rows in df2 that were not a match to any rows in df1, but I cannot concat or merge because the columns are different.","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":34,"Q_Id":75095519,"Users Score":0,"Answer":"new_df = df2[~df2.index.isin(matched.index)]\nExplanation: You are saying \"keep only the rows in df2 that are not in the matched data frame, and save this as a new dataframe\"","Q_Score":0,"Tags":"python,sql,pandas,join,merge","A_Id":75095698,"CreationDate":"2023-01-12T11:17:00.000","Title":"Make a dataframe containing rows that were not matched after merging df1 and df2","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to install locally Stable Diffusion. I follow the presented steps but when I get to the last one \"run webui-use file\" it opens the terminal and it's saying \"Press any key to continue...\". If I do so the terminal instantly closes.\nI went to the SB folder, right-clicked open in the terminal and used .\/webui-user to run the file. 
The terminal does not longer close but nothing is happening and I get those two errors:\nCouldn't install torch,\nNo matching distribution found for torch==1.12.1+cu113\nI've researched online and I've tried installing the torch version from the error, also I tried pip install --user pipenv==2022.1.8 but I get the same errors.","AnswerCount":1,"Available Count":2,"Score":0.1973753202,"is_accepted":false,"ViewCount":938,"Q_Id":75099182,"Users Score":1,"Answer":"if has some problems with a python, remove venv folder, this will be generated again by script, because if you have another version to python this config files will be replaced with your paths, everything if you change a python version, don't forgot delete this folder venv.","Q_Score":0,"Tags":"python,pytorch,stable-diffusion","A_Id":75607570,"CreationDate":"2023-01-12T16:07:00.000","Title":"Stable Diffusion Error: Couldn't install torch \/ No matching distribution found for torch==1.12.1+cu113","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to install locally Stable Diffusion. I follow the presented steps but when I get to the last one \"run webui-use file\" it opens the terminal and it's saying \"Press any key to continue...\". If I do so the terminal instantly closes.\nI went to the SB folder, right-clicked open in the terminal and used .\/webui-user to run the file. The terminal does not longer close but nothing is happening and I get those two errors:\nCouldn't install torch,\nNo matching distribution found for torch==1.12.1+cu113\nI've researched online and I've tried installing the torch version from the error, also I tried pip install --user pipenv==2022.1.8 but I get the same errors.","AnswerCount":1,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":938,"Q_Id":75099182,"Users Score":0,"Answer":"I ran into the same problem, found out that I was using python 3.11, instead of the version from instructions - Python 3.10.6; You can uninstall other versions from Programs and Features\/ edit env vars","Q_Score":0,"Tags":"python,pytorch,stable-diffusion","A_Id":75111984,"CreationDate":"2023-01-12T16:07:00.000","Title":"Stable Diffusion Error: Couldn't install torch \/ No matching distribution found for torch==1.12.1+cu113","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have data regarding the years of birth and death of several people. I want to compute efficiently how many people are in each of a group of pre-defined epochs.\nFor example. If I have this list of data:\n\nPaul 1920-1950\nSara 1930-1950\nMark 1960-2020\nLennard 1960-1970\n\nand I define the epochs 1900-1980 and 1980-2023, I would want to compute the number of people alive in each period (not necessarily the whole range of the years). In this case, the result would be 4 people (Paul, Sara, Mark and Lennard) for the first epoch and 1 person (Mark) for the second epoch.\nIs there any efficient routine out there? 
I would like to know, as the only way I can think of now is to create a huge loop with a lot of ifs to start categorizing.\nI really appreciate any help you can provide.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":27,"Q_Id":75100915,"Users Score":0,"Answer":"Loop over all individuals.\nExpand \"birth .. death\" years into epochs.\nIf epoch granularity was 12 months,\nthen you would generate 30 rows for a 30-year old,\nand so on.\nYour granularity is much coarser,\nwith valid epoch labels being just {1900, 1980},\nso each individual will have just one or two rows.\nOne of your examples would have a \"1900, Mark\" row,\nand a \"1980, Mark\" row, indicating he was alive\nfor some portion of both epochs.\nNow just sort values and group by,\nto count how many 1900 rows and\nhow many 1980 rows there are.\nReport the per-epoch counts.\nOr report names of folks alive in each epoch,\nif that's the level of detail you need.","Q_Score":0,"Tags":"python,data-analysis","A_Id":75100984,"CreationDate":"2023-01-12T18:38:00.000","Title":"Categorize birth-death data in epochs","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to run this code for an LDA Topic Model for free form text responses. The path is referencing the raw text from the reviews. When I run this, the error is\nTypeError: pipe() got an unexpected keyword argument 'n_threads'\nAny possible solutions? This is my first time running a LDA Topic model from scratch. Let me know if more info is needed. thanks\nCODE:\nsw = stopwords.words('english')\nnlp = spacy.load('en_core_web_sm')\nimport time\nt0 = time.time()\nwrite_parsed_sentence_corpus(nlppath+'rawtext.txt', nlppath+'parsedtexts.txt', nlp, batch_size=1000, n_threads=2, sw=sw, exclusions = ['-PRON-'])\ntd = time.time()-t0\nprint('Took {:.2f} minutes'.format(td\/60))","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":58,"Q_Id":75101901,"Users Score":0,"Answer":"Change n_threads=2 to n_process=2 and it should work","Q_Score":0,"Tags":"python,pipe,lda,topic-modeling","A_Id":75158354,"CreationDate":"2023-01-12T20:21:00.000","Title":"pipe() got an unexpected keyword argument 'n_threads'","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have python 3.7.0 on windows 11 using vscode. I pip installed tensorflow and keras but when I tried to import them it gave me an error and said cannot import name OrderedDict\nTried uninstalling and reinstalling both tf and keras. 
Didn\u2019t work\nError Message:\nTraceback (most recent call last):\nFile \"c:\/Users\/Jai K\/CS Stuff\/test.py\", line 1, in \nimport tensorflow\nFile \"C:\\Users\\Jai K\\AppData\\Local\\Programs\\Python\\Python37\\lib\\site-packages\\tensorflow_init_.py\", line 37, in \nfrom tensorflow.python.tools import module_util as module_util\nFile \"C:\\Users\\Jai K\\AppData\\Local\\Programs\\Python\\Python37\\lib\\site-packages\\tensorflow\\python_init.py\", line 42, in \nfrom tensorflow.python import data\nFile \"C:\\Users\\Jai K\\AppData\\Local\\Programs\\Python\\Python37\\lib\\site-packages\\tensorflow\\python\\data_init_.py\", line 21, in \nfrom tensorflow.python.data import experimental\nFile \"C:\\Users\\Jai K\\AppData\\Local\\Programs\\Python\\Python37\\lib\\site-packages\\tensorflow\\python\\data\\experimental_init_.py\", line 96, in \nfrom tensorflow.python.data.experimental import service\nFile \"C:\\Users\\Jai K\\AppData\\Local\\Programs\\Python\\Python37\\lib\\site-packages\\tensorflow\\python\\data\\experimental\\service_init_.py\", line 419, in \nfrom tensorflow.python.data.experimental.ops.data_service_ops import distribute\nFile \"C:\\Users\\Jai K\\AppData\\Local\\Programs\\Python\\Python37\\lib\\site-packages\\tensorflow\\python\\data\\experimental\\ops\\data_service_ops.py\", line 25, in \nfrom tensorflow.python.data.ops import dataset_ops\nFile \"C:\\Users\\Jai K\\AppData\\Local\\Programs\\Python\\Python37\\lib\\site-packages\\tensorflow\\python\\data\\ops\\dataset_ops.py\", line 29, in \nfrom tensorflow.python.data.ops import iterator_ops\nFile \"C:\\Users\\Jai K\\AppData\\Local\\Programs\\Python\\Python37\\lib\\site-packages\\tensorflow\\python\\data\\ops\\iterator_ops.py\", line 34, in \nfrom tensorflow.python.training.saver import BaseSaverBuilder\nne 32, in from tensorflow.python.checkpoint import checkpoint_management\nFile \"C:\\Users\\Jai K\\AppData\\Local\\Programs\\Python\\Python37\\lib\\site-packages\\tensorflow\\python\\checkpoint_init_.py\", line 3, in from tensorflow.python.checkpoint import checkpoint_view\nFile \"C:\\Users\\Jai K\\AppData\\Local\\Programs\\Python\\Python37\\lib\\site-packages\\tensorflow\\python\\checkpoint\\checkpoint_view.py\", line 19, in from tensorflow.python.checkpoint import trackable_view\nFile \"C:\\Users\\Jai K\\AppData\\Local\\Programs\\Python\\Python37\\lib\\site-packages\\tensorflow\\python\\checkpoint\\trackable_view.py\", line 20, in from tensorflow.python.trackable import converter\nFile \"C:\\Users\\Jai K\\AppData\\Local\\Programs\\Python\\Python37\\lib\\site-packages\\tensorflow\\python\\trackable\\converter.py\", line 18, in from tensorflow.python.eager.polymorphic_function import saved_model_utils\nFile \"C:\\Users\\Jai K\\AppData\\Local\\Programs\\Python\\Python37\\lib\\site-packages\\tensorflow\\python\\eager\\polymorphic_function\\saved_model_utils.py\", line 36, in from tensorflow.python.trackable import resource\nFile \"C:\\Users\\Jai K\\AppData\\Local\\Programs\\Python\\Python37\\lib\\site-packages\\tensorflow\\python\\trackable\\resource.py\", line 22, in from tensorflow.python.eager import def_function\nFile \"C:\\Users\\Jai K\\AppData\\Local\\Programs\\Python\\Python37\\lib\\site-packages\\tensorflow\\python\\eager\\def_function.py\", line 20, in from tensorflow.python.eager.polymorphic_function.polymorphic_function import set_dynamic_variable_creation\nFile \"C:\\Users\\Jai K\\AppData\\Local\\Programs\\Python\\Python37\\lib\\site-packages\\tensorflow\\python\\eager\\polymorphic_function\\polymorphic_function.py\", 
line 76, in from tensorflow.python.eager.polymorphic_function import function_spec as function_spec_lib\nFile \"C:\\Users\\Jai K\\AppData\\Local\\Programs\\Python\\Python37\\lib\\site-packages\\tensorflow\\python\\eager\\polymorphic_function\\function_spec.py\", line 25, in from tensorflow.core.function.polymorphism import function_type as function_type_lib\nFile \"C:\\Users\\Jai K\\AppData\\Local\\Programs\\Python\\Python37\\lib\\site-packages\\tensorflow\\core\\function\\polymorphism\\function_type.py\", line 19, in from typing import Any, Callable, Dict, Mapping, Optional, Sequence, Tuple, OrderedDict\nImportError: cannot import name 'OrderedDict' from 'typing' (C:\\Users\\Jai K\\AppData\\Local\\Programs\\Python\\Python37\\lib\\typing.py)","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":82,"Q_Id":75105492,"Users Score":0,"Answer":"so OrderedDict is from collections which should be on your pc anyway. it seems like some of python's dependencies are not on your system path. you should double-check check you have everything that needs to be there. I have anaconda\\scripts there\nif that fails:\ntry and pip install it (collections) anyway. then try and uninstall Tensorflow and Keras and everything related and then reinstall.\nfrom experience, I can tell you a lot of times this is something you need to do when modifying your Tensorflow installation since the resolver is just horrendous\nif it still doesn't work try to get a specific version of Tensorflow that is more stable.\nI hope this helps :)","Q_Score":1,"Tags":"python,tensorflow,keras","A_Id":75105563,"CreationDate":"2023-01-13T06:14:00.000","Title":"Cannot import tensorflow or keras: ordered dict","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I came across an issue that while I was able to resolve, I believe would benefit this platform. I will therefore pose the question here and answer it. When attempting to publish an app on binder, you are required to create a Requirements.txt file that outlines your dependencies. Mine was using pandas version 1.4.4.\nWhen attempting to launch binder using my github repo, I was getting:\nERROR: No matching distribution found for pandas==1.4.4","AnswerCount":1,"Available Count":1,"Score":-0.3799489623,"is_accepted":false,"ViewCount":37,"Q_Id":75125939,"Users Score":-2,"Answer":"Reading into the error further, it seems that binder only goes up to a certain version of pandas. If you read carefully it will list your pandas version option. Choose the latest one from that error list, and update your requirements.\nAlhamdulilah!","Q_Score":0,"Tags":"python,jupyter-notebook,android-binder","A_Id":75125940,"CreationDate":"2023-01-15T14:47:00.000","Title":"Binder - ERROR: No matching distribution found for pandas==1.X.X","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"EDIT: (User error, I wasn't scanning entire dataframe. Delete Question if needed )A page I found had a solution that claimed to drop all rows with NAN in a selected column. 
In this case I am interested in the column with index 78 (int, not string, I checked).\nThe code fragment they provided turns out to look like this for me:\ndf4=df_transposed.dropna(subset=[78])\nThat did exactly the opposite of what I wanted. df4 is a dataframe that has NAN in all elements of the dataframe. I'm not sure how to\nI tried the dropna() method as suggested on half a dozen pages and I expected a dataframe with no NAN values in the column with index 78. Instead every element was NAN in the dataframe.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":37,"Q_Id":75129502,"Users Score":0,"Answer":"df_transposed.dropna(subset=[78], inplace=True) #returns a dataframe with rows that have missing values in column 78 removed.","Q_Score":0,"Tags":"python,dataframe,sorting,nan","A_Id":75129947,"CreationDate":"2023-01-16T01:18:00.000","Title":"How do I drop all rows in a DataFrame that have NAN in that row, in a specified column?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am very new to Linux as I recently started using it. I installed different libraries like numpy, pandas etc.\nimport numpy as np\nimport pandas as pd\nIt raises a ModuleNotFoundError in VS Code. But when I run the same code in Terminal, there's no issue.\nNote: I installed these libraries with\npip3 install package\nOS: Ubuntu 22.04\nI tried to uninstall the package and reinstall but still not working. I also tried to install by\nsudo apt-get install python3-pandas.\nNothing works out.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":28,"Q_Id":75139482,"Users Score":0,"Answer":"It seems that you have two or more Python interpreters.\nYou can use the shortcut \"Ctrl+Shift+P\" and type \"Python: Select Interpreter\" to choose the correct Python interpreter in VS Code.","Q_Score":0,"Tags":"python,linux,visual-studio-code,modulenotfounderror,ubuntu-22.04","A_Id":75141177,"CreationDate":"2023-01-16T20:44:00.000","Title":"import in python working on my Linux Terminal but raising ModuleNotFoundError on VS code","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am very new to Linux as I recently started using it. I installed different libraries like numpy, pandas etc.\nimport numpy as np\nimport pandas as pd\nIt raises a ModuleNotFoundError in VS Code. But when I run the same code in Terminal, there's no issue.\nNote: I installed these libraries with\npip3 install package\nOS: Ubuntu 22.04\nI tried to uninstall the package and reinstall but still not working. I also tried to install by\nsudo apt-get install python3-pandas.\nNothing works out.","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":28,"Q_Id":75139482,"Users Score":1,"Answer":"Without all the context, it sounds like you have a few different Python environments.\nIn the terminal, check which python you are using: which python\nIn the VS Code settings, check Python: Default Interpreter Path\nThat might help you understand what is going on. 
Make sure that the VSCode python path is the same path that your terminal prints out.","Q_Score":0,"Tags":"python,linux,visual-studio-code,modulenotfounderror,ubuntu-22.04","A_Id":75139734,"CreationDate":"2023-01-16T20:44:00.000","Title":"import in python working on my Linux Terminal but raising ModuleNotFoundError on VS code","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to create a model that predicts customer status change.\nTo give context, there are 4 statuses a customer can have: [A, B, C, D]\nEach customer must have one status, and that status can change. I'm making a model with the current status as one of the features and the next status as the label.\nIs there a way to hardcode a rule into SVM (or other classifiers) that prevents the model from classifying the label as the current status? In other words, if a customer's current status is A, its next status cannot be A, it has to be either B, C, or D.\nIf anyone knows whether sklearn has similar capabilities that would help.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":27,"Q_Id":75140683,"Users Score":0,"Answer":"As far as I know, there are two ways to solve this problem but it is not inside an SVM.\nFirst Way - series\nImplementing a rule-based classifier first then applying SVM...\nSecond way - Parallel\nImplementing a rule-based classifier and SVM parallel and choosing the best one in the end layer combining together.\ne.x Ensemble learning\nboth ways probably work in some cases, but you should try and see the results to choose the best way I guess the second one might work better.","Q_Score":0,"Tags":"python,machine-learning,scikit-learn","A_Id":75140872,"CreationDate":"2023-01-16T23:32:00.000","Title":"scikit-learn adding rules to classification model","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a numpy array r and I need to evaluate a scalar function, let's say np.sqrt(1-x**2) on each element x of this array. However, I want to return the value of the function as zero, whenever x>1, and the value of the function on x otherwise.\nThe final result should be a numpy array of scalars.\nHow could I write this the most pythonic way?","AnswerCount":3,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":49,"Q_Id":75147334,"Users Score":2,"Answer":"You can use like numpy.where(condition,if condition holds,otherwise) so np.where(x>1,0,np.sqrt(1-x**2)) will be answer","Q_Score":0,"Tags":"python,arrays,numpy,if-statement","A_Id":75147438,"CreationDate":"2023-01-17T13:43:00.000","Title":"Evaluate scalar function on numpy array with conditionals","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am working on a learning how to fill in NaN in a Python DataFrame. DataFrame called data containing an age column and only one row has an NaN. 
I applied the following:\ndata.fillna(data.mean(),inplace=True)\nI ask to print out data and I receive a recursion msg.\nMy DataFrame only contains 4 rows if that is important.\nI was expecting the DataFrame to come back with the NaN filled in with the mean value. I also tried replacing data.mean() with a number ex. 2. Same error message.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":14,"Q_Id":75152920,"Users Score":0,"Answer":"Not sure if this was the correct thing todo or not but I cleared out the Kernel in Jupyter Notebook and ran it just fine.","Q_Score":0,"Tags":"python-3.x,recursion,fillna","A_Id":75152978,"CreationDate":"2023-01-17T22:34:00.000","Title":"Python DataFrame .fillna() Recursion Error","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"So, I have been trying to find optimum solution for the question, but I can not find a solution which is less than o(n3).\nThe problem statemnt is :-\nfind total number of triplet in an array such that sum of a[i],a[j],a[k] is divisible by a given number d and i [28 lines of output]\nTraceback (most recent call last):\nFile \"d:\\py\\lib\\site-packages\\pip_vendor\\pep517\\in_process_in_process.py\", line 351, in \nmain()\nFile \"d:\\py\\lib\\site-packages\\pip_vendor\\pep517\\in_process_in_process.py\", line 333, in main\njson_out['return_val'] = hook(**hook_input['kwargs'])\nFile \"d:\\py\\lib\\site-packages\\pip_vendor\\pep517\\in_process_in_process.py\", line 112, in get_requires_for_build_wheel\nbackend = _build_backend()\nFile \"d:\\py\\lib\\site-packages\\pip_vendor\\pep517\\in_process_in_process.py\", line 77, in build_backend\nobj = import_module(mod_path)\nFile \"d:\\py\\lib\\importlib_init.py\", line 126, in import_module\nreturn _bootstrap._gcd_import(name[level:], package, level)\nFile \"\", line 1030, in _gcd_import\nFile \"\", line 1007, in _find_and_load\nFile \"\", line 972, in _find_and_load_unlocked\nFile \"\", line 228, in _call_with_frames_removed\nFile \"\", line 1030, in _gcd_import\nFile \"\", line 1007, in _find_and_load\nFile \"\", line 986, in _find_and_load_unlocked\nFile \"\", line 680, in _load_unlocked\nFile \"\", line 790, in exec_module\nFile \"\", line 228, in call_with_frames_removed\nFile \"C:\\Users\\zijie\\AppData\\Local\\Temp\\pip-build-env-kqsd82rz\\overlay\\Lib\\site-packages\\setuptools_init.py\", line 18, in \nfrom setuptools.dist import Distribution\nFile \"C:\\Users\\zijie\\AppData\\Local\\Temp\\pip-build-env-kqsd82rz\\overlay\\Lib\\site-packages\\setuptools\\dist.py\", line 47, in \nfrom . import _entry_points\nFile \"C:\\Users\\zijie\\AppData\\Local\\Temp\\pip-build-env-kqsd82rz\\overlay\\Lib\\site-packages\\setuptools_entry_points.py\", line 43, in \ndef validate(eps: metadata.EntryPoints):\nAttributeError: module 'importlib.metadata' has no attribute 'EntryPoints'\n[end of output]\nnote: This error originates from a subprocess, and is likely not a problem with pip.\nerror: subprocess-exited-with-error\n\u00d7 Getting requirements to build wheel did not run successfully.\n\u2502 exit code: 1\n\u2570\u2500> See above for output.\nnote: This error originates from a subprocess, and is likely not a problem with pip.\npy:3.10.0\nos:windows11\nDoes anyone know how to solve the problem? 
Thanks!\nI tried several times but it doesn't work.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":34,"Q_Id":75167298,"Users Score":0,"Answer":"Have you tried:\npip3 install pandas?","Q_Score":0,"Tags":"python,python-3.x,pandas,pip","A_Id":75167312,"CreationDate":"2023-01-19T02:48:00.000","Title":"Cannot use pip install pandas","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am facing below error message while running the python code(ML model) in the python databricks notebook\nConnectException: Connection refused (Connection refused) Error while obtaining a new communication channel\nConnectException error: This is often caused by an OOM error that causes the connection to the Python REPL to be closed. Check your query's memory usage.\nSpark tip settings","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":71,"Q_Id":75187059,"Users Score":0,"Answer":"The driver may be experiencing a memory bottleneck, which is a frequent cause of this issue. When this occurs, the driver has an out of memory (OOM) crash, restarts often, or loses responsiveness. Any of the following factors might be the memory bottleneck's cause:\n\nFor the load placed on the driver, the driver instance type is not ideal.\nMemory-intensive procedures are carried out on the driver.\nThe same cluster is hosting a large number of concurrent notebooks or processes.\n\nPlease try below options\n\nTry increasing driver-side memory and then retry.\nYou can look at the spark job dag which give you more info on data flow.","Q_Score":0,"Tags":"python,apache-spark,pyspark,databricks,azure-databricks","A_Id":75284443,"CreationDate":"2023-01-20T16:52:00.000","Title":"ConnectException: Connection refused (Connection refused) Error while obtaining a new communication channel. error in databricks notebook","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am taking intro to ML on Coursera offered by Duke, which I recommend if you are interested in ML. The instructors of this course explained that \"We typically include nonlinearities between layers of a neural network.There's a number of reasons to do so.For one, without anything nonlinear between them, successive linear transforms (fully connected layers) collapse into a single linear transform, which means the model isn't any more expressive than a single layer. On the other hand, intermediate nonlinearities prevent this collapse, allowing neural networks to approximate more complex functions.\" I am curious that, if I apply ReLU, aren't we losing information since ReLU is transforming every negative value to 0? Then how is this transformation more expressive than that without ReLU?\nIn Multilayer Perceptron, I tried to run MLP on MNIST dataset without a ReLU transformation, and it seems that the performance didn't change much (92% with ReLU and 90% without ReLU). But still, I am curious why this tranformation gives us more information rather than lose information.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":56,"Q_Id":75192505,"Users Score":0,"Answer":"Neural networks are inspired by the structure of brain. 
Neurons in the brain transmit information between different areas of the brain by using electrical impulses and chemical signals. Some signals are strong and some are not. Neurons with weak signals are not activated.\nNeural networks work in the same fashion. Some input features have weak and some have strong signals. These depend on the features. If they are weak, the related neurons aren't activated and don't transmit the information forward. We know that some features or inputs aren't crucial players in contributing to the label. For the same reason, we don't bother with feature engineering in neural networks. The model takes care of it. Thus, activation functions help here and tell the model which neurons and how much information they should transmit.","Q_Score":0,"Tags":"python,machine-learning,pytorch,activation-function","A_Id":75192607,"CreationDate":"2023-01-21T10:02:00.000","Title":"Why ReLU function after every layer in CNN?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am taking intro to ML on Coursera offered by Duke, which I recommend if you are interested in ML. The instructors of this course explained that \"We typically include nonlinearities between layers of a neural network.There's a number of reasons to do so.For one, without anything nonlinear between them, successive linear transforms (fully connected layers) collapse into a single linear transform, which means the model isn't any more expressive than a single layer. On the other hand, intermediate nonlinearities prevent this collapse, allowing neural networks to approximate more complex functions.\" I am curious that, if I apply ReLU, aren't we losing information since ReLU is transforming every negative value to 0? Then how is this transformation more expressive than that without ReLU?\nIn Multilayer Perceptron, I tried to run MLP on MNIST dataset without a ReLU transformation, and it seems that the performance didn't change much (92% with ReLU and 90% without ReLU). But still, I am curious why this tranformation gives us more information rather than lose information.","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":56,"Q_Id":75192505,"Users Score":1,"Answer":"the first point is that without nonlinearities, such as the ReLU function, in a neural network, the network is limited to performing linear combinations of the input. In other words, the network can only learn linear relationships between the input and output. This means that the network can't approximate complex functions that are not linear, such as polynomials or non-linear equations.\nConsider a simple example where the task is to classify a 2D data point as belonging to one of two classes based on its coordinates (x, y). A linear classifier, such as a single-layer perceptron, can only draw a straight line to separate the two classes. However, if the data points are not linearly separable, a linear classifier will not be able to classify them accurately. A nonlinear classifier, such as a multi-layer perceptron with a nonlinear activation function, can draw a curved decision boundary and separate the two classes more accurately.\nReLU function increases the complexity of the neural network by introducing non-linearity, which allows the network to learn more complex representations of the data. 
The ReLU function is defined as f(x) = max(0, x), which sets all negative values to zero. By setting all negative values to zero, the ReLU function creates multiple linear regions in the network, which allows the network to represent more complex functions.\nFor example, suppose you have a neural network with two layers, where the first layer has a linear activation function and the second layer has a ReLU activation function. The first layer can only perform a linear transformation on the input, while the second layer can perform a non-linear transformation. By having a non-linear function in the second layer, the network can learn more complex representations of the data.\nIn the case of your experiment, it's normal that the performance did not change much when you removed the ReLU function, because the dataset and the problem you were trying to solve might not be complex enough to require a ReLU function. In other words, a linear model might be sufficient for that problem, but for more complex problems, ReLU can be a critical component to achieve good performance.\nIt's also important to note that ReLU is not the only function to introduce non-linearity and other non-linear activation functions such as sigmoid and tanh could be used as well. The choice of activation function depends on the problem and dataset you are working with.","Q_Score":0,"Tags":"python,machine-learning,pytorch,activation-function","A_Id":75192942,"CreationDate":"2023-01-21T10:02:00.000","Title":"Why ReLU function after every layer in CNN?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using rain as an intrumental variable, so I need to pull hisotry probablity of rain given location and time to each row.\nPrefer python since I clean most of my data on python.\n\n\n\n\nCounty\nState\nDate\nRain\n\n\n\n\nFulton\nSC\n2019-1-1\n?\n\n\nChatham\nGA\n2017-9-3\n?\n\n\n\n\nProbably looking for some python library and code to find the date and create the column.\nAny help would be appreciated! Thank you!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":30,"Q_Id":75199083,"Users Score":0,"Answer":"The obvious answer is a probability in historical \/ observed datasets does not exist. The probability is derived from probabilistic weather forecasts. When the weather went through, you can say if there was rain or not, means 1 or 0.\nBut from a data science perspective there can be alternative to that. E.g. you can build up a similarity tree or an Analog Ensemble to determine probability for rain on certain weather patterns.\nBut you need more information about the weather and weather regime.\nAt the your information will be independent from the date. The probability information will be a function on the day of year e.g.","Q_Score":0,"Tags":"python,r,weather,meteostat","A_Id":75223296,"CreationDate":"2023-01-22T08:31:00.000","Title":"How to return historical probability of rain given location and date","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"In Kmeans clustering we can define number of cluster. 
But is it possible to define that cluster_1 will contain 20% of the data, cluster_2 will have 30% and cluster_3 will have the rest of the data points?\nI tried to do it in Python but couldn't.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":20,"Q_Id":75200396,"Users Score":0,"Answer":"Using K-means clustering, as you said, we specify the number of clusters, but it's not actually possible to specify the percentage of data points. I would recommend using Fuzzy-C if you want to specify an exact percentage of data points allotted to each cluster","Q_Score":0,"Tags":"python,cluster-analysis,analysis,abc,pareto-chart","A_Id":75200603,"CreationDate":"2023-01-22T12:27:00.000","Title":"How to manipulate cluster data point of Kmeans clustering algorithm","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to import the sklearn library by writing code like from sklearn.preprocessing import MinMaxScaler but it kept showing the same error.\nI tried uninstalling and reinstalling but no change. Command prompt is also giving the same error. Recently I installed some python libraries but that never affected my environment.\nI also tried running the code in jupyter notebook. When I tried to import numpy like import numpy as np, it ran successfully. So the problem is only with sklearn.\nAlso, I have worked with sklearn before but have never seen such an error.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":5096,"Q_Id":75224636,"Users Score":0,"Answer":"You have to read into the error message. For me sklearn was importing something from scipy which uses the outdated np.int, so updating scipy solved the issue for me.","Q_Score":1,"Tags":"python,scikit-learn,importerror","A_Id":75272610,"CreationDate":"2023-01-24T16:46:00.000","Title":"ImportError: cannot import name 'int' from 'numpy'","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to import the sklearn library by writing code like from sklearn.preprocessing import MinMaxScaler but it kept showing the same error.\nI tried uninstalling and reinstalling but no change. Command prompt is also giving the same error. Recently I installed some python libraries but that never affected my environment.\nI also tried running the code in jupyter notebook. When I tried to import numpy like import numpy as np, it ran successfully. 
So the problem is only with sklearn.\nAlso, I have worked with sklearn before but have never seen such an error.","AnswerCount":2,"Available Count":2,"Score":0.1973753202,"is_accepted":false,"ViewCount":5096,"Q_Id":75224636,"Users Score":2,"Answer":"Run pip3 install --upgrade scipy\nOR upgrade whatever tool that tried to import np.int and failed\nnp.int is same as normal int of python and scipy was outdated for me","Q_Score":1,"Tags":"python,scikit-learn,importerror","A_Id":75525007,"CreationDate":"2023-01-24T16:46:00.000","Title":"ImportError: cannot import name 'int' from 'numpy'","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a dataset with 15 different meteorological stations (providing T, rh, wind direction through time).\nHow should I implement them in a machine learning model? As independent inputs or can I combine them?\nIf you could provide me with some references or hints to start this project, that would very helpful !\nI have so far cleaned the data and separate each meteorological station.\nI believe that I should try to perform a single prediction on each station and then combine the prediction of each station together ?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":23,"Q_Id":75224714,"Users Score":1,"Answer":"There are different ways to implement multiple meteorological stations in a machine learning model depending on the specific problem you are trying to solve and the characteristics of the data. Here are a few options to consider:\n\nIndependent models: One option is to train a separate model for each meteorological station, using the data for that station as input. This approach is useful if the stations have different characteristics or if you want to make predictions for each station independently.\n\nCombined model: Another option is to combine the data from all stations and train a single model to make predictions for all of them at once. This approach is useful if the stations are similar and the relationship between the input variables and the output variable is the same across all stations.\n\nMulti-task learning: You can also consider using multi-task learning, where you train a single model to perform multiple tasks, one for each meteorological station. This approach is useful if the stations are similar but have different characteristics and you want to make predictions for all of them at once.\n\n\nRegarding how to combine the predictions, it depends on the problem you are trying to solve. If you want to make a prediction for each station independently you don't need to combine the predictions. But if you want to make a prediction for all the stations you can use an ensemble method like a majority vote or a weighted average to combine the predictions.\nYou can find more information about these approaches and examples of their implementation in papers and tutorials about multi-task learning, multi-output regression and ensemble methods.\nAlso, it might be helpful to explore the correlation between the meteorological stations. You can use the correlation matrix and heatmap to explore the correlation between the different meteorological stations. 
If they are highly correlated you can combine them in a single model, otherwise, you can consider them as independent inputs.","Q_Score":0,"Tags":"python,database,machine-learning,deep-learning,data-science","A_Id":75226178,"CreationDate":"2023-01-24T16:54:00.000","Title":"how to input the data of several meteorological stations into a machine learning model?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to find unique combinations of ~70,000 IDs.\nI'm currently doing an itertools.combinations([list name], 2) to get unique 2 ID combinations but it's been running for more than 800 minutes.\nIs there a faster way to do this?\nI tried converting the IDs into a matrix where the IDs are both the index and the columns and populating the matrix using itertools.product.\nI tried doing it the manual way with loops too.\nBut after more than a full day of letting them run, none of my methods have actually finished running.\nFor additional information, I'm storing these into a data frame, to later run a function that compares each of the unique set of IDs.","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":56,"Q_Id":75237213,"Users Score":1,"Answer":"(70_000 ** 69_000) \/ 2== 2.4 billion - it is not such a large number as to be not computable in a few hours (update I run a dry-run on itertools.product(range(70000), 2) and it took less than 70 seconds, on a 2017 era i7 @3GHz, naively using a single core) But if you are trying to keep this data in memory at once, them it won't fit - and if your system is configured to swap memory to disk before erroring with a MemoryError, this may slow-down the program by 2 or more orders of magnitude, and thus, that is when your problem come from.\nitertools.combination does the right thing in this respect, and no need to try to change it for something else: it will yield one combination at a time. What you are doing with the result, however, do change things: if you are streaming the combination to a file and not keeping it in memory, it should be fine, and then, it is just computational time you can't speed up anyway.\nIf, on the other hand, you are collecting the combinations to a list or other data structure: there is your problem - don't do it.\nNow. going a step further than your question, since these combinations are check-able and predictable, maybe trying to generate these is not the right approach at all - you don't give details on how these are to be used, but if used in a reactive form, or on a lazy form, you might have an instantaneous workflow instead.","Q_Score":0,"Tags":"python,combinations,python-itertools","A_Id":75237466,"CreationDate":"2023-01-25T16:45:00.000","Title":"More optimized way to do itertools.combinations","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am creating a medical web app that takes in audio input, converts it to text, and extracts keywords from the said text file, which is then used in an ML model. 
We have the text, but the problem lies in the fact that the person might say, I have pain in my chest and legs but the symptoms in our model are chest_pain or leg_pain.\nHow do we convert the different phrasing used by the user to one that matches our model features? Our basic approach would be using a tokenizer and then using NLTK to check synonyms of each word and map pairs to try out multiple phrasings to match the one we currently have, but it would take too much time.\nIs it possible to do this task using basic NLP?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":31,"Q_Id":75246397,"Users Score":0,"Answer":"Maybe an improvement of your first idea:\n\nSplit your keywords (chest_pain \u2192 [\"chest\",\"pain\"])\nFind only synonyms of your keywords ([[\"chest\",\"thorax\",...],[\"pain\",\"suffer\",...]])\nFor each word of your sentence, check if the word is present in your keyword synonyms.","Q_Score":0,"Tags":"python","A_Id":75246697,"CreationDate":"2023-01-26T12:52:00.000","Title":"Converting a phrase into one being used by the machine leaning model","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have multiple excel files with different columns and some of them have the same columns with additional data added as additional columns. I created a masterfile which contains all the column headers from each excel file and now I want to export data from the individual excel files into the masterfile. Ideally, each row represents all the information about one single item.\nI tried merging and concatenating the files, but it adds all the data as new rows, so now I have some columns with repeated data but they also contain additional data in different columns.\nWhat I want now is to recognize the columns that are already present and fill in the new data instead of repeating all the columns using python. I cannot share the data or the code so, looking for some help or idea to get this done. Any help would be appreciated, Thanks in advance!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":21,"Q_Id":75247876,"Users Score":0,"Answer":"You are probably merging the wrong way.\nNot sure about your masterfile, it sounds not very intuitive.\nMake sure your rows have a specific ID that identifies them.\nThen always perform the merge with that ID and the 'inner' merge type.","Q_Score":0,"Tags":"python,excel,pandas,merge","A_Id":75284917,"CreationDate":"2023-01-26T15:00:00.000","Title":"Merging multiple excel files into a master file using python with out any repeated values","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I kept getting this error: ImportError: this version of pandas is incompatible with numpy < 1.20.3 your numpy version is 1.18.5. 
Please upgrade numpy to >= 1.20.3 to use this pandas version\nAnd I'm unable to update, remove, and reinstall numpy.\nI tried pip install numpy and pip3 install numpy and kept getting this error: AttributeError: module 'lib' has no attribute 'X509_V_FLAG_CB_ISSUER_CHECK'\nEdit: I tried pip install --force-reinstall numpy and pip install numpy==1.20.3 and still got AttributeError: module 'lib' has no attribute 'X509_V_FLAG_CB_ISSUER_CHECK'","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":51,"Q_Id":75248148,"Users Score":0,"Answer":"Try this.\npip install --force-reinstall numpy\nand then\npip install numpy==1.20.3","Q_Score":0,"Tags":"python,numpy","A_Id":75248224,"CreationDate":"2023-01-26T15:20:00.000","Title":"Version of pandas is incompatible with numpy < 1.20.3","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"error: OpenCV(4.7.0) D:\\a\\opencv-python\\opencv-python\\opencv\\modules\\highgui\\src\\window.cpp:1272: error: (-2:Unspecified error) The function is not implemented. Rebuild the library with Windows, GTK+ 2.x or Cocoa support. If you are on Ubuntu or Debian, install libgtk2.0-dev and pkg-config, then re-run cmake or configure script in function 'cvShowImage'\nerror: OpenCV(4.7.0) D:\\a\\opencv-python\\opencv-python\\opencv\\modules\\highgui\\src\\window.cpp:1272: error: (-2:Unspecified error) The function is not implemented. Rebuild the library with Windows, GTK+ 2.x or Cocoa support. If you are on Ubuntu or Debian, install libgtk2.0-dev and pkg-config, then re-run cmake or configure script in function 'cvShowImage'","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":19,"Q_Id":75260366,"Users Score":0,"Answer":"I solved this problem by changing the version of opencv.I am using opencv-python==4.5.3.56.Maybe you can try.","Q_Score":0,"Tags":"python-3.x","A_Id":75273456,"CreationDate":"2023-01-27T15:47:00.000","Title":"Opencv Issue while running the code on jupyter","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am looking for a technique\/method\/algorithm which will be able to handle time-dependent data (each sample has 20 time steps, but for the most part they occur unevenly between samples, i.e., one sample may have a value at 0.4 seconds while another sample might not). The value itself of the time step corresponds to a categorical position on the body (ranging from 1-20) where the muscle activiation occured.\nSo the data resembles, (time, position):\n(0.1, 16)\n(0.16, 1)\n(0.25, 13)\n(0.26, 12)\n(0.27, 1)\n(0.4, 4)\nIs there a clustering algorithm which will be able to work for this type of data. I would like the algorithm to consider the time dependency of the data. Dynamic time warping is not suitable for unevenly spaced time series data and I am not sure how it would handle the sparse categorical data I have, e.g. a given position will only appear once per sample.\nAny suggestions or help is appreciated.\nI have looked through lots of different models, but none so far work with their given assumptions. 
Hidden markov models are out of the question (need stochastic time steps), DTW does not work for unevenly spaced time steps, and techniques like Lomb-Scargle do not work for categorical data especially not-periodic categorical data. Fast-fourier transform is also off the table.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":27,"Q_Id":75276008,"Users Score":0,"Answer":"One method you can use for clustering this type of time-dependent data is a Hidden Markov Model (HMM). HMMs can model the dependencies between the positions and the time steps, allowing for the clustering of similar patterns in the data. Another alternative is a Gaussian Mixture Model (GMM), where you can model the position and time values as multivariate Gaussian distributions, and use Expectation-Maximization (EM) to estimate the parameters of the distributions. Both HMMs and GMMs have been used in various time-series analysis and clustering tasks, and both have Python implementations available through popular libraries such as scikit-learn and hmmlearn.\nIt is recommended to try out both algorithms and compare the results to see which one performs better for your specific dataset. You can also experiment with different features and preprocessing techniques, such as interpolation or downsampling, to see if it improves the performance of the clustering algorithm.","Q_Score":0,"Tags":"python,r,statistics,time-series,cluster-analysis","A_Id":75276022,"CreationDate":"2023-01-29T15:17:00.000","Title":"Unsupervised clustering algorithm for unevenly spaced sequential categorical data?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"ChatGPT's point of view is : \"It's generally recommended to upgrade to the latest version of Numpy step by step, rather than directly from version 1.19 to 1.24. This is because newer versions of Numpy may have breaking changes, so by upgrading incrementally, you can minimize the risk of encountering compatibility issues and ensure a smoother transition.\"","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":26,"Q_Id":75283956,"Users Score":1,"Answer":"From my personal point of view, the gap between the versions of NumPy is not so big. I would create a new virtual environment and install the desired version of NumPy. Then by running the code, either you will get Runtime errors for unsupported and no longer existing functions or everything will run just smoothly.\nIn case you have errors, you can try searching them online to find the required fix.\nIn case you have no errors at all, I would still try to come up with a Test script that testes some of the basic functionalities that are used and could break through out the code, you could try copying some values and hardcode set them to see the behaviour.\nThe above, would apply to any kind of package. 
If you still feel you need to go step by step, feel free.","Q_Score":0,"Tags":"python,numpy,migration,version","A_Id":75284230,"CreationDate":"2023-01-30T11:46:00.000","Title":"I need to migrate python numpy code from version 1.19 to 1.24, is it better to migrate directly or step by step (first 1.20, then 1.21 etc.)?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a dataframe with three columns: timestamp, variable_name and value. There is a total of 10 variables whose names are in the variable_name column.\nI would like to have a single dataframe, indexed by timestamp, with one column per variable. Ideally, the dataframe should be \"full\", i.e. each timestamp should have an interpolate value for each variable.\nI'm struggling to find a direct way to do that (without looping over the variable list, etc.). The dataframe comes from Spark but is small enough to be converted to Pandas. Any pointers will be most welcome.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":24,"Q_Id":75284052,"Users Score":0,"Answer":"Something like\n\ndf.loc[:, ['timestamp','variable_name','values']].pivot(index='timestamp',columns='variable_name')\n\nshould do the trick","Q_Score":0,"Tags":"python,pandas,dataframe,apache-spark,pyspark","A_Id":75284735,"CreationDate":"2023-01-30T11:55:00.000","Title":"Dataframe timestamp interpolation for multiple variables","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am unsure if this kind of question (related to PCA) is acceptable here or not.\nHowever, it is suggested to do MEAN CENTER before PCA, as known. In fact, I have 2 different classes (Each different class has different participants.). My aim is to distinguish and classify those 2 classes. Still, I am not sure about MEAN CENTER that should be applied to the whole data set, or to each class.\nIs it better to make it separately? (if it is, should PREPROCESSING STEPS also be separately as well?) or does it not make any sense?","AnswerCount":3,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":51,"Q_Id":75286426,"Users Score":1,"Answer":"PCA is more or less per definition a SVD with centering of the data.\nDepending on the implementation (if you use a PCA from a library) the centering is applied automatically e.g. sklearn - because as said it has to be centered by definition.\nSo for sklearn you do not need this preprocessing step and in general you apply it over your whole data.\nPCA is unsupervised can be used to find a representation that is more meaningful and representative for you classes afterwards. So you need all your samples in the same feature space via the same PCA.\n\nIn short: You do the PCA once and over your whole (training) data and must be center over your whole (traning) data. 
Libraries like sklarn do the centering automatically.","Q_Score":2,"Tags":"python,data-analysis,pca,centering","A_Id":75286853,"CreationDate":"2023-01-30T15:19:00.000","Title":"Mean centering before PCA","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a tensor with shape torch.Size([3, 224, 225]). when I do tensor.mean([1,2]) I get tensor([0.6893, 0.5840, 0.4741]). What does [1,2] mean here?","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":58,"Q_Id":75298356,"Users Score":1,"Answer":"Operations that aggregate along dimensions like min,max,mean,sum, etc. specify the dimension along which to aggregate. It is common to use these operations across every dimension (i.e. get the mean for the entire tensor) or a single dimension (i.e. torch.mean(dim = 2) or torch.mean(2) returns the mean of the 225 elements for each of 3 x 224 vectors.\nPytorch also allows these operations across a set of multiple dimensions, such as in your case. This means to take the mean of the 224 x 224 elements for each of the indices along the 0th (non-aggregated dimension). Likewise, if your original tensor shape was a.shape = torch.Size([3,224,10,225]), a.mean([1,3]) would return a tensor of shape [3,10].","Q_Score":2,"Tags":"python,python-3.x,pytorch","A_Id":75298460,"CreationDate":"2023-01-31T14:05:00.000","Title":"What does [1,2] means in .mean([1,2]) for tensor?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a tensor with shape torch.Size([3, 224, 225]). when I do tensor.mean([1,2]) I get tensor([0.6893, 0.5840, 0.4741]). What does [1,2] mean here?","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":58,"Q_Id":75298356,"Users Score":0,"Answer":"The shape of your tensor is 3 across dimension 0, 224 across dimension 1 and 225 across dimension 2.\nI would say that tensor.mean([1,2]) calculates the mean across dimension 1 as well as dimension 2. Thats why you are getting 3 values. Each plane spanned by dimension 1 and 2 of size 224x225 is reduced to a single value \/ scalar. Since there are 3 planes that are spanned by dimension 1 and 2 of size 224x225 you get 3 values back. Each value represents the mean of a whole plane with 224x225 values.","Q_Score":2,"Tags":"python,python-3.x,pytorch","A_Id":75298563,"CreationDate":"2023-01-31T14:05:00.000","Title":"What does [1,2] means in .mean([1,2]) for tensor?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is there a way to install a lighter version of opencv on pi pico? If not, is there a way to install opencv library on an SD card and make pico to fetch those library files from that SD card?\nI am trying to record video using a OV7670 camera module and save the video it to SD card. Later I need to upload the video to a custom AWS server. 
I have the modules to capture images in micropython but can not find any modules or libraries to record video.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":25,"Q_Id":75300200,"Users Score":1,"Answer":"No. OpenCV is a huge library, with many moving parts. Even if there is a minimal version of OpenCV, I highly doubt that the 2MB of flash memory of the RP2040 will be enough for your use case. Coupling this alongside the limited number of cores, internal RAM, etc. of the CPU, you will probably end up with nothing. From what I know, you can use TinyML with MicroPython.","Q_Score":0,"Tags":"esp32,micropython,raspberry-pi-pico","A_Id":75300304,"CreationDate":"2023-01-31T16:31:00.000","Title":"Is there a way to install a lighter version of opencv?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"`I'm using in edge impulse FOMO\nI know that object detection fps is 1\/inference time\nmy model's time per inference is 2ms\nso object detection is 500fps\nbut my model run vscode fps is 9.5\nwhat is the difference between object detection fps and video fps ?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":20,"Q_Id":75305535,"Users Score":0,"Answer":"If I understand correctly, your object detection fps indicates the number of frames (or images) that your model, given your system, can process in a second.\nA video fps in your input source's frames per second. For example, if your video has an fps (also referred to as framerate) of 100, then your model would be able to detect objects in all of the frames in 100ms (or 1\/10 of a second).\nIn your case, your video input source seems to have 9.5 frames in a second. This means that your model, given your system, will process 1-second wort of a video in about ~20ms.","Q_Score":0,"Tags":"python,deep-learning,frame-rate","A_Id":75305707,"CreationDate":"2023-02-01T04:05:00.000","Title":"what is the difference between object detection fps and video fps?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Ok so to preface this, I am very new to jupyter notebook and anaconda. Anyways I need to download opencv to use in my notebook but every time I download I keep getting a NameError saying that \u2018cv2\u2019 is not defined.\nI have uninstalled and installed opencv many times and in many different ways and I keep getting the same error. I saw on another post that open cv is not in my python path or something like that\u2026\nHow do I fix this issue and put open cv in the path? 
(I use Mac btw) Please help :( Thank you!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":27,"Q_Id":75318885,"Users Score":0,"Answer":"Try the following:\n\nInstall OpenCV using Anaconda Navigator or via terminal by running:\nconda install -c conda-forge opencv\nNow you should check if its installed by running this in terminal: conda list\nImport OpenCV in Jupyter Notebook: In your Jupyter Notebook, run import cv2 and see if it works.\nIf the above steps are not working, you should add OpenCV to your Python PATH by writing the following code to your Jupyter NB:\nimport sys\nsys.path.append('\/anaconda3\/lib\/python3.7\/site-packages')\n\nThis should work.","Q_Score":0,"Tags":"opencv,anaconda,jupyter,nameerror,pythonpath","A_Id":75318938,"CreationDate":"2023-02-02T05:07:00.000","Title":"Anaconda Jupyter Notebook Opencv not working","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is there a way to save Polars DataFrame into a database, MS SQL for example?\nConnectorX library doesn\u2019t seem to have that option.","AnswerCount":2,"Available Count":2,"Score":0.2913126125,"is_accepted":false,"ViewCount":1076,"Q_Id":75320233,"Users Score":3,"Answer":"Polars exposes the write_database method on the DataFrame class.","Q_Score":3,"Tags":"python-polars,rust-polars","A_Id":76234129,"CreationDate":"2023-02-02T08:10:00.000","Title":"Polars DataFrame save to sql","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is there a way to save Polars DataFrame into a database, MS SQL for example?\nConnectorX library doesn\u2019t seem to have that option.","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":1076,"Q_Id":75320233,"Users Score":2,"Answer":"Polars doesen't support direct writing to a database. You can proceed in two ways:\n\nExport the DataFrame in an intermediate format (such as .csv using .write_csv()), then import it into the database.\nProcess it in memory: you can convert the DataFrame in a simpler data structure using .to_dicts(). The result will be a list of dictionaries, each of them containing a row in key\/value format. At this point is easy to insert them into a database using SqlAlchemy or any specific library for your database of choice.","Q_Score":3,"Tags":"python-polars,rust-polars","A_Id":75396733,"CreationDate":"2023-02-02T08:10:00.000","Title":"Polars DataFrame save to sql","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"In Numpy, Transposing of a column vector makes the the array an embedded array.\nFor example, transposing\n[[1.],[2.],[3.]] gives [[1., 2., 3.]] and the dimension of the outermost array is 1. And this produces many errors in my code. Is there a way to produce [1., 2., 3.] 
directly?","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":43,"Q_Id":75322105,"Users Score":1,"Answer":"Try .flatten(), .ravel(), .reshape(-1), .squeeze().","Q_Score":1,"Tags":"python,numpy","A_Id":75322147,"CreationDate":"2023-02-02T10:53:00.000","Title":"Python NumPy, remove unnecessary brackets","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a mat file with sparse data for around 7000 images with 512x512 dimensions stored in a flattened format (so rows of 262144) and I\u2019m using scipy\u2019s loadmat method to turn this sparse information into a Compressed Sparse Column format. The data inside of these images is a smaller image that\u2019s usually around 25x25 pixels somewhere inside of the 512x512 region , though the actual size of the smaller image is not consitant and changes for each image. I want to get the sparse information from this format and turn it into a numpy array with only the data in the smaller image; so if I have an image that\u2019s 512x512 but there\u2019s a circle in a 20x20 area in the center I want to just get the 20x20 area with the circle and not get the rest of the 512x512 image. I know that I can use .A to turn the image into a non-sparse format and get a 512x512 numpy array, but this option isn\u2019t ideal for my RAM.\nIs there a way to extract the smaller images stored in a sparse format without turning the sparse data into dense data?\nI tried to turn the sparse data into dense data, reshape it into a 512x512 image, and then I wrote a program to find the top, bottom, left, and right edges of the image by checking for the first occurrence of data from the top, bottom, left, and right but this whole processes seemed horribly inefficient.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":29,"Q_Id":75328837,"Users Score":0,"Answer":"Sorry about the little amount of information I provided; I ended up figuring it out.Scipy's loadmat function when used to extract sparse data from a mat file returns a csc_matrix, which I then converted to numpy's compressed sparse column format. Numpy's format has a method .nonzero() that will return the index of every non_zero element in that matrix. I then reshaped the numpy csc matrix into 512x512, and then used .nonzero() to get the non-zero elements in 2D, then used used those indexes to figure out the max height and width of my image I was interested in. Then I created a numpy matrix of zeros the size of the image I wanted, and set the elements in that numpy matrix to the elements to the pixels I wanted by indexing into my numpy csc matrix (after I called .tocsr() on it)","Q_Score":0,"Tags":"python,numpy,scipy,sparse-matrix","A_Id":75342851,"CreationDate":"2023-02-02T20:54:00.000","Title":"Numpy Extract Data from Compressed Sparse Column Format","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"As an example, I can cross validation when I do hyperparameter tuning (GridSearchCV). I can select the best estimator from there and do RFECV. and I can perform cross validate again. But this is a time-consuming task. I'm new to data science and still learning things. 
Can an expert help me learn how to use these things properly in machine learning model building?\nI have time series data. I'm trying to do hyperparameter tuning and cross validation in a prediction model. But it is taking a long time to run. I need to learn the most efficient way to do these things during the model building process.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":27,"Q_Id":75345615,"Users Score":0,"Answer":"Cross-validation is a tool to evaluate model performance, and specifically to guard against over-fitting. When you put all the data on the training side, your model will over-fit by ignoring generalisation of the data.\nTuning should not be based on cross-validation alone, because hyper-parameters should be changed based on model performance, for example the depth of the trees in a tree algorithm.\nWhen you do a 10-fold CV, it is similar to training 10 models, so of course it has a time cost. You could tune the hyper-parameters based on the CV result, since the CV score is a result of the model. However, it does not make sense to tune and then run CV again to check, because the parameters were already optimised based on the first result.\nP.S. if you are new to data science, you could look into something called regularization\/dimension reduction to lower the dimension of your data in order to reduce the time cost.","Q_Score":0,"Tags":"python,machine-learning,cross-validation,hyperparameters","A_Id":75348200,"CreationDate":"2023-02-04T14:05:00.000","Title":"How to do the cross validation properly?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a script that modifies a pandas dataframe with several concurrent functions (asyncio coroutines). Each function adds rows to the dataframe and it's important that the functions all share the same list. However, when I add a row with pd.concat a new copy of the dataframe is created. I can tell because each dataframe now has a different memory location as given by id().\nAs a result the functions no longer share the same object. How can I keep all functions pointed at a common dataframe object?\nNote that this issue doesn't arise when I use the append method, but that is being deprecated.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":13,"Q_Id":75349540,"Users Score":0,"Answer":"pandas dataframes are efficient because they use contiguous memory blocks, frequently of fundamental types like int and float. You can't just add a row because the dataframe doesn't own the next bit of memory it would have to expand into. Concatenation usually requires that new memory is allocated and data is copied. Once that happens, references to the original dataframe still point at the stale copy.\nIf you know the final size you want, you can preallocate and fill. Otherwise, you are better off keeping a list of new dataframes and concatenating them all at once. 
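As a minimal sketch of that "collect, then concatenate once" suggestion (the coroutine and variable names here are invented for illustration):

import asyncio
import pandas as pd

async def producer(name, frames):
    # each coroutine appends small DataFrames to a shared list instead of mutating one DataFrame
    for i in range(3):
        await asyncio.sleep(0)
        frames.append(pd.DataFrame([{"worker": name, "value": i}]))

async def main():
    frames = []   # one shared list for all coroutines
    await asyncio.gather(producer("a", frames), producer("b", frames))
    return pd.concat(frames, ignore_index=True)   # concatenate once at the end

df = asyncio.run(main())
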
Since these are parallel procedures, they aren't dependent on each others output, so this may be a feasable option.","Q_Score":0,"Tags":"pandas,dataframe,python-asyncio","A_Id":75349863,"CreationDate":"2023-02-05T01:23:00.000","Title":"Pandas dataframe sharing between functions isn't working","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"We are developing a prediction model using deepchem's GCNModel.\nModel learning and performance verification proceeded without problems, but it was confirmed that a lot of time was spent on prediction.\nWe are trying to predict a total of 1 million data, and the parameters used are as follows.\nmodel = GCNModel(n_tasks=1, mode='regression', number_atom_features=32, learning_rate=0.0001, dropout=0.2, batch_size=32, device=device, model_dir=model_path)\nI changed the batch size to improve the performance, and it was confirmed that the time was faster when the value was decreased than when the value was increased.\nAll models had the same GPU memory usage.\nFrom common sense I know, it is estimated that the larger the batch size, the faster it will be. But can you tell me why it works in reverse?\nWe would be grateful if you could also let us know how we can further improve the prediction time.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":204,"Q_Id":75381096,"Users Score":0,"Answer":"There are two components regarding the speed:\n\nYour batch size and model size\nYour CPU\/GPU power in spawning and processing batches\n\nAnd two of them need to be balanced. For example, if your model finishes prediction of this batch, but the next batch is not yet spawned, you will notice a drop in GPU utilization for a brief moment. Sadly there is no inner metrics that directly tell you this balance - try using time.time() to benchmark your model's prediction as well as the dataloader speed.\nHowever, I don't think that's worth the effort, so you can keep decreasing the batch size up to the point there is no improvement - that's where to stop.","Q_Score":1,"Tags":"python,deep-learning,batchsize","A_Id":75381683,"CreationDate":"2023-02-08T03:21:00.000","Title":"In deep learning, can the prediction speed increase as the batch size decreases?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"When I try to read a xlsx file using pandas, I receive the error \"numpy has no float attribute\", but I'm not using numpy in my code, I get this error when using the code below\ninfo = pd.read_excel(path_info)\nThe xlsx file I'm using has just some letters inside of it for test purpouses, there's no numbers or floats.\nWhat I want to know is how can I solve that bug or error.\nI tried to create different files, change my info type to specify a pd.dataframe too\nPython Version 3.11\nPandas Version 1.5.3","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":783,"Q_Id":75386792,"Users Score":0,"Answer":"Had the same problem. 
Fixed it by updating openpyxl to latest version.","Q_Score":1,"Tags":"python,excel,pandas,numpy","A_Id":75415344,"CreationDate":"2023-02-08T13:54:00.000","Title":"Numpy has no float attribute error when using Read_Excel","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have recently attempted to install pandas through pip. It appears to go through the process of installing pandas and all dependencies properly. After I update to the latest version through cmd as well and everything appears to work; typing in pip show pandas gives back information as expected with the pandas version showing as 1.5.3\nHowever, it appears that when attempting to import pandas to a project in PyCharm (I am wondering if this is where the issue lies) it gives an error stating that it can't be found. I looked through the folders to make sure the paths were correct and that pip didn't install pandas anywhere odd; it did not.\nI uninstalled python and installed the latest version; before proceeding I would like to know if there is any reason this issue has presented itself. I looked into installing Anaconda instead but that is only compatible with python version 3.9 or 3.1 where as I am using the newest version, 3.11.2","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":204,"Q_Id":75431371,"Users Score":0,"Answer":"When this happens to me\n\nI reload the environment variables by running the command\nsource ~\/.bashrc\nright in the pycharm terminal.\n\nI make sure the I have activated the correct venv (where the package installations go) by cd to path_with_venv then running\nsource ~\/pathtovenv\/venv\/bin\/activate\n\nIf that does not work, hit CMD+, to open your project settings and and under Python Interpreter select the one with the venv that you have activated. Also check if pandas appears on the list of packages that appear below the selected interpreter, if not you may search for it and install it using this way and not the pip install way","Q_Score":1,"Tags":"python,pandas,dataframe,machine-learning,pycharm","A_Id":75431477,"CreationDate":"2023-02-13T02:04:00.000","Title":"pip install of pandas","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"there are many ways about normalize skils for ml and dl. 
It is known to provide only normalization for 0 to 1.\nI want to know that is some ways to normalize -1 between 1.","AnswerCount":4,"Available Count":3,"Score":1.2,"is_accepted":true,"ViewCount":211,"Q_Id":75432346,"Users Score":1,"Answer":"You can use the min-max scalar or the z-score normalization here is what u can do in sklearn\nfrom sklearn.preprocessing import MinMaxScaler\nfrom sklearn.preprocessing import StandardScaler\nor hard code it like this\nx_scaled = (x - min(x)) \/ (max(x) - min(x)) * 2 - 1 -> this one for minmaxscaler\nx_scaled = (x - mean(x)) \/ std(x) -> this one for standardscaler","Q_Score":1,"Tags":"python,machine-learning,deep-learning,data-preprocessing","A_Id":75432397,"CreationDate":"2023-02-13T06:04:00.000","Title":"Normalize -1 ~ 1","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"there are many ways about normalize skils for ml and dl. It is known to provide only normalization for 0 to 1.\nI want to know that is some ways to normalize -1 between 1.","AnswerCount":4,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":211,"Q_Id":75432346,"Users Score":0,"Answer":"Yes, there are ways to normalize data to the range between -1 and 1. One common method is called Min-Max normalization. It works by transforming the data to a new range, such that the minimum value is mapped to -1 and the maximum value is mapped to 1. The formula for this normalization is:\nx_norm = (x - x_min) \/ (x_max - x_min) * 2 - 1\nWhere x_norm is the normalized value, x is the original value, x_min is the minimum value in the data and x_max is the maximum value in the data.\nAnother method for normalizing data to the range between -1 and 1 is called Z-score normalization, also known as standard score normalization. This method normalizes the data by subtracting the mean and dividing by the standard deviation. The formula for this normalization is:\nx_norm = (x - mean) \/ standard deviation\nWhere x_norm is the normalized value, x is the original value, mean is the mean of the data and standard deviation is the standard deviation of the data.","Q_Score":1,"Tags":"python,machine-learning,deep-learning,data-preprocessing","A_Id":75432401,"CreationDate":"2023-02-13T06:04:00.000","Title":"Normalize -1 ~ 1","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"there are many ways about normalize skils for ml and dl. It is known to provide only normalization for 0 to 1.\nI want to know that is some ways to normalize -1 between 1.","AnswerCount":4,"Available Count":3,"Score":0.0996679946,"is_accepted":false,"ViewCount":211,"Q_Id":75432346,"Users Score":2,"Answer":"Consider re-scale the normalized value. e.g. 
normalize to 0..1, then multiply by 2 and minus 1 to have the value fall into the range of -1..1","Q_Score":1,"Tags":"python,machine-learning,deep-learning,data-preprocessing","A_Id":75432374,"CreationDate":"2023-02-13T06:04:00.000","Title":"Normalize -1 ~ 1","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to load data into a custom NER model using spacy, I am getting an error:-\n'RobertaTokenizerFast' object has no attribute '_in_target_context_manager'\nhowever, it works fine with the other models.\nThank you for your time!!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":122,"Q_Id":75440385,"Users Score":0,"Answer":"I faced the same issue after upgrading my environment from {Python 3.9 + Spacy 3.3} to {Python 3.10 + Space 3.5}. Resolved this by upgrading and re-packaging the model.","Q_Score":1,"Tags":"python,spacy,named-entity-recognition","A_Id":75515376,"CreationDate":"2023-02-13T19:24:00.000","Title":"'RobertaTokenizerFast' object has no attribute '_in_target_context_manager' error while loading data into custom NER model","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a problem (that I think I'm over complicating) but for the life of me I can't seem to solve it.\nI have 2 dataframes. One containing a list of items with quantities that I want to buy. I have another dataframe with a list of suppliers, unit cost and quantity of items available. Along with this I have a dataframe with shipping cost for each supplier.\nI want to find the optimal way to break up my order among the suppliers to minimise costs.\nSome added points:\n\nSuppliers won't always be able to fulfil the full order of an item so I want to also be able to split an individual item among suppliers if it is cheaper\nShipping only gets added once per supplier (2 items from a supplier means I still only pay shipping once for that supplier)\n\nI have seen people mention cvxpy for a similar problem but I'm struggling to find a way to use it for my problem (never used it before).\nSome advice would be great.\nNote: You don't have to write all the code for me but giving a bit of guidance on how to break down the problem would be great.\nTIA","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":152,"Q_Id":75447782,"Users Score":0,"Answer":"Some advice too large for a comment:\nAs @Erwin Kalvelagen alludes to, this problem can be described as a math program, which is probably the most common-sense approach.\nThe generalized plan of attack is to figure out how to create an expression of the problem using some modeling package and then turn that problem over to a solver engine which uses diverse techniques to find the optimal answer.\ncvxpy is certainly 1 of the options to do the first part with. I'm partial to pyomo, and pulp is also viable. pulp also installs with a solver (cbc) which is suitable for this type of problem. In other cases, you may need to install separately.\nIf you take this approach, look through a text or some online examples on how to formulate a MIP (mixed integer program). 
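For instance, a toy sketch with pulp, ignoring shipping for the moment (the supplier names, costs and stock numbers are made up for illustration):

import pulp

suppliers = ["s1", "s2"]
unit_cost = {"s1": 2.0, "s2": 2.5}   # cost per unit of a single item
stock = {"s1": 5, "s2": 10}
demand = 8

prob = pulp.LpProblem("order_split", pulp.LpMinimize)
qty = pulp.LpVariable.dicts("qty", suppliers, lowBound=0, cat="Integer")

prob += pulp.lpSum(unit_cost[s] * qty[s] for s in suppliers)   # objective: total cost
prob += pulp.lpSum(qty[s] for s in suppliers) == demand        # meet the demand
for s in suppliers:
    prob += qty[s] <= stock[s]                                 # respect stock limits

prob.solve()
print({s: qty[s].value() for s in suppliers})

The once-per-supplier shipping charge would then be modelled with a binary "supplier used" variable linked to the quantities.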
You'll have some sets (perhaps items, suppliers, etc.), data that form constraints or limits, some variables indexed by the sets, and an objective....likely to minimize cost.\nForget about the complexities of split-orders and combined shipping at first and just see if you can get something working with toy data, then build out from there.","Q_Score":1,"Tags":"python,optimization,cvxpy,operations-research","A_Id":75453931,"CreationDate":"2023-02-14T12:23:00.000","Title":"How would I go about finding the optimal way to split up an order","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a numpy array with a shape of (3, 4096). However, I need it's shape to be (4096, 3). How do I accomplish this?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":151,"Q_Id":75475397,"Users Score":0,"Answer":"Use:\narr=arr.T\n(or)\narr=np.transpose(arr)\n(or)\narr= arr.reshape(4096, 3)\nwhere arr is your array with shape (3,4096)","Q_Score":1,"Tags":"python,python-3.x,numpy,numpy-ndarray","A_Id":75552015,"CreationDate":"2023-02-16T16:43:00.000","Title":"How to reverse the shape of a numpy array","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to solve the differential equation 4(y')^3-y'=1\/x^2 in python. I am familiar with the use of odeint to solve coupled ODEs and linear ODEs, but can't find much guidance on nonlinear ODEs such as the one I'm grappling with.\nAttempted to use odeint and scipy but can't seem to implement properly\nAny thoughts are much appreciated\nNB: y is a function of x","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":62,"Q_Id":75479380,"Users Score":1,"Answer":"The problem is that you get 3 valid solutions for the direction at each point of the phase space (including double roots). But each selection criterion breaks down at double roots.\nOne way is to use a DAE solver (which does not exist in scipy) on the system y'=v, 4v^3-v=x^-2\nThe second way is to take the derivative of the equation to get an explicit second-order ODE y''=-2\/x^3\/(12*y'^2-1).\nBoth methods require the selection of the initial direction from the 3 roots of the cubic at the initial point.","Q_Score":1,"Tags":"python,scipy,differential-equations,odeint","A_Id":75481202,"CreationDate":"2023-02-17T01:10:00.000","Title":"Solving nonlinear differential equations in python","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a dataset which contains the longitude and latitude of the 1000 largest US cities. I'm designing an API which returns the user's nearest city, given an input of the user's longitude\/latitude.\nWhat is the most efficient algorithm I can use to calculate the nearest city? I know that I can use the haversine formula to calculate the distance between the user's coordinate and each cities, but it seems inefficient to have to do this for all 1000 cities. 
I've previously used a k-d tree to solve nearest neighbour problems on a plane - is there a similar solution that can be used in the context of a globe?\nEdit: keeping this simple - distance I'm looking for is as the crow flies. Not taking roads or routes into account at this stage.","AnswerCount":4,"Available Count":2,"Score":0.049958375,"is_accepted":false,"ViewCount":196,"Q_Id":75495739,"Users Score":1,"Answer":"You can split the map into squares that do not overlap and they cover the whole US map (i.e., you will have a grid). You will number the squares using the coordinates of their upper left corner (i.e., each one will have a unique ID) and you will do a preprocessing where each city will be assigned with the ID of the square where it belongs. You will find the square where the user lies into and then you will check only the cities that lie into this square and the ones that are one step from this (total: 9 squares). If these are empty of cities, you will check the ones that are two steps of it etc. In this way, on average you will check much less cities to find the closest","Q_Score":3,"Tags":"python,algorithm,data-structures,nearest-neighbor","A_Id":75514588,"CreationDate":"2023-02-18T19:07:00.000","Title":"What algorithm would be most efficient when trying to find the nearest city given a set of coordinates?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a dataset which contains the longitude and latitude of the 1000 largest US cities. I'm designing an API which returns the user's nearest city, given an input of the user's longitude\/latitude.\nWhat is the most efficient algorithm I can use to calculate the nearest city? I know that I can use the haversine formula to calculate the distance between the user's coordinate and each cities, but it seems inefficient to have to do this for all 1000 cities. I've previously used a k-d tree to solve nearest neighbour problems on a plane - is there a similar solution that can be used in the context of a globe?\nEdit: keeping this simple - distance I'm looking for is as the crow flies. Not taking roads or routes into account at this stage.","AnswerCount":4,"Available Count":2,"Score":0.049958375,"is_accepted":false,"ViewCount":196,"Q_Id":75495739,"Users Score":1,"Answer":"This answer is very similar to that of ckc.\nFirst, spilt the 1000 cities in 2 groups : a big one located located between Canada and Mexico and the few others cities located outside this rectangle (i.e Alaska, Hawai, ...).\nWhen processing coordinates, check if they belong to the small group : in this case, no optimisation needed.\nTo optimize the other case, you may divide the map in rectangles (example 5\u00b0lat x 7\u00b0 lon) and associate to each rectangle the list of cities belonging to each rectangle.\nTo find the nearest city, consider the rectangle R containing the point.\nCompute the distance to the cities of the rectangle.\nProcess the 8 rectangles adjacent to R by computing the distance of the point to each rectangle : you may then eliminate the adjacent rectangles whose distance is greater than the best distance already found.\nIterate the process to a next level, i.e. 
the next crown (rectangles located on the outside of the area composed of 5x5 rectangles whose center is R).","Q_Score":3,"Tags":"python,algorithm,data-structures,nearest-neighbor","A_Id":75514598,"CreationDate":"2023-02-18T19:07:00.000","Title":"What algorithm would be most efficient when trying to find the nearest city given a set of coordinates?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Given , sklearn.neural_network and simple Deep Learning by Keras with Sequential and Dese Nodes, are the mathematically same just two API's with computation optimization?\nYes Keras has a Tensor Support and could also liverage GPU and Complex models like CNN and RNN are permissible.\nHowever, are they mathematically same and we will yield same results given same hyper parameter , random state, input data etc ?\nElse apart from computational efficiency what maker Keras a better choice ?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":48,"Q_Id":75516086,"Users Score":1,"Answer":"I don't think they will give you the exact same results as the internal implementations for 2 same operations are different even across pytorch and tensorflow.\nWhat makes Keras a better option is the ecosystem. You have the DataLoaders which can load the complex data in batches for you in the desired format, then you have the Tensorboard where you can see the model training, then you have preprocessing functions especially for data augmentations. In TF\/Keras, you now even have data augmentation layers, in PyTorch Torchvision provides this in Transforms. Then you have the flexibility, you can define what types of layers in what order you want, what should be the initializer of the layer, do you want batch norm between layers or not, do you want a dropout layer between the layers or not, what should be the activation of hidden layers you can have relu in 1 layer and tanh in other, you can define how your forward pass should exist, blah blah. Then you have the callbacks to customize the training experience, and so on.","Q_Score":1,"Tags":"python,tensorflow,keras,deep-learning,neural-network","A_Id":75516124,"CreationDate":"2023-02-21T03:38:00.000","Title":"Difference between sklearn.neural_network and simple Deep Learning by Keras with Sequential and Dese Nodes?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using MAC and trying to run Ultralytics YOLOv8 pre-trained model to detect objects in my project. However, despite trying to use MPS, I am still seeing the CPU being used in torch even after running the Python code. 
Specifically, the output I see is: \"Ultralytics YOLOv8.0.43 \ud83d\ude80 Python-3.9.16 torch-1.13.1 CPU\".\nI wanted to know if has support for MPS in YOLOv8, and how can use it?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":222,"Q_Id":75526898,"Users Score":0,"Answer":"Try adding \"--device mps\" as a parameter when running the command line","Q_Score":1,"Tags":"python,macos,cpu,yolo","A_Id":75748615,"CreationDate":"2023-02-21T23:28:00.000","Title":"Unable to Use MPS to Run Pre-trained Object Detection Model on GPU","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have big polars dataframe that I want to write into external database (sqlite for example)\nHow can I do it?\nIn pandas, you have to_sql() function, but I couldn't find any equivalent in polars","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":939,"Q_Id":75559239,"Users Score":1,"Answer":"You can use the DataFrame.write_database method.","Q_Score":2,"Tags":"python,sqlite,rust,python-polars","A_Id":76377130,"CreationDate":"2023-02-24T16:47:00.000","Title":"How do I write polars dataframe to external database?","Data Science and Machine Learning":1,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I make a deep learning model for classification. The model consist of 4 Conv2d layer, 1 pooling layer, 2 dense layer and 1 flatten layer. When i do this arrangement of layers: Conv2D, Conv2D, Conv2D, Conv2D, pooling, dense, flatten, dense then my results are good. But when i follow this arrangement: Conv2D, Conv2D, Conv2D, Conv2D, pooling, flatten, dense, dense then the classification results are not good. My question is putting flatten layer between two dense layer is correct or not?\nCan I follow the pattern of layer by which i am getting good classification results?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":50,"Q_Id":75577762,"Users Score":0,"Answer":"Typically, it is not recommended to sandwich a flatten layer between dense layers, and as suggested by Corralien, It doesn't provide any value. Your other architecture Conv2D, Conv2D, Conv2D, Conv2D, pooling, flatten, dense, dense is more legit. If your model is providing you with good results, you might want to keep it, but technically you do not need the flatten layer between the two dense layers.\nYou can consider using Conv2D, Conv2D, Conv2D, Conv2D, Pooling, dense, dense. Or a better alternative would be to try playing with your architecture. 
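As a rough illustration of the Conv2D, Conv2D, Conv2D, Conv2D, pooling, flatten, dense, dense arrangement discussed here (the input shape, layer widths and class count are arbitrary placeholders, not taken from the question):

from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation="relu", input_shape=(64, 64, 3)),
    layers.Conv2D(32, (3, 3), activation="relu"),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),                       # flatten once, right before the dense head
    layers.Dense(128, activation="relu"),
    layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
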
Such as adding another pooling layer between the four Conv2d layers like: Conv2D, Conv2D, Pooling, Conv2D, Conv2D, Pooling, flatten, dense, dense, and proceed with adjusting your hyperparameters.","Q_Score":1,"Tags":"python,tensorflow,keras,deep-learning,conv-neural-network","A_Id":75578267,"CreationDate":"2023-02-27T07:31:00.000","Title":"Placement of Flatten layer in deep learning model","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am working with satellite images with different spatial resolutions, understood as pixel\/meter. For experiments I want to artificially down-sample these images, keeping the image size constant. For example I have a 512x512 image with spatial resolution 0.3m\/pixel. I want to downsample it to 0.5m\/pixel 512x512. I got advised to apply a Gaussian kernel to blur the image. But how do I calculate the standard deviation and kernel size of a Gaussian kernel to approximate the desired lower resolution? I can't find a rigorous method to do that calculation. Any help really much appreciated!\nChatGTP says that the formula is:\nsigma = (desired_resolution \/ current_resolution) \/ (2 * sqrt(2 * log(2)))\nand kernel_size = 2 * ceil(2 * sigma) + 1\nBut can't explain why. Can someone explain how standard deviation (sigma) and desired output resolution are connected? And how do I know which sigma to use? Oftentimes these existing resizing functions ask for a sigma, but in their documentation don't explain how to derive it.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":212,"Q_Id":75592762,"Users Score":0,"Answer":"I wonder where that equation for the sigma comes from, I have never seen it. It is hard to define a cutoff frequency for the Gaussian.\nThe Gaussian filter is quite compact in both the spatial domain and the frequency domain, and therefore is an extremely good low-pass filter. But it has no clear point at which it attenuates all higher frequencies sufficiently to no longer produce visible aliasing artifacts, without also attenuating lower frequencies so much that the downsampled image looks blurry.\nOf course we can follow the tradition from the field of electronics, and define the cutoff frequency as the frequency above which the signal gets attenuated with at least 3dB. I think this definition might have lead to the equation in the OP, though I don\u2019t feel like attempting to replicate that computation.\nFrom personal experience, I find 0.5 times the subsampling factor to be a good compromise for regular images. For example, to downsample by a factor of 2, I\u2019d apply a Gaussian filter with sigma 1.0 first. For OP\u2019s example of going from 0.3 to 0.5 m per pixel, the downsampling factor is 0.5\/0.3 = 1.667, half that is 0.833.\nNote that a Gaussian kernel with a sigma below 0.8 cannot be sampled properly without excessive aliasing, applying a Gaussian filter with a smaller sigma should be done through multiplication in the frequency domain.\nFinally, the kernel size. The Gaussian is infinite in size, but it becomes nearly zero very quickly, and we can truncate it without too much loss. The calculation 2 * ceil(2 * sigma) + 1 takes the central portion of the Gaussian of at least four sigma, two sigma to either side. The ceiling operation is the \u201cat least\u201d, it needs to be an integer size of course. 
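Putting numbers on the 0.3 m to 0.5 m per pixel example, a quick back-of-the-envelope check:

import math

factor = 0.5 / 0.3                     # downsampling factor, about 1.667
sigma = 0.5 * factor                   # rule of thumb above, about 0.83
size = 2 * math.ceil(2 * sigma) + 1    # two-sigma truncation -> 5 pixels
print(sigma, size)

With the three-sigma truncation recommended just below, the same sigma gives 2 * ceil(3 * 0.83) + 1 = 7 pixels.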
The +1 accounts for the central pixel. This equation always produces an odd size kernel, so it can be symmetric around the origin.\nHowever, two sigma is quite small for a Gaussian filter, it cuts off too much of the bell shape, affecting some of the good qualities of the filter. I always recommend using three sigma to either side: 2 * ceil(3 * sigma) + 1. For some applications the difference might not matter, but if your goal is to quantify, I would certainly try to avoid any sources of error.","Q_Score":1,"Tags":"python,image-processing,resolution,image-resizing,gaussianblur","A_Id":75593965,"CreationDate":"2023-02-28T13:30:00.000","Title":"Calculating Gaussian Kernel sigma and width to approximate a desired lower resolution pixel\/m for satellite images","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm looking for a way to merge df. However, I don't know what would be the best way to do this.\nfirst df - metro cities\/population\/teams\n\n\n\n\nMetropolitan area\nPopulation (2016 est.)[8]\nNHL\n\n\n\n\nPhoenix\n4661537\nCoyotes\n\n\nLos Angeles\n13310447\nKings Ducks\n\n\nToronto\n5928040\nMaple Leafs\n\n\nBoston\n4794447\nBruins\n\n\nEdmonton\n1321426\nOilers\n\n\nNew York City\n20153634\nRangers Islanders Devils\n\n\n\n\nSecond df - team\/wins\/losses\n\n\n\n\nteam\nw\nL\n\n\n\n\nLos Angeles Kings\n46\n28\n\n\nPhoenix Coyotes\n37\n30\n\n\nToronto Maple Leafs\n49\n26\n\n\nBoston Bruins\n50\n20\n\n\nEdmonton Oilers\n29\n44\n\n\nNew York Islanders\n34\n37\n\n\n\n\nI tried to merge across teams. However, I need to arrange this data so that it collides in the Merge. I don't know how I would do that without looking at it case by case.\nNote: The data set is much larger and with more cities and teams.\nI had a little trouble presenting the DF here, so I only put 6 rows and the main columns.","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":80,"Q_Id":75665688,"Users Score":1,"Answer":"If you are trying to get the city part of a NHL team name, you could for example:\nMake a hash map which contains all the possible city names; {\"Toronto\": \"Toronto\"},\nsplit the NHL TEAM string and check if the hash map contains any part of the string. If it does that's the city name.\nWith the limited amount of possible city names that's not too bad.\nBut I'm not exactly sure what you are trying to accomplish, you should elaborate and simplify your question.","Q_Score":1,"Tags":"python,pandas,dataframe,data-cleaning","A_Id":75666854,"CreationDate":"2023-03-07T18:06:00.000","Title":"Thinking about the best way to merge two DataFrame","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using pytorch geometric to train a graph neural network. The problem that led to this question is the following error:\n\nRuntimeError: Expected all tensors to be on the same device, but found\nat least two devices, cpu and cuda:0! (when checking argument for\nargument mat1 in method wrapper_addmm)\n\nSo, I am trying to check which device the tensors are loaded on, and when I run data.x.get_device() and data.edge_index.get_device(), I get -1 for each. 
What does -1 mean?\nIn general, I am a bit confused as to when I need to transfer data to the device (whether cpu or gpu), but I assume for each epoch, I simply use .to(device) on my tensors to add to the proper device (but as of now I am not using .to(device) since I am just testing with cpu).\nAdditional context:\nI am running ubuntu 20, and I didn't see this issue until installing cuda (i.e., I was able to train\/test the model on cpu but only having this issue after installing cuda and updating nvidia drivers).\nI have cuda 11.7 installed on my system with an nvidia driver compatible up to cuda 12 (e.g., cuda 12 is listed with nvidia-smi), and the output of torch.version.cuda is 11.7. Regardless, I am simply trying to use the cpu at the moment, but will use the gpu once this device issue is resolved.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":136,"Q_Id":75709399,"Users Score":0,"Answer":"-1 means the tensors are on CPU. when you do .to(device), what is your device variable initialized as? If you want to use only CPU, I suggest initializing device as device = torch.device(\"cpu\") and running your python code with CUDA_VISIBLE_DEVICES='' python your_code.py ...\nTypically, if you are passing your tensors to a model, PyTorch expects your tensors to be on the same device as your model. If you are passing multiple tensors to a method, such as your loss function ``nn.CrossEntropy()```, PyTorch expects both your tensors (predictions and labels) to be on the same device.","Q_Score":1,"Tags":"python,pytorch,pytorch-geometric","A_Id":75709515,"CreationDate":"2023-03-11T20:42:00.000","Title":"What does it mean if -1 is returned for .get_device() for torch tensor?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"is there a way to reload automatically a jupyter notebook, each time it crashes ?\nI am actually running a notebook, that trains a Deep learning model (the notebook can reload the last state of model, with state of optimizer and scheduler, after each restart of the kernel ), so that reloading the notebook after a crash enables to get back the last state without a substantial loss of computations.\nI was wondering if there was a simple way to do that using the jupyter notebook API, or a signal from the jupyter notebook for example (maybe on logs).\nAlso, I am running the notebook on google cloud platform (on compute engine), if you know any efficient way to do it, using the GCP troubleshooting services, and the logging agent, it might be interested for me and for others with the same issue.\nThank you again for you time.\nI tried to look up for a solution on stack overflow, but I didn't find any similar question.","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":134,"Q_Id":75745337,"Users Score":0,"Answer":"From your comment:\n\"reloading the notebook after a crash enables to get back the last state without a substantial loss of computations.\"\nWhat do you call a crash?, does it generate logs that can be parsed from \/var\/log or other location (e.g journalctl -u jupyter.service) ? If so you can manually create a shell script.\nWith User Managed Notebooks you have the concept of post-startup-script or startup-script\npost-startup-script, is path to a Bash script that automatically runs after a notebook instance fully boots up. 
The path must be a URL or Cloud Storage path. Example: \"gs:\/\/path-to-file\/file-name\"\nThis script can be a loop that monitors the crash you mention","Q_Score":1,"Tags":"python,google-cloud-platform,jupyter-notebook,gcp-ai-platform-notebook","A_Id":75753196,"CreationDate":"2023-03-15T13:25:00.000","Title":"automatic reloading of jupyter notebook after crash","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I use matplotlib in my lib to display legend on a ipyleaflet map. In my CD\/CI tests I run several checks on this legend (values displayed, colors etc...). My problem is when it's run on my local computer, matplotlib open a legend popup windows that stops the execution of the tests.\nIs it possible to force matplotlib to remain non-interactive when I run my pytest session ?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":91,"Q_Id":75769820,"Users Score":0,"Answer":"You can change the matplotlib backend to a non graphical one by calling matplotlib.use('Agg') at the beginning of your scripts. This will prevent matplotlib from opening windows.","Q_Score":1,"Tags":"python,matplotlib","A_Id":75788130,"CreationDate":"2023-03-17T16:08:00.000","Title":"How to prevent matplotlib to be shown in popup window?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"import:from keras.preprocessing import sequence\nbut:\nAttributeError: module 'keras.preprocessing.sequence' has no attribute 'pad_sequences'\nWhy?\nHow can I edit this?","AnswerCount":2,"Available Count":1,"Score":0.2913126125,"is_accepted":false,"ViewCount":1238,"Q_Id":75819004,"Users Score":3,"Answer":"Seems like your keras version is greater than 2.8 that's why getting error as\nfrom keras.preprocessing import sequence\nworks for earlier version. 
Instead, replace with the below given code:\nfrom keras.utils.data_utils import pad_sequences\nYou can also use:\nfrom tensorflow.keras.preprocessing.sequence import pad_sequences\nThey both worked for me.","Q_Score":2,"Tags":"python","A_Id":75820277,"CreationDate":"2023-03-23T03:24:00.000","Title":"python keras.preprocessing.sequence has no attribute pad_sequences","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"If I have a DataFrame, df, for which df.index.empy is True, will this ALWAYS imply that df.empty is also True?\nMy intend is to test only df.index.empy when I need to test both conditions (lazy programming style).","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":35,"Q_Id":75821671,"Users Score":0,"Answer":"yes, if index of DataFrame is empty it will always satisfies condition of DataFrame.empty\nFor e.g\n\nDataFrame.empty = True\n\nworks in both condition for index as well as for columns.\nHence if you want to check any of these are empty then you can go with\n\nDataFrame.empty\n\nelse need to be specific\n\nDataFrame.index.empty","Q_Score":1,"Tags":"python,pandas,dataframe","A_Id":75822061,"CreationDate":"2023-03-23T10:10:00.000","Title":"Does DataFrame.index.empty imply DataFrame.empty?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"matplotlib has completely broken my python env.\nWhen i run:\nimport matplotlib as plt\nI received:\nFileNotFoundError: [Errno 2] No such file or directory: 'C:\\Users\\SamCurtis.AzureAD\\AppData\\Roaming\\Python\\Python38\\site-packages\\matplotlib.libs\\.load-order-matplotlib-3.7.1'\nI receive the same error if i try to pip install OR pip uninstall matplotlib\nInfact all my pip functionality is broken (i cannot pip freeze, uninstall \/ install) anything.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":288,"Q_Id":75830268,"Users Score":0,"Answer":"I bumped into a similar problem just now after attempting to downgrade back to my old matplotlib version from 3.7.1. pip was throwing up this matplotlib.libs error even when I wasn't trying to do anything involving matplotlib.\nThe solution was to delete the matplotlib and mpl_toolkits directories from site-packages. Then I was able to reinstall my old matplotlib version and use pip as usual.","Q_Score":1,"Tags":"python,matplotlib,filenotfounderror","A_Id":76129774,"CreationDate":"2023-03-24T05:22:00.000","Title":"matplotlib filenotfounderror site-packages","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to run an experiment where I have PageRank values and a directed graph built. I have a graph in the shape of a star (many surrounding nodes that point to a central node).\nAll those surrounding nodes have already a PageRank value precomputed and I want to check how the central node PageRank value is affected by the surrounding ones.\nIs there a way to perform this with networkx? 
I've tried building the graph with weighs (using the weights to store the precomputed PageRank values) but at the end, it look does not look like the central node value changes much.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":23,"Q_Id":75832557,"Users Score":0,"Answer":"I will answer myself the question. In the PageRank method for NetworX you have the parameter nstart, which specifically is the starting pagerank point for the nodes.\n\nnstart : dictionary, optional\nStarting value of PageRank iteration for each node.\n\nStill, I'm afraid the graph structure is the limiting factor when doing the random walk and obtaining a relevant result.","Q_Score":1,"Tags":"python,networkx,pagerank","A_Id":75832951,"CreationDate":"2023-03-24T10:35:00.000","Title":"Initial pagerank precomputed values with networkx","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I try to make image some changes(crop, resize, undistort) and I want to know how distortion coefficients and camera intrinsinc parameters change after that.\n\nOrigin Image shape = [848, 480]\ncamera matrix = [[fx, 0, cx], [0, fy, cy], [0, 0, 1]]\ndistortion coefficients = [k1, k2, p1, p2]\n\ncrop\n\n[848, 480] -> [582, 326]\nfx, fy : no changes\ncx, cy : cx -133, cy - 77\ndistortion coefficients -> ??\n\nresize\n\n[582, 326] -> [848, 480]\nfx, cx -> 1.457fx, 1.457cx\nfy, cy -> 1.472fy, 1.472cy\n[k1, k2, p1, p2] -> [k1, k2, p1, p2] same\n\nundistort\n\nfx, fy, cx, cy -> same\n[k1, k2, p1, p2] -> [0, 0, 0, 0]\n\nDoes anyone knows the answer?\nFor me I tried using my camera and calibrate some results but I don't know the exact equation.\norigin\n\nfx = 402.242923\nfy = 403.471056\ncx = 426.716067\ncy = 229.689399\nk1 = 0.068666\nk2 = -0.039624\np1 = -0.000182\np2 = -0.001510\n\ncrop\n\nfx = 408.235312 -> almost no change\nfy = 409.653612 -> almost no change\ncx = 297.611639 -> cx - 133\ncy = 153.667098 -> cy - 77\nk1 = 0.048520 -> I don't know\nk2 = -0.010035 -> I don't know\np1 = 0.000943 -> I don't know\np2 = -0.000870 -> I don't know\n\ncrop_resize\n\nfx = 598.110106 -> almost * 1.457\nfy = 608.949995 -> almost * 1.472\ncx = 430.389861 -> almost * 1.457\ncy = 226.585804 -> almost * 1.472\nk1 = 0.054762 -> I don't know\nk2 = -0.025597 -> I don't know\np1 = 0.002752 -> I don't know\np2 = -0.001316 -> I don't know\n\nundistort\n\nfx = 404.312916 -> almost same\nfy = 405.544033 -> almost same\ncx = 427.986926 -> almost same\ncy = 229.213162 -> almost same\nk1 = -0.000838 -> almost 0\nk2 = 0.001244 -> almost 0\np1 = -0.000108 -> almost 0\np2 = 0.000769 -> almost 0","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":191,"Q_Id":75873241,"Users Score":2,"Answer":"All part you write as \"I don't know\" will be \"same(not changed)\".\nBecause Cropping and Resizing is representable with only (cx,cy,fx,fy).","Q_Score":1,"Tags":"python,opencv,camera,distortion,camera-intrinsics","A_Id":75873422,"CreationDate":"2023-03-29T04:51:00.000","Title":"How does camera distortion coefficients and camera intrinsic parameters change after image crop or resize?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I was just asking myself: I 
understand that calling df[column_name] displays a Series because a DataFrame is built of different arrays.\nThough, why does calling df[[column_name]] (column_name being only one column) return a DataFrame and not a Series? I'm not sure I understand the logic behind how Pandas was built here.\nThanks :)\nI was trying to explain to my students why indexing with a list of one element displays a DataFrame and not a Series, but did not manage","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":63,"Q_Id":75886847,"Users Score":0,"Answer":"It may happen because when you give a single column_name as a string, it performs the selection and returns a single column based on the search key column_name.\nBut when you provide the same column_name contained in a list, it tries to fetch all the keys of the list, which is one in this case, hence resulting in a DataFrame.\nI guess they are using some standard logic to return a DataFrame whenever a list is provided, irrespective of the length of the list.\nimport pandas as pd\ndf = pd.DataFrame(columns=[\"a\",\"b\",\"c\"],\ndata=[[1,4,7],[2,5,8],[3,6,9]])\ncolumn_name = \"a\"\nprint(type(df[column_name]))\nprint(type(df[[column_name]]))\noutput:\n\n<class 'pandas.core.series.Series'>\n<class 'pandas.core.frame.DataFrame'>","Q_Score":1,"Tags":"python,pandas,dataframe,numpy,data-analysis","A_Id":75887040,"CreationDate":"2023-03-30T10:09:00.000","Title":"Why does using df[[column_name]] display a DataFrame?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have two numeric and huge np.arrays (let's call them S1 and S2, such that len(S1)>>len(S2)>>N where N is a very large number). I wish to find the most likely candidate part of S1 to be equal to S2.\nThe naive approach would be to compute a running difference between S2 and parts of S1. This would take too long (about 170 hours for a single comparison).\nAnother approach I thought about was to manually create a matrix of windows, M, where each row i of M is S1[i:(i+len(S2))]. Then, under this approach, we can broadcast a difference operation. It is also infeasible because it takes a long time (less than the most naive, but still), and it uses all the RAM I have.\nCan we parallelize it using a convolution? Can we use torch\/keras to do something similar? Bear in mind I am looking for the best candidate, so the values of some convolution just have to preserve order, so the most likely candidate will have the smallest value.","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":66,"Q_Id":75965921,"Users Score":1,"Answer":"I am assuming you are doing this as a stepping stone to find the perfect match.\nMy reason for assuming this is that you say:\n\nI wish to find the most likely candidate part of S1 to be equal to S2.\n\n\nStart with the first value in the small array.\n\nMake a list of all indices of the big array that match that first value of the small array. That should be very fast. Let's call that array indices, and it may have values [2,123,457,513, ...]\n\nNow look at the second value in the small array. Search through all positions indices+1 of the big array, and test for matches to that second value. This may be faster, as there are relatively few comparisons to make. Write those successful hits into a new, smaller, indices array.\n\nNow look at the third value in the small array, and so on. 
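A compact NumPy version of steps 1-3 above (the function and variable names are mine; it assumes, as this answer does, that an exact match is ultimately what you are after, and it returns every start position where s2 occurs verbatim in s1):

import numpy as np

def find_exact_matches(s1, s2):
    # Step 1: candidate start positions whose value equals s2[0]
    # (limit to starts that leave room for the whole of s2)
    indices = np.flatnonzero(s1[: len(s1) - len(s2) + 1] == s2[0])
    # Steps 2, 3, ...: keep only candidates that also match the next values
    for offset in range(1, len(s2)):
        indices = indices[s1[indices + offset] == s2[offset]]
    return indices

s1 = np.random.randint(0, 256, size=1_000_000)
s2 = s1[123_456:123_456 + 50].copy()   # plant a known occurrence
print(find_exact_matches(s1, s2))      # contains 123456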
Eventually the indices array will have shrunk to size 1, when you have found the single matched position.\n\n\nIf the individual numerical values in each array are 0-255, you might want to \"clump\" them into, say, 4 values at a time, to speed things up. But if they are floats, you won't be able to.\nTypically the first few steps of this approach will be slow, because it will be inspecting many positions. But (assuming the numbers are fairly random), each successive step becomes much faster. Therefore the determining factor in how long it will take will be the first few steps through the small array.\nThis would demand memory size as large as the largest plausible length of indices. (You could overwrite each indices list with the next version, so you would only need one copy.)\nYou could parallelise this:\nYou could give each parallel process a chunk of the big array (s1). You could make the chunks overlap by len(s2)-1, but you only need to consider start positions in the non-overlapping part of each chunk on the first iteration: the last few elements are just there to allow you to detect sequences that end there (but not start there).\nProviso\nAs @Kelly Bundy points out below, this won't help you if you are not on a journey that ultimately ends in finding a perfect match.","Q_Score":1,"Tags":"python,list,numpy","A_Id":75966696,"CreationDate":"2023-04-08T14:54:00.000","Title":"Find the most similar subsequence in another sequence when they are both numeric and huge","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a CatboostRanker (with groups) model trained on a dataset of ~1m rows of ~300 features. Three such features are ~90% invalid (have a special category denoting such). My data is time series based, so the invalid data of these features is all but the most recent data.\nFor some reason, these three features are amongst the top 5 most important according to their shapley values (the absolute sum of all shap values for a given feature). When looking at the individual shap values for each individual object, ALL of them are positive, meaning they all contribute positively to the target variable of binary [0,1].\nI don't understand why the 90% of invalid values for these features all carry with them a positive shapley value, since the invalid category theoretically confers no helpful information. I've read several explanations of shap values, and understand their mathematical basis, but am still no closer to an answer. Any suggestions would be much appreciated.\nOn the other hand, Catboost's own permutation-based get_feature_importance method ranks these variables lowly, which seems far more intuitive.\nMethod: I used shap.TreeExplainer on the trained model, then extracted the shap values from the explainer by passing a catboost pool containing my training data.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":157,"Q_Id":76092679,"Users Score":0,"Answer":"SHAP does not determine if your features are valid or not. It determines the importance of those features to your model's output. 
I had the same confusion when implementing it for some use cases.\nIf you go back to the game theory origin of SHAP, you can understand it like this: SHAP does not determine if the player (your feature) is a good player (valid feature) but it determines how much it contributes to the game (your output).\nIf you want to gain an understanding of the validity of your features, you should look at the model's performance as well (your metrics).","Q_Score":1,"Tags":"python,feature-selection,shap,catboost","A_Id":76092804,"CreationDate":"2023-04-24T13:56:00.000","Title":"Why are the SHAP values of a feature with mostly invalid values all positive?","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have two dataframes of similar size. Let's say df1 and df2. For both data frames a common column is selected as the index. Let's say the name of the column which is set as the index is Id.\nWhen I run the code df1.equals(df2), it returns False. But when I try to compare both of the data frames using df1.compare(df2), only the indexed column name, i.e. Id, is returned without any values in it.\nWhat should I conclude from this?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":71,"Q_Id":76110576,"Users Score":0,"Answer":"Use assert_frame_equal(df1, df2, check_names=False)","Q_Score":1,"Tags":"python,pandas,dataframe","A_Id":76332485,"CreationDate":"2023-04-26T12:10:00.000","Title":"Pandas df.compare() returns the name of a column","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to use time series cross validation to evaluate my model. I'm trying to leverage TimeSeriesSplit and cross_validate from sklearn.\nLet's say my model has features A, B, and others. In practice I want my model only to give predictions for categories of A and B seen during training; for all other categories it will raise an error.\nHow can I cross validate my model while enforcing this behaviour? Could I still use the sklearn implementations, adapt them with minor changes, or would I have to build my cross validation from scratch?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":37,"Q_Id":76162845,"Users Score":0,"Answer":"Import TimeSeriesSplit.\nThen create an instance of TimeSeriesSplit (set the test size parameter to 1).\nThen define a method that filters out all the unwanted categories from the train and test dataset and outputs your filtered data.\nImport cross_validate from scikit-learn, then make sure to call your custom function for every iteration of your cross_validate.\n\nThis way you can implement cross validation with minor changes rather than implementing it all from scratch.","Q_Score":1,"Tags":"python,pandas,scikit-learn,cross-validation","A_Id":76163383,"CreationDate":"2023-05-03T10:32:00.000","Title":"Time series cross-validation removing new categories from test","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am new to using or-tools and I have been tasked with minimizing a quadratic objective function with lots of IntVar variables. 
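Going back to the time-series cross-validation answer just above: a minimal sketch of the filtering idea, written as an explicit loop over TimeSeriesSplit rather than cross_validate for clarity (the column names, toy data, and model are made up; only test rows whose A and B categories were seen in the training fold are kept):

import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import TimeSeriesSplit

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "A": rng.choice(["x", "y", "z"], size=200),
    "B": rng.choice(["u", "v"], size=200),
    "other": rng.normal(size=200),
    "target": rng.normal(size=200),
})

def drop_unseen_categories(train, test, cols=("A", "B")):
    # Keep only test rows whose A and B categories appear in the training fold
    mask = np.ones(len(test), dtype=bool)
    for col in cols:
        mask &= test[col].isin(train[col].unique()).to_numpy()
    return test[mask]

scores = []
for train_idx, test_idx in TimeSeriesSplit(n_splits=5).split(df):
    train, test = df.iloc[train_idx], df.iloc[test_idx]
    test = drop_unseen_categories(train, test)
    if test.empty:
        continue
    features = ["other"]  # a real model would also encode A and B
    model = LinearRegression().fit(train[features], train["target"])
    preds = model.predict(test[features])
    scores.append(mean_absolute_error(test["target"], preds))

print(scores)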
I don't know where to start breaking down what the problem could be concerning an infeasible model and welcome any tips to begin.\nI've watched several videos about using OR-Tools and experimented in a test file at a smaller scale, but my main problem results in the infeasible model. The videos\/experiments I've done have fewer variables, which I am thinking is the issue but, again, this is my first time using OR-Tools.\nHere is the output from when I run the model:\nStarting CP-SAT solver v9.6.2534\nParameters: log_search_progress: true enumerate_all_solutions: true\nSetting number of workers to 1\nInitial optimization model '': (model_fingerprint: 0x8f376cd881ed44f1)\n#Variables: 214 (#ints:1 in objective)\n\n126 Booleans in [0,1]\n2 in [-100000,100000]\n84 in [0,96]\n2 in [0,2000000]\n#kIntProd: 4004\n\nStarting presolve at 0.00s\nUnsat after presolving constraint #1142 (warning, dump might be inconsistent): int_prod { target { vars: 214 coeffs: 3249 offset: 3249 } exprs { vars: 212 coeffs: 570 offset: -102 } exprs { vars: 212 coeffs: 570 offset: -102 } }\nPresolve summary:\n\n2 affine relations were detected.\nrule 'affine: new relation' was applied 2 times.\nrule 'int_prod: divide product by constant factor' was applied 2 times.\nrule 'int_prod: linearize product by constant.' was applied 1140 times.\nrule 'int_prod: removed constant expressions.' was applied 1140 times.\nrule 'int_square: reduced target domain.' was applied 2 times.\nrule 'linear: remapped using affine relations' was applied 1 time.\nrule 'presolve: iteration' was applied 1 time.\nrule 'variables: canonicalize affine domain' was applied 2 times.\nProblem closed by presolve.\n\nCpSolverResponse summary:\nstatus: INFEASIBLE\nobjective: NA\nbest_bound: NA\nintegers: 0\nbooleans: 0\nconflicts: 0\nbranches: 0\npropagations: 0\ninteger_propagations: 0\nrestarts: 0\nlp_iterations: 0\nwalltime: 0.007945\nusertime: 0.007945\ndeterministic_time: 0\ngap_integral: 0","AnswerCount":1,"Available Count":1,"Score":0.537049567,"is_accepted":false,"ViewCount":47,"Q_Id":76377883,"Users Score":3,"Answer":"I only have some simple advice:\n\nreduce the model size if this is a parameter\nremove as many constraints as you can while keeping the problem infeasible\nlook at the square constraints and the corresponding domains\nadd assumptions to check the soundness of your data (capacity is >= 0, no crazy ranges of values, ...)\nplay with the domain of the variables to enlarge them\ninject a known feasible solution and try to find where it breaks","Q_Score":1,"Tags":"python,or-tools,mixed-integer-programming,quadratic-programming,cp-sat-solver","A_Id":76382848,"CreationDate":"2023-05-31T23:41:00.000","Title":"When resulting in an infeasible model, what tips are there to break down the nexus of the issue","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Fairly new to python-polars.\nHow does it compare to R's {data.table} package in terms of memory usage?\nHow does it handle shallow copying?\nIs in-place\/by reference updating possible\/the default?\nAre there any recent benchmarks on memory efficiency of the big 4 in-mem data wrangling libs (polars vs data.table vs pandas vs dplyr)?","AnswerCount":1,"Available Count":1,"Score":0.537049567,"is_accepted":false,"ViewCount":170,"Q_Id":76400975,"Users Score":3,"Answer":"How does it handle shallow copying?\n\nPolars memory buffers are 
reference counted Copy on Write. That means you can never do a full data copy within polars.\n\nIs in-place\/by reference updating possible\/the default?\n\nNo, you must reassign the variable. Under the hood polars may reuse memory buffers, but that is not visible to the users.\n\nAre there any recent benchmarks on memory efficiency\n\nThe question of how it compares in memory usage also doesn't do justice to the design differences. Polars is currently developing an out-of-core engine. This engine doesn't process all data in memory, but will stream data from disk. The design philosophy of that engine is to use as much memory as needed without going OOM. Unused memory is wasted potential.","Q_Score":3,"Tags":"python,r,data.table,python-polars","A_Id":76401263,"CreationDate":"2023-06-04T14:51:00.000","Title":"Polars memory usage as compared to {data.table}","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have folders named after a lot of locations, and each of them has 360 degree pics of that location. I give it a random picture and it has to guess where I am. I tried ORB and it gave me good results, and I made it better as well, but the issue is that lighting conditions might give me a hard time (e.g. a sunny morning vs. a cloudy morning); as per some papers these parameters can cause an issue, and I wanted to find out whether there is some way I can handle this. Is there any way of finding key points irrespective of the weather outside?\nI tried to use ORB and was getting good results for that moment in time, but when the lighting changes my dataset is not performing well, so I need a solution for that.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":54,"Q_Id":76444908,"Users Score":2,"Answer":"To handle the issue of lighting variations in your scenario, you can try the following approaches:\n\nHistogram Equalization: Apply histogram equalization to normalize\nthe lighting conditions of your images. This technique redistributes\nthe pixel intensities to improve the contrast and make the images\nmore consistent across different lighting conditions.\n\nAdaptive Thresholding: Instead of using a global threshold for image\nbinarization, consider using adaptive thresholding techniques. These\nmethods compute a local threshold for each pixel based on its\nneighborhood, which can help handle variations in lighting.\n\nMulti-Scale Analysis: Perform feature detection and matching at\nmultiple scales to capture different levels of details. By analyzing\nthe image at different scales, you can mitigate the impact of\nlighting variations and improve the chances of finding matching\nkeypoints.\n\nImage Enhancement Techniques: Apply image enhancement techniques,\nsuch as gamma correction or contrast stretching, to improve the\nvisibility of details in the images and compensate for lighting\nvariations.\n\nExplore Deep Learning-based Approaches: Deep learning models, such\nas convolutional neural networks (CNNs), have shown promise in\nhandling variations in lighting conditions. 
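(Coming back to the first item in this list, histogram equalization: a minimal OpenCV sketch of normalizing lighting with CLAHE, a local variant of histogram equalization, before running ORB; the image path is hypothetical.)

import cv2

def orb_features_lighting_normalized(path):
    # Load as grayscale and normalize illumination before extracting features
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    # CLAHE (local histogram equalization) is usually more robust than the
    # global cv2.equalizeHist when illumination varies across the scene
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    normalized = clahe.apply(gray)
    orb = cv2.ORB_create(nfeatures=2000)
    keypoints, descriptors = orb.detectAndCompute(normalized, None)
    return keypoints, descriptors

kp, des = orb_features_lighting_normalized("query_location.jpg")  # hypothetical path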
You can consider\ntraining a CNN on your dataset to learn robust features that are\nless sensitive to lighting changes.","Q_Score":1,"Tags":"python,computer-vision,cbir","A_Id":76445034,"CreationDate":"2023-06-10T05:51:00.000","Title":"What can i use better than ORB for matching photos","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a DataFrame with an index column that uses pandas TimeStamps.\nHow can I get the values of the index as if it were a normal column (df[\"index\"])?","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":45,"Q_Id":76575561,"Users Score":1,"Answer":"You can convert the index to a regular column by using the reset_index() method.","Q_Score":2,"Tags":"python,pandas,dataframe","A_Id":76575639,"CreationDate":"2023-06-28T17:43:00.000","Title":"Get values of DataFrame index column","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I got 2 columns stating the start and end dates of an event in my data frame. I need to calculate the number of 31.12.XXXX's within that range (while treating it as an open interval). Any help is appreciated, thank you.","AnswerCount":3,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":48,"Q_Id":76611287,"Users Score":1,"Answer":"Assuming you mean \"open interval\" in the ordinary mathematical sense, being that the end dates themselves are excluded, then...\nIf you just want to work with the textual form, split each date into DD.MM and YYYY components j1=d1[0:5]; y1=int(d1[6:]); y2=int(d2[6:]).\nSubtract the \"start\" year from the \"finish\" year, subtract one more if the first date is 31.12: n = y2 - y1 - int(j1 == \"31.12\")\nTake the maximum of this and zero (because the answer cannot be negative): if (n<0): n=0\nAlternatively, if you have dates represented in a computable form (e.g. Julian day numbers, Unix Epoch seconds, etc) start by adding one day to the \"start\" date; then take the year of each date, subtract one from the other, and you have your answer.\nSanity checking both approaches, consider:\n\nany two dates in the same year: answer is 0.\n31.12.2020 to 31.12.2021: answer is 0.\n30.12.2020 to 01.01.2021: answer is 1.","Q_Score":1,"Tags":"python,pandas","A_Id":76611698,"CreationDate":"2023-07-04T09:42:00.000","Title":"Counting 31.12.'s within a Date Range","Data Science and Machine Learning":1,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":0,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0}]
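A small pandas sketch of the textual approach described in the last answer above (the column names and sample dates are made up; dates are DD.MM.YYYY strings and both endpoints are excluded):

import pandas as pd

def count_new_years_eves(start, end):
    # Number of 31.12.XXXX dates strictly between start and end
    j1, y1 = start[0:5], int(start[6:])
    y2 = int(end[6:])
    n = y2 - y1 - int(j1 == "31.12")
    return max(n, 0)

df = pd.DataFrame({
    "start": ["30.12.2020", "31.12.2020", "15.03.2021"],
    "end":   ["01.01.2021", "31.12.2021", "20.04.2021"],
})
df["n_dec31"] = [count_new_years_eves(s, e) for s, e in zip(df["start"], df["end"])]
print(df)   # expected counts: 1, 0, 0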