Q_Id | CreationDate | Users Score | Other | Python Basics and Environment | System Administration and DevOps | Tags | A_Id | AnswerCount | is_accepted | Web Development | GUI and Desktop Applications | Answer | Available Count | Q_Score | Data Science and Machine Learning | Question | Title | Score | Database and SQL | Networking and APIs | ViewCount
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
48,897,331 | 2018-02-21T02:33:00.000 | 1 | 0 | 0 | 0 | python,numpy | 48,897,354 | 2 | false | 0 | 0 | Please provide your array structure.
you can use img_array.reshape(8, 8); for this to work the total number of elements must be 64 | 1 | 1 | 1 | I have a numpy array of an image. I want to convert this image into 8*8 blocks using python. How should I do this? | Convert numpy array of an image into blocks | 0.099668 | 0 | 0 | 778
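A minimal sketch of the reshape idea from the answer above; the arrays here are made up for illustration, and for a larger image you would cut it into a grid of 8*8 tiles instead:

```python
import numpy as np

img = np.arange(64)            # 64 elements, e.g. a flattened 8x8 grayscale patch
block = img.reshape(8, 8)      # works only because 64 == 8 * 8
print(block.shape)             # (8, 8)

# For a full image, e.g. 32x32, you can cut it into 8x8 blocks:
img2 = np.arange(32 * 32).reshape(32, 32)
blocks = img2.reshape(4, 8, 4, 8).swapaxes(1, 2)   # shape (4, 4, 8, 8): a 4x4 grid of 8x8 tiles
print(blocks.shape)
```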
48,897,914 | 2018-02-21T03:53:00.000 | 0 | 0 | 1 | 0 | python,regex | 48,899,177 | 3 | false | 0 | 0 | You may want to try this:
(?i)(?<!jan|feb)(?<!uary)\s+[0-9]*[0-9]
Hope it helps. | 2 | 0 | 0 | I am trying to find a pattern which allows me to find a year of four digits. But I do not want to get results in which year is preceded by month e.g "This is Jan 2009" should not give any result, but "This is 2009" should return 2009. I use findall with lookbehind at Jan|Feb but I get 'an 2009' instead of blank. What am I missing? How to do It? | Not able to get desired result from lookbehind in python regex | 0 | 0 | 0 | 41 |
48,897,914 | 2018-02-21T03:53:00.000 | 0 | 0 | 1 | 0 | python,regex | 48,898,384 | 3 | false | 0 | 0 | Any otherwise matching string preceded by a string matching the negative lookbehind is not matched.
In your current regex, [a-z]* \d{4} matches "an 2009".
The negative lookbehind '(?<!Jan|Feb)' does not match the "This is J" part, so it is not triggered.
If you remove '[a-z]*' from the regex, then no match will be returned on your test string.
To fix such problems:
First, write the match you want \d{4}
Then, write what you don't want (?<!Jan |Feb )
That is (?<!Jan |Feb )\d{4} | 2 | 0 | 0 | I am trying to find a pattern which allows me to find a year of four digits. But I do not want to get results in which year is preceded by month e.g "This is Jan 2009" should not give any result, but "This is 2009" should return 2009. I use findall with lookbehind at Jan|Feb but I get 'an 2009' instead of blank. What am I missing? How to do It? | Not able to get desired result from lookbehind in python regex | 0 | 0 | 0 | 41 |
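A small sketch of the suggested fix in action; the sample strings are taken from the question:

```python
import re

pattern = r"(?<!Jan |Feb )\d{4}"   # a 4-digit year not preceded by "Jan " or "Feb "

print(re.findall(pattern, "This is Jan 2009"))  # [] - blocked by the negative lookbehind
print(re.findall(pattern, "This is 2009"))      # ['2009']
```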
48,897,988 | 2018-02-21T04:04:00.000 | 0 | 0 | 0 | 0 | python,catboost | 51,193,611 | 1 | false | 0 | 0 | Try setting the training parameter allow_writing_files to False. | 1 | 1 | 1 | A number of TSV and JSON files are being created when I use the cross validation CV object. I cannot find any way in the documentation to prevent CV from producing these, and I end up deleting them manually. These files are obviously coming from CV (I have checked) and are named after the folds or general results such as time remaining and test scores.
Anyone know of the argument to set to turn it off? | catboost cv producing log files | 0 | 0 | 0 | 317 |
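A rough sketch of how that parameter might be passed to catboost.cv; the data and the other parameters are placeholders, and whether allow_writing_files is accepted inside the params dict should be checked against your CatBoost version:

```python
import numpy as np
import catboost

X = np.random.rand(100, 5)                 # stand-in training data
y = np.random.randint(0, 2, size=100)
train_pool = catboost.Pool(X, y)

params = {
    "loss_function": "Logloss",
    "iterations": 50,
    "allow_writing_files": False,          # intended to stop the TSV/JSON log files
}
cv_results = catboost.cv(train_pool, params, fold_count=3)
```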
48,899,234 | 2018-02-21T06:13:00.000 | 1 | 0 | 0 | 0 | python,tensorflow,recurrent-neural-network,sequence-to-sequence,encoder-decoder | 48,899,306 | 1 | true | 0 | 0 | If for example, you are using Tensorflow's attention_decoder method, pass a parameter "loop_function" to your decoder. Google search for "extract_argmax_and_embed", that is your loop function. | 1 | 1 | 1 | I know how to build an encoder using dynamic rnn in Tensorflow, but my question is how can we use it for decoder?
Because in the decoder, at each time step we should feed in the prediction from the previous time step.
Thanks in advance! | How to build a decoder using dynamic rnn in Tensorflow? | 1.2 | 0 | 0 | 342 |
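A rough sketch of what such a loop_function typically looks like in the old TensorFlow 1.x seq2seq helpers; the embedding variable, its shape, and the commented decoder call are assumptions for illustration, and exact module paths differ across versions:

```python
import tensorflow as tf

vocab_size, embed_dim = 10000, 128                                   # hypothetical sizes
embedding = tf.get_variable("embedding", [vocab_size, embed_dim])    # assumed decoder embedding matrix

def loop_function(prev, _):
    # prev: decoder output logits from the previous time step
    prev_symbol = tf.argmax(prev, axis=1)                   # greedy prediction
    return tf.nn.embedding_lookup(embedding, prev_symbol)   # feed its embedding as the next input

# outputs, state = tf.contrib.legacy_seq2seq.attention_decoder(
#     decoder_inputs, initial_state, attention_states, cell,
#     loop_function=loop_function)
```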
48,904,701 | 2018-02-21T11:26:00.000 | 1 | 0 | 1 | 0 | python,python-2.7,pycharm | 59,266,936 | 1 | true | 0 | 0 | On the command line, go to the location where you had installed your setup and use this command to install the missing package:
pip install pcap | 1 | 0 | 0 | I tried to install the pcap package through PyCharm's tools, but it did not install and shows the error below:
Collecting pcap Could not find a version that satisfies the
requirement pcap (from versions: ) No matching distribution found for
pcap
How can I fix installing the package? | Can't install pcap in pycharm? | 1.2 | 0 | 0 | 717
48,905,127 | 2018-02-21T11:47:00.000 | -2 | 0 | 1 | 0 | jupyter-notebook,ipython,google-colaboratory | 60,673,068 | 16 | false | 0 | 0 | An easy way is:
type in
from google.colab import files
uploaded = files.upload()
copy the code
paste in colab cell | 1 | 135 | 0 | Is there any way to upload my code in .py files and import them in colab code cells?
The other way I found is to create a local Jupyter notebook then upload it to Colab, is it the only way? | Importing .py files in Google Colab | -0.024995 | 0 | 0 | 209,379 |
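A short sketch of the upload-then-import flow; the file name my_module.py is just a placeholder for whatever .py file you pick in the upload dialog:

```python
from google.colab import files

uploaded = files.upload()        # opens a file picker; choose e.g. my_module.py
print(list(uploaded.keys()))     # names of the uploaded files

import my_module                 # the uploaded file now sits in /content, so it can be imported
```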
48,911,063 | 2018-02-21T16:39:00.000 | 0 | 0 | 1 | 0 | python,list,sequence,scandir | 48,911,155 | 1 | false | 0 | 0 | As mentioned by others, the documentation does not guarantee any particular ordering. In your case it appears to be sorted alphabetically/lexicographically. "10" comes before "2" alphabetically. You'll have to prepend 0s to give every file the same number of digits to get the ordering you want if this behaviour appears to remain consistent on your machine.
For example, "002" will come before "010".
If you want to be safe (for example if you need to be able to port your code to other machines/OSes), you'll want to manually sort. | 1 | 1 | 0 | I have a directory with 1600 photos and I need to save the path to each photo to a list and then to a .txt file.
The photos are enumerated according to the position they should have in the list: img(0), img(1)... and I need this position to be kept.
What I obtain is this order, so now in list index 2 I have img(10):
img(0) img(1) img(10) img(100) img(1000) img(1001)...
img(2) img(2) img(20) img(200) img(2000) img(2001)...
Apparently, I'm the only one having this issue because I didn't find any discussion about this problem. Thank you very much for helping me. | Python3. Why is os.scandir not scanning sequentially? | 0 | 0 | 0 | 1,182 |
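One possible way to get the numeric order the question expects, instead of relying on scandir's unspecified ordering; the img(N) naming pattern and the "photos" directory are taken from the question and assumed here:

```python
import os
import re

def numeric_key(name):
    m = re.search(r"\((\d+)\)", name)      # pull the number out of names like "img(10).jpg"
    return int(m.group(1)) if m else -1

files = [entry.name for entry in os.scandir("photos") if entry.is_file()]
for name in sorted(files, key=numeric_key):
    print(name)                            # img(0), img(1), img(2), ... img(10), ...
```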
48,911,270 | 2018-02-21T16:50:00.000 | 0 | 0 | 0 | 0 | python,django,gettext,xgettext | 53,853,493 | 3 | false | 1 | 0 | In windows you just need to download :
gettext-tools-xx.zip
gettext-runtime-xx.zip
from here: enter link description here
Then you need to unzip them, copy everything in the bin folder of both archives into C:\Program Files\gettext-utils\bin, and then go to Control Panel -> System -> Advanced -> Environment Variables and add the path C:\Program Files\gettext-utils\bin to the PATH variable. Note:
xx is the version you want to download; if you download version 18 you will get an error that some dll file is missing, so I suggest downloading version 17
this folder: gettext-utils\bin does not exist and you need to create it
restart your pc before you use gettext | 1 | 4 | 0 | I have followed all the steps shared in all stackoverflow and other questions out there to install gettext for windows (10), but still, I get the error: "Can't find msguniq, make sure you have gettext tools installed" when using internationalization in django. I have tried to download the files directly and added them to the PATH, and even an installer that had already compiled everything and added to the path automatically, but it still doesn't work, and I don't know what else to do... Help please!
Thank you for your time. | Impossible to install gettext for windows 10 | 0 | 0 | 0 | 4,679 |
48,911,436 | 2018-02-21T16:58:00.000 | 0 | 0 | 0 | 0 | python,pygame,blender,pyopengl | 48,952,393 | 1 | true | 0 | 1 | OK, I think I have found what you should do.
Just for the people that have trouble with this like I did, this is the way you should do it:
To rotate around a cube with the camera in OpenGL:
your x mouse value has to be added to the z rotation of your scene,
the cosine of your y mouse value has to be added to the x rotation,
and the sine of your y mouse value has to be subtracted from your y rotation.
that should do it | 1 | 0 | 1 | I am trying to create a simple scene in 3d (in python) where you have a cube in front of you, and you are able to rotate it around with the mouse.
I understand that you should rotate the complete scene to mimic camera movement but i can't figure out how you should do this.
Just to clarify I want the camera (or scene) to move a bit like blender (the program).
Thanks in advance | PyOpenGL how to rotate a scene with the mouse | 1.2 | 0 | 0 | 553 |
48,912,449 | 2018-02-21T17:53:00.000 | 5 | 0 | 0 | 0 | python,keras | 48,920,286 | 2 | true | 0 | 0 | Keras expects the layer weights to be a list of length 2. First element is the kernel weights and the second is the bias.
You can always call get_weights() on the layer to see shape of weights of that layer. set_weights() would expect exactly the same. | 1 | 3 | 1 | I'm trying to set the weights of a hidden layer.
I'm assuming layers[0] is the inputs, and I want to set the weights of the first hidden layer so set the index to 1.
model.layers[1].set_weights(weights)
However, when I try this I get an error:
ValueError: You called `set_weights(weights)` on layer "dense_64" with a weight list of length 100, but the layer was expecting 2 weights. Provided weights: [ 1.0544554 1.27627635 1.05261064 1.10864937 ...
The hidden layer has 100 nodes.
As it is telling me that it expects two weights, is one the weight and one the bias? | Keras - how to set weights to a single layer | 1.2 | 0 | 0 | 8,132 |
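A small sketch of what the answer describes, with a made-up model; in this Sequential model the first Dense layer is layers[0], and the list passed to set_weights holds the kernel matrix and the bias vector, not 100 scalars:

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

model = Sequential([
    Dense(100, input_dim=20, activation="relu"),   # hidden layer with 100 nodes (hypothetical sizes)
    Dense(1, activation="sigmoid"),
])

kernel, bias = model.layers[0].get_weights()
print(kernel.shape, bias.shape)        # (20, 100) (100,)

new_kernel = np.random.randn(20, 100)
new_bias = np.zeros(100)
model.layers[0].set_weights([new_kernel, new_bias])   # a list of length 2: [kernel, bias]
```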
48,914,074 | 2018-02-21T19:35:00.000 | 0 | 0 | 0 | 0 | postgresql,python-3.6 | 53,563,271 | 1 | false | 0 | 0 | You can try using f-Strings and separating out the statement from the execution:
statement = f"INSERT INTO name VALUES({VARIABLE_NAME},'string',int,'string')"
cur.execute(statement)
You might also want to try with '' around {VARIABLE_NAME}: '{VARIABLE_NAME}'
In f-strings, the expressions in {} get evaluated and their values inserted into the string.
By separating out the statement you can print it and see if the string is what you were expecting.
Note, the f-string can be used within the cur.execute function, however I find it more readable to separate out.
In python3.6+ this is a better way of formatting strings than with %s.
If this does not solve the problem, more information will help debug:
what is the name table's schema?
what variable / value are you trying to insert?
what is the exact error you are given? | 1 | 0 | 0 | I got something like this:
cur.execute("INSERT INTO name VALUES(HERE_IS_VARIABLE,'string',int,'string')")
Using %s (like in Python 2.*) is not working.
I got errors telling me that I'm trying to use a "column name" in the place where I put my variable. | Python3.6 + Postgresql how to put VARIABLES to SQL query? | 0 | 1 | 0 | 376
48,914,528 | 2018-02-21T20:04:00.000 | 5 | 1 | 0 | 0 | python,automation,frameworks,allure | 48,926,889 | 6 | true | 1 | 0 | It doesn't work because the Allure report, as you have seen, is not a simple webpage, so you cannot save it and send it as a file to your team. It's a local Jetty server instance that serves the generated report, which you can then open in the browser.
Here are some solutions for your needs:
Use one server (your local PC, a remote machine or some CI environment) where you can generate the report and share it with your team (the server should be running all the time).
Share the Allure report folder as files ({folder that contains the json files}) with teammates, have them set up the Allure tool, and run allure serve locally on whichever folder they received.
Hope, it helps. | 2 | 7 | 0 | Right now, I am generating the Allure Report through the terminal by running the command: allure serve {folder that contains the json files}, but with this way the HTML report will only be available to my local because
The json files that generated the report are in my computer
I ran the command through the terminal (if i kill the terminal, the report is gone)
I have tried: Saving the Allure Report as Webpage, Complete, but the results did not reflect to the page, all i was seeing was blank fields.
So, what I'm trying to do is: after I execute the command to generate the report, I want to have an HTML file of the report that I can store, save to my computer or send through email, so I do not have to execute the command to see the previous reports (as much as possible in 1 HTML file). | Is there a way to export Allure Report to a single html file? To share with the team | 1.2 | 0 | 1 | 10,030
48,914,528 | 2018-02-21T20:04:00.000 | 0 | 1 | 0 | 0 | python,automation,frameworks,allure | 63,722,118 | 6 | false | 1 | 0 | Allure report generates html in temp folder after execution and you can upload it to one of the server like netlify and it will generate an url to share. | 2 | 7 | 0 | Right now, I am generating the Allure Report through the terminal by running the command: allure serve {folder that contains the json files}, but with this way the HTML report will only be available to my local because
The json files that generated the report are in my computer
I ran the command through the terminal (if i kill the terminal, the report is gone)
I have tried: Saving the Allure Report as Webpage, Complete, but the results did not reflect to the page, all i was seeing was blank fields.
So, what im trying to to do is after I execute the command to generate the report, I want to have an html file of the report that i can store, save to my computer or send through email, so i do not have to execute the command to see the previous reports. (as much as possible into 1 html file) | Is there a way to export Allure Report to a single html file? To share with the team | 0 | 0 | 1 | 10,030 |
48,919,328 | 2018-02-22T03:47:00.000 | 2 | 0 | 0 | 0 | python,maya,mel | 48,920,890 | 1 | false | 0 | 0 | I suppose there's no Python/MEL command to lock keframe's axis, because there's no need to do it programmatically. So, just press a shift and slide up/down or left/right to look in the axis you'd like. | 1 | 1 | 0 | So I've been looking for awhile but haven't found anything.
What I'm trying to do is lock the ability to move keyframes forward or backwards so if there is a key on the 10th frame, I don't want to accidentally shift it to 8th or 20th frame, I only want to be able to shift up and down in the graph editor on the various translate curves | Maya Python: Lock transforms of a keyframe | 0.379949 | 0 | 0 | 185 |
48,919,965 | 2018-02-22T05:01:00.000 | 0 | 0 | 1 | 1 | python,python-3.x,python-module,sys | 48,920,027 | 1 | false | 0 | 0 | So finally, my question is, if that is the correct way to solve this?
No, you don't move/copy anything like for fixing library imports.
I guess pip install (or any other pip) command by default installs it to the latest/compatible python2 version. If you want it in python3, pip3 is the command. But I'm assuming you have python2 installed on your machine. Otherwise I don't think you will be facing such issues. | 1 | 0 | 0 | I recently installed the pyinsane2 module using pip install pyinsane2. It was successfully installed. Running >>>import pyinsane2 in my python interpreter returns >>> wich I assume is a good sign, meaning that it was installed.
When I run my .py file in the command line using pathtodirectory> scan.py and hit enter, it gives me a ModuleNotFound Error in line 4, where it says import pyinsane2. I'm working with Python 3.6 on a windows 8 64-bit computer. I write my code with Notepad++.
When looking for answers in the web, it was suggested to look whether my python version and the module are in the same directory.
I found out that my version of Python is in "C:\Users\MyName\AppData\Local\Programs\Python\Python36-32\python.exe" using sys.executable(where python shows the same path). Then I found out where my modules are stored using sys.path. It showed "C:\Users\MyName\AppData\Local\Programs\Python\Python36-32\Lib\site-packages\pyinsane2-2.0.2-py3.6-win32.egg"(Why pyinsane2 didn't show up, even though the directory exists in site-packeges as well, I don't know) and now I am supposed to put both, python and the modules in one directory...
So finally, my question is, if that is the correct way to solve this? If yes, what do I have to move/do to get python and the modules in the same directory?
If not, what would you suggest to fix the error of looking in the wrong directory for the modules?
P.S. I know there are many versions of this question out there already, so feel free to mark this as a duplicate, but I couldn't find one that was using either Windows Python 3.6 or was not using Ubuntu or Anaconda... | ModuleNotFound Error even though >>>import works | 0 | 0 | 0 | 546
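A quick way to check which interpreter and search path are actually in play, which usually explains this kind of mismatch; installing with that exact interpreter via `python -m pip` is the usual remedy:

```python
import sys

print(sys.executable)   # the python.exe that is running this script
for p in sys.path:      # the directories this interpreter searches for modules
    print(p)

# If pyinsane2 lives under a different interpreter's site-packages, install it for this one:
#   <path printed above> -m pip install pyinsane2
```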
48,921,619 | 2018-02-22T07:16:00.000 | 0 | 0 | 0 | 0 | python,django,django-sessions | 48,923,389 | 1 | false | 1 | 0 | You can run a javascript setTimeout in the background which will check if user is logged in and after three minutes the browser window will refresh.
OR (better)
You can run this timer server-side: when the client tries to change something, first look at the timer (the value that says until when the client is logged in) and then, based on that time, perform the action or not. So after your three-minute interval the user would still be able to see the content, but when they tried to change something the backend would reject the request and require them to log in again.
This is the better solution because, when it comes to authentication and similar things, it is always better to do them server-side rather than in the client browser, so that it cannot be exploited.
BUT
Both solutions can be applied simultaneously (so that client's browser would reload the window and redirect client to the login page and server would reject the request so that data would not be modified in any way). | 1 | 0 | 0 | I have created a login page for my application and set the session out for 3 minutes and it is working fine, but the problem is when session out happened the user is still able to do many activities on the current page i.e the logout page do not show until unless user do a page refresh or redirect to the other page.
So, how is it possible to do the logout once the session out and user do any of the activity on the current page? | Django: detect the mouse click if session out | 0 | 0 | 0 | 231 |
48,924,431 | 2018-02-22T09:58:00.000 | 2 | 0 | 0 | 1 | python,google-app-engine | 48,927,054 | 1 | true | 1 | 0 | I've had the same issue with the old Python gdata library when I exported data from Cloud Datastore (NDB lib) to Google Spreadsheets.
Normally, the issue didn't occur at the first export, but often at some later point. I was looking into the memory usage of instances over time and it was increasing with every export job.
The reason was a memory leak in my Python (2.7) code that handled the export. If I remember correctly, I had dicts and lists with plenty of references, some of them potentially in cycles, and the references haven't been explicitly deleted after the job was completed. At least with gdata there was a lot of meta-data in memory for every cell or row the code referenced.
I don't think this is an issue particular to Google App Engine, Spreadsheets, or the libraries you are using, but how Python deals with garbage collection. If there are references left, they will occupy memory. | 1 | 2 | 0 | While exporting google spreadsheets from ndb datastore which is hardly of 2 MB size, eats up 128 Mb of run time memory of google app engine? how this is possible? i made a bulk.yaml file also, and i am using gapi calls and defer to export sheet on google app engine and it is showing error of EXceeding run time memory | Exceeding Soft private run time memory error | 1.2 | 0 | 0 | 72 |
48,924,787 | 2018-02-22T10:14:00.000 | 1 | 0 | 1 | 0 | python,windows,pycharm,anaconda | 64,291,969 | 13 | false | 0 | 0 | Found a solution. Problem is we have been creating conda environments from within Pycharm while starting a new project.
This is created at the location /Users/<username>/.conda/envs/<env-name>.
e.g. /Users/taponidhi/.conda/envs/py38.
Instead create environments from terminal using conda create --name py38.
This will create the environment at /opt/anaconda3/envs/.
After this, when starting a new project, select this environment from existing environments. Everything works fine. | 3 | 30 | 0 | I have a conda environment at the default location for windows, which is C:\ProgramData\Anaconda2\envs\myenv. Also, as recommended, the conda scripts and executables are not in the %PATH% environment variable.
I opened a project in pycharm and pointed the python interpreter to
C:\ProgramData\Anaconda2\envs\myenv\python.exe
and pycharm seems to work well with the environment in the python console, in the run environment, and in debug mode.
However, when opening the terminal the environment is not activated (I made sure that the checkbox for activating the environment is checked). To be clear - when I do the same thing with a virtualenv the terminal does activate the environment without a problem.
Here are a few things I tried and did not work:
Copied the activate script from the anaconda folder to the environment folder
Copied the activate script from the anaconda folder to the Scripts folder under the environment
Copied an activate script from the virtualenv (an identical one for which the environment is activated)
Added the anaconda folders to the path
None of these worked.
I can manually activate the environment without a problem once the terminal is open, but how do I do it automatically? | PyCharm terminal doesn't activate conda environment | 0.015383 | 0 | 0 | 19,602 |
48,924,787 | 2018-02-22T10:14:00.000 | 4 | 0 | 1 | 0 | python,windows,pycharm,anaconda | 69,735,670 | 13 | false | 0 | 0 | Solution for Windows
Go to Settings -> Tools -> Terminal
set Shell path to:
For powershell (I recommend this):
powershell.exe -ExecutionPolicy ByPass -NoExit -Command "& 'C:\tools\miniconda3\shell\condabin\conda-hook.ps1'
For cmd.exe:
cmd.exe "C:\tools\miniconda3\Scripts\activate.bat"
PyCharm will change environment automatically in the terminal
PS: I'm using my paths to miniconda, so replace it with yours | 3 | 30 | 0 | I have a conda environment at the default location for windows, which is C:\ProgramData\Anaconda2\envs\myenv. Also, as recommended, the conda scripts and executables are not in the %PATH% environment variable.
I opened a project in pycharm and pointed the python interpreter to
C:\ProgramData\Anaconda2\envs\myenv\python.exe
and pycharm seems to work well with the environment in the python console, in the run environment, and in debug mode.
However, when opening the terminal the environment is not activated (I made sure that the checkbox for activating the environment is checked). To be clear - when I do the same thing with a virtualenv the terminal does activate the environment without a problem.
Here are a few things I tried and did not work:
Copied the activate script from the anaconda folder to the environment folder
Copied the activate script from the anaconda folder to the Scripts folder under the environment
Copied an activate script from the virtualenv (an identical one for which the environment is activated)
Added the anaconda folders to the path
None of these worked.
I can manually activate the environment without a problem once the terminal is open, but how do I do it automatically? | PyCharm terminal doesn't activate conda environment | 0.061461 | 0 | 0 | 19,602 |
48,924,787 | 2018-02-22T10:14:00.000 | 0 | 0 | 1 | 0 | python,windows,pycharm,anaconda | 64,384,425 | 13 | false | 0 | 0 | I am using OSX and zshell has become the default shell in 2020.
I faced the same problem: my conda environment was not working inside pycharm's terminal.
File -> Settings -> Tools -> Terminal. the default shell path was configured as /bin/zsh --login
I tested on a separate OSX terminal that /bin/zsh --login somehow messes up $PATH variable. conda activate keep adding conda env path at the end instead of at the beginning. So the default python (2.7) always took precedence because of messed up PATH string. This issue had nothing to do with pycharm (just how zshell behaved with --login),
I removed --login part from the script path; just /bin/zsh works (I had to restart pycharm after this change!) | 3 | 30 | 0 | I have a conda environment at the default location for windows, which is C:\ProgramData\Anaconda2\envs\myenv. Also, as recommended, the conda scripts and executables are not in the %PATH% environment variable.
I opened a project in pycharm and pointed the python interpreter to
C:\ProgramData\Anaconda2\envs\myenv\python.exe
and pycharm seems to work well with the environment in the python console, in the run environment, and in debug mode.
However, when opening the terminal the environment is not activated (I made sure that the checkbox for activating the environment is checked). To be clear - when I do the same thing with a virtualenv the terminal does activate the environment without a problem.
Here are a few things I tried and did not work:
Copied the activate script from the anaconda folder to the environment folder
Copied the activate script from the anaconda folder to the Scripts folder under the environment
Copied an activate script from the virtualenv (an identical one for which the environment is activated)
Added the anaconda folders to the path
None of these worked.
I can manually activate the environment without a problem once the terminal is open, but how do I do it automatically? | PyCharm terminal doesn't activate conda environment | 0 | 0 | 0 | 19,602 |
48,925,086 | 2018-02-22T10:29:00.000 | 0 | 0 | 0 | 0 | python,algorithm,computational-geometry,dimensionality-reduction,multi-dimensional-scaling | 60,953,415 | 4 | false | 0 | 0 | Find the maximum extent of all points. Split into 7x7x7 voxels. For all points in a voxel find the point closest to its centre. Return these 7x7x7 points. Some voxels may contain no points, hopefully not too many. | 2 | 13 | 1 | Imagine you are given set S of n points in 3 dimensions. Distance between any 2 points is simple Euclidean distance. You want to chose subset Q of k points from this set such that they are farthest from each other. In other words there is no other subset Q’ of k points exists such that min of all pair wise distances in Q is less than that in Q’.
If n is approximately 16 million and k is about 300, how do we efficiently do this?
My guess is that this is NP-hard, so maybe we just want to focus on approximation. One idea I can think of is using Multidimensional scaling to sort these points in a line and then use a version of binary search to get points that are furthest apart on this line. | Choosing subset of farthest points in given set of points | 0 | 0 | 0 | 3,420
48,925,086 | 2018-02-22T10:29:00.000 | 1 | 0 | 0 | 0 | python,algorithm,computational-geometry,dimensionality-reduction,multi-dimensional-scaling | 48,925,457 | 4 | false | 0 | 0 | If you can afford to do ~ k*n distance calculations then you could
Find the center of the distribution of points.
Select the point furthest from the center. (and remove it from the set of un-selected points).
Find the point furthest from all the currently selected points and select it.
Repeat step 3 until you end up with k points.
If n is approximately 16 million and k is about 300, how do we efficiently do this?
My guess is that this is NP-hard, so maybe we just want to focus on approximation. One idea I can think of is using Multidimensional scaling to sort these points in a line and then use a version of binary search to get points that are furthest apart on this line. | Choosing subset of farthest points in given set of points | 0.049958 | 0 | 0 | 3,420
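A sketch of the greedy procedure from the answer above, written with NumPy; it needs on the order of k*n distance evaluations, and the random data here only stands in for the 16 million points from the question:

```python
import numpy as np

def greedy_farthest_points(points, k):
    # start from the point farthest from the centroid
    center = points.mean(axis=0)
    dist = np.linalg.norm(points - center, axis=1)
    chosen = [int(dist.argmax())]

    # distance of every point to its nearest chosen point so far
    min_dist = np.linalg.norm(points - points[chosen[0]], axis=1)
    for _ in range(k - 1):
        nxt = int(min_dist.argmax())              # farthest from all chosen points
        chosen.append(nxt)
        min_dist = np.minimum(min_dist, np.linalg.norm(points - points[nxt], axis=1))
    return points[chosen]

pts = np.random.rand(10_000, 3)
subset = greedy_farthest_points(pts, 300)
print(subset.shape)                               # (300, 3)
```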
48,930,303 | 2018-02-22T14:47:00.000 | 2 | 0 | 0 | 0 | python-3.x,numpy,scikit-learn,normalization | 48,930,465 | 1 | false | 0 | 0 | Normalization is: (X - Mean) / Deviation
So do just that: (2d_data - mean) / std | 1 | 0 | 1 | I have a dataset called 2d_data which has dimension=(44500,224,224), such that 44500 is the number of samples.
I would like to normalize this data set using the following mean and std values:
mean=0.485 and std=0.229
How can I do that?
Thank you | Normalize 2D array given mean and std value | 0.379949 | 0 | 0 | 716 |
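A minimal sketch of that normalization on an array shaped like the one in the question (smaller here to keep it cheap):

```python
import numpy as np

data = np.random.rand(10, 224, 224)      # stands in for the (44500, 224, 224) dataset
mean, std = 0.485, 0.229

normalized = (data - mean) / std         # broadcasting applies it to every element
print(normalized.shape)                  # (10, 224, 224)
```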
48,934,830 | 2018-02-22T18:31:00.000 | 1 | 0 | 0 | 0 | python,numpy,opencv | 48,954,577 | 1 | true | 0 | 0 | "Part of our algorithm involves running a convex hull on some of the points in this space, but cv2.convexHull() requires an ndarray with dtype = int."
cv2.convexHull() also accepts numpy array with float32 number.
Try using cv2.convexHull(numpy.array(a,dtype = 'float32')) where a is a list of dimension n*2 (n = no. of points). | 1 | 1 | 1 | I am working on a vision algorithm with OpenCV in Python. One of the components of it requires comparing points in color-space, where the x and y components are not integers. Our list of points is stored as ndarray with dtype = float64, and our numbers range from -10 to 10 give or take.
Part of our algorithm involves running a convex hull on some of the points in this space, but cv2.convexHull() requires an ndarray with dtype = int.
Given the narrow range of the values we are comparing, simple truncation causes us to lose ~60 bits of information. Is there any way to have numpy directly interpret the float array as an int array? Since the scale has no significance, I would like all 64 bits to be considered.
Is there any defined way to separate the exponent from the mantissa in a numpy float, without doing bitwise extraction for every element? | Converting NumPy floats to ints without loss of precision | 1.2 | 0 | 0 | 563 |
48,936,542 | 2018-02-22T20:22:00.000 | 1 | 0 | 0 | 0 | python,scikit-learn | 48,936,596 | 3 | false | 0 | 0 | Use the feature_importances_ property. Very easy. | 1 | 1 | 1 | Is there a way in python by which I can get contribution of each feature in probability predicted by my gradient boosting classification model for each test observation. Can anyone give actual mathematics behind probability prediction in gradient boosting classification model and how can it be implemented in Python. | gradient boosting- features contribution | 0.066568 | 0 | 0 | 1,153 |
48,936,684 | 2018-02-22T20:32:00.000 | 0 | 0 | 1 | 0 | python-3.x,pycharm,packages,conda,egg | 48,938,838 | 1 | false | 0 | 0 | I was able to resolve this error by reinstalling the package in the context of the new environment. In this case, through the PyCharm terminal I went back to the packages setup.py file and ran that. It installed the package into the correct location in my conda environment. | 1 | 0 | 0 | this might be a fundamental misunderstanding of Python packages, but I could use some help or directions to the right resources.
I have a egg file in my Python 3.6 site-packages directory, i'll call it package.egg. When I run python from the command line I can use modules from that package. However, when I created a new Pycharm Project and a corresponding Conda environment, I can no longer use that package (for obvious reasons). However, it doesn't seem like just copying package.egg file into the project environments site files.
Is there another process, like unzipping that I have to perform before I can call those modules?
I tried running both pip install ./package.egg and conda install ./package.egg
Thank you. | Add package to Conda Environment from site-packages | 0 | 0 | 0 | 1,092 |
48,940,027 | 2018-02-23T01:59:00.000 | 3 | 0 | 1 | 0 | python-3.x,visual-studio | 49,376,621 | 1 | false | 0 | 0 | Just ran into the same problem. Right click on your main script and select "set as startup file". Then try F5 again | 1 | 2 | 0 | I recently downloaded visual studios for windows and am using Python 3.6. VS gives me the message "Debugger operation is in progress" and has a loading bar when f5 or 511 are pressed. After the message goes away the debugger does not open. If I select Project > start with/without debugging programs run fine and the debugger opens regularly. What can I do to fix this? | Debugger operation is in progress | 0.53705 | 0 | 0 | 238 |
48,940,807 | 2018-02-23T03:48:00.000 | 0 | 0 | 0 | 0 | python,post,cookies,request | 48,940,818 | 1 | true | 0 | 0 | First create a session then use GET and use session.cookies.get_dict() it will return a dict and it should have appropriate values you need | 1 | 0 | 0 | I am basically running my personal project,but i'm stuck in some point.I am trying to make a login request to hulu.com using Python's request module but the problem is hulu needs a cookie and a CSRF token.When I inspected the request with HTTP Debugger it shows me the action URL and some request headers.But the cookie and the CSRF token was already there.But how to can do that with request module? I mean getting the cookies and the CSRF token before the post request? Any ideas?
Thanks | How to get cookies before making request in Python | 1.2 | 0 | 1 | 882 |
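A rough sketch of the session flow the answer describes; the URLs and form fields are placeholders, not Hulu's real endpoints:

```python
import requests

session = requests.Session()

# GET the login page first so the server sets its cookies (and usually the CSRF token)
resp = session.get("https://example.com/login")
cookies = session.cookies.get_dict()
print(cookies)                      # e.g. {'csrftoken': '...', 'sessionid': '...'}

# reuse the same session (and its cookies) for the POST
payload = {"username": "me", "password": "secret", "csrf_token": cookies.get("csrftoken")}
login = session.post("https://example.com/login", data=payload)
print(login.status_code)
```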
48,942,393 | 2018-02-23T06:32:00.000 | 3 | 0 | 0 | 0 | python,rest,django-rest-framework,flask-restful,falcon | 48,942,911 | 2 | true | 0 | 0 | You can choose any framework to develop your API, if you want SSL on your API endpoints you need to setup SSL with the Web server that is hosting your application
You can obtain a free SSL cert using Let's encrypt. You will however need a domain in order to be able to get a valid SSL certificate.
SSL connection between client and server does not depend on the framework you choose. Web Servers like Apache HTTPD and Nginx act as the public facing reverse proxy to your python web application. Configuring SSL with your webserver will give you encrypted communication between client and server | 1 | 1 | 0 | I am creating a REST API. Basic idea is to send data to a server and the server gives me some other corresponding data in return. I want to implement this with SSL. I need to have an encrypted connection between client and server. Which is the best REST framework in python to achieve this? | REST API in Python over SSL | 1.2 | 0 | 1 | 6,205 |
48,942,865 | 2018-02-23T07:09:00.000 | 0 | 0 | 0 | 0 | python,machine-learning,deep-learning | 48,942,976 | 1 | false | 0 | 0 | How about adding up/taking the mean of your title scores(since they'd be on the same scale) and content scores for all the methods so now you'll have a single title score and single content score.
To get a single score for a document, you'll have to combine the title and content scores. To do that, you can take a weighted average(you'll have to decide the weights) or you can multiply these scores to get a single metric. Although these may not be close to zero or one, as is your requirement
As an alternate method, you can create a dataset with the added/averaged up title scores and content scores and manually create the confidence score column with zeros and ones. Using this data you can build a logistic regression model to classify your documents with confidence scores of zeros and ones. This will give you the weights as well and more insight to what you are actually looking for | 1 | 0 | 1 | Using different methods, I am scoring documents & it's title. Now I want to aggregate all these scores into single score(confidence score). I want to use unsupervised method. I want confidence score in terms of probability or percentage.
Here , M= Method No, TS = document title score, CS = document content score
eg 1
Doc1 (expected confidence score close to 0)
M - TS - CS
1 - 0.03 - 0.004
2 - 0.054 - 0.06
3 - 0.09 - 0.12
Doc2 (expected confidence score close to 1)
M - TS - CS
1 - 0.50 - 0.63
2 - 0.74 - 0.90
3 - 0.615 - 0.833
Here my hypothis is confidence score should be colse to zero for document-1 and close to 1 for document-2.
It is also possible that all Documents will have lower scores for all the methods(eg 2), so the confidence scores should be close to zero for all documents.
eg.2
Doc1 (expected confidence score close to 0)
M - TS - CS
1 - 0.03 - 0.004
2 - 0.054 - 0.06
3 - 0.09 - 0.12
Doc2 (expected confidence score close to 0)
M - TS - DS
1 - 0.001 - 0.003
2 - 0.004 - 0.005
3 - 0.0021 - 0.013
Can anyone explain me or provide some resource to calculate confidence score? | Calculate confidence score of document | 0 | 0 | 0 | 189 |
48,942,917 | 2018-02-23T07:13:00.000 | 6 | 0 | 0 | 0 | python,api,interactive-brokers | 51,470,089 | 2 | false | 0 | 0 | You have to use flex queries for that purpose. It has full transaction history including trades, open positions, net asset value history and exchange rates. | 2 | 5 | 0 | Basically, I want to use python to query my IB order history and do some analyze afterwards. But I could not find any existing API for me to query these data, does anyone have experience to do this? | Interactive brokers: How to retrieve transaction history records? | 1 | 0 | 1 | 4,976 |
48,942,917 | 2018-02-23T07:13:00.000 | 2 | 0 | 0 | 0 | python,api,interactive-brokers | 49,012,298 | 2 | true | 0 | 0 | TWS API doesn't have this functionality. You can't retreive order history, but you can get open orders using recOpenOrders request and capture executions in realtime by listening to execDetails event - just write them to a file and analyse aftewards. | 2 | 5 | 0 | Basically, I want to use python to query my IB order history and do some analyze afterwards. But I could not find any existing API for me to query these data, does anyone have experience to do this? | Interactive brokers: How to retrieve transaction history records? | 1.2 | 0 | 1 | 4,976 |
48,945,744 | 2018-02-23T10:30:00.000 | 0 | 0 | 1 | 0 | python,visual-studio-code,pylint,buildout | 49,018,806 | 1 | false | 1 | 0 | You can manipulate your PYTHONPATH environment variable with a .env file and that will be used when running Pylint. | 1 | 0 | 0 | I created a python django project use zc.buildout but in vscode pylint cannot recognize the imports in eggs.
The error:
[pylint] E0401:Unable to import 'django.test' | How can i correct linting python import when use zc.buildout in vscode | 0 | 0 | 0 | 319 |
48,950,193 | 2018-02-23T14:35:00.000 | 0 | 1 | 1 | 0 | python-3.x,aws-lambda | 49,135,142 | 1 | false | 0 | 0 | I used this regex:
regex = r'arn:(?P<Partition>[^:\n]*):(?P<Service>[^:\n]*):(?P<Region>[^:\n]*):(?P<AccountID>[^:\n]*):(?P<Ignore>(?P<ResourceType>[^:/\n]*)[:/])?(?P<Resource>.[^\s|\,]+)'
and pulled the account number
for item in awsAccountIdList:
awsRegexAccountIds = re.search(regex, item).group("AccountID")
compared the list to the whitelist:
nonWhitelistedList = [item for item in listOfAccountIds if item not in accountWhitelist]
Then if the list contained a value sent the SNS with the value:
if len(nonWhitelistedList) > 0:
SendSNS(Value) | 1 | 0 | 0 | I have a lambda I'm writing that will send SNS based on non-whitelisted accounts doing disallowed functions in IAM. The cloudtrail event contains a JSON policyDocument with ARNs like so:
\"AWS\": [\r\n \"arn:aws:iam::999900000000:root\",\r\n \"arn:aws:iam::777700000000:root\"\r\n ]\r\n },\r\n \"Action\": \"sts:AssumeRole\",\r\n \"Condition\": {}\r\n }\r\n ]\r\n}
I will create a whitelist in python with just the account numbers:
accountWhitelist = ["999900000000","1234567891011"]
With this event I need to do something like an if str(policyDocAwsAcctArn) contains accountWhitelist account number do nothing else send SNS. Will I need to use something like regex on the arn to remove the arn:aws:iam:: :root after the account number? I need to be sure to have the account numbers parsed out individually as there might be 2+ arns in the AWS json. Thanks for any ideas. | Python lambda in AWS if str in list contains | 0 | 0 | 0 | 95 |
48,958,005 | 2018-02-24T00:18:00.000 | 1 | 0 | 0 | 0 | python,css,django,amazon-web-services,amazon-s3 | 48,961,939 | 1 | true | 1 | 0 | Yes, imo it would be unusual to edit the files of your production website directly from where they are served.
Edit them locally, check them into your repo and then deploy them to s3 from your repo, perhaps using a tool like Jenkins. If you make a mistake, you have something to roll back to.
I can't think of any circumstances where editing your files directly in production is a good idea. | 1 | 1 | 0 | I'm using AWS S3 to serve my static files - however I've just found out you can't edit them directly from S3, which kind of makes it pointless as I will be continuously changing things on my website. So - is the conventional way to make the changes then re-upload the file? Or do most developers store their base.css file in their repository so it's easier to change?
Because I'm using Django for my project so there is only supposed to be one static path (for me that's my S3 bucket) - or is there another content delivery network where I can directly edit the contents of the file on the go which would be better? | Edit base.css file from S3 Bucket | 1.2 | 0 | 0 | 181 |
48,961,822 | 2018-02-24T10:31:00.000 | 0 | 0 | 0 | 0 | python,nltk,sentiment-analysis,n-gram | 55,395,113 | 1 | true | 0 | 0 | Use textblob package. It offers a simple API to access its methods and perform basic NLP tasks. NLP is natural language processing. Which process your text by tokenization, noun extract, lemmatization, words inflection, NGRAMS etc. There also some other packages like spacy, nltk. But textblob will be better for beginners. | 1 | 0 | 1 | I am doing sentiment analysis on reviews of products from various retailers. I was wondering if there was an API that used n grams for sentiment analysis to classify a review as a positive or negative. I have a CSV file filled with reviews which I would like to run it in python and hence would like an API or a package rather than a tool.
Any direction towards this would be great.
Thanks | N grams for Sentiment Analysis | 1.2 | 0 | 0 | 469 |
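A small sketch of what that looks like with TextBlob; the review text is made up:

```python
from textblob import TextBlob

review = TextBlob("The product arrived quickly and works great.")

print(review.sentiment)        # Sentiment(polarity=..., subjectivity=...); polarity > 0 reads as positive
print(review.ngrams(n=2))      # bigrams as lists of words
```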
48,963,243 | 2018-02-24T13:11:00.000 | -1 | 0 | 1 | 0 | python,nlp,nltk | 48,970,082 | 1 | false | 0 | 0 | Given a corpus of documents, you can apply part of speech tagging to get verb roots, nouns and mapping of those nouns to those verb roots. From there you should be able to deduce the most common 'relations' an 'entity' expresses, although you may want to describe your relations as something that occurs between two different entity types, and harvest more relations than just noun/verb root.
Just re-read this answer and there is definitely a better way to approach this, although not with NLTK. You should take a look at fasttext or another language vectorization library and then use euclidean distance or cosine similarity to find the words closest to University, and then filter by part of speech (verb in this case). | 1 | 2 | 1 | Is there any way to find the related verbs to a specific noun by using NLTK. For example for the word "University" I'd like to have the verbs "study" and "graduate" as an output. I mainly need this feature for relation extraction among some given entities. | using NLTK to find the related verbs to a specific noun | -0.197375 | 0 | 0 | 277 |
48,966,277 | 2018-02-24T18:32:00.000 | 5 | 0 | 1 | 0 | python,python-3.x,thread-safety,python-asyncio | 48,967,238 | 2 | true | 0 | 0 | Using the same asyncio object from multiple tasks is safe in general. As an example, aiohttp has a session object, and it is expected for multiple tasks to access the same session "in parallel".
if so, what does it make safe?
The basic architecture of asyncio allows for multiple coroutines to await a single future result - they will simply all subscribe to the future's completion, and all will be scheduled to run once the result is ready. And this applies not only to coroutines, but also to synchronous code that subscribes to the future using add_done_callback.
That is how asyncio will handle your scenario: tasks A and B will ultimately subscribe to some future awaited by the DB object and. Once the result is available, it will be delivered to both of them, in turn.
Pitfalls typically associated with multi-threaded programming do not apply to asyncio because:
Unlike with threads, it is very predictable where a context switch can occur - just look at await statements in the code (and also async with and async for - but those are still very visible keywords). Anything between them is, for all intents and purposes, atomic. This eliminates the need for synchronization primitives to protect objects, as well as the mistakes that result from mishandling such tools.
All access to data happens from the thread that runs the event loop. This eliminates the possibility of a data race, reading of shared memory that is being concurrently written to.
One scenario in which multi-tasking could fail is multiple consumers attaching to the same stream-like resource. For example, if several tasks try to await reader.read(n) on the same reader stream, exactly one of them will get the new data1, and the others will keep waiting until new data arrives. The same applies to any shared streaming resource, including file descriptors or generators shared by multiple objects. And even then, one of the tasks is guaranteed to obtain the data, and the integrity of the stream object will not be compromised in any way.
1 One task receiving the data only applies if the tasks share the reader and each task separately calls data = await reader.read(n). If one were to extract a future with fut = asyncio.ensure_future(reader.read(n)) (without using await), share the future among multiple tasks, and await it in each task with data = await fut, all tasks would be notified of the particular chunk of data that ends up returned by that future. | 2 | 8 | 0 | Simply speaking, thread-safe means that it is safe when more than one thread access the same resource and I know Asyncio use a single thread fundamentally.
However, more than one Asyncio Task could access a resource multiple time at a time like multi-threading.
For example DB connection(if the object is not thread-safe and supports Asyncio operation).
Schedule Task A and Task B accessing the same DB object.
IO Loop executes Task A.
Task A await IO operation on the DB object.(it will take long time enough)
IO Loop executes Task B
Step3's IO operation is still in progress(not done).
Task B await IO operation on the same DB object.
Now Task B is trying to access the same object at a time.
Is it completely safe in Asyncio and if so, what does it make safe? | Is it safe that when Two asyncio tasks access the same awaitable object? | 1.2 | 0 | 0 | 5,931 |
48,966,277 | 2018-02-24T18:32:00.000 | 2 | 0 | 1 | 0 | python,python-3.x,thread-safety,python-asyncio | 48,966,322 | 2 | false | 0 | 0 | No, asyncio is not thread safe. Generally only one thread should have control over an event loop and/or a resource associated to the event loop. If some other thread wants to access it, it should do it via special methods, like call_soon_threadsafe. | 2 | 8 | 0 | Simply speaking, thread-safe means that it is safe when more than one thread access the same resource and I know Asyncio use a single thread fundamentally.
However, more than one Asyncio Task could access a resource multiple time at a time like multi-threading.
For example DB connection(if the object is not thread-safe and supports Asyncio operation).
Schedule Task A and Task B accessing the same DB object.
IO Loop executes Task A.
Task A await IO operation on the DB object.(it will take long time enough)
IO Loop executes Task B
Step3's IO operation is still in progress(not done).
Task B await IO operation on the same DB object.
Now Task B is trying to access the same object at a time.
Is it completely safe in Asyncio and if so, what does it make safe? | Is it safe that when Two asyncio tasks access the same awaitable object? | 0.197375 | 0 | 0 | 5,931 |
48,969,107 | 2018-02-25T00:38:00.000 | 0 | 0 | 0 | 0 | python,grpc | 49,018,750 | 1 | false | 0 | 0 | Short answer: you can't
gRPC is a request-response framework based on HTTP2. Just as you cannot make a website that initiates a connection to a browser, you cannot make a gRPC service initiating a connection to the client. How would the service even know who to talk to?
A solution could be to open a gRPC server on the client. This way both the client and the server can accept connections from one another. | 1 | 0 | 0 | Hi i am new to GRPC and i want to send one message from server to client first. I understood how to implement client sending a message and getting response from server. But i wanna try how server could initiate a message to connected clients. How could i do that? | How to let server send the message first in GRPC using python | 0 | 0 | 1 | 325 |
48,969,180 | 2018-02-25T00:48:00.000 | 3 | 0 | 0 | 0 | python,pygame | 48,969,232 | 1 | true | 0 | 1 | It is useful because it allows you to further modify the window before actually
changing it.
Imagine you want to use more than just surface.blit() but potentially dozens of functions.
Updating a window takes a memory space and time.
You would want to keep these two things to a minimum. If you want to apply multiple things to your window, rather than updating everything as soon as it is called, it waits until you have applied all your changes and then you can tell it to update the window once.
Why use it when you use only one function? Simply because it cannot "guess" that you only want one function. It is more efficient for you to tell it when to update the window. | 1 | 2 | 0 | I noticed you need 2 steps to draw something in Pygame, first use surface.blit(), and then use display.update(). Why doesn't surface.blit() draw the object directly?
Thank you! | Why do we need to display.update() after surface.blit() something? | 1.2 | 0 | 0 | 72 |
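A tiny sketch of that pattern: several blits, then a single display update per frame; the surfaces here are plain filled rectangles rather than loaded images:

```python
import pygame

pygame.init()
screen = pygame.display.set_mode((320, 240))

red = pygame.Surface((50, 50)); red.fill((255, 0, 0))
blue = pygame.Surface((50, 50)); blue.fill((0, 0, 255))

screen.blit(red, (20, 20))      # draw onto the backbuffer...
screen.blit(blue, (100, 100))   # ...as many times as you like
pygame.display.update()         # ...then push everything to the window once

pygame.time.wait(1000)
pygame.quit()
```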
48,970,752 | 2018-02-25T06:10:00.000 | 0 | 0 | 1 | 0 | python,dictionary | 48,970,792 | 4 | false | 0 | 0 | Use the enumerate function that will count all the words like looping it as
for index, value in enumerate(dic): | 1 | 2 | 0 | A simple program about storing rivers and their respective locations in a dictionary. I was wondering how I would go about looping through a dictionary key and looking if the dictionary key (or value) contains a certain word, if the word is present in the key, remove it.
EX: rivers_dict = {'mississippi river': 'mississippi'}
How would I remove the word 'river' in the dictionary key 'mississippi river'? I know i can assign something such as: rivers_dict['mississippi'] = rivers_dict.pop('mississippi river'). Is there a way to do this in a more modular manner? Thanks in advance. | If a dictionary key contains a certain word, how would I remove it? | 0 | 0 | 0 | 899 |
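One possible sketch of doing the rename over the whole dictionary, along the lines of the pop approach the question already mentions (this uses a dict comprehension rather than the enumerate idea from the answer; the second river entry is added just for illustration):

```python
rivers_dict = {'mississippi river': 'mississippi', 'nile river': 'egypt'}

word = 'river'
rivers_dict = {
    (key.replace(' ' + word, '') if word in key else key): value
    for key, value in rivers_dict.items()
}
print(rivers_dict)   # {'mississippi': 'mississippi', 'nile': 'egypt'}
```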
48,973,464 | 2018-02-25T12:34:00.000 | 0 | 0 | 0 | 0 | python,rest,firebase,firebase-realtime-database | 48,975,506 | 1 | false | 1 | 0 | There is a limit of 100 concurrent connections to the database for Firebase projects that are on the free Spark plan. To raise the limit, upgrade your project to a paid plan. | 1 | 0 | 0 | I was working with Pyrebase( python library for firebase) and was trying .stream() method but when I saw my firebase dashboard it showed 100 connection limit reached. Is there any way to remove those concurrent connection? | Firebase connection limit reached | 0 | 0 | 1 | 399 |
48,973,883 | 2018-02-25T13:17:00.000 | 0 | 0 | 0 | 0 | python,tensorflow,gpu | 48,977,717 | 4 | false | 0 | 0 | First of all, if you want to see a performance gain, you should have a better GPU, and second of all, Tensorflow uses CUDA, which is only for NVidia GPUs which have CUDA Capability of 3.0 or higher. I recommend you use some cloud service such as AWS or Google Cloud if you really want to do deep learning. | 2 | 0 | 1 | I've installed tensorflow CPU version. I'm using Windows 10 and I have AMD Radeon 8600M as my GPU. Can I install GPU version of tensorflow now? Will there be any problem? If not, where can I get instructions to install GPU version? | Installing tensorflow on GPU | 0 | 0 | 0 | 691 |
48,973,883 | 2018-02-25T13:17:00.000 | -1 | 0 | 0 | 0 | python,tensorflow,gpu | 48,974,256 | 4 | false | 0 | 0 | It depends on your graphic card, it has to be nvidia, and you have to install cuda version corresponding on your system and SO. Then, you have install cuDNN corresponding on the CUDA version you had installed
Steps:
Install NVIDIA 367 driver
Install CUDA 8.0
Install cuDNN 5.0
Reboot
Install tensorflow from source with bazel using the above configuration | 2 | 0 | 1 | I've installed tensorflow CPU version. I'm using Windows 10 and I have AMD Radeon 8600M as my GPU. Can I install GPU version of tensorflow now? Will there be any problem? If not, where can I get instructions to install GPU version? | Installing tensorflow on GPU | -0.049958 | 0 | 0 | 691 |
48,974,839 | 2018-02-25T14:57:00.000 | 0 | 0 | 0 | 0 | python,wxpython,boa-constructor | 48,990,057 | 3 | false | 0 | 1 | Thank you for all of your answers. I've found the way to do it. When the column is clicked, it will return the value of the column header. This is what I want.
noCol = event.m_col
n = self.lc.GetColumn(noCol).GetText()
print(n) | 1 | 1 | 0 | How to get column number or value in wx.ListControl wxPython? I want to sort the item by column when I click it. I'm using BoaConstructor IDE. Please help me :) | Get column number or value in wx.ListControl wxPython | 0 | 0 | 0 | 416 |
48,977,688 | 2018-02-25T19:43:00.000 | 1 | 1 | 0 | 0 | python,server,putty | 48,977,787 | 1 | true | 0 | 0 | There are many ways you can run a python program after you disconnect from an SSH session.
1) Tmux or Screen
Tmux is a "terminal multiplexer" which enables a number of terminals to be accessed by a single one.
You start by sshing as you do, run it by typing tmux and executing it. Once you are done you can disconnect from putty and when you login back you can relog to the tmux session you left
Screen also does that you just type screen instead of tmux
2) nohup
"nohup is a POSIX command to ignore the HUP signal. The HUP signal is, by convention, the way a terminal warns dependent processes of logout."
You can run it by typing nohup <pythonprogram> & | 1 | 0 | 0 | I've created a script for my school project that works with data. I'm quite new to working remotely on a server, so this might seem like a dumb question, but how do I execute my script named
stats.py
so that it continues executing even after I log off PuTTy? The script file is located on the server. It has to work with a lot of data, so I don't want to just try something and then few days later find out that it has exited right after I logged off.
Thank you for any help! | How to run a python script on a remote server that it doesn't quit after I log off? | 1.2 | 0 | 0 | 1,277 |
48,978,388 | 2018-02-25T20:57:00.000 | 6 | 0 | 1 | 0 | python,heap | 48,978,497 | 4 | false | 0 | 0 | The documentation is somewhat misleading if you're thinking about what list.pop does.
If heap is a minheap, then heap[0] is indeed the smallest item. Python's list.pop method returns the last element of the list, but heapq.heappop returns the smallest (first!) element of the heap. However, it does this by popping off the last element of the heap (which is an O(1) operation on a list), swapping it with heap[0], bubbling it up (this is O(log n)), and then returning the value removed from heap[0] to the caller.
So: list.pop returns the last item from a list and is O(1). heapq.heappop returns the first item to you, but not by shifting the entire array. | 2 | 7 | 0 | For a list, the heappop will pop out the front element. Remove an element from the front of a list has time complexity O(n).
Do I miss anything? | Why heappop time complexity is O(logn) (not O(n)) in python? | 1 | 0 | 0 | 3,023 |
48,978,388 | 2018-02-25T20:57:00.000 | 1 | 0 | 1 | 0 | python,heap | 48,978,473 | 4 | false | 0 | 0 | Heap pop is indeed O(logn) complexity.
What you are missing is that pop from a heap is not like "removing the first element and left shift all elements by one". There is an algorithm to move element inside the list, after popping, there is no guarantee that the remaining elements in the list are in the same order as before. | 2 | 7 | 0 | For a list, the heappop will pop out the front element. Remove an element from the front of a list has time complexity O(n).
Do I miss anything? | Why heappop time complexity is O(logn) (not O(n)) in python? | 0.049958 | 0 | 0 | 3,023 |
48,979,972 | 2018-02-26T00:34:00.000 | 1 | 0 | 0 | 0 | python,grpc | 49,501,641 | 2 | true | 0 | 0 | Things have not changed; as of 2018-03 the response iterator is still blocking.
We're currently scoping out remedies that may be ready later this year, but for the time being, calling next(response_iterator) is only way to draw RPC responses. | 1 | 1 | 0 | I am using a python gRPC client and make request to a service that
responds a stream. Last checked the document says the iterator.next()
is sync and blocking. Have things changed now ? If not any ideas on overcoming this shortcoming ?
Thanks
Arvind | Is grpc server response streaming still blocking? | 1.2 | 0 | 1 | 1,426 |
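For reference, a minimal sketch of working around the blocking iterator by draining it in a worker thread. The stub, request and the RPC name StreamData are placeholders for whatever your generated gRPC client code provides:

import threading

def drain(response_iterator, handler):
    # next() on the iterator blocks until the server sends the next message,
    # so run this loop off the main thread to keep the caller responsive.
    for response in response_iterator:
        handler(response)

# stub and request come from your generated client code (placeholders here)
threading.Thread(target=drain, args=(stub.StreamData(request), print), daemon=True).start()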
48,981,022 | 2018-02-26T03:28:00.000 | 0 | 0 | 0 | 0 | python,tensorflow,machine-learning,neural-network,deep-learning | 54,447,128 | 2 | false | 0 | 0 | Functionally, dilations augument in tf.nn.conv2d is the same as dilations_rate in tf.nn.convolution as well as rate in tf.nn.atrous_conv2d.
They all represent the rate by which we upsample the filter values by inserting zeros across the height and width dimensions. The dilation factor for each dimension of input specifying the filter upsampling/input downsampling rate otherwise known as atrous convolution.
The usage differs slightly.
Let rate k >= 1 represent the dilation rate,
in tf.nn.conv2d, the rate k is passed as list of ints [1, k, k,1] for [batch, rate_height, rate_width, channel].
in tf.nn.convolution, rate k is passed as a sequence of N ints as [k,k] for [rate_height, rate_width].
in tf.nn.atrous_conv2d, rate k is a positive int32, a single value for both height and width. This library is deprecated and exists only for backwards compatibility.
Hope it helps :) | 1 | 2 | 1 | I want to make dilated convolution on a feature. In tensorflow I found tf.nn.convolution and tf.nn.conv2d. But tf.nn.conv2d doesn't seem to support dilated convolution.
So I tried using tf.nn.convolution.
Do the 2 formulations below give the same result?
tf.nn.conv2d(x, w, strides=[1, 1, 2, 2], padding='SAME',data_format='NCHW')
tf.nn.convolution(x, w, strides=[1, 1, 2, 2], padding='SAME',data_format='NCHW') | what is the difference between tf.nn.convolution and tf.nn.conv2d? | 0 | 0 | 0 | 2,014 |
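A minimal sketch comparing the three spellings of a rate-2 dilated convolution (assuming TensorFlow 1.x-era signatures; the shapes are illustrative and the spatial strides are kept at 1 because dilation and striding are not combined here):

import tensorflow as tf

x = tf.ones([1, 8, 8, 3])      # NHWC input
w = tf.ones([3, 3, 3, 16])     # HWIO filter
k = 2                          # dilation rate

y1 = tf.nn.conv2d(x, w, strides=[1, 1, 1, 1], padding='SAME', dilations=[1, k, k, 1])
y2 = tf.nn.convolution(x, w, padding='SAME', dilation_rate=[k, k])
y3 = tf.nn.atrous_conv2d(x, w, rate=k, padding='SAME')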
48,983,001 | 2018-02-26T06:58:00.000 | 0 | 0 | 1 | 0 | python-3.x | 48,983,077 | 1 | false | 0 | 0 | This is a classic interview question. Open the file. Parse it one number (not one character) at a time. You know that whitespace splits the numbers, so that should be easy. Now, what is the formula for averages? Average(N) = (n1 + n2 + ... + nN) / N. N increases each time you read a new number. Hopefully these hints help you. Good luck. | 1 | 0 | 0 | Hello, I am very new to Python and I need some help because I have no idea how to deal with such a huge list from a file called "months.txt". I have a file that contains numbers taken each day of the year. I need to read the file and make it display the average of the numbers taken in each month. I would really appreciate any help.
here are the numbers taken each day of the year:
1102
9236
10643
2376
6815
10394
3055
3750
4181
5452
10745
9896
255
9596
1254
2669
1267
1267
1327
10207
5731
8435
640
5624
1062
3946
3796
9381
5945
10612
1970
9035
1376
1919
2868
5847
685
10578
3477
3937
5994
6971
3011
4474
4344
8068
6564
2659
4064
1161
6830
5167
5686
5352
898
4316
7699
6406
6466
2802
1239
8162
398
9908
8251
8419
6245
8484
9012
6318
853
4031
868
8776
10453
1026
1984
8127
5274
6937
1960
9655
1279
9386
6697
6326
2509
7127
7802
8798
6564
7220
10650
3847
7485
10951
3883
9456
4671
2067
6871
1573
8746
7473
4713
1215
8486
6652
4054
10304
5291
2680
9108
6446
1581
7607
2032
7630
1106
3702
986
8602
556
2209
3055
886
5813
6513
3154
1534
6271
611
4001
6522
3819
8396
2364
9660
5937
2506
9002
8586
8805
552
5802
7825
5610
8169
602
5638
2072
3536
5885
9334
6393
9318
6057
5812
5647
4654
1880
634
3084
9606
2287
3032
4030
5627
1314
8489
1601
8559
2083
5520
1829
2890
4533
3225
7405
3985
5521
1127
7109
8083
3615
1475
2896
10523
7108
797
8443
169
8755
5761
9862
9032
1659
10944
6878
1253
4690
9934
8820
41
9367
1898
3554
10650
3136
3574
9682
3950
691
8247
6677
10381
8879
8660
6431
6030
365
10357
10526
9245
5291
4651
5741
800
540
6074
68
8714
5095
4578
10841
5805
6676
2547
203
10988
604
9057
3787
2397
10984
9807
1703
6382
9793
8592
1279
8005
5297
7166
4070
4252
606
6443
10827
8140
5740
10844
8834
3695
4152
10662
8791
7791
9940
831
2999
2254
1161
808
4233
3562
3301
1530
7387
6425
9907
9752
4533
7079
3305
5286
4313
1503
6501
8201
1723
9501
9878
1844
5976
6171
10265
2607
10667
2310
836
2618
9813
5907
6849
470
8794
528
2327
2200
237
618
4898
1307
3212
1007
1322
10409
6956
8653
3462
3207
9210
1309
4431
9106
7737
1698
1117
3826
5297
5589
3199
9089
5967
3156
5919
2855
5985
1780
6267
6303
9855
3843
1816
2876
5973
2888
709
6509
4320
10342
2616
4887
10470
6084
4573
2457
10205
4627
7927
1703
5034
7042
4292 | python: dealing with a list of numbers taken each day of the year to get the average number | 0 | 0 | 0 | 34 |
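Following up on the averaging hints in the answer above, a minimal sketch (assuming months.txt holds one whitespace-separated reading per day of a non-leap year, split by the usual calendar month lengths):

days_per_month = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]

with open("months.txt") as f:
    readings = [int(token) for token in f.read().split()]

start = 0
for month, length in enumerate(days_per_month, start=1):
    chunk = readings[start:start + length]
    print("month", month, "average:", sum(chunk) / len(chunk))
    start += length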
48,985,145 | 2018-02-26T09:27:00.000 | 0 | 0 | 0 | 0 | python,django,email,user-registration | 48,985,464 | 1 | false | 1 | 0 | I would change it to lowercase and then save it, because that is the least number of operations and the shortest code.
If you decide instead to check whether it is unique in lowercase in the DB and then save it, you may end up hitting the DB twice (once for the check, once when saving) if you implement it the wrong way. | 1 | 0 | 0 | As far as I know, the standard new-user registration process (Django 2.x) only validates that the email field is present and matches the e-mail schema. But a user may type an e-mail address like this: [email protected] (via Caps Lock) and it gets saved to the DB.
This would be dangerous, because another user could register an account for that e-mail in lowercase: [email protected] or similar, yet it is still the same e-mail address!
So, the question is how to cleanly normalise the e-mail address when a new user registers. My ideas:
set email to lowercase before saving to the DB
check if it exists / is unique in the DB (compared in lowercase, of course)
I am looking for the best practice to solve this, btw. | Clean email field when new user is registering in Django? | 0 | 0 | 0 | 122
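A minimal sketch of the lowercase-before-save idea in a registration form (the form and field names are illustrative, not from the original post):

from django import forms
from django.contrib.auth import get_user_model

class RegistrationForm(forms.Form):
    email = forms.EmailField()

    def clean_email(self):
        email = self.cleaned_data["email"].lower()   # normalise before any DB lookup
        if get_user_model().objects.filter(email__iexact=email).exists():
            raise forms.ValidationError("This e-mail address is already registered.")
        return email   # the lowercased value is what gets saved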
48,985,544 | 2018-02-26T09:50:00.000 | 0 | 0 | 0 | 0 | django,python-3.x,automation,zapier,zapier-cli | 49,000,888 | 2 | false | 1 | 0 | David here, from the Zapier Platform team. The easiest way to do this is to have the form submit against your server and use a library like requests to POST to Zapier. This way, you don't have to worry about CORS or revealing the hook url to your users.
Hope that makes sense. Let me know if you've got any other questions! | 1 | 0 | 0 | Is there any way I can call the "Zapier trigger" from my Django Code,
Basically, I am having a Django form where the user will enter several email-id and when the user clicks on send button then I want to send this form data to Zapier in order to do the next action like writing in google spreadsheet or sending email to everyone. | Calling Zapier trigger from Django Code | 0 | 0 | 0 | 838 |
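A minimal sketch of the server-side POST suggested above (the hook URL and payload shape are placeholders):

import requests

ZAPIER_HOOK_URL = "https://hooks.zapier.com/hooks/catch/XXXX/YYYY/"   # placeholder

def send_to_zapier(emails):
    # POST from the Django view so the hook URL is never exposed to the browser
    response = requests.post(ZAPIER_HOOK_URL, json={"emails": emails}, timeout=10)
    response.raise_for_status()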
48,986,820 | 2018-02-26T10:55:00.000 | 1 | 0 | 1 | 0 | python,module,pip,ibm-cloud-infrastructure | 48,986,960 | 1 | true | 0 | 0 | Use this to import SoftLayer :
import SoftLayer (Capital L) | 1 | 0 | 0 | My softlayer python module is installed but can't be imported
i tried reinstall it ,but still i get the same error
ModuleNotFoundError: No module named 'Softlayer'
Hope someone can help | ModuleNotFoundError: No module named 'Softlayer' | 1.2 | 0 | 0 | 611 |
48,989,366 | 2018-02-26T13:11:00.000 | 0 | 0 | 0 | 0 | django,python-2.7,django-models | 48,990,171 | 2 | false | 1 | 0 | I find the django automigration file generation problematic and incomplete.
Actually I experienced another similar problem with django migration just yesterday.
How I solved it:
delete all migration files from the /migrations folder
do a fresh makemigrations
run python manage.py migrate --fake so django does not try to rebuild..
Hey presto! Working and models updated :D | 2 | 0 | 0 | I have created a model and migrated in Django, with a unique key constraint for one of the field. Now am trying to remove the unique constraint and generate another migration file with the new change, but it says "Nothing seems to have changed".
I tried with the command
python manage.py schemamigration --auto
PS: I am using OnetoOne relationship for the field. | Django: Removing unique constraint and creating migration | 0 | 0 | 0 | 2,524 |
48,989,366 | 2018-02-26T13:11:00.000 | 3 | 0 | 0 | 0 | django,python-2.7,django-models | 49,001,809 | 2 | false | 1 | 0 | Good question. A one to one relationship implies that one record is associated with another record uniquely. Even though the unique constraint is removed(for one to one field) in the code explicitly, it won't be reflected in your DB. So it won't create any migration file.
If you try the same thing for foreign constraint, it will work. | 2 | 0 | 0 | I have created a model and migrated in Django, with a unique key constraint for one of the field. Now am trying to remove the unique constraint and generate another migration file with the new change, but it says "Nothing seems to have changed".
I tried with the command
python manage.py schemamigration --auto
PS: I am using OnetoOne relationship for the field. | Django: Removing unique constraint and creating migration | 0.291313 | 0 | 0 | 2,524 |
48,992,383 | 2018-02-26T16:01:00.000 | 0 | 1 | 0 | 0 | python,email,imaplib | 49,032,647 | 1 | true | 0 | 0 | No. There's not. IMAP considers a moved message to be deleted from one folder and created in a new folder. There is no continuity between folders in general. | 1 | 0 | 0 | I have a python script that connects to an IMAP server. The script downloads the mails from the server in a certain format. The second time the script is run, rather than downloading all the mails, it should download new mails (synchronize) to avoid time overhead.
I have an issue. How to detect if a certain mail has been dragged from one directory to another directory (or mailbox). E.g. if I move a mail from mailbox A to mailbox B, is there any such flag like 'MOVED' to identify such mails.
So the next time the script runs I am able to fetch RECENT or UNSEEN mails but not the one whose path on the server has been changed. | IMAPLIB: Is there any MOVED flag to identify mails moved between mailboxes | 1.2 | 0 | 0 | 41 |
48,998,492 | 2018-02-26T23:01:00.000 | 0 | 0 | 0 | 1 | python,windows,windows-installer,cx-freeze | 48,998,812 | 3 | false | 0 | 0 | Is there a problem with modifying the script to take the directory to process as a command line argument?
You could then configure the different shortcuts to pass in the appropriate directory. | 1 | 1 | 0 | I currently have a Python scrip that runs through all Excel files in the current directory and generates a PDF report.
It works fine now but I don't want the users to be anywhere near frozen Python scripts. I created an MSI with cxFreeze which puts the EXE and scripts in the Program Files directory.
What I would like to be able to do is create a shortcut to this executable and pass the directory the shortcut was run from to the Python program so that can be set as the working directory. This would allow the user to move the shortcut to any folder of Excel files and generate a report there.
Does Windows send the location of a opened shortcut to the executable and is there a way to access it from Python? | Is it possible to access the launching shortcut directory from a Python executalbe? | 0 | 0 | 0 | 1,595 |
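A minimal sketch of the command-line-argument approach suggested in the answer, falling back to the process working directory (which for a shortcut is its "Start in" folder):

import os
import sys

target_dir = sys.argv[1] if len(sys.argv) > 1 else os.getcwd()
os.chdir(target_dir)   # from here on, the script processes the Excel files in target_dir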
48,999,177 | 2018-02-27T00:16:00.000 | 0 | 0 | 1 | 1 | python,macos,python-2.7,pip,anaconda | 61,524,906 | 2 | false | 0 | 0 | I had been struggling with this problem for weeks but finally fixed it. What you have to do is move the pip packages to the updated 'site-packages' folder. In my case, I downloaded pygame and pip installed it correctly, but it wasn't working in my Python 3 editor. I went into Finder and navigated to MacintoshHD/frameworks/python/versions, then opened a second Finder tab at the same location. For both the 3.8 and the 2.7 versions go to /lib/python/site-packages. You will see the pip packages in the 2.7 'site-packages' folder; move those files to the 3.8 'site-packages' folder. It worked for me! Hope it works for you! | 1 | 0 | 0 | I just got a new mac, and immediately installed Anaconda with python 3.6. However, I now need to go back and use python 2.7 for a project. This project also requires a few packages which could normally be installed with pip. However, after I installed anaconda, pip defaults to working with python3.
How can I access (or install, as it does not appear that the mac comes preloaded with pip for python 2.7) the pip for the python 2.7 that comes preloaded on the mac?
**I have tried pip2, pip2.7 as some other posts have suggested.
*** When I try to install pip (sudo easy_install pip) it defaults to looking at the Anaconda distribution) | Using pip for python 2.7, After having installed Anaconda with 3.6 (mac) | 0 | 0 | 0 | 1,768 |
49,000,335 | 2018-02-27T02:51:00.000 | 0 | 0 | 0 | 1 | macos,python-2.7,pypy | 49,023,589 | 1 | false | 0 | 0 | That sounds like a wrongly linked libSystem.dylib, and that will be hard to fix (e.g. checking the libs with otool and modifying them with install_name_tool). However, there are newer versions of PyPy. Have you tried them? | 1 | 0 | 0 | I am trying to execute a python code using PyPy2 v5.10 on MacOS El Capitan 10.11.6. However, I keep getting this error during runtime.
dyld: lazy symbol binding failed: Symbol not found: _clock_gettime
Referenced from: /Users/macpro/Downloads/pypy2-v5.10.0-osx64/bin//libpypy-c.dylib
Expected in: flat namespace
dyld: Symbol not found: _clock_gettime
Referenced from: /Users/macpro/Downloads/pypy2-v5.10.0-osx64/bin//libpypy-c.dylib
Expected in: flat namespace
fish: './pypy contactTrace.py' terminated by signal SIGTRAP (Trace or breakpoint trap)
I have read from a few sources that it's because El Capitan does not implement the clock but declares it. Hence, one solution is to upgrade the OS or comment out the declaration. Upgrading is not an option for me because I have a lot of other scripts running on that particular computer. I was trying to comment out the declaration but I am unable to find where to do so. Also, will it really solve the issue? Or are there any simpler solutions? I am not very familiar with the MacOS platform and am only using it for this project.
Thanks in advance! | dyId: Symbol not found: _clock_gettime | 0 | 0 | 0 | 1,255 |
49,003,874 | 2018-02-27T08:05:00.000 | 0 | 0 | 0 | 1 | python,sdn,mininet,openflow,pox | 49,146,800 | 1 | false | 0 | 0 | As I understand it, you may need to use a third-party program to collect flow information (e.g. sFlow) and write a program that communicates with the SDN controller. The SDN controller sees all traffic on the switches, but it doesn't handle events above L4 in the general case. | 1 | 0 | 0 | Does someone have a solution for detecting and mitigating TCP SYN Flood attacks in the SDN environment based on POX controller? | Python Code to detect and mitigate TCP SYN Flood attacks in SDN and POX controller | 0 | 0 | 1 | 613
49,003,881 | 2018-02-27T08:05:00.000 | 0 | 0 | 1 | 0 | python,windows,python-3.x,python-2.7,anaconda | 49,003,959 | 2 | false | 0 | 0 | For windows, you can simply install a new version of anaconda and add it to the PATH before the old version (and any other python versions). Windows will then find this version of python first, and it will thus be your "OS-wise" python installation. | 2 | 0 | 0 | I would like to be able to switch Python version permanently, in OS-wide manner and reboot tolerant, is it possible?
I don't want to use usual activate approach, which shows environment in command line prompt. | How to switch python version permanently, without "activating" environments? | 0 | 0 | 0 | 116 |
49,003,881 | 2018-02-27T08:05:00.000 | 0 | 0 | 1 | 0 | python,windows,python-3.x,python-2.7,anaconda | 49,003,970 | 2 | false | 0 | 0 | On windows, I think you just have to change your PATH environment variable and add the path to your favorite python.
I think you don't need to reboot your machine but you may have to restart your command line console (cmd.exe) to take it into account | 2 | 0 | 0 | I would like to be able to switch Python version permanently, in OS-wide manner and reboot tolerant, is it possible?
I don't want to use usual activate approach, which shows environment in command line prompt. | How to switch python version permanently, without "activating" environments? | 0 | 0 | 0 | 116 |
49,006,013 | 2018-02-27T10:06:00.000 | 0 | 0 | 0 | 0 | python,machine-learning,statistics,data-science | 49,021,501 | 1 | false | 0 | 0 | Take a look at the nearest-neighbours method and cluster analysis. The metric can be simple (like squared error) or even custom (with predefined weights for each category).
Nearest neighbours will answer the question 'how different is the current row from the other rows' and cluster analysis will answer the question 'is it an outlier or not'. Some visualization may also help (t-SNE). | 1 | 0 | 1 | I have a dataset with m observations and p categorical variables (nominal); each variable X1, X2, ..., Xp has several different classes (possible values). Ultimately I am looking for a way to find anomalies, i.e. to identify rows for which the combination of values seems incorrect with respect to the data seen so far. So far I was thinking about building a model to predict the value for each column and then building some metric to evaluate how different the actual row is from the predicted row. I would greatly appreciate any help! | find anomalies in records of categorical data | 0 | 0 | 0 | 826
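A minimal sketch of the nearest-neighbours idea with scikit-learn (the file name and the neighbour count are illustrative):

import pandas as pd
from sklearn.neighbors import NearestNeighbors

df = pd.read_csv("data.csv")            # m rows, p categorical columns
encoded = pd.get_dummies(df)            # one-hot encode the nominal variables

nn = NearestNeighbors(n_neighbors=6).fit(encoded)
distances, _ = nn.kneighbors(encoded)
scores = distances[:, 1:].mean(axis=1)  # mean distance to the 5 nearest other rows
print(df.assign(anomaly_score=scores).nlargest(10, "anomaly_score"))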
49,010,952 | 2018-02-27T14:24:00.000 | 0 | 0 | 0 | 1 | python,django,multithreading,ssh,kubernetes | 49,011,385 | 1 | false | 1 | 0 | If the replica count will not change after launch, you can get this info from the kube API, run your pods from a StatefulSet so they have sequential ids, and then use the good old trick of index mod N = 0..N-1 to divide the work list evenly, and you should be fine. | 1 | 0 | 0 | My Django App makes SSH connections to n number of machines (using a multithreaded python function). When replica=n is set in kubernetes deployment.yaml file then I want my app to distribute the connections among the n replicas.
I mean 1 replica should connect to k number machines, another to next k number of machines and so on. When all the replicas are done then it should take the connections in cyclic fashion i.e. next k connections to first machine and another next k to other machine.
I tried with 2 replicas but all the connections are getting established by both the pods (replicas).
I want those connections to be distributed among the pods. How can I achieve this? | Distribute ssh connections made by Django App (multi-threaded python function) among n number of Kubernetes Replicas | 0 | 0 | 0 | 32 |
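A minimal sketch of the StatefulSet-ordinal trick mentioned in the answer (it assumes the replica count is known, e.g. from an environment variable, and that pod hostnames end in "-<ordinal>"):

import os
import socket

def my_share(machines):
    replicas = int(os.environ.get("REPLICAS", "1"))            # illustrative env var
    ordinal = int(socket.gethostname().rsplit("-", 1)[-1])     # e.g. "myapp-2" -> 2
    return [m for i, m in enumerate(machines) if i % replicas == ordinal]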
49,011,180 | 2018-02-27T14:36:00.000 | 1 | 0 | 0 | 0 | android,python,ios,node.js,lyft-api | 52,992,307 | 1 | false | 0 | 0 | The Mystro app does not have any affiliation with either Uber or Lyft nor do they use their APIs to interact with a driver (as neither Uber or Lyft have a publicly accessible driver API like this). They use an Android Accessibility "feature" that let's the phone look into and interact with other apps you have running.
So basically Mystro uses this accessibility feature (Google has since condemned the use of the accessibility feature like this) to interact with the Uber and Lyft app on the driver's behalf. | 1 | 0 | 0 | I want to use Lyft Driver api like in the Mystro android app however iv searched everywhere and all I could find is lyft api.
To elaborate more on what I'm trying to achieve: I want an API that will allow me to integrate with the Lyft driver app (and not the Lyft rider app). I want to be able, for example, to view nearby ride requests as a driver.
The Mystro android app has this feature, how is it done | How do I use Lyft driver API like Mystro android app? | 0.197375 | 0 | 1 | 249 |
49,011,268 | 2018-02-27T14:40:00.000 | 0 | 0 | 0 | 0 | python,numpy | 49,011,315 | 1 | false | 0 | 0 | Numpy 1.8.1 is very out of date - you should upgrade to the latest version (1.14.1 as of writing) and that error will be resolved.
Out of interest, I've seen this question asked before - are you following a guide that is out of date or something? | 1 | 0 | 1 | I am using python 2.7 on windows 10 . I installed numpy-1.8.1-win32-superpack-python2.7 and extracted opencv-3.4.0-vc14_vc15.
I copied cv2.pyd from opencv\build\python\2.7\x86 and pasted to C:\Python27\Lib\site-packages.
I could import numpy without any error. While I run import cv2 it gives an error like
RuntimeError: module compiled against API version 0xa but this version of numpy is 0x9
Traceback (most recent call last):
File "", line 1, in
import cv2
ImportError: numpy.core.multiarray failed to import. | ImportError: numpy.core.multiarray failed to import on windows | 0 | 0 | 0 | 4,894 |
49,014,634 | 2018-02-27T17:30:00.000 | 0 | 1 | 1 | 0 | python-3.x | 49,014,837 | 1 | true | 0 | 0 | Since it's your homework I'm not going to solve it for you. The pythonic way would be to use some "magic" function that will solve your problem.
However you're here to learn programming in general, so implement this:
Count the characters until the next character is different from the current one.
Append to your output string the number of times the current character showed up, followed by the character itself.
Print. | 1 | 0 | 0 | Given a string of upper case alphabets and we have to compress the string using Run length Encoding.
Input = "AABBBACCDA"
output = 2A3B1A2C1D1A | Run length Encoding for a given String | 1.2 | 0 | 0 | 58 |
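A minimal sketch of the counting approach described in the answer, checked against the example above:

def run_length_encode(s):
    out = []
    i = 0
    while i < len(s):
        j = i
        while j < len(s) and s[j] == s[i]:   # advance to the end of the current run
            j += 1
        out.append(str(j - i) + s[i])        # count first, then the character
        i = j
    return "".join(out)

print(run_length_encode("AABBBACCDA"))   # 2A3B1A2C1D1A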
49,017,084 | 2018-02-27T20:06:00.000 | 1 | 0 | 0 | 0 | c#,python,unity3d,tensorflow,machine-learning | 49,017,149 | 5 | true | 0 | 0 | You have a few options:
Subprocess
You can launch the Python script from Unity's C# and then send stdin/stdout data to and from the process. On the Python side it's as simple as input() and print(), and on the C# side it's basically reading and writing from a Stream object (as far as I remember).
UDP/TCP sockets
You can make your Python program a UDP/TCP server (preferably UDP if you have to transfer a lot of data, and it might be simpler to code). Then you create a C# client and send requests to the Python server. The Python server will do the processing (AI magic, yay!) then return the results to Unity's C#. In C# you'd have to research the UdpClient class, and in Python, the socket module.
In particular my learning approach is reinforcement learning. I need to monitor the states and the rewards in the environment [coded in C#], and pass them to the NN [coded in Python]. Then the prediction [from Python code] should be sent back to the environment [to C# code].
Sadly I'm quite confused on how to let C# and Python communicate. I'm reading a lot online but nothing really helped me. Can anybody clear my ideas? Thank you. | What is the best approach to let C# and Python communicate for this machine learning task? | 1.2 | 0 | 0 | 2,512 |
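A minimal sketch of the Python side of the UDP idea (the port, message format and run_network call are placeholders; the Unity side would use C#'s UdpClient):

import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("127.0.0.1", 5005))

while True:
    data, addr = sock.recvfrom(4096)         # state/reward sent from Unity
    action = run_network(data.decode())      # run_network is a hypothetical call into the NN
    sock.sendto(str(action).encode(), addr)  # send the prediction back to C#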
49,018,923 | 2018-02-27T22:22:00.000 | 4 | 0 | 0 | 1 | python-3.x,celery,typeerror,pickle,memoryview | 50,096,071 | 1 | false | 0 | 0 | After uninstalling librabbitmq, the problem was resolved. | 1 | 6 | 1 | Trying to run the most basic test of add.delay(1,2) using celery 4.1.0 with Python 3.6.4 and getting the following error:
[2018-02-27 13:58:50,194: INFO/MainProcess] Received task:
exb.tasks.test_tasks.add[52c3fb33-ce00-4165-ad18-15026eca55e9]
[2018-02-27 13:58:50,194: CRITICAL/MainProcess] Unrecoverable error:
SystemError(' returned a result with an error set',) Traceback (most
recent call last): File
"/opt/myapp/lib/python3.6/site-packages/kombu/messaging.py", line 624,
in _receive_callback
return on_m(message) if on_m else self.receive(decoded, message) File
"/opt/myapp/lib/python3.6/site-packages/celery/worker/consumer/consumer.py",
line 570, in on_task_received
callbacks, File "/opt/myapp/lib/python3.6/site-packages/celery/worker/strategy.py",
line 145, in task_message_handler
handle(req) File "/opt/myapp/lib/python3.6/site-packages/celery/worker/worker.py", line
221, in _process_task_sem
return self._quick_acquire(self._process_task, req) File "/opt/myapp/lib/python3.6/site-packages/kombu/async/semaphore.py",
line 62, in acquire
callback(*partial_args, **partial_kwargs) File "/opt/myapp/lib/python3.6/site-packages/celery/worker/worker.py", line
226, in _process_task
req.execute_using_pool(self.pool) File "/opt/myapp/lib/python3.6/site-packages/celery/worker/request.py",
line 531, in execute_using_pool
correlation_id=task_id, File "/opt/myapp/lib/python3.6/site-packages/celery/concurrency/base.py",
line 155, in apply_async
**options) File "/opt/myapp/lib/python3.6/site-packages/billiard/pool.py", line 1486,
in apply_async
self._quick_put((TASK, (result._job, None, func, args, kwds))) File
"/opt/myapp/lib/python3.6/site-packages/celery/concurrency/asynpool.py",
line 813, in send_job
body = dumps(tup, protocol=protocol) TypeError: can't pickle memoryview objects
The above exception was the direct cause of the following exception:
Traceback (most recent call last): File
"/opt/myapp/lib/python3.6/site-packages/celery/worker/worker.py", line
203, in start
self.blueprint.start(self) File "/opt/myapp/lib/python3.6/site-packages/celery/bootsteps.py", line
119, in start
step.start(parent) File "/opt/myapp/lib/python3.6/site-packages/celery/bootsteps.py", line
370, in start
return self.obj.start() File "/opt/myapp/lib/python3.6/site-packages/celery/worker/consumer/consumer.py",
line 320, in start
blueprint.start(self) File "/opt/myapp/lib/python3.6/site-packages/celery/bootsteps.py", line
119, in start
step.start(parent) File "/opt/myapp/lib/python3.6/site-packages/celery/worker/consumer/consumer.py",
line 596, in start
c.loop(*c.loop_args()) File "/opt/myapp/lib/python3.6/site-packages/celery/worker/loops.py", line
88, in asynloop
next(loop) File "/opt/myapp/lib/python3.6/site-packages/kombu/async/hub.py", line 354,
in create_loop
cb(*cbargs) File "/opt/myapp/lib/python3.6/site-packages/kombu/transport/base.py", line
236, in on_readable
reader(loop) File "/opt/myapp/lib/python3.6/site-packages/kombu/transport/base.py", line
218, in _read
drain_events(timeout=0) File "/opt/myapp/lib/python3.6/site-packages/librabbitmq-2.0.0-py3.6-linux-x86_64.egg/librabbitmq/init.py",
line 227, in drain_events
self._basic_recv(timeout) SystemError: returned a result with an error set
I cannot find any previous evidence of anyone hitting this error. I noticed from the celery site that only python 3.5 is mentioned as supported, is that the issue or is this something I am missing?
Any help would be much appreciated!
UPDATE: Tried with Python 3.5.5 and the problem persists. Tried with Django 4.0.2 and the problem persists.
UPDATE: Uninstalled librabbitmq and the problem stopped. This was seen after migration from Python 2.7.5, Django 1.7.7 to Python 3.6.4, Django 2.0.2. | TypeError: can't pickle memoryview objects when running basic add.delay(1,2) test | 0.664037 | 0 | 0 | 3,865 |
49,023,337 | 2018-02-28T06:35:00.000 | 1 | 0 | 0 | 0 | python,numpy,scipy,svd | 49,023,961 | 1 | true | 0 | 0 | If A is a 3 x 5 matrix then it has rank at most 3. Therefore the SVD of A contains at most 3 singular values. Note that in your example above, the singular values are stored as a vector instead of a diagonal matrix. Trivially this means that you can pad your matrices with zeroes at the bottom. Since the full S matrix contains of 3 values on the diagonal followed by the rest 0's (in your case it would be 64x64 with 3 nonzero values), the bottom rows of V and the right rows of U don't interact at all and can be set to anything you want.
Keep in mind that this isn't the SVD of A anymore, but instead the condensed SVD of the matrix augmented with a lot of 0's. | 1 | 0 | 1 | svd formular: A ≈ UΣV*
I use numpy.linalg.svd to run svd algorithm.
And I want to set dimension of matrix.
For example: A=3*5 dimension, after running numpy.linalg.svd, U=3*3 dimension, Σ=3*1 dimension, V*=5*5 dimension.
I need to set specific dimension like U=3*64 dimension, V*=64*5 dimension. But it seems there is no optional dimension parameter can be set in numpy.linalg.svd. | set dimension of svd algorithm in python | 1.2 | 0 | 0 | 646 |
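A minimal numpy sketch of the zero-padding described in the answer (note the padded matrices are no longer the SVD of A itself, but they still reproduce A):

import numpy as np

A = np.random.rand(3, 5)
U, s, Vh = np.linalg.svd(A)        # U: (3, 3), s: (3,), Vh: (5, 5)

d = 64
U_pad = np.zeros((3, d));  U_pad[:, :3] = U              # pad extra columns with zeros
Vh_pad = np.zeros((d, 5)); Vh_pad[:5, :] = Vh            # pad extra rows with zeros
S_pad = np.zeros((d, d));  S_pad[:3, :3] = np.diag(s)

print(np.allclose(U_pad @ S_pad @ Vh_pad, A))            # True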
49,027,447 | 2018-02-28T10:35:00.000 | 1 | 0 | 0 | 1 | python,bash,concurrency,background-process | 49,088,871 | 1 | false | 1 | 0 | Here's how it might look (hosting-agnostic):
A user uploads a file on the web server
The file is saved in a storage that can be accessed later by the background jobs
Some metadata (location in the storage, user's email etc) about the file is saved in a DB/message broker
Background jobs tracking the DB/message broker pick up the metadata and start handling the file (this is why it needs to be accessible by it in p.2) and notify the user
More specifically, in case of python/django + aws you might use the following stack:
Lets assume you're using python + django
You can save the uploaded files in a private AWS S3 bucket
Some meta might be saved in the db or use celery + AWS SQS or AWS SQS directly or bring up something like rabbitmq or redis(+pubsub)
Have python code handling the job - depends on what your opt for in p.3. The only requirement is that it can pull data from your S3 bucket. After the job is done notify the user via AWS SES
The simplest single-server setup that doesn't require any intermediate components:
Your python script that simply saves the file in a folder and gives it a name like [email protected]
Cron job looking for any files in this folder that would handle found files and notify the user. Notice if you need multiple background jobs running in parallel you'll need to slightly complicate the scheme to avoid race conditions (i.e. rename the file being processed so that only a single job would handle it)
In a prod app you'll likely need something in between depending on your needs | 1 | 2 | 0 | I want to create a minimal webpage where concurrent users can upload a file and I can process the file (which is expected to take some hours) and email back to the user later on.
Since I am hosting this on AWS, I was thinking of invoking some background process once I receive the file so that even if the user closes the browser window, the processing keeps taking place and I am able to send the results after few hours, all through some pre-written scripts.
Can you please help me with the logistics of how should I do this? | Concurrent file upload/download and running background processes | 0.197375 | 0 | 0 | 912 |
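A minimal sketch of the worker from the "simplest single-server setup" above (the paths, process_file and notify_user are placeholders; the atomic rename is the claim step that avoids race conditions between parallel jobs):

import os

INBOX = "/var/myapp/inbox"      # where the upload view saves files (illustrative path)
IN_WORK = "/var/myapp/inwork"

for name in os.listdir(INBOX):
    src, dst = os.path.join(INBOX, name), os.path.join(IN_WORK, name)
    try:
        os.rename(src, dst)     # atomic claim: only one worker wins this file
    except OSError:
        continue                # another worker grabbed it first
    process_file(dst)           # hypothetical long-running processing
    notify_user(dst)            # hypothetical e-mail notification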
49,027,972 | 2018-02-28T11:00:00.000 | -1 | 0 | 0 | 0 | python,python-2.7,scikit-learn | 49,028,439 | 1 | false | 0 | 0 | Can't you store only the parameters of your SVM classifier with clf.get_params() instead of the whole object? | 1 | 0 | 1 | I have trained a SVM model with sklearn, I need to connect this to php. To do this I am using exec command to call in the console the python script, where I load the model with pickle and predict the results. The problem is that loading the model with pickle takes some time (a couple of seconds) and I would like it to be faster. Is there a way of having this model in memory so I don't need to load with pickle every time? | Keep sklearnt model in memory to speed up prediction | -0.197375 | 0 | 0 | 186 |
49,028,537 | 2018-02-28T11:29:00.000 | 0 | 0 | 0 | 0 | python,multithreading,iot | 49,919,399 | 1 | true | 1 | 0 | I found out that this command does the job:
sudo gunicorn --bind 0.0.0.0:80 MyApp:app --worker-class gevent --timeout 90
It can now serve as many clients as I need. | 1 | 0 | 0 | I implemented a server in Python and I serve it with gunicorn.
I have 3 shared sources (URLs) with SSE event streams, to be shared with an unknown number of clients.
Is there any way to enable unlimited processes/threads in gunicorn to enable sharing to unlimited users, according to requests ? | Is there any way to enable unlimited processes/threads in gunicorn to enable sharing to unlimited users? | 1.2 | 0 | 0 | 185 |
49,031,954 | 2018-02-28T14:31:00.000 | 6 | 0 | 0 | 0 | python,django,django-rest-framework,django-registration,django-oauth | 49,129,766 | 4 | false | 1 | 0 | You have to create the user using normal Django mechanism (For example, you can add new users from admin or from django shell). However, to get access token, OAuth consumer should send a request to OAuth server where user will authorize it, once the server validates the authorization, it will return the access token. | 2 | 11 | 0 | I've gone through the docs of Provider and Resource of Django OAuth Toolkit, but all I'm able to find is how to 'authenticate' a user, not how to register a user.
I'm able to set up everything on my machine, but not sure how to register a user using username & password. I know I'm missing something very subtle. How do I exactly register a user and get an access token in return to talk to my resource servers.
OR
Is it like that I've to first register the user using normal Django mechanism and then get the token of the same? | Django OAuth Toolkit - Register a user | 1 | 0 | 0 | 6,515 |
49,031,954 | 2018-02-28T14:31:00.000 | 1 | 0 | 0 | 0 | python,django,django-rest-framework,django-registration,django-oauth | 59,511,833 | 4 | false | 1 | 0 | I'm registering users with the regular Django mechanism combined with django-oauth-toolkit's application client details (client id and client secret key).
I have a separate UserRegisterApiView which is not restricted by token authentication, but it checks the client id and client secret key when a POST request is made to register a new user. In this way we restrict access to the register URL to registered OAuth clients only.
Here is the registration workflow:
User registration request from React/Angular/View app with client_id and client_secret.
Django will check if client_id and client_secret are valid if not respond 401 unauthorized.
If valid and register user data is valid, register the user.
On successful response redirect user to login page. | 2 | 11 | 0 | I've gone through the docs of Provider and Resource of Django OAuth Toolkit, but all I'm able to find is how to 'authenticate' a user, not how to register a user.
I'm able to set up everything on my machine, but not sure how to register a user using username & password. I know I'm missing something very subtle. How do I exactly register a user and get an access token in return to talk to my resource servers.
OR
Is it like that I've to first register the user using normal Django mechanism and then get the token of the same? | Django OAuth Toolkit - Register a user | 0.049958 | 0 | 0 | 6,515 |
49,033,088 | 2018-02-28T15:29:00.000 | 0 | 0 | 1 | 0 | python | 49,033,153 | 1 | false | 0 | 0 | Log level is controlled in the code, so no, not really.
My solution would be to set the log level to warning by default, and check an environment variable to set it to debug, and then set that locally when you want it. | 1 | 0 | 0 | I am developing a tool and play around with it while developing by having it pip installed with the -e editable option. While developing I have set the log level to debug.
I am sure I am going to forget setting the logger to another level as soon as I am going to release the app. Is there a way to put the loglevel inside the setup.py file or something ? | Set python loglevel depending on developing or releasing an app | 0 | 0 | 0 | 26 |
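A minimal sketch of the environment-variable approach from the answer (the variable name is illustrative):

import logging
import os

level = logging.DEBUG if os.environ.get("MYTOOL_DEBUG") else logging.WARNING
logging.basicConfig(level=level)   # debug locally with MYTOOL_DEBUG=1, warning everywhere else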
49,033,297 | 2018-02-28T15:39:00.000 | 0 | 0 | 0 | 1 | java,python,windows | 49,046,477 | 2 | true | 1 | 0 | Have you tried this,
Runtime.getRuntime().exec("python helloworld.py");
Please try and if it doesn't work leave a comment. | 1 | 0 | 0 | I've a Python program which just prints "hello world". I only want to get that output in a Java program and print that again, i.e. I want to consume output of Python program in a Java program.
I tried using Runtime.getRuntime().exec("helloworld.py"); but it is giving an exception saying java.lang.IOException : Cannot run program "helloworld.py" : CreateProcess error=193, %1 is not a valid Win32 application.
Can anybody please explain why this exception has occurred and what is solution for it ?
Thanks in advance! | Receive output of python program in java | 1.2 | 0 | 0 | 202 |
49,033,375 | 2018-02-28T15:43:00.000 | 0 | 0 | 1 | 0 | arrays,python-3.x,integer | 49,033,556 | 2 | false | 0 | 0 | What you are trying to do is reverse the first element of the list, i.e. reversed(1).
That is not possible, because an integer is not a sequence. | 1 | 0 | 0 | I want to reverse a list made of integers
My code:
list = [1, 2, 3, 4, 5]
print(reversed(list[0]))
but it keeps saying int object is not reversible. I want it to print 5 | int object is not reversible error in python | 0 | 0 | 0 | 2,379 |
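For reference, two ways to get the intended result (reversing the list itself, or just taking the last element):

numbers = [1, 2, 3, 4, 5]
print(list(reversed(numbers))[0])   # 5 -- reverse the whole list, then take its first item
print(numbers[-1])                  # 5 -- or simply index from the end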
49,038,111 | 2018-02-28T20:37:00.000 | 2 | 0 | 0 | 0 | python,weka,markov,rweka,markov-models | 51,527,999 | 1 | false | 0 | 0 | Find all parents of the node
Find all children of the node
Find all parents of the children of the node
These together give you the Markov blanket for a given node. | 1 | 2 | 1 | I want to do feature selection using the Markov blanket algorithm. I am wondering whether there is any API in Java/Weka or in Python to find the Markov blanket.
Consider that I have a dataset with a number of variables and one target variable. I want to find the Markov blanket of the target variable.
Any information would be appreciated | How to find markov blanket for a node? | 0.379949 | 0 | 0 | 1,298 |
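A minimal sketch of those three steps on a DAG given as a parent map (the graph representation and names are illustrative):

def markov_blanket(node, parents):
    # parents maps each node to the set of its parents
    children = {n for n, ps in parents.items() if node in ps}
    spouses = set().union(*(parents[c] for c in children)) if children else set()
    return (parents.get(node, set()) | children | spouses) - {node}

# toy graph: A -> C <- B, C -> D
parents = {"A": set(), "B": set(), "C": {"A", "B"}, "D": {"C"}}
print(markov_blanket("C", parents))   # {'A', 'B', 'D'}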
49,041,169 | 2018-03-01T01:11:00.000 | 5 | 0 | 1 | 0 | python,dictionary,data-structures,ordereddictionary | 49,041,241 | 1 | true | 0 | 0 | From the source code, it appears to be implemented as a dict with a doubly linked list of keys for ordering, as well as another dict that maps keys to their position in the list.
Insertion just adds to the end of the list.
Deletion uses the second dict to remove an element from the list.
Iteration iterates over the linked list. | 1 | 4 | 0 | I'm curious as to how OrderedDict from the collections library keeps key/pair order? I looked around online and couldn't find an answer. | How does OrderedDict keep things in Order in Python | 1.2 | 0 | 0 | 363 |
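A simplified sketch of that layout: a value dict plus a key chain, with a second dict giving O(1) unlinking. This is only an illustration of the idea, not CPython's actual code:

class TinyOrderedDict:
    def __init__(self):
        self._values = {}            # key -> value
        self._links = {}             # key -> [prev_key, next_key]
        self._first = self._last = None

    def __setitem__(self, key, value):
        if key not in self._values:              # new key: append to the end of the chain
            self._links[key] = [self._last, None]
            if self._last is None:
                self._first = key
            else:
                self._links[self._last][1] = key
            self._last = key
        self._values[key] = value

    def __delitem__(self, key):
        prev_key, next_key = self._links.pop(key)    # O(1) unlink via the second dict
        del self._values[key]
        if prev_key is None: self._first = next_key
        else: self._links[prev_key][1] = next_key
        if next_key is None: self._last = prev_key
        else: self._links[next_key][0] = prev_key

    def __iter__(self):                              # iterate in insertion order
        key = self._first
        while key is not None:
            yield key
            key = self._links[key][1]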
49,041,313 | 2018-03-01T01:33:00.000 | 1 | 0 | 0 | 0 | python,raspberry-pi,raspberry-pi3,google-assistant-sdk | 49,223,023 | 1 | false | 0 | 0 | This fixed it for me: pip3 install google-assistant-library==0.1.0 | 1 | 0 | 0 | Both an existing raspberry pi 3 assistant-sdk setup and a freshly created one are producing identical errors at all times idle or otherwise. The lines below are repeating over and do not seem to be affected by the state of the assistant. Replicates across multiple developer accounts, devices and projects. Present with both the stock hotword example and modified scripts that worked previously. All cases are library assistant and python 3 on raspberry pi 3 model B running raspbian stretch.
[9780:9796:ERROR:assistant_ssdp_client.cc(210)] Failed to parse
header: LOCATION: about:blank
[9780:9796:ERROR:assistant_ssdp_client.cc(76)] LOCATION header doesn't
contain a valid url | Assistant SDK on raspberry pi 3 throwing repeated location header errors | 0.197375 | 0 | 1 | 126 |
49,043,162 | 2018-03-01T05:28:00.000 | 1 | 0 | 0 | 0 | python,matplotlib | 49,043,299 | 2 | false | 0 | 0 | This should work
matplotlib.pyplot.yticks(np.arange(start, stop+1, step)) | 1 | 0 | 1 | Let's say if I have Height = [3, 12, 5, 18, 45] and plot my graph then the yaxis will have ticks starting 0 up to 45 with an interval of 5, which means 0, 5, 10, 15, 20 and so on up to 45. Is there a way to define the interval gap (or the step). For example I want the yaxis to be 0, 15, 30, 45 for the same data set. | Custom Yaxis plot in matplotlib python | 0.099668 | 0 | 0 | 40 |
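A minimal sketch with the numbers from the question:

import matplotlib.pyplot as plt
import numpy as np

height = [3, 12, 5, 18, 45]
plt.bar(range(len(height)), height)
plt.yticks(np.arange(0, 46, 15))   # ticks at 0, 15, 30, 45
plt.show()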
49,045,210 | 2018-03-01T08:08:00.000 | 0 | 0 | 0 | 0 | tensorflow,python-3.5 | 49,045,428 | 1 | false | 0 | 0 | The only option as I see it is creating an initialization loop where every index is set to 0. This eliminates the problem but may not be an ideal way. | 1 | 0 | 1 | Is it in anyway possible to check whether an index in a TensorArray has been initialized?
As I understand TensorArrays can't be initialized with default values.
However I need a way to increment the number on that index which I try to do by reading it, adding one and then writing it to the same index.
If the index is not initialized however this will fail as it cannot read an uninitialized index.
So is there a way to check if it has been initialized and otherwise write a zero to initialize it? | How to checkwhether an index in a tensorarray has been initialized? | 0 | 0 | 0 | 69 |
49,046,224 | 2018-03-01T09:10:00.000 | 0 | 0 | 0 | 0 | python-2.7,odoo-8,odoo | 49,046,718 | 1 | false | 1 | 0 | I solved it myself. I just added _order = 'finished asc' to the class. finished is a Boolean field that tells me whether the task is finished or not. | 1 | 0 | 0 | At the moment I am working on an Odoo project and I have a kanban view. My question is how do I put a kanban element at the bottom via XML or Python. Is there an index for the elements or something like that? | Is there a way to put a kanban element to the bottom in odoo | 0 | 0 | 1 | 63
49,048,111 | 2018-03-01T10:54:00.000 | 1 | 0 | 0 | 0 | python,video,cv2 | 58,926,411 | 6 | false | 0 | 0 | I noticed a weird phenomenon: many videos DO NOT HAVE as many frames as vid.get(cv2.CAP_PROP_FRAME_COUNT) reports.
I suppose that the video duration should be TOTAL FRAMES divided by FPS, but it always mismatches: the actual duration is longer than the calculated one. Considering what FFMPEG does, the original video might have some empty frames.
Hope this helps. | 1 | 31 | 0 | I can only get the number of frames CAP_PROP_FRAME_COUNT using CV2.
However, I cannot find the parameter to get the duration of the video using cv2.
How to do that?
Thank you very much. | How to get the duration of video using cv2 | 0.033321 | 0 | 0 | 50,916 |
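A minimal sketch of the frames-divided-by-FPS approach (the file name is illustrative; as noted above, the frame count can be off for some files):

import cv2

cap = cv2.VideoCapture("video.mp4")
frames = cap.get(cv2.CAP_PROP_FRAME_COUNT)
fps = cap.get(cv2.CAP_PROP_FPS)
print("duration (seconds):", frames / fps)
cap.release()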
49,048,262 | 2018-03-01T11:02:00.000 | 0 | 0 | 0 | 0 | python,tensorflow,object-detection | 68,770,190 | 2 | false | 0 | 0 | I had the same issue using the centernet_mobilenetv2 model, but I just deleted the num_keypoints parameter in the pipeline.config file and then all was working fine. I don't know what is the problem with that parameter but I was able to run the training without it. | 1 | 1 | 1 | So I am currently attempting to train a custom object-detection model on tensorflow to recognize images of a raspberrypi2. Everything is already set up and running on my hardware,but due to limitations of my gpu I settled for the cloud. I have uploaded my data(train & test records ans csv-files) and my checkpoint model. That is what I get from the logs:
tensorflow:Restoring parameters from /mobilenet/model.ckpt
tensorflow:Starting Session.
tensorflow:Saving checkpoint to path training/model.ckpt
tensorflow:Starting Queues.
tensorflow:Error reported to Coordinator: <class tensorflow.python.framework.errors_impl.InvalidArgumentError'>,
indices[0] = 0 is not in [0, 0)
I also have a folder called images with the actual .jpg files and it is also on the cloud, but for some reason I must specify every directory with a preceding forward slash /, and that might be a problem, as I currently do not know whether some of the files are trying to import these images but could not find the path because of the missing /.
If any of you happens to share a solution I would be really thankful.
EDIT : I fixed it by downloading an older version of the models folder in tensorflow and the model started training, so note to the tf team. | Error indices[0] = 0 is not in [0, 0) while training an object-detection model with tensorflow | 0 | 0 | 0 | 591 |
49,048,294 | 2018-03-01T11:04:00.000 | 4 | 0 | 0 | 0 | python,resize,pyqt5,qapplication | 49,049,245 | 2 | false | 0 | 1 | Simple 1 line fix for any who need
os.environ["QT_AUTO_SCREEN_SCALE_FACTOR"] = "1" | 1 | 1 | 0 | I am using PyQt5 and Python 3.6.4 to design a ui for a program. It was made on a 720p monitor however now using the same code on a 4k monitor, everything is tiny apart from the text. How would I go about resizing the whole app to look the same on all monitors: (720p, 1080p, 4k, etc.)
The program is to be run on windows through an executable created through compiling the python code.
Cheers | PyQt5 Resize app for different displays | 0.379949 | 0 | 0 | 4,389 |
49,048,520 | 2018-03-01T11:16:00.000 | 12 | 0 | 1 | 0 | python,matplotlib,pycharm | 49,070,994 | 1 | true | 0 | 0 | I had the same problem in PyCharm 2017.3.3 and what helped was to disable the checkbox Show plots in toolwindow in File -> Settings -> Tools -> Python Scientific. | 1 | 10 | 0 | I've set my default backend to Qt5Agg in .config/matplotlib/matplotlibrc. This works if I use a regular ssh prompt and open ipython and run import matplotlib as mpl
I correctly get:
mpl.get_backend() => "Qt5Agg"
When I connect through pyCharm remote console, the default backend is set to 'module://backend_interagg' which seems to be a purpose built helper extension by pycharm.
Using mpl.use("Qt5Agg") works as expected (i.e. correctly sets the backend and allows me to use it).
I'm just trying to get the default working and the pycharm remote console to properly use my rc file parameters.
Fwiw, I've tried actually setting my master rc file (in the site-packages directory) to have Qt5Agg and I still get this problem.
Also, mpl.get_configdir() correctly returns ~/.config/matplotlib
Any ideas? | How to prevent PyCharm from overriding default backend as set in matplotlib? | 1.2 | 0 | 0 | 2,501 |
49,049,714 | 2018-03-01T12:26:00.000 | 0 | 0 | 0 | 0 | postgresql,hash,probability,python-3.6 | 49,123,811 | 1 | false | 0 | 0 | The total number of combinations of 500 objects taken by up to 10 would be approximately 2.5091E+20, which would fit in 68 bits (about 13 characters in base36), but I don't see an easy algorithm to assign each combination a number. An easier algorithm would be like this: if you assign each person a 9-bit number (0 to 511) and concatenate up to 10 numbers, you would get 90 bits. To encode those in base36, you would need 18 characters.
If you want to use a hash that with just 6 characters in base36 (about 31 bits), the probability of a collision depends on the total number of groups used during the lifetime of the application. If we assume that each day there are 10 new groups (that were not encountered before) and that the application will be used for 10 years, we would get 36500 groups. Using the calculator provided by Nick Barnes shows that there is a 27% chance of a collision in this case. You can adjust the assumptions to your particular situation and then change the hash length to fit your desired maximum chance of a collision. | 1 | 0 | 0 | I have a total number of W workers with long worker IDs. They work in groups, with a maximum of M members in each group.
To generate a unique group name for each worker combination, concatenating the IDs is not feasible. I am thinking of doing an MD5() on the flattened, sorted worker id list. I am not sure how many digits I should keep for it to be memorable to humans while safe from collision.
Will log( (26+10), W^M ) be enough? How many redundant chars should I keep? Is there any other specialized hash function that works better for this scenario? | What is the shortest human-readable hash without collision? | 0 | 0 | 0 | 361
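The arithmetic in the answer can be checked with a few lines (math.comb needs Python 3.8+):

from math import comb, exp, log

W, M = 500, 10
combos = sum(comb(W, k) for k in range(1, M + 1))
print(combos, log(combos, 2), log(combos, 36))    # ~2.5e20, ~68 bits, ~13 base36 chars

n, bits = 36500, 31                               # groups over 10 years; 6 base36 chars ~ 31 bits
print(1 - exp(-n * n / (2.0 * 2 ** bits)))        # ~0.27 birthday-collision probability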
49,051,248 | 2018-03-01T13:56:00.000 | 1 | 1 | 0 | 0 | c++,jenkins,tap,allure,python-behave | 49,302,946 | 1 | false | 1 | 0 | As far as I know, no Allure integration for C++ test frameworks exists yet. But Allure supports the JUnit-style XML report format, which is the de facto standard format for reporting test results. So if your framework can generate results in that format you can generate an Allure report for it. | 1 | 1 | 0 | I'm starting to use Allure with Python Behave for high-level BDD testing of a medium-size C++ ecosystem of services.
What I get is a web page inside Jenkins with pretty and clear reports, thanks to the Allure-Jenkins plugin.
I have also some unit tests made with TAP, shown in Jenkins with another plugin.
What I would like to get is the integration of the unit test reports inside the same Allure page
Unfortunately, I was not able to find a C++ Unit Testing Framework directly supporting Allure reports: does any exist?
Otherwise, how could I get this integration?
Thank you! | C++ Unit Test Framework with Allure Report | 0.197375 | 0 | 0 | 435 |
49,051,407 | 2018-03-01T14:05:00.000 | 3 | 0 | 1 | 0 | python,itertools | 49,051,527 | 2 | true | 0 | 0 | I presume it stands for "iterable slice", since it takes the same arguments as the slice built-in but generates a sequence of results rather than returning a list.
You may be suffering from some slight misunderstanding of "infinitive," which is a part of speech (in English, "to fall" is the infinitive of the verb "fall"). You perhaps mean "infinite," which is never-ending or uncountable.
If so, you have correctly observed that one advantage of the functions in itertools is that they can be applied to infinite sequences. This is because they return iterators that yield results on demand, rather than functions that return lists. | 2 | 2 | 0 | I'm new in Python and I'm not an English native speaker. Today I learned some functions in the itertools module. There is a function called islice. Does it stand for infinitive slice? As I understand it can be used to slice infinitive sequence of objects and is commonly used with itertools.count(). | Want to confirm the meaning of the name islice in Python itertools.islice | 1.2 | 0 | 0 | 282 |
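A quick illustration of both points (same arguments as the slice built-in, and it works on infinite iterators):

from itertools import count, islice

print(list(islice(count(10), 5)))     # [10, 11, 12, 13, 14] -- slicing an infinite iterator
print(list(islice("abcdef", 2, 5)))   # ['c', 'd', 'e'] -- start/stop like the slice built-in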
49,051,407 | 2018-03-01T14:05:00.000 | 1 | 0 | 1 | 0 | python,itertools | 49,052,416 | 2 | false | 0 | 0 | slice is a built-in class. The prefix 'i' for 'iterator' is added to avoid confusion and a name clash if one does from itertools import *.
In Python 2, itertools also had imap and ifilter, to avoid clashing with the old versions of map and filter. In Python 3, imap and ifilter became the new versions of map and filter and were hence removed from itertools. | 2 | 2 | 0 | I'm new in Python and I'm not an English native speaker. Today I learned some functions in the itertools module. There is a function called islice. Does it stand for infinitive slice? As I understand it can be used to slice infinitive sequence of objects and is commonly used with itertools.count(). | Want to confirm the meaning of the name islice in Python itertools.islice | 0.099668 | 0 | 0 | 282 |
49,054,134 | 2018-03-01T16:19:00.000 | 0 | 0 | 1 | 1 | python,file,copy | 49,054,587 | 1 | false | 0 | 0 | In general, you can't.
Because you don't have the information needed to solve the problem.
If you have to know that a file was completely transferred/created/written/whatever successfully, the creator has to send you a signal somehow, because only the creator has that information. From the receiving side, there's in general no way to infer that a file has been completely transferred. You can try to guess, but that's all it is. You can't in general tell a complete transfer from one where the connection was lost, for example.
So you need a signal of some sort from the sender.
One common way is to use a rename operation from something like filename.xfr to filename, or from an "in work" directory to the one you're watching. Since most operating systems implement such rename operations atomically, if the sender only does the rename when the transfer is successfully done, you'll only process complete files that have been successfully transferred.
Another common signal is to send a "done" flag file, such as sending filename.done once filename has been successfully sent.
Since you don't control the sender, you can't reliably solve this problem by watching for files. | 1 | 1 | 0 | An application A (out of my control) writes a file into a directory.
After the file is written I want to back it up somewhere else with a python script of mine.
Question: how may I be sure that the file is completed or that instead the application A is still writing the file so that I should wait until its completion? I am worried I could copy a partial file....
I wanted to use this function shutil.copyfile(src,dst) but I don't know if it is safe or I should check the file to copy in some other way. | How to make sure a file is completed before copying it? | 0 | 0 | 0 | 245 |
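A minimal sketch of the "done flag" convention described in the answer (the folder names are illustrative; it assumes the sender drops filename.done once filename is fully written):

import os
import shutil

INBOX, BACKUP = "incoming", "backup"

for name in os.listdir(INBOX):
    if not name.endswith(".done"):
        continue
    data_file = os.path.join(INBOX, name[: -len(".done")])
    if os.path.exists(data_file):
        shutil.copyfile(data_file, os.path.join(BACKUP, os.path.basename(data_file)))
        os.remove(os.path.join(INBOX, name))   # consume the flag so we copy only once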
49,054,734 | 2018-03-01T16:48:00.000 | 0 | 0 | 0 | 1 | python,sdn,mininet,openflow,pox | 50,180,831 | 2 | false | 0 | 0 | To log the flows which get flushed, you can use the OFPFF_SEND_FLOW_REM flag, which can be set by the controller while setting up the flow. According to the OpenFlow specification:
When a flow entry is removed, either by the controller or the flow expiry mechanism, the switch must check the flow entry’s OFPFF_SEND_FLOW_REM
flag. If this flag is set, the switch must send a flow removed message to the controller. Each flow removed message contains a complete description of the flow entry, the reason for removal (expiry or delete), the flow entry duration at the time of removal, and the flow statistics at the time of removal.
I am not sure about the exact implementation in POX, but this when combined with ovs-ofctl dump-flows may be a good approach | 1 | 2 | 0 | In my understanding, dpctl dump-flows command only helps to view current state of flow table. Flow table gets flushed often. I want to record the flow table entries.
Which class do I need to look into to record flow table? I am using POX controller and mininet on Ubuntu installed in Virtual Box. | How to log all flow table entries periodically in mininet by Python code? | 0 | 0 | 0 | 1,682 |
49,057,888 | 2018-03-01T20:07:00.000 | 0 | 0 | 1 | 1 | python,python-3.x,python-2.7,pip,python-3.6 | 54,014,883 | 4 | false | 0 | 0 | This is how I solved this issue on my end: (short answer, remove this folder C:\Python27)
Problem: I installed python 3, after uninstalling python 2.7. The issue here is that pip remains behind even after you uninstall python 2.7.
Solution:
1. Uninstall python 3 (if you have it installed).
2. Just in case, I would uninstall python 2.7.
3. This is the key: go to C:\Python27 ... and delete the entire directory (which also contains pip).
This solution is good for those that are fine with ONLY running Python 3 on their machines (which was my case). | 2 | 2 | 0 | I've seen many threads about this, and have tried all options except for completely wiping Python off of my machine and re-downloading everything...
I'm using a Windows 10, 64-bit machine, and had already downloaded Python2.7. Commands like 'C:\>pip install seaborn' were not an issue.
I recently downloaded Python3.6, and now my pip will not work - it returns the error in the title.
I have added C:\Python27, C:\Python36, C:\Python27\Scripts, C:\Python36\Scripts to my Path, and still it won't work.
If I type in the command C:\>python27 -m pip install seaborn, however, the pip works. I am really confused why I can no longer just type in pip install and have it work.
Thanks in advance! | Pip error: Fatal error in launcher: Unable to create process using '"' | 0 | 0 | 0 | 15,606 |
49,057,888 | 2018-03-01T20:07:00.000 | 2 | 0 | 1 | 1 | python,python-3.x,python-2.7,pip,python-3.6 | 49,059,305 | 4 | false | 0 | 0 | Okay so I finally worked it out...
I uninstalled Python3.6 and deleted all relevant folders.
I then went to Control Panel > Programs > Programs and Features and repaired my Python 2.7 installation. pip works now (I think it got messed up since I tried to rename the universal pip.exe file -> don't do that!!).
After re-downloading Python3.6, I put my universal pip.exe download from Python3 in a different directory so the Path would not get it confused. I now have Paths for both pip2 and pip3 and all is okay.
Thanks for your help! | 2 | 2 | 0 | I've seen many threads about this, and have tried all options except for completely wiping Python off of my machine and re-downloading everything...
I'm using a Windows 10, 64-bit machine, and had already downloaded Python2.7. Commands like 'C:\>pip install seaborn' were not an issue.
I recently downloaded Python3.6, and now my pip will not work - it returns the error in the title.
I have added C:\Python27, C:\Python36, C:\Python27\Scripts, C:\Python36\Scripts to my Path, and still it won't work.
If I type in the command C:\>python27 -m pip install seaborn, however, the pip works. I am really confused why I can no longer just type in pip install and have it work.
Thanks in advance! | Pip error: Fatal error in launcher: Unable to create process using '"' | 0.099668 | 0 | 0 | 15,606 |
49,058,060 | 2018-03-01T20:20:00.000 | 2 | 0 | 0 | 0 | python,selenium,contextmenu | 49,062,808 | 1 | true | 0 | 0 | Selenium cannot see or interact with native context menus.
I recommend testing this in a JavaScript unit test, where you can assert that event.preventDefault() was called. It's arguably too simple/minor of a behavior to justify the expense of a Selenium test anyway. | 1 | 0 | 0 | I'm just starting with Selenium in python, and I have set up an ActionChains object and perform()ed a context click. How do I tell whether a context menu of any sort has actually popped up? For example, can I use the return value in some way?
The reason is that I want to disable the context menu in some cases, and want to test if this has actually been done. | Selenium: how to check if context menu has appeared | 1.2 | 0 | 1 | 155 |