Dataset columns (name: dtype, value range or classes):
Q_Id: int64, 337 to 49.3M
CreationDate: stringlengths, 23 to 23
Users Score: int64, -42 to 1.15k
Other: int64, 0 to 1
Python Basics and Environment: int64, 0 to 1
System Administration and DevOps: int64, 0 to 1
Tags: stringlengths, 6 to 105
A_Id: int64, 518 to 72.5M
AnswerCount: int64, 1 to 64
is_accepted: bool, 2 classes
Web Development: int64, 0 to 1
GUI and Desktop Applications: int64, 0 to 1
Answer: stringlengths, 6 to 11.6k
Available Count: int64, 1 to 31
Q_Score: int64, 0 to 6.79k
Data Science and Machine Learning: int64, 0 to 1
Question: stringlengths, 15 to 29k
Title: stringlengths, 11 to 150
Score: float64, -1 to 1.2
Database and SQL: int64, 0 to 1
Networking and APIs: int64, 0 to 1
ViewCount: int64, 8 to 6.81M
49,200,518
2018-03-09T19:05:00.000
0
0
0
0
python,r,pandas,machine-learning,data-science
49,200,765
1
true
0
0
You should delete such columns because they provide no information about how one data point differs from another. It is fine to leave the column in for some machine learning models (due to the nature of how the algorithms work), like random forest, because such a column will simply never be selected to split the data. To spot these columns, especially for categorical or nominal variables (with a fixed number of possible values), count the occurrences of each unique value; if the mode covers more than a certain threshold of rows (say 95%), drop that column from your model. Personally, if there aren't too many variables, I go through them one by one so that I fully understand each variable in the model, but the systematic approach above works when the feature set is too large for that.
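A minimal pandas sketch of the threshold check described above, assuming a DataFrame named df (the data and the 95% cutoff are only illustrative):

```python
import pandas as pd

# Hypothetical example data; substitute your own DataFrame.
df = pd.DataFrame({"x": [1] * 50, "y": range(50)})

# Columns with a single unique value carry no information at all.
constant_cols = [c for c in df.columns if df[c].nunique(dropna=False) <= 1]

# Near-constant columns: the most frequent value covers at least 95% of rows.
threshold = 0.95
near_constant_cols = [
    c for c in df.columns
    if df[c].value_counts(normalize=True, dropna=False).iloc[0] >= threshold
]

df = df.drop(columns=constant_cols)
```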
1
0
1
For instance, column x has 50 values and all of these values are the same. Is it a good idea to delete variables like these for building machine learning models? If so, how can I spot these variables in a large data set? I guess a formula/function might be required to do so. I am thinking of using nunique that can take account of the whole dataset.
Should I drop a variable that has the same value in the whole column for building machine learning models?
1.2
0
0
474
49,201,628
2018-03-09T20:25:00.000
0
0
0
1
docker,python-import,docker-volume
49,201,697
1
false
0
0
It looks like there is an issue with your volume mapping. The volume mapping syntax has the format "-v {local volume}:{directory inside container}", so you would have to create that particular directory in your image before mapping it.
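Once the volume is mounted correctly, a short sketch of making the mounted directory importable from inside the container (my_module is a hypothetical name for the .py file stored in the volume):

```python
import sys

# Container started with: docker run -v my_volume:/Volumes/docker-volume my_image
sys.path.append("/Volumes/docker-volume")

import my_module  # hypothetical module name; matches the .py file inside the volume
```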
1
1
0
I have a docker volume that I created by running volume create my_volume and have been running my docker image with the command docker run -v my_volume:/Volumes/docker-volume/ my_image. Within the docker-volume directory I have a python file that I would like to import, but I can't figure out how to do so. Everything I've tried results in a ModuleNotFound error. It feels like there's some fundamental issue, perhaps relating to how a docker image interacts with a volume, that I'm missing. Any help would be greatly appreciated!
import python file from docker volume
0
0
0
700
49,203,197
2018-03-09T22:34:00.000
0
0
1
0
python,macos,32-bit,canopy
49,203,609
1
false
0
0
Sorry, Canopy on Mac has not provided 32-bit Python since January 2015. If you've got a really old (32-bit) version of OSX, then you're out of luck. Otherwise (you've got a recent OSX but just want to run 32-bit Python for some reason), I'm not clear from your question whether you already have a 32-bit Python / IPython available. If so, then from a terminal where that is your default Python, you can start a 32-bit kernel with ipython kernel. If that version of IPython is not too old (sorry, not sure exactly what that means), you should then be able to connect to that kernel from Canopy's Run menu ("Connect to existing kernel"). Not super convenient, as you'd need to redo both steps every time you wanted to do this.
1
0
0
I've been having trouble using a 32-bit Python for Canopy on Mac. I don't know how to import an external version of Python. I've tried various sites, but they are all from 2013-14 and just say to download the v1 release with 32-bit Python. I want any 32-bit version of Python to work with Canopy. I hope someone knows how, thanks.
Canopy python 32 bit Mac os x
0
0
0
38
49,203,567
2018-03-09T23:15:00.000
0
0
1
1
python,windows,python-idle
56,514,353
2
false
0
0
I work with over 30 Python developers and without fail when this happens they were behind a proxy / vpn. Turn off your proxy / vpn and it will work. Must have had this happen hundreds of times and this solution always worked.
1
0
0
I am new to Python and recently installed Python 3.6 on Windows 10. When I try to open IDLE, Python's IDE, I keep getting a message saying that it can not establish a subprocess. I have tried uninstalling and installing several times. I have seen several forums which say that there could be a .py file that is in the directory that is messing up IDLE. This is not my case, as I have not even been able to start using Python and I do not have a firewall either. Can someone tell me how I can get IDLE to work?
Windows 10: IDLE can't establish a subprocess
0
0
0
1,726
49,204,190
2018-03-10T00:42:00.000
8
0
1
0
python,python-2.7,mongodb-query,pymongo
49,204,372
1
true
0
0
Should've realized this far sooner: once I added the $or it needs to be in quotes. So this works: dataout = releasescollection.find( { "$or": [{"l_title":{"$regex": "i walk the line", "$options": "-i"}}, {"artistJoins.0.artist_name":{"$regex": "Johnny Cash", "$options": "-i"}}]}).sort('id', pymongo.ASCENDING).limit(25)
1
1
0
I can't seem to get this to work with pymongo it was working before I added the $or option. Am I missing something obvious with this dataout = releasescollection.find( { $or: [{"l_title":{"$regex": "i walk the line", "$options": "-i"}}, {"artistJoins.0.artist_name":{"$regex": "Johnny Cash", "$options": "-i"}}]}).sort('id', pymongo.ASCENDING).limit(25) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "discogs.py", line 51 dataout = releasescollection.find( { $or: [{"l_title":{"$regex": "i walk the line", "$options": "-i"}}, {"artistJoins.0.artist_name":{"$regex": "Johnny Cash", "$options": "-i"}}]}) ^ SyntaxError: invalid syntax Running the below directly in mongo works but I'm missing something in the switchover to python db.releases.find( { $or: [{"l_title":{"$regex": "i walk the line", "$options": "-i"}}, {"artistJoins.0.artist_name":{"$regex": "Johnny Cash", "$options": "-i"}}]}).sort({'id':1}).limit(25)
PYTHON - PYMONGO - Invalid Syntax with $or
1.2
1
0
1,427
49,206,319
2018-03-10T07:01:00.000
2
0
0
0
python,django
49,206,357
2
true
1
0
The simplest option would be a view function (i.e. a function linked to a URL that receives a GET or POST request) in your app which does the scraping and immediately returns the results by rendering a template. For example you could have a starting page with a form and when that form is submitted that will create a POST request which will contain details that the view can use to decide which page to scrape and so on. This doesn't require Javascript or database models. If you're not comfortable with Django yet, consider starting with Flask instead as it's simpler to get going.
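A minimal sketch of such a view, assuming a template called scrape.html and a helper run_scraper that wraps the existing Selenium script (both names are placeholders):

```python
# views.py
from django.shortcuts import render

def run_scraper(url):
    """Placeholder for the existing Selenium script; should return a list of scraped rows."""
    return [f"scraped data from {url}"]

def scrape_view(request):
    results = []
    if request.method == "POST":
        url = request.POST.get("url", "")
        results = run_scraper(url)
    return render(request, "scrape.html", {"results": results})
```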
2
0
0
As an exercise, I came up with an idea of the following Django project: a web app with literally one button to scrape room data from Airbnb and one text area to display the retrieved data in a sorted manner. Preferably, for scraping I would like to use Selenium, as there is no API for this page. So the button would somehow need to launch the browser automation. So question number one is: is it possible to launch selenium from a web app? Furthermore, I already have the working script for collecting the data, however I dont't know how to fit it in a Django project: models, views, separate script? My initial idea was to launch the scraping script on button click, then dump retrieved room-related data to database (updating model's Room attributes like "price" and "link" for example) and display the data back in the text area mentioned before. So question two is: is it possbile to launch Python script in a web app on button click, for example by nesting in a Django template? Or would other technologies be required, such as Javascript? I know my question is general, but I am also looking for general advice, not a ready code sample. I am also open to other approach if what I just wrote doesn't make any sense.
How to scrape data from inside Django app
1.2
0
0
740
49,206,319
2018-03-10T07:01:00.000
2
0
0
0
python,django
50,174,783
2
false
1
0
Django follows MVT, i.e. Model (the part where you write things related to the database), View (the logic, analogous to the controller in Java frameworks) and Template (the things you'll actually see). As suggested by Alex, you can collect some inputs on your home page and use that data to scrape the desired pages. Coming to your next question: yes, you can launch the script on a button click, and basic working knowledge of JS would help. This is a very general answer, about as general as the question, so please feel free to ask more specific questions if needed.
2
0
0
As an exercise, I came up with an idea of the following Django project: a web app with literally one button to scrape room data from Airbnb and one text area to display the retrieved data in a sorted manner. Preferably, for scraping I would like to use Selenium, as there is no API for this page. So the button would somehow need to launch the browser automation. So question number one is: is it possible to launch selenium from a web app? Furthermore, I already have the working script for collecting the data, however I dont't know how to fit it in a Django project: models, views, separate script? My initial idea was to launch the scraping script on button click, then dump retrieved room-related data to database (updating model's Room attributes like "price" and "link" for example) and display the data back in the text area mentioned before. So question two is: is it possbile to launch Python script in a web app on button click, for example by nesting in a Django template? Or would other technologies be required, such as Javascript? I know my question is general, but I am also looking for general advice, not a ready code sample. I am also open to other approach if what I just wrote doesn't make any sense.
How to scrape data from inside Django app
0.197375
0
0
740
49,206,488
2018-03-10T07:25:00.000
2
0
0
0
python,tensorflow,google-data-api,google-colaboratory
55,458,337
4
true
0
0
Thanks, guys, for your answers. Google Colab has quickly grown into a more mature development environment, and my most favorite feature is the 'Files' tab. We can easily upload the model to the folder we want and access it as if it were on a local machine. This solves the issue. Thanks.
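An alternative route, if the file stays in Google Drive rather than being uploaded through the Files tab, is to mount Drive from the notebook; a sketch assuming the folder layout described in the question:

```python
from google.colab import drive
import pickle

drive.mount("/content/drive")

# Path assumes a "Data" folder at the top level of My Drive, as described above.
with open("/content/drive/My Drive/Data/notMNIST.pickle", "rb") as f:
    data = pickle.load(f)
```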
1
10
1
I am fairly new to using Google's Colab as my go-to tool for ML. In my experiments, I have to use the 'notMNIST' dataset, and I have set the 'notMNIST' data as notMNIST.pickle in my Google Drive under a folder called as Data. Having said this, I want to access this '.pickle' file in my Google Colab so that I can use this data. Is there a way I can access it? I have read the documentation and some questions on StackOverflow, but they speak about Uploading, Downloading files and/or dealing with 'Sheets'. However, what I want is to load the notMNIST.pickle file in the environment and use it for further processing. Any help will be appreciated. Thanks !
Accessing '.pickle' file in Google Colab
1.2
0
0
27,735
49,207,112
2018-03-10T08:41:00.000
3
0
1
0
python,intel,amd
49,372,377
1
true
0
0
Are you asking about compatibility or performance? Both AMD and Intel market CPU products compatible with x86(_64) architecture and are functionally compatible with all software written for it. That is, they will run it with high probability (there always may be issues when changing hardware, even while staying with the same vendor, as there are too many variables to account). Both Intel and AMD offer a huge number of products with widely varying level of marketed performance. Performance of any application is determined not only by a chosen vendor of a central processor, but by a huge number of other factors, such as amount and speed of memory, disk, and not the least the architecture of the application itself. In the end, it is only real-world measurements that decide, but some estimations can be made by looking at relevant benchmarks and understanding underlying principles of computer performance.
1
4
1
I do a lot of coding in Python (Anaconda install v. 3.6). I don't compile anything, I just run machine learning models (mainly sci-kit and tensor flow) Are there any issues with running these on an workstation with AMD chipset? I've only used Intel before and want to make sure I don't buy wrong. If it matters it is the AMD Ryzen 7-1700 processor.
Does Intel vs. AMD matter for running python?
1.2
0
0
11,208
49,213,383
2018-03-10T19:52:00.000
1
0
1
0
python,levenshtein-distance
49,213,542
1
true
0
0
Levenshtein algorithm ("edit distance") doesn't allow different distances between characters, but there's a generalization - the Needleman-Wunsch algorithm - that does. I'm not aware of a Python implementation, but would recommend to look for one before implementing your own - it's possible but non-trivial.
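A rough sketch of a custom-cost edit distance (not a full Needleman-Wunsch implementation) in which the substitution cost is supplied as a function; the 0.1 weight for the b/ḃ pair is purely illustrative:

```python
def weighted_edit_distance(a, b, sub_cost, indel_cost=1.0):
    """Dynamic-programming edit distance with a caller-supplied substitution cost."""
    m, n = len(a), len(b)
    d = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        d[i][0] = i * indel_cost
    for j in range(1, n + 1):
        d[0][j] = j * indel_cost
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            d[i][j] = min(
                d[i - 1][j] + indel_cost,                       # deletion
                d[i][j - 1] + indel_cost,                       # insertion
                d[i - 1][j - 1] + sub_cost(a[i - 1], b[j - 1])  # substitution
            )
    return d[m][n]

def irish_sub_cost(x, y):
    if x == y:
        return 0.0
    if {x, y} <= {"b", "ḃ"}:   # orthographic variants of lenited "b"; weight is illustrative
        return 0.1
    return 1.0

print(weighted_edit_distance("ḃí", "bí", irish_sub_cost))  # small distance
print(weighted_edit_distance("xí", "bí", irish_sub_cost))  # full substitution cost
```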
1
1
0
I'm using the python-levenshtein module to analyse Irish language text over a large period of time; over time there are a number of orthographic changes to text e.g. bí -> ḃí -> bhí, the diacritic over the 'b' and the 'h' following the b both represent the same grammatical form of lenition (which is unshown in the first period). Between all these forms I would want a fairly low distance, but using the python-levenshtein distance as it is gives the same distance between Levenshtein.ratio(u'ḃí', u'bí') = 0.5 and Levenshtein.ratio(u'xí', u'bí') = 0.5, obviously a minor orthographic change to the character 'b' and it's outright substitution with 'x' (a foreign borrowing to boot) shouldn't have the same score. So is there a way to modify the values of specific characacter changes e.g. reduce the distance of bí to ḃí but up the distance between bí and xí? Or will I need to produce my own implementation?
Customising python-levenshtein character values
1.2
0
0
52
49,213,647
2018-03-10T20:22:00.000
26
0
0
0
python,django,wsgi
49,214,051
1
false
1
0
Django handles just one request at a time. If you use the very old CGI interface (between your web server and Django), a new Django process is started for every request, but I think nobody does this. There are several other interfaces that avoid loading a new server-side program on every request. FastCGI is one of these (and is agnostic to the programming language); some programs have their own module implemented directly in the web server (e.g. mod-php) [Python had this in the past]. But nowadays Django, and Python in general, prefer the WSGI interface. The web server opens one or more processes (running the Django app) in parallel and sends each request to a free process (or queues requests; this is handled by the web server). How many processes, and for how long, depends on the web server configuration. The databases supported by Django support concurrency, so there is no problem with different processes handling the same app. [SQLite is different, but you should use it only for developing/testing Django.] Writing to log files [usually multiline] can show some problems (parallel processes writing to the same file at the same time). NOTE: in this explanation I use "web server" in a broad sense; this includes gunicorn, mod-wsgi, etc.
1
34
0
How does Django handles multiple requests in production environment? Suppose we have one of web server: Apache, Nginx, gunicorn etc. So do those servers for any request from web browser start new process to serve that request? If it's true, doesn't it cause huge overhead? If it's not true, then how the same view (let it be def hello(request) view bound to /hello url) serve several requests at the same time. I've seen answers for question "... handle multiple users"
How does Django handle multiple requests?
1
0
0
21,206
49,214,989
2018-03-10T23:14:00.000
8
0
0
1
python,macos,io
49,215,074
2
false
0
0
External drives can be found under /Volumes on macOS. If you provide the full path and have read access you should be able to read in your csv.
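For example, with pandas (the drive name "MyDrive" and the file name are placeholders):

```python
import pandas as pd

# External drives appear under /Volumes/<drive name> on macOS.
df = pd.read_csv("/Volumes/MyDrive/data.csv")
print(df.head())
```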
1
6
0
I have a Python file in /Users/homedir/... and I want it to access a csv file on an external hard drive. Does anyone know how to do this? I only need reading permission.
Access file in external hard drive using python on mac
1
0
0
7,016
49,217,448
2018-03-11T06:49:00.000
0
0
0
0
python,sql-server,python-2.7,pycharm,pymssql
49,221,473
2
false
0
0
I needed to change the way I'm connecting to the database - instead of pymssql use pypyodbc.
1
0
0
I'm trying to select data from an MSSQL table using Python (I'm using PyCharm). One of the fields contains Arabic letters, but the result of the select is '???????' instead of the Arabic letters. How do I get the Arabic words correctly? I'm using pymssql. I'm creating a connection and a cursor, and then running "cursor.execute(command)". The command is: "Select * from Table where Field = XXX". It returns a result, just not in the right encoding. By the way, in the table the Arabic words are stored correctly. I tried printing the data to the console and writing it to a file; both failed (returned '????'). I've also added "# -- coding: utf-8 --" at the beginning of the file, so it can handle the non-ASCII letters. Any idea? Thanks
Selecting non-ascii words from mssql table using python
0
1
0
139
49,218,390
2018-03-11T09:09:00.000
1
0
1
0
python,python-3.x,cryptography,python-3.5
61,338,036
5
false
0
0
In Python 3.x use pip install secret instead
1
11
0
I am trying to use the library secrets on Python 3.5 on Ubuntu 16.04. It does not come with the python installation and I am not able to install it through pip. Is there a way to get it to work on python 3.5?
Unable to install 'secrets' on python 3.5 (pip, ubuntu 3.5)
0.039979
0
0
16,037
49,218,802
2018-03-11T10:05:00.000
2
0
0
0
python,django,django-rest-framework
49,223,577
1
false
1
0
Serializers are concerned with translating information to/from different formats for a model (text/json, etc.), and so the validation is in reference to this. Model validation is a lower-level check, where the creation/modification of a db model is done. I always have model validation, even if I have serialization validation.
1
4
0
The field validation process can happen in 'Django Model level field declaration' or in 'Deserialization of data on DRF serialization section'. I have the following concerns regarding this validation process: What is the separation of concerns? Which validation section should be placed where? How the DRF serialization section restricts manual database entry with the validation?
Django Model field validation vs DRF Serializer field validation
0.379949
0
0
735
49,220,886
2018-03-11T14:06:00.000
2
0
1
0
python,methods,syntax,self
49,220,922
1
true
0
0
Hardcoding the class name basically prevents you from using polymorphism. This is general OOP, not particularly a Python feature. Your calling code should not need to know, nor care, which exact class object is. This is immediately a problem for code where object can be a member of either Baseclass or Derivedclass, but much more complex inheritance and method overriding scenarios are possible, and sometimes necessary.
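A small illustration of the difference, using the Baseclass/Derivedclass names from above:

```python
class Baseclass:
    def greet(self):
        return "base"

class Derivedclass(Baseclass):
    def greet(self):
        return "derived"

obj = Derivedclass()
print(obj.greet())           # "derived": the call dispatches to the object's actual class
print(Baseclass.greet(obj))  # "base": hardcoding the class name skips the override
```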
1
0
0
Is there a reason we call methods in python like object.method instead of Class.method(object)? Maybe it isn't a strange choice, but personally it made understanding the self parameter much easier when I was shown the second way of calling a method.
Python method call syntax shorthand
1.2
0
0
110
49,221,084
2018-03-11T14:28:00.000
0
0
0
1
python,docker,dockerfile,pypi,dockerhub
49,221,219
1
false
0
0
After a successful build save cache directories somewhere outside docker in persistent storage. Restore the cache in every new container.
1
0
0
I've got a few Docker-based projects that all depend on each other: project 1 depends on python:3-alpine project 2 depends on project 1 project 3 depends on project 1 etc. etc. As such, all of my automated builds are linked as above. When I update project 2, projects 3 4 5 are all automatically rebuilt. It's a pretty slick feature. The thing is, this means that whenever a lower-level project is updated, this triggers lots of rebuilds of lots of projects. In the case of something very low-level being updated, like python or node, I can imagine that Docker Hub is triggering a lot of rebuilds. My question then is: doesn't this put a lot of load on package hosts like PyPI? As each of my projects include a line something like: RUN pip install -r requirements.txt This hits PyPI to pull down all of the requirements, re-downloading each time without the ability to use a local cache because the Docker container is "brand new" for each build. Is there something I can do to lessen the impact my projects are making, or is this somehow fixed with "magic" on Docker Hub? In the absence of such magic, is there a Best Practise I should be following?
Does Docker Hub put a big strain on package hosts?
0
0
0
23
49,221,999
2018-03-11T15:59:00.000
0
0
0
0
python,django,heroku,redis
49,222,038
1
false
1
0
Well, Redis isn't going to be running on localhost on Heroku. You don't say how you have set up Redis, but presumably you are using one of the add-ons available through Heroku. These usually expose their configuration through an environment variable, which you would then use in your settings.py exactly as you do for the database settings. The add-on documentation itself will tell you what variable to use.
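A sketch of reading the connection details from the environment in settings.py, assuming the add-on exposes a REDIS_URL variable (a common convention, but check your add-on's documentation):

```python
# settings.py
import os
from urllib.parse import urlparse

redis_url = urlparse(os.environ.get("REDIS_URL", "redis://localhost:6379/3"))

REDIS_HOST = redis_url.hostname
REDIS_PORT = redis_url.port or 6379
REDIS_DB = int((redis_url.path or "/3").lstrip("/") or 3)
```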
1
1
0
i have a django project which was uploaded to heroku. In my django project I used redis also to store some data. the application works on heroku but It happens that when ever I click a link, I get the error Server Error (500) I do not know the cause of the error but here is my redis setting that I use on local and development server. #REDIS FOR VIEWS REDIS_HOST = 'localhost' REDIS_PORT = 6379 REDIS_DB = 3 further codes would be provided on request
django heroku server error
0
0
0
91
49,222,299
2018-03-11T16:30:00.000
0
0
1
0
python,pandas,dataframe,jupyter-notebook,powerpoint
50,358,133
1
false
0
0
One way seems to be to copy the styled pandas table from jupyter notebook to excel. It will keep a lot of the formatting. Then you can copy it to powerpoint and it will maintain its style.
1
1
1
I am trying to copy styled pandas dataframes from Jupyter Notebooks to powerpoint without loss of formatting. I currently just take a screenshot to preserve formatting, but this is not ideal. Does anyone know of a better way? I search for an extension that maybe has a screenshot button, but no luck.
How can I copy styled pandas dataframes from Jupyter Notebooks to powerpoint without loss of formatting
0
0
0
1,613
49,224,477
2018-03-11T19:59:00.000
3
0
0
0
python,tkinter,pyqtgraph
49,224,491
1
true
0
1
No, you cannot embed a PyQtGraph Object inside a tkinter application.
1
3
0
I have a Tkinter GUI I've been working on for some time that has a live-plotting feature currently built in matplotlib. However, I'm finding matplotlib to be too slow (it seems to build up a lag over time, as if filling up a buffer of frames of incoming data), so I'm thinking of switching my plotting to PyQtGraph. Can I put this into my Tkinter app?
Embedding a PyQtGraph into a Tkinter GUI
1.2
0
0
1,645
49,227,490
2018-03-12T02:48:00.000
2
0
0
0
python,python-3.x,pandas,neural-network,decision-tree
49,227,672
1
false
0
0
Yes, in my opinion, encoding yes/no to 1/0 would be the right approach for you. Python's sklearn requires features in numerical arrays. There are various ways of encoding : Label Encoder; One Hot Encoder. etc However, since your variable only has 2 levels of categories, it wouldnt make much difference if you go for LabelEncoder or OneHotEncoder.
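A small sketch of both options on a toy column (the column name is made up):

```python
import pandas as pd
from sklearn.preprocessing import LabelEncoder

df = pd.DataFrame({"smoker": ["yes", "no", "yes"]})

# Explicit mapping keeps the meaning of 0/1 under your control...
df["smoker_mapped"] = df["smoker"].map({"no": 0, "yes": 1})

# ...while LabelEncoder assigns integers to the sorted labels ("no" -> 0, "yes" -> 1 here).
df["smoker_encoded"] = LabelEncoder().fit_transform(df["smoker"])
print(df)
```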
1
1
1
My dataset has few features with yes/no (categorical data). Few of the machine learning algorithms that I am using, in python, do not handle categorical data directly. I know how to convert yes/no, to 0/1, but my question is - Is this a right approach to go about it? Can these values of no/yes to 0/1, be misinterpreted by algorithms ? The algorithms I am planning to use for my dataset are - Decision Trees (DT), Random Forests (RF) and Neural Networks (NN).
Categorical Data yes/no to 0/1 python - is it a right approach?
0.379949
0
0
1,491
49,228,214
2018-03-12T04:33:00.000
4
1
1
1
python,linux,python-3.x,virtualenv,virtualenvwrapper
49,228,268
3
true
0
0
TL;DR: 1. no 2. yes 3. no creating a new linux user account for each deamon/script I'm working on, so that both the python virtual environment, and the python project code area can live under directories owned by this user? No. Unnecessary complexity and no real benefit to create many user accounts for this. Note that one user can be logged in multiple sessions and running multiple processes. perhaps just create one new non-administrator account at the beginning, and then just use this account for each project/virtual environment Yes, and use sudo from the non-admin account if/when you need to escalate privilege. create everything under the initial admin user I first log with for raspbian (e.g. "pi" user) - Assume NO for this option, but putting it in for completeness. No. Better to create a regular user, not run everything as root. Using a non-root administrator account would be OK, though.
3
2
0
I'm starting with Python 3, using Raspbian (from Debian), and using virtualenv. I understand how to create/use a virtualenv to "sandbox" different Python project, HOWEVER I'm a bit unclear on whether one should be setting up a different linux user for each project (assuming that the project/virtualenv will be used to create & then run a daemon process on the linux box). So when creating separate python environments the question I think is should I be: creating a new linux user account for each deamon/acript I'm working on, so that both the python virtual environment, and the python project code area can live under directories owned by this user? perhaps just create one new non-administrator account at the beginning, and then just use this account for each project/virtual environmnet create everything under the initial admin user I first log with for raspbian (e.g. "pi" user) - Assume NO for this option, but putting it in for completeness.
should new python virtualenv's be created with new linux user accounts?
1.2
0
0
124
49,228,214
2018-03-12T04:33:00.000
1
1
1
1
python,linux,python-3.x,virtualenv,virtualenvwrapper
49,228,732
3
false
0
0
In the general case, there is no need to create a separate account just for a virtualenv. There can be reasons to create a separate account, but they are distinct from, and to some extent anathema to, virtual environments. (If you have a dedicated account for a service, there is no need really to put it in a virtualenv -- you might want to if it has dependencies you want to be able to upgrade easily etc, but the account already provides a level of isolation similar to what a virtualenv provides within an account.) Reasons to use a virtual environment: Make it easy to run things with different requirements under the same account. Make it easy to install things for yourself without any privileges. Reasons to use a separate account: Fine-grained access control to privileged resources. Properly isolating the private resources of the account.
3
2
0
I'm starting with Python 3, using Raspbian (from Debian), and using virtualenv. I understand how to create/use a virtualenv to "sandbox" different Python project, HOWEVER I'm a bit unclear on whether one should be setting up a different linux user for each project (assuming that the project/virtualenv will be used to create & then run a daemon process on the linux box). So when creating separate python environments the question I think is should I be: creating a new linux user account for each deamon/acript I'm working on, so that both the python virtual environment, and the python project code area can live under directories owned by this user? perhaps just create one new non-administrator account at the beginning, and then just use this account for each project/virtual environmnet create everything under the initial admin user I first log with for raspbian (e.g. "pi" user) - Assume NO for this option, but putting it in for completeness.
should new python virtualenv's be created with new linux user accounts?
0.066568
0
0
124
49,228,214
2018-03-12T04:33:00.000
2
1
1
1
python,linux,python-3.x,virtualenv,virtualenvwrapper
49,228,490
3
false
0
0
It depends on what you're trying to achieve. From virtualenv's perspective you could do any of those. #1 makes sense to me if you have multiple services that are publicly accessible and want to isolate them. If you're running trusted code on an internal network, but don't want the dependencies clashing then #2 sounds reasonable. Given that the Pi is often used for a specific purpose (not a general purpose desktop say) and the default account goes largely unused, using that account would be fine. Make sure to change the default password.
3
2
0
I'm starting with Python 3, using Raspbian (from Debian), and using virtualenv. I understand how to create/use a virtualenv to "sandbox" different Python project, HOWEVER I'm a bit unclear on whether one should be setting up a different linux user for each project (assuming that the project/virtualenv will be used to create & then run a daemon process on the linux box). So when creating separate python environments the question I think is should I be: creating a new linux user account for each deamon/acript I'm working on, so that both the python virtual environment, and the python project code area can live under directories owned by this user? perhaps just create one new non-administrator account at the beginning, and then just use this account for each project/virtual environmnet create everything under the initial admin user I first log with for raspbian (e.g. "pi" user) - Assume NO for this option, but putting it in for completeness.
should new python virtualenv's be created with new linux user accounts?
0.132549
0
0
124
49,228,341
2018-03-12T04:53:00.000
0
0
1
0
python,anaconda,upgrade,conda
49,228,501
1
false
0
0
Open the Anaconda prompt and type conda upgrade spyder. It should ask you to upgrade conda first; follow the instructions, then retype the above command and, hey presto, you're up to date. :)
1
0
0
I have just downloaded/installed Anaconda again after a department store destroyed all my permissions when fixing a power cable, and I need to update Spyder to 3.2.7. However, the updates screen says not to use pip install as it will likely break my installation, since apparently I am using Anaconda/Miniconda (not an option I chose, but oh well). Anyway, it says to wait until new conda packages are available and update that way. I searched the system for conda and found nothing, so I tried conda install --upgrade spyder in the Anaconda prompt, which I think should have worked, but to no avail. Please excuse me, it's been a while.
Update Spyder 3.2.6 to 3.2.7
0
0
0
1,137
49,228,574
2018-03-12T05:21:00.000
4
0
0
0
python-3.x,heap
49,232,244
2
true
0
0
heappop pops out the first element, moves the last element into the first place, and then does a sinking operation, moving that element down through consecutive exchanges to restore the heap; this is O(log n). Then heappush places the new element in the last position and bubbles it up, which is like heappop in reverse: another O(log n). With heappushpop, the first element is popped out, but instead of moving the last element to the top, the new element is placed at the top and then sunk down, which is almost the same operation as heappop alone: just one O(log n) as above. So even though both variants are O(log n), it is easy to see that heappushpop is faster than heappop followed by heappush.
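A rough micro-benchmark sketch of the combined call versus the two separate calls (timings will vary by machine):

```python
import heapq
import random
import timeit

heap = [random.random() for _ in range(100_000)]
heapq.heapify(heap)

t_combined = timeit.timeit(lambda: heapq.heappushpop(heap, random.random()), number=100_000)
t_separate = timeit.timeit(
    lambda: (heapq.heappush(heap, random.random()), heapq.heappop(heap)),
    number=100_000,
)
print(f"heappushpop: {t_combined:.3f}s, heappush + heappop: {t_separate:.3f}s")
```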
2
2
0
In the docs for heapq, its written that heapq.heappushpop(heap, item) Push item on the heap, then pop and return the smallest item from the heap. The combined action runs more efficiently than heappush() followed by a separate call to heappop(). Why is it more efficient? Also is it considerably more efficient ?
How is heapq.heappushpop more efficient than heappop and heappush in python
1.2
0
1
963
49,228,574
2018-03-12T05:21:00.000
2
0
0
0
python-3.x,heap
57,665,038
2
false
0
0
heappushpop pushes an element and then pops the smallest element. If the element you're pushing is smaller than the heap's minimum, there's no need to do any operations at all, because we know that the element we're trying to push (which is smaller than the heap minimum) would be popped right back out if we did it in two operations. This is efficient, isn't it?
2
2
0
In the docs for heapq, its written that heapq.heappushpop(heap, item) Push item on the heap, then pop and return the smallest item from the heap. The combined action runs more efficiently than heappush() followed by a separate call to heappop(). Why is it more efficient? Also is it considerably more efficient ?
How is heapq.heappushpop more efficient than heappop and heappush in python
0.197375
0
1
963
49,229,610
2018-03-12T06:55:00.000
9
0
0
0
python,arrays,numpy,rounding
49,229,831
2
false
0
0
The main difference is that round is a ufunc of the ndarray class, while np.around is a module-level function. Functionally, both of them are equivalent as they do the same thing - evenly round floats to the nearest integer. ndarray.round calls around from within its source code.
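A quick check of the equivalence:

```python
import numpy as np

a = np.array([0.125, 2.675, 3.5])
print(np.around(a, 2))  # module-level function
print(np.round(a, 2))   # alias of np.around
print(a.round(2))       # ndarray method; all three print the same array
```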
1
16
1
So, I was searching for ways to round off all the numbers in a numpy array. I found 2 similar functions, numpy.round and numpy.around. Both take seemingly same arguments for a beginner like me. So what is the difference between these two in terms of: General difference Speed Accuracy Being used in practice
Difference between numpy.round and numpy.around
1
0
0
11,776
49,231,130
2018-03-12T08:44:00.000
0
0
0
1
python,flask,uwsgi
57,765,810
1
false
1
0
Worker load balancing is handled by the kernel and there's no way to force requests to hit a specific worker. You'll have to either move your cache to somewhere all workers can access (redis, mongo, sql db, etc) or have a process/thread running on your workers to refresh the cache (Celery, etc)
1
7
0
I'm running a python flask application on uswgi with 4 workers. The application has a cache that needs to be periodically refreshed and warmed up. I'd like to do this with an external job that hits a url but I need to ensure the cache is warmed up on all 4 workers. Is there a way to route a request to a particular worker? Ideally I'd just like to have a special header or query parameter that does this.
Routing a request to a particular uwsgi worker
0
0
0
899
49,231,322
2018-03-12T08:57:00.000
0
0
1
0
python,pip,packages,conda
49,231,482
1
false
0
0
Install conda. Create a new environment (conda create --name foobar python=3.x plus the list of packages). Use Anaconda to activate foobar (activate foobar). Check the pip location by typing 'where pip' in cmd, to be sure you use the pip from within the foobar environment and not the default Python installed on your system outside of your conda environment. Then use the pip from that location to install the requested library into your environment. PS: you may want to consider installing Cygwin on your Windows machine to get used to working with a Linux-like environment.
1
1
1
How do you use a Python package such as Tensorflow or Keras if you cannot install the package on the drive on which pip always saves the packages? I'm a student at a university and we don't have permission to write to the C drive, which is where pip works out of (I get a you don't have write permission error when installing packages through pip or conda`). I do have memory space available on my user drive, which is separate from the C drive (where the OS is installed). So, is there any way I can use these Python libraries without it being installed? Maybe I can install the package on my user drive and ask the compiler to access it from there? I'm just guessing here, I have no knowledge of how this works.
Installing python packages in a different location than default by pip or conda
0
0
0
1,385
49,231,589
2018-03-12T09:14:00.000
3
0
1
0
python,cron,sleep,schedule
49,231,744
1
false
0
0
sleep will mark the process (thread) for being inactive until the given time is up. During this time the kernel will simply not schedule this process (thread). It will not waste resources. Hard disks typically have spin-down policies based solely on their usage. If they aren't accessed for a specific time, they will spin down. They will spin up as soon as some process (thread) is accessing them again. This means that letting a process (thread) sleep for some time gives the hard disk a chance to spin down (especially if the sleep duration is large, say, more than some minutes).
1
3
0
What exactly is happening when I call time.sleep(5) in a python script? Is the program using a lot of resources from the computer? I see people using the sleep function in their programs to schedule tasks, but this requires you leave your hard drive running the whole time right? That would be taking for you computer over the long haul right? I'm trying to figure out what's to run programs at specific times remotely, but I haven't found an explanation of how to do this that is very intuitive. Any suggestions?
What is happening when you use the python sleep module?
0.53705
0
0
101
49,234,736
2018-03-12T12:01:00.000
0
0
0
0
python,scikit-learn,reinforcement-learning,q-learning
49,244,770
1
true
0
0
Normalizing the input can lead to faster convergence. It is highly recommended to normalize the inputs. And as the network will progress through different layers due to use of non-linearities the data flowing between the different layers will not be normalized anymore and therefore, for faster convergence we often use batch normalization layers. Unit Gaussian data always helps in faster convergence and therefore make sure to keep it in unit Gaussian form as much as possible.
1
0
1
I am well known with that a “normal” neural network should use normalized input data so one variable does not have a bigger influence on the weights in the NN than others. But what if you have a Qnetwork where your training data and test data can differ a lot and can change over time in a continous problem? My idea was to just run a normal run without normalization of input data and then see the variance and mean from the input datas of the run and then use the variance and mean to normalize my input data of my next run. But what is the standard to do in this case? Best regards Søren Koch
Normalization of input data to Qnetwork
1.2
0
0
333
49,235,442
2018-03-12T12:37:00.000
0
0
0
0
python,qpython
49,477,242
1
true
0
1
All your site packages aren't stored in the qpython folder especially if you are not using an external sdcard On my phone all libraries are stored in "/data/data/org.qpython.qpy/files/lib/python2.7/site-packages" path If you cant access this path on your phone ,then you need to root your phone
1
0
0
Am having a problem in getting the libraries such as kivy which i have installed through QPYPI. How can i get them since when i navigate to / qpython/ lib/ python2.7/ site-packages/ the folder is empty
How to get libraries which i have installed using QPYPI in the file manager of Android
1.2
0
0
393
49,235,894
2018-03-12T12:59:00.000
0
1
0
0
python-3.x,odoo,point-of-sale,odoo-11
52,870,087
1
false
1
0
You can create a wizard at the time of POS order validation which pops up after the order is validated. In that popup, enter the customer's email address; on submit, the receipt is forwarded directly to that customer.
1
0
0
I am required to send the POS receipt to the customer while validating a POS order. The challenge is that the ticket is defined in point_of_sale/xml/pos.xml and the receipt template name is <t t-name="PosTicket">. How can I send this via email to the customer?
Send POS Receipt Email to Customer While Validating POS Order
0
0
1
205
49,238,744
2018-03-12T15:21:00.000
0
0
0
1
python,windows-10,bluetooth-lowenergy
69,203,279
2
false
0
0
As far as I know, gattlib is designed for Linux/Debian systems, so you should use another library. On the other hand, if you are using a Python version greater than 3.9, you can directly use Bluetooth RFCOMM support for Windows 10.
1
18
0
I want to create a BLE Connection between my Laptop (Windows 10) and a BLE Device which will be the Master. I installed Bluez and I can detect Bluetooth devices like my Smartphone but no device that only supports BLE. I want to download gattlib with pip install gattlib but I got an OSError: Not supported OS which brings me to the conclusion that I can't do it this way on Windows 10. Is there any other possibility than installing Linux on my Laptop?
Python using gattlib for BLE Scanning on Windows 10
0
0
0
3,830
49,241,733
2018-03-12T18:03:00.000
0
0
0
0
python,algorithm,machine-learning,cluster-analysis,data-mining
49,294,793
1
false
0
0
The parent node is the aggregated cluster. It's not a single point, so you can't just use it as representative. But you can use the medoids, for example.
1
0
1
The intention is to merge clusters which have similarity higher than the Jaccard similarity based on pairwise comparison of cluster representative. My logic here is that because the child nodes are all under the parent node for a cluster, it means that the parent node is somewhat like a representative of the cluster.
Can the parent nodes of clusters formed using disjoint set forest be used as cluster representative?
0
0
0
27
49,243,269
2018-03-12T19:41:00.000
1
0
0
1
python,wireshark,scapy
49,250,191
1
true
0
0
send() uses Scapy's routing table (which is copied from the host's routing table when Scapy is started), while sendp() uses the provided interface, or conf.iface when no value is specified. So you should either set conf.iface = [iface] ([iface] being the interface you want to use), or specify sendp([...], iface=[iface]).
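A short sketch of both options ("eth0" is a placeholder; replace it with the interface that actually faces 192.168.0.6):

```python
from scapy.all import Ether, IP, TCP, conf, sendp

pkt = Ether() / IP(dst="192.168.0.6") / TCP(dport=8000)

# Option 1: set the default interface globally.
conf.iface = "eth0"
sendp(pkt)

# Option 2: pass the interface per call.
sendp(pkt, iface="eth0")
```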
1
1
0
I am playing around with scapy (a module for Python). I want to build packets and send them across my local network from one host to another. When I build my packet like this, I do not receive anything on my destination host: packet = Ether() / IP(dst='192.168.0.6') / TCP(dport=8000) => sendp(packet). However, when I build it like this it works: packet = IP(dst='192.168.0.6') / TCP(dport=8000), send(packet). I capture the packets on my destination host with the help of Wireshark. Why doesn't the Ethernet variant work? I have all my PCs connected with ethernet cables... Thanks for the help!
Can't send ethernet packages across my LAN
1.2
0
1
404
49,245,314
2018-03-12T22:21:00.000
0
0
1
0
python,parsing,code-formatting
49,245,937
1
false
0
0
Maybe using dis.dis() to dump instructions and comparing or checksumming the outputs?
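A related sketch that compares the parse trees directly with the ast module (the file names old.py / new.py are placeholders); pure reformatting should leave the dumps identical because ast.dump omits line and column information by default:

```python
import ast

def parse_dump(path):
    # Parse the file and serialize its AST without position attributes.
    with open(path, encoding="utf-8") as f:
        return ast.dump(ast.parse(f.read(), filename=path))

print(parse_dump("old.py") == parse_dump("new.py"))
```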
1
0
0
Is there a tool, or method, to check that given two python files, they will parse identically? The specific use case I'm thinking of: I'm currently making a large number of code changes to improve readability. Many of them (reindenting, removing spaces around = in keyword arguments) introduce no changes to the meaning of the code (unless done incorrectly) whatsoever. I would be able to make such changes more quickly if I could quickly verify that the new code was identical to the old, as far as Python is concerned.
Tool to check reformatting Python code does not change meaning
0
0
0
26
49,245,779
2018-03-12T23:06:00.000
0
0
1
0
python,ubuntu,spyder
49,248,423
1
false
0
0
Your PATH may be pointing to the wrong python environment. Depending on which one is conflicting, you may have to do some exploring to find the culprit. My guess is that Spyder is not using your created conda environment where Pytorch is installed. To change the path in Spyder, open the Preferences window. Within this window, select the Python interpreter item on the left. The path to the Python executable will be right there. I'm using a Mac, so the settings navigation may be different for you, but it's around there somewhere.
1
0
1
Hi I'm using Ubuntu and have created a conda environment to build a project. I'm using Python 2.7 and Pytorch plus some other libraries. When I try to run my code in Spyder I receive a ModuleNotFoundError telling me that torch module hasn't been installed. However, when I type conda list into a terminal I can clearly see torch is there. How can I configure this to work with Spyder? Thanks.
ModuleNotFoundError in Spyder with Python
0
0
0
1,203
49,247,108
2018-03-13T01:52:00.000
0
0
0
0
python,python-3.x,pandas
49,266,730
2
false
0
0
I was getting thrown off in that open(...) actually gets a line. I was doing a separate readline(...) after the open(...) and so unwittingly advancing the iterator and getting bad results. There is a small problem with the csv write which I'll post as a new question.
1
1
1
I am new to Python 3, coming over from R. I have a very large time series file (10gb) which spans 6 months. It is a csv file where each row contains 6 fields: Date, Time, Data1, Data2, Data3, Data4. "Data" fields are numeric. I would like to iterate through the file and create & write individual files which contain only one day of data. The individual dates are known only by the fact that the date field suddenly changes. Ie, they don't include weekends, certain holidays, as well as random closures due to unforseen events so the vector of unique dates is not deterministic. Also, the number of lines per day is also variable and unknown. I envision reading each line into a buffer and comparing the date to the previous date. If the next date = previous date, I append that line to the buffer. I repeat this until next date != previous date, at which point I write the buffer to a new csv file which contains only that day's data (00:00:00 to 23:59:59). I had trouble appending the new lines with pandas dataframes, and using readline into a list just got too mangled for me. Looking for Pythonic advice.
Process Large (10gb) Time Series CSV file into daily files
0
0
0
626
49,247,310
2018-03-13T02:19:00.000
1
1
1
0
python,python-3.x,ubuntu,python-imaging-library,pillow
67,286,251
6
false
0
0
I solved the issue with the command python3 -m pip install Pillow.
3
11
0
I'm running into an error where when I try from PIL import Image, ImageFilter in a Python file I get an error stating ModuleNotFoundError: No module named 'PIL'. So far I've tried uninstalling/reinstalling both PIL and Pillow, along with just doing import Image, but the error keeps on occurring and I have no idea why. All the solutions I've found so far have had no effect on my issue. I'm running Python 3.5 on Ubuntu 16.04
No module named 'PIL'
0.033321
0
0
24,852
49,247,310
2018-03-13T02:19:00.000
5
1
1
0
python,python-3.x,ubuntu,python-imaging-library,pillow
49,247,556
6
true
0
0
Alright, I found a fix To fix the issue, I uninstalled PIL and Pillow through sudo pip3 uninstall pillow and sudo apt-get purge python3-pil. I then restarted and then used sudo -H pip3 install pillow to reinstall Pillow The only step I was missing before was rebooting, and not reinstalling PIL afterwards. It seems to have worked without any issues so far.
3
11
0
I'm running into an error where when I try from PIL import Image, ImageFilter in a Python file I get an error stating ModuleNotFoundError: No module named 'PIL'. So far I've tried uninstalling/reinstalling both PIL and Pillow, along with just doing import Image, but the error keeps on occurring and I have no idea why. All the solutions I've found so far have had no effect on my issue. I'm running Python 3.5 on Ubuntu 16.04
No module named 'PIL'
1.2
0
0
24,852
49,247,310
2018-03-13T02:19:00.000
2
1
1
0
python,python-3.x,ubuntu,python-imaging-library,pillow
52,448,736
6
false
0
0
In my case the problem had to do with virtual environments. The python program ran in a virtual environment, but I called pip install Pillow from a normal command prompt. When I ran the program in a non-virtual environment, from PIL import Image worked. It also worked when I called venv/scripts/activate before calling pip install Pillow. So apparently PIL is not found when installed in the python root but the program runs in a virtual environment.
3
11
0
I'm running into an error where when I try from PIL import Image, ImageFilter in a Python file I get an error stating ModuleNotFoundError: No module named 'PIL'. So far I've tried uninstalling/reinstalling both PIL and Pillow, along with just doing import Image, but the error keeps on occurring and I have no idea why. All the solutions I've found so far have had no effect on my issue. I'm running Python 3.5 on Ubuntu 16.04
No module named 'PIL'
0.066568
0
0
24,852
49,247,626
2018-03-13T03:00:00.000
1
0
0
0
python,machine-learning,k-means
49,248,106
1
false
0
0
I think this method would work: 1. Run KMeans. 2. Mark all clusters exceeding the intra-cluster distance threshold. 3. For each marked cluster, run KMeans with K=2 on that cluster's data. 4. Repeat from step 2 until no clusters are marked. Each cluster is split in two until the intra-cluster distance is no longer violated. Another option: 1. Run KMeans. 2. If any clusters exceed the intra-cluster distance threshold, increase K and repeat step 1.
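A sketch of the first (bisecting) approach using scikit-learn and SciPy; it assumes the coordinates are already in a projected/metric space (raw lat/long degrees would need converting before Euclidean distances correspond to kilometres):

```python
import numpy as np
from scipy.spatial.distance import pdist
from sklearn.cluster import KMeans

def bisect_until_compact(X, max_diameter):
    """Split clusters with K=2 until every cluster's max pairwise distance <= max_diameter."""
    queue = [np.arange(len(X))]
    clusters = []
    while queue:
        members = queue.pop()
        if len(members) < 2 or pdist(X[members]).max() <= max_diameter:
            clusters.append(members)   # cluster is compact enough; keep it
            continue
        labels = KMeans(n_clusters=2, n_init=10).fit_predict(X[members])
        queue.append(members[labels == 0])
        queue.append(members[labels == 1])
    return clusters
```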
1
0
1
I often come across a situation where I have bunch of different addresses (input data in Lat Long) mapped all over the city. What i need to do is use cluster these locations in a way that allows me to specify "maximum distance netween any two points within a cluster". In other words, specify maximum intra-cluster distance. For example, to cluster all my individual points in a way that -- maximum distance between any two points within a cluster is 1.5KM.
K Means Cluster with Specified Intra Cluster Distance
0.197375
0
0
233
49,248,489
2018-03-13T04:47:00.000
1
0
0
0
python,pip,mariadb,centos7
49,254,109
2
false
0
0
You must not name your script mysql.py — in that case Python tries to import mysql from the script — and fails. Rename your script /root/Python_environment/my_Scripts/mysql.py to something else.
2
0
0
I have installed MySQL connector for python 3.6 in centos 7 If I search for installed modules with below command it's showing as below pip3.6 freeze mysql-connector==2.1.6 mysql-connector-python==2.1.7 pymongo==3.6.1 pip3.6 search mysql-connector mysql-connector-python (8.0.6) -MYSQL driver written in Python INSTALLED: 2.1.7 LATEST: 8.0.6 mysql-connector (2.1.6) - MySQL driver written in Python INSTALLED: 2.1.6 (latest) MySQL connector installed.But when trying to run the program using MySQL connector then its showing error no module installed MySQL connector.I am using MariaDB 10.0 python3.6 mysql1.py Traceback (most recent call last): File "mysql1.py", line 2, in import mysql.connector as mariadb File "/root/Python_environment/my_Scripts/mysql.py", line 2, in import mysql.connector ModuleNotFoundError: No module named 'mysql.connector'; 'mysql' is not a package can any one know how to resolve
Mysql Connector issue in Python
0.099668
1
0
455
49,248,489
2018-03-13T04:47:00.000
0
0
0
0
python,pip,mariadb,centos7
49,376,529
2
true
0
0
This is the problem I faced in the environment created by Python. Outside the Python environment I am able to run the script and it runs successfully. Inside the Python environment I am not able to run the script; I am still working on it. If anybody knows, please give a suggestion on this.
2
0
0
I have installed MySQL connector for python 3.6 in centos 7 If I search for installed modules with below command it's showing as below pip3.6 freeze mysql-connector==2.1.6 mysql-connector-python==2.1.7 pymongo==3.6.1 pip3.6 search mysql-connector mysql-connector-python (8.0.6) -MYSQL driver written in Python INSTALLED: 2.1.7 LATEST: 8.0.6 mysql-connector (2.1.6) - MySQL driver written in Python INSTALLED: 2.1.6 (latest) MySQL connector installed.But when trying to run the program using MySQL connector then its showing error no module installed MySQL connector.I am using MariaDB 10.0 python3.6 mysql1.py Traceback (most recent call last): File "mysql1.py", line 2, in import mysql.connector as mariadb File "/root/Python_environment/my_Scripts/mysql.py", line 2, in import mysql.connector ModuleNotFoundError: No module named 'mysql.connector'; 'mysql' is not a package can any one know how to resolve
Mysql Connector issue in Python
1.2
1
0
455
49,248,824
2018-03-13T05:22:00.000
0
0
1
0
python,opencv,background-subtraction
49,259,261
2
false
0
0
You can use morphology (erode or dilate, depending on whether your blob is white or black), then find the contour. It should be faster than a distance transform.
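A rough OpenCV sketch of that pipeline (OpenCV 4.x findContours signature; "mask.png" is a placeholder for the binary image, with the object in white):

```python
import cv2

mask = cv2.imread("mask.png", cv2.IMREAD_GRAYSCALE)
_, mask = cv2.threshold(mask, 127, 255, cv2.THRESH_BINARY)

kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
cleaned = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)  # erode then dilate to drop small noise

contours, _ = cv2.findContours(cleaned, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
boundary = max(contours, key=cv2.contourArea).reshape(-1, 2)  # ordered boundary pixels

n_points = 20                              # number of roughly equally spaced samples
step = max(len(boundary) // n_points, 1)
samples = boundary[::step]
```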
1
0
0
I'm trying to set equal intervals along the boundary of a black and white image. Is there a way to do it? I thought about first finding the edge of object using distance transform then scanning the image for the edge. I was thinking of starting with first pixel that is on the edge then find the pixel closest to it, eventually we'll get the list of edge pixels in order. But the runtime of that seems really slow. Can someone help me with this?
how to set equal intervals along boundaries of irregular shaped object in image in python?
0
0
0
139
49,249,451
2018-03-13T06:16:00.000
0
0
1
0
python,windows,virtualenv,virtualenvwrapper
49,249,485
2
true
0
0
I found the default location at %userprofile%\Envs
1
0
0
I used virtualenvwrapper to make a virtual environment on windows and now I need to point my IDE to the python interpreter I created but I cannot find it. I can use workon from cmd but I can't find the actual location of the new interpreter.
Where does virtualenvwrapper put python files in Windows?
1.2
0
0
587
49,250,968
2018-03-13T07:58:00.000
1
0
1
0
python,python-3.x
49,251,270
2
false
0
0
The error means that there is no such package as git. Check the name of the package you want to install.
1
6
0
I just changed my project's interpreter to python 3.6 and have to install git library again. When i run the command "pip install --proxy=some_proxy git" i get the following error message: "Could not find a version that satisfies the requirement git (from versions: ) No matching distribution found for git". Why does it happen ?
Error with pip install git (after switching to python 3.6)
0.099668
0
0
18,369
49,252,880
2018-03-13T09:46:00.000
-2
0
0
0
python,selenium,selenium-webdriver
68,574,224
4
false
0
0
I found that sometimes the webpage is not fully loaded and the answer is as simple as adding a time.sleep(2)
2
11
0
I am trying to click on an element but getting the error: Element is not clickable at point (x,y.5) because another element obscures it. I have already tried moving to that element first and then clicking and also changing the co-ordinates by minimizing the window and then clicking, but both methods failed. The possible duplicate question has answers which I have already tried and none of them worked for me. Also, the same code is working on a different PC. How to resolve it?
Element is not clickable at point (x,y.5) because another element obscures it
-0.099668
0
1
15,329
49,252,880
2018-03-13T09:46:00.000
11
0
0
0
python,selenium,selenium-webdriver
49,261,182
4
true
0
0
There is possibly one thing you can do. It is very crude though, I'll admit it straight away. You can simulate a click on the element directly preceding the element in need, and then simulate a key press [TAB] and [ENTER]. Actually, I've been seeing that error recently. I was using the usual .click() command provided by bare selenium - like driver.find_element_by_xpath(xpath).click(). I've found that using ActionChains solved that problem. Something like ActionChains(driver).move_to_element(element).click().perform() worked for me. You will need: from selenium.webdriver.common.action_chains import ActionChains
2
11
0
I am trying to click on an element but getting the error: Element is not clickable at point (x,y.5) because another element obscures it. I have already tried moving to that element first and then clicking and also changing the co-ordinates by minimizing the window and then clicking, but both methods failed. The possible duplicate question has answers which I have already tried and none of them worked for me. Also, the same code is working on a different PC. How to resolve it?
Element is not clickable at point (x,y.5) because another element obscures it
1.2
0
1
15,329
49,253,395
2018-03-13T10:09:00.000
2
0
0
0
python,django
49,253,516
1
true
1
0
ModelForm.save() is called first, and it calls Model.save() internally. The method on ModelForm is a helper that builds or updates a Model object from the data provided in the form and saves it to the database. It also saves any many-to-many or reverse foreign key relations.
1
2
0
I understand that both models.Model and forms.ModelForm both contain .save() method that you can override. My question is how and when are they used to save an object and in what sequence.
In what sequence Model.save() and ModelForm.save() called
1.2
0
0
35
49,254,062
2018-03-13T10:42:00.000
4
0
0
0
python,qt,pyqt,pyqt5,qtabwidget
49,256,651
1
false
0
1
Add a generic QWidget as the corner widget. Give it a QHBoxLayout. Add your buttons to the layout. I use this frequently, often by subclassing QTabWidget and creating accessor functions that return the individual buttons. Adding signals like buttonClicked(int) with the index and buttonClicked(QAbstractButton) with the button itself are helpful, too.
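A minimal PyQt5 sketch of that pattern (the button labels are placeholders for custom minimize/maximize/close icons):

```python
import sys
from PyQt5.QtCore import Qt
from PyQt5.QtWidgets import (QApplication, QHBoxLayout, QTabWidget,
                             QToolButton, QWidget)

app = QApplication(sys.argv)

tabs = QTabWidget()
tabs.addTab(QWidget(), "Tab 1")

# Generic QWidget with a horizontal layout holding the custom buttons.
corner = QWidget()
layout = QHBoxLayout(corner)
layout.setContentsMargins(0, 0, 0, 0)
for label in ("_", "[]", "X"):
    btn = QToolButton()
    btn.setText(label)
    layout.addWidget(btn)

tabs.setCornerWidget(corner, Qt.TopRightCorner)
tabs.show()
sys.exit(app.exec_())
```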
1
1
0
I want to remove the actual Close, minimize and maximize buttons of a window and create my own custom buttons, just like in chrome. I therefore want to add corner widgets to my tabwidget. Is there a way so that I can add three buttons as corner widgets of a QTabWidget? Is it somehow possible to achieve using the QHBoxLayout ? The setCornerWidget function just takes one widget as its input.
PyQt QTabWidget Multiple Corner WIdgets
0.664037
0
0
545
49,255,283
2018-03-13T11:42:00.000
0
0
0
0
python,python-3.x,flask
61,596,986
2
false
1
0
What I did was: run which flask to find the flask binary's executable path (in my case /home/myuser/.local/bin/flask), then edit /home/myuser/.local/bin/flask, changing the first line from #!/usr/bin/python to #!/usr/bin/python3. In summary, this makes flask use python3 regardless of which shebang is specified in other scripts, since those scripts are not the entry point of execution; flask is. I didn't have to change the shebang in any of my scripts, just the flask executable.
1
8
0
How can you run Flask app which uses a specific version of python? On my env "python" = python2.7 and "python3" = python3.6. I have installed Flask using pip3. I start the app with FLASK_APP=app.py flask run. It uses python2.7 to execute, I would like it to use python3.6. Also tried adding #!flask/bin/python3 to app.py, but it did not have an effect.
Run Flask using python3 not python
0
0
0
17,444
49,256,259
2018-03-13T12:29:00.000
0
0
0
0
django,python-2.7
49,256,519
1
false
1
0
You have somewhere in your code import from rest_framework_httpsignature which is not installed. It can be either set in your settings file as default authentication method for DRF or used somewhere in other default authentication method.
1
0
0
I am unable to run from rest_framework_httpsignature.authentication import SignatureAuthentication in Django 1.8 and Python 2.7.6; it causes an import error only for this class SignatureAuthentication in VS Studio. Please help. I am, however, able to import from rest_framework.authentication, using djangorestframework==3.4.1 and djangorestframework-httpsignature==0.2.1.
import error rest_framework_httpsignature.authentication django rest frame work?
0
0
0
62
49,257,867
2018-03-13T13:47:00.000
-1
0
0
0
python,r,gis,satellite
62,542,674
1
false
0
0
You can give Google Engine a try. That would be the easiest way to get access to the image series. If your research applies only to that period, you might do less work by downloading by hand and processing in QGIS. If programming is a must, use Google Engine; they have much of the problem already solved. Otherwise you will have to develop routines for handling communication with the Sentinel Open Hub, downloading L1C (if L2A is not present) and converting it to L2A using Sen2Cor, then computing NDVI, cropping, etc.
1
2
1
Currently, I am working on a project for a non-profit organization. Therefore, I need the average NDVI values for certain polygons. Input for my search: Group of coördinates (polygon) a range of dates (e.g. 01-31-2017 and 02-31-2017) What I now want is: the average NDVI value of the most recent picture in that given date range with 0% cloud coverage of the given polygon Is there a simple way to extract these values via an API (in R or Python)? I prefer working with the sentinel-hub, but I am not sure if it's the best platform to extract the data I need. Because I am working time series I should use the L2A version (there is an NDVI layer).
What is a simple way to extract NDVI average from polygon [Sentinel 2 L2A]
-0.197375
0
0
355
49,258,681
2018-03-13T14:24:00.000
3
0
1
0
python-3.x,google-colaboratory
49,269,079
1
false
0
0
There's no way to do this right now, unfortunately: you'll need to move the code into a .py file that you load (say by cloning from github).
1
2
0
I'm sharing a colaboratory file with my colleagues and we are having fun with it. But it's getting bigger and bigger, so we want to offload some of the functions to another colaboratory file. How can we load one colaboratory file into another?
Share functions across colaboratory files
0.53705
0
0
68
49,262,646
2018-03-13T17:47:00.000
0
0
0
0
python,mysql,mysql-connector
49,262,686
1
true
0
0
Store the zip file somewhere else on your server and simply store the name or file-location string in your DB. MySQL really isn't intended to store large files, let alone zip folders. Then, when you go to retrieve it, just put the file location into an <a> tag and it will link to it.
1
0
0
I'm trying to run some queries on my database, but the file is too large and it takes too much time. Is there some way to upload one zip file with those queries to my table? PS: the file is between 350 MB and 500 MB.
Do mysql querys in python with zip file
1.2
1
0
198
49,263,913
2018-03-13T18:58:00.000
1
0
1
0
python,django,github,requirements.txt
49,264,255
3
false
1
0
Make sure the requirements.txt is not inside the .gitignore file, which will prevent it from being updated.
2
0
0
I'm working in this shared Django project, a colleague is the owner of the repo in Github. The problem I am facing right now is that he added raven to his packages and in github the requirements.txt file is updated, however when I tried with git pull, locally, my requirements.txt does not have raven added. He told me that I have to reinstall requirements.txt so I tried with pip freeze > requirements.txt but nothing change. How can I update my requirements.txt file according the updates made from Github?
How to reinstall requirements.txt
0.066568
0
0
1,223
49,263,913
2018-03-13T18:58:00.000
2
0
1
0
python,django,github,requirements.txt
49,263,958
3
false
1
0
After you've pulled the latest changes into your requirements.txt, you can absolutely rerun pip. Run the command with pip install -r requirements.txt and it will install any new modules.
2
0
0
I'm working in this shared Django project, a colleague is the owner of the repo in Github. The problem I am facing right now is that he added raven to his packages and in github the requirements.txt file is updated, however when I tried with git pull, locally, my requirements.txt does not have raven added. He told me that I have to reinstall requirements.txt so I tried with pip freeze > requirements.txt but nothing change. How can I update my requirements.txt file according the updates made from Github?
How to reinstall requirements.txt
0.132549
0
0
1,223